Graphics / Video - General Information
Graphics Terminology:
The section below provides an overview of the most common terms in the
computer graphics industry. For a current listing of recommended video cards, go here.
3D Graphics Accelerator -
a graphics card which can (1) draw points, lines and (more generally) polygons, using only the polygons' 3-dimensional vertices, and (2) which can map textures on polygons and/or which can shade polygons.
AGP - Accelerated Graphics Port. A new graphics subsystem architecture developed by Intel. AGP is the combination of two features:
- a shared-memory design (i.e., some operations related to graphics will be performed inside the system's main memory);
- a faster bus than the current PCI bus
Both features are linked: it is the faster bus that makes the shared-memory design viable performance-wise. The primary purpose of the shared-memory design is to allow
developers to use a larger number of textures and/or larger textures, without a corresponding increase in the amount of dedicated graphics memory, which would result in
more expensive systems. In other words, the purpose of the shared memory design is to bring better picture quality without driving costs up. Without a faster bus than the current
PCI bus, this shared-memory design would result in a big performance hit. For a FAQ on AGP, please go here.
Aliasing - all the imperfections induced by the fact that the screen and the color palette are
discrete rather than continuous spaces. For example, to paint a given image on the screen, you may want to cover only part of a pixel with a given color (which is impossible) or
you may want to use color #255.5 (for example) for a particular pixel (which is also impossible). In the first case, you have what is called pixel-position aliasing, which is the
kind of aliasing that generates "jaggies".
Alpha Channel -
Colors are usually defined by Red, Green, and Blue (RGB) values. In addition to its RGB values, a color can also be given an alpha value, which is used to blend several colors together. See
Alpha Blending, Anti-Aliasing, and Blending.
Alpha Blending - a technique which can be used to produce transparency effects (i.e. to
represent such things as glass, fog, mist, water, etc.), to blend two textures, or to map a texture on top of another texture without totally hiding it. More generally, this technique
can be used to blend what has already been rendered and is already in the frame buffer with another texture. Each colored pixel inside each texture is given an alpha value that
represents its degree of transparency. The alpha values are then used to blend the colors of the two textures on a per-pixel basis; i.e. to compute weighted averages of colors of the two textures for each pixel.
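The per-pixel blend described above amounts to a weighted average of the two colors' RGB values. A minimal Python sketch (the colors and alpha value here are arbitrary examples):

```python
def alpha_blend(src, dst, alpha):
    """Blend two (R, G, B) colors; `alpha` is the source color's weight,
    between 0.0 (fully transparent) and 1.0 (fully opaque)."""
    return tuple(round(alpha * s + (1 - alpha) * d) for s, d in zip(src, dst))

# 50% blend of red over blue, as for a pane of tinted glass:
print(alpha_blend((255, 0, 0), (0, 0, 255), 0.5))  # (128, 0, 128)
```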
Anti-aliasing - smoothing lines and curves by blending several colors. In most cases, drawing a perfectly smooth diagonal or curve on the screen implies coloring only parts of
some pixels with the color of the line or curve; but this is impossible, because the screen is a discrete set of pixels. Instead, to draw a diagonal line or a curve between two points on
the screen, you usually have to draw several smaller lines. This is what causes jaggies to appear. Jaggies can be eliminated by blending several colors where they appear. For
example, assume that you want to draw a thick black diagonal "line" on a white background. Unless this thick diagonal "line" is parallel to one of the screen's main
diagonals, it will be jagged. The solution is to color the pixels where the jaggies appear in various shades of gray. That way, the edges of the thick "line" will blend more gradually
into the background. This will smooth, but also blur, the edges of the line.
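One common way to pick the right shade of gray for an edge pixel is to estimate how much of the pixel the black "line" covers. A rough Python sketch of that idea, modeling the line's edge as a hypothetical half-plane y < m*x + b (everything below the edge is black):

```python
def edge_pixel_gray(px, py, m, b, samples=4):
    """Estimate the fraction of pixel (px, py) lying below the line
    y = m*x + b by taking samples x samples sub-samples, then return
    the matching gray level (0 = fully covered/black, 255 = white)."""
    inside = 0
    for i in range(samples):
        for j in range(samples):
            x = px + (i + 0.5) / samples   # sub-sample positions inside
            y = py + (j + 0.5) / samples   # the unit-square pixel
            if y < m * x + b:
                inside += 1
    coverage = inside / (samples * samples)
    return round(255 * (1 - coverage))
```

A pixel half-covered by the edge comes out mid-gray, which is exactly the gradual blend into the background described above.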
API - Application Programming Interface. A library of routines, functions and objects that can be used to develop applications. An API may also provide a
hardware abstraction layer, that is, an interface to various hardware devices.
Blending - mixing two colors or textures on a polygon, or on a pixel-by-pixel basis. The
simplest way to blend two colors on a polygon is to use each color in turn for every other pixel. The same kind of technique can be used with textures. Of course, this technique
only works with blocks of pixels. On a per pixel basis, two colors are blended by computing weighted averages of their Red, Green and Blue values. Each color's weight in the weighted
averages can be stored in the alpha channel. Then, each color's weight in the blend will be given by its alpha value. See Alpha Blending.
CLUT - Color Look-Up Table. A table which establishes a correspondence between the global palette (64K colors, for example), and the subset of colors - or the limited palette
(made of 16 or 256 colors) - used by a particular texture. Allows 16-bit colors inside a texture to be stored in 4-bit or 8-bit format. See palettized textures.
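For illustration, here is what a CLUT-4 look-up amounts to in Python (the 16-bit color values below are made-up examples in 5-6-5 format):

```python
# A CLUT for one texture: up to 16 entries, each a full 16-bit color.
clut = [0x0000, 0xFFFF, 0xF800, 0x07E0]  # black, white, red, green (12 slots unused)

# The texture stores compact 4-bit indices instead of 16-bit colors...
texture_indices = [0, 1, 2, 3, 2, 1]

# ...and the full colors are recovered by indexing into the CLUT.
expanded = [clut[i] for i in texture_indices]
```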
DirectX - a set of APIs developed by Microsoft. DirectX is made of several components, each of which can be used to access a particular class of hardware devices: DirectInput for
input devices, DirectSound for sound cards, DirectDraw for graphics cards, Direct3D for 3D accelerators, etc.
Direct3D - Microsoft's API for 3D graphics. One of the components of DirectX, supported by all gaming-oriented 3D accelerators so far.
Dithering - "creating" a new color by blending several colors which are already available. This technique can be used to make 256-color images "look like" 64K-color images, or to
give the illusion that an image was rendered with 64K colors when it was actually rendered with only 256 colors.
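A minimal sketch of one such technique, ordered dithering with a 2x2 threshold pattern, reduced here to a two-color (black/white) palette for simplicity:

```python
# 2x2 "Bayer" threshold pattern; each entry picks a different cut-off,
# so neighboring pixels flip to white at different gray levels.
BAYER_2X2 = [[0, 2],
             [3, 1]]

def dither_pixel(gray, x, y):
    """Map a gray level (0-255) to pure black (0) or white (255); over an
    area, the mix of black and white approximates the requested gray."""
    threshold = (BAYER_2X2[y % 2][x % 2] + 0.5) * (255 / 4)
    return 255 if gray > threshold else 0
```

Over a 2x2 block, a mid-gray of 128 comes out as two white and two black pixels, which averages back to roughly the intended shade.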
Fill Rate - the number of pixels that a card can raster over a given time period. Usually
measured in millions of pixels per second. A card's fill rate determines the number of polygons of a given size that this card will be able to raster over a given period of time, for
a given level of picture quality. Or the picture quality that the card can achieve for a given number of polygons (in general, the relationship between the fill rate and the number of
polygons that can be rendered is sub-linear). Usually, a 3D accelerator's fill rate is sensitive to the rendering functions that the card is asked to perform. Features such as
anti-aliasing, bi-linear filtering, z-buffering, dithering, etc. can have a negative impact on a card's fill rate. Therefore, you can't compare two fill rates unless they were obtained using the
same rendering functions.
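The arithmetic behind that trade-off is simple. A back-of-the-envelope Python sketch (the numbers are hypothetical):

```python
def max_polygons_per_second(fill_rate_mp_s, avg_polygon_pixels):
    """Upper bound on polygons/second implied by fill rate alone; real
    cards fall short of this because geometry setup also takes time."""
    return fill_rate_mp_s * 1_000_000 / avg_polygon_pixels

# A hypothetical 45 MP/s card drawing 100-pixel polygons:
print(max_polygons_per_second(45, 100))  # 450000.0
```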
Filtering - a technique for smoothing textures mapped onto 3D objects. When a source texture is mapped on a 3D object, it may get stretched, which may cause it to appear
blocky. For example, if you take a texture and keep stretching it, you will see larger blocks of pixels of the same color appear on your screen. The reason is that, as you keep
stretching the source texture to map it on an object, each pixel in this texture gets mapped to an increasing number of pixels in the destination texture. This is what happens when
you move closer to an object in a game: as the size of the object on your screen increases, small blocks of pixels of the same color become large blocks of pixels of the same color.
Filtering attenuates this effect. When this method is used, the color of each pixel inside the destination texture is determined as the weighted average of the colors of several
pixels inside the source texture. Or, equivalently, the color of the destination pixel is determined by performing linear interpolations between the colors of several pixels inside the source texture.
Filtering (Bi-Linear) - When this filtering method is used, the color of each pixel on the
destination texture is determined as a blend (a weighted average) of the colors of four adjacent pixels inside the source texture.
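A sketch of bi-linear filtering in Python, on a grayscale texture for simplicity (a real card does this per color channel, in hardware):

```python
def bilinear_sample(tex, u, v):
    """Sample a 2-D grayscale texture at fractional coordinates (u, v)
    by blending the four adjacent texels, weighted by proximity."""
    x0, y0 = int(u), int(v)
    x1 = min(x0 + 1, len(tex[0]) - 1)   # clamp at the texture's edge
    y1 = min(y0 + 1, len(tex) - 1)
    fx, fy = u - x0, v - y0             # fractional parts = blend weights
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bottom = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bottom * fy
```

Sampling exactly between four texels of 0, 100, 100 and 200 returns their average, 100.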
Filtering (Tri-Linear) - the combination of (1) mip-mapping and (2) bi-linear filtering inside two mipmaps. First, two versions of a source texture corresponding to two adjacent levels
of detail are selected. Then, bi-linear filtering is performed inside each version of the source texture. Finally, the colors of the two pixels produced by bi-linear filtering are
blended together. When this method is used, two source textures and four samples inside each texture are used to determine the color of the destination pixel.
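A Python sketch of that three-step process. A compact bilinear helper is included so the example stands alone, and texture coordinates are simply halved per mip level (a simplifying assumption):

```python
def _bilinear(tex, u, v):
    # Compact bilinear sample of a 2-D grayscale texture.
    x0, y0 = int(u), int(v)
    x1, y1 = min(x0 + 1, len(tex[0]) - 1), min(y0 + 1, len(tex) - 1)
    fx, fy = u - x0, v - y0
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def trilinear_sample(mips, u, v, lod):
    """Blend bilinear samples taken from two adjacent mip levels.
    `lod` is fractional: 0.5 means halfway between level 0 and level 1."""
    lo = int(lod)
    hi = min(lo + 1, len(mips) - 1)
    f = lod - lo
    a = _bilinear(mips[lo], u / 2 ** lo, v / 2 ** lo)
    b = _bilinear(mips[hi], u / 2 ** hi, v / 2 ** hi)
    return a * (1 - f) + b * f
```

As the definition says: two source textures, four samples inside each, and one final blend per destination pixel.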
Fogging - blending part of a scene with a given color. This technique can be used to represent fog, but also to make distant objects fade away.
HAL - Hardware Abstraction Layer. A component of an API (Direct3D for example) which represents the functions of a particular system's hardware which are useful to the API. In
the case of Direct3D, the HAL is a "virtual 3D accelerator" made of all the 3D acceleration functions useful to Direct3D which are available on a particular system. In other words, the
HAL is the interface between the application and the particular 3D accelerator present in a system.
HEL - Hardware Emulation Layer. In the case of Direct3D, a "virtual rasterization device" made of all the rasterization functions that Direct3D can emulate. The HEL allows
developers to write applications without having to worry about which rasterization functions will be available on each particular system, as long as they don't use 3D rasterization
functions which Direct3D cannot emulate.
Mip Mapping - using different versions of the same source texture (or different mipmaps)
for different types of polygons, or for different pixels (depending on their positions). Each version of the source texture corresponds to a particular level of detail (LOD). For example,
one version will be used for " small " polygons, and another one will be used for " large " polygons; the texture for small polygons will be used on distant objects, while the texture
for large polygons will be used on objects seen from up-close. Bi-linear filtering attenuates the fact that, as a texture is stretched, each pixel inside that texture is mapped to an
increasing number of pixels inside the destination texture. Mip mapping attenuates the consequences of the fact that, as an object becomes smaller and smaller, if only one
source texture is used, an increasing number of pixels inside that texture are mapped to the same pixel inside the destination texture.
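The different versions (mipmaps) are usually built ahead of time by repeatedly shrinking the source texture. A sketch in Python, using a simple box filter on a square grayscale texture:

```python
def build_mip_chain(tex):
    """Build the mip chain of a square grayscale texture whose side is a
    power of two: each level averages 2x2 blocks of the previous one."""
    chain = [tex]
    while len(chain[-1]) > 1:
        prev = chain[-1]
        half = len(prev) // 2
        chain.append([[(prev[2*y][2*x] + prev[2*y][2*x + 1] +
                        prev[2*y + 1][2*x] + prev[2*y + 1][2*x + 1]) / 4
                       for x in range(half)] for y in range(half)])
    return chain
```

Each successive level is one quarter the size of the previous one, down to a single texel; the renderer then picks the level whose size best matches the polygon on screen.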
Mip Mapping (Tile Based) - the particular version of a texture which will be used is
determined for a whole polygon (rather than for each pixel). As a result, textures in a scene will be made of a large number of smaller tiles taken from different source textures.
Mip Mapping (Per Pixel)
- the particular version of a texture which will be used is determined for each pixel (rather than for a polygon as a whole).
Mip Mapping (Nearest LOD) - the particular version of a texture used for a given pixel is the version corresponding to the LOD which is closest to that pixel's LOD. In other words,
only one version of the source texture is used to determine the color of the pixel. This technique can be combined either with point sampling or with filtering (inside the particular
version of the texture which is used). When point sampling is used in combination with nearest LOD mip-mapping, only one version of the texture, and only one sample inside that
version of the texture, are used to determine the color of the destination pixel. When bi-linear filtering is used in combination with nearest LOD mip-mapping, only one version of
the texture, and four samples inside that version of the texture, are used to determine the color of the destination pixel.
Mip Mapping (Tri-Linear)
- same thing as tri-linear filtering. While nearest LOD mip mapping uses only one mipmap (i.e., one version of the source texture) to determine the
color of the destination pixel, tri-linear mip mapping uses two versions of the source texture.
OpenGL - a set of specifications for a cross-platform 3D graphics API initially developed by
Silicon Graphics Inc. There are several implementations of OpenGL, provided by different vendors. Microsoft provides a Win32 version. OpenGL includes routines for shading,
texture mapping, texture filtering, anti-aliasing, lighting, geometry transformations, etc. Most of these functions can be hardware-accelerated. Note that OpenGL is not
object-oriented. For more information about OpenGL, visit the OpenGL WWW Center, or see the APIs page.
Palettized Textures - a set of textures, each of which uses its own limited palette of colors
(a subset of the global palette). For example, even if there are 64K colors available, each texture may use no more than 16 or 256 colors. The colors used by each texture are referenced in a Color Look-Up Table
(CLUT). CLUT 4 palettized textures use 4-bit palettes (16-color palettes), while CLUT 8 palettized textures use 8-bit palettes (256-color palettes).
A color look-up table stores the values in the global palette (a 16-bit palette, for example) of the 16 or 256 colors used by a particular texture. This allows the color
information inside the texture to be stored in 4-bit or 8-bit format, even if the global palette contains 64K colors. This way, more and/or larger textures can be stored in a given
space. Palettized textures allow developers to use a larger number of textures and/or higher-definition textures. A note about the Matrox Mystique: so far, the Matrox Mystique is the
only gaming-oriented 3D accelerator which supports CLUT8 palettized textures (in addition to CLUT4 palettized textures), something which is supposed to make up for the fact that this card is not
capable of bi-linear filtering: the position of Matrox is that the combination of tile-based mip mapping, point sampling, and high-resolution palettized textures can produce images which look
almost as nice as with bi-linear filtering. Based on my personal experience, this claim is not totally unjustified: games developed specifically for the Mystique look very nice, albeit not quite as nice
as games which use bi-linear filtering. However, although Direct3D supports palettized textures, not many games use them so far. Moreover, 3Dfx Interactive's Voodoo Graphics chip and Rendition's
Vérité chip seem to be able to maintain the same fill rate with bi-linear filtering as with point sampling. Finally, it should be noted that bi-linear filtering is not the only rendering function which
is not implemented in the Mystique.
Point Sampling - a texture-mapping technique which is simpler than bi-linear filtering. The color of a pixel inside a destination texture is determined by the color of the pixel whose
position inside the source texture is the closest to the position of the target pixel inside the destination texture (for this reason, this method is also called the nearest neighbor method).
This technique does not attenuate the consequences of the stretching which may occur when a texture is mapped onto a 3D object.
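The nearest-neighbor rule is trivial to sketch in Python (grayscale texture, as in the other examples):

```python
def point_sample(tex, u, v):
    """Return the single texel closest to the fractional position (u, v);
    no blending, which is what makes stretched textures look blocky."""
    x = min(round(u), len(tex[0]) - 1)
    y = min(round(v), len(tex) - 1)
    return tex[y][x]
```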
Rasterization - the process of transforming a 3D image into a set of colored pixels, i.e. the process of giving a color to each pixel, depending on light sources, the position of the object that the pixel represents, textures, etc.
Rendering - a term which is often used as a synonym for rasterization, but which can also refer to the whole process of creating a 3D image.
RenderWare - a 3D graphics API developed by Criterion Software. A very user-friendly API which includes functions for shading, texture mapping, texture filtering, performing
geometry transformations, clipping, and lighting. In addition, RenderWare includes a script language for creating and storing objects using text files (of course, objects can also be
created during run-time). Finally, RenderWare includes a large set of primitives and models. Starting with version 2, RenderWare can use loadable drivers for hardware
accelerators. Even without hardware acceleration, RenderWare can achieve fill rates in excess of 18 MP/S on a Pentium 166. The games Back to Baghdad and Scorched Planet, for example, were written with this API.
RGB - Red-Green-Blue. Each color is defined by a value on the Red scale, a value on the Green scale, and a value on the Blue scale.
RGBA - Red-Green-Blue-Alpha. In addition to Red, Green and Blue values, a color can have an
alpha-value. Alpha values are used to blend colors at the pixel level. See alpha-channel.
Shading - applying different shades of the same color to an object in order to represent the
effects of light. For example, applying different shades of the same color (or a gradient of colors) to a flat square can make it look like a curved surface which is exposed to a
light-source. In the same vein, shading can be used to give some volume to a 2D polygon or disk (i.e., to transform a square into a curved panel, or a disk into a ball). For this
reason, shading comes into play very early in the process of creating a 3D image, when models of 3D objects are created.
Shading (Flat) - all pixels inside a polygon are given the same shade. (In this case, shading is performed across polygons, over an object's surface, but not inside each polygon.)
Shading (Interpolative) - each pixel inside a polygon is given a particular shade, which is determined by interpolating between the polygon's vertices, or between the polygon's edges.
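For example, interpolating shades along one edge of a polygon can be sketched as follows (the shade values are arbitrary):

```python
def interpolate_shades(shade_a, shade_b, n):
    """Shades for n pixels along a polygon edge, varying linearly from
    the shade at one vertex to the shade at the other."""
    if n == 1:
        return [shade_a]
    return [shade_a + (shade_b - shade_a) * i / (n - 1) for i in range(n)]

print(interpolate_shades(0, 100, 5))  # [0.0, 25.0, 50.0, 75.0, 100.0]
```

The same interpolation, repeated between the polygon's edges, fills the interior of the polygon with a smooth gradient.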
Shading (Gouraud) - a variant of interpolative shading which replaces the normal vector at a given vertex by an average of the normal vectors of the adjacent polygons [yes, I know
this isn't much of an explanation]. Visually, relative to flat or interpolative shading, Gouraud shading makes the concavity or convexity of an object's surface appear smoother.
Texture Mapping - mapping or applying a given texture onto a polygon.
The painter's algorithm
- can be used to perform HSR, as an alternative to Z-buffering. Called "the painter's algorithm" because it performs HSR in the same way as a painter:
rendering everything from back to front. This algorithm performs a series of tests on polygons to determine whether a given point will be visible or not from the player's point of
view, which means that it must know the geometry of the scene. The more polygons there are in a scene, the slower this method is (the time taken by this method increases with the square of the number of polygons).
Z-Buffer - a point in a 3-dimensional space has three coordinates: x, y and z. Assuming that the x-axis and the y-axis define the plane in which the screen is included,
the z-axis measures the distance from a point to the screen (or the player). The z-buffer is a portion of memory used to store the coordinate on the z-axis of the closest opaque point
for each value of x and y (i.e., if the resolution is 640x480, the Z-buffer is a 640x480 array). All other points with the same coordinates on the x and y axes, but with higher
coordinates on the z axis will be invisible to the player and, therefore, will not be drawn. The z-buffer is used for hidden surface removal. For example, consider two points whose
coordinates are given by (x1,y1,z1) and (x2,y2,z2), respectively. Assume that these two points have the same coordinates on the x and y axes (i.e., x1=x2 and y1=y2) but
different coordinates on the z-axis; for example, assume that z2<z1. Then, if point #2 is opaque, only this point will be visible to the player; point #1 will be hidden. In this case, z2
will be stored in the Z-buffer for x=x1=x2 and y=y1=y2. Now, consider a third point with coordinates (x3,y3,z3). Again, assume that x3=x1=x2 and y3=y1=y2. If z3>z2, then point
#3 will not be visible to the player, and z3 will not be stored in the Z-buffer. However, if z3<z2, then point #3 will be visible, and z2 will be replaced by z3 in the Z-buffer. In the
end, only those pixels whose z-coordinates are stored in the z-buffer are plotted on the screen.
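The whole mechanism described above fits in a few lines of Python (the points and colors are made-up examples; on a real card the z-buffer lives in dedicated graphics memory):

```python
def zbuffer_render(points, width, height, far=float("inf")):
    """Hidden surface removal with a z-buffer: keep, for each (x, y),
    only the point with the smallest z (closest to the player)."""
    zbuf = [[far] * width for _ in range(height)]
    frame = [[None] * width for _ in range(height)]
    for x, y, z, color in points:
        if z < zbuf[y][x]:          # closer than anything drawn so far?
            zbuf[y][x] = z
            frame[y][x] = color
    return frame

# Two points compete for (0, 0): only the closer one (z=2) survives.
pts = [(0, 0, 5, "red"), (0, 0, 2, "blue"), (1, 0, 7, "green")]
frame = zbuffer_render(pts, 2, 1)
```

Note that, unlike the painter's algorithm, this works no matter the order in which the points arrive.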
Z-buffering - the process of creating the Z-buffer. Most 3D accelerators can handle Z-buffering, although this may have a negative impact on their performance. Z-buffering
can also be performed in software.