A fundamental operation used extensively in computer graphics and visualization is *scan conversion*, or *rasterization*. Given a polygon in image space, this process determines the pixels that intersect the polygon. It is utilized in visible-surface algorithms, incremental-shading techniques, polygon-fill algorithms, ray-tracing-acceleration algorithms, and a number of other tasks that are central to computer graphics.


**Scan Conversion of Trapezoids in Device Space**

Scan conversion is the process of finding the screen pixels that intersect a polygon. To do this, we find it convenient to move to a copy of image space that is scaled to correspond closely to the pixels in our display window. This space, commonly called *device space*, is parameterized so that the lower-left-hand corner is at $(0, 0)$, and so that the pixels can all be indexed by integers.

To scan convert a polygon in this space, we will split the polygon into a set of trapezoids and then scan convert each trapezoid. Each trapezoid will be of a special form in which the top and bottom edges are parallel to the scanlines (i.e., of constant $y$ value). We will also consider "degenerate" trapezoids - triangles - in which either the top or bottom edge has zero length. In a typical example, a polygon is split into five trapezoids, where the top and bottom trapezoids are actually triangles.

The union of all pixels that intersect the set of trapezoids will be the set of pixels that intersect the polygon.

The idea here is easy. We will establish an "edge tracker" that follows the endpoints of the lines formed by intersecting each scanline with the trapezoid.

This edge tracker can be defined as a simple data structure that is initialized for the scanline at the top of each trapezoid and updated for each subsequent scanline.

**Initializing an Edge Tracker**

Suppose we are given an edge defined by two points $(x_1, y_1)$ and $(x_2, y_2)$ in device space, with $y_1 > y_2$. If we define $\Delta x = x_2 - x_1$ and $\Delta y = y_1 - y_2$, then similar triangles show how to calculate the initial point on the first scanline $y_s = \lfloor y_1 \rfloor$ (only the $x$ and $y$ values are shown for simplicity).

We see that the initial point is

$$x_s = x_1 + (y_1 - \lfloor y_1 \rfloor)\,\frac{\Delta x}{\Delta y}.$$
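As a concrete sketch of this initialization (a minimal Python illustration; the function name and the choice of $\lfloor y_1 \rfloor$ as the first scanline are ours, matching the device-space convention above):

```python
import math

def init_edge(x1, y1, x2, y2):
    """Initialize an edge tracker for the edge from (x1, y1) down to (x2, y2).

    Assumes y1 > y2.  The first scanline tracked is the integer value at or
    below the top vertex; similar triangles slide x1 toward x2 accordingly.
    """
    dx = x2 - x1
    dy = y1 - y2                   # positive, since the edge runs downward
    ys = math.floor(y1)            # first scanline at or below the top vertex
    xs = x1 + (y1 - ys) * dx / dy  # x_s = x_1 + (y_1 - floor(y_1)) * dx/dy
    return xs, ys
```

For example, the edge from $(2, 5.5)$ down to $(4, 1.5)$ starts tracking at scanline $5$ with $x = 2.25$.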

**Updating an Edge Tracker for Subsequent Scanlines**

Suppose again that we are given an edge defined by two points $(x_1, y_1)$ and $(x_2, y_2)$ in device space, with $y_1 > y_2$, and define $\Delta x = x_2 - x_1$ and $\Delta y = y_1 - y_2$ (only the $x$ and $y$ values are considered for simplicity).

Here we again have similar triangles, and it can be seen that to move the tracker from one scanline to the next, we only need update the $x$ coordinate of the endpoint by a fixed increment.

We can calculate this increment directly from the similar triangles:

$$x \leftarrow x + \frac{\Delta x}{\Delta y}$$

each time the tracker moves down one scanline.
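The per-scanline update can be sketched the same way (Python; names are illustrative):

```python
def edge_increment(x1, y1, x2, y2):
    """Amount added to the tracker's x for each scanline step down the edge.

    This is dx/dy from the similar triangles; assumes y1 > y2.
    """
    return (x2 - x1) / (y1 - y2)

# Stepping the tracker down one scanline is then just:
#   x = x + edge_increment(x1, y1, x2, y2)
```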

**The Endpoint Data Structure**

The easiest way to implement a trapezoid edge tracker is to create a simple data structure that holds both the endpoint information and the increments necessary to move the endpoint from scanline to scanline. The node typically contains (at least) the following information:

- $(x, z)$ - the endpoint,
- $\Delta x$ - the increment added to the $x$ coordinate to move from scanline to scanline,
- $\Delta z$ - the increment added to the $z$ coordinate to move from scanline to scanline, and
- $y_{\min}$ - the lower bound on the trapezoid, which tells our algorithm when to stop tracking.
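A minimal version of this node, sketched as a Python dataclass (field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class EdgeTracker:
    x: float      # current endpoint x on this scanline
    z: float      # current depth value at the endpoint
    dx: float     # increment added to x per scanline step
    dz: float     # increment added to z per scanline step
    y_min: int    # lowest scanline of the trapezoid; stop tracking here

    def step(self):
        """Move the tracker down to the next scanline."""
        self.x += self.dx
        self.z += self.dz
```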

**Identifying the Intersecting Pixels on Each Scanline**

To find the pixels that intersect a trapezoid, we create two edge trackers and iteratively work down (scanline by scanline) through the trapezoid. For each scanline, this enables us to determine the endpoints of a line that forms the intersection of the scanline and the polygon.

To find the pixels that intersect the trapezoid on a scanline, we only need find the lower-left-hand corner of each pixel. If we have calculated endpoints $(x_L, y)$ and $(x_R, y)$ on the scanline with integer value $y$, then the pixels intersecting the trapezoid are indexed by $(i, y)$, where $\lfloor x_L \rfloor \le i \le \lfloor x_R \rfloor$.
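As a sketch (Python; `x_left` and `x_right` are the tracked endpoints on the scanline):

```python
import math

def scanline_pixels(x_left, x_right, y):
    """Pixels, indexed by their lower-left corners, hit by the span on scanline y."""
    return [(i, y) for i in range(math.floor(x_left), math.floor(x_right) + 1)]
```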

**Incrementing the Z Value for Consecutive Pixels**

Establishing an increment for the $z$ value as we move from scanline to scanline is quite straightforward. The same is true for the $z$ value as we move from pixel to pixel along a scanline - however, *the two increments are different*. Both increments can be calculated directly from the normal vector to the trapezoid (trapezoids are assumed to be planar, as are polygons).

To see this, let $\vec{n} = (n_x, n_y, n_z)$ be the normal vector to the trapezoid, defined in device space. If we have two points $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$ in the plane of the trapezoid, then we know that

$$n_x (x_1 - x_2) + n_y (y_1 - y_2) + n_z (z_1 - z_2) = 0$$

which can be solved to give

$$z_1 = z_2 - \frac{n_x (x_1 - x_2) + n_y (y_1 - y_2)}{n_z}.$$

In particular, moving one pixel to the right on a scanline ($x_1 = x_2 + 1$, $y_1 = y_2$) changes $z$ by $-n_x / n_z$, while moving down one scanline along an edge ($y_1 = y_2 - 1$, $x_1 = x_2 + \Delta x / \Delta y$) changes $z$ by $(n_y - n_x\,\Delta x / \Delta y) / n_z$.
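These relations can be checked with a small sketch (Python; the plane is given by a normal and one reference point, both illustrative):

```python
def z_on_plane(n, p_ref, x, y):
    """Depth at (x, y) on the plane with normal n = (nx, ny, nz) through p_ref.

    Solves nx*(x - xr) + ny*(y - yr) + nz*(z - zr) = 0 for z; nz must be nonzero.
    """
    nx, ny, nz = n
    xr, yr, zr = p_ref
    return zr - (nx * (x - xr) + ny * (y - yr)) / nz

# Moving one pixel to the right on a scanline changes z by exactly -nx/nz,
# which is the per-pixel depth increment used along a scanline.
```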

**Establishing a Depth Value at Each Pixel**

Since a pixel is identified by the device-space coordinate of its lower-left-hand corner, it is useful in our rendering algorithms to be able to assign a $z$ value for a trapezoid at this point. We have already shown how to calculate the $z$ values at the endpoints; we only need modify this value so that it is appropriate for the $x$ coordinate of the lower-left-hand corner of the pixel. So consider an endpoint $(x_L, z_L)$ on a scanline.

We should subtract from $z_L$ the portion of the horizontal increment $-n_x/n_z$ that corresponds to the distance of $x_L$ from the left side of the pixel. That is,

$$z = z_L - (x_L - \lfloor x_L \rfloor)\left(-\frac{n_x}{n_z}\right).$$
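A sketch of this adjustment (Python; `dz_dx` is the horizontal depth increment $-n_x/n_z$ from the previous section):

```python
import math

def z_at_pixel_corner(x_l, z_l, dz_dx):
    """Adjust the tracked depth z_l at x = x_l back to the pixel's lower-left corner.

    Backs up by the fractional distance of x_l from the left side of its pixel.
    """
    return z_l - (x_l - math.floor(x_l)) * dz_dx
```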

Using the above calculations, we can use the endpoint data structure to set up edge-tracking mechanisms, enumerate the pixels within a trapezoid, and assign a $z$ value to each pixel.

**Summary**

Scan conversion is a procedure that is used repeatedly in computer graphics algorithms. It is a simple procedure, designed around an edge-tracking paradigm, which can be implemented by adding simply-calculated increments onto base values. One of the primary uses of the algorithm is to establish depth ($z$) values at each pixel for the polygons in a scene; this enables us to retain only the visible polygons in our final renderings. However, we can also store quantities relating to color, texture, and parameterization in our endpoint nodes and increment them as well. These quantities are included in the algorithm in a manner similar to the $z$ coordinate above.


**This document maintained by Ken Joy**

All contents copyright (c) 1996, 1997, 1998, 1999

Computer Science Department

University of California, Davis

All rights reserved.

1999-12-06