SPECIAL FEATURES IN OPENGL

Levon Altunyan

Homework
Abteilung Informatik und Angewandte Kognitionswissenschaft
Lehrstuhl Computergraphik u. Wissenschaftliches Rechnen
Faculty of Engineering
University of Duisburg-Essen

Supervisor: Prof. Dr. Wolfram Luther
Location: Campus Duisburg
Time frame: April 24, 2009 - June 18, 2009


CONTENTS

i  Introduction  1
1  Introduction  2

ii Special Features in OpenGL  3
2  Improving Performance  4
   2.1 Display List  5
   2.2 Vertex Arrays  6
   2.3 Marbles Example  8
3  Alpha Blending and Antialiasing  13
   3.1 Changing the Blending Equation  14
   3.2 Disk Blender  15
4  Fog  17
   4.1 Fog Equations  18
   4.2 Fog Coordinates  18
5  Selection and Feedback  20
   5.1 Selection  20
   5.2 The Basic Steps  23
   5.3 Feedback  23
   5.4 The Feedback Buffer  24
   5.5 Feedback Data  24
6  Fragment Operations  26
   6.1 Multisample Operations  26
   6.2 Alpha Test  27
   6.3 Stencil Test  27
   6.4 Dissolve Effect with Stencil Buffer  28
7  Summary  31
Bibliography  32


LIST OF FIGURES

Figure 1 Figure 2 Figure 3 Figure 4 Figure 5 Figure 6 Figure 7 Figure 8 Figure 9 Figure 10 Figure 11

Arrays for position, color, and texture coordinate attributes 6 Triangle Strip 7 Immediate Mode [2] 9 Vertex Array On [2] 10 Display Lists On [2] 11 Disk Blender - Example 1 16 Disk Blender - Example 2 16 Fog Example [2] 19 Selection Hierchy [4] 22 Selection Example [1] 24 Using Stencil to Dissolve Between Images [5] 29

LIST OF TABLES

Table 1  Valid Vertex Array Sizes and Data Types  8
Table 2  OpenGL Blending Factors  14
Table 3  Available Blend Equation Modes  15
Table 4  Three OpenGL Supported Fog Equations  18
Table 5  Feedback Buffer Types  25
Table 6  Feedback Buffer Tokens  25
Table 7  Fragment Test  27
Table 8  Stencil Update Values  28

ACRONYMS

API  Application Programming Interface



Part I INTRODUCTION


1 INTRODUCTION

OpenGL (Open Graphics Library) is a standard specification defining a cross-language, cross-platform API for writing applications that produce 2D and 3D computer graphics. The interface consists of over 250 different function calls which can be used to draw complex three-dimensional scenes from simple primitives. OpenGL was developed by Silicon Graphics Inc. (SGI) and is widely used in CAD, virtual reality, scientific visualization, information visualization, and flight simulation. It is also used in video games, where it competes with Direct3D on Microsoft Windows platforms. The features which OpenGL provides can be divided into two big groups: basic and special ones. This work deals with some of the functions which fall into the group of special features. Since the list of topics in the special features category is vast, the number of discussed topics is restricted to the most important ones suggested in the literature [6].



Part II SPECIAL FEATURES IN OPENGL


2 IMPROVING PERFORMANCE

Fundamental progress has to do with the reinterpretation of basic ideas.
— Alfred North Whitehead

In this section, two techniques for optimizing OpenGL code (display lists and vertex arrays) will be described. Furthermore, the information and procedures needed to render geometry structures will be explained, and an example which uses the listed methods will be given.

In many graphics applications, and in virtually all games, maintaining an interactive frame rate and smooth animation is of utmost importance. Although rapid advancements in graphics hardware have lessened the need to optimize every single line of code, programmers still need to focus on writing efficient code that, through the graphics Application Programming Interface (API), harnesses the full power of the underlying hardware.

The work done every time you call an OpenGL command is not inconsequential. Commands are compiled, or converted, from OpenGL's high-level command language into low-level hardware commands understood by the hardware. For complex geometry, or just large amounts of vertex data, this process is performed many thousands of times, just to draw a single image on screen. This is, of course, the aforementioned problem with immediate mode rendering. How does our knowledge of the command buffer help with this situation?

Often, the geometry or other OpenGL data remains the same from frame to frame. For example, a spinning torus is always composed of the same set of triangle strips, with the same vertex data, recalculated with expensive trigonometric functions every frame. The only thing changing from frame to frame is the modelview matrix. A solution to this needlessly repeated overhead is to save a chunk of precomputed data from the command buffer that performs some repetitive rendering task, such as drawing the torus. This chunk of data can later be copied back to the command buffer all at once, saving the many function calls and compilation work done to create the data. OpenGL provides a facility to create a preprocessed set of OpenGL commands (the chunk of data) that can then be quickly copied to the command buffer for more rapid execution.



2.1 Display List

This precompiled list of commands is called a display list. Creating one or more of them is an easy and straightforward process, and display lists have the following properties:

• Used in immediate mode to organize data and improve performance.
• Maximize performance by knowing when and how to use display lists.
• Reduce the cost of repeatedly transmitting data over a network if repeatedly used commands are stored in a display list.

Some graphics hardware may store display lists in dedicated memory or may store the data in an optimized form that is more compatible with the graphics hardware or software. The advantages of display lists can be divided into the following groups, suggested by [1]:

• Matrix operations. Most matrix operations require OpenGL to compute inverses. Both the computed matrix and its inverse might be stored by a particular OpenGL implementation in a display list.
• Raster bitmaps and images. The format in which the raster data is specified is not likely to be the ideal one for the hardware. When a display list is compiled, OpenGL might transform the data into the representation preferred by the hardware. This can have a significant effect on the speed of raster character drawing, since character strings usually consist of a series of small bitmaps.
• Lights, material properties, and lighting models. When a scene is drawn with complex lighting conditions, one might change the materials for each item in the scene. Setting the materials can be slow, since it might involve significant calculations. If we put the material definitions in display lists, these calculations don't have to be done each time we switch materials, since only the results of the calculations need to be stored; as a result, rendering a lit scene might be faster.
• Polygon stipple patterns.

Furthermore, the user should also consider the following disadvantages:




• Very small lists may not perform well, since there is some overhead when executing a list.
• Immutability of the contents of a display list:
  – Display lists cannot be changed, and their contents cannot be read.
  – If data needs to be maintained separately from the display list (e.g. for continued data processing), then a lot of additional memory may be required.
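The record-once, replay-many-times idea behind display lists can be illustrated outside OpenGL. The sketch below uses entirely hypothetical types and names (MockDisplayList, compileCommand, drawScene are not part of the OpenGL API); it only mimics the pattern of glNewList/glCallList, where the expensive compilation happens once and later frames merely copy the precompiled chunk:

```cpp
#include <vector>

// Hypothetical stand-in for a display list (NOT the OpenGL API): a
// recorded sequence of already-"compiled" commands that can be replayed
// without re-issuing and re-compiling the original calls.
struct MockDisplayList {
    std::vector<int> compiled;   // pretend these are low-level hardware commands
    bool recorded = false;
};

// "Compiling" a high-level command into a hardware command is the
// expensive part; count how often it happens.
int g_compileCount = 0;
int compileCommand(int cmd) { ++g_compileCount; return cmd * 2; }

// The first call records (compiles) the commands, analogous to
// glNewList/glEndList; later calls replay the precompiled chunk,
// analogous to glCallList.
void drawScene(MockDisplayList& list, const std::vector<int>& commands,
               std::vector<int>& commandBuffer) {
    if (!list.recorded) {
        for (int c : commands) list.compiled.push_back(compileCommand(c));
        list.recorded = true;
    }
    // Copy the precompiled chunk into the command buffer all at once.
    commandBuffer.insert(commandBuffer.end(),
                         list.compiled.begin(), list.compiled.end());
}
```

Rendering the same scene for 100 frames with this sketch compiles each command exactly once, which is the whole point of a display list; it also makes the immutability disadvantage concrete, since the replayed chunk can never change.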

2.2 Vertex Arrays

Figure 1: Arrays for position, color, and texture coordinate attributes

The second method to optimize OpenGL code is the use of vertex arrays. OpenGL requires many function calls to render geometric primitives. OpenGL's vertex array routines allow us to specify a lot of vertex-related data with just a few arrays and to access that data with equally few function calls. Consider the case of a mesh representing a triangle strip, illustrated in Figure 2: each of the circled vertices is a redundant vertex. In other words, each of these vertices is shared by more than three triangles, but since a triangle strip can represent, at most, three triangles per vertex, each of the circled vertices needs to be sent to the video card more than once. This results in using additional bandwidth to send the




Figure 2: Triangle Strip

data to the video card. In addition, the vertex is likely to be transformed and lit more than once. These two operations waste bandwidth and processing cycles. To address these issues, OpenGL includes vertex arrays. Vertex arrays offer the following advantages:

• Large batches of data can be sent with a small number of function calls.
• Through the use of indexed vertex arrays, vertices can be sent exactly once per triangle mesh, reducing bandwidth and potentially avoiding redundant transformation and lighting.

Using vertex array routines, all 20 vertices in a 20-sided polygon can be put into one array and called with one function. If each vertex also has a surface normal, all 20 surface normals can be put into another array and also called with one function. Arranging data in vertex arrays may increase the performance of the application. Using vertex arrays reduces the number of function calls, which improves performance. Also, using vertex arrays may allow reuse of already processed shared vertices. There are three steps to using vertex arrays to render geometry:

1. Activate (enable) up to eight arrays, each storing a different type of data: vertex coordinates, surface normals, RGBA colors, secondary colors, color indices, fog coordinates, texture coordinates, or polygon edge flags.
2. Put data into the array or arrays. The arrays are accessed by the addresses of (that is, pointers to) their memory locations. In the client-server model, this data is stored in the client's address space.




3. Draw geometry with the data. OpenGL obtains the data from all activated arrays by dereferencing the pointers. In the client-server model, the data is transferred to the server's address space. There are three ways to do this:

• Accessing individual array elements (randomly hopping around)
• Creating a list of individual array elements (methodically hopping around)
• Processing sequential array elements

The dereferencing method of choice may depend on the type of problem which is encountered. Version 1.4 of OpenGL adds support for multiple array access from a single function call. [1]

Table 1: Valid Vertex Array Sizes and Data Types

Command                  Elements    Valid Data Types
glColorPointer           3, 4        GL_BYTE, GL_UNSIGNED_BYTE, GL_SHORT, GL_UNSIGNED_SHORT, GL_INT, GL_UNSIGNED_INT, GL_FLOAT, GL_DOUBLE
glEdgeFlagPointer        1           None specified (always GLboolean)
glFogCoordPointer        1           GL_FLOAT, GL_DOUBLE
glNormalPointer          3           GL_BYTE, GL_SHORT, GL_INT, GL_FLOAT, GL_DOUBLE
glSecondaryColorPointer  3           GL_BYTE, GL_UNSIGNED_BYTE, GL_SHORT, GL_INT, GL_UNSIGNED_INT, GL_FLOAT, GL_DOUBLE
glTexCoordPointer        1, 2, 3, 4  GL_SHORT, GL_INT, GL_FLOAT, GL_DOUBLE
glVertexPointer          2, 3, 4     GL_SHORT, GL_INT, GL_FLOAT, GL_DOUBLE
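Returning to the redundant-vertex problem illustrated in Figure 2, the bandwidth saving promised by indexed vertex arrays can be quantified with a toy calculation (the grid dimensions and the two helper functions are hypothetical, not taken from any example in the text):

```cpp
// For an (n x m) grid of vertices drawn as (n - 1) triangle strips,
// immediate mode re-sends two full rows of m vertices per strip, so
// the shared interior rows are transmitted (and transformed) twice.
int stripVerticesSent(int n, int m) { return (n - 1) * 2 * m; }

// With an indexed vertex array (glDrawElements), each unique vertex is
// stored and potentially transformed once; strips reference it by index.
int uniqueVertices(int n, int m) { return n * m; }
```

For a 10 x 10 grid, immediate-mode strips send 180 vertices while an indexed array needs only the 100 unique ones; the gap grows with mesh size, which is exactly the saving the advantages list above describes.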

With the discussed methods, the vertex arrays will look similar to Figure 1, with each vertex attribute stored in a separate array. Each of these arrays is passed independently via one of the gl*Pointer() functions, and when a draw command is made, OpenGL assembles data from each array to form complete vertices.
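This per-index assembly of complete vertices can be pictured with a plain C++ sketch (the structures and the assemble function are illustrative stand-ins, not OpenGL itself): given separate position and color arrays plus an index list, one complete vertex is gathered per index, much as glDrawElements dereferences the pointers set with glVertexPointer and glColorPointer.

```cpp
#include <array>
#include <vector>

// A complete vertex assembled from separate attribute arrays.
struct Vertex {
    std::array<float, 3> pos;
    std::array<float, 3> color;
};

// Gather complete vertices from parallel attribute arrays via an index
// list, mimicking what OpenGL does when a draw command dereferences the
// enabled vertex arrays.
std::vector<Vertex> assemble(const std::vector<std::array<float, 3>>& positions,
                             const std::vector<std::array<float, 3>>& colors,
                             const std::vector<unsigned>& indices) {
    std::vector<Vertex> out;
    out.reserve(indices.size());
    for (unsigned i : indices)
        out.push_back(Vertex{positions[i], colors[i]});
    return out;
}
```

Note that a shared vertex appearing several times in the index list is stored only once in the attribute arrays, which is where the bandwidth saving comes from.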

2.3 Marbles Example

As an example of the use of display lists and vertex arrays, a short demo program (Marbles) presented by Astle et al. [2] is included. A large number of bouncing marbles inside a glass case with a mirrored floor is drawn. Each marble shares the same data but is colored and positioned independently. Immediate mode is used by default. The following keys enable the vertex array and display list special features:

<SPACE> Toggles vertex arrays for the marbles using glDrawElements().
<TAB> Toggles display lists for everything.
<C> Toggles compiled vertex arrays.

Figure 3: Immediate Mode [2]

A definite improvement in frame rate when enabling vertex arrays should be observed (fig. 4). Display lists (fig. 5) may or may not improve performance, depending on the underlying hardware. When display lists are enabled, the marbles freeze in place. This is due to the fact that each marble is positioned independently inside of the display list. Once the display list is compiled, the data within it can't be changed, so the marbles can't move relative to each other. However, the marbles can still be moved as a group. Furthermore, when enabling compiled vertex arrays, the marble colors may change to a single color. This is because all of the marbles share the same base set of data: when the vertices get locked and cached away, the changes in the material may not get picked up.

Let's look at the most relevant code for this example. First, after generating the marbles, the vertex arrays are set up as follows:

bool CGfxOpenGL::Init()
{
    ...
    InitializeMarbles();
    glVertexPointer(3, GL_FLOAT, 0, m_positions);
    glNormalPointer(GL_FLOAT, 0, m_texCoords);
    glTexCoordPointer(3, GL_FLOAT, 0, m_texCoords);
    ...
}

Figure 4: Vertex Array On [2]

The texture coordinate data is being used for both texture coordinates and normals because with cube-mapped spheres, the values are identical. The relevant code for display lists happens in Render(), as shown below:

void CGfxOpenGL::Render()
{
    ...
    if (m_useList)
    {
        // use the existing list if there is one
        if (m_list)
        {
            glCallList(m_list);
            return;
        }
        else // otherwise, create a new one
        {
            m_list = glGenLists(1);
            glNewList(m_list, GL_COMPILE_AND_EXECUTE);
        }
    }

    glLightfv(GL_LIGHT0, GL_POSITION, LIGHT_POSITION);
    DrawFloor();
    DrawReflection();
    DrawMarbles(GL_FALSE);
    DrawBox();

    if (m_useList)
        glEndList();
}

Figure 5: Display Lists On [2]

Finally, the vertex arrays are put to use inside of DrawMarbles():

void CGfxOpenGL::DrawSphere()
{
    if (m_useVertexArrays)
    {
        for (int i = 0; i < m_numStrips; ++i)
        {
            glDrawElements(GL_TRIANGLE_STRIP, m_vertsPerStrip, GL_UNSIGNED_INT,
                           &m_indexArray[i * m_vertsPerStrip]);
        }
    }
    else // draw using immediate mode instead
    {
        for (int i = 0; i < m_numStrips; ++i)
        {
            glBegin(GL_TRIANGLE_STRIP);
            for (int j = 0; j < m_vertsPerStrip; ++j)
            {
                int index = m_indexArray[i * m_vertsPerStrip + j];
                glNormal3fv(m_texCoords[index].v);
                glTexCoord3fv(m_texCoords[index].v);
                glVertex3fv(m_positions[index].v);
            }
            glEnd();
        }
    }
}



3 ALPHA BLENDING AND ANTIALIASING

The blending function combines color values from a source and a destination. The final effect is that parts of the scene appear translucent. The color blending functions support effects such as transparency that can be used to simulate windows, drink glasses, and other transparent objects. Blending in OpenGL provides pixel-level control of RGBA color storage in the color buffer. Blending operations cannot be used in color index mode and are disabled in color index windows. To enable blending in RGBA windows, you must first call glEnable(GL_BLEND). After this, you call glBlendFunc with two arguments: the source and the destination colors' blending functions. By default, these arguments are GL_ONE and GL_ZERO, respectively, which is equivalent to glDisable(GL_BLEND).

Transparency is perhaps the most typical use of blending, often used for windows, bottles, and other 3D objects that you can see through. Transparency can also be used to combine multiple images, or for "soft" brushes in a paint program. The transparency blend takes the source color, scales it by the alpha component, and then adds the destination pixel color scaled by one minus the alpha value. Stated more simply, this blending function takes a fraction of the current drawing color and overlays it on the pixel on the screen. The alpha component of the color can range from 0 (completely transparent) to 1 (completely opaque), as follows:

Rdestination = Rsource * Asource + Rdestination * (1 - Asource)
Gdestination = Gsource * Asource + Gdestination * (1 - Asource)
Bdestination = Bsource * Asource + Bdestination * (1 - Asource)

Because only the source alpha component is used, the underlying graphics board need not support alpha color planes in the color buffer. This is important because the standard Microsoft OpenGL implementation does not support alpha color planes.

Something to remember with alpha-blended transparency is that the normal depth-buffer test can interfere with the effect you're trying to achieve. To make sure that transparent polygons and lines are drawn properly, they should always be drawn from back to front.

Table 2 may seem a bit bewildering, so let's look at a common blending function combination described by Wright et al. [3]:

glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

This function tells OpenGL to take the source (incoming) color and multiply the color (the RGB values) by the alpha value, then add this to the result of multiplying the destination color by one minus the alpha value from the source. Say, for example, that one has the color red




Table 2: OpenGL Blending Factors

Function                     RGB Blend Factors           Alpha Blend Factor
GL_ZERO                      (0, 0, 0)                   0
GL_ONE                       (1, 1, 1)                   1
GL_SRC_COLOR                 (Rs, Gs, Bs)                As
GL_ONE_MINUS_SRC_COLOR       (1, 1, 1) - (Rs, Gs, Bs)    1 - As
GL_DST_COLOR                 (Rd, Gd, Bd)                Ad
GL_ONE_MINUS_DST_COLOR       (1, 1, 1) - (Rd, Gd, Bd)    1 - Ad
GL_SRC_ALPHA                 (As, As, As)                As
GL_ONE_MINUS_SRC_ALPHA       (1, 1, 1) - (As, As, As)    1 - As
GL_DST_ALPHA                 (Ad, Ad, Ad)                Ad
GL_ONE_MINUS_DST_ALPHA       (1, 1, 1) - (Ad, Ad, Ad)    1 - Ad
GL_CONSTANT_COLOR            (Rc, Gc, Bc)                Ac
GL_ONE_MINUS_CONSTANT_COLOR  (1, 1, 1) - (Rc, Gc, Bc)    1 - Ac
GL_CONSTANT_ALPHA            (Ac, Ac, Ac)                Ac
GL_ONE_MINUS_CONSTANT_ALPHA  (1, 1, 1) - (Ac, Ac, Ac)    1 - Ac
GL_SRC_ALPHA_SATURATE        (f, f, f)¹                  1

¹ where f = min(As, 1 - Ad)

(1.0f, 0.0f, 0.0f, 0.0f) already in the color buffer. This is the destination color, or Cd. If something is drawn over this with the color blue and an alpha of 0.6 (0.0f, 0.0f, 1.0f, 0.6f), the final color would be computed as shown here:

Cd = destination color = (1.0f, 0.0f, 0.0f, 0.0f)
Cs = source color = (0.0f, 0.0f, 1.0f, 0.6f)
S = source alpha = 0.6
D = one minus source alpha = 1.0 - 0.6 = 0.4

Now, the equation

Cf = (Cs * S) + (Cd * D)

evaluates to

Cf = (Blue * 0.6) + (Red * 0.4)

The final color is a scaled combination of the original red value and the incoming blue value. The higher the incoming alpha value, the more of the incoming color is added and the less of the original color is retained. This blending function is often used to achieve the effect of drawing a transparent object in front of some other opaque object. This technique does require, however, that you draw the background object or objects first and then draw the transparent object blended over the top. [3]
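The arithmetic of the red/blue example can be checked with a small helper, a sketch rather than OpenGL code, which applies Cf = Cs * S + Cd * D per component with S = As and D = 1 - As:

```cpp
#include <array>

using Color = std::array<float, 4>; // RGBA

// Apply the default transparency blend, glBlendFunc(GL_SRC_ALPHA,
// GL_ONE_MINUS_SRC_ALPHA): Cf = Cs * As + Cd * (1 - As), per component.
Color blendSrcAlpha(const Color& src, const Color& dst) {
    float s = src[3];     // S = source alpha
    float d = 1.0f - s;   // D = one minus source alpha
    return { src[0] * s + dst[0] * d,
             src[1] * s + dst[1] * d,
             src[2] * s + dst[2] * d,
             src[3] * s + dst[3] * d };
}
```

Feeding it blue (0, 0, 1, 0.6) over red (1, 0, 0, 0) reproduces the worked result above: 40% of the old red and 60% of the new blue.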

3.1 Changing the Blending Equation

The blending equation shown earlier is the default blending equation. One can actually choose from five different blending equations,




Table 3: Available Blend Equation Modes

Mode                      Function
GL_FUNC_ADD (default)     Cf = (Cs * S) + (Cd * D)
GL_FUNC_SUBTRACT          Cf = (Cs * S) - (Cd * D)
GL_FUNC_REVERSE_SUBTRACT  Cf = (Cd * D) - (Cs * S)
GL_MIN                    Cf = min(Cs, Cd)
GL_MAX                    Cf = max(Cs, Cd)

each given in Table 3 and selected with the following function:

void glBlendEquation(GLenum mode);

In addition to glBlendFunc, one has even more flexibility with this function:

void glBlendFuncSeparate(GLenum srcRGB, GLenum dstRGB, GLenum srcAlpha, GLenum dstAlpha);

Whereas glBlendFunc specifies the blend functions for source and destination RGBA values, glBlendFuncSeparate allows one to specify blending functions for the RGB and alpha components separately. Finally, as shown in Table 2, the GL_CONSTANT_COLOR, GL_ONE_MINUS_CONSTANT_COLOR, GL_CONSTANT_ALPHA, and GL_ONE_MINUS_CONSTANT_ALPHA values all allow a constant blending color to be introduced to the blending equation. This constant blending color is initially black (0.0f, 0.0f, 0.0f, 0.0f), but it can be changed with this function:

void glBlendColor(GLclampf red, GLclampf green, GLclampf blue, GLclampf alpha); [3]
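The five modes differ only in how the scaled source and destination terms are combined. The per-component sketch below is illustrative, not OpenGL itself; it follows the OpenGL specification, under which the reverse-subtract mode computes Cd * D - Cs * S and the min/max modes ignore the blend factors:

```cpp
#include <algorithm>

enum BlendEquation { FUNC_ADD, FUNC_SUBTRACT, FUNC_REVERSE_SUBTRACT, MIN_EQ, MAX_EQ };

// Combine one source component cs and destination component cd with
// blend factors s and d under the chosen equation. As in OpenGL,
// the MIN and MAX equations ignore the blend factors entirely.
float blendComponent(BlendEquation eq, float cs, float cd, float s, float d) {
    switch (eq) {
        case FUNC_ADD:              return cs * s + cd * d;
        case FUNC_SUBTRACT:         return cs * s - cd * d;
        case FUNC_REVERSE_SUBTRACT: return cd * d - cs * s;
        case MIN_EQ:                return std::min(cs, cd);
        case MAX_EQ:                return std::max(cs, cd);
    }
    return 0.0f;
}
```

Running the same inputs through each mode is an easy way to predict what the disk blender demo in the next section will display as the keys cycle through the equations.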

3.2 Disk Blender

To demonstrate some of the many blending combinations, the disk blender example [2] is included. The scene is drawn on a black background with an alpha of 1.0. A green disk with some transparency is drawn first, without blending. Then a semi-transparent red disk is drawn on top of it with blending enabled. By pressing the S, D, and E keys, one can cycle through all of the available source factors, destination factors, and blend equations. Screenshots of the program are shown in Figures 6 and 7.




Figure 6: Src: GL_SRC_ALPHA, Dst: GL_ONE_MINUS_SRC_ALPHA, Eqn: GL_FUNC_ADD [2]

Figure 7: Src: GL_ZERO, Dst: GL_ONE_MINUS_SRC_ALPHA, Eqn: GL_MIN [2]



4 FOG

Another easy-to-use special effect that OpenGL supports is fog. With fog, OpenGL blends a fog color that one specifies with geometry after all other color computations have been completed. The amount of the fog color mixed with the geometry varies with the distance of the geometry from the camera origin. The result is a 3D scene that appears to contain fog. Fog can be useful for slowly obscuring objects as they "disappear" into the background; a slight amount of fog will also produce a hazy effect on distant objects, providing a powerful and realistic depth cue. Turning fog on and off is as easy as using the following functions:

glEnable/glDisable(GL_FOG);

The means of changing fog parameters (how the fog behaves) is the glFog function. There are several variations on glFog:

void glFogi(GLenum pname, GLint param);
void glFogf(GLenum pname, GLfloat param);
void glFogiv(GLenum pname, GLint* params);
void glFogfv(GLenum pname, GLfloat* params);

First, the use of glFog is shown:

glFogfv(GL_FOG_COLOR, fLowLight); // Set fog color to match background

When used with the GL_FOG_COLOR parameter, this function expects a pointer to an array of floating-point values that specifies what color the fog should be. Here, we used the same color for the fog as the background clear color. If the fog color does not match the background (there is no strict requirement for this), as objects become fogged, they will become a fog-colored silhouette against the background.

Next, one can specify how far away an object must be before fog is applied, and how far away the object must be for the fog to be fully applied (where the object is completely the fog color), with the following lines:

glFogf(GL_FOG_START, 5.0f); // How far away does the fog start
glFogf(GL_FOG_END, 30.0f);  // How far away does the fog stop

The parameter GL_FOG_START specifies how far away from the eye fogging begins to take effect, and GL_FOG_END is the distance from the eye where the fog color completely overpowers the color of the object. The transition from start to end is controlled by the fog equation. A typical choice is the linear fog equation, specified with GL_LINEAR:

glFogi(GL_FOG_MODE, GL_LINEAR); // Which fog equation to use [3]



4.1 Fog Equations

Table 4: Three OpenGL Supported Fog Equations

Fog Mode   Fog Equation
GL_LINEAR  f = (end - c) / (end - start)
GL_EXP     f = e^(-d*c)
GL_EXP2    f = e^(-(d*c)^2)

The fog equation calculates a fog factor that varies from 0 to 1 as the distance of the fragment moves between the start and end distances. OpenGL supports three fog equations: GL_LINEAR, GL_EXP, and GL_EXP2. These equations are shown in Table 4. In these equations, c is the distance of the fragment from the eye plane, end is the GL_FOG_END distance, and start is the GL_FOG_START distance. The value d is the fog density. Fog density is typically set with glFogf:

glFogf(GL_FOG_DENSITY, 0.5f);

From the equations above, one can note that GL_FOG_START and GL_FOG_END only have an effect on GL_LINEAR fog. The distance to a fragment from the eye plane can be calculated in one of two ways. Some implementations (notably NVIDIA hardware) use the actual fragment depth. Other implementations (notably many ATI chipsets) use the vertex distance and interpolate between vertices. The former method is sometimes referred to as fragment fog, and the latter as vertex fog. Fragment fog requires more work than vertex fog, but often has a higher-quality appearance. Both of the previously mentioned implementations honor the glHint parameter GL_FOG_HINT. To explicitly request fragment fog (better looking, but more work), call

glHint(GL_FOG_HINT, GL_NICEST);

For faster, less precise fog, one would call

glHint(GL_FOG_HINT, GL_FASTEST);

Remember that hints are implementation dependent, may change over time, and are not required to be acknowledged or used by the driver at all. Indeed, you can't even rely on which fog method will be the default! [3]
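The three fog equations of Table 4 can be evaluated directly. The sketch below is illustrative (the enum and function names are made up); it clamps the factor to [0, 1], as OpenGL does before blending:

```cpp
#include <cmath>

enum FogMode { FOG_LINEAR, FOG_EXP, FOG_EXP2 };

// Evaluate the fog factor f for a fragment at eye distance c.
// start and end apply only to FOG_LINEAR; density d only to the
// exponential modes. f = 1 means no fog, f = 0 means fully fogged.
float fogFactor(FogMode mode, float c, float start, float end, float d) {
    float f = 1.0f;
    switch (mode) {
        case FOG_LINEAR: f = (end - c) / (end - start);    break;
        case FOG_EXP:    f = std::exp(-d * c);             break;
        case FOG_EXP2:   f = std::exp(-(d * c) * (d * c)); break;
    }
    // OpenGL clamps the factor to [0, 1] before blending.
    if (f < 0.0f) f = 0.0f;
    if (f > 1.0f) f = 1.0f;
    return f;
}
```

With the earlier settings GL_FOG_START = 5 and GL_FOG_END = 30, a fragment at distance 17.5 (the midpoint) gets a linear fog factor of exactly 0.5, i.e. half geometry color, half fog color.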

4.2 Fog Coordinates

Rather than letting OpenGL calculate the fog distance, one can actually do this oneself. This value is called the fog coordinate, and it can be set manually with the function glFogCoordf:

void glFogCoordf(GLfloat fFogDistance);

Fog coordinates are ignored unless one changes the fog coordinate source with this function call:



Figure 8: Fog Example [2]

glFogi(GL_FOG_COORD_SRC, GL_FOG_COORD); // use glFogCoordf

To return to OpenGL-derived fog values, change the last parameter to GL_FRAGMENT_DEPTH:

glFogi(GL_FOG_COORD_SRC, GL_FRAGMENT_DEPTH);

When specified, this fog coordinate is used as the fog distance in the equations of Table 4. Specifying one's own fog distance allows one to change the way distance is calculated. For example, one may want elevation to play a role, lending itself to volumetric fog effects.

The fog example in fig. 8 shows several different uses of fog. A simple heightmap-based terrain is rendered, with a large quad drawn in blue at ground level to represent water. The top left panel shows the terrain without any fog, the top right panel uses GL_LINEAR fog, the bottom left panel shows GL_EXP fog with a fairly low density, and the bottom right panel shows GL_EXP2 fog with fog coordinates set so that the fog is thicker where the terrain is lower (near the water), thinning quickly at higher elevations.

In conclusion, one can use fog to let objects fade to a background color as they get farther away, allowing the use of a smaller view frustum and thus avoiding having objects pop into view as they enter the frustum. Fog is controlled through glFog(). One can take greater control over how fog is calculated by using fog coordinates. [2]



5 SELECTION AND FEEDBACK

Selection and feedback are two powerful features of OpenGL that facilitate the user's active interaction with a scene. Selection and picking are used to identify an object or region of a scene in OpenGL coordinates rather than just window coordinates. Feedback returns valuable information about how an object or a primitive is actually drawn in the window. One can use this information to implement features such as annotations or bounding boxes in the scene.

One should note that the features in this chapter are typically implemented in software (the driver), even for hardware-accelerated OpenGL implementations. This means that rendering in selection mode, for example, will be very slow compared to hardware rendering. A common means of accounting for this is to render lower-resolution "proxies," and to render only objects that can be clicked on when performing selection. There are more advanced means of determining object selection that may be preferable for real-time picking. Still, using the techniques in this chapter makes adding this kind of functionality to the application fairly straightforward, and one may find that even with software rendering, the response time from a mouse click is more than fast enough for most purposes. [4]

5.1 Selection

Selection is actually a rendering mode, but in selection mode, no pixels are actually copied to the frame buffer. Instead, primitives that are drawn within the viewing volume (and thus would normally appear in the frame buffer) produce hit records in a selection buffer. This buffer, unlike other OpenGL buffers, is just an array of integer values. One must set up this selection buffer in advance and name the primitives or the groups of primitives (objects or models) so they can be identified in the selection buffer. After that, the selection buffer is parsed to determine which objects intersected the viewing volume. Named objects that do not appear in the selection buffer fell outside the viewing volume and would not have been drawn in render mode. For picking, one specifies a viewing volume that corresponds to a small space beneath the mouse pointer and then checks which named objects are rendered within that space.

OpenGL supports an object selection mechanism in which the object geometry is transformed and compared against a selection




subregion (pick region) of the viewport. The mechanism uses the transformation pipeline to compare object vertices against the view volume. To reduce the view volume to a screen-space subregion (in window coordinates) of the viewport, the projected coordinates of the object are transformed by a scale and translation transform and combined to produce the matrix 

px dx

 0 T =  0 0

0

x 0 px − 2 qxd−o x

py dy

0 py − 2

0 0

1 0

qy −oy   dy 

0 0

 

-

where ox , oy , px , and py are the x and y origin and width and height of the viewport, and qx , qy , dx , and dy are the origin and width and height of the pick region.[4] Objects are identified by assigning them integer names using glLoadName. Each object is sent to the OpenGL pipeline and tested against the pick region. If the test succeeds, a hit record is created to identify the object. The hit record is written to the selection buffer whenever a change is made to the current object name. An application can determine which objects intersected the pick region by scanning the selection buffer and examining the names present in the buffer. The OpenGL selection method determines that an object has been hit if it intersects the view volume. Bitmap and pixel image primitives generate a hit record only if a raster positioning command is sent to the pipeline and the transformed position lies within the viewing volume. To generate hit records for an arbitrary point within a pixel image or bitmap, a bounding rectangle should be sent rather than the image. This causes the selection test to use the interior of the rectangle. Similarly, wide lines and points are selected only if the equivalent infinitely thin line or infinitely small point is selected. To facilitate selection testing of wide lines and points, proxy geometry representing the true footprint of the primitive is used instead. One can name every single primitive used to render a scene of objects, but doing so is rarely useful. More often, groups of primitives are named, thus creating names for the specific objects or pieces of objects in the scene. Object names, like display list names, are nothing more than unsigned integers. The names list is maintained on the name stack. After the name stack, is initialized can push names on the stack or simply replace the name currently on top of the stack. 
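Building the pick-region matrix can be sketched directly from its definition; this is the same transform that the utility function gluPickMatrix prepends to the projection matrix. The sketch stores the matrix row-major for readability (OpenGL itself uses column-major storage), and the struct and function names are illustrative:

```cpp
// Build the 4x4 pick matrix T (row-major here, for readability) for a
// viewport with origin (ox, oy) and size (px, py) and a pick region
// centered at (qx, qy) with size (dx, dy).
struct Mat4 { float m[4][4]; };

Mat4 pickMatrix(float ox, float oy, float px, float py,
                float qx, float qy, float dx, float dy) {
    Mat4 T = {};                                // zero-initialize
    T.m[0][0] = px / dx;                        // scale x
    T.m[1][1] = py / dy;                        // scale y
    T.m[2][2] = 1.0f;
    T.m[3][3] = 1.0f;
    T.m[0][3] = (px - 2.0f * (qx - ox)) / dx;   // translate x
    T.m[1][3] = (py - 2.0f * (qy - oy)) / dy;   // translate y
    return T;
}
```

For a 400 x 300 viewport and a 10 x 10 pick region centered in the middle of the viewport, the translation terms vanish and the matrix simply magnifies the region 40x horizontally and 30x vertically, so that the tiny region under the cursor fills the clip volume.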
When a hit occurs during selection, all the names currently on the name stack are appended to the end of the selection buffer. Thus, a single hit can return more than one name if needed. Many applications




Figure 9: Selection Hierarchy [4]

use instancing of geometric data to reduce their memory footprint. Instancing allows an application to create a single representation of the geometric data for each type of object used in the scene. If the application is modeling a car for example, the four wheels of the car may be represented as instances of a single geometric description of a wheel, combined with a modeling transformation to place each wheel in the correct location in the scene. Instancing introduces extra complexity into the picking operation. If a single name is associated with the wheel geometry, the application cannot determine which of the four instances of the wheel has been picked. OpenGL solves this problem by maintaining a stack of object names. This allows an application, which represents models hierarchically, to associate a name at each stage of its hierarchy. As the car is being drawn, new names are pushed onto the stack as the hierarchy is descended and old names are popped as the hierarchy is ascended. When a hit record is created, it contains all names currently in the name stack. The application determines which instance of an object is selected by looking at the content of the name stack and comparing it to the names stored in the hierarchical representation of the model. Using the car model example, the application associates an object name with the wheel representation and another object name with each of the transformations used to position the wheel in the car model. The application determines that a wheel is selected if the selection buffer contains the object name for the wheel, and it determines which instance of the wheel by examining the object name of the transformation. Fig.9 shows a partial graph of the model hierarchy, with the car frame positioned in the scene and the four




wheel instances positioned relative to the frame. When the OpenGL pipeline is in selection mode, the primitives sent to the pipeline do not generate fragments to the framebuffer. Since only the result of vertex coordinate transformations is of interest, there is no need to send texture coordinates, normals, or vertex colors, or to enable lighting. [4]

5.2 the basic steps

The basic steps of using selection can be summarized as follows [1]:

1. Specify the array to be used for the returned hit records with glSelectBuffer(). Enter selection mode by specifying GL_SELECT with glRenderMode().
2. Initialize the name stack using glInitNames() and glPushName().
3. Define a viewing volume to be used for selection. Usually, this is different from the viewing volume used to draw the scene originally, so the current transformation state should be saved and later restored with glPushMatrix() and glPopMatrix().
4. Alternately issue primitive drawing commands and commands to manipulate the name stack so that each primitive of interest has an appropriate name assigned.
5. Exit selection mode and process the returned selection data (the hit records).

An illustration of the selection mode and name stack, used to detect which objects collide with a viewing volume, is given in Fig. 10. First, four triangles and a rectangular box representing a viewing volume are drawn (drawScene routine). The green and yellow triangles appear to lie within the viewing volume, but the red triangle appears to lie outside it. Then selection mode is entered (selectObjects routine) and drawing to the screen ceases. To see if any collisions occur, the four triangles are drawn again. In this example, the green triangle causes one hit with the name 1, and the yellow triangles cause one hit with the name 3.
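In step 5, the application walks the hit records in the selection buffer. Each record consists of a name count, a minimum and a maximum window-z value, and then the names themselves. The sketch below parses such a buffer in plain C (unsigned ints stand in for GLuint so it runs without a GL context); the helper name first_hit_name and the example buffer contents are invented for illustration.

```c
/* Parses hit records from a selection buffer. Each record is laid out as:
 *   [0]  number of names on the name stack at the time of the hit
 *   [1]  minimum window z over the hit primitives (scaled to an integer)
 *   [2]  maximum window z
 *   [3+] the names, bottom of the name stack first
 * Returns the bottom-of-stack name of the first hit, or 0 if there were
 * no hits. 'unsigned' stands in for GLuint here. */
unsigned first_hit_name(unsigned hits, const unsigned *buffer)
{
    unsigned first = 0;
    const unsigned *p = buffer;
    for (unsigned i = 0; i < hits; i++) {
        unsigned names = *p++;
        p += 2;                      /* skip the min/max depth values */
        for (unsigned n = 0; n < names; n++) {
            if (i == 0 && n == 0)
                first = *p;          /* remember the first hit's bottom name */
            p++;
        }
    }
    return first;
}
```

With a made-up two-hit buffer such as {1, 100, 200, 1, 1, 150, 250, 3} (names 1 and 3, matching the triangle example above), first_hit_name(2, buf) yields the name 1.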

5.3 feedback

Feedback, like selection, is a rendering mode that does not produce output in the form of pixels on the screen. Instead, information is written to a feedback buffer indicating how the scene would have




Figure 10: Selection Example [1]

been rendered. This information includes transformed vertex data in window coordinates, color data resulting from lighting calculations, and texture data, essentially everything needed to rasterize the primitives. One enters feedback mode the same way as selection mode, by calling glRenderMode with a GL_FEEDBACK argument. Afterwards, the rendering mode must be reset to GL_RENDER to fill the feedback buffer and return to normal rendering mode.

5.4 the feedback buffer

The feedback buffer is an array of floating-point values specified with the glFeedbackBuffer function:

void glFeedbackBuffer(GLsizei size, GLenum type, GLfloat *buffer);

This function takes the size of the feedback buffer, the type and amount of drawing information wanted, and a pointer to the buffer itself. Valid values for type appear in Table 5. The type of data specifies how much data is placed in the feedback buffer for each vertex. Color data is represented by a single value in color index mode or by four values in RGBA color mode.

5.5 feedback data

The feedback buffer contains a list of tokens followed by vertex data and possibly color and texture data. One can parse for these tokens (see Table 6) to determine the types of primitives that would have been rendered. One limitation of feedback occurs when using




Table 5: Feedback Buffer Types

Type                   Vertex Coordinates   Color Data   Texture Data   Total Values
GL_2D                  x, y                 N/A          N/A            2
GL_3D                  x, y, z              N/A          N/A            3
GL_3D_COLOR            x, y, z              C            N/A            3 + C
GL_3D_COLOR_TEXTURE    x, y, z              C            4              7 + C
GL_4D_COLOR_TEXTURE    x, y, z, w           C            4              8 + C

Table 6: Feedback Buffer Tokens

Token                    Primitive
GL_POINT_TOKEN           Points
GL_LINE_TOKEN            Line
GL_LINE_RESET_TOKEN      Line segment when line stipple is reset
GL_POLYGON_TOKEN         Polygon
GL_BITMAP_TOKEN          Bitmap
GL_DRAW_PIXEL_TOKEN      Pixel rectangle drawn
GL_COPY_PIXEL_TOKEN      Pixel rectangle copied
GL_PASS_THROUGH_TOKEN    User-defined marker

multiple texture units. In this case, only texture coordinates from the first texture unit are returned. [7] The point, bitmap, and pixel tokens are followed by data for a single vertex and possibly color and texture data, depending on the data type from Table 5 specified in the call to glFeedbackBuffer. The line tokens return two sets of vertex data, and the polygon token is immediately followed by the number of vertices that follow. The user-defined marker (GL_PASS_THROUGH_TOKEN) is followed by a single floating-point value that is user defined.
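A feedback buffer can thus be parsed with a simple loop: read a token, then consume the number of values that Tables 5 and 6 imply. The sketch below is a minimal plain-C parser for a GL_3D buffer (3 floats per vertex) handling only the point, line, polygon, and pass-through tokens; the token values match those in <GL/gl.h>, and the function name count_polygons is invented for this example.

```c
/* Token values as defined in <GL/gl.h>. */
#ifndef GL_PASS_THROUGH_TOKEN
#define GL_PASS_THROUGH_TOKEN 0x0700
#define GL_POINT_TOKEN        0x0701
#define GL_LINE_TOKEN         0x0702
#define GL_POLYGON_TOKEN      0x0703
#define GL_BITMAP_TOKEN       0x0704
#define GL_DRAW_PIXEL_TOKEN   0x0705
#define GL_COPY_PIXEL_TOKEN   0x0706
#define GL_LINE_RESET_TOKEN   0x0707
#endif

/* Walks 'count' floats of a GL_3D feedback buffer (3 values per vertex, no
 * color or texture data) and returns how many polygon tokens it finds. */
int count_polygons(const float *buffer, int count)
{
    int i = 0, polygons = 0;
    while (i < count) {
        float token = buffer[i++];
        if (token == GL_POLYGON_TOKEN) {
            int vertices = (int)buffer[i++]; /* vertex count follows the token */
            i += 3 * vertices;
            polygons++;
        } else if (token == GL_POINT_TOKEN) {
            i += 3;                          /* one vertex */
        } else if (token == GL_LINE_TOKEN || token == GL_LINE_RESET_TOKEN) {
            i += 6;                          /* two vertices */
        } else if (token == GL_PASS_THROUGH_TOKEN) {
            i += 1;                          /* one user-defined float */
        } else {
            break;                           /* bitmap/pixel tokens not handled */
        }
    }
    return polygons;
}
```

A buffer such as {GL_POLYGON_TOKEN, 3, v0, v1, v2, GL_POINT_TOKEN, p} (with each v and p being three floats) would report one polygon.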



6 fragment operations

A number of fragment operations are applied to rasterization fragments before they are allowed to update pixels in the framebuffer. Fragment operations can be separated into two categories, operations that test fragments, and operations that modify them. To maximize efficiency, the fragment operations are ordered so that the fragment tests are applied first. The most interesting tests for advanced rendering are: alpha test, stencil test, and depth buffer test. These tests can either pass, allowing the fragment to continue, or fail, discarding the fragment so it can’t pass on to later fragment operations or update the framebuffer. The stencil test is a special case, since it can produce useful side effects even when fragments fail the comparison. All of the fragment tests use the same set of comparison operators: Never, Always, Less, Less than or Equal, Equal, Greater than or Equal, Greater, and Not Equal. In each test, a fragment value is compared against a reference value saved in the current OpenGL state (including the depth and stencil buffers), and if the comparison succeeds, the test passes. The details of the fragment tests are listed in Table 7. The list of comparison operators is very complete. In fact, it may seem that some of the comparison operations, such as GL_NEVER and GL_ALWAYS are redundant, since their functionality can be duplicated by enabling or disabling a given fragment test. There is a use for them, however. The OpenGL invariance rules require that invariance is maintained if a comparison is changed, but not if a test is enabled or disabled. So if invariance must be maintained (because the test is used in a multipass algorithm, for example), the application should enable and disable tests using the comparison operators, rather than enabling or disabling the tests themselves.[4] 6.1

multisample operations

Multisample operations provide limited ways to affect the fragment coverage and alpha values. In particular, an application can reduce the coverage of a fragment, or convert the fragment’s alpha value to another coverage value that is combined with the fragment’s value to further reduce it. These operations are sometimes useful as an alternative to alpha blending, since they can be more efficient.




Table 7: Fragment Test

Constant       Comparison
GL_ALWAYS      always pass
GL_NEVER       never pass
GL_LESS        pass if incoming < ref
GL_LEQUAL      pass if incoming <= ref
GL_GEQUAL      pass if incoming >= ref
GL_GREATER     pass if incoming > ref
GL_EQUAL       pass if incoming = ref
GL_NOTEQUAL    pass if incoming != ref

6.2 alpha test

The alpha test reads the alpha component value of each fragment's color and compares it against the current alpha test value. The test value is set by the application and can range from zero to one. The comparison operators are the standard set listed in Table 7. The alpha test can be used to remove parts of a primitive on a pixel-by-pixel basis. A common technique is to apply a texture containing alpha values to a polygon. The alpha test is then used to trim a simple polygon to a complex outline stored in the alpha values of the surface texture. [4]
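In OpenGL, a typical cutout setup is glEnable(GL_ALPHA_TEST) together with glAlphaFunc(GL_GREATER, 0.5f). The sketch below models just the comparison itself in plain C so it can run without a GL context; the cmp_t enum and the function name alpha_test are invented stand-ins, not the GL API.

```c
#include <stdbool.h>

/* A pure-C model of the alpha test comparison (not the OpenGL API itself).
 * glAlphaFunc(func, ref) configures exactly this comparison in the pipeline. */
typedef enum { CMP_NEVER, CMP_LESS, CMP_LEQUAL, CMP_EQUAL, CMP_GEQUAL,
               CMP_GREATER, CMP_NOTEQUAL, CMP_ALWAYS } cmp_t;

bool alpha_test(float incoming, cmp_t func, float ref)
{
    switch (func) {
    case CMP_NEVER:    return false;
    case CMP_LESS:     return incoming <  ref;
    case CMP_LEQUAL:   return incoming <= ref;
    case CMP_EQUAL:    return incoming == ref;
    case CMP_GEQUAL:   return incoming >= ref;
    case CMP_GREATER:  return incoming >  ref;
    case CMP_NOTEQUAL: return incoming != ref;
    case CMP_ALWAYS:   return true;
    }
    return false;
}
```

With CMP_GREATER and a reference of 0.5, a fragment with alpha 0.8 passes while fragments with alpha 0.5 or below are discarded, which is how texture-alpha cutouts are trimmed.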

6.3 stencil test

The stencil test performs two tasks. The first task is to conditionally eliminate incoming fragments based on a comparison between a reference value and the stencil value from the stencil buffer at the fragment's destination. The second purpose of the stencil test is to update the stencil values in the framebuffer. How the stencil buffer is modified depends on the outcome of the stencil and depth buffer tests. There are three possible outcomes of the two tests: the stencil test fails, the stencil test passes but the depth test fails, or both tests pass. OpenGL makes it possible to specify how the stencil buffer is updated for each of these possible outcomes. The conditional elimination task is controlled with glStencilFunc, which sets the stencil test comparison operator. The comparison operator can be selected from the list of operators in Table 7. Setting the stencil update requires setting three parameters, each one corresponding to one of the stencil/depth buffer test outcomes. The glStencilOp command takes three operands, one for each of the comparison outcomes.




Table 8: Stencil Update Values

Constant     Description
GL_KEEP      stencil value unchanged
GL_ZERO      stencil value set to zero
GL_REPLACE   stencil value replaced by the stencil reference value
GL_INCR      stencil value incremented
GL_DECR      stencil value decremented
GL_INVERT    stencil value bitwise inverted

Each operand value specifies how the stencil pixel corresponding to the fragment being tested should be modified. Table 8 shows the possible values and how they change the stencil pixels. The stencil buffer is often used to create and use per-pixel masks. The desired stencil mask is created by drawing geometry (often textured with an alpha pattern to produce a complex shape). Before rendering this template geometry, the stencil test is configured to update the stencil buffer as the mask is rendered. Often the pipeline is configured so that the color and depth buffers are not actually updated when this geometry is rendered; this can be done with the glColorMask and glDepthMask commands, or by setting the depth test to always fail. Once the stencil mask is in place, the geometry to be masked is rendered. This time, the stencil test is configured to draw or discard fragments based on the current value of the stencil mask. More elaborate techniques may create the mask using a combination of template geometry and carefully chosen depth and stencil comparisons to create a mask whose shape is influenced by the geometry that was previously rendered. [4]
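The three-outcome update rule above can be sketched in plain C. This is an illustrative model, not the GL API: the stencil_op enum mirrors the Table 8 values, the sfail/dpfail/dppass parameters mirror the three glStencilOp operands, and an 8-bit stencil buffer is assumed.

```c
/* Plain-C model of the glStencilOp update rule (illustrative, not the GL API).
 * Assumes an 8-bit stencil buffer; GL_INCR/GL_DECR clamp, as in OpenGL. */
typedef enum { OP_KEEP, OP_ZERO, OP_REPLACE, OP_INCR, OP_DECR, OP_INVERT } stencil_op;

unsigned char apply_op(stencil_op op, unsigned char stencil, unsigned char ref)
{
    switch (op) {
    case OP_KEEP:    return stencil;
    case OP_ZERO:    return 0;
    case OP_REPLACE: return ref;
    case OP_INCR:    return stencil == 255 ? 255 : stencil + 1; /* clamps */
    case OP_DECR:    return stencil == 0   ? 0   : stencil - 1; /* clamps */
    case OP_INVERT:  return (unsigned char)~stencil;
    }
    return stencil;
}

/* Selects which operand applies, based on the stencil/depth test outcomes:
 * sfail when the stencil test fails, dpfail when it passes but the depth
 * test fails, dppass when both pass. */
unsigned char update_stencil(int stencil_pass, int depth_pass,
                             stencil_op sfail, stencil_op dpfail, stencil_op dppass,
                             unsigned char stencil, unsigned char ref)
{
    if (!stencil_pass) return apply_op(sfail,  stencil, ref);
    if (!depth_pass)   return apply_op(dpfail, stencil, ref);
    return apply_op(dppass, stencil, ref);
}
```

For example, the mask-drawing configuration used later in the dissolve (glStencilOp(GL_REPLACE, GL_KEEP, GL_KEEP) with a test that always fails) corresponds to update_stencil returning the reference value whenever the stencil test fails.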

6.4 dissolve effect with stencil buffer

Stencil buffers can be used to mask selected pixels on the screen. This allows for pixel by pixel compositing of images. One can draw geometry or arrays of stencil values to control, per pixel, what is drawn into the color buffer. One way to use this capability is to composite multiple images. A common film technique is the "dissolve", where one image or animated sequence is replaced with another, in a smooth sequence. The stencil buffer can be used to implement arbitrary dissolve patterns. The alpha planes of the color buffer and the alpha function can also be used to implement this kind of dissolve, but using the stencil buffer frees up the alpha planes for motion blur, transparency, smoothing, and other effects. The basic




Figure 11: Using Stencil to Dissolve Between Images [5]. The figure shows the first scene, the dissolve pattern drawn in the stencil buffer, the second scene, and the resulting image drawn with glStencilFunc(GL_EQUAL, 1, 1).
approach to a stencil buffer dissolve is to render two different images, using the stencil buffer to control where each image can draw to the framebuffer. This can be done very simply by defining a stencil test and associating a different reference value with each image. The stencil buffer is initialized to a value such that the stencil test will pass with one of the images' reference values and fail with the other. An example of a dissolve partway between two images is shown in Figure 11.

At the start of the dissolve (the first frame of the sequence), the stencil buffer is all cleared to one value, allowing only one of the images to be drawn to the framebuffer. Frame by frame, the stencil buffer is progressively changed (in an application-defined pattern) to a different value, one that passes only when compared against the second image's reference value. As a result, more and more of the first image is replaced by the second. Over a series of frames, the first image "dissolves" into the second, under control of the evolving pattern in the stencil buffer. Here is a step-by-step description of a dissolve:

1. Clear the stencil buffer with glClear(GL_STENCIL_BUFFER_BIT).
2. Disable writing to the color buffer, using glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE).
3. If the values in the depth buffer should not change, use glDepthMask(GL_FALSE).

For this example, the stencil test will always fail, and the stencil operation is set to write the reference value to the stencil buffer. The application will also need to turn on stenciling before it can begin drawing the dissolve pattern.

1. Turn on stenciling: glEnable(GL_STENCIL_TEST).
2. Set the stencil function to always fail: glStencilFunc(GL_NEVER, 1, 1).



3. Set the stencil op to write 1 on stencil test failure: glStencilOp(GL_REPLACE, GL_KEEP, GL_KEEP).
4. Write the dissolve pattern to the stencil buffer by drawing geometry or using glDrawPixels.
5. Disable writing to the stencil buffer with glStencilMask(GL_FALSE).
6. Set the stencil function to pass on 0: glStencilFunc(GL_EQUAL, 0, 1).
7. Enable the color buffer for writing with glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE).
8. If depth testing is in use, turn depth buffer writes back on with glDepthMask.
9. Draw the first image. It will only be written where the stencil buffer values are 0.
10. Change the stencil test so only values that equal 1 pass: glStencilFunc(GL_EQUAL, 1, 1).
11. Draw the second image. Only pixels with a stencil value of 1 will change.
12. Repeat the process, updating the stencil buffer so that more and more stencil values are 1, using the dissolve pattern and redrawing images 1 and 2, until the entire stencil buffer contains 1's and only image 2 is visible.

If each new frame's dissolve pattern is a superset of the previous frame's pattern, image 1 does not have to be re-rendered. This is because once a pixel of image 1 is replaced with image 2, image 1 will never be redrawn there. Designing the dissolve pattern with this restriction can improve the performance of this technique. [5]
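The per-pixel decision that glStencilFunc(GL_EQUAL, 1, 1) makes in hardware can be modeled in a few lines of plain C: wherever the stencil value is 1 the second image shows through, elsewhere the first image remains. The function name dissolve_frame and the 8-bit grayscale image format are invented for this sketch.

```c
/* Models one frame of the stencil dissolve in plain C (illustrative only):
 * stencil == 1 selects the second image, otherwise the first image remains,
 * mirroring the per-pixel test glStencilFunc(GL_EQUAL, 1, 1) performs.
 * Images are flat arrays of 8-bit gray values. */
void dissolve_frame(const unsigned char *image1, const unsigned char *image2,
                    const unsigned char *stencil, unsigned char *out, int pixels)
{
    for (int i = 0; i < pixels; i++)
        out[i] = stencil[i] ? image2[i] : image1[i];
}
```

Advancing the dissolve then amounts to setting more and more entries of the stencil array to 1 each frame, exactly as step 12 above grows the pattern in the real stencil buffer.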



7 summary

In this work, an introduction to special features in OpenGL was given. First, two methods for improving code performance, display lists and vertex arrays, were presented. It was shown that applying these techniques enables a higher frame rate to be achieved. Furthermore, it was shown how to use blending and how to take advantage of OpenGL's built-in fog. Next, selection and feedback, two powerful features of OpenGL that facilitate the user's active interaction with a scene, were presented, and the differences and similarities between them were discussed. Thereafter, many of the mechanisms which rasterization and fragment processing operations provide for modifying a fragment value or discarding it outright were briefly analysed. Finally, an algorithmic approach to creating a dissolve effect with the stencil buffer was presented. In conclusion, the knowledge regarding special features in OpenGL presented in this work should be enough to create fairly complex OpenGL applications, including simple games.



BIBLIOGRAPHY

[1] D. Shreiner, M. Woo, J. Neider, and T. Davis. OpenGL Programming Guide, Sixth Edition: The Official Guide to Learning OpenGL, Version 2.1 (The Red Book). Addison-Wesley. (Cited on pages v, 5, 8, 23, and 24.)

[2] D. Astle and K. Hawkins. Beginning OpenGL Game Programming. Course Technology. (Cited on pages v, 8, 9, 10, 11, 15, 16, and 19.)

[3] R. Wright, B. Lipchak, and N. Haemel. OpenGL SuperBible, Fourth Edition (The Blue Book). Addison-Wesley. (Cited on pages 13, 14, 15, 17, and 18.)

[4] T. McReynolds and D. Blythe. Advanced Graphics Programming Using OpenGL. (Cited on pages v, 20, 21, 22, 23, 26, 27, and 28.)

[5] T. McReynolds. Advanced Graphics Programming Techniques Using OpenGL. SIGGRAPH 1998 Course, 1998. (Cited on pages v, 29, and 30.)

[6] D. Shreiner, E. Angel, and V. Shreiner. An Interactive Introduction to OpenGL Programming, Course 54. 2001. (Cited on page 2.)

[7] R. S. Wright and M. Sweet. OpenGL SuperBible: The Complete Guide to OpenGL Programming for Windows NT and Windows 95. (Cited on page 25.)


