


The geometry core is actually a common node core like all the others from the last chapter. However, it is also the most important and most complicated one, and there are some things you should know about it. That is why a whole chapter is dedicated to this special core.


Before we start generating geometry like crazy, we need to know something about the concepts. At first the interface might look a bit unwieldy, but please keep in mind that the geometry class was designed to provide maximum flexibility while still offering high performance.

One big advantage of the OpenSG geometry is its great flexibility. It is possible to store different primitive types (triangles, quads etc.) in one single core; there is no need to create separate cores for each primitive type. Even lines and points can be used in the same core along with triangles and others.


All data describing the geometry is stored in separate arrays. Positions, colors, normals as well as texture coordinates are each stored in their own OSG::MField. OpenGL is capable of processing different formats of data; some perform better under certain circumstances or require less storage. OpenSG features the same data formats by providing a lot of different classes, which are luckily very similar to use and are all derived from OSG::GeoProperty. Prominent examples for geometry properties are OSG::GeoPnt3fProperty or OSG::GeoVec3fProperty. There are a lot of other data types, of course; just have a look at the OSG::GeoProperty description page. All these classes have in common that they basically wrap a single MField that contains the actual data.

An OpenSG Geometry stores the pointers to the properties in a single MField (called MFProperties). So how does OpenSG know which property is the one storing vertex positions and which one has the texture coordinates? This association of a certain meaning to a property is determined by the index the pointer has in the MFProperties field. Geometry has symbolic constants (e.g. Geometry::PositionsIndex, Geometry::NormalsIndex, Geometry::TexCoordsIndex, etc.) that make it easier to assign a property to the correct index in the field.

When using shaders, the predefined OpenGL vertex attributes (Position, Normal, TexCoord0, TexCoord1, etc.) are often used for a purpose different from what their name suggests. Therefore recent versions of OpenGL introduce the concept of general purpose vertex attributes, whose meaning is entirely defined by how the shader makes use of the data passed to it. The only property that retains a special meaning and always has to be present is the position. When using a shader OpenSG passes the property at index i in MFProperties as the i-th general purpose vertex attribute to OpenGL.

The following snippet shows two ways to access the properties, either by the named functions (e.g. set/getPositions()) or using the getProperty() function with the symbolic constant for the kind of property you are interested in. The second way may also be more appropriate if you are using shaders.

GeometryTransitPtr someFunc(GeoPnt3fProperty *pnts, GeoVec4fProperty *norms)
{
    GeometryRecPtr geo = Geometry::create();

    geo->setPositions(pnts);
    geo->setNormals  (norms);

    return GeometryTransitPtr(geo);
}

void someOtherFunc(Geometry *geo)
{
    GeoPnt3fPropertyRecPtr pnts = dynamic_cast<GeoPnt3fProperty *>(
        geo->getProperty(Geometry::PositionsIndex));
    GeoVec4fPropertyRecPtr norms = dynamic_cast<GeoVec4fProperty *>(
        geo->getProperty(Geometry::NormalsIndex  ));

    // do something interesting with the positions/normals
}

Often one vertex is used by more than just one primitive. On a uniform grid, for example, most vertices are used by four quads. OpenSG can take advantage of this by indexing the geometry. Instead of every primitive keeping a separate copy of its vertices, it stores integer indices that point to the vertices. In this way it is possible to reuse data with a minimum of additional effort. It is even possible to use more than one index for different properties. Jump to Indexing for a detailed overview.

Learning by doing

First of all we are going to build a geometry node with the most important features from bottom up. A good example is the simulation of water as this covers many problems you might encounter when creating your own geometry. This water tutorial will be developed throughout the whole chapter. Let us think about what we will actually need and what we are going to do in detail:

We simulate water by using a uniform grid with N * N points, with N some integer constant. As these points are equidistant we only need to store the height value (the value that is going to be changed during simulation) and one global width and height as well as some global origin where the grid is going to be placed.

There are a lot of algorithms which try to simulate the movement of water more or less adequately or quickly, but as we are more concerned with how to do stuff in OpenSG, I propose that we just take a very simple 'formula' to calculate the height values. Of course, if you are interested, you may replace the formula with any other.

Now take our framework again as a starting point, then add some global variables and a new include file.

#include <OpenSG/OSGGeometry.h>

// this will specify the resolution of the mesh
#define N   100

//the two dimensional array that will store all height values
Real32 wMesh[N][N];

//the origin of the water mesh
Pnt3f wOrigin = Pnt3f(0,0,0);

//width and length of the mesh
UInt16 width  = 100;
UInt16 length = 100;

Insert the code right at the beginning of the createScenegraph() function which should still be empty at this point.

Before we start creating the geometry we should first initialize the wMesh array to avoid corrupt data when building the scenegraph. For now, we simply set all height values to zero.

    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            wMesh[i][j] = 0;

Now we can begin to build the geometry step by step. The first thing to do is to define the type of primitives we want to use. Quads would be sufficient for us. However, as mentioned before, it is possible to use more than one primitive. That will be discussed here: Primitive Types.

    // GeoPTypes will define the types of primitives to be used
    GeoUInt8PropertyRecPtr types = GeoUInt8Property::create();
    // we want to use quads ONLY
    types->addValue(GL_QUADS);

We just told OpenSG that this geometry core we are about to create will consist of only one single type of object: a quad. But of course this is not restricted to a single quad. Just watch the next step.

Now we have to tell OpenSG how long (i.e. how many vertices) the primitives are going to be. The length of a single quad is naturally four, but we want more than one quad, of course, so we multiply four by the number of quads. With N*N vertices we have (N-1)*(N-1) quads.

    // GeoPLength will define the number of vertices of
    // the used primitives
    GeoUInt32PropertyRecPtr lengths = GeoUInt32Property::create();
    // the length of a single quad is four ;-)
    lengths->addValue(4 * (N-1) * (N-1));

We have to provide as many length values as we have provided types in the previous step. As we only added one quad type we need to specify one single length. With N=100 the length will be 39204! Well, of course this does not mean we are creating a quad with that many vertices! OpenSG is smart enough to know that a quad needs four vertices, so it was effectively told to store 39204/4 = 9801 quads: it finishes creating one quad after four vertices have been passed and begins with the next one.
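If you want to double-check this bookkeeping, the arithmetic can be verified with a few lines of plain C++ (independent of OpenSG; the helper names are made up for this sketch, and N is the same constant as in the tutorial):

```cpp
#include <cassert>  // for the assertions below

// For an N x N grid of vertices there are (N-1)*(N-1) quads ...
int quadCount(int n)
{
    return (n - 1) * (n - 1);
}

// ... and each quad contributes four vertices to the single
// length value passed to OpenSG.
int quadVertexCount(int n)
{
    return 4 * quadCount(n);
}
```

With n = 100 this yields the 9801 quads and 39204 vertices mentioned above.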

Now we will provide the positions of our vertices by using the data of the 'wMesh' array we initialized previously.

    // GeoPnt3fProperty stores the positions of all vertices used in
    // this specific geometry core
    GeoPnt3fPropertyRecPtr pos = GeoPnt3fProperty::create();
    // here they all come
    for (int x = 0; x < N; x++)
        for (int z = 0; z < N; z++)
            pos->addValue(Pnt3f(x, wMesh[x][z], z));

You might ask yourself whether what we are doing here is actually useful. It looks like the width and length of the mesh we create correspond to the resolution we chose, i.e. the higher the resolution, the bigger the mesh. Well, that is correct. After creating the complete geometry core we are going to scale the whole thing to the correct size given by the global variables [The Author: I actually haven't found the time to do that - so this may follow in the near future]. Of course it would also be reasonable to store whole points, like a two-dimensional array of OSG::Pnt3f, instead of just height values. But storing whole points consumes more memory than one Real32 value per vertex. Anyway, it is up to you, depending on whether memory is a concern or not. As we want to play around a bit with scenegraph manipulation I have chosen the first variant.

Now we assign colors to the geometry, actually just one color this time, to be specific. However, every vertex needs a color, so the same color value is added as often as we have vertices stored. This is not very efficient in this special case, however it is easy to implement. Multi indexing will be an alternative I present to you later on.

    // GeoColor3fProperty stores all color values that will be used
    GeoColor3fPropertyRecPtr colors = GeoColor3fProperty::create();
    for (int x = 0; x < N; x++)
        for (int z = 0; z < N; z++)
            colors->addValue(Color3f(0, 0, 1));

Normals are still missing. We add them in a similar way as the colors.

    GeoVec3fPropertyRecPtr norms = GeoVec3fProperty::create();
    for (int x = 0; x < N; x++)
        for (int z = 0; z < N; z++)
            // as initially all heights are set to zero, yielding a plane,
            // we set all normals to (0,1,0), parallel to the y-axis
            norms->addValue(Vec3f(0, 1, 0));

And some material...

    SimpleMaterialRecPtr mat = SimpleMaterial::create();

Well, this material is not doing anything interesting except for its existence. But if no material is assigned the renderer stops doing its job, leaving you with a black screen. So we assign an "empty" material.

Something still missing? Yes, of course! If you think about what we have done so far you might notice that something is not quite correct. We have not yet considered that a quad uses four vertices and thus most quads, except for those at the borders, use vertices that are also used by neighboring quads. However, we provided every vertex just a single time.

Of course we did, because everything else would be a waste of memory. That is what indices are used for. The next block of code tells OpenSG which vertices are used by which quad. This way the vertices are only referenced, not copied.

Vertices are used by multiple quads

Quad A uses vertex 1,2,3,4 whereas vertex 4 is used by quads A,B,C and D. The index which defines quad A would point to the vertices 1,2,3 and 4. Quad B would reuse the vertices 2 and 4 as well as two others not considered here.

    // indices point to all relevant data used by the
    // provided primitives
    GeoUInt32PropertyRecPtr indices = GeoUInt32Property::create();
    for (int x = 0; x < N-1; x++)
        for (int z = 0; z < N-1; z++)
        {
            // points to the four vertices that will
            // define a single quad
            indices->addValue(N *  x    + z    );
            indices->addValue(N *  x    + z + 1);
            indices->addValue(N * (x+1) + z + 1);
            indices->addValue(N * (x+1) + z    );
        }

There are different possibilities on how to index the data. That will be discussed in this section: Indexing.
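As a little sanity check, this grid indexing scheme can be reproduced without OpenSG. The following sketch (plain C++; function names are made up for illustration) builds the index list for an n x n grid, with vertex (x, z) at flat index n*x + z as in our water mesh, and counts how often each vertex is referenced; interior vertices are indeed shared by four quads:

```cpp
#include <cassert>  // for the assertions below
#include <vector>

// Build the quad index list for an n x n grid laid out like the
// water mesh: vertex (x, z) lives at flat index n*x + z.
std::vector<int> buildQuadIndices(int n)
{
    std::vector<int> indices;
    for (int x = 0; x < n - 1; x++)
        for (int z = 0; z < n - 1; z++)
        {
            indices.push_back(n *  x      + z    );
            indices.push_back(n *  x      + z + 1);
            indices.push_back(n * (x + 1) + z + 1);
            indices.push_back(n * (x + 1) + z    );
        }
    return indices;
}

// count how often each vertex is referenced by the index list
std::vector<int> referenceCounts(int n)
{
    std::vector<int> refs(n * n, 0);
    for (int i : buildQuadIndices(n))
        refs[i]++;
    return refs;
}
```

For a small 4 x 4 grid a corner vertex is used by exactly one quad, while an interior vertex such as (1, 1) is shared by four.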

Now that we have created all data we need, we can create the geometry object that will hold all the pieces together.

    GeometryRecPtr geo = Geometry::create();
    geo->setTypes    (types  );
    geo->setLengths  (lengths);
    geo->setIndices  (indices);
    geo->setPositions(pos    );
    geo->setNormals  (norms  );
    geo->setColors   (colors );
    geo->setMaterial (mat    );

Finally we put the newly created core into a node and return it.

    NodeRecPtr root = Node::create();
    root->setCore(geo);

    return NodeTransitPtr(root);

Your first version of the water simulation is done. Compile and execute and watch the beautiful result! Please notice that you need to rotate the view in order to see anything. This is because the camera is initially located at y=0, just like the plane, so you see the plane only as a line. We can fix this by adding some value to the camera position during setup. You can add the code directly before glutMainLoop is called in the main function:

    Navigator *nav = mgr->getNavigator();
    nav->setFrom(nav->getFrom() + Vec3f(0, 50, 0));

This will get the navigator helper object from the simple scene manager. The setFrom() method allows you to specify a point (OSG::Pnt3f) where the camera shall be located. In this case we get the current position via getFrom() and add 50 units along the y-axis. This ensures that the camera is above our mesh and not at the same height.

The code so far can be found in file Examples/Tutorial/09geometry_water.cpp.

What? A plane? That whole effort for a simple plane?

Of course the result is a plane, as we set all height values to zero previously. We need to modify the values during the display function. But first let us have a deeper look at what we have done so far!

Primitive Types

If you remember what we did at the beginning when we started to create the water mesh geometry, you know that we told OpenSG to use just one single primitive type, a quad, with a length of 39204 vertices. Now here are some words about the geometry's flexibility: if you want to use triangles, quads and some polygons you do not need to create separate geometry cores; you can use them all in one single core, even mixed with lines and points.

This is done by first telling OpenSG what primitives you are going to use. Let us imagine this little example: we want to use 8 quads, 16 triangles, two lines and another 8 quads. Sure, you could (and should) put the quads together to 16 quads, but we leave it that way for now. Data from modeling packages is often not well structured, so this example is maybe not as strange as it may initially appear.

Now, we simply tell OpenSG what is going to come:

// do not add this code to the tutorial source.
// It is just an example

    GeoUInt8PropertyRecPtr type = GeoUInt8Property::create();
    type->addValue(GL_QUADS);      // 8 quads
    type->addValue(GL_TRIANGLES);  // 16 triangles
    type->addValue(GL_LINES);      // 2 lines
    type->addValue(GL_QUADS);      // another 8 quads

Alright, but OpenSG also needs to know how many of each type will come. The length we provided previously in our example specifies the number of vertices, not the number of quads, triangles or whatever. So with some math we find out that we need 32 vertices for 8 quads (8 quads * 4 vertices per quad = 32), 48 for the 16 triangles, and so on.

// do not add this code to the tutorial source.
// It is just an example
    GeoUInt32PropertyRecPtr length = GeoUInt32Property::create();
    length->addValue(32);   // 8 quads
    length->addValue(48);   // 16 triangles
    length->addValue(4);    // 2 lines
    length->addValue(32);   // 8 quads

Here is a list of all supported primitive types:

Type                 Number of vertices per primitive
GL_POINTS            any
GL_LINES             any, multiple of 2
GL_LINE_STRIP        any, at least 2
GL_LINE_LOOP         any, at least 3
GL_TRIANGLES         any, multiple of 3
GL_TRIANGLE_STRIP    any, at least 3
GL_TRIANGLE_FAN      any, at least 3
GL_QUADS             any, multiple of 4
GL_QUAD_STRIP        any, even, at least 4
GL_POLYGON           any, at least 4

If you are striping geometry, please make sure to provide a correct number of vertices!

Please notice that concave polygons are supported neither by OpenGL nor by OpenSG! So make sure your polygons with more than three vertices are convex.
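If you are unsure whether your polygons are convex, a simple test is to check that all consecutive edge pairs turn in the same direction. This is just an illustrative sketch for planar polygons in 2D, not an OpenSG function:

```cpp
#include <cassert>  // for the assertions below
#include <vector>

struct P2 { float x, y; };

// A planar polygon (vertices given in order) is convex if all
// consecutive edge pairs turn in the same direction, i.e. the
// 2D cross products of adjacent edges all share one sign.
bool isConvex(const std::vector<P2> &poly)
{
    int pos = 0, neg = 0;
    const int n = static_cast<int>(poly.size());
    for (int i = 0; i < n; i++)
    {
        P2 a = poly[i];
        P2 b = poly[(i + 1) % n];
        P2 c = poly[(i + 2) % n];
        float cross = (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x);
        if (cross > 0) pos++;
        if (cross < 0) neg++;
    }
    return pos == 0 || neg == 0;  // all turns in the same direction
}
```

A unit square passes this test; an arrow-shaped quad with one vertex pushed inwards does not.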

The following image shows an example of primitive types and corresponding lengths.

Primitives and corresponding lengths


It is easy to mix different primitive types in one core, assign some properties like normals or texture coordinates to them, and you can even reuse data with indexing (see next section Indexing). So far everything seems to be fine, but from another point of view things might become difficult. If you want to walk over all triangles, for example, you can easily run into problems, as the data might be highly mixed up with different primitive types. So you would have to take care of a lot of special cases, usually solved by some kind of big ugly switch block.

This is where geometry iterators may help you out. These will iterate primitive by primitive, face by face (which is either a quad or a triangle) or triangle by triangle.

For example, if you are using the built-in ray intersection functionality you might have encountered the problem of finding the triangle you actually hit. You can easily get the hit point, but the promising method getHitTriangle() returns an Int32... so what to do? This integer defines the position in the index data array of the geometry. We will have a closer look at the ray intersection functions later in section Intersect Action?, but for now I only want to show a little code fragment of how to use a geometry iterator. Let's imagine we have sent some ray into the scene, hit a triangle and now have that integer returned to us. We now try to compute the coordinates of the three vertices.

    // the object 'ia' is of type OSG::IntersectAction and
    // stores the result of the intersection traversal

    // retrieve the hit triangle as well as the node
    Int32      pIndex = ia->getHitTriangle();
    NodeRecPtr n      = ia->getHitObject  ();

    // we make sure that the core of this node is
    // actually a geometry core, just for safety
    std::string coretype = n->getCore()->getTypeName();
    if (coretype != "Geometry")
    {
        std::cerr << "No geometry core! Nothing to do!" << std::endl;
        return;
    }

    // get the geometry
    GeometryRecPtr geo = dynamic_cast<Geometry *>(n->getCore());

    // create the iterator object
    TriangleIterator ti(geo);
    // jump to the index we got from the
    // IntersectAction class
    ti.seek(pIndex);
    // and now retrieve the coordinates
    Pnt3f p1 = ti.getPosition(0);
    Pnt3f p2 = ti.getPosition(1);
    Pnt3f p3 = ti.getPosition(2);

The iterators are very helpful for iterating over the primitives/face/triangles of a geometry, but they are not intended for highly complex mesh manipulation. For these types of operations usually data structures quite different from those used by OpenSG are required.


Indexing

Indexing is a very important topic if you want to use geometry efficiently. In the example above we added each vertex only a single time and each vertex was reused by all adjacent primitives. On the one hand this is smarter than providing such a vertex four times; on the other hand we added the same color object N*N times, although adding it once would have been sufficient. All these problems can be addressed by choosing the right kind of indexing.

No Indexing

First of all there is the possibility to not use indexing at all. The following figure shows how the data would be organized in memory:

Geometry data which is not indexed

This figure may need some explanation: At the top you have three colored circles each representing a vertex. The yellow vertex, for example, is used by both quads and the triangle, whereas the blue vertex is used by the right quad and the triangle. Below that is a representation of OpenSG's data structures that represent this geometry (of course this is not the only way these might look). The first row represents the primitive types (the contents of the Types property), in this case we have two quads, a triangle, a polygon, another quad followed by two triangles and finally another polygon. In the second row the lengths of the primitives are shown (these are the contents of the Lengths property); these numbers indicate how many positions, normals, colors, etc. are used for each primitive. In this case we have four for each quad (just enough to define one quad), three for the triangles and so forth. The last three rows are the properties that define the geometry; here we have positions, normals and colors.

You can now see that the yellow vertex appears three times in our data. With no indexing the vertex data is copied every time it is used! Of course this is not very efficient. You will learn about more efficient methods next.
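To get a feeling for the difference, here is a back-of-the-envelope comparison for our water grid, assuming 12 bytes per position (3 floats) and 4 bytes per index (plain C++, no OpenSG involved; the function names are made up for this sketch):

```cpp
#include <cassert>  // for the assertions below

// non-indexed: every quad stores four full copies of its corner positions
long nonIndexedBytes(long n)
{
    long quads = (n - 1) * (n - 1);
    return 4 * quads * 12;
}

// single-indexed: each position is stored exactly once,
// plus four 4-byte indices per quad
long singleIndexedBytes(long n)
{
    long quads = (n - 1) * (n - 1);
    return n * n * 12 + 4 * quads * 4;
}
```

For n = 100 the non-indexed variant needs 470448 bytes for positions alone, while the indexed variant needs 276816 bytes including the indices; the gap widens further once normals and colors are shared too.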

Single Indexing

This is the most often used type of geometry and it is also supported by OpenGL, so it is the preferred way to use indices. However, single indexing cannot handle all cases. This is the kind of geometry storage we used in the water mesh example above. The following figure shows how single indexing works.

Indexed geometry data

As you can see, every vertex is stored exactly one time. The data of the yellow vertex is referenced three times.

The pointers to the indices for each property are stored in the multi field MFPropIndices, and just as the index determines the meaning of a property, the index in MFPropIndices defines which property the index belongs to, so MFProperties[i] is indexed by MFPropIndices[i]. Of course you can have multiple properties indexed by the same index, just by storing the same pointer in more than one place in MFPropIndices. In fact, the only thing that distinguishes single from multi indexed geometry is that for single indexed geometry all non-NULL entries in MFPropIndices point to the same index. It is legal to have an index set although the corresponding property is NULL; in fact that is what happens when using the convenience method setIndex(): it sets all entries of MFPropIndices to the same value, regardless of whether there is an entry in MFProperties or not.

Indexed geometry in general is a lot better than non-indexed geometry, but still we have some issues that are not solved optimally. In our water mesh example every vertex has exactly the same color. With indexed geometry we need as many entries in the positions array as in the normals and colors arrays - so we still store the same color far more often than necessary. This issue is addressed by multi indexed geometry.

Another variant of this issue is that for some vertices most properties have the same values, but one differs. For example, think of a textured cube where at each corner you have three different normals, but only one position. To handle such cases the vertex data needs to be replicated, or alternatively you can use multi indexed geometry.
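The cube example can be put into numbers. The counts below are purely illustrative, assuming a flat-shaded cube with four vertices per face:

```cpp
#include <cassert>  // for the assertions below

// With a single index, positions and normals must have equally many
// entries, so shared corners are replicated once per adjacent face.
int singleIndexedEntriesPerProperty() { return 6 * 4; }  // 24 replicated vertices

// With multiple indices each property keeps its natural size.
int multiIndexedPositionEntries()     { return 8; }      // one per corner
int multiIndexedNormalEntries()       { return 6; }      // one per face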

Multi Indexed Geometry

While using a single index can already significantly reduce the amount of memory required to store geometry, there can still be situations that require duplication of data. For such cases you can use multi indexed geometry, i.e. set some entries of MFPropIndices to point to one index and some others to point to a different index:

Multi indexed geometry

When using multi-indexed Geometry the size of the different properties can vary widely, for example the colors property could have just a single entry that is referenced by all vertices to get an object that has the same color everywhere.

So now that you have non-indexed, single- and multi-indexed Geometry at hand, which should you use? In general single-indexed Geometry is the most efficient way for rendering. It can make sense to use non-indexed geometry, if there are no shared vertices. In this case indices only increase the memory footprint without improving performance. Multi-indexed data can be more compact in terms of memory (if the data is bigger than the additional index), but OpenGL doesn't natively support it. Therefore it has to be split up to be used with OpenGL, which can have a big impact on performance. Only use it if memory is really critical and you need it.

Conclusion: use single-indexed geometry, if you can.

Efficient Manipulation

Often geometry itself stays untouched during a simulation except for rigid transformations applied to the whole geometry. However, if it is necessary to modify the geometry during a simulation (like in our water example) it is usually important to do it fast. In this section we want to enable animation of the water mesh and by doing so, I will demonstrate some tricks on how to speed up this important task.

Before we start, we quickly implement a function which will simulate the behavior of water with respect to the time passed. As said earlier I will only use a simple function but feel free to replace this with a more complex one.

Add this block of code somewhere at the beginning of the file (at least before the display function).

void updateMesh(Real32 time)
{
    for (int x = 0; x < N; x++)
        for (int z = 0; z < N; z++)
            wMesh[x][z] = 10 * cos(time/1000.f + (x+z)/10.f);
}

Please notice: it is important to divide by 1000.f. If you divide by 1000, an integer division will be calculated, yielding discrete values, which in most cases is not what you want.
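Here is the pitfall in isolation. glutGet(GLUT_ELAPSED_TIME) returns an int, which is where the integer division can sneak in; the function names below are made up for this sketch:

```cpp
#include <cassert>  // for the assertions below

// dividing the int by the integer literal 1000 truncates
// before any float is involved
float wrongSeconds(int ms) { return static_cast<float>(ms / 1000); }

// dividing by the float literal 1000.f promotes to float first
float rightSeconds(int ms) { return ms / 1000.f; }
```

For 1234 milliseconds the first variant yields exactly 1.0, the second the intended 1.234.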

And replace the display function with this code:

void display(void)
{
    Real32 time = glutGet(GLUT_ELAPSED_TIME);
    updateMesh(time);

    mgr->redraw();
}

Well, of course we won't see anything different on screen yet, because we have updated our data structure, but not the scene graph. So now comes the interesting part: we are going to modify the data stored in the graph. Of course we could generate a new geometry node and replace the old one with the new one. This is obviously not very efficient due to the large amount of memory deallocation and allocation involved. What we are actually going to do is the following:

First of all we need a pointer to the appropriate geometry node we want to modify. Luckily this is no big deal this time, as we know that the root node itself contains the geometry core. Add this block of code in the display() function right before mgr->redraw() is called:

    // we extract the core out of the root node
    // as we know this is a geometry node
    GeometryRecPtr geo = dynamic_cast<Geometry *>(scene->getCore());
    // now modify its content
    // first we need a pointer to the position data
    GeoPnt3fPropertyRecPtr pos = dynamic_cast<GeoPnt3fProperty *>(geo->getPositions());

    // this loop is similar to the one we used to
    // generate the data during createScenegraph()
    for (int x = 0; x < N; x++)
        for (int z = 0; z < N; z++)
            pos->setValue(Pnt3f(x, wMesh[x][z], z), N*x + z);

Previously we used addValue() to add OSG::Pnt3f objects to the OSG::GeoPnt3fProperty array. Now we use setValue() to overwrite existing values. If you have a look at the code where we first added the points to the array, you can see that these were added column major, i.e. the inner loop added all points along the z-axis where x was zero, then all points with x=1 and so on. setValue() gets a point as first parameter and an integer as second, which defines the index of the data that will be overwritten. With N*x+z we overwrite the values in the same order we first generated them: column major.
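The following self-contained sketch (no OpenSG required; the function name is made up) confirms that the addValue() order used in createScenegraph() and the N*x+z formula address the same vertices:

```cpp
#include <cassert>  // for the assertions below
#include <vector>

// Reproduce the addValue() order from createScenegraph(): x in the
// outer loop, z in the inner loop. The position at which vertex (x, z)
// lands in the field is exactly n*x + z, the index used with setValue().
std::vector<int> flatIndexInAddOrder(int n)
{
    std::vector<int> order;
    for (int x = 0; x < n; x++)
        for (int z = 0; z < n; z++)
            order.push_back(n * x + z);
    return order;
}
```

The i-th value added is the one at flat index i, so setValue(..., N*x+z) overwrites exactly the vertex that addValue() created at that position.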

Now you can again look forward to compilation and execution. The file 09geometry_water2.cpp contains the code so far. You will be rewarded with an animation of something that doesn't look like water at all, but is somewhat nice anyway. The problem here is that the water is uniformly shaded and the "waves" can only be spotted at the borders.

Animated water without proper lighting

The next chapter will be about lighting; this is where we will improve the appearance of the water.

Another issue is the performance. With a resolution of 100*100 vertices (= 19602 polygons) the animation is no longer smooth when moving the camera with the mouse on my AMD 1400 MHz machine with an ATI Radeon 9700! So we are definitely in need of some optimizations.

Direct Manipulation of the Multi Field

The properties for use with the Geometry core actually form a small class hierarchy that basically offers two ways of accessing the data, depending on whether you know the exact data type stored in the property or not. At the base of that hierarchy is GeoProperty, which only has some functions to query information about the property, but offers no interface to modify or read the data. Derived from it are GeoIntegralProperty and GeoVectorProperty, where the former is used for things like types, lengths and indices and the latter for example for positions, normals etc. This level offers a generic interface that allows access to the data without requiring knowledge of the exact type, but comes at the price of introducing type conversions and virtual function call overhead. The lowest level of the hierarchy has a large number of classes, all of them instantiated from two class templates, TypedGeoIntegralProperty and TypedGeoVectorProperty; examples of these types are e.g. GeoUInt32Property or GeoVec3fProperty. For best performance it is recommended to use these classes (an example follows below) and the interfaces they provide, because this avoids the overhead of the more generic interface of the base classes.

We will now modify the part of the code that updates the position property of the water geometry to not use the generic interface, but instead make use of the fact that we know it is of type GeoPnt3fProperty and can therefore gain direct access to the MField that stores the positions:

    //remove the following code
    //this loop is similar to when we generated the data during createScenegraph()
    // here they all come
    for (int x = 0; x < N; x++)
        for (int z = 0; z < N; z++)
            pos->setValue(Pnt3f(x, wMesh[x][z], z), N*x+z);

and replace with this

    // get the data field the property uses to store the positions
    GeoPnt3fProperty::StoredFieldType *posField = pos->editFieldPtr();
    // get an iterator
    GeoPnt3fProperty::StoredFieldType::iterator it;

    // set the iterator to the first data element
    it = posField->begin();
    // now simply run over all entries in the array
    for (int x = 0; x < N; x++)
        for (int z = 0; z < N; z++)
        {
            (*it) = Pnt3f(x, wMesh[x][z], z);
            ++it;
        }

The result will be the same, of course, but working directly on the field with an iterator is somewhat faster and pays off especially when making large changes.

Turn Off Display List Generation

As you might know, OpenGL is capable of using "display lists". Such lists are usually defined by a programmer and OpenGL compiles this list and thus can render the content of such a list faster. However there is an overhead in compiling display lists, which makes them useless for objects which change often - like our water mesh. In OpenSG display lists will be generated by default for every geometry, and they will be generated again if the geometry data changes. You can turn off this feature by telling your geometry core:

    // geo is of type OSG::Geometry
    geo->setDlistCache(false);

This may increase rendering performance a lot if used wisely. You should not turn this feature off for static geometry, as that will slow down rendering. Only disable display lists on geometry which is modified often or even every frame.

Please notice: transformations do not affect the geometry in that way! Only direct manipulation of the geometry data is a performance problem with display lists.

General Optimizations

All hints and tricks that can be used with OpenGL can be used with OpenSG in one way or another, too. For example, it is not a good idea to allocate new memory during rendering. If you want to tweak your application to the maximum it might be useful to read a book on this specific topic.

Results of Optimization

I ran a little self-made benchmark on my machine to show you the results of the optimizations I suggested above. Please keep in mind that this is only one example and does not claim to be an objective benchmark! I simply let OpenSG render 500 frames and measured how long it took.

Display Lists on Dynamic objects

As you can easily see, using multi field manipulation instead of the interface methods is not such a big win, but turning the display lists off is rewarded with a significant increase in performance.

Notice: You might now think that display lists are stupid and should be turned off to increase performance - of course that is not the case, as a display list's only purpose is to increase performance! They only slow rendering down if the lists themselves are constantly recreated, as is the case with non-rigid transformations. With static geometry they perform very well. I ran some small tests on my machine with the Beethoven model Examples/Tutorial/data/beethoven.wrl, which has 60k polygons. For this benchmark I let OpenSG render 5000 frames and took the time. The figure below shows the results.

Display Lists on Static Objects

Geometry Utility Functions

OpenSG comes with some useful utility functions that can help you with some basic but important tasks. I remember when I first needed face normals and the model only had vertex normals. I spent one or two long nights with the geometry in general and the geometry iterators until I had succeeded in developing an algorithm that worked the way I wanted it to. A few days later I noticed a function called "calcFaceNormals". Well, my variant of face normal calculation was about as fast as the built-in function, but the annoying thing was that I did with a few dozen lines of code what one single line would have done. Here is how it works.

Note: You need to include the following include file for the utility functions:

        #include <OpenSG/OSGGeoFunctions.h>

If you have some geometry core, for which you want to calculate face normals you simply need to type

    // geo again is of type OSG::Geometry
    calcFaceNormals(geo);

Of course, with vertex normals it is just the same:

    calcVertexNormals(geo);

You probably already know that face normals are unique for every triangle or quad. Objects rendered with face normals will look faceted, which might not be what you want. For smooth rendering, normals per vertex are required. Please keep in mind that vertex normals can only be computed correctly if the geometry data is correct: the resulting vertex normal is the average of the normals of all neighbouring faces, and if some vertices are stored multiple times, the result will be incorrect.

Anyway, identical vertices that are defined multiple times can be unified automatically (i.e. they are "merged" into one vertex) by calling

    createSharedIndex(geo);

on the geometry beforehand.

Different normals used for rendering

The left image was rendered using face normals, resulting in a faceted look, as promised. The middle image shows what happens if you calculate vertex normals with multiple vertex definitions, while the right image shows correct vertex normal rendering with createSharedIndex() applied before the vertex normals were calculated.

The faceted effect on a sphere is often not what you want, and calculating vertex normals does a fine job here. However, some other objects might not fare so well. The problem is that all normals at a vertex are averaged, so edges you may want to keep will be averaged out, too.

Box with bad vertex normals

See how bad the cube looks now. If you increase the mesh resolution the effect shrinks, but that is not a good solution anyway.

There is another variant of calcVertexNormals that can be given an angle. All edges between faces that meet at an angle larger than the one specified will be preserved.

Replacing the old function call with

    calcVertexNormals(geo, osgDegree2Rad(30));

would help us out with the cube.

osgDegree2Rad is a useful function that converts degree values to radians. As you might guess, there is also an osgRad2Degree function.

Calculating vertex normals is more complex than it sounds and requires a considerable amount of time to compute. Using one of these methods on a per-frame basis is not recommended!

Since OpenGL supports a lot of different geometry data formats, new problems arise: not all of these variants are equally efficient. Luckily, OpenSG also offers some methods to improve the data automatically. I mentioned createSharedIndex() before, which looks for identical elements and removes the copies, changing only the indices.

This step is necessary for OSG::createOptimizedPrimitives to work, as this method needs to know which primitives use the same vertex and are therefore neighbours. It tries to reduce the number of vertices needed to a minimum by combining primitives into strips or fans. No property values are changed, only the indices are modified. The algorithm used here is very fast, but will not necessarily find an optimal solution. Due to its pseudo-random nature you can run it several times and take the best result. If performance is critical you can of course run it only once, which yields a non-optimal but definitely better solution than before.

OSG::createSingleIndex will reduce a multi-indexed geometry to a single-indexed one. Multi-indexing is very efficient in terms of storage, but when it comes to rendering performance, single indexing is better. The reason is that OpenGL does not support multi-indexing directly, so OpenGL's more efficient specifiers like vertex arrays cannot be used with OpenSG's multi-indexed geometry. In the end you have to decide for yourself what suits you better, but it is good to know that you can convert from multi- to single indexing.

Last but not least, it is possible to let OpenSG render the normals for you. This can be useful for debugging purposes, so you can make sure the normals are actually pointing in the desired direction. There are two methods, one for vertex and one for face normals. You should make sure the corresponding normals exist before calling either of them.

Rendered normals of a sphere

This nice picture shows the rendered normals of a sphere. However, it would be difficult to figure out if one is facing the wrong direction anyway ;-)

This code fragment shows how to do it:

    NodeRecPtr root = calcVertexNormalsGeo(some_geometry_core, 1.0);
    SimpleMaterialRecPtr mat = SimpleMaterial::create();
    GeometryRecPtr geo = dynamic_cast<Geometry *>(root->getCore());
    geo->setMaterial(mat);

Note that you have to add a material, even if it is "empty" like this one, else you won't see anything but error messages.

OSG::calcVertexNormalsGeo needs a geometry core and a float value that defines the length of the rendered normals. Of course this does not change the real normals in any way.

<< Previous Chapter: Node cores Tutorial Overview Next Chapter: Light >>
Last modified on 01/17/10 01:11:44
