Developer Guide

Base Types

As one goal of OpenSG is the ability to run programs on a lot of different platforms, especially Unix and Windows, we have our own types which are guaranteed to have the same size on all platforms.

We have our own signed and unsigned integers in all useful sizes: Int8, UInt8, Int16, UInt16, Int32, UInt32, Int64, UInt64 as well as the two usual float sizes Real32 and Real64. Some useful constants are available: Eps, Pi, Inf and NegInf. A useful construct for template programming is the TypeTraits<type> structure, which defines some standard functions/values for the given type, see OSGBaseTypes.h for details.

Base Math

Of course every scenegraph needs the basic math objects like Vectors, Points, Matrices, Quaternions etc., and OpenSG is no exception.


OpenSG matrices are similar to OpenGL matrices in their storage structure and conventions, i.e. a matrix is by default a 4x4 Real32 matrix, and the multiplication convention is just like OpenGL's: v'=M*v. The matrix is stored column-major, and the access methods respect the storage format, i.e. matrix[0] yields the first column. This is also true for the vector-based constructor. However, the constructor taking 16 single elements expects its parameters row-major, like the matrix is written on paper. The positive side effect of this setup is the ability to access the base vectors of the matrix's coordinate space directly, i.e. matrix[3] is the translation to the origin of the local coordinate space. This is useful if you want to create your matrices from vectors; if you don't want to do that, don't worry about it.

Setting the contents of a matrix is done via the setValues() methods. The values can be accessed via operator[] for single columns, or via TransformationMatrix::getValues, which returns a pointer to the first element. In general, most classes in OpenSG that keep an array of elements allow access to them via getValues. If you need to create a matrix for a specific transformation, use the setTransform() methods, which create a matrix that executes the given transformation.

Matrices also supply the standard set of matrix operations like TransformationMatrix::det, TransformationMatrix::det3, TransformationMatrix::invert, TransformationMatrix::transpose, TransformationMatrix::mult and TransformationMatrix::multLeft. There are some variants that change the matrix in place, return their results in a different matrix or get their source data from a different matrix; see the class docs for details. The default vector/point multiplication methods TransformationMatrix::multMatrixVec and TransformationMatrix::multMatrixPnt assume that the matrix only uses the standard 3x4 elements. To use the full 4x4 matrix, use TransformationMatrix::multFullMatrixPnt. As vectors have a w coordinate of 0, compared to points which have w = 1, they don't need a full transform.

Note: As a 3-vector expands to a 4-vector with the 4th coordinate set to zero, using matrix[3] = Vec3f(1,2,3) will set matrix[3][3] to 0, even if it was 1 before.


OpenSG is different from most other systems in differentiating between vectors, points and colors.

Vectors are the most common class, and they should behave like every other vector library on the planet. They are templated to simplify having variants, and the standard ones that are available are Vec4ub, Vec2us, Vec2s, Vec2f, Vec3s, Vec3f and Vec4f. They have operators for the scalar operations, and methods for everything else, see the doxygen docs for Vector for details. Conceptually, the 3 element vector has a w coordinate of 0, thus there is no full matrix multiplication for vectors.

Points represent positions in space, and as such they are more restricted than vectors. The available variants are Pnt2f, Pnt3f and Pnt4f. Some vector operations (Vector::dot, Vector::cross, etc.) don't make sense for points. Points can be subtracted (creating a vector), scaled, and a vector can be added to or subtracted from them. If you want to represent a position, use a point. It helps keeping the concepts in order instead of mixing everything up just because it has the same data. When multiplied with a matrix, the w coordinate is set to 1 for 3-element points. If you really need to get from a point to a vector or vice versa, a cast can be used to convert one into the other.

Colors are RGB vectors, which also have access functions to the named components. They also allow access via the HSV color model and scalar multiplication, but no other operations.


Quaternions are the standard way to represent rotations. OpenSG quaternions feature the standard set of methods to get and set the rotations, in variants for radians and degrees. The standard order of the components is x,y,z,w. The standard operations (length, normalize, mult) are available, as well as Quaternion::slerp and Quaternion::multVec.


All data in FieldContainers is organized in fields. There are two general types of fields: fields for single values (SFields) and fields for multiple values (MFields). For the standard types and most pointer types there are predefined instances of both kinds of fields.

Single Fields

Single fields hold, as the name says, a single value. Their content can be accessed directly using getValue() and setValue(). It can also be copied from another field by setValue() (for fields of the same type) or by setAbstrValue() (for fields which have the same type, but are given as an abstract field).

Multi Fields

Multi fields hold multiple values. They are realized as STL vectors and offer a similar interface. The field defines types for iterators and references, and the standard begin(), end(), front(), back(), push_back(), insert(), erase(), clear(), size(), resize(), reserve() and other functions.

In addition, multi fields have an interface reminiscent of single fields. It features the setValue() variants mentioned above, indexed variants like getValue(const UInt32 index) and setValue(const FieldTypeT &value, const UInt32 index), and an OpenSG-style getSize() method.

FieldContainer Fields

Each attribute has a name, e.g. someValue, and every field container has a set of standard access functions to access its fields. The field itself can be accessed via getSFSomeValue() or getMFSomeValue() for single or multiple value fields respectively.

For SFields, containers feature direct getSomeValue() and setSomeValue() access methods. The MField getSomeValue() method returns the whole field, just like the getMFSomeValue() method. Some field containers have more access functions, often something like an addSomeValue() method to simplify adding data to multi fields. See the field container docs for details.

Creating New Field Types

All the data that is kept in FieldContainers has to be in fields. Fields provide the interface for the reflectivity and generic access methods to work. They come in the two known variants, single and multi fields. To simplify creating new field types, they do not have to be created explicitly. Instead there are templates SField and MField which take care of the implementation. All you need to provide is a trait structure that defines the types needed and some type-specific functions.

Note that field types for new FieldContainers (actually pointers to FieldContainers, as you can't instantiate them) can be created by fcdEdit automatically. So if you need fields for pointers to your containers, you don't have to follow the descriptions in this section.

The trait has to be a specialization of FieldDataTrait<type> and has to provide the following functions/types:

  • a DataType _type; which is used to uniquely identify the Field's type
  • an access method for the type: DataType &getType(void)
  • two methods to return the names of the field types: Char8 *getSName(void) and Char8 *getMName(void). The names are usually created by capitalizing the type name and prepending SF or MF, e.g. the matrix field names are SFMatrix and MFMatrix.
  • a method to get a default object to initialize the values: type getDefault(void).
  • two methods to convert a data element to and from a string: Bool getFromString(type &outVal, const Char8 *&inVal); and void putToString(const type &inVal, string &outVal);. Note that these are optional in the sense that the system will work without them, but some important features will not work without them, so it's highly recommended to implement them.

Note that all functions have to be static, as the trait class is never instantiated, and that the trait cannot have any virtual functions or data members. It is not used to create actual objects; it's just a convenience container for the needed types/functions.

The fields also need to be able to store themselves in a binary form. If the data structures used are contiguous in memory and don't use pointers, this can easily be accomplished by deriving the FieldDataTrait<type> specialization from FieldTraitsRecurseBase<type>. It will copy the contents of the data types verbatim and back.

This approach will not work as soon as there are pointers in the new structures, even simple things like STL vectors will not work that way.

In these cases you need to implement the binary interface in the trait. It consists of three methods, which exist for single and multiple elements of the type:

  • a method to calculate the size of the packed object: UInt32 getBinSize(const type &object);
  • a method to put the object into a binary block: void copyToBin(BinaryDataHandler &mem, const type &object);
  • a method to receive the object from a binary memory block: void copyFromBin(BinaryDataHandler &mem, type &object);

The last two methods work via a BinaryDataHandler, which abstracts a memory block.

There are some details that have to be handled in order for all the static elements to be available and all necessary templates to be instantiated. Especially for types that are to be used in libraries on Windows, some constraints need to be satisfied.

For an example of how to implement new field types, see Examples/NewTypes in the source code or source distribution.


Geometries define the data used for rendering. Geometries don't necessarily have to be leaves of the tree. Due to the Node/Core division, every Node with an OSG::Geometry core can have children, which are used just as all other OSG::Node's children. Geometry has to be flexible, to accommodate the needs of the application and to provide input data for complex shaders. Providing support for various data types that define the geometry is useful, as are different indexing capabilities to reuse data as much as possible. On the other hand, it also has to be efficient to render. Flexibility and performance don't always go well together, thus there are some simplifications to make.

Properties (Attributes)

OpenSG geometry is modeled closely following OpenGL. The data that makes up the geometry is stored in separate arrays. Positions, colors, normals and texture coordinates all have their own arrays (or OSG::MFields, to stay in OpenSG terminology). As OpenGL can handle many different formats of data, some of which might be more appropriate than others in terms of speed and memory consumption, depending on the application, OpenSG features different versions of this data, allowing pretty much all the variants that OpenGL can handle. To allow this with type safety and without having a separate geometry class for every possible combination, the data fields are stored in separate field containers, the so-called GeoProperties.

There are separate OSG::GeoProperty types for different attributes, and variants for different data types for each kind of OSG::GeoProperty. The naming scheme for OSG::GeoProperty is OSG::Geo<StoredType>Property; see the class documentation for the most commonly used types.

Note: The OpenSG 1.x names directly associated a OSG::GeoProperty with its role in the fixed-function OpenGL pipeline. Since the introduction of vertex shaders, the role of a property is not cast in stone, but instead determined through its use inside the shader. Therefore the new naming scheme focuses more on the type of data stored in a OSG::GeoProperty and not on its function.

Class Structure

The basic idea of the OSG::GeoProperty class structure is very simple. The actual implementation might appear a little complex at first, but allows for simple extensions with a great deal of code reuse. Once the ideas behind the structure are understood, the actual code does not look so bad any more.

All OSG::GeoProperty are derived from OSG::StateChunk, since geometric data is also part of the OpenGL state and this allows for a natural way to take advantage of vertex buffer objects.

There are three primary types of OSG::GeoProperty: OSG::GeoIntegralProperty, OSG::GeoVectorProperty and OSG::GeoMultiProperty. The first two are the generic base classes for the typed OSG::TypedGeoIntegralProperty and OSG::TypedGeoVectorProperty that hold the actual data. OSG::GeoMultiProperty (*XXX one sentence missing here XXX* -- cneumann).

OSG::GeoIntegralProperty types

OSG::GeoIntegralProperty is used for scalar data, most notably to specify OpenGL primitive types, the number of vertices to use for a primitive, or as indices into other properties. However, you cannot create instances of this class, as it is abstract and only offers an interface that allows access to the data through any scalar type, independent of the data actually stored in the property. This can be very convenient, but the abstraction comes at a cost, so for performance-critical operations you need the exact type information and access to the data through the derived OSG::TypedGeoIntegralProperty - the interfaces are described in more detail below.

There are a number of predefined typedefs of OSG::GeoIntegralProperty; see the class documentation for the full list.

There are also aliases for the OpenSG 1.x names including: OSG::GeoPTypesUI8, OSG::GeoPTypesUI16, OSG::GeoPTypesUI32 and OSG::GeoPLengthsUI8, OSG::GeoPLengthsUI16, OSG::GeoPLengthsUI32 and OSG::GeoIndicesUI8, OSG::GeoIndicesUI16, OSG::GeoIndicesUI32.

OSG::GeoVectorProperty types

OSG::GeoVectorProperty is used for vector like data, i.e. for properties that hold point, vector or color information, which may of course be of any type OpenSG offers for these purposes. Again this class is abstract and provides an interface that is independent of the actual data type.

There are predefined typedefs for all combinations of OSG::Geo(Pnt/Vec/Color)(1/2/3/4)(ub/b/us/s/f/fx/d)Property that make sense in combination; see the class documentation for the most commonly used types.

OSG::GeoMultiProperty types

OSG::GeoMultiProperty (*XXX paragraph here XXX* -- cneumann).

Accessing the data

Access to the data is available at two levels that offer different performance/convenience trade-offs. The generic level (available through OSG::GeoIntegralProperty and OSG::GeoVectorProperty) does not require knowledge of the exact data types stored in a property and performs the necessary conversions for the user. The typed level (available through OSG::TypedGeoIntegralProperty and OSG::TypedGeoVectorProperty), on the other hand, does require that knowledge, but compensates by offering better performance.

The generic level contains member template functions that convert the data to the user requested type after retrieving it through an internal interface, thus requiring two type conversions (from the stored type to the type used in the internal interface to the user requested type). The type used in the internal interface is exported as OSG::GeoIntegralProperty::MaxTypeT and OSG::GeoVectorProperty::MaxTypeT respectively.

template <class ExternalType>
Geo*Property::getValue(const UInt32 index) const;

template <class ExternalType>
Geo*Property::getValue(ExternalType &eval, const UInt32 index) const;

template <class ExternalType>
Geo*Property::setValue(const ExternalType &eval, const UInt32 index);

template <class ExternalType>
Geo*Property::addValue(const ExternalType &eval);

template <class ExternalType>
Geo*Property::push_back(const ExternalType &eval);

At the typed level the same template member functions are available as well, but they only require one conversion from the stored type to that requested by the user. Additionally there are overloads that use parameters of the stored type that incur no conversion penalty at all. Another alternative is to gain access to the OSG::MField that holds the data and modify it directly.

      TypedGeo*Property::StoredFieldType *editFieldPtr(void);
const TypedGeo*Property::StoredFieldType *getFieldPtr (void) const;

      TypedGeo*Property::StoredFieldType &editField   (void);
const TypedGeo*Property::StoredFieldType &getField    (void) const;

Finally, OSG::GeoProperty features an interface for OpenGL vertex arrays, giving access to the data and the types involved, which is used for rendering. However, this read-only interface should only be used for internal purposes.


Some rules of thumb:

  • Everywhere you actually want to create a new property you have to use the typed versions like OSG::GeoPnt3fProperty, as they are the only ones that actually contain data.
  • To write functions that can handle arbitrary types of data, use abstract property pointers and the generic interface to access the data.

GeoVectorPropertyPtr colProp = geo->getColors();

if(colProp == NullFC)
{
    FWARNING(("No colors available!\n"));
}
else
{
    // this works independent of the real data type used for colors.
    colProp->push_back(Color3f(1, 0, 1));
}


  • If you know that all the geometry your function has to work on has been created by yourself using a single type of property you can just downcast to that type and use the interface of the OSG::MField that holds the data directly. For safety reasons you should make sure the downcast succeeded. This is the most efficient way to access the data.

GeoColor3ubPropertyPtr colProp = GeoColor3ubPropertyPtr::dcast(geo->getColors());

if(colProp == NullFC)
{
    FWARNING(("Downcast failed!\n"));
}
else
{
    MFColor3ub *colField = colProp->editFieldPtr();

    // here only Color3ub works!
    colField->push_back(Color3ub(255, 0, 255));
}


Using these properties it is possible to define geometry. Note that OpenSG inherits the constraints and specifications that concern geometry from OpenGL. Vertex orientation is counterclockwise when seen from the outside, and concave polygons are not supported.

Non-Indexed Geometry

One additional advantage of separating properties from Geometry is the ability to share properties between geometry NodeCores. As geometries can only have one material right now, that's useful for simplifying the handling of objects with multiple materials.

This simple geometry has one problem: there is no way to reuse vertex data. When a vertex is to be used multiple times, it has to be replicated, which can increase the amount of memory needed significantly. Thus, some sort of indexing to reuse vertices is needed. You can guess what's coming? Right, another property.

Indices are stored in the osg::GeoIndices property, which only exists in the osg::GeoIndicesUI32 variant right now. When indices are present, the given lengths define how many indices are used to define the primitive, while the actual data is indexed by the indices.

Indexed Geometry

Indexed geometry is very close to OpenGL, and probably the most often used type of geometry. It doesn't handle all the cases, though.

Sometimes vertices need different additional attributes, even though they have the same position. One example is discontinuities in texture coordinates, e.g. when texturing a simple cube. The edges of the cube don't necessarily use the same texture coordinate. To support that, a single-indexed geometry has to replicate the vertices.

To get around that you need multiple indices per vertex to index the different attributes. Adding an index for every attribute would blow up the geometry significantly and not necessarily make it easier to use. We decided to use another way: interleaved indices.

Multi-Indexed Geometry

Interleaved indices require every vertex to hold multiple indices. Which index is used for which attribute is defined by a separate indexMapping field, an osg::MField of osg::UInt32. The possible values are bitwise combinations of the available attribute masks: osg::Geometry::MapPosition, osg::Geometry::MapNormal etc. The length of the indexMapping defines how many indices are used per vertex. If it's not set, a single index is used for all available properties (or none at all).

In addition to the properties, the geometry keeps an osg::MaterialPtr to define the material that's used for rendering the geometry (see Materials) and a flag that activates caching the geometry in OpenGL display lists. As geometry rendering is not optimized very much right now, that's the best way to get decent performance. Display lists are turned on by default.

Geometry Iterators

The osg::Geometry setup is very nice and flexible to define: you can mix different kinds of primitives in an object, you can have properties of different kinds, and the indexing allows the reuse of some or all of the data.

From the other side of the fence things look different: if you want to walk over all triangles of a geometry to calculate the average triangle size or the surface area, to calculate face normals, or for whatever other reason, you have to take care of all that flexibility and be prepared for lots of different ways to define geometry.

To simplify that, the concept of a geometry iterator has been introduced. A geometry iterator allows iterating over a given geometry primitive by primitive, face by face (a face being a triangle or quad), or triangle by triangle.

All of them are used like STL iterators: the osg::Geometry has methods to return the first or last+1th iterator, and to step from one element to the next. They can also unify the different indexing variants: when using an iterator you can access the index value for each attribute of each vertex of the iterator separately. Or you can directly access the data that's behind the index in its generic form, which is probably the easiest way of accessing the data of the osg::Geometry.

Example: The following loop prints all the vertices and normals of all the triangles of a geometry:


for(TriangleIterator it = geo->beginTriangles(); it != geo->endTriangles(); ++it)
{
    std::cout << "Triangle " << it.getIndex() << ":" << std::endl;
    std::cout << it.getPosition(0) << " " << it.getNormal(0) << std::endl;
    std::cout << it.getPosition(1) << " " << it.getNormal(1) << std::endl;
    std::cout << it.getPosition(2) << " " << it.getNormal(2) << std::endl;
}
If you're used to having a separate Face object that keeps all the data for a face, the iterators pretty much mimic that behavior. The one thing you can't do using iterators is change the data. To do that you have to use the indices the iterators give you and access the properties directly. Be aware that the iterators hide all data sharing, so manipulating data for a face the iterator gives you can influence an arbitrary set of other faces.

Primitive Iterator

The osg::PrimitiveIterator is the basic iterator that just iterates through the osg::GeoPTypes property and gives access to the primitive's data. It is useful to solve the index mapping complications and to get access to the generic data, but it's primarily a base class for the following two iterator types.

Face Iterator

The osg::FaceIterator only iterates over polygonal geometry and ignores points, lines and polygonal primitives with fewer than three vertices. It also splits the geometry into triangles or quads.

Triangle Iterator

The osg::TriangleIterator behaves like the osg::FaceIterator, but it also splits quads into two triangles, thus doing an implicit triangulation. As OpenSG, just like OpenGL, doesn't support concave geometry, that's not as hard as it sounds.

Line Iterator

The osg::LineIterator only iterates over line geometry and ignores points, polygonal primitives and line primitives with fewer than two vertices. It splits line strips and loops into single lines.

Edge Iterator

The osg::EdgeIterator (currently) only iterates over line geometry and ignores points, polygonal primitives and line primitives with fewer than two vertices, like the osg::LineIterator does, but it leaves line strips and loops as they are. This iterator will make more sense in a future version, where it will return the edges of the other primitives as single lines or line loops as well.

Dev: For all polygonal primitives, probably except GL_POLYGON itself, the edges could be returned as single lines, conceptually similar to the implicit triangulation of the osg::TriangleIterator. For a polygon it seems reasonable to me to reinterpret it as a line loop. Shouldn't we overload getType() to return the interpretation?

The iterators can also be used to indicate a specific primitive/face/triangle/line. Each of these has an associated index that the iterator keeps and that can be accessed using getIndex(). A new iterator can be used to seek() a given primitive/face/triangle again and work on it. This is used for example in the osg::IntersectAction.

Simple Geometry

OpenSG does not have NodeCores for geometric primitives like spheres, cones, cylinders etc. Instead there are a number of utility functions that can create these objects. They can be created as a ready-to-use node or as a naked node core. In most cases we tried to mimic the VRML primitive semantics, so if you're familiar with VRML you will feel right at home.


osg::makePlane creates a single subdivided quad.


osg::makeBox creates a box around the origin with subdivided sides.


osg::makeCone creates a cone at the origin.


osg::makeCylinder creates a cylinder at the origin.


osg::makeTorus creates a torus at the origin.


osg::makeConicalFrustum creates a truncated cone at the origin.


There are two ways to create a sphere. osg::makeSphere uses an icosahedron as a base and subdivides it. This gives a sphere with equilateral triangles, but they do not correspond to latitude or longitude, which makes it hard to get good texture mapping on it. As every subdivision step quadruples the number of triangles, it is also hard to control the complexity of these kinds of spheres.

osg::makeLatLongSphere on the other hand creates a sphere by simply using a regular subdivision of latitude and longitude. This creates very small polygons near the poles, but is more amenable to texture mapping and gives finer control of the resolution of the sphere.

Extrusion Geometry

osg::makeExtrusion creates a pretty general extruded geometry. It works by sweeping a given cross section, which can be given clockwise or counterclockwise, along a spine. For every spine point an orientation and a scale factor are specified. The beginning and the end of the object can be closed by caps, but for the capping to work the cross section has to be convex. The resulting geometry can be refined as a subdivision surface (no idea which subdivision scheme is applied; anyone who knows care to take a look?). Optionally, normals and texture coordinates can be generated.

Helper Functions

A number of helper functions can be used in conjunction with manipulating and optimizing geometry.

Normal Calculation

A common problem for self-created geometry or for geometry loaded from simple file formats is missing normals. Normals are needed for proper lighting; without them objects will either be black or uniformly colored.

Normals can be calculated either for every face or for every vertex.

Face normals, as calculated by osg::calcFaceNormals, are only unique for a given triangle or quad. The resulting object will look faceted, which may or may not be the desired effect. This will also work for striped or fanned models, as OpenSG doesn't have a per-face binding and uses multi-indexed per-vertex normals for this.

Vertex normals are calculated for every vertex and allow a shape to look smooth, as the lighting calculation is done using the vertex normals and interpolated across the surface. They can be calculated using two different methods.

osg::calcVertexNormals(GeometryPtr geo) will just average all the normals of the faces touching a vertex. It does not unify the vertices, i.e. it does not check if a vertex with given coordinates appears in the position property multiple times; the geometry has to be created correctly or be run through osg::createSharedIndex.

The disadvantage of osg::calcVertexNormals(GeometryPtr geo) is its indiscriminate nature: it will average out all the edges in the object. The alternative is osg::calcVertexNormals(GeometryPtr geo, Real32 creaseAngle), which uses a crease angle criterion to decide which edges to keep. Edges that have an angle larger than creaseAngle will not be averaged out. It won't always work for striped geometry. It will process it, but if a stripe point needs to be split because it has two normals, that won't be done. The same sharing caveat as given above applies.

Calculating vertex normals with a crease angle sounds simpler than it is, especially if the calculation should be independent of the triangulation of the object. Thus the algorithm is relatively expensive and should be avoided in a per-frame loop. There are some ideas to do the expensive calculations once and quickly re-average the normals when needed. These have not been realized; if you need this, or even better want to implement it, notify us at info@….

Geometry Creation

Setting up all the objects needed to fully specify an OpenSG Geometry can be a bit tedious. So to simplify the process there are some functions that take data in other formats and create the corresponding OpenSG Geometry data.

Right now there is only one function to help with this, osg::setIndexFromVRMLData. It takes separate indices for the different attributes, as given in the VRML97 specification, together with the flags that influence the interpretation of these indices, and sets up the indices, lengths and types properties of the given geometry.

Geometry Optimization

OpenSG's Geometry structure is very flexible and modeled pretty closely on OpenGL. But not all of the Geometry data specification variants are similarly efficient to render. The functions in this group help optimize different aspects of the Geometry.

osg::createOptimizedPrimitives takes a Geometry and tries to change it so that it can be rendered using the minimum number of vertex transformations. To do that it connects triangles to strips and fans (optionally). It does not change the actual property values, it just creates new indices, types and lengths. The algorithm realized here does not try a high-level optimization, instead it is optimized for speed. Due to its pseudo-random nature it can be run multiple times in the same time a more complex algorithm needs, allowing it to try different variants and keeping the best one found. Or, if execution time is a problem, it can be run only once and create a very quick result that is good, but not optimal.

osg::createSharedIndex tries to find identical elements in the Geometry's Properties and remove the copies. It will not actually change the Property data; it will just change the indexing to only use one version of the data. This is a necessary preparation step to allow osg::createOptimizedPrimitives to identify the triangles it can connect to form stripes and fans.

osg::createSingleIndex resorts the Geometry's Property values to allow using a single index (in contrast to interleaved multi-indices) to represent the Geometry. To do that it might have to remap and copy Property values, as well as index values. While multi-indexing can be very efficient data-wise, because as much data as possible is shared, it is problematic for rendering. OpenGL doesn't know multi-indexing, thus for multi-indexed Geometry the more efficient OpenGL geometry specifiers like vertex arrays can't be used, which can have a significant impact on performance, especially for dynamic objects.

osg::calcPrimitiveCount counts the primitives a Geometry will render, separated into triangles, lines and points:

UInt32 calcPrimitiveCount(GeometryPtr geo, UInt32 &triangle, UInt32 &line, UInt32 &point);

Normal Visualisation

For debugging it can be useful to actually see the normals of an object, as that allows making sure that the normals point in the expected direction and that normals are really identical and not just pretty close. Every normal is represented by a line of a user-defined length.

As OpenSG doesn't have an explicit face/vertex binding mode there are two different functions to create an object representing the vertex or face normals. The application should know whether face or vertex normals are used in the given geometry. In general it is safe to assume vertex normals are used.

osg::calcVertexNormalsGeo creates an object that shows the vertex normals of the given geometry, while osg::calcFaceNormalsGeo creates an object that shows the face normals.

Dev: We should add something more general here. Forcing the app to know is not nice. calcVertexNormalsGeo can already calc normals at the vertices and at the centers of the tris, but the decision when to use which and which normal to use is not clear.

Occlusion Culling

Occlusion culling in 2.0 is very easy to use. There is a good set of defaults that does not affect picture quality but can give you a 2x performance increase when rendering a large amount of geometry that will be in the view frustum but not visible. The occlusion culling that was implemented does the following:

  1. Roughly sort the scene front to back
  2. Render the major occluders
  3. Decide whether an object is likely to be occluded; if so, test it, otherwise just draw it
  4. When the test buffer gets full, draw the objects whose tests came back visible
  5. Go to step 3 until all tests and geometry are checked/drawn

Below is a code sample that illustrates all of the user-tweakable parameters, followed by an explanation of each.

RenderAction *ract = RenderAction::create();

// Don't draw or test any object taking up less than 15 pixels
ract->setOcclusionCullingMinimumFeatureSize(15);

// Any object with an 80% or better chance of being covered, test instead of draw
ract->setOcclusionCullingCoveredThreshold(0.8);

// If the test comes back with less than 15 pixels that would have changed,
// then discard this geometry and don't draw it
ract->setOcclusionCullingVisibilityThreshold(15);

// Use a query buffer that can hold 600 tests at a time
ract->setOcclusionCullingQueryBufferSize(600);

// Objects with less than 50 triangles, just draw and don't test
ract->setOcclusionCullingMinimumTriangleCount(50);

// Turn occlusion culling on
ract->setOcclusionCulling(true);

Small Feature Culling

From the above example, ract->setOcclusionCullingMinimumFeatureSize(15) tells the occlusion-culling algorithm not to draw an object if it estimates that the object would take up fewer than 15 pixels on the screen. This is very useful when you want to get rid of tiny objects that contribute only detail to the final rendering. When set very low, the final rendering will look very similar to a full rendering; when set high, a lot of detail will be missing from the scene. The default is 0 px, i.e. no small-feature culling.

Covered Threshold

For each object in the view frustum the algorithm calculates the probability that the object is covered. This value is compared against the user-set threshold, and if it exceeds it the object is tested for visibility instead of drawn. The value is set by calling ract->setOcclusionCullingCoveredThreshold(threshold), where the threshold is a fraction between 0.0 and 1.0. The default value is 0.7, or 70%.

Visibility Threshold

When an object is tested, the result of the test is the number of pixels the object would affect if drawn. This setting is related to small-feature culling, but different: small-feature culling uses a rough estimate of the affected pixels based on the object's bounding box (assuming the bounding box would be 100% visible) and does not use the result of any test. For example, if we test an object that is almost completely hidden behind another object, the number of affected pixels will be very small; with the visibility threshold we can tell the occlusion-culling algorithm not to draw it. Like small-feature culling, this setting affects the final rendering. It is set by calling ract->setOcclusionCullingVisibilityThreshold(pixels). The default is 0, i.e. no visibility threshold.

Query Buffer Size

This setting simply controls the number of tests that are performed before test results are checked. It comes in handy when many objects in the scene are invisible because a few objects cover most of them; in that case you would want a larger buffer. It is set by calling ract->setOcclusionCullingQueryBufferSize(numTests). The default is 1000.

Minimum Triangle Count

This setting specifies how many triangles an object must have before it is tested. To perform a test, the bounding box is drawn (12 triangles), so if the object itself has only 12 triangles, sending it out for a test would be a waste of time and resources. The latency involved in issuing the test and later retrieving its result pushes the sensible minimum even higher. The right value really depends on how fast the graphics card can render geometry. It is set by calling ract->setOcclusionCullingMinimumTriangleCount(minTriangles). The default is 500 triangles, so all objects under 500 triangles are rendered directly and never tested.


ScreenLOD

The ScreenLOD node introduced in 2.0 is meant to be an "automatic" LOD node. Traditionally, level-of-detail nodes have been based on the distance to the viewer, which makes no distinction between very small and very large objects at the same distance. This node instead estimates what percentage of the screen the node would use. It also takes triangle degradation within the node into account, picking the level that gives the best cost/performance for the image based on these factors. This node is easier to use than the DistanceLOD because the user does not have to state explicitly when to switch between LODs; the user just adds the different levels into the node in decreasing complexity. There are also user-tweakable settings that can be used to tune the node's behaviour. They are demonstrated below with an explanation of each.


OSG::ScreenLODPtr lod_core;
OSG::NodeRefPtr   geom_lod = OSG::makeCoredNode<OSG::ScreenLOD>(&lod_core);

while(you have more geometry LODs)
{
    OSG::NodeRefPtr lod_group(OSG::makeCoredNode<OSG::Group>());

    // Put all the geometry for this LOD into lod_group

    // Add your lod_group to the ScreenLOD node
    geom_lod->addChild(lod_group);
}

RenderTraversalAction *tact = RenderTraversalAction::create();

// Set the number of LODs to use, 0 indicates use all LODs
tact->setScreenLODNumLevels(0);

// Set the minimum coverage threshold (in percent) before the highest LOD is not used
tact->setScreenLODCoverageThreshold(0.01); // if the object takes less than 1% of the screen, choose another LOD

// Adjust the slope of degradation when choosing LODs, higher = faster
tact->setScreenLODDegradationFactor(1.0);

Number of Levels to Use

The user can control the number of levels to use by calling tact->setScreenLODNumLevels(numberOfLevels). This tells the node not to use more than that many levels. When set to 0, all LODs are used; when set to 1, only the highest LOD is used. The default is 0, so all LODs are used.

Screen Coverage Threshold

The node figures out how much of the screen the bounding box of the node will take up (a percentage). This percentage is then compared against the screen-coverage threshold to determine whether to use a different LOD: if the node would take up less than the minimum needed to render at the highest LOD, a different LOD is chosen. The user can tweak this setting by calling tact->setScreenLODCoverageThreshold(0.01). Note that even if this is set higher, the rendering does not necessarily get worse; the node still picks the best LOD given how much of the screen is taken up and how fast the triangles within the node degrade. The default is 0.001, i.e. 0.1%.

Degradation Factor

This setting allows the user to control the slope of degradation used in picking LODs for the objects. When set above 1.0, lower LODs are selected sooner; conversely, when set below 1.0, lower LODs are selected later. The user can adjust this by calling tact->setScreenLODDegradationFactor(theFactor). The default is 1.0, i.e. don't adjust what the node picks.

Coverage Override

It is possible to take complete control over the LOD node and specify exactly what coverage percentages to use when selecting the LOD. To enable this amount of control, use the coverageOverride MField and specify the exact percentages. When you use this list, all other processing is disabled and only these values are used for selection.
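The selection logic implied by such an override list can be sketched as follows. Note that the semantics here are an assumption for illustration: the list is taken to hold one coverage percentage per level (highest detail first), and the first level whose percentage is met by the node's screen coverage wins; this is not OpenSG's code.

```cpp
#include <cstddef>
#include <vector>

// Pick an LOD level from a coverage value and an override list.
// overrides[i] is the minimum screen coverage (as a fraction) required to
// use level i; levels are ordered from highest to lowest detail.
size_t selectLOD(float coverage, const std::vector<float> &overrides)
{
    for (size_t level = 0; level < overrides.size(); ++level)
        if (coverage >= overrides[level])
            return level;            // coverage is big enough for this LOD
    return overrides.size();         // fall back to the lowest-detail level
}
```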



Last modified on 01/17/10 11:44:33