Several people have asked on the mailing list whether it is possible to use doubles in OpenSG, because they need to visualize large-scale scenes, such as solar systems, with objects both near to the camera (a few metres) and far away (light-years).
The problem is a bit more complex than just using doubles in OpenSG...
This page gathers these and related issues and presents some solutions. Please contribute if you have experience in this.
Floating Origin, http://www.floatingorigin.com
Floating point precision and accuracy
- Using doubles instead of floats allows for far-away objects by increasing the number of significant digits.
- Precision is often thought to be the main issue, but in fact the main issue is often that the resolution of floating point space (and hence accuracy) decreases with distance from the origin. This is the main problem with conventional (origin-relative viewpoint navigation) systems.
- The worst-case error magnifies exponentially with multiplication in calculations. It also fluctuates randomly (first in one direction, then the other) with changes in position. This can produce a fluctuating error pattern of exponentially increasing magnitude with distance from the origin, leading to occasional error spikes that may become visible as z-buffer tearing artifacts, jitter or occasionally strange rendering results when viewing objects far from the origin.
- You can still lose accuracy if you're dealing with large numbers, because floating point accuracy decreases with distance from the origin regardless of precision.
- I.e. with 3 significant digits, a value in the range of 1e10 has a max precision of 1e7, so you can't place something with more precision than that. This also applies to multiplied values.
- A 32-bit float has ~6 significant digits; a 64-bit double has ~15, so an object at 1e18 can have vertices and/or be placed with at most 1e3 units of precision with doubles.
- Almost all OpenGL hardware implementations use float precision only (and probably most software implementations too), so very little is gained by sending doubles to OpenGL. The float precision may be 16-bit, 24-bit or 32-bit, depending on the hardware and software implementation.
- OpenSG only has partial double support (mostly because it is not meaningful, given the GL implementation). However, OpenSG supports doubles for geometry properties (positions, normals). This might help you save memory and store your data in one place; it will not give you more precise vertex*matrix multiplication results. We might also see a DoubleTransform? core in 2.0, but additional fixes are needed if you want the RenderTraversalAction? to do the math (i.e. frustum culling etc.) in double precision.
- Move the objects while keeping the camera at the origin (the floating-origin approach). This means storing positions in double format (or better) and computing camera->object matrices for each transform core every frame. Objects that are far away will still suffer a bit from numerical issues, but they should be far enough away that this is not noticeable, thanks to perspective foreshortening. Note that very large objects at very large distances can still exhibit visible problems if their polygons are correspondingly large (which they often will be). In that case, see the next point!
- Render far objects to a texture (cubemap) and use it as a skybox. This is also a LOD technique, often referred to as "impostors".
- Note that, if you are using a floating-origin approach, it is also important to manage the near/far clip planes: you want to keep the full representable range of coordinate values within the visible volume - the view frustum. Avoid a frustum that is much larger than the visible region.
Depth buffer precision
- Most OpenGL hardware has a 24-bit depth buffer using fixed-point math. To get sufficient precision, a suggested maximum near/far ratio is 10000 (i.e. if near is 5 units, far shouldn't be more than 50 000).
- Support for 32-bit IEEE float depth buffers is in the works. Note that this might not be much better, precision-wise, than the 24-bit buffer, due to the nonlinear way depth values vary and the tricks OpenGL hardware vendors use to optimize this. (However, 32-bit depth buffers are nice when working with render-to-texture etc.)
- Render the scene multiple times, back to front, clearing the depth buffer between passes and using different near/far planes for each (e.g. a two-pass render would have planes at 1, 100 and 10000; note the exponential scale). Currently this is not supported directly, so one way to do it is to have several viewports with separate cameras using the same beacon, with different near/far planes, and a DepthClear? Background on the closer one so it does not disturb the farther one's image.