Issue #61 open
Each PagedVolume::Chunk currently stores its data in 'linear' order, but using Morton order instead would bring a couple of advantages:
- Locality of access: On average, the neighbors of a voxel should be closer together in memory.
- Ease of downsampling: A crude approach to downsampling is to simply average together groups of eight voxels into a single voxel. When Morton encoding is used, these groups of eight voxels always lie adjacent in memory, meaning that the downsampling can be done with a single pass over the data in the order that it is stored in memory.
- Some algorithms need random access: Operations like raycasting (which may be used heavily for ambient occlusion calculation) can sample the neighbors in any/every direction and so may benefit from Morton ordering.
There are also disadvantages:
- Index calculation is harder: For a given (x, y, z) position we need to compute the index into the chunk's array. This is easy with the linear approach (a couple of adds and multiplies) but requires complex bit shifting in the Morton case. In principle the improved locality is supposed to make up for the computational overhead. Rather than doing the bit shifting every time, a lookup table is generally considered to be the fastest approach.
- Some algorithms need linear access: The surface extractors work one slice at a time and in general I think this makes a lot of sense. This means they may be better suited to a linear layout. That said, profiling has indicated that normal calculation is actually one of the bottlenecks, and in this case Morton ordering may help. The MC extractor needs to be optimised anyway as it can touch the same voxel multiple times, and also calls the convertToDensity() function multiple times. It should probably create a slice of densities, fill it once, and use that for both vertex and normal generation.