Federico Ponchio and Matteo Dellepiane
VCG - ISTI - CNR
Efficient transmission and rendering
of (large) 3D meshes (on the web)
(we are not concerned with the user interface here)
The mesh is decomposed into small patches of a few thousand triangles each, at different resolutions.
And assembled back at variable resolution,
so that the resolution on screen is approximately constant.
Of course it is a bit more complicated than this.
You can find the gory details in my PhD thesis:
That's perfect for the web... but also for native implementations.
struct IndexEntry {
  unsigned short n_of_vertices;  //64k vertices max per patch
  unsigned short n_of_triangles;
  int patch_offset;              //where the patch data starts in the file
  float error;                   //geometric error of the patch
  vec4f bounding_sphere;         //center (xyz) and radius (w)
};                               //32 Bytes
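As an illustration of how these fields can drive the traversal (a sketch, not the actual Nexus code: projectedError, the camera parameters, and the one-pixel threshold are all assumptions here), the per-patch error and bounding sphere suffice to decide whether a patch gets refined:

#include <cmath>

//sketch only: screen-space size of the patch error at distance
//'viewDistance' (e.g. derived from bounding_sphere and the camera)
float projectedError(const IndexEntry &e, float viewDistance,
                     float screenHeight, float fovY) {
    return e.error * screenHeight / (2.0f * viewDistance * std::tan(fovY / 2.0f));
}

//refine while the error is still visible (here: above ~1 pixel)
bool shouldRefine(const IndexEntry &e, float viewDistance,
                  float screenHeight, float fovY) {
    return projectedError(e, viewDistance, screenHeight, fovY) > 1.0f;
}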
Any mesh compression algorithm can be used
if we compress each patch separately
(e.g. OpenCTM, WebGL-Loader, PopBuffers).
As long as decompression is fast enough.
Of course, we developed a new one.
The sweet spot depends on the available bandwidth
and processing power.
From: Limper et al., Fast Delivery of 3D Web Content: A Case Study.
A modified version of Edgebreaker:
it turns the mesh into a sequence of symbols.
We keep a linked list of the boundary and a list of unprocessed edges.
In practice, these expensive operations can be minimized by processing the active edges in an appropriate order.
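A minimal sketch of the boundary kept as a doubly linked list over vertex indices (illustrative data structure and names, not the paper's actual implementation); both updates are O(1):

#include <vector>

struct Boundary {
    std::vector<int> next, prev;   //next/prev vertex along the boundary
    Boundary(int n) : next(n, -1), prev(n, -1) {}

    //unlink vertex v when the boundary closes around it
    void remove(int v) {
        next[prev[v]] = next[v];
        prev[next[v]] = prev[v];
    }
    //link w right after v when a new triangle extends the boundary
    void insertAfter(int v, int w) {
        next[w] = next[v];
        prev[w] = v;
        prev[next[v]] = w;
        next[v] = w;
    }
};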
The error value stored in the patches is used to automatically pick the quantization step.
This ensures consistency across borders.
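A sketch of how the error could drive quantization. The power-of-two snapping and the 0.1 fraction are assumptions of this sketch, not necessarily the paper's exact rule; snapping steps to powers of two makes coarser grids subsets of finer ones, which is one way to keep shared borders consistent:

#include <cmath>

//largest power of two below a fraction of the patch error
float quantizationStep(float patchError) {
    return std::exp2(std::floor(std::log2(patchError * 0.1f)));
}

int quantize(float coordinate, float step) {
    return (int)std::round(coordinate / step);
}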
New vertex positions are estimated, and the differences from the actual positions are encoded.
This is especially effective for regular meshes.
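The classic estimator here is the parallelogram rule (used as an illustration; types and names are ours): the new vertex across an edge is predicted from the already-decoded triangle on the other side, and only the small correction is stored.

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

//v1, v2 span the shared edge; v0 is the opposite vertex of the known triangle
Vec3 predict(Vec3 v0, Vec3 v1, Vec3 v2) {
    return sub(add(v1, v2), v0);   //v1 + v2 - v0
}
//encoded correction = actual position - predict(v0, v1, v2)

On a regular mesh the prediction is almost exact, so the corrections are tiny and compress well.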
Each difference is split into its bit length and its raw bits; only the first stream will be compressed.
The dictionary becomes very small (and thus fast).
We assume that nearby difference values have similar probabilities,
so we can approximate our streams as a memoryless entropy source with fixed frequencies.
Tunstall coding maps a variable number of symbols to a fixed-size word (e.g. a byte).
In decompression, each byte expands back to a variable number of symbols.
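Decoding is then a single table lookup per byte. A minimal sketch, assuming the 256-entry dictionary has already been built (names are illustrative):

#include <cstdint>
#include <string>
#include <vector>

//each input byte expands to the symbol run stored in table[byte]
std::vector<uint8_t> tunstallDecode(const std::vector<std::string> &table,
                                    const std::vector<uint8_t> &input) {
    std::vector<uint8_t> output;
    for (uint8_t b : input)
        for (char symbol : table[b])
            output.push_back((uint8_t)symbol);
    return output;
}

The byte-aligned lookup, with no per-bit branching, is what keeps decoding fast.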
32,000 symbols, Poisson distribution, Intel Core i5 @ 3.1 GHz
Tunstall has about 10% worse compression.
1-3 million triangles per second in JavaScript (single thread, Chrome; difficult to measure exactly)
16 million triangles per second in C++
This includes per-vertex normals and colors.
Nexus library: available at http://vcg.isti.cnr.it/nexus
3DHOP (includes Windows executables and interface): http://vcg.isti.cnr.it/3dhop
Versions with compression support to be released shortly.
Estimating the normals from faces and vertices is quite fast.
We can then encode just the differences; remember, we need the borders to match.
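A sketch of this estimate (illustrative types and names), accumulating face normals per vertex:

#include <vector>

struct V3 { float x = 0, y = 0, z = 0; };

static V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 cross(V3 a, V3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static void accumulate(V3 &n, V3 f) { n.x += f.x; n.y += f.y; n.z += f.z; }

//area-weighted face normals summed on each vertex; left unnormalized,
//since only the direction is quantized and encoded
std::vector<V3> estimateNormals(const std::vector<V3> &pos,
                                const std::vector<int> &tri) {
    std::vector<V3> normals(pos.size());
    for (size_t i = 0; i + 2 < tri.size(); i += 3) {
        int a = tri[i], b = tri[i + 1], c = tri[i + 2];
        V3 fn = cross(sub(pos[b], pos[a]), sub(pos[c], pos[a]));
        accumulate(normals[a], fn);
        accumulate(normals[b], fn);
        accumulate(normals[c], fn);
    }
    return normals;
}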
We change colorspace for better coherence,
then quantize and encode the differences as usual.
Implemented (but not in the paper...)
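The slides do not name the colorspace; as an illustration, a lossless RGB to YCoCg-R transform is one common choice for decorrelating color channels before delta coding:

struct YCoCg { int y, co, cg; };

//reversible (lossless) forward transform, integer arithmetic only
YCoCg toYCoCg(int r, int g, int b) {
    int co = r - b;
    int t  = b + (co >> 1);
    int cg = g - t;
    int y  = t + (cg >> 1);
    return {y, co, cg};
}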
Interleave the coordinate bits and sort by Z-order; encode each difference as the number of differing bits followed by the differing bits, as usual (see the sketch below).
Not implemented yet (wait next year)
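A sketch of the bit interleaving (the 21-bits-per-axis packing is an assumption here); consecutive Morton codes are then XORed and the differing bits encoded as described above:

#include <cstdint>

//pack three quantized coordinates into one Morton (Z-order) code
uint64_t mortonEncode(uint32_t x, uint32_t y, uint32_t z) {
    uint64_t code = 0;
    for (int i = 0; i < 21; i++) {   //21 bits per axis fit in 63 bits
        code |= (uint64_t)((x >> i) & 1) << (3 * i + 0);
        code |= (uint64_t)((y >> i) & 1) << (3 * i + 1);
        code |= (uint64_t)((z >> i) & 1) << (3 * i + 2);
    }
    return code;
}
//difference between consecutive points: codes[i] ^ codes[i - 1]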
The format supports projective textures and texture coordinates.
We plan to add support for this algorithm, especially for mobile.
Allocating ArrayBuffers is costly.
Using an ArrayBuffer directly is fast(er than copying variables around).
Manually inlining functions? Yes, legibility suffers, but the gains are substantial.
Integer arithmetic is possible but painful. Very painful.
Use a C++ reference implementation.