Julio Jerez wrote:The possibilities are scary.
You surely know the Atomontage engine? It's built around that vision and allows poking holes into models at runtime. Here is a random video where they use this to deform terrain:
https://www.youtube.com/watch?v=Gshc8GMTa1Y
I also use similar ideas for my volumetric editor, which can take regular meshes, voxelize them, then do CSG or geometry blending. Or I create geometry from SDF primitives per fluid particle, stuff like that. This is pretty interesting for procedural content generation, which I think should be explored to reduce content creation costs.
I have no volumetric representation of the whole world; I only generate it temporarily to produce the final surface mesh. The surface pipeline is isosurface extraction -> curvature-aligned quad remesh. Such remeshing isn't realtime, but curvature alignment gives good quality. Still, it's resampled geometry and can't preserve all details of the input meshes, of course.
In contrast, voxels or an isosurface are aligned to the global grid, so their quality is not good for visuals, unless we use insanely high resolutions, leading to a waste of memory. Thus I'm not a real voxels believer, and things like Atomontage surely are not the future, despite their claims.
However, for realtime dynamic geometry it's probably our only option, so that's interesting.
But then I need to ask again: why a surface mesh at all? Couldn't you do collisions against the volume data itself much faster? Of course you need a mesh for easy visualization, but you don't need it for the actual engine?
Supporting volume data would be quite interesting. Having some experience with this, here are some important points I've learned along the way:
Density needs less memory than signed distance (SDF), because density allows easy compression based on sparsity.
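To illustrate why: away from the surface, density is almost everywhere fully empty or fully solid, so uniform chunks collapse to a single value, while an SDF varies everywhere and resists such a trivial scheme. A minimal sketch of this idea (the 8^3 chunk size and the `DensityChunk` layout are my own illustration, not actual engine code):

```cpp
#include <array>
#include <cstdint>
#include <memory>

// Hypothetical sparse storage for an 8^3 chunk of 8-bit density.
// Chunks that are entirely one value (typical far from the surface,
// where density is all 0 or all 255) store just that value; only
// chunks crossing the surface pay for the full 512-byte payload.
// SDF data varies everywhere, so this trivial scheme would not help it.
struct DensityChunk
{
    uint8_t uniformValue = 0;                          // used when payload is null
    std::unique_ptr<std::array<uint8_t, 512>> payload; // 8*8*8 voxels, optional

    uint8_t Get(int x, int y, int z) const
    {
        if (!payload) return uniformValue;
        return (*payload)[(z * 8 + y) * 8 + x];
    }

    void Set(int x, int y, int z, uint8_t v)
    {
        if (!payload)
        {
            if (v == uniformValue) return;             // chunk stays uniform
            payload = std::make_unique<std::array<uint8_t, 512>>();
            payload->fill(uniformValue);
        }
        (*payload)[(z * 8 + y) * 8 + x] = v;
    }

    // Collapse back to a single value if every voxel agrees again.
    void TryCollapse()
    {
        if (!payload) return;
        for (uint8_t v : *payload)
            if (v != (*payload)[0]) return;
        uniformValue = (*payload)[0];
        payload.reset();
    }
};
```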
A DAG achieves much higher compression ratios than an SVO on big data. It's easy to convert from SVO to DAG, but not in realtime, so it's interesting for storage at least. A random paper:
https://www.cse.chalmers.se/~uffe/HighResolutionSparseVoxelDAGs.pdf
But SDF allows faster processing than density, e.g. efficient sphere tracing for collision detection. (This is part of how Unreal Engine 5's realtime GI works: they trace SDF models.)
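The sphere-tracing part fits in a few lines: because the SDF value at a point is the distance to the nearest surface, the ray may always advance by that amount without tunneling through geometry. A simplified sketch (all names and the example SDF are mine, not engine code):

```cpp
#include <cmath>
#include <functional>

// Minimal sphere-tracing sketch. The SDF value at a point is the
// distance to the nearest surface, so we can safely step that far
// along the ray on every iteration.
struct Vec3
{
    float x, y, z;
    Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

// Example SDF: sphere of radius r at the origin.
float SphereSDF(Vec3 p, float r)
{
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - r;
}

// Returns t >= 0 of the first hit along origin + t*dir, or -1 on miss.
float SphereTrace(std::function<float(Vec3)> sdf, Vec3 origin, Vec3 dir,
                  float maxT = 100.0f, float eps = 1e-3f)
{
    float t = 0.0f;
    for (int i = 0; i < 128 && t < maxT; i++)
    {
        float d = sdf(origin + dir * t);
        if (d < eps) return t;  // close enough to the surface: report a hit
        t += d;                 // largest step guaranteed not to overshoot
    }
    return -1.0f;               // no surface within maxT
}
```

For collision detection the same loop works with the shape's own SDF; the step count stays low unless the ray grazes the surface.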
An approximate conversion from density to SDF is possible in realtime by making an exponential sum of density mip maps. It's not perfect, but good enough for my needs so far.
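A simplified 1D sketch of the mip-pyramid idea (not my actual exponential-sum formula; this variant just walks up the mip chain while the containing cell is completely empty, which makes the distance to that cell's boundary a safe lower bound on the distance to solid):

```cpp
#include <algorithm>
#include <vector>

// Build density mip levels by averaging pairs of cells (1D for brevity).
std::vector<std::vector<float>> BuildMips(std::vector<float> density)
{
    std::vector<std::vector<float>> mips{density};
    while (mips.back().size() > 1)
    {
        const auto& prev = mips.back();
        std::vector<float> next(prev.size() / 2);
        for (size_t i = 0; i < next.size(); i++)
            next[i] = 0.5f * (prev[2 * i] + prev[2 * i + 1]);
        mips.push_back(next);
    }
    return mips;
}

// Crude distance estimate at position p (finest cell size = 1): while
// the mip cell containing p has zero density, the distance from p to
// that cell's nearest boundary is a conservative lower bound on the
// true distance to solid.
float ApproxDistance(const std::vector<std::vector<float>>& mips, float p)
{
    float lower = 0.0f;
    float cellSize = 1.0f;
    for (const auto& level : mips)
    {
        int i = (int)(p / cellSize);
        if (i < 0 || i >= (int)level.size() || level[i] > 0.0f)
            break;                              // cell contains some solid
        float left = i * cellSize, right = left + cellSize;
        lower = std::max(lower, std::min(p - left, right - p));
        cellSize *= 2.0f;
    }
    return lower;
}
```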
Voxelization of mesh assets is hard, because they are usually not manifold and often just partial, defining a surface only for visual purposes. So they lack a robust volumetric definition.
An early solution was to trace many rays from a given point: if the rays mostly hit back faces, the point is probably in solid space, otherwise in empty space. This is super slow.
A better solution is to integrate the surface as seen from the point. This paper does so by calculating a winding number:
https://www.dgp.toronto.edu/projects/fast-winding-numbers/fast-winding-numbers-for-soups-and-clouds-siggraph-2018-barill-et-al.pdf
The authors' implementation stores some coefficients and evaluates a Taylor series, but this is a waste of memory and calculations. As I knew from my work on GI, surface area and normal direction (a 'surfel') are enough to integrate the surface form factor with better accuracy. If we are close to the surface, we can switch to exact triangles for precise results where needed.
Here is code I did to research this (it can switch between both methods):
Code:
static bool visInsideOutsideTest = false; ImGui::Checkbox("visInsideOutsideTest", &visInsideOutsideTest);
if (visInsideOutsideTest)
{
    static int init = 1;
    static HEMesh mesh;
    if (init)
    {
        //MeshConverter::TestTrianglesToHE (mesh, mesh);
        //((HEMesh_Serialization&)mesh).AutoLoadModel ("C:\\dev\\pengII\\mod\\bunny closed.lxo"); // hangs on tessellation (delauney)
        ((HEMesh_Serialization&)mesh).AutoLoadModel ("C:\\dev\\pengII\\mod\\Armadillo_input.obj");
        //((HEMesh_Serialization&)mesh).AutoLoadModel ("C:\\dev\\pengII\\mod\\Gear.lxo");
        //((HEMesh_Serialization&)mesh).AutoLoadModel ("C:\\dev\\pengII\\mod\\MaxPlanck_input.obj");
        //((HEMesh_Serialization&)mesh).AutoLoadModel ("C:\\dev\\pengII\\mod\\Beetle_input.obj"); // hangs on tessellation (delauney)
        //((HEMesh_Serialization&)mesh).AutoLoadModel ("C:\\dev\\pengII\\mod\\Buddha_input.obj"); // hangs on tessellation (delauney)
    }

    static vec query(0.46f, 0, 0);
    ImGui::DragFloat3("query", (float*)&query, 0.001f);
    static bool surfels = false; ImGui::Checkbox("surfels", &surfels);

    float integral = 0;
    if (surfels) for (int i=0; i<mesh.GetPolyCount(); i++) // approximate: one surfel per polygon
    {
        float area;
        vec center = mesh.CalcPolyCenterAndArea(area, i);
        vec norm = mesh.CalcPolyNormal(i);
        vec diff = query - center;
        float sql = diff.SqL();
        if (sql > FP_EPSILON2)
        {
            // dipole approximation of the solid angle: (q-c).(n*A) / |q-c|^3
            float solidAngle = diff.Dot(norm * area) / (sqrt(sql) * sql);
            integral += solidAngle;
        }
    }
    if (!surfels) for (int i=0; i<mesh.GetPolyCount(); i++) // exact: solid angle per triangle
    {
        vec triVD[3];
        bool onSurface = false;
        int eI = mesh.GetPolyEdge(i);
        for (int j=0; j<3; j++)
        {
            vec diff = mesh.GetEdgeVertexPos(eI) - query;
            float dist = diff.Length();
            if (dist < FP_EPSILON2)
            {
                onSurface = true;
                break;
            }
            triVD[j] = diff / dist; // unit direction to triangle vertex
            eI = mesh.GetEdgeNext(eI);
        }
        if (!onSurface)
        {
            // Van Oosterom & Strackee formula for the solid angle of a triangle
            float numer = triVD[0].Dot(
                vec(triVD[1]-triVD[0]).Cross(triVD[2]-triVD[0]));
            float denom = 1 +
                triVD[0].Dot(triVD[1]) +
                triVD[0].Dot(triVD[2]) +
                triVD[1].Dot(triVD[2]);
            float solidAngle = 2 * atan2(numer, denom);
            integral += solidAngle;
        }
    }

    float winding = 1 / float(4*PI) * integral;
    bool inside = winding > 0.5f;
    RenderCircle(0.01f, query, vec(0), !inside, inside, 0);
    ImGui::Text("winding %f", winding);
    ((HEMesh_Vis&)mesh).RenderMesh(0,0,0);
    init = 0;
}
That's simple and nice, but still slow.
To make it fast, we can use something like an octree and calculate an average surfel per cell. If the query point is distant from the actual node, we can use that average instead of traversing the subtree.
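The acceleration looks roughly like this (a simplified sketch, not my production code; the tree build is omitted and the distance criterion `beta` is an illustrative choice): each node stores one aggregate surfel, i.e. the area-weighted centroid and the summed area*normal of its subtree, and a far query evaluates just one dipole term for the whole node.

```cpp
#include <cmath>
#include <vector>

struct V3
{
    float x = 0, y = 0, z = 0;
    V3 operator-(V3 o) const { return {x - o.x, y - o.y, z - o.z}; }
    float Dot(V3 o) const { return x * o.x + y * o.y + z * o.z; }
};

struct Surfel { V3 center; V3 areaNormal; };   // areaNormal = normal * area

struct Node
{
    V3 center;                 // aggregate (area-weighted) surfel center
    V3 areaNormal;             // sum of the subtree's area*normal
    float radius = 0;          // bounding radius of the node's surfels
    std::vector<Surfel> leaf;  // leaves hold surfels directly
    std::vector<Node> kids;    // inner nodes hold children
};

// Dipole term: (q-c).(n*A) / |q-c|^3, same formula as in the code above.
float SolidAngle(V3 query, V3 center, V3 areaNormal)
{
    V3 d = query - center;
    float sql = d.Dot(d);
    if (sql < 1e-12f) return 0;
    return d.Dot(areaNormal) / (std::sqrt(sql) * sql);
}

float Integrate(const Node& n, V3 query, float beta = 2.0f)
{
    V3 d = query - n.center;
    // far enough: one aggregate term replaces the whole subtree
    if (n.radius > 0 && std::sqrt(d.Dot(d)) > beta * n.radius)
        return SolidAngle(query, n.center, n.areaNormal);
    float sum = 0;
    for (const Surfel& s : n.leaf) sum += SolidAngle(query, s.center, s.areaNormal);
    for (const Node& k : n.kids)  sum += Integrate(k, query, beta);
    return sum;
}
```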
That brings us roughly to where the paper above is. Still slow.

The authors proposed the fast multipole method, but I did not try that.
Instead, I implemented something similar by processing blocks of query regions in a subdividing, top-down fashion (which requires passing the traversal stack to child nodes as a parameter). Now a distant surface patch integral can serve a whole block of query points at once, and the algorithm is finally practically fast. IIRC, I can voxelize a mesh at 2k^3 resolution in one minute.
But it's still not perfect. Imagine a model of a column on a floor. The modeler will create a big flat quad for the floor and a cylinder without caps for the column. He will not cut a hole out of the floor polygon where the column stands, as that would add triangles and work for no win.
At a point inside the column but close to the floor, the algorithm 'sees' more front-facing than back-facing surface, so it will generate a bubble of empty space in this region, which we do not want.
I solve this with a heuristic that filters out bubbles with small surface area, but manual interaction to decide over such bubbles remains necessary in very rare cases, I guess.
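Such a filter could look roughly like this (a simplified sketch, not my actual code; the grid layout and threshold are illustrative): flood-fill each connected empty region, and fill back any region that does not touch the grid border and whose exposed surface area, counted as voxel faces bordering solid, is below a threshold.

```cpp
#include <cstdint>
#include <vector>

struct Grid
{
    int n;                      // n*n*n voxels, 1 = solid, 0 = empty
    std::vector<uint8_t> v;
    Grid(int n) : n(n), v(n * n * n, 0) {}
    int Idx(int x, int y, int z) const { return (z * n + y) * n + x; }
};

void FillSmallBubbles(Grid& g, int minArea)
{
    const int n = g.n;
    std::vector<int> label(g.v.size(), -1);
    const int dx[6] = {1,-1,0,0,0,0}, dy[6] = {0,0,1,-1,0,0}, dz[6] = {0,0,0,0,1,-1};
    for (int z = 0; z < n; z++) for (int y = 0; y < n; y++) for (int x = 0; x < n; x++)
    {
        int i = g.Idx(x, y, z);
        if (g.v[i] || label[i] >= 0) continue;
        // flood fill one empty component, measuring area and border contact
        std::vector<int> stack{i}, cells;
        label[i] = i;
        bool touchesBorder = false;
        int area = 0;
        while (!stack.empty())
        {
            int c = stack.back(); stack.pop_back();
            cells.push_back(c);
            int cx = c % n, cy = (c / n) % n, cz = c / (n * n);
            for (int d = 0; d < 6; d++)
            {
                int px = cx + dx[d], py = cy + dy[d], pz = cz + dz[d];
                if (px < 0 || py < 0 || pz < 0 || px >= n || py >= n || pz >= n)
                    { touchesBorder = true; continue; }
                int p = g.Idx(px, py, pz);
                if (g.v[p]) { area++; continue; }       // face against solid
                if (label[p] < 0) { label[p] = i; stack.push_back(p); }
            }
        }
        if (!touchesBorder && area < minArea)            // small enclosed bubble
            for (int c : cells) g.v[c] = 1;              // fill it back to solid
    }
}
```

The border test keeps the outside air intact no matter how small its exposed area is, so only truly enclosed bubbles are candidates for filling.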
A nice worst-case model is Crytek's Sponza.
Now you know, and you can decide how deep you want to go into the rabbit hole of volumetric data.
These are not really hard problems, but they are quite time consuming. As usual.

And sorry if I wrote all this once before already - I somehow remember as if I did.