Mesh shattering with baked physics

May 22, 2020

‘Baking’ in computer graphics is the principle of precomputing heavy tasks, saving the result, and using it in a realtime application.
The most obvious baking process you may encounter is lightmap baking. Instead of computing all direct and indirect lighting in realtime, the light contribution is precomputed offline, saved to textures called lightmaps, and those lightmaps are used for realtime rendering.
You get the benefit of good-looking, plausible lighting and fast rendering. The trade-off is dynamics: it’s then impossible to change the lights without recomputing those lightmaps.

Lightmap example.

The same principle applies to baking Ambient Occlusion texture maps (with tools like Substance Painter).
But baking can also be useful in totally different cases. As long as precomputing something buys speed, it’s a win.
Now let’s go full circle and precompute physics. Physics computation can be expensive when you have hundreds or thousands of complex meshes in your simulation. So why not precompute the dynamics in local space, so that we can render them with any position, rotation and scale we like?

That’s what I’m going to show you today with mesh shattering.

The playground for this demo is accessible here:

Breaking a mesh

The algorithm to break a mesh into smaller pieces is simple but can be hard to achieve correctly. It will be done offline so CPU and memory constraints are less relevant.
We start with a Voronoi diagram. Theory and math are available here

“In mathematics, a Voronoi diagram is a partition of a plane into regions close to each of a given set of objects.”

To put it more simply: the cut lines lie at the mid-distance between two points. With two points, the line that cuts the plane in two is the perpendicular bisector.
Add more points and things get complicated very quickly.
But there is another way to look at it for our case:
1. For each cell A, start with the mesh to break as the current mesh
2. for each other cell B:
3. compute the bisection plane between cell A and cell B
4. cut the current mesh with the plane, keeping the half on A’s side
5. repeat from 2 for the next cell B
6. the current mesh is now one piece
7. repeat from 1 for the next cell A

The complexity is O(n²) where n is the number of cells.
The bisection plane in step 3 is the plane whose normal is the direction from cell A to cell B and which passes through the midpoint between cell A and cell B.
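The cell-construction loop above can be sketched in 2D, where cutting a mesh with a plane reduces to clipping a polygon with a half-plane. This is a minimal illustrative sketch, not the demo’s actual code; all names are mine, and a real implementation works on 3D meshes and must also build the inner faces.

```python
# 2D sketch of the Voronoi cell-construction loop: for each cell A, start
# from the full shape (here a square) and clip it by the bisector
# half-plane between A and every other cell B.

def clip_by_half_plane(poly, normal, offset):
    """Sutherland-Hodgman clip of polygon `poly` against the half-plane
    dot(p, normal) <= offset. Returns the clipped polygon."""
    out = []
    n = len(poly)
    for i in range(n):
        cur, nxt = poly[i], poly[(i + 1) % n]
        d_cur = cur[0]*normal[0] + cur[1]*normal[1] - offset
        d_nxt = nxt[0]*normal[0] + nxt[1]*normal[1] - offset
        if d_cur <= 0:
            out.append(cur)
        if (d_cur <= 0) != (d_nxt <= 0):  # the edge crosses the plane
            t = d_cur / (d_cur - d_nxt)
            out.append((cur[0] + t*(nxt[0]-cur[0]),
                        cur[1] + t*(nxt[1]-cur[1])))
    return out

def voronoi_cell(a, others, bounds_poly):
    """Clip `bounds_poly` by the bisector between seed `a` and each other seed."""
    cell = bounds_poly
    for b in others:
        # Normal = direction from A to B; the plane passes through the midpoint.
        normal = (b[0]-a[0], b[1]-a[1])
        mid = ((a[0]+b[0])/2, (a[1]+b[1])/2)
        offset = mid[0]*normal[0] + mid[1]*normal[1]
        cell = clip_by_half_plane(cell, normal, offset)
    return cell

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
cell = voronoi_cell((0.25, 0.5), [(0.75, 0.5)], square)
# With two seeds the bisector is the vertical line x = 0.5,
# so A's cell is the left half of the square.
```

Running this for every seed in turn yields the full set of pieces, which is exactly the O(n²) loop described above.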
Cutting the mesh leads to more substeps, mainly computing the inner faces. As the topic of this demo is not only shattering, I will not cover it here; check the code for more details.
Now you have a bunch of meshes that, colored differently per submesh, look like this:

Shatter Studio with Voronoi mesh computed.

Physics and baking to a texture

Computing the physics is not difficult. I used Bullet Physics for ease of integration, but the steps with another engine should be pretty much the same:
— create a rigid body with a convex hull collision shape for each submesh computed previously
— apply optional forces to simulate an explosion, a vortex… you name it
— run one physics step
— store the resulting rigid body matrices in an array
— repeat for as many frames as you want
Now you have an array dependent on time. You can go back and forth in time with a simple slider. No more physics computation is involved.
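The bake loop above can be sketched as follows. This is an illustrative sketch only: the `step` function is a toy stand-in (free fall under gravity) for a real Bullet Physics step, and it stores positions rather than full matrices.

```python
# Sketch of the bake loop: run the simulation once, store one transform
# per body per frame, then scrub through time by indexing the array.

GRAVITY = -9.81
DT = 1.0 / 60.0

def step(bodies):
    # Toy integrator standing in for the physics engine: each body is a
    # dict with a position and a velocity, integrated by one time step.
    for b in bodies:
        b["vel"][1] += GRAVITY * DT
        for i in range(3):
            b["pos"][i] += b["vel"][i] * DT

def bake(bodies, num_frames):
    frames = []
    for _ in range(num_frames):
        step(bodies)
        # The real pipeline stores the full rigid-body matrix; here we
        # store position only (copied, so later steps don't overwrite it).
        frames.append([list(b["pos"]) for b in bodies])
    return frames

bodies = [{"pos": [0.0, 10.0, 0.0], "vel": [0.0, 0.0, 0.0]}]
frames = bake(bodies, 512)
# Scrubbing in time is now just an array lookup: no physics at playback.
pose_at_frame_100 = frames[100]
```

The slider in the demo does exactly this lookup: the frame index is the only thing that changes at playback time.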

The number of matrices depends on time and the number of submeshes. This can escalate quickly. For 64 meshes, 512 frames of animation and 64 bytes per matrix (16 floats of 4 bytes each), this is 2 megabytes of data.
Even zipped with a good ratio of 50%, this is still 1 MB of data to transfer. We can do better!

First, we can decompose the matrix into a position and a quaternion. No need to store a scale, as the submesh scale doesn’t change during the simulation. 3 floats for the position and 4 for the quaternion give us less than a megabyte of uncompressed data.

There is still room for improvement. The 4-float quaternion can be quantized to 4 bytes: each normalized quaternion component ranges from -1 to 1. By remapping that range to [0..255], we can store the quaternion in an RGBA texture.
Can we do the same for the position? Well, yes, with some trade-off on precision. Instead of storing the absolute value, we store the value relative to the simulation bounding box.

From all the matrices, we compute the bounding box minimum and maximum: the volume in which the whole simulation runs. Then the matrix position is expressed as a ratio inside that bounding box. That ratio, in the range [0..1], is scaled to [0..255] and saved in the same texture as the quantized quaternion.
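The quantization step can be sketched like this. It is a minimal sketch with names of my own choosing, assuming a unit quaternion and a position inside the simulation bounding box, each packed into one RGBA/RGB texel.

```python
# Quantize a unit quaternion and a bounded position to bytes, and decode
# them back the way the shader will, to see the precision cost.

def quantize01(x):
    """Map a [0..1] float to a byte in [0..255]."""
    return max(0, min(255, round(x * 255.0)))

def pack_quaternion(q):
    # Components are in [-1..1]; remap to [0..1], then to bytes.
    return bytes(quantize01((c + 1.0) * 0.5) for c in q)

def unpack_quaternion(data):
    return [c / 255.0 * 2.0 - 1.0 for c in data]

def pack_position(p, bb_min, bb_max):
    # Store the position as a ratio inside the simulation bounding box.
    return bytes(quantize01((p[i] - bb_min[i]) / (bb_max[i] - bb_min[i]))
                 for i in range(3))

def unpack_position(data, bb_min, bb_max):
    return [bb_min[i] + data[i] / 255.0 * (bb_max[i] - bb_min[i])
            for i in range(3)]

bb_min, bb_max = [-10.0] * 3, [10.0] * 3
q = [0.0, 0.7071, 0.0, 0.7071]          # 90 degrees around Y
p = [1.5, -3.25, 7.0]
q2 = unpack_quaternion(pack_quaternion(q))
p2 = unpack_position(pack_position(p, bb_min, bb_max), bb_min, bb_max)
# Worst-case position error is half a quantization step per axis:
# (bb_max - bb_min) / 255 / 2, about 0.04 units for this 20-unit box.
```

This makes the precision trade-off concrete: the bigger the simulation bounding box, the coarser the decoded positions.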

Baked animation. One line per mesh. Animation time goes from left to right. Orientation is top half of picture. Translation is bottom half.

The resulting image is 40 kilobytes instead of 2 megabytes. Quite a nice compression ratio.
Our job now is to read those animation values and animate the submesh.

Display with NME

At this point, we have tens or hundreds of small meshes, and their animated world positions/orientations in a texture. For performance reasons, we only want to render one mesh: one draw call per submesh would be a performance killer.
To link each submesh with its corresponding information in the texture, we use its texture coordinates. For each submesh, we compute a unique V texture coordinate (basically its index), then we merge all the submeshes into one big mesh.

The trick is to change the range, like before: an index in [0..NumberOfSubmeshes-1] becomes a V texture coordinate in the range [0..1[.
Once done, save it to your favorite format. For this demo it will be .OBJ.
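The index-to-V mapping is a one-liner. This sketch adds a half-texel offset so each submesh samples the center of its own row in the animation texture; that refinement is my assumption, not necessarily what the demo tool does.

```python
# Map each submesh index to a unique V coordinate in [0..1[, one row of
# the animation texture per submesh.

NUM_SUBMESHES = 64

def submesh_v(index, num_submeshes=NUM_SUBMESHES):
    # +0.5 samples the texel center, avoiding bleeding between rows.
    return (index + 0.5) / num_submeshes

vs = [submesh_v(i) for i in range(NUM_SUBMESHES)]
# Every submesh gets a distinct V strictly inside [0..1[.
```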

Next, we use Node Material Editor.

Global view of NME graph with loaded shattered mesh

Mesh.UV is split to extract the V component, which is combined with a Time uniform to get the texture coordinate. As the texture contains both position and orientation, 2 taps are performed.

Sampling of animation texture

Position is in range [0..1] and rescaled to real world coordinates. The bounding box minimum and maximum values are part of the shader graph.

Position is scaled to the animation bounding box. World position = Min + (Max-Min) * TexturePositionSample.

The orientation quaternion is used for transforming the vertex position and the normal. The formula to transform a vector by a quaternion is:

vec3 qtransform( vec4 q, vec3 v ){ return v + 2.0*cross(q.xyz, cross(q.xyz, v) + q.w*v); }

And it’s directly translated in the graph.
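As a sanity check, here is the same formula transcribed to Python (my transcription, assuming a normalized quaternion stored as [x, y, z, w]):

```python
# Rotate a vector by a unit quaternion: v' = v + 2*cross(q.xyz, cross(q.xyz, v) + q.w*v)

import math

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def qtransform(q, v):
    xyz = q[:3]
    cv = cross(xyz, v)
    inner = [cv[i] + q[3]*v[i] for i in range(3)]
    t = cross(xyz, inner)
    return [v[i] + 2.0*t[i] for i in range(3)]

s = math.sqrt(0.5)
q = [0.0, s, 0.0, s]           # 90 degrees around the Y axis
v = [1.0, 0.0, 0.0]
rotated = qtransform(q, v)     # x axis rotates onto [0, 0, -1]
```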

Applying an orientation quaternion to the mesh position

The translation vector is added to the oriented position, and the result is used for rendering.

Final world position = Animation Translation + Oriented Position
The same operation is performed on the normal and used for lighting.

The normal is transformed and used for lighting.

The NME graph is accessible here:

For the playground, I render the mesh twice to fake some reflection. I first render a simple box and then hide it and show the shattered mesh.
I animate the time uniform to get the animation. Job’s done.

Afterword

Baking a simulation result to a texture that is easily usable and instantiable is incredibly powerful. It doesn’t take a lot of resources to add more dynamics and effects to your game.

This example was particularly suitable for highlighting this technique. But with some tweaks, it can also be applied to many forms of dynamic content:
— cloth/flag/cape simulation
— fluid simulation
— realistic wind on vegetation or trees

I can’t wait to see what your imagination will bring us!
I used a custom tool to produce the shattered mesh. It’s quite limited and far from a production-ready tool, and I don’t maintain it. If you feel like being the maintainer, contact me.

But you can also use production tools to precompute your simulation. SideFX Houdini and Maya can be used for that. Once you have the simulation data, the same baking-to-a-texture technique can be used.

Big thanks to Patrick for helping improve the overall look of the playground.



