Infinite Grassland: Ray Traced Rare Semi-Uniform Entity Manifolds for Rendering Countless Discreta


Rendering vegetation is hard. (Disclaimer: I am not an expert and do not actually know if rendering vegetation is hard.) Rendering a single solid entity, especially a fairly simple one, is straightforward: you just make a 3D model of it, then render it. Similarly, rendering assorted continua is, though perhaps not straightforward, quite well-understood: path tracing and screen-space volumetric techniques both explore ways to render arbitrary continua in a performance-capped manner. But vegetation — for example, an endless field of grass — is a bit of a special case. An infinite grassland is not a continuum; it is a space filled with individually-discernible discreta, which existing screen-space volumetric techniques are ill-suited to represent. (Disclaimer: I am not an expert and do not actually know what existing techniques are ill-suited to represent.) Consequently, grasslands are nowadays most commonly rendered as individual entities, taking advantage of optimizations like instancing and geometry shaders to “scale up” the efficiency of rendering them. Unfortunately, no matter how efficiently we render them, each individual entity inevitably incurs a rendering cost, however small; and because of this, in principle it becomes impossible to render a truly infinite field of individual grass discreta, and in practice rendering a reasonable area and density of grass this way quickly becomes an LOD balancing act trading off performance and fidelity. (Disclaimer: I am not an expert and really have no idea what I’m talking about.)

In this blog post, we’ll discuss a screen-space volumetric technique for rendering discreta by ray tracing into a rare semi-uniform entity manifold. This approach may already be well-known (I haven’t found anything relevant by searching, possibly because I named the technique something insane), and it may turn out this isn’t practical for grasses (I’m not an expert and simply don’t know). I can only say for certain that I personally have never encountered a technique quite like this before, and if nothing else I think it’s interesting. Let’s jump in and talk about how we can use Babylon on the Web to render that.


What Is That?

That — the proof-of-concept demo linked and screenshotted above — is a Babylon.js shader material showing hundreds of thousands of individual grass “cards” rendering in a browser. Depending on what device you’re using, this may or may not work very well; it’s just a proof of concept, so please forgive the rough edges and visual artifacts galore. The interesting part about this demo, however, is that it actually doesn’t matter how many grass cards are being rendered. This shader material is capable of rendering a functionally infinite number of grass cards at no per-card render cost because the cards themselves have no geometry. The only geometry in that scene is a single cube with the grass material on it and a single dark green plane under the grass; the discreta themselves — the individual grass cards that you can see if you move the camera around and look down “into” the grass cube — are a volumetric screen-space effect created per-pixel with no per-entity cost whatsoever. That is not to say that rendering this grass is computationally free, but that cost is not in any way affected by the amount of grass in the scene; the cost exclusively depends on the complexity of the grass material shader and the number of pixels to which it is applied.

An earlier demo where you can more clearly see the individual discreta.

So if the discreta being rendered by this material — grass-textured “cards” in the first demo, individual blades in the second — aren’t geometry, then what are they? As can be seen if you look down and move the camera around in the demos, they are clearly 3D elements that hold position, animate, parallax, etc. exactly as you would expect for typical mesh-based entities, yet they have no vertices, no normals, no indices… As far as anything that happens before their pixel shader is concerned, these blades and “cards” do not exist at all. This is possible because this effect is rendered not by typical instance rendering techniques or by path tracing into a continuum, but by ray tracing against simple mathematical surfaces laid out in a rare semi-uniform entity manifold.

What Is a Rare Semi-Uniform Entity Manifold?

Rare semi-uniform entity manifold is the kind of concept name you get when a graphics hobbyist stumbles into an arena of higher math that he never formally studied in any way — in this case, topology. In topological terms, a manifold is a space with local, but not necessarily global, resemblance to a Euclidean space such as the most common presentation of 3D. Though we rarely use the term, manifolds appear commonly in computer graphics and animation: any bone-based animation will produce one or more manifolds which dictate the vertex positions of the associated skinned mesh, for example. For my purposes here, I have sort of Frankensteined this with a concept related to a vector field to arrive at what I’m calling an entity manifold: a locally-Euclidean space where each point in the space is associated with an entity. The remaining two terms in the concept name follow closely from this: semi-uniform suggests that the entities within the space are all similar but not necessarily the same; and rare, a more linguistically terse alternative to nowhere dense, clarifies that we’re not interested in the full continuum of the manifold but only in a sparse and discontinuous subset, the entities associated with which will constitute our discreta.

To visualize this, consider ℝ², the Cartesian plane. This plane is a continuum — every subspace of the plane is filled with an infinite density of points — but it contains an easily-recognizable rare subspace: ℤ², which is the set of all points (x, y) in the Cartesian plane such that x and y are both integers. ℤ² is thus among the simplest examples of what I’m calling a rare manifold; and if we declare that every point in ℤ² is uniquely associated with a semi-uniform entity via the function g(v) = E for all v in ℤ², then the pairing of g and ℤ² constitutes a rare semi-uniform entity manifold. Furthermore, it is easy — and computationally fast — to find nearby entities in this manifold given any point in ℝ² because another way to express ℤ² itself is as the image of the function f(x, y) = (round(x), round(y)). Thus, from any point in ℝ², it is computationally simple to find the manifold entities which are most closely related in terms of proximity. This property — that we can easily go from a Euclidean space to a related rare semi-uniform entity manifold — will serve as the basis for our screen-space volumetric technique for rendering countless discreta.
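To make this concrete, here's a tiny sketch (in TypeScript, with names of my own invention) of the ℝ²-to-ℤ² lookup just described: the nearest entity origin to any point is component-wise rounding, and a small neighborhood of rounded points gives the most proximate entities.

```typescript
type Vec2 = [number, number];

// The nearest ℤ² origin to a point in ℝ²: the image of f(x, y) = (round(x), round(y)).
function nearestOrigin([x, y]: Vec2): Vec2 {
  return [Math.round(x), Math.round(y)];
}

// All ℤ² origins within a given Chebyshev radius of a point — e.g. the 3×3
// neighborhood for radius 1. This is the cheap "most proximate entities" query.
function nearbyOrigins(p: Vec2, radius: number): Vec2[] {
  const [cx, cy] = nearestOrigin(p);
  const origins: Vec2[] = [];
  for (let dx = -radius; dx <= radius; dx++) {
    for (let dy = -radius; dy <= radius; dy++) {
      origins.push([cx + dx, cy + dy]);
    }
  }
  return origins;
}
```

Note that both functions are pure and constant-time per query, which is exactly the property we'll lean on inside a pixel shader.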

So What Exactly Does This Help With?

Rare semi-uniform entity manifolds can help us distribute discreta through a space, and given any point in the space it is computationally easy to find which discreta are most nearby. To understand why this is valuable, however, it’s important to try to think about when it’s valuable. Grass is the quintessential example, so let’s evaluate some of the considerations one might have when rendering a field of grass. Typically, no one who’s rendering grass actually cares about any particular blade or “card” of grass; it doesn’t really matter where any particular one is because they’re intended to be viewed collectively, not individually. Put another way, the fact that grass is composed of discrete blade and “card” entities is visually necessary but logically irrelevant because the individual constituents are never the goal. Nobody wants to render a grass; the goal is always grass, and the less cost that can be directed toward any individual piece of grass, the better.

With this in mind, conventional instance-based grass “cards” are actually over-described: they specify more information and provide more control than we typically need. This excess control and specificity comes at a cost. Every grass entity in this conventional approach must have at least a small amount of information — position, rotation, etc. — provided for it, meaning the cost of this approach must always scale with the number of entities being rendered. However, if we can forego specific manual control over that information and instead create a mechanism where position, etc. can be derived rather than provided, we can eliminate the per-entity cost of rendering entirely such that the cost of the effect is exclusively determined by the per-pixel weight of the shader and the number of pixels to which it’s being applied. This is what allows the demos referenced above to render effectively infinite blades and “cards” of grass at screen-constant cost: entity-specific attributes like position, etc. are derived, not provided, using a rare semi-uniform entity manifold to allow consistent attribute derivation across all pixels.
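As a hedged sketch of what "derived, not provided" means in practice: every attribute of the entity at a given ℤ² origin can be a pure function of that origin, so every pixel in every frame derives identical values with no per-entity storage whatsoever. The attribute names and formulas below are illustrative stand-ins of my own, in the spirit of the demos' cosine-based derivation.

```typescript
// Per-entity attributes derived purely from the entity's ℤ² origin.
interface GrassAttributes {
  offsetX: number;  // small jitter away from the lattice origin
  offsetZ: number;
  rotation: number; // radians
  height: number;
}

// Deterministic derivation: same origin in, same attributes out, on every
// pixel and every frame. The constants are arbitrary "decorrelating" values.
function deriveAttributes(ox: number, oy: number): GrassAttributes {
  return {
    offsetX: 0.3 * Math.cos(ox * 12.9898 + oy * 78.233),
    offsetZ: 0.3 * Math.cos(ox * 39.346 + oy * 11.135),
    rotation: Math.PI * Math.cos(ox * 4.898 + oy * 7.23),
    height: 0.8 + 0.2 * Math.cos(ox * 3.17 + oy * 5.71),
  };
}
```

A control texture sampled at the origin would serve the same role with more artistic control; the key point is only that the function's input is the origin, not per-instance data.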

How Do We Use It?

With the goal (“Infinite Grassland”), central concept (“Rare Semi-Uniform Entity Manifolds”), and underlying motivation (“for Rendering Countless Discreta”) established, completing the usage picture brings us to the last as-yet-undiscussed words in the title: “Ray Traced.” So here, in brief, is how we implement our entity manifold renderer: we implement a tiny optimized ray tracer inside the grass material’s pixel shader.

Modern advances in half-baked idea rendering have finally allowed us to bring traditional illegible whiteboard scribbles to a whole new, Internet-savvy generation.
  • By the time the pixel shader is ready to begin running, the vertex shader has computed the positional information the pixel needs: view direction, world space position, etc. This, however, is only starting information: the world space position provided to the pixel by the vertex shader is the geometry world space position. There is likely no grass directly at this position; however, because the geometry is effectively a hull which contains the grass, we can use this position as the origin for the ray we’re going to trace along the view direction into the entity manifold. (Note: if the object containing the manifold is itself displaced or warped, additional operations must be done using tools like object and UV space to transform the starting information from world-space into manifold space. I haven’t actually done that in practice, however, and the theory is beyond the scope of this overview.) So far, the approach is identical to the technique my friend Cedric described in his path tracing blog post, which was actually what inspired me to start investigating these sorts of rendering techniques in the first place.
Note that the “corridor” actually begins before the first point on the projected ray. It is important that the corridor contain all potentially relevant entity origins to prevent visible clipping-like artifacts.
  • We are now ready to begin ray tracing into the entity manifold. Our goal, as ever with pixel shaders, is to figure out what color our pixel should be, and to do that, we need to figure out what entities — for grass, blades or “cards” — our ray intersects and consequently gets its color from. Recall that our manifold contains infinite entities, so we cannot possibly check whether our ray intersects them all. However, as described above, we’ve chosen a rare entity manifold where it is easy, using simple computations, to find all the entities which are most proximate to any given point in the domain. For grass, this is especially easy: we project our ray’s origin down to ℝ², follow it along the projected view direction, and compute a “corridor” of all the entities which are near enough that they might intersect with our ray. The width and length of the “corridor” can be tuned to suit the scenario. For example, if entities are wide relative to the density of ℤ², a broader “corridor” might be needed because of the likelihood that an entity further to the “right” or “left” may intersect the view ray. Similarly, if the entities are highly porous or otherwise include a lot of transparency, a longer “corridor” might be needed because of the likelihood that the nearest several entities are all transparent where the view ray intersects them.
Though only one is depicted here, every ℤ² origin has an entity of its own for the ray to trace against.
  • We now know the parameters of our view ray — origin and direction — as well as the ℤ² (or other rare subspace) points associated with the entities the ray needs to be traced against. The next step is to derive the equations of the simple math surfaces which represent each entity. The demos referenced above use a slightly funky custom paraboloid, but any surface can be used; the only requirement is that it should be easy to intersect a line with the surface in question. Harking back to the semi-uniform nature of our entity manifold, it is expected that all entities should be mathematically similar (for visual and shader performance reasons), but their numeric parameters can differ freely — origin offset, rotation, curvature, etc., depending on the math surface being used. Note that origin offset is usable, though it should be approached with caution: the “corridor” method described above makes assumptions about the visible influence of any entity being more or less local to the associated rare subspace origin, so if entities can “wander away” from their origins, they may get too close to the edges of “corridors” for certain pixels, which will produce visible clipping artifacts. Deriving the parameters that will allow the various entities to differ can be done in any number of ways, the best of which probably involve using the entity origin to sample from either a provided control texture or a random noise space. You can get tolerable results even by doing something silly, though; for my proof-of-concept demos, I just took the cosines of a bunch of things and called it a day.
The semi-uniform nature of the elements within the manifolds means that all intersections can be computed identically, but the results of the actual intersection will vary according to the parameters of the entities.
  • Once we’ve derived the equations of all the relevant entities, the ray tracing itself is simply a matter of computing all the intersections, finding the entity color at the point of intersection (converting from entity space to UV space, calculating lighting, etc.), then iterating through the intersections to compute what color the pixel should actually be. These are standard ray tracing operations and do not need any special customization for this use case. Once all the relevant intersection points have been calculated/traversed/colored/lit/blended, the result we are left with is the color of the grass seen “through” our ray tracing pixel.
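The steps above can be sketched on the CPU roughly as follows (TypeScript; all names are mine, and I use a small sphere per origin as a stand-in for the demos' custom paraboloid — any cheaply-intersectable surface slots in the same way): march a "corridor" of lattice cells along the projected ray, intersect the view ray against the surface at each nearby origin, and keep the nearest hit.

```typescript
type Vec3 = [number, number, number];

const sub = (a: Vec3, b: Vec3): Vec3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const dot = (a: Vec3, b: Vec3) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

// Ray vs. sphere: returns the nearest positive ray parameter t, or null on a miss.
// rd is assumed normalized.
function raySphere(ro: Vec3, rd: Vec3, center: Vec3, radius: number): number | null {
  const oc = sub(ro, center);
  const b = dot(oc, rd);
  const c = dot(oc, oc) - radius * radius;
  const disc = b * b - c;
  if (disc < 0) return null;
  const t = -b - Math.sqrt(disc);
  return t > 0 ? t : null;
}

// Walk the ray's XZ projection cell by cell, testing a 3-wide "corridor" of
// ℤ² origins around each sample, and return the nearest intersection.
function traceCorridor(ro: Vec3, rd: Vec3, maxSteps = 32): number | null {
  let best: number | null = null;
  for (let step = 0; step < maxSteps; step++) {
    // Start one unit *before* the ray origin so the corridor contains all
    // potentially relevant entities (see the clipping note above).
    const px = ro[0] + rd[0] * (step - 1);
    const pz = ro[2] + rd[2] * (step - 1);
    for (let dx = -1; dx <= 1; dx++) {
      for (let dz = -1; dz <= 1; dz++) {
        const center: Vec3 = [Math.round(px) + dx, 0.5, Math.round(pz) + dz];
        const t = raySphere(ro, rd, center, 0.3);
        if (t !== null && (best === null || t < best)) best = t;
      }
    }
  }
  return best;
}
```

The real shader version would replace the sphere with the paraboloid, convert each hit from entity space to UV space for texturing, and blend multiple hits front to back for transparency — but the corridor-and-intersect skeleton is the same.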

What Next?

To be perfectly honest… Implementation-wise at least, this is as far as I’ve taken it. As mentioned above, I’ve personally never come across a technique quite like this before, but I am far from a rendering expert and don’t know whether techniques like this are already well known (or perhaps discovered and discarded ages ago because of shortcomings I haven’t even thought of yet). If you have thoughts on this — if you know of an existing usage of this sort of thing, or if you know of a situation where you think you’d like to use it — please ping me on Twitter or @ me on the Babylon.js forum! I’d love to learn more about what, if any, place this sort of technique has or may have in the wider world of graphics. Until I learn more, though, I can only speculate about what, when time allows, I or others might like to experiment with next.

I’d love to go back to my proof-of-concept demos and try to drive at least one of them to a high visual fidelity. There are only a few artifacts I think I’d need to tackle: aliasing and depth problems which could probably be mitigated with some manual mipping, and a “corridor”-related clipping artifact which I suspect could be resolved by making the “corridor”-computation algorithm a little less insane. I’d also like to try getting rid of all those periodic cosines in favor of control textures, which should also help to deal with the “over-rendering” problem where grass can render outside the grass volume when the boundary is viewed from the wrong side.

I’d like to try it on things other than grass, too. I think flight-simulator-style forests could probably be done the same way, and perhaps even higher-dimensional objects like clouds. I’d like to actually exercise the “manifold” aspect of entity manifolds, too, by seeing what happens if I bend, and subsequently attempt to “unbend,” the geometry containing such a manifold. I’d also like to experiment with narrower discreta, which I have a weird suspicion might be usable to create things like hair and fur. I also have an idea about how to perhaps trace deep into a rare manifold without the use of a “corridor” by trying to directly calculate, within error boundaries, where a given line first comes arbitrarily close to a point in ℤ². I don’t know how I’d be able to do that without enforcing uniformity on the discreta, which I worry might make the result too obviously regular. Maybe I could try a hex grid instead to hide the “square-ish” nature of using ℤ² directly?

I don’t even know what I’m talking about anymore. Ooh, but do you know what else might be cool?

(Disclaimer: I am not an expert — on any of this — and consequently I have no credible information on what else might be cool.)


Justin Murray — Babylon.js Team




