This might be a new level of openness, even for us.

Today, the Babylon team would like to welcome you all to the very earliest stages of our development process. We’ve spotted an industry problem, and we’re thinking about what sort of technology might be used to address it. We’re still very early in the process — we’ve only just begun to write prototype code — so our ideas are far from finished and in some cases are barely even formed. But we’re already very excited by the possibilities, and so rather than waiting to talk about it until after we’ve sanded off the rough edges, we want to open up the conversation to all of you right now. So welcome, everybody! Welcome to the weird and amorphous world of Polymorph!

The Messy Middle

Polymorph is a family of proposed technologies intended to address the problem of the Messy Middle (as named by our friends at Target). The Messy Middle refers to the disorganized and disjointed current state of the content pipeline solution space. That’s probably not the clearest way I could describe that, though; let me explain.

Consumer-facing 3D technology — on the Web and elsewhere — is on the cusp of a renaissance. Advances in rendering technology and consumption experiences have captured the imaginations of independent developers, educators, and corporations alike. However, one common problem for creators is the difficulty of preparing 3D content to be included in an experience. For example, a manufacturing company might want to take their CAD models of their products and display them on their website; but CAD models aren’t designed for that, so the models will need to be converted into a runtime-friendly format (like glTF) before they can be used in that way. Many, many experience creators face this sort of problem, and almost everyone solves it by developing their own independent content pipelines for their own particular use cases. Lots of these solutions are trying to do fundamentally similar things, and often in similar ways; but because they are all developed independently, they often don’t work together and are forced to duplicate efforts and overlap in functionality. This ever-growing tangle of disjointed connective technologies bridging the gap between creation assets (CAD models, etc.) and consumption assets (glTF, etc.) is what we’re referring to as the Messy Middle.

So Alice, Bob, and Charlie are all trying to take creation assets and convert them into consumption assets. Alice and Bob need mesh decimation, Bob and Charlie need advanced texture compression, and Alice and Charlie need to convert parametric shapes to geometry. Wouldn’t it be great if they didn’t each have to do all the work independently? Wouldn’t it be nice if Bob could use Alice’s decimator, Charlie could use Bob’s texture compressor, and Alice could use Charlie’s geometry generation system? Instead of duplicating all the work and ending up with three disjoint and unrelated content pipelines, could there be a way for developers to create unconstrained, recombinant pipeline operations that could be shared and reused throughout the community?


The Babylon team began thinking about the Messy Middle problem this past summer. Most of our work so far has been limited to discussion and thought experiments, with a very small amount of prototyping. The result of this work is what we’re describing for you today: an extremely preliminary proposal for a family of technologies collectively known as Polymorph.

Polymorph is an ecosystem for the creation of content pipelines. Polymorph pipelines are assembled as a sequence of individual operations (Morphs) designed to be self-contained, recombinant, and reusable. In many ways, you can think of Polymorph as a node-based procedure creation system similar to Babylon’s Node Material Editor, but for content manipulation rather than material creation. In this way, Polymorph is intended to address the Messy Middle problem directly by providing an environment in which it’s easy for developers to (1) easily combine existing Morphs into content pipelines for new use cases and (2) create new Morphs that will be recombinant and compatible with existing Morphs in a predictable way.

Once again, at this point Polymorph is just a proposal, and a very preliminary one at that. If you’d like to join the conversation — where everything from definitions to implementations is still very much up for discussion — please join us on the forum! As preliminary as it is, though, we have thought quite a bit about Polymorph from a philosophical standpoint, and there are some things we think we know about it.

  • Content pipelines can be either stand-alone utilities or components of other programs, and we want the output of Polymorph to be as portable as possible, so we’ve chosen C++ as Polymorph’s primary implementation language.
  • We don’t want to constrain what kinds of operations can be performed, so we’ve avoided making assumptions about the types and usages of data that can be supported by Morphs.
  • Developers shouldn’t need specialized expertise in order to build pipelines using Polymorph, so we’ve made simplicity and ease-of-use primary considerations in our designs.

But these particulars are just a small subset of the broader conversation about what constitutes Polymorph. As a reminder, this conversation is going on right now, and we’d love for you to join it on the forum! The core of the conversation, however, has relatively little to do with prescriptive technicalities like the ones above because such things aren’t really what Polymorph is about. We believe that the heart of the Messy Middle problem lies in the disunion of the paradigms used to create content pipelines across the industry; correspondingly, we think that the best way to address the problem is to create a mechanism and environment in which these disparate paradigms can be unified according to community-driven, naturally evolving conventions. Sound tricky? Well, to some extent, it is. Welcome to…

The Abstraction Stack

The Messy Middle, in its most general form, is kind of difficult to describe, let alone solve. As we on the Babylon team discussed the ideas and systems now collectively known as Polymorph, it soon became clear that effectively tackling the Messy Middle problem would require an approach built on multiple layers of abstraction. To explore this, let’s start with an example of a specific content pipeline and step one-by-one through the abstraction layers: Instance, Application-specific Conventions, Domain-specific Conventions, Domain-agnostic Conventions, and Pipeline.

The abstraction stack represents a categorization of shared concepts grouped by how similar — or dissimilar — the Morphs which share them are.

Instance: Consider, as an example, a 3D file converter — a piece of software that takes one 3D file (for example, a CAD model) and outputs a 3D file of another type (like a glTF). This is a content pipeline because it can be used to turn a creation asset into a consumption asset. But in this case, there is no abstraction at all; this is an instance, and nothing about it is general.

Application-specific Conventions: Now consider the solution space of 3D file converters — the set of all software solutions that do nothing except convert one type of 3D file into another. This is a layer of abstraction on top of the original idea of the converter because there are unknowns, but they are pretty tightly constrained. To produce any 3D file converter, all that’s necessary is an importer for the desired input format, an exporter for the desired output format, and application-specific conventions that enable the importer and exporter to communicate with each other.

Domain-specific Conventions: Now let’s go up another layer and consider 3D transformations — the set of all software solutions that take 3D data and change it in some unspecified way. This layer of abstraction loses another level of constraint because we can no longer make assumptions about exactly what a given transformation will do; we just know, in a general sense, the kinds of things it might be doing. For example, because we know these operations pertain to 3D, we can expect different Morphs (operations within a Polymorph pipeline) to have the same understandings of common 3D data types like meshes, textures, splines, etc. These shared understandings collectively amount to domain-specific conventions that, when followed, allow Morphs that all pertain to 3D to be concatenated to create arbitrary 3D transformations.

Domain-agnostic Conventions: We ascend one more layer of abstraction to consider transformations — the set of all software solutions that take one form of data and change it in any way, not necessarily pertaining to 3D. At this point, what we’re considering has become so abstract that we’re no longer able to make any specific statement about what’s actually being done except that we’re still (within the context of Polymorph) assembling recombinant Morphs in order to do something to some data that we have. Thus, though we can no longer say exactly what it is we’re doing, we still know that we want to easily be able to do one thing after another and allow the operations to communicate with each other. Such assemblage and communication can be facilitated by domain-agnostic conventions — such as build/linking practices and other general coding practices — in order to make it easier to create Morphs that can be reliably recombined into transformations.

Pipeline: This brings us to our final layer of abstraction: compatibility. By now, we are no longer interested in whether anything is being transformed; our sole concern at this point is whether a sequence of Morphs is compatible such that each, by the time it is run, will have everything it needs to run successfully. This notion of compatibility — whether or not a given pipeline of Morphs will run successfully — is the foundation upon which every other layer of abstraction, and consequently the whole idea of Polymorph, is built.

I apologize if this somewhat longwinded journey through the abstraction stack seems laborious. I include it here because it roughly mirrors the discussions we had while developing our understanding of what Polymorph is. This particular five-layer breakdown is quite new, and like everything else here it’s just a proposal. However, I believe it’s valuable because it provides a way to view Polymorph holistically as a coherent stack of abstractions.

  • Pipeline: the highest level of abstraction, which defines what it means to create compatible assemblages of Morphs.
  • Domain-agnostic conventions: a collection of practices to allow generic pipelines to be constructed easily.
  • Domain-specific conventions: a collection of practices to facilitate the creation of Morphs that will reliably be compatible with other Morphs from within the same usage domain.
  • Application-specific conventions: particular practices and usage details required to build content pipelines for a specific purpose.
  • Instance: an actual assemblage of Morphs into a working content pipeline.

But there is an even simpler (though less precise) way to think of Polymorph. My friend and colleague Jason, in a spectacular display of dad-logic, came up with the consolidated metaphor of a playground for Morphs. In this metaphor, the highest layer of abstraction (the pipeline) is the playground itself: it’s the environment in which all the Morphs can play together. The middle three layers of abstraction (everything pertaining to conventions) are the playground rules: they describe what it means to be a good Morph, and following these rules will allow many Morphs to play together nicely. Finally, the lowest layer of abstraction (instance) is a playground game: it’s a specific set of Morphs following the rules in order to play together.

At first I thought that metaphor was bizarre. Then I thought it was too imprecise. Then I realized that it actually helped me to solidify my own understanding, too, and that’s what makes it brilliant.


I want to emphasize how important — quintessential, even — conventions are to our current vision for Polymorph. Out of five layers in the abstraction stack, three are composed entirely of conventions, and the impact of these conventions on Polymorph cannot be overstated. In fact, I think it’s fair to say that the success or failure of Polymorph as an initiative depends overwhelmingly on community-driven conventions.

This is because Polymorph is not a single piece of technology, but an ecosystem in which certain technologies — namely content pipelines — can be easily created. The promise of Polymorph is twofold: (1) that you will be able to take existing Morphs and assemble them readily into a pipeline, and (2) that you’ll be able to create new Morphs and easily get them to work with the existing ecosystem. To make this twofold promise a reality, Morphs which logically should be compatible must be compatible; if they aren’t, then using Polymorph will become confusing, inefficient, and frustrating, and the project will surely fail. It is therefore crucial that Morphs be predictably and reliably compatible; and given that, it may seem surprising that there is no direct mechanism whatsoever to enforce compatibility among Morphs. Instead, the fulfillment of Polymorph’s fundamental twofold promise is entrusted entirely to conventions.

The decision to rely so heavily on conventions was not made lightly. We talked about it a lot, and even now that we’re confident it’s the correct approach, it’s still a little unnerving. There’s a lot of risk associated with having mission-critical, make-or-break decisions be governed by convention rather than enforced by technology, and a typical reflex for developers concerned about risk is to grasp for more control, not less. For me in particular, as a C++ developer, risks of this kind automatically make me nervous because they open the door for abuse and place the whole future of the platform unreservedly into the hands of the community.

But the community can handle it. In fact, nobody but the community can handle it. The more we considered what we could do to enforce “proper usage” of Polymorph, the more we realized that neither we nor any other single group were qualified to define what “proper usage” means. If a project is truly open source, then it belongs to the community, and no individual development group — not even the project’s originators — will add value by being arbitrarily prescriptive about it.

Furthermore, as scary as it may seem to C++ developers like me, open-source communities like Node.js — and even loosey-goosey languages like JavaScript itself — have proven that communities based on convention rather than enforcement can not just succeed, but thrive. The Node community in particular is notable for being a collective of relatively unconstrained developers bound together only by a small set of shared tools and a desire to collaboratively create something awesome. This non-prescriptive approach to creating an ecosystem has the added benefit of making the ecosystem futureproof by default. In creating Polymorph, we didn’t want to accidentally lock the system away from some future possible usage; and if we had tried to create some pre-ordained mechanism to enforce Morph conformity, we would have done exactly that. But if we simply leave the system open-ended and focus on making it easy for the community to decide what’s best for it, then Polymorph will never be held back by the limitations of the people who designed it. Convention is not foolproof, and convention is not as reliable as pre-definition; but convention is flexible, and putting it in the hands of the community will ensure that, like Node.js, it evolves to become what the users really need from it.

And this, among other places, is where we need help. This is where we need you. See, in this convention-centric approach, Polymorph’s success will be built on the successes — and collaborations — of the members of the community. Conventions will emerge naturally from experimentation, discussion, and consensus. They will make sense because they will be defined by the very people they affect; and when they stop making sense, they’ll be replaced by new conventions discovered, discussed, and agreed upon by the community members who understand the use case best. So if that sounds like a great community for you — if you have a usage you’d like to solve, or a convention you’d like to establish, or even just a relevant topic you’d like to discuss — then welcome aboard! Come join the conversation! The Messy Middle won’t clean up itself, and we’ve got a long way to go and a lot of work to do if we want to make Polymorph a success. Let’s come together and build something awesome!

Justin Murray — Babylon.js Team

in collaboration with Jason Carter — Babylon.js Team

Appendix: pipeline.h

In a surprisingly apropos analogue for our work on Polymorph so far, this has been an awful lot of talking, hasn’t it? I hope it’s at least been interesting, and if nothing else it’s been worthwhile to summarize the idea and present it to the community at large. But even worthwhile efforts get exhausting at some point, and I did want to talk about the little bit of Polymorph prototyping we’ve done thus far. At long last, then, here we are. Ready to look at some code?

Of the many components of Polymorph, the only thing we’ve coded thus far is an implementation of the pipeline abstraction layer: pipeline.h. (It’s actually only one of two planned prototype implementations of the pipeline; for more information about the other, let’s talk on the forum.) This pipeline implementation is built using template metaprogramming to formalize the pipeline as a compile-time concept. While an in-depth explanation of this implementation is well beyond the scope of this blog post (if you’re interested in that, again, please let me know on the forum), there are a few topics that I think are worth discussing here, most of which can be seen in the header’s provided example pipeline.

#include "pipeline.h"
#include <iostream>
#include <vector>

PIPELINE_TYPE(Vector, std::vector<int>);

PIPELINE_CONTEXT(Initialize, IN_CONTRACT(), OUT_CONTRACT(Vector));
PIPELINE_CONTEXT(Append, IN_CONTRACT(Vector), OUT_CONTRACT(Vector));
PIPELINE_CONTEXT(Print, IN_CONTRACT(Vector), OUT_CONTRACT());

void Run(Initialize& context)
{
    context.SetVector({});
}

void Run(Append& context)
{
    auto& vector = context.ModifyVector();
    vector.push_back(static_cast<int>(vector.size()));
}

void Run(Print& context)
{
    std::cout << "Vector ends with " << context.GetVector().back() << std::endl;
}

int main()
{
    auto pipeline = Pipeline::
        First<Initialize>([](auto& context) { Run(context); })
        ->Then<Append>([](auto& context) { Run(context); })
        ->Then<Print>([](auto& context) { Run(context); })
        ->Then<Append>([](auto& context) { Run(context); })
        ->Then<Print>([](auto& context) { Run(context); });
    auto data = pipeline->CreateManyMap();
    pipeline->Run(data);
    return 0;
}

This is, of course, an extremely simplistic toy example of how to use this implementation of pipeline.h, but as such it provides a concise overview of many of the most important concepts and usage patterns.

  • PIPELINE_TYPE: This macro is used to declare a pipeline type that will be used in the various stages of the pipeline. The type declared here is simply a vector of integers, but virtually any name and type can be declared in this way: meshes, vertices, textures, etc. The consensus of pipeline types — what is a vertex, what is a mesh, etc. — is one example of something that should probably be governed by domain-specific conventions.
  • PIPELINE_CONTEXT: This macro declares a context that will be used within pipeline operations (Morphs) to get, set, and modify data. What data is available is controlled by the context’s contracts. Every pipeline context — which is analogous to a Morph — has two contracts respectively specified using the IN_CONTRACT and OUT_CONTRACT macros. The in-contract declares what types are needed (i.e., must already exist within the pipeline) in order to complete the intended operation. Similarly, the out-contract declares what types are fulfilled (i.e., are guaranteed to exist after the operation has completed) by the intended operation.
  • Run(…): This is an example of a domain-agnostic convention. It’s not enforced that the actual operation associated with a given pipeline context be specified in a Run(…) function, but following this practice is convenient and makes the code easier to understand. Note, by the way, that each Run(…) method is given access to the state of the pipeline by means of named accessors — GetVector(), ModifyVector(), etc. — available on the context. These accessors are inherent, type-safe features of the context which are automatically added according to the context’s contract. If a pipeline type called Foo is part of the context’s in-contract, that context will have a GetFoo() function; and if Foo is part of the context’s out-contract, that context will have a SetFoo(…) function; and if Foo is in both contracts, the context will have an additional ModifyFoo() function to allow in-place modification of the data.
  • auto pipeline: This is where the actual pipeline itself is defined, and despite the fact that this is a toy example, this is actually a fairly realistic depiction of what a pipeline definition might look like. Pipelines are constructed as a sequence of operations defined by contexts; each line in the pipeline’s definition can be considered a separate Morph. When a pipeline is declared in this way, the pipeline itself checks at compile time to make sure that every operation (every context, every Morph) is possible given the state of the pipeline at the time of its execution. For example, because the Append operation has Vector as part of its in-contract, the pipeline type will check to make sure that some operation that occurs prior to Append has Vector as part of its out-contract, thereby guaranteeing that Vector is available for Append when it needs it. More explicitly, the pipeline type guarantees that every operation’s in-contract is satisfied at the point in the pipeline when that operation is to be run; any pipeline for which this is not true is invalid and will refuse to compile. In this way, if a given Morph declares that it needs a named, typed piece of data, the pipeline guarantees that some other Morph somewhere upstream in the pipeline promised to provide that data, with the correct type and the correct name.
  • pipeline->CreateManyMap(): Under the hood, data in the pipeline’s state (which is made available to the different operations through the contexts) is stored in a fully type-safe structure called a manymap. That name is an implementation detail which probably shouldn’t be exposed, though, so it’s rather likely that this method will be renamed fairly soon. Ultimately, all that’s happening here is the creation of the data structure in which all the pipeline’s state will be stored while the pipeline is being executed.
  • pipeline->Run(data): Interestingly, up to this point in the execution of the program, almost nothing has actually been done. Everything leading up to this point has been defining what it means to be this pipeline and creating the structure in which the pipeline’s state will be stored. This, however, is the call that actually makes things happen. After this call, the program will execute an Initialize operation, then an Append, then a Print, then another Append, then another Print. In short, this is the line that will run the whole pipeline.

That…was probably a lot of information to take in, but it really was just the high points. Of one possible implementation. Of one abstraction layer. Perhaps I haven’t stated it explicitly yet, but the Polymorph initiative is huge.

But I hope you don’t find that off-putting. I hope it isn’t overwhelming or exhausting. Instead, I hope I’ve been able to communicate just how excited we are by the possibilities that Polymorph represents. Yes, it’s definitely massive — we’ve spent more than a few conversations just sitting back and gawking at the magnitude of it all — but that just makes us even more eager to explore the potential that such an enormous undertaking could unlock. I hope you’re as excited by all this as we are, and if so, then I’ll say it once again: welcome aboard!

Babylon.js: Powerful, Beautiful, Simple, Open — Web-Based 3D At Its Best.