There and Back Again: A Tale of Backwards Compatibility in Babylon.js

10 min read · Aug 10


Babylon.js is almost 10 years old. Throughout this time, there have been a few guidelines that have guided each contributor when implementing new features for the framework or fixing and improving existing ones — it needs to be simple, it needs to be performant, and it most certainly needs to be backwards compatible. I doubt anyone would argue about the first two — simple? Yes, please! Why wouldn’t it be simple? Performant? Of course! Who doesn’t want their web or native experience to be highly performant? But backwards compatible? Why is that even a thing?

Well, it is a thing, for sure. And an important one.

First, what do we define as backward compatibility? Simply put — Babylon’s public API should always be future-safe. If we have committed to a certain API in a stable release of the framework, this public API will forever stay in the framework. And in the same place. That means that code that was written for Babylon 3 will still work with Babylon 6. We do deprecate functions (more on that a little later), thus pointing out that there is probably a better way of doing things, but we won’t remove it from the framework.

Throughout the years, I've had my share of criticism towards maintaining backward compatibility. My biggest concern was what I believe is the biggest flaw inherent in back-compat — it hinders innovation and improvement. Meaning — if the functionality is already there and we can’t change the public API, how can we fix what needs to be fixed? But over the years I have been proven wrong more times than I care to admit.

So let’s dive into the why, how, and when we actually present breaking changes, and any other questions that might be still open concerning this wonderful topic.

Side note — I am not talking about supporting legacy browsers and older execution contexts. This is a different topic. Interesting one, but a different one altogether. We do support those as well 😊


As software developers, we constantly update the dependencies we are working with. These frameworks are sometimes at the core of our applications, and they are constantly evolving, adding new features, improving existing ones, and fixing bugs. However, a lot of the time, we are a little afraid to do the upgrade. The main fear (or my main fear?) is: how much work will it involve to upgrade our dependencies? SemVer standards try to solve this for us — patch and minor versions don’t break anything. This is (probably?) why npm adds the wonderful caret to the version when you install a dependency — we can safely assume (though not always…) that installing the next minor version will still work. But then comes a new major version. According to SemVer, a major version should be used “when you make incompatible API changes.” Meaning — this might involve a slight inconvenience of porting your code. It might also involve a whole day of reading documentation, understanding the changes, implementing them, realizing they lack compatibility with other dependencies, upgrading node, giving up, drinking some water, realizing you already started and need to continue, and, in general, losing a lot of hair.

But wait, you say naively, why are you upgrading to a new major version? Just stick with your current version and upgrade the minor version. Don’t touch a running system! But we all know better. Major versions don’t update forever. Security updates and bug fixes are sometimes implemented only for a new major version, and even major versions that are maintained over a long period of time have an expiry date. We can delay it as much as we want, but eventually, we HAVE to install the new major version.

Try to remember — what was your last positive experience upgrading a major version of a framework? Remember? Nice. This framework probably had no breaking changes. Now try to remember the last time you wanted to quit your job because of some dependency upgrade. This is what I am talking about.

Now, wouldn’t it be nice if the framework could save my time and NOT break anything between versions? This is where Babylon’s backwards compatibility kicks in. Upgrading Babylon to a new major version should (more on “should” vs. “must” later) take 3 minutes — update your package version, install the new dependencies, build, and — that’s it! Utopia. Babylon tries its best to provide a seamless experience when updating to a new version. It should always work.


How do you maintain backward compatibility while continually improving? First, we exercise great caution when introducing new public APIs. For obvious reasons, we want to ensure that the added functionality is future-proof. This task is often far from simple. However, over the years, we’ve honed our ability to prioritize certain aspects to guarantee the feature’s extensibility. New functions go through a public review process to confirm their “future-safe” status.

For instance, we strive to employ objects for configuration parameters. Why, you ask? It allows us to introduce new parameters without disrupting the function’s signature and maintains a cleaner API. Compare the (now deprecated) Mesh.CreateSphere with the independent CreateSphere function:

  • Mesh.CreateSphere has several configuration parameters. Adding new ones entails extending the function’s signature. However, this would alter the API. The scene is no longer the last passed variable; the parameter must be optional, and the function signature would eventually become bewildering. Check out ExtrudeShapeCustom to grasp the idea.
  • The independent CreateSphere function (see the Mesh documentation on the Babylon.js site) boasts a much more organized structure. Observe the wealth of new parameters added to the configuration object! The scene is now the last variable, while the name is the first, aligning with the rest of the CreateXXX APIs.

This approach allows us to maintain backward compatibility — Mesh.CreateSphere serves as a proxy function to the new CreateSphere function. It’s not implemented twice. Users of the old API are welcome to continue its use; it just won’t see further extensions. Yet, it will always work.
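To make the proxy idea concrete, here is a minimal sketch of the pattern (invented names and parameters, not Babylon's actual implementation): the old positional signature survives forever as a thin wrapper over the new options-object function, so new parameters only ever touch the options interface.

```typescript
interface SphereOptions {
  segments?: number;
  diameter?: number;
  // New parameters can be added here without breaking any caller.
  diameterX?: number;
}

// New-style API: name first, options object, context last.
function CreateSphere(name: string, options: SphereOptions = {}, scene: object | null = null) {
  const diameter = options.diameter ?? 1;
  return { name, segments: options.segments ?? 32, diameterX: options.diameterX ?? diameter, scene };
}

// Old-style API, kept forever as a deprecated proxy. Adding diameterX to the
// new API required no change to this signature.
/** @deprecated use CreateSphere instead */
function LegacyCreateSphere(name: string, segments: number, diameter: number, scene: object | null = null) {
  return CreateSphere(name, { segments, diameter }, scene);
}
```

The key design point: the old function is not implemented twice — it simply forwards, so both entry points stay behaviorally identical forever.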

When we encounter an old API that we can’t extend, we create a V2 for that API. A prime example is our new physics architecture. The old one, crafted nearly six years ago, fell short for the new and remarkable Havok engine. We designed a new API, learning from the old and enhancing it. The old API remains available. You can still use it! However, we recommend transitioning to the new one.
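The "V2 alongside V1" approach can be sketched like this (invented names, not Babylon's real physics API): both classes are exported side by side, the old one tagged deprecated but fully functional, the new one applying lessons learned — here, taking a vector-like object instead of loose numbers so the signature never needs to grow.

```typescript
/** @deprecated superseded by PhysicsEngineV2, but never removed */
class PhysicsEngineV1 {
  gravity: number[] = [0, -9.81, 0];
  // Positional numbers: adding a 4th component later would break the signature.
  setGravity(x: number, y: number, z: number): void {
    this.gravity = [x, y, z];
  }
}

class PhysicsEngineV2 {
  gravity = { x: 0, y: -9.81, z: 0 };
  // Lesson learned from V1: pass a structured object, extensible forever.
  setGravity(g: { x: number; y: number; z: number }): void {
    this.gravity = g;
  }
}
```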

On rare occasions, we do alter a function’s signature, but we take care to ensure it won’t break if people still use the old parameter order. A fascinating example is the camera’s attachControl function. The implementation (see arcRotateCamera.ts in the Babylon.js repository) is even more intriguing — note our handling of arguments to enable passing a different first variable.
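A simplified sketch of that trick (this is not the real Babylon code, just the general shape): suppose the first parameter used to be a DOM-element-like object and is now an optional boolean. By inspecting the runtime type of the first argument, both old and new call orders can be resolved to the same value.

```typescript
// Simplified, hypothetical version of argument shuffling for back-compat.
// Old call sites: attachControl(element, noPreventDefault)
// New call sites: attachControl(noPreventDefault)
function attachControl(elementOrNoPreventDefault?: unknown, noPreventDefault?: boolean): boolean {
  // If the first argument is a boolean (or omitted), the caller uses the new
  // signature; otherwise it is a legacy element, so shift to the 2nd argument.
  const resolved =
    typeof elementOrNoPreventDefault === "boolean" || elementOrNoPreventDefault === undefined
      ? (elementOrNoPreventDefault as boolean | undefined) ?? false
      : noPreventDefault ?? false;
  return resolved; // a real camera would now wire up its inputs accordingly
}
```

The cost is a looser type on the first parameter, which is exactly the kind of juggling mentioned later under "harder to maintain".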

Lastly, Babylon has no external dependencies in the sense that we don’t rely on dependencies to BUILD the framework. While we do support external libraries like physics engines or recast, they either reside in the global namespace or are injected into the code. Babylon’s building process won’t falter due to these dependencies. We provide support for fixed versions of them, with no assurance of compatibility with newer versions — unless they maintain backward compatibility, of course 😊
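The injection idea can be sketched as follows (all names here are invented for illustration): the framework never imports the external library at build time; the caller hands the module in, with the global namespace as a fallback, so the build cannot break when that library changes.

```typescript
// Minimal shape an injected physics module is assumed to have.
interface PhysicsModuleLike {
  createWorld(): { step(dt: number): void };
}

class PhysicsPlugin {
  private world: { step(dt: number): void };

  constructor(injectedModule?: PhysicsModuleLike) {
    // Prefer the injected module; otherwise look it up on the global object.
    // "HYPOTHETICAL_PHYSICS" is a made-up global name for this sketch.
    const mod = injectedModule ?? (globalThis as any).HYPOTHETICAL_PHYSICS as PhysicsModuleLike | undefined;
    if (!mod) throw new Error("Physics module was neither injected nor found in the global namespace");
    this.world = mod.createWorld();
  }

  step(dt: number): void {
    this.world.step(dt);
  }
}
```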

We do modify the signature and functionality of private, protected, and internally public functions!

When do you break stuff?

“Hold on, Raanan, you sneaky two-faced developer!” I hear you say. “There ARE breaking changes in Babylon! I scrutinized your changelog very carefully!” Yes, indeed, we do. Taking a peek at the changelog for versions 5.0.0 and 6.0.0 will reveal a few breaking changes. So, let’s delve into when we choose to tread carefully and when we decide to unleash those breaking changes.

We opt to shake things up under the following circumstances:

  1. The environment in which we’re working (like a browser, for example) changes its API, compelling us to make adjustments in Babylon.
  2. Bugs rear their heads in a public function or its functionality. These bugs could manifest as a function returning the wrong type, a variable being incorrectly defined, or a behavior that doesn’t align with expectations.
  3. When we can ensure that the modified functionality hasn’t been used yet or is exclusively utilized internally by us.
  4. A feature bears the label “experimental.”
  5. When we fail to notice that we’re breaking something.

Point 1 is crystal clear — we have to make the change. Bugs need fixing, even if they force us to tweak the public API. But how can we guarantee that the function isn’t in use? The truth is, we can’t, not really. What we CAN do is examine our playground’s code, which stands as our richest resource for user code, and identify which playground employs this specific function. You’d be surprised how often we discover that a certain function, while immensely helpful, isn’t (yet) being used at all. That alone is a motivation for change! 😊

Experimental features aren’t set in stone to maintain the public API; although we do our utmost to do so. We tag experimental features with either the uncommon “@beta” tag or the @experimental tag. We don’t resort to these tags too frequently. In fact, you won’t stumble upon any “experimental” features in the current master branch. But since I don’t know when you might be reading this, there might be a few experimental features in the repo.

And what’s our course of action when we fail to notice that we’ve merged a breaking change? We revert the change or ensure that we restore the altered/removed functionality. This hiccup pops up from time to time, primarily because AI hasn’t taken over just yet, and most of our contributors are, after all, human.


We do deprecate, though. When we believe a function shouldn’t be used anymore, we mark it as deprecated. We use TSDoc for this purpose, making use of the widely recognized @deprecated tag.

We don’t take deprecation lightly, and we ensure that deprecated functions are not removed from the API.
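A minimal sketch of what that looks like in practice (function names invented): the deprecated entry point stays in the API forever, but the TSDoc tag makes editors render it struck through and lets linters flag it, steering users toward the replacement.

```typescript
// New, preferred entry point with an extensible options object.
function getSoundVolume(options: { decibels?: boolean } = {}): number {
  return options.decibels ? -6 : 0.5;
}

/**
 * Returns the current volume.
 * @deprecated use getSoundVolume instead. This proxy is never removed,
 * so old code keeps working; tooling simply nudges users to migrate.
 */
function getVolume(): number {
  return getSoundVolume();
}
```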

I answered a question about deprecation on our forum — see the thread “Loading .babylon scene as a child of a specific scene graph node”.

The good, the bad, and the ugly

We talked about the good already — it is a huge advantage for our users that the framework can be upgraded to a new and improved version without any changes needed on their part.

But what are some of the bad sides of backwards compatibility?

  1. Bloating the framework — Babylon’s main flavor is still our UMD version, available on our CDN and in our playground. Since we only add new features or improve existing ones, the package size can only grow. The UMD version contains all of the core functionality in a single package and cannot be tree-shaken.
  2. Improving or replacing core features — This can actually be an issue: if we don’t find a way to maintain backwards compatibility of a core feature, we can’t replace it. We are “stuck” with it. Not only that — if the public API is a dependency of a major core element, we can never remove it from there. Take the audio engine as an example. The audio engine’s public API is outdated. It was good when it was implemented, but it should be changed to provide better functionality. The audio engine is a static element of the Engine class and is part of the Engine’s initialization process. That means it will forever stay there. Even if we build a better v2, we still can’t replace it where it is currently referenced. This might be confusing for users, even if we mark it as deprecated.
  3. Reusing (abstract) class and component names — Again, looking at the current audio implementation — we have a class called “Sound”. If we want a new “Sound” class in the new API, it will have to be called “Soundv2” or something similar. Or we will need to name it something completely different, but make sure the devs know that the Sound class (a very common name for, well, sounds?) doesn’t work with the new API. Fun.
  4. Hard(er) to maintain — There is juggling involved when trying to make changes to our public APIs. It is also harder to review PRs.

So, is it good or not?

As I mentioned earlier, over the years, I’ve come to appreciate the significance of backward compatibility. We have numerous users who thanked us for this feature, and it’s one of our USPs compared to other frameworks in our field.

My opinion? In my view, methods that are marked as deprecated should ideally be removed in the next major version after being designated as such. Meanwhile, the remainder of the public API should be maintained. However, I also know that this isn’t what we guarantee, and altering this could potentially break the trust our users place in us.

Will this ever change? The answer is both no and yes 😊. No — our UMD and ES6 flavors will continue to maintain backward compatibility. Yet, if a new architecture emerges, or a new structure for Babylon’s core library is implemented, it could impact the public API, forcing users to adapt their approach slightly. But this is, as the Germans say, future-music. The current structure of our public packages and CDN will remain intact.

Stay awesome!





Babylon.js: Powerful, Beautiful, Simple, Open — Web-Based 3D At Its Best.