Introducing the Babylon Texture Inspector

Babylon.js
Sep 4, 2020


Texturing is one of the most powerful steps in the 3D content creation workflow. Textures can add rich detail to even very simple meshes. However, they can also be abstract and difficult to reason about. Textures are mapped to the surface of a 3D model using UV coordinates in ways that are not always intuitive. The color data of a texture — numbers representing “red”, “green”, “blue”, and “alpha” channels — can be used to represent features like “metalness” or “roughness” rather than visible color. These complexities can be challenging and time-consuming to debug, especially since working with textures typically requires going back to the original creation software (such as Photoshop or Substance Painter).

We wanted to make that process dramatically easier. A few months ago we wrote about what simplicity means to us. We’ve been building tools to make working with Babylon faster and more straightforward. The Texture Inspector is one of those tools.

In this post, I’ll show you how to use the Texture Inspector, and then walk you through the process of building it.

Using the Texture Inspector

The Texture Inspector can be accessed from the inspector pane. If you’re in the playground, you can hit the Inspector Button in the top bar to open the inspector.

Easy access to the inspector from within the playground

If you’re working in your own environment, simply call

scene.debugLayer.show();

From the inspector, select a texture and hit the “Edit” button under the texture preview.

Opens the texture inspector

To learn more about the features of the inspector, feel free to consult the documentation page.

Building a Texture Inspector for the Web

I joined the Babylon team as an intern in June. The inspector was my internship project, built over the course of 12 weeks. Along the way, there were several technical challenges to overcome. The inspector makes heavy use of both the 2D Canvas API and 3D WebGL graphics (powered by Babylon.js). Integrating these two worlds was not always easy.

The basic framework is this: when you click “Edit” on a texture, it is rendered at full resolution to a canvas, which then serves as the ground-truth pixel data for the editor. Tools manipulate that canvas, and the changes are uploaded back to the GPU using an HtmlElementTexture. To reflect the changes in the scene, we simply swap the original texture’s internal _texture property to point at our new HtmlElementTexture. This means that resetting your changes is as easy as swapping back to the original internal texture (and re-rendering it to the canvas).
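In code, the swap looks roughly like this. This is a minimal sketch, assuming original is the texture being edited and canvas already holds its full-resolution pixels; the variable names are illustrative, not from the editor’s actual source.

import { HtmlElementTexture } from "@babylonjs/core";

// Wrap the editing canvas in a texture we can re-upload to the GPU.
const working = new HtmlElementTexture("editing", canvas, {
    engine: scene.getEngine(),
    scene: scene,
});

// Point the scene's texture at the edited pixels (swapping the internal
// _texture property, as described above), keeping a backup for reset.
const backup = original._texture;
original._texture = working._texture;

// After each tool edit, push the canvas contents back to the GPU.
working.update();

// Resetting is just swapping back (and re-rendering to the canvas).
original._texture = backup;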

What follows is a selection of some of the more challenging features I built.

Painting on Canvas

We really only had two requirements for our paint tool.

  1. It needed to be fast, so that users can have fine control over where they’re painting. There’s nothing worse than watching your brush jitter across the canvas due to framerate issues as you try to drag it.
  2. It needed to paint exact RGBA values onto the canvas, rather than blending with the pixels on the canvas using the alpha value (which is the default behavior of Canvas).

That second point may require a little clarification. Let’s say you want to place a red circle at 50% alpha onto a tree image. There are two ways to interpret alpha: either as a blend factor (center), or as the resulting value in the A channel (right). We wanted to do the latter.

Different ways to interpret alpha

These seemed like pretty simple conditions, but the challenge came in implementing both simultaneously.

I started out with the most basic implementation, using the canvas’s stroke API. When you started painting, I’d call moveTo to bring the “pen” to your cursor’s position, and on every mousemove event I’d call lineTo and then stroke. This produced a nice curvy line, and it was pretty fast, but it gave strange results when painting at anything below 100% alpha. I also couldn’t see a way to paint exact RGBA values using this approach.
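For reference, that first attempt looked something like this sketch (event wiring simplified; it ignores details like checking whether the button is still held):

const ctx = canvas.getContext("2d")!;

canvas.addEventListener("mousedown", (e) => {
    ctx.beginPath();
    ctx.moveTo(e.offsetX, e.offsetY); // bring the "pen" to the cursor
});

canvas.addEventListener("mousemove", (e) => {
    ctx.lineTo(e.offsetX, e.offsetY);
    ctx.stroke(); // blends with existing pixels at < 100% alpha
});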

My first thought was that I could replace the lines by painting a series of circles instead. The algorithm was simple: you start at the last position of the mouse cursor, and travel towards its current position, placing circles at regular intervals to replicate brush strokes. You can’t just place a single circle per frame, because users can easily move the mouse further across the canvas than the radius of the brush in a single frame.

In order to paint alpha values directly as discussed previously, we actually paint each circle twice. The first time, we use the destination-out blend mode to remove the pixels on the canvas. The second time, we use source-over to place the correct color and alpha values. The code looks something like this:
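// Illustrative reconstruction; helper names are not from the actual source.
function drawCircle(ctx: CanvasRenderingContext2D, x: number, y: number, radius: number, fill: string) {
    ctx.beginPath();
    ctx.arc(x, y, radius, 0, Math.PI * 2);
    ctx.fillStyle = fill;
    ctx.fill();
}

// Stamp circles at regular intervals between the previous and current
// cursor positions so fast mouse movement still leaves a solid stroke.
function stampStroke(ctx: CanvasRenderingContext2D, fromX: number, fromY: number, toX: number, toY: number, radius: number, rgba: string) {
    const dist = Math.hypot(toX - fromX, toY - fromY);
    const spacing = radius / 2;
    for (let d = 0; d <= dist; d += spacing) {
        const t = dist === 0 ? 0 : d / dist;
        const x = fromX + (toX - fromX) * t;
        const y = fromY + (toY - fromY) * t;
        // Pass 1: erase what's underneath so we don't blend with it.
        ctx.globalCompositeOperation = "destination-out";
        drawCircle(ctx, x, y, radius, "rgba(0, 0, 0, 1)");
        // Pass 2: write the exact color and alpha values.
        ctx.globalCompositeOperation = "source-over";
        drawCircle(ctx, x, y, radius, rgba);
    }
}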

This was looking pretty good! Transparency worked as intended, and performance was solid. There was just one problem: ugly antialiasing artifacts appeared at the edge of each individual circle.

This looks more like an evil insect than a brush stroke

Perhaps disabling imageSmoothingEnabled would fix it? No such luck. As far as I could tell, there’s no easy way to turn off AA when drawing circles with Canvas.

At this point, I was starting to get frustrated. I decided I would try manipulating the pixels on the canvas directly. Canvas provides two methods, getImageData and putImageData, for working with individual pixels. My idea was simple: scan over the entire canvas and check whether each pixel lies within a certain radius of the line between two cursor positions. If so, replace that pixel with the active RGBA value.
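A sketch of that per-pixel approach, with illustrative names:

// Distance from pixel (px, py) to the segment between the last and
// current cursor positions.
function distToSegment(px: number, py: number, ax: number, ay: number, bx: number, by: number): number {
    const dx = bx - ax;
    const dy = by - ay;
    const lenSq = dx * dx + dy * dy;
    const t = lenSq === 0 ? 0 : Math.max(0, Math.min(1, ((px - ax) * dx + (py - ay) * dy) / lenSq));
    return Math.hypot(px - (ax + t * dx), py - (ay + t * dy));
}

function paintSegment(ctx: CanvasRenderingContext2D, ax: number, ay: number, bx: number, by: number, radius: number, rgba: [number, number, number, number]) {
    const { width, height } = ctx.canvas;
    const img = ctx.getImageData(0, 0, width, height); // expensive!
    for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
            if (distToSegment(x, y, ax, ay, bx, by) <= radius) {
                img.data.set(rgba, (y * width + x) * 4); // exact RGBA, no blending
            }
        }
    }
    ctx.putImageData(img, 0, 0); // also expensive!
}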

This worked, with just one problem…it was slow. Excruciatingly slow. It turns out that the ImageData methods are very expensive, and calling them every frame is not an option, especially on larger textures.

So, manipulating pixels on every frame was out the window. But then it struck me: we were still essentially trying to splash circles across the canvas at regular intervals, and each of those circles was identical. What if we only had to call putImageData once?

Here’s how it works in the final version:

  1. When the user starts painting, we create a temporary canvas that is only as wide as the brush stroke.
  2. Using putImageData, we manually generate the exact circle that we want, with the correct RGBA values.
  3. Every time the user moves their cursor, we paint the circle multiple times using drawImage, following the two-pass approach discussed earlier to avoid color blending (sketched below).

This way, we need only set raw image data once, when the user first begins to paint. This shouldn’t create a noticeable stutter except perhaps for very large brush sizes.
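Condensed into a sketch (names illustrative), it looks something like this:

// Build the brush stamp once with putImageData: a hard-edged circle with
// exact RGBA values and no antialiasing.
function makeBrush(radius: number, rgba: [number, number, number, number]): HTMLCanvasElement {
    const size = Math.ceil(radius * 2);
    const brush = document.createElement("canvas");
    brush.width = brush.height = size;
    const bctx = brush.getContext("2d")!;
    const img = bctx.createImageData(size, size);
    for (let y = 0; y < size; y++) {
        for (let x = 0; x < size; x++) {
            const dx = x - radius + 0.5;
            const dy = y - radius + 0.5;
            if (dx * dx + dy * dy <= radius * radius) {
                img.data.set(rgba, (y * size + x) * 4);
            }
        }
    }
    bctx.putImageData(img, 0, 0);
    return brush;
}

const radius = 16;
const eraseBrush = makeBrush(radius, [0, 0, 0, 255]);   // opaque, for pass 1
const colorBrush = makeBrush(radius, [255, 0, 0, 128]); // the active RGBA

// Called at each interval along the stroke; drawImage is cheap enough
// to run every frame.
function stamp(ctx: CanvasRenderingContext2D, x: number, y: number) {
    ctx.globalCompositeOperation = "destination-out";
    ctx.drawImage(eraseBrush, Math.round(x - radius), Math.round(y - radius));
    ctx.globalCompositeOperation = "source-over";
    ctx.drawImage(colorBrush, Math.round(x - radius), Math.round(y - radius));
}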

The end result

Working with Channels

A critical feature of the inspector is the ability to disable RGBA channels, allowing you to view and manipulate a single channel in isolation. For example, you can use the paintbrush to set the R values of pixels to 0 without affecting the GBA values at all. For viewing, this turned out to be relatively trivial: we render the texture within the editor using a simple fragment shader, which uses conditional statements to combine color channels based on what the user has opted to hide.
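The real shader handles a few more cases, but the core idea is something like this sketch (the shader and uniform names are illustrative):

import { Effect } from "@babylonjs/core";

// Register a custom fragment shader with Babylon's shader store.
Effect.ShadersStore["channelViewerFragmentShader"] = `
    precision highp float;
    varying vec2 vUV;
    uniform sampler2D textureSampler;
    uniform bool showR;
    uniform bool showG;
    uniform bool showB;
    uniform bool showA;

    void main(void) {
        vec4 c = texture2D(textureSampler, vUV);
        gl_FragColor = vec4(
            showR ? c.r : 0.0,
            showG ? c.g : 0.0,
            showB ? c.b : 0.0,
            showA ? c.a : 1.0); // with alpha hidden, render fully opaque
    }
`;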

Cycling through color channels

Editing proved trickier. We needed a way to restrict the channels that tools could write to, and we didn’t want to implement this in each individual tool. Here’s the approach we came up with (see the sketch after this list):

  1. The editor sends a canvas containing a copy of the texture to the tool.
  2. The tool can make any edits it likes to that canvas, ignoring channel limitations.
  3. We take the copy and draw it back to the original canvas. But if some channels are locked, we manually process the ImageData to selectively copy some channels and not others. Fortunately, ImageData is stored as a flat Uint8ClampedArray, with each byte representing a single channel of a single pixel, so it’s easy to separate out the data.
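Step 3 in a sketch (names illustrative):

// Copy only unlocked channels from the tool's edited canvas back to the
// source canvas. `locked` flags which of R, G, B, A must not change.
function mergeChannels(
    src: CanvasRenderingContext2D,    // original pixels
    edited: CanvasRenderingContext2D, // tool output
    width: number,
    height: number,
    locked: [boolean, boolean, boolean, boolean]
) {
    const before = src.getImageData(0, 0, width, height);
    const after = edited.getImageData(0, 0, width, height);
    // ImageData is a flat Uint8ClampedArray: RGBA, RGBA, ...
    for (let i = 0; i < after.data.length; i++) {
        if (locked[i % 4]) {
            after.data[i] = before.data[i]; // keep the locked channel's old value
        }
    }
    src.putImageData(after, 0, 0);
}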

Postprocessing

We had the idea pretty early on to build a tool for adjusting contrast and exposure. Babylon.js already has many powerful postprocesses built in, including an ImageProcessing shader which supports contrast and exposure. The question was: how do you apply a WebGL shader to a 2D canvas?

The trick was to create a separate 3D canvas, which holds a single unlit plane filling the entire viewport. The 2D canvas is used as the source texture for this plane. When we render the 3D scene, we apply the desired ImageProcessing effects. We then feed the rendered pixels back into our 2D canvas.
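A simplified sketch of that round trip; the setup is condensed and the names are illustrative:

import {
    Engine, Scene, FreeCamera, Vector3, MeshBuilder, StandardMaterial,
    DynamicTexture, ImageProcessingPostProcess,
} from "@babylonjs/core";

function applyContrastExposure(source: HTMLCanvasElement, contrast: number, exposure: number) {
    // Offscreen WebGL canvas; preserveDrawingBuffer lets us read it back.
    const glCanvas = document.createElement("canvas");
    glCanvas.width = source.width;
    glCanvas.height = source.height;
    const engine = new Engine(glCanvas, true, { preserveDrawingBuffer: true });

    const scene = new Scene(engine);
    const camera = new FreeCamera("cam", new Vector3(0, 0, -1), scene);
    camera.setTarget(Vector3.Zero());

    // An unlit plane textured with the 2D canvas contents. (The real editor
    // sizes the plane to exactly fill the viewport.)
    const plane = MeshBuilder.CreatePlane("plane", { size: 2 }, scene);
    const mat = new StandardMaterial("mat", scene);
    const tex = new DynamicTexture("src", { width: source.width, height: source.height }, scene, false);
    tex.getContext().drawImage(source, 0, 0);
    tex.update();
    mat.emissiveTexture = tex;
    mat.disableLighting = true;
    plane.material = mat;

    // Babylon's built-in image processing supplies contrast and exposure.
    const post = new ImageProcessingPostProcess("adjust", 1.0, camera);
    post.contrast = contrast;
    post.exposure = exposure;

    scene.render();

    // Feed the processed pixels back into the 2D editing canvas.
    source.getContext("2d")!.drawImage(glCanvas, 0, 0);
    engine.dispose();
}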

Using contrast and exposure to control a heightmap

This gives us access to the full power of Babylon’s postprocessing system; implementing blur or color correction in a tool would take only a few minutes.

Performance Optimizations

Uploading large textures to the GPU is expensive. Under the hood, HtmlElementTexture has to call texImage2D every time we want to push the content of the canvas into our WebGL texture. I explored using texSubImage2D as a replacement, but found that it offered the same or worse performance. Ultimately, the solution was as simple as staggering the updates to occur at most every 32 ms (roughly every other frame at 60 FPS). It turns out that being able to paint at 60 FPS is much more important than seeing your texture update at the highest possible frame rate.
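A sketch of the throttling, assuming htmlTexture is the HtmlElementTexture wrapping the editing canvas:

const UPLOAD_INTERVAL_MS = 32; // roughly every other frame at 60 FPS
let lastUpload = 0;

// Called after every paint operation. Drawing to the 2D canvas stays at
// full frame rate; only the expensive GPU upload is throttled.
function pushToGPU() {
    const now = performance.now();
    if (now - lastUpload >= UPLOAD_INTERVAL_MS) {
        htmlTexture.update(); // triggers texImage2D under the hood
        lastUpload = now;
    }
}

// (A real implementation would also schedule a trailing update so the
// final stroke of a gesture isn't lost inside the throttle window.)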

I’m hoping that some of these solutions will be helpful to others battling the Canvas API. I spent many hours trying different approaches to squeeze the best possible performance out of it. Even so, I’m sure there are many things that could be better optimized. If you’d like to contribute, or are interested in learning how any of this works, feel free to dive into the source code in the Babylon GitHub repository!

Darragh Burke — Babylon.js Team

https://www.linkedin.com/in/darragh-burke-875509163/
