Last week was quite an adventure. There I was thinking, "oh, what am I going to do for my Babylon.js demo next week??" For those who don’t know me, I’m the GUI girl for Babylon! I love GUI; demos, twitter posts, the upcoming GUI editor, I love everything GUI…but I think it was time to try something a little bit different. Lo and behold, someone recommended, "Why not try photogrammetry?!" "Photowha — " "Photonemone — ?" There I was, lost and confused like Nemo from the Disney movie. What started as a simple recommendation became a mini adventure of learning something remarkably cool. And what’s even more exciting, this journey resulted in a visually beautiful demo, better than any I had previously made….and I was proud! ❤
So what is Photogrammetry?
From Google, Photogrammetry is “the art and science of extracting 3D information from photographs. The process involves taking overlapping photographs of an object, structure, or space, and converting them into 2D or 3D digital models.” In other words, by taking a lot of HQ photos of an object, we can use algorithms to generate a photorealistic 3D model. We can then import that model into our Babylon.js playground to make our demo! Pretty simple, right? Yes….and no. While the process is overall very simple, there are a lot of pitfalls you can run into. But don’t you worry! By the end of this article I hope that I’ll be able to steer you past those “gotchas”. Instead, we’ll be able to create some really stunning scenes.
Software — For my demo, I went with 3DF Zephyr. It has a free trial, plus free and paid versions. In addition, it has a fair amount of tutorials to get started. One thing I really liked about Zephyr was the ability to extract frames from a video. This is super handy because the software even has thresholds you can set to discard frames that are too similar. Little disclaimer: being on the free trial allowed me to extract over 400 frames, but I don’t think this is super necessary unless you are trying to render a massive object. 50 or so frames work for small-to-medium sized objects. From there you just click “next” all the way through. No special settings needed when you’re just getting started.
Pitfalls — Now, the first pitfall I experienced was not having the right dataset. I cannot stress this enough! 3DF Zephyr also warns about needing a proper dataset, but I think it’s worth showing what can happen with a bad one. Here are some of the mistakes I made when taking my original videos:
- Taking videos too close to the object (See the blur?)
- Moving too quickly around the object/shaking the camera
- Improper lighting
My original demo idea was to have a photogrammetry flowerpot, where the user would water the plant and watch it grow. I grabbed a small plant and started taking a close-up video of it. I learned rather quickly that for extremely close and small objects, the camera on my phone was not focusing fast enough. This led to a lot of blurred frames in the frame extraction. In fact, of the 400 frames that were extracted, only a fraction of them were actually readable by the algorithm.
Turns out, the smaller the object is, the higher fidelity image you are going to need. Unless you have a tripod or something to keep your hand steady, it’s hard to get a non-shaky video. You also want to make sure the object is in the center of the frame for most of your video, which can be a little tricky as you're manually circling around the object, trying to capture both high and low angles. I heard capturing in slow motion can help counter this too. Make sure not to capture a ton of the background. I found that also led to problems with small objects. The best angles came from when the camera was looking more downward at the object.
That’s when I thought I’d give it another try, perhaps with a larger object. I personally found going outside was the perfect solution. Lots of natural light from the sun and space to walk around a larger object at a distance. This was key because my camera didn't have to focus on the tiny details and could just capture the object as a whole. This created a smooth, consistent video that became perfect for the algorithm.
Once I had a solid dataset that was properly generating the mesh, things were pretty simple from there. With a couple of quick button clicks I was able to generate a textured mesh and boom, it looked just like the real thing.
From here we export it as an FBX and convert it into a GLB for Babylon. For this step I simply used Blender to re-export it. Voila, we have a GLB we can just drag into the sandbox to take a look. Wow, how stunning. From here we can import it into any Babylon.js project. For my demo I just made a playground with 2 photogrammetry meshes.
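As a rough sketch, a playground that loads one of these meshes might look like this (the asset URL and filename are placeholders for wherever you host your GLB):

```javascript
// Babylon.js playground sketch — "scan.glb" and its URL are placeholders
// for your own hosted photogrammetry model.
var createScene = function () {
    var scene = new BABYLON.Scene(engine);

    // Orbit camera so you can spin around the scanned object
    var camera = new BABYLON.ArcRotateCamera("camera",
        -Math.PI / 2, Math.PI / 2.5, 10, BABYLON.Vector3.Zero(), scene);
    camera.attachControl(canvas, true);

    // Bring in the photogrammetry mesh we exported from Blender
    BABYLON.SceneLoader.ImportMesh("", "https://example.com/assets/", "scan.glb", scene);

    return scene;
};
```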
Well, here's another catch. If we take a look at the number of polygons, we would see there are thousands and thousands…almost a million polygons. Now, Babylon.js does a great job at rendering these at 60 fps, but there is a bit of overhead when it comes to loading. Granted, being able to load such a gorgeous asset in only seven seconds is still remarkable, but this can feel slow if you’re used to really fast playground loads. I would definitely suggest using an async function for loading your models. Still, in the playground this can lead to a bit of slowdown for development every time you refresh. If you really want that high polygon model for the final production, I’d suggest using a stand-in while you are developing. Another thing we can do, however, is lower the count by using a decimation modifier in Blender. This can reduce the count to a fraction of the original; just keep an eye on the quality as you go lower and lower. In the end it’s a choice based on what you want to do in your scene, how fast you want it to run, and how many models you want to have.
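Here’s one way to sketch the async loading plus the stand-in idea together (again, the URLs and filenames are made up — a simple box stands in until the heavy scan arrives):

```javascript
var createScene = function () {
    var scene = new BABYLON.Scene(engine);
    var camera = new BABYLON.ArcRotateCamera("camera",
        -Math.PI / 2, Math.PI / 2.5, 10, BABYLON.Vector3.Zero(), scene);
    camera.attachControl(canvas, true);

    // Cheap placeholder so the scene is interactive immediately
    var placeholder = BABYLON.MeshBuilder.CreateBox("placeholder", { size: 2 }, scene);

    // Load the heavy photogrammetry mesh without blocking scene creation,
    // then swap out the stand-in once it's ready.
    BABYLON.SceneLoader.ImportMeshAsync("", "https://example.com/assets/", "scan_high.glb", scene)
        .then(function () {
            placeholder.dispose();
        });

    return scene;
};
```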
Another thing to think about is lighting. When creating the textured mesh in 3DF Zephyr, the lighting from your photos will already be baked in. Therefore, it’s important to remember that if you’re using lights on other parts of your scene, you should either disable the lighting on the mesh, or at least think about what you’re trying to achieve. Double lighting can look weird if you’re going for realism. Definitely something to keep in mind.
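One way to disable scene lighting on the scanned mesh is to flag its materials as unlit after the import, so only the lighting baked into the texture shows. A sketch (the URL and filename are placeholders):

```javascript
// After importing, mark each scanned mesh's material as unlit so the
// lighting baked into the photos isn't lit a second time by scene lights.
BABYLON.SceneLoader.ImportMeshAsync("", "https://example.com/assets/", "scan.glb", scene)
    .then(function (result) {
        result.meshes.forEach(function (mesh) {
            if (mesh.material) {
                mesh.material.unlit = true; // PBRMaterial from the glTF loader
            }
        });
    });
```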
Finally, don’t forget about mobile! Surprisingly, most phones within the Babylon team were able to render these scenes, but do keep in mind it can be very taxing on the hardware. Perhaps use a lower polygon version for mobile and a higher one for desktop.
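A tiny helper can pick the right variant per device (the filenames are illustrative, and however you detect mobile — a user-agent check, screen size, etc. — is up to you):

```javascript
// Pick a polygon budget per device — the filenames are made up;
// point them at your own decimated and full-resolution exports.
function pickAssetVariant(isMobile) {
    // Serve the decimated mesh on phones, the full scan on desktop.
    return isMobile ? "scan_low.glb" : "scan_high.glb";
}
```

You could then pass the result straight into the loader, e.g. `BABYLON.SceneLoader.ImportMesh("", assetUrl, pickAssetVariant(isMobile), scene)`.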
So there you have it. A couple of tips and tricks for dealing with photogrammetry in Babylon.js. I would 100% recommend trying it out sometime if you want to create a visually impressive model, especially if, like me, you don’t have the modeling skills ;)
Babylon.js can remarkably render even the most expensive models and still run at 60 fps, no sweat. Just be cautious of those load times and use other software to decimate your models, especially if load time is important. I know that’s something I will probably do on my next demo for sure. Until then, have fun taking those videos/photos, and I can’t wait to see what you all create!
See you real soon,
Pamela Wolf — Babylon.js Team