3D scanning has started to become a more integral part of photography and video for many people. Whether it’s scanning real-world objects to render on a computer and drop into a scene in post, or producing physical 3D printed models to include as part of the set, its use has increased over the last few years.
It’s something that Google’s been working on for years, but this latest iteration uses what they call “volumetric capture”, made up of 100 12-megapixel cameras and 331 LED lighting modules in a 360° setup that sees all around an object to create a photorealistic 3D representation of the subject. This lets them capture every detail, drop the subject into any scene they like, and relight it in post.
Volumetric capture is often used to create 3D representations of a subject using techniques including light fields, photogrammetry and various other systems. As TechCrunch notes, though, many of these techniques typically have a couple of pretty serious weaknesses.
- It’s essentially a 3D movie, rather than a model. There’s no “skeleton” system applied to the 3D models, so you can’t pose them in post and make them do whatever you want. The clothing is also what-you-see-is-what-you-get. You can’t swap jeans and a t-shirt for a dress on a subject, for example.
- You can’t relight them in post. Whatever light they’ve been captured with is what you get. So, you have to know beforehand exactly what kind of scene you’re dropping your subject into, so you can recreate the lighting as needed for it to work.
Each of the one hundred 12-megapixel cameras shoots images at 60Hz in order to capture the tiniest detail from all angles as the subject moves in front of them. The 331 LED light modules allow them to light the scene in multiple different ways in rapid succession, in order to capture as much lighting information for each pose in as short a space of time as possible.
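To make that rapid succession concrete, here’s a minimal sketch of what an interleaved capture schedule can look like: at a fixed frame rate, the rig cycles through a small set of lighting patterns, so each pattern repeats every few frames. The pattern names and the two-pattern cycle are assumptions for illustration, not Google’s actual implementation.

```python
# Hypothetical sketch: interleaving lighting patterns with camera frames.
# At 60 Hz, cycling through N lighting patterns means each full lighting
# "pass" repeats every N frames. Names and numbers are illustrative only.

FRAME_RATE_HZ = 60
LIGHT_PATTERNS = ["pattern_a", "pattern_b"]  # assumed two-pattern cycle

def capture_schedule(num_frames):
    """Return (frame_index, timestamp_s, pattern) for each captured frame."""
    schedule = []
    for frame in range(num_frames):
        pattern = LIGHT_PATTERNS[frame % len(LIGHT_PATTERNS)]
        schedule.append((frame, frame / FRAME_RATE_HZ, pattern))
    return schedule

for frame, t, pattern in capture_schedule(4):
    print(f"frame {frame} at {t:.4f}s -> {pattern}")
```

The point of interleaving is that adjacent frames are so close together in time that the pose barely changes between lighting conditions, so the different lighting measurements can be treated as belonging to the same pose.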
This allows them to see how the different surfaces of the subject react to different types of light, how reflective or absorbent they are, and in which parts of the colour spectrum. This lets them shoot and relight the 3D model in post from any angle and get a moderately accurate representation of how it would respond in the real world.
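One common way to think about this kind of relighting (not necessarily Google’s exact method) is image-based relighting: if you record how the subject looks under each light individually, a new lighting environment can be approximated as a weighted sum of those per-light captures. A minimal numpy sketch, with made-up stand-in data:

```python
import numpy as np

# Hypothetical sketch of image-based relighting: each basis image records
# the subject lit by one light source; a novel lighting environment is
# approximated as a weighted sum of those basis images.

rng = np.random.default_rng(0)
num_lights = 4
basis = rng.random((num_lights, 8, 8, 3))  # stand-in per-light captures (HxWxRGB)

def relight(basis_images, weights):
    """Blend the per-light captures by the new environment's light intensities."""
    weights = np.asarray(weights, dtype=float)
    return np.tensordot(weights, basis_images, axes=1)

# e.g. full strength on light 0 plus a dim fill from light 2
relit = relight(basis, [1.0, 0.0, 0.3, 0.0])
print(relit.shape)  # (8, 8, 3)
```

Because light adds linearly, this weighted sum is physically meaningful, which is why capturing many lighting conditions per pose is so valuable: the more basis images you have, the more faithfully you can reconstruct an arbitrary new lighting setup.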
The first problem still remains something of a challenge, at least when it comes to AI. It’s entirely possible to add a skeleton to a 3D-scanned human model on the computer, but it’s still largely a manual job for a person. All of the deformations then need to be adjusted by hand as you move the joints, in order to prevent the mesh from caving in on itself. But it’s a very cool concept.
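For context, the standard way a skeleton drives a mesh is linear blend skinning: each vertex moves by a weighted blend of its nearby joints’ transforms, and it’s exactly those blend weights that artists tune by hand. A minimal 2D sketch with invented numbers, which also shows the “caving in” artifact the article mentions:

```python
import numpy as np

# Minimal linear blend skinning sketch: v' = sum_i w_i * (R_i @ v + t_i).
# Joints, weights, and transforms are invented for illustration.

def skin_vertex(v, weights, rotations, translations):
    """Blend per-joint rigid transforms of vertex v by its skinning weights."""
    v = np.asarray(v, dtype=float)
    out = np.zeros_like(v)
    for w, R, t in zip(weights, rotations, translations):
        out += w * (np.asarray(R) @ v + np.asarray(t))
    return out

# One vertex influenced equally by two joints: one stays put, one rotates 90 deg.
identity = np.eye(2)
rot90 = np.array([[0.0, -1.0], [1.0, 0.0]])
v = np.array([1.0, 0.0])
blended = skin_vertex(v, [0.5, 0.5], [identity, rot90], [np.zeros(2)] * 2)
print(blended)  # [0.5 0.5]
```

Note that the blended vertex ends up closer to the joint than the original (length ≈ 0.71 instead of 1.0): averaging rotations linearly shrinks the mesh, which is the collapsing-joint problem that currently needs manual cleanup.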
It reminds me of another video I saw from Google back in 2008 (which, to be honest, I’m amazed I managed to find again, but here it is), showing a similar system. The new relighting system is essentially an evolution of that process from way back when, incorporating some AI and more modern 3D techniques.
The goal, ultimately, is to be able to have somebody walk onto a stage, push a button, and have an entire 3D scan, already set up with a skeleton, photorealistic textures and realistic reaction to light that you can just drop straight into your movie and animate however you wish.
We might be a few years away from making the whole system automated, but it’s definitely coming. As computers continue to get faster and software continues to evolve, it’s probably closer than we think.