This whole “computational photography” thing always felt a little bit weird. But it also intrigued me. The idea that a computer can realistically create things that weren’t actually shown in the original shot is pretty amazing. Maybe it was seeing this scene in Blade Runner as a kid that did it for me. It was pure fantasy back then, but we’re getting there.
A new “computational zoom” technology developed by researchers at Nvidia and UCSB brings us a step closer to Deckard’s reality. Essentially, it lets the photographer change an image’s focal length and perspective in post, but that description barely does it justice: it can actually simulate multiple focal lengths within a single image, simultaneously. Here, watch this video, and it’ll all make sense.
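To see why that’s such a big deal, consider the trade-off a physical lens imposes. Here’s a toy pinhole-camera calculation (my own numbers for illustration, not from the paper): if you keep a subject the same size on the sensor while zooming from wide to telephoto, you have to move the camera back, and the background’s apparent size changes dramatically.

```python
# Toy pinhole-camera numbers (illustrative only, not from the paper).
# Keeping a subject the same size while zooming from wide to telephoto
# forces the camera backwards, which changes how large the background
# renders. Computational zoom decouples these two effects in post.

def projected_height_mm(obj_height_m, distance_m, focal_mm):
    """Image-plane height of an object under a simple pinhole model."""
    return focal_mm * obj_height_m / distance_m

subject_h, background_h = 1.8, 10.0   # heights in metres
background_offset = 50.0              # background sits 50 m behind the subject

for focal_mm, cam_dist_m in [(24, 2.0), (200, 16.67)]:
    s = projected_height_mm(subject_h, cam_dist_m, focal_mm)
    b = projected_height_mm(background_h, cam_dist_m + background_offset, focal_mm)
    print(f"{focal_mm}mm lens: subject {s:.1f}mm, background {b:.1f}mm on sensor")
```

Run it and the subject stays at 21.6mm in both shots, but the background jumps from 4.6mm to 30.0mm, more than six times larger. A real lens can only give you one of those relationships per shot; computational zoom can mix them within the same frame.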
The basic image acquisition and calculation process looks very similar to photogrammetry, the technique whereby 3D models are generated from a large series of photographs of an object. Shooting the same subject from many different angles lets a computer work out where its various parts sit in 3D space. It’s what’s used for photography-based 3D scanning.
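For the curious, the core geometric step in photogrammetry is triangulation: the same point seen from two calibrated cameras pins down its 3D position. Here’s a minimal sketch of the standard linear (DLT) method, with made-up camera matrices and pixel coordinates; the actual paper’s pipeline is considerably more sophisticated.

```python
# Minimal triangulation sketch: recover a 3D point from two views.
# Cameras and coordinates below are invented for illustration.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) pixel coordinates of the same point in each image.
    Returns the estimated 3D point.
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A (last row of Vt).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize

# Two identity-rotation cameras, the second shifted 1 unit along x
# (a simple stereo pair). A point at (0, 0, 5) projects into both
# views; triangulating those projections recovers the original point.
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.0, 0.0, 5.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))   # -> approximately [0. 0. 5.]
```

Scale that up to thousands of matched points across dozens of photos and you have a photography-based 3D scan.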
Here, though, instead of creating a 3D model, it produces flat, photorealistic images. And that’s not the only big difference. It appears this process manages to transcend not only space, but time.
For photogrammetry, your subject needs to stay perfectly still. For people, that means all of the images have to be captured at exactly the same moment. Even a tenth of a second’s difference can mean your shots don’t line up, potentially ruining the final result. The same is true of computational imaging cameras such as the Light L16: each of its lenses sees the shot at exactly the same time. Computational zoom, by contrast, appears to work with frames shot one after another.
Some of the examples shown are a bit “so what?”, essentially just looking like a background swapped out in Photoshop. But one particular demonstration opens up all sorts of creative options. Options that aren’t physically possible with just a camera and a lens.
Being able to incorporate not only multiple focal lengths, but also multiple points of perspective into a single image with a realistic final result? Bring it on! While the technology does seem to have some way to go, this early stage shows some great potential.
You can find out more about computational zoom on the Nvidia website and see the full paper presented at SIGGRAPH on the UCSB website.
[via UCSB]