The Google Pixel 3 may have only one rear camera, but it relies heavily on Google's AI to deliver high-quality images. The latest feature Google launched for all three generations of Pixel lets you shoot clean, bright images in near darkness, even when you can barely see anything with your own eyes. It works on both the front and rear cameras, and you don't need a tripod or a flash.
It’s September, which means another generation of Apple iPhones. This year, the iPhone XS (pronounced “ten ess”) adds a slightly larger sensor plus significantly more computing power via the A12 Bionic chip to enhance the phone’s image signal processing.
Stacking raw files isn’t anything new. We’ve been able to do it in Photoshop for years. But doing it in Photoshop requires some legwork. If you’ve got moving subjects in your shots, you need to mask things out, which can take a lot of time depending on the shot. Kandao’s new Raw+ software, however, figures it out automatically.
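To give a rough sense of what that legwork amounts to, here is a minimal sketch of burst stacking with motion masking in Python and NumPy. It is an illustration of the general idea only, not Kandao's or Photoshop's actual pipeline; the function name and threshold are my own assumptions, and the frames are assumed to be already demosaiced and aligned.

```python
import numpy as np

def stack_frames(frames, motion_threshold=0.05):
    """Average a burst of aligned frames, masking out pixels that moved.

    frames: list of float32 arrays in [0, 1] with shape (H, W, 3),
    already demosaiced and aligned. Pixels that differ from the
    reference frame by more than motion_threshold are dropped from
    the average so moving subjects don't ghost.
    """
    stack = np.stack(frames, axis=0)                   # (N, H, W, 3)
    reference = stack[0]

    # Per-pixel difference from the reference frame, averaged over channels.
    diff = np.abs(stack - reference).mean(axis=-1, keepdims=True)  # (N, H, W, 1)

    # Keep a frame's contribution only where it agrees with the reference.
    weights = (diff < motion_threshold).astype(np.float32)
    weights[0] = 1.0                                   # always keep the reference

    # Weighted mean: noise averages out where frames agree; where motion
    # was detected, only the reference frame contributes.
    merged = (stack * weights).sum(axis=0) / weights.sum(axis=0)
    return merged
```

Averaging N agreeing frames cuts random noise by roughly the square root of N; the masking step is exactly the part you would otherwise do by hand with layer masks, and it is the part Raw+ claims to figure out automatically.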
On October 8, 2015, a completely unknown company announced a new camera that promised to change how we think about photography.
The L16 combines breakthrough optics design with never-before-seen imaging technology to bring you the camera of the future.
The L16 would use 16 camera modules of varying focal lengths with folded optics instead of a single sensor and lens, and use “computational photography” to fuse the individual photos together with depth data, producing results that would be “DSLR quality”.
The marketing promised a unicorn camera that prosumer/enthusiast photographers like me would want to carry around as an everyday tool. In theory, I would be able to leave my traditional cameras behind (today a Fujifilm X-T2, Leica Q and Sony RX100 Mark V) and travel with just the Light L16.
Two years and one day after I pre-ordered my camera, my Light L16 finally arrived.
This whole “computational photography” thing always felt a little bit weird. But it also intrigued me. The idea that a computer can realistically create things that weren’t actually shown in the original shot is pretty amazing. Maybe it was seeing this scene in Blade Runner as a kid that did it for me. It was pure fantasy back then, but we’re getting there.
A new “computational zoom” technology developed by researchers at Nvidia and UCSB brings us a step closer to Deckard’s reality. Essentially, it allows the photographer to change the focal length and perspective of an image in post, but this description barely does it justice. It actually allows you to simulate multiple focal lengths simultaneously. Here, watch this video, and it’ll all make sense.
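For a rough intuition of why changing focal length and perspective in post is such a big deal, here is a toy pinhole-camera calculation in Python. It is an illustrative assumption of mine, not the Nvidia/UCSB method: under a pinhole model, an object's size on the sensor depends on both the focal length and its distance, so zooming in while dollying back to keep the subject the same size changes the size of everything at other depths; that coupled change is the perspective shift computational zoom lets you edit after the fact.

```python
# Toy pinhole-camera illustration (for intuition only, not the actual
# Nvidia/UCSB pipeline): how zooming plus a compensating dolly changes
# the relative size of objects at different depths.

def projected_scale(f, z):
    """Relative image-plane size of a unit-height object at depth z (pinhole model)."""
    return f / z

f1, f2 = 28.0, 85.0                  # original and simulated focal lengths (mm)
z_subject, z_background = 2.0, 10.0  # object distances from the camera (m)

# Dolly the camera back by dz so the subject is the same size at f2 as it was at f1.
dz = z_subject * (f2 / f1 - 1.0)

for label, z in [("subject", z_subject), ("background", z_background)]:
    before = projected_scale(f1, z)
    after = projected_scale(f2, z + dz)
    print(f"{label}: size ratio after zoom + dolly = {after / before:.2f}")

# The subject stays at 1.00x while the background roughly doubles in size:
# same framing, visibly "compressed" perspective.
```

A single photo only captures one such combination of framing and perspective; the researchers' technique works from a stack of images so those choices can be remixed, and even blended across one frame, after the shot is taken.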