There are now two ways of creating digital images with a camera: a software-centric computational photography approach, or traditional hardware-centric optical photography. The former leans on software and AI to enhance the final image, while the latter relies on the quality of the camera’s components (e.g. lens and sensor). The two techniques may differ, but they are not at all on a collision course. They can complement each other and even address each other’s limitations.
Sony creates new “Sony AI” organisation to enhance imaging tech
Sony has announced that they’ve established a new “Sony AI” organisation, with offices in Japan, Europe and the USA. Its goal is to advance the research and development of AI. They see AI playing a vital role in the future, particularly when it comes to imaging & sensing, robotics and entertainment.
Their purpose, Sony says, is to “Fill the world with emotion, through the power of creativity and technology”, and AI will play a big part in that. Sony AI will drive the R&D of AI through “multiple world-class flagship projects”, and they’ll be looking into the ethics of AI technology, too.
Sick of dog pictures on social media? Nvidia’s GANimal AI lets you turn them into other animals
Of course, I’m kidding, how can anybody get sick of dog pictures on Facebook?
Nvidia’s research teams have been doing some pretty crazy stuff with AI over the last few years. This latest one is pretty amusing on one level but quite groundbreaking on a technical one. Nvidia’s GANimal AI lets you remap your pet’s “expression” onto other animals.
Newest iOS 13 developer beta shows some insight into Apple’s new “Deep Fusion” AI photos
During the Apple event last September for the new iPhone 11 models, Apple spoke about a new technology they call “Deep Fusion”. It’s a process whereby nine images are combined by an AI engine into a single image with as much detail as possible. There haven’t really been any good samples of it out there, though, until now.
The feature has appeared in the latest iOS 13 developer beta, and now lots of samples showing off its capabilities have started to pop up on the web – most notably on Twitter.
This camera can photograph a subject from 28 miles away
Computational imaging has given us some interesting and useful inventions so far, from fake bokeh to capturing the movement of light. This time, scientists have figured out how to capture a clear image of a subject as far as 28 miles (45 km) away, regardless of the Earth’s curvature and the amount of smog in the air.
Light is teaming up with Sony for the next generation of multi-camera smartphone
Light, the company behind the Light L16 camera, which packs 16 sensors and lenses, has announced that they’ve teamed up with Sony to create the next generation of multi-camera smartphones. Well, we knew Light was working on something to do with phones, and now it looks like it’s official.
Google’s Night Sight feature lets you shoot in the dark without a tripod or flash
The Google Pixel 3 may have only one rear camera, but it relies heavily on Google’s promising AI to deliver high-quality images. The latest feature Google launched for all three generations of Pixel lets you shoot clean and bright images in near darkness – even when you can barely see anything with your own eyes. It works on both front and rear cameras, and you don’t even need a tripod or a flash.
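To get a rough feel for why this works handheld, here’s a tiny, hypothetical numpy sketch. It is not Google’s actual Night Sight pipeline, which also has to align the handheld frames and do plenty of clever processing on top; it only shows the basic statistics behind burst merging: averaging nine noisy exposures of the same dim scene cuts the noise by roughly a factor of three.

```python
# A minimal, hypothetical sketch of burst merging for low light.
# NOT Google's Night Sight pipeline - just the basic noise-averaging idea.
import numpy as np

rng = np.random.default_rng(0)

# Simulate a very dark, static scene and a burst of 9 noisy exposures of it.
scene = np.full((100, 100), 10.0)                                  # true signal
burst = [scene + rng.normal(0, 5, scene.shape) for _ in range(9)]

single = burst[0]                # one noisy frame, as a single shot would be
merged = np.mean(burst, axis=0)  # naive merge: average the whole burst

def residual_noise(img):
    """Standard deviation of the error against the known ground truth."""
    return np.std(img - scene)

print(f"noise in a single frame: {residual_noise(single):.2f}")
print(f"noise in the 9-frame merge: {residual_noise(merged):.2f}")  # roughly 1/3
```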
Why dedicated cameras will always be (optically) better than smartphones
It’s September, which means another generation of Apple iPhones. This year, the iPhone XS (pronounced “ten ess”) adds a slightly larger sensor plus significantly more computing power via the A12 Bionic Chip to enhance the phone’s image signal processing.
Kandao Raw+ offers automated image stacking for raw files without ghosting
Stacking raw files isn’t anything new. We’ve been able to do it in Photoshop for years. But doing it in Photoshop requires some legwork. If you’ve got moving subjects in your shots, you need to mask things out, which can take a lot of time depending on the shot. Kandao’s new Raw+ software, however, figures it out automatically.
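To see why moving subjects are the tricky part, here’s a naive, hypothetical numpy comparison – it is not Kandao’s Raw+ algorithm, just an illustration. A plain mean stack lets a subject that moved between frames bleed into the result as a ghost, while a per-pixel median stack rejects those outlier frames on its own, which is roughly the kind of decision the software has to make for you.

```python
# A naive, hypothetical illustration of ghosting in stacked frames.
# NOT Kandao's Raw+ algorithm - just mean vs. median stacking on fake data.
import numpy as np

rng = np.random.default_rng(1)

# Seven noisy frames of the same static scene.
scene = np.full((64, 64), 50.0)
frames = [scene + rng.normal(0, 8, scene.shape) for _ in range(7)]

# A "moving subject" shows up in only two of the seven frames.
frames[2][20:30, 20:30] += 150
frames[3][25:35, 25:35] += 150

mean_stack = np.mean(frames, axis=0)      # subject bleeds in as a ghost
median_stack = np.median(frames, axis=0)  # outlier frames rejected per pixel

print("mean stack at the subject:  ", round(float(mean_stack[25, 25]), 1))    # ~93 (ghost)
print("median stack at the subject:", round(float(median_stack[25, 25]), 1))  # ~50 (clean)
```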
The Light L16 — brilliant and braindead
On Oct 8, 2015, a completely unknown company announced a new camera which promised to change how we think about photography.
The L16 combines breakthrough optics design with never-before-seen imaging technology to bring you the camera of the future.
The L16 would use 16 camera modules with varying focal lengths and folded optics instead of a single sensor and lens, and use “computational photography” to fuse the individual photos together with depth data, producing results that would be “DSLR quality”.
The marketing promised a unicorn camera that prosumer/enthusiast photographers like me would want to carry around as an everyday tool. In theory, I would be able to leave my traditional cameras behind — today a FujiFilm X-T2, Leica Q and Sony RX100mk5 — and travel with just the Light L16.
Two years and one day after I pre-ordered my camera, my Light L16 finally arrived.