Newest iOS 13 developer beta shows some insight into Apple’s new “Deep Fusion” AI photos

Oct 7, 2019

John Aldred

John Aldred is a photographer with over 20 years of experience in the portrait and commercial worlds. He is based in Scotland and has been an early adopter – and occasional beta tester – of almost every digital imaging technology in that time. As well as his creative visual work, John uses 3D printing, electronics and programming to create his own photography and filmmaking tools and consults for a number of brands across the industry.

At September's Apple event for the new iPhone 11 models, Apple spoke about a new technology it calls "Deep Fusion". It's a process in which nine images are combined by a neural engine to create a single photo with the most detail possible. Until now, though, there haven't really been any good samples of it out there.

The feature has appeared in the latest iOS 13 developer beta, and now lots of samples showing off its capabilities have started to pop up on the web – most notably on Twitter.

Apple's SVP of Worldwide Marketing, Phil Schiller, explains that the process shoots nine images in total. Before you hit the shutter, the camera has already captured eight of them: four "short" and four "secondary" images. It's actually shooting images all the time, but once those first eight slots fill up, the oldest frames get tossed aside as new ones replace them – similar, in a way, to how dashcams and those crazy 100,000fps slow-motion cameras work.
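To make that rolling-buffer idea concrete, here's a minimal Swift sketch – not Apple's implementation, and all the names are made up for illustration – of a fixed-size frame buffer that discards the oldest frame as each new one arrives:

```swift
import Foundation

// Hypothetical sketch of a rolling pre-capture buffer: the camera keeps
// only the last eight frames, so each new frame pushes the oldest one out.
struct FrameRingBuffer {
    private var frames: [Data] = []   // stand-in for raw sensor frames
    let capacity = 8                  // four "short" + four "secondary" frames

    // Add the newest frame, discarding the oldest once the buffer is full.
    mutating func append(_ frame: Data) {
        frames.append(frame)
        if frames.count > capacity {
            frames.removeFirst()
        }
    }

    // The frames available at the moment the shutter is pressed.
    var bufferedFrames: [Data] { frames }
}
```

Whatever eight frames happen to sit in that buffer at the instant you fire the shutter become the pre-shots that feed the next step.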

Hitting the shutter captures the ninth image at your regular exposure, just as you'd shoot a normal photo. This is where the "computational photography mad science" happens. Deep Fusion's neural engine inside iOS 13 analyses and combines the eight pre-shots together with your actual exposure in less than a second, building the final image from the data found across those nine separate photos.
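Apple hasn't published how the fusion itself works, so the sketch below is only a stand-in: it averages nine aligned frames of luminance values, whereas the real neural engine reportedly weighs detail, noise and motion per pixel and per frame. The fuse function and its assumption of pre-aligned, equally sized frames are purely hypothetical:

```swift
// Purely illustrative multi-frame merge, assuming each frame is already
// decoded and aligned into an array of luminance values. A plain average
// is NOT Apple's neural fusion; it only shows the general idea of building
// one output image from nine input exposures.
func fuse(preShots: [[Double]], mainExposure: [Double]) -> [Double] {
    let allFrames = preShots + [mainExposure]
    var output = [Double](repeating: 0, count: mainExposure.count)

    for frame in allFrames {
        for i in 0..<output.count {
            output[i] += frame[i]
        }
    }
    // Average across all frames; a real pipeline would instead weight each
    // pixel by sharpness, noise and motion estimates for every frame.
    return output.map { $0 / Double(allFrames.count) }
}
```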

The samples I've seen posted to Twitter so far look quite impressive, especially those comparing it to earlier iPhones and previous versions of iOS. The iPhone 11 camera already offered a pretty significant increase in quality over the iPhone X, but this new feature enhances things even further.

https://twitter.com/_oscg/status/1179979927933861888

Comparisons to other technologies that composite multiple images, like Apple's "Smart HDR" feature, are also quite telling.

It will be interesting to see how good the quality is once the feature reaches the final public release, but so far it looks quite impressive. Of course, some Android devices have already used similar techniques, too.

I still wouldn't go ditching my DSLRs and mirrorless cameras to switch to an iPhone for client work just yet, though.

[via DPReview]
