Ever since Google Pixel 4 was announced (and even before), its Night Sight or “astrophotography mode” has been creating quite a buzz. But the camera in Pixel 4 is certainly capable of much more. In a recent blog post, Google has explained the science behind the Portrait Mode of its latest flagship phone.
Portrait Mode first launched on the Google Pixel 2, and improved versions have shipped with every Pixel phone since. In the Pixel 4, Google relies on AI just like in the previous models, but there are two big improvements to Portrait Mode. Google writes that it leveraged both the Pixel 4’s dual cameras and its dual-pixel auto-focus system to improve depth estimation. The Pixel 4 also has improved bokeh, so the fake blur looks more realistic.
When it comes to depth estimation, the Pixel 4’s Portrait Mode uses the dual-pixel autofocus system just like its predecessors.
“Dual-pixels work by splitting every pixel in half, such that each half pixel sees a different half of the main lens’ aperture. By reading out each of these half-pixel images separately, you get two slightly different views of the scene. While these views come from a single camera with one lens, it is as if they originate from a virtual pair of cameras placed on either side of the main lens’ aperture. Alternating between these views, the subject stays in the same place while the background appears to move vertically.”
Another important component of depth estimation is the dual-camera system. “The Pixel 4’s wide and telephoto cameras are 13 mm apart,” Google notes. This is “much greater than the dual-pixel baseline, and so the larger parallax makes it easier to estimate the depth of far objects.” Combined with the data from the dual-pixel AF system, the Pixel 4’s AI pipeline estimates the depth of every point in the image.
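To get a feel for why the wider baseline matters, here is a minimal back-of-the-envelope sketch in Python. The focal length (in pixels) and the tiny dual-pixel baseline are assumptions made purely for illustration; only the 13 mm dual-camera spacing comes from Google.

```python
import numpy as np

# Rough illustration of why a wider baseline helps depth estimation.
# focal_length_px and dual_pixel_baseline_mm are assumed values for this
# sketch, NOT Pixel 4 specifications; only the 13 mm figure is from Google.
focal_length_px = 3000.0        # focal length expressed in pixels (assumed)
dual_pixel_baseline_mm = 1.0    # tiny baseline inside a single lens (assumed)
dual_camera_baseline_mm = 13.0  # wide-to-telephoto spacing quoted by Google

def disparity_px(depth_mm, baseline_mm, focal_px=focal_length_px):
    """Basic stereo parallax: disparity = focal_length * baseline / depth."""
    return focal_px * baseline_mm / depth_mm

for depth_m in (0.5, 2.0, 5.0):
    depth_mm = depth_m * 1000.0
    print(
        f"{depth_m:>4} m away -> "
        f"dual-pixel shift ~ {disparity_px(depth_mm, dual_pixel_baseline_mm):5.2f} px, "
        f"dual-camera shift ~ {disparity_px(depth_mm, dual_camera_baseline_mm):5.2f} px"
    )
```

Even with these made-up numbers, the pattern is clear: for distant subjects the dual-pixel shift shrinks to a fraction of a pixel and becomes hard to measure, while the 13 mm dual-camera parallax stays large enough to estimate depth reliably.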
The new Portrait Mode for Pixel 4 leverages dual cameras and the dual-pixel auto-focus system to improve depth estimation, allowing users to take professional-looking shallow depth of field images both close-up and at a distance. Learn how it was done at https://t.co/Z0S9REj5ES pic.twitter.com/mF1btwQt5O
— Google AI (@GoogleAI) December 16, 2019
It’s not just the depth estimation that has been improved, but also the bokeh, and the two combined make photos look more like those taken with a DSLR, according to Google. To render the blur, each pixel in the original image is replaced with a translucent disk whose size is based on its depth. Google used to do this after tone mapping, but that discarded information about how bright the scene really was. The bokeh disks would come out dim, blend into the background, and not look natural.
In the Pixel 4, Google appears to have solved this problem. “The solution to this problem is to blur the merged raw image produced by HDR+ and then apply tone mapping,” Google explains. This produces brighter, more obvious bokeh disks and a background that is saturated in the same way as the foreground.
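Google’s description boils down to an ordering question: blur while the data is still linear, then tone map, rather than the other way around. Here is a toy Python sketch (NumPy plus SciPy) that illustrates the effect; the gather-style box blur, the gamma tone curve, and the synthetic scene are all stand-in assumptions for illustration, not Google’s actual HDR+ or bokeh-rendering pipeline.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def tone_map(linear):
    """Stand-in tone curve (simple gamma); HDR+ uses a far more elaborate one."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / 2.2)

def defocus_blur(image, depth, focus_depth, max_radius=8):
    """Toy defocus: blur each pixel with a window whose size grows with its
    distance from the focus plane (real bokeh scatters translucent disks)."""
    out = np.zeros_like(image)
    radius = np.clip(np.abs(depth - focus_depth) * max_radius, 0, max_radius)
    for r in range(max_radius + 1):           # bucket pixels by blur radius
        mask = np.round(radius) == r
        blurred = uniform_filter(image, size=2 * r + 1) if r > 0 else image
        out[mask] = blurred[mask]
    return out

# Synthetic linear-light scene: dim surroundings plus one very bright highlight.
linear = np.full((64, 64), 0.05)
linear[20:24, 40:44] = 20.0                   # small, very bright light source
depth = np.ones((64, 64))
depth[:, 32:] = 3.0                           # right half of the scene is far away

# Old order: tone map first, then blur -> the highlight is clipped to 1.0
# before blurring, so its bokeh "disk" comes out dim.
old = defocus_blur(tone_map(linear), depth, focus_depth=1.0)

# New order (as Google describes): blur the linear data first, then tone map ->
# the highlight keeps its true brightness and yields a bright, saturated disk.
new = tone_map(defocus_blur(linear, depth, focus_depth=1.0))

print("peak of blurred highlight, blur after tone mapping: ", old.max())
print("peak of blurred highlight, blur before tone mapping:", new.max())
```

Running this, the highlight blurred after tone mapping fades to roughly the brightness of its surroundings, while the highlight blurred before tone mapping still hits the top of the tonal range, which is exactly the brighter, more DSLR-like bokeh Google is describing.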
Google previously showed us how its much-praised Night Sight feature works. Now they’ve put the focus (no pun intended) on Portrait Mode, which appears to be improved over previous models. If you’d like to read about it in more detail, head over to the Google AI Blog and have a look.
[via Engadget; lead image credits: Maurizio Pesce on Flickr]