When “Portrait Mode” was first introduced with the iPhone 7 Plus, it was met with mixed reactions. Some people loved it and thought it was a fantastic feat of software engineering. Others hated it, either because they felt threatened by it (yes, really) or because they just didn’t think it looked believable enough. Neither side is particularly wrong, though.
It is a fantastic feat of software engineering. But it wasn’t perfect in the beginning. It still isn’t, but it’s improved a whole lot in the last year and a bit. Other manufacturers have also developed things further. So, how do they stack up today against a large sensor camera like the Hasselblad X1D? That’s what Marques Brownlee attempts to determine in this video.
In the video, Marques puts three of the leading smartphones up against the Hasselblad X1D: the new flagship iPhone X from Apple, the Samsung Galaxy Note 8 and the Google Pixel 2, all of which are well respected for their camera capabilities.
It’s true that Portrait Mode has come a long way since the feature was first introduced. It still has a long way to go before it’s perfect, but it has improved. With dual cameras pretty much becoming standard now (I just got a new phone today that has dual cameras, and it isn’t any of the phones in the video), the competition between manufacturers to outdo each other is fierce.
As Marques illustrates in the video, though, it still doesn’t compare to shooting with a large-sensor camera. Portrait mode often tends to favour faces, blurring other objects that are on the same plane of focus. Not to mention keeping things sharp that should be out of focus (look at the bottom right of the Pixel 2 shot below).
That’s the great thing about computational imaging, though: it’s all software. The algorithms can be refined over time without having to touch the hardware. Sure, processing will possibly hit a speed limit on the current generation of phones eventually. But then new phones with newer hardware will come out, allowing manufacturers to take it further in a shorter space of time.
There are still things that the Portrait Mode algorithms need to figure out and improve on. As Marques mentions in the video, edge detection is a big one. It’s still quite common to see the edges of subjects that should be sharp blending into their surroundings, especially around the hair. The algorithms are also still not very good at recognising non-human subjects.
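At its core, the idea behind these modes can be boiled down to a simple recipe: estimate a per-pixel depth map (on dual-camera phones, from the disparity between the two lenses), then composite the sharp image with a blurred copy based on how far each pixel is from the focal plane. Here’s a deliberately crude toy sketch of that recipe in Python; the function names and parameters are my own invention, not anyone’s actual implementation, and the hard on/off mask is exactly the kind of shortcut that produces the rough edges discussed above:

```python
import numpy as np

def box_blur(img, radius):
    """Crude box blur: average of shifted copies (wraps at edges; toy only)."""
    acc = np.zeros_like(img, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / (2 * radius + 1) ** 2

def fake_portrait(img, depth, focus_depth, threshold, blur_radius):
    """Composite sharp and blurred copies using a per-pixel depth map.

    `depth` stands in for what the phone estimates from its dual cameras.
    Pixels whose estimated depth is close to `focus_depth` keep the
    original image; everything else gets the blurred version.
    """
    in_focus = np.abs(depth - focus_depth) <= threshold
    return np.where(in_focus, img, box_blur(img, blur_radius))
```

Real pipelines are presumably far more sophisticated, with soft masks, face and subject segmentation, and depth-dependent blur strength rather than a single binary threshold. But the sketch shows why edge detection matters so much: any mistake in that mask shows up directly as a sharp strand of hair going blurry, or blurry background bleeding into a sharp subject.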
It will improve over time, and it will probably suffice for the needs of many. But I think it’ll be a very long time, if ever, before software catches up with the optical physics of large sensor cameras.