Ever since Google Pixel 4 was announced (and even before), its Night Sight or “astrophotography mode” has been creating quite a buzz. But the camera in Pixel 4 is certainly capable of much more. In a recent blog post, Google has explained the science behind the Portrait Mode of its latest flagship phone.
Search Results for: portrait mode
When the Google Pixel 2 came out it was extremely impressive. Despite only having the single rear camera, it beat out everything else on DxOMark and held the position for quite a while. When the Pixel 3 was announced, it also had only the single rear camera. And despite one or two issues with the phone overall, Google made some great improvements with it.
Google has published a blog post on how they taught the Pixel 3 to predict depth for its Portrait Mode. Given that it only has a single camera, and not the dual cameras of other brands, it seems like it wouldn’t be possible, but the techniques used are pretty interesting – and they involve a case that holds five separate cameras.
Earlier this year, Samsung was busted for using stock photos to show off the capabilities of the Galaxy A8’s camera. And now they’ve done it again – they used a stock image taken with a DSLR to fake the camera’s Portrait Mode. How do I know this, you may wonder? Well, it’s because Samsung used MY photo to do it.
YouTuber Jonathan Morrison caused quite a stir on Instagram and Twitter on Saturday, trolling both Apple and Android users with a single photo. He posted a selfie with the caption: “Pixel 2 Portrait mode, rocking the smalls hat, thoughts?” Android fans rushed to praise the image quality and, of course, to bash the iPhone. But a day later, Jonathan revealed the truth: the photo was actually taken with an iPhone XS.
Portrait Mode has been simultaneously one of the biggest jokes and one of the coolest advancements in smartphone camera technology. Google’s version can be found in the Pixel 2 and Pixel 2 XL smartphones, and they have just released their latest version of it as open source, available to any developer who can make use of it.
It’s detailed in Semantic Image Segmentation with DeepLab in TensorFlow on the Google Research blog. And reading how it works is quite interesting, even if you have no idea how to actually do it yourself. Semantic image segmentation is basically the process by which each pixel in an image is assigned a label, such as “road”, “sky”, “person” or “dog”. It allows apps to figure out what to keep sharp and what to blur.
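As a rough illustration of that last step (not Google’s actual DeepLab pipeline – the helper functions and names here are made up for the sketch), here’s how an app might use a segmentation mask once it has one: keep the pixels labelled “person” sharp and composite everything else over a blurred copy.

```python
import numpy as np

def box_blur(img, radius):
    """Crude box blur: average of shifted copies (edges wrap, fine for a sketch)."""
    shifts = range(-radius, radius + 1)
    acc = sum(np.roll(np.roll(img, dy, 0), dx, 1) for dy in shifts for dx in shifts)
    return acc / float((2 * radius + 1) ** 2)

def portrait_blur(image, person_mask, radius=3):
    """Keep pixels the segmenter labelled 'person' sharp; blur the rest."""
    blurred = box_blur(image.astype(float), radius)
    w = person_mask.astype(float)[..., None]  # broadcast the mask over RGB channels
    return w * image + (1 - w) * blurred
```

A real portrait mode would get `person_mask` from a trained model like DeepLab and would use a depth-aware, disc-shaped blur rather than a box blur, but the compositing idea is the same.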
Google Pixel 2 currently holds first place on DxO’s list of best-rated smartphone cameras. It’s a single-lens camera, yet it offers Portrait mode on both the rear and front cameras. This feature wasn’t available on earlier phones from Google, but now you can get it even on some older devices.
Developer Charles Chow has made the Portrait mode available for free, for the original Google Pixel phone, as well as Nexus 5X and Nexus 6P.
When “Portrait Mode” was first introduced with the iPhone 7 Plus, it was met with mixed reactions. Some people loved it and thought it was a fantastic feat of software engineering. Others hated it: either they felt threatened by it (yes, really) or they just didn’t think it looked believable enough. Neither side is particularly wrong, though.
It is a fantastic feat of software engineering. But it wasn’t perfect in the beginning. It still isn’t, but it’s improved a whole lot in the last year and a bit. Other manufacturers have also developed things further. So, how do they stack up today against a large sensor camera like the Hasselblad X1D? That’s what Marques Brownlee attempts to determine in this video.
You may argue about whether the iPhone 7 Plus’ Portrait mode is any good. Some people love it and some hate it, but apparently it was good enough for Billboard Magazine. Photographer Miller Mobley shot the February 17 issue of Billboard, featuring rising pop star Camila Cabello on the cover. And despite the fact that he generally uses professional and expensive gear, this time he was limited to an iPhone 7 Plus and its Portrait mode. And he really did a fine job.
One of the primary lessons I teach in portraiture is how to control the viewer’s eye, and depth of field is one of the key methods to do that. This is normally the preserve of expensive fast lenses, but soon anyone will be able to achieve it with some new technology I’ve been trialling on the iPhone 7 Plus.
This new iOS 10.1 software, currently in beta and available later this year, uses the twin cameras built into the iPhone 7 Plus. It basically provides you with a new and super simple ‘Portrait’ camera mode which takes two images and uses software to artificially create a creamy depth of field… and it’s great!
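To make the second half of that idea concrete, here’s a toy sketch (purely illustrative, not Apple’s implementation – `box_blur`, `synthetic_dof` and the linear weighting are all assumptions): once the two cameras have produced a per-pixel depth estimate, the software blends each pixel between the sharp frame and a blurred copy according to how far it sits from the chosen focal plane.

```python
import numpy as np

def box_blur(img, radius):
    """Crude box blur: average of shifted copies (edges wrap, fine for a sketch)."""
    shifts = range(-radius, radius + 1)
    acc = sum(np.roll(np.roll(img, dy, 0), dx, 1) for dy in shifts for dx in shifts)
    return acc / float((2 * radius + 1) ** 2)

def synthetic_dof(image, depth, focus_depth, max_radius=3):
    """Pixels on the focal plane stay sharp; pixels far from it get
    progressively more of the blurred copy."""
    blurred = box_blur(image.astype(float), max_radius)
    w = np.abs(depth - focus_depth).astype(float)
    w = (w / max(w.max(), 1e-9))[..., None]  # 0 = in focus, 1 = farthest away
    return (1 - w) * image + w * blurred
```

The real pipeline estimates `depth` from the disparity between the two camera views and uses a lens-like bokeh kernel; the linear blend above is the simplest possible stand-in for that.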