How Google built its “best ever” camera for the Pixel 2 smartphone

Oct 9, 2017

John Aldred

John Aldred is a photographer with over 20 years of experience in the portrait and commercial worlds. He is based in Scotland and has been an early adopter – and occasional beta tester – of almost every digital imaging technology in that time. As well as his creative visual work, John uses 3D printing, electronics and programming to create his own photography and filmmaking tools and consults for a number of brands across the industry.


Google’s Pixel 2 smartphone quickly dethroned the new iPhone 8 Plus once DxOMark got its hands on it. And the reviewers so far seem to be giving it great praise, both as a camera and as a phone. But how is the camera inside the Pixel 2 actually put together?

That’s what Nat of Nat and Friends wanted to find out. Being a Google employee, she has a little more access than most of us. So, in this video, Nat takes us inside Google’s HQ to speak to engineers and learn more about the camera’s development and how it works.

YouTube video

It’s pretty interesting to see how they pack such a relatively decent camera down into the tiny space available. A sensor, six lens elements and motors to drive the optical image stabilisation all packed into an area which Nat describes as being the size of a blueberry.

I never really looked much into the lenses on smartphone cameras before. So, finding that there are six separate elements packed into that tiny space was quite surprising for me. Google’s Computational Photography Team Lead Marc Levoy (yes, that Marc Levoy) explains.

They have very strange shapes, they’ve got weird Ws in them and so on, because you’re trying to correct for what are called aberrations that distort the image, in a very small amount of space.

These elements work in much the same way as the elements in a standard-sized lens for a “real camera”. They help prevent pincushion and barrel distortion, producing a final result that more closely matches what we actually saw.

Marc also talks about how cameras are moving away from a dedicated hardware process, toward a computational software process. And it’s quite amazing just how far these computational software processes have come. Not just in mobile phone photography, but with digital cameras in general.
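As a rough illustration of what that software shift means, here is a minimal sketch of burst-frame merging, the core idea behind approaches like Google’s HDR+: capture several quick, noisy frames and average them rather than taking one long exposure. This is a toy simulation with NumPy under simplifying assumptions (perfectly aligned frames, simple Gaussian sensor noise), not Google’s actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" scene: a smooth horizontal gradient, values 0..1.
scene = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))

def capture(scene, noise_sigma=0.1):
    """Simulate one noisy short-exposure frame from a tiny sensor."""
    noise = rng.normal(0.0, noise_sigma, scene.shape)
    return np.clip(scene + noise, 0.0, 1.0)

# A burst of N quick frames instead of a single long exposure.
burst = [capture(scene) for _ in range(8)]

# Merging: averaging aligned frames reduces noise by roughly sqrt(N).
merged = np.mean(burst, axis=0)

single_err = np.abs(burst[0] - scene).mean()
merged_err = np.abs(merged - scene).mean()
print(f"single frame error: {single_err:.4f}")
print(f"merged burst error: {merged_err:.4f}")
```

In a real phone pipeline the hard part is what this sketch skips: aligning handheld frames, rejecting moving subjects, and tone-mapping the merged result, all done in software on frames a fixed piece of hardware has already captured.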

While I still think some computational processes, like faking shallow depth of field and relighting your subjects, have a way to go, they are only going to get better. But I still can’t see myself ever ditching those larger cameras in favour of a phone.

It does hold great promise for the future, though. For those quick snaps with the camera we always have with us in our pocket.

