Camera sensor technology has come a very long way since it was first developed in the 1970s. And while sensors have gotten much smaller and faster since Steven Sasson’s original 3.6kg, 0.01-megapixel digital camera, the basic principle of how a sensor records an image is still pretty much the same – as explained in this wonderfully technical and geeky video from IMSAI Guy.
In it, he begins by explaining the difference between standard sensors and back-illuminated (BSI) sensors – and how the latter works in basically the same way as an octopus’s eyes. He illustrates how a pixel “sees” the light that hits it, and how a sensor arranges all these pixels in a standard (albeit very large) matrix array to make up the final image.
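If it helps to picture that pixel matrix in code rather than as a circuit diagram, here’s a minimal Python sketch of the same idea – a sensor as a grid of photosites read out row by row. To be clear, this isn’t code from the video, and the quantum efficiency and full-well numbers are made up purely for illustration.

```python
import numpy as np

# Toy model: a sensor is a 2D matrix of photosites. Each one converts the
# photons that hit it into a charge, and the charges are read out row by row
# to build the final image.

HEIGHT, WIDTH = 4, 6                # a hypothetical, absurdly small "sensor"
rng = np.random.default_rng(seed=0)

# Simulated photon counts arriving at each photosite during one exposure
photons = rng.poisson(lam=50, size=(HEIGHT, WIDTH))

QUANTUM_EFFICIENCY = 0.6            # assumed: fraction of photons converted to electrons
FULL_WELL = 100                     # assumed: max electrons a photosite can hold

# The photodiode converts photons to electrons, clipping at full-well capacity
electrons = np.minimum(photons * QUANTUM_EFFICIENCY, FULL_WELL)

# Read the matrix out row by row, like a sensor's row/column addressing
image = np.zeros((HEIGHT, WIDTH))
for row in range(HEIGHT):
    image[row, :] = electrons[row, :]

print(image)
```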
It’s fascinating to think that, in principle, just about any of us could order the components on eBay or Amazon to build a sensor (of sorts). All each pixel needs is a photodiode and three MOSFET transistors. Of course, it’d be absolutely huge, with each “pixel” being hundreds of times larger than the pixels on your average sensor these days. Your typical through-hole photodiode is around 3mm in diameter. The pixels on something like the Sony A7R IV, for example, are just 3.76 microns across (about 1/800th the size).
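For a quick sanity check on that comparison (assuming the ~3mm through-hole photodiode and the A7R IV’s published 3.76-micron pixel pitch), the arithmetic works out like this:

```python
# Rough size comparison between a hobbyist "pixel" and a modern sensor pixel
through_hole_photodiode_mm = 3.0      # typical through-hole photodiode diameter
a7r_iv_pixel_mm = 3.76e-3             # Sony A7R IV pixel pitch: 3.76 microns

ratio = through_hole_photodiode_mm / a7r_iv_pixel_mm
print(f"The DIY 'pixel' would be about {ratio:.0f}x wider than an A7R IV pixel")
# -> about 798x wider, i.e. the real pixel is roughly 1/800th the size
```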
But equally fascinating are BSI sensors and how they see light versus how more traditional front-illuminated sensors see it. The latter sees light in much the same way that almost every animal on the planet does… except for one: the octopus, whose eyes see the world in a very similar way to a BSI sensor.
You don’t need to know this stuff to become a better photographer, but it’s absolutely fascinating.
[via Hackaday]