Resolution is a complex topic at the best of times, but even more so when it comes to how it relates to human vision. And it’s really a two-part question to begin with. What resolution do cameras need in order to capture what the human eye sees? And what resolution do screens need before the human eye can no longer make out the individual pixels?
But it’s even more complex than that, as this video from Vsauce explains. It isn’t a new video, but it seems to have resurfaced recently. I thought it was pretty fascinating, especially as the resolution debate never seems to end and we haven’t featured it here on DIYP before. So, here it is.

With smartphone cameras hitting an insane 200 megapixels, interchangeable-lens cameras at 100 megapixels and cinema cameras capable of producing 12K video, it always feels like camera companies are pushing for more and more resolution, even when users aren’t exactly asking for it. But is it really worth it? Does it really matter?
Well, there are certainly times when more resolution matters in a camera (or a display). Those doing visual effects and CG work often require the highest resolution possible for both stills and video. Higher-resolution cameras also give you the freedom to stabilise in post and crop down while retaining as much detail as possible in the final result.
But for simply viewing the content? For the most part, probably not. I’m not even going to try to explain why, because Vsauce does such an excellent job of it in the video above, and it’s a complex topic that can’t easily be boiled down to a few sentences. Except to say this: human vision isn’t the same across your entire field of view.
So, the way the human eye sees and the way a camera sees are two very different things, and trying to equate them with a metric as seemingly simple as “resolution” isn’t so easy.
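That said, if you just want a rough number for the screen half of the question, there is a classic back-of-envelope calculation: assume the fovea resolves detail down to about one arcminute (the common 20/20 approximation, not a figure taken from the video) and work out the viewing distance at which a single pixel shrinks below that angle. Here’s a minimal Python sketch of the idea:

```python
import math

# Assumed foveal acuity: ~1 arcminute per pixel (roughly 60 pixels per
# degree). This is the standard 20/20 approximation, and it ignores the
# drop-off in peripheral acuity that the video digs into.
ARCMIN_PER_PIXEL = 1.0

def min_viewing_distance_cm(diagonal_inches: float, h_px: int, v_px: int) -> float:
    """Distance at which one pixel subtends ARCMIN_PER_PIXEL arcminutes."""
    diag_px = math.hypot(h_px, v_px)                     # screen diagonal in pixels
    pixel_pitch_cm = (diagonal_inches * 2.54) / diag_px  # physical size of one pixel
    theta = math.radians(ARCMIN_PER_PIXEL / 60.0)        # 1 arcminute in radians
    return pixel_pitch_cm / math.tan(theta)

# Example: a 27-inch 4K (3840x2160) monitor
print(f"{min_viewing_distance_cm(27, 3840, 2160):.0f} cm")  # ~54 cm
```

Sit closer than that distance and a sharp-eyed viewer can start picking out individual pixels; sit further away and the extra resolution is wasted even on the fovea, let alone on your far less acute peripheral vision. That asymmetry is exactly why a single “resolution” number never quite settles the argument.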
[via 43Rumors]