Neural network turns your pixelated, black and white Game Boy photos into realistic portraits
Neural networks, deep learning, artificial intelligence: whatever term you prefer, it's a very hot topic lately. Every couple of weeks there seems to be a new development showing off what it can do. We've seen CSI-like enhancements, facial recognition that can see through obscuration, 2D-to-3D conversion and plenty more. This one, while it might initially seem quite gimmicky, is actually pretty cool.
Research engineer Roland Meertens has been working with neural networks and the Nintendo Game Boy Camera. His goal is to produce photorealistic results from the 190×144 pixel images the Game Boy Camera produces. Released in 1998, it was the world record holder at the time for the "smallest digital camera", and you could also get a little printer to go along with it. It was a very cool toy for its time, but it wasn't exactly broadcast quality.
Meertens says that he had one of these as a child and used it regularly. Although he couldn't locate the one he originally owned, he managed to find another for his experiment. And it's a cool concept. The Game Boy Camera produced extremely low resolution images, and it did so using only four shades of grey (two if you don't count pure black and white).
The thing with neural networks is that you need to teach them. You feed them sets of data so they can make comparisons and figure out what something's supposed to be when you show them something new. As there's no real training dataset pairing Game Boy Camera images of faces with the actual photographs, Roland simulated one.
He created a dataset by taking photographs as input and converting each into a four-shade greyscale image, giving a result similar to that produced by the Game Boy Camera. Here you can see the simulated image on the left next to the original.
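The article doesn't share Roland's actual conversion code, but the degradation step is easy to approximate. Here's a minimal sketch in NumPy; the nearest-neighbour sampling and the evenly spaced grey levels are my own assumptions, not his published method:

```python
# A rough sketch of the simulation step (my own code, not Roland's):
# shrink a greyscale photo, then crush it down to four shades of grey,
# much like the Game Boy Camera's output.
import numpy as np

def simulate_gameboy(image, out_h=144, out_w=190):
    """Nearest-neighbour downscale a 2D uint8 array, then quantize
    its 0-255 values to the four shades 0, 85, 170 and 255."""
    h, w = image.shape
    rows = np.arange(out_h) * h // out_h      # source row per output row
    cols = np.arange(out_w) * w // out_w      # source column per output column
    small = image[np.ix_(rows, cols)]
    levels = np.clip(small // 64, 0, 3)       # bin each pixel into 0..3
    return (levels * 85).astype(np.uint8)

# A smooth gradient collapses into four flat bands:
gradient = np.tile(np.linspace(0, 255, 380, dtype=np.uint8), (288, 1))
gb = simulate_gameboy(gradient)
print(gb.shape, np.unique(gb))
```

Pairing each original photo with its degraded counterpart gives exactly the kind of input/target dataset the network needs.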
Roland then fed many of these image pairs into the network to help it learn. It starts to figure out what certain noise patterns might suggest, such as highlights, shadows and gradients. The more pairs the network is fed, the more accurate its results become. Every so often, he ran test sequences to see how well it was doing, and to figure out when enough was enough.
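The article doesn't describe the actual network architecture, so as a hypothetical stand-in, here's a toy version of the same training idea: generate (degraded, original) pairs and let a model (here just a single linear weight trained by gradient descent) shrink its error as it sees more of them.

```python
# A toy illustration of paired training (not Roland's actual model,
# which the article doesn't detail): the "network" here is one linear
# weight mapping a 4-shade pixel back towards its original value.
import numpy as np

rng = np.random.default_rng(0)

def make_pair():
    """A fake 'original' 4x4 patch and its 4-shade degraded version."""
    original = rng.uniform(0, 255, size=16)
    degraded = np.clip(original // 64, 0, 3) * 85
    return degraded, original

w = 0.0            # model: predicted_original = w * degraded_pixel
lr = 1e-5
losses = []
for step in range(2000):
    x, y = make_pair()
    err = w * x - y
    losses.append(float(np.mean(err ** 2)))
    w -= lr * np.mean(err * x)   # gradient descent on mean squared error

# The error shrinks as more pairs are seen, mirroring the improvement
# Roland observed in his periodic test sequences.
print(round(losses[0]), round(float(np.mean(losses[-100:]))), round(w, 2))
```

A real image-to-image network replaces the single weight with millions of parameters, but the loop is the same: show a degraded input, compare the prediction against the true photo, nudge the parameters, repeat.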
Early on, you can see it doesn’t really have much of an idea what it’s supposed to be, although you can make out the basic shapes in the final result on the right. Just a couple of hundred images later, the results drastically improve.
And after it’s seen a few thousand, it’s not a bad final result at all. It’s still not perfect, but if you knew the person, you’d actually be able to recognise them now.
So those were simulated Game Boy Camera images, but the simulation was only there to help the network learn how the Game Boy Camera sees. And it seems to have worked rather well. When he ran some actual shots from the Game Boy Camera through it, the results cleaned up extremely well.
Ok, so it doesn’t quite compare to a Nikon D810 or a Canon 5DSR, but the improvement is very striking. It’d be interesting to see just how much better it can get with a few thousand more faces from which to learn.
Whether there'll be any real world practical use for this remains to be seen. I'm sure there aren't many people out there with Game Boy Cameras expecting to get high resolution shots; it'd be easier just to pull out their phones. But the same techniques might help to "remaster" older digital images from other cameras.
You can find out more about Roland’s process over on his website at Pinch of Intelligence.
John Aldred is a photographer with over 20 years of experience in the portrait and commercial worlds. He is based in Scotland and has been an early adopter – and occasional beta tester – of almost every digital imaging technology in that time. As well as his creative visual work, John uses 3D printing, electronics and programming to create his own photography and filmmaking tools and consults for a number of brands across the industry.