We’ve already seen that AI-generated faces can look so realistic that it’s sometimes difficult to distinguish them from real ones. And if you want to put a fake headshot to use, Generated Photos lets you choose from 100,000 AI-generated faces. They’re all free to download and you can use them however you want. What’s more, many of them look so good that it’s hard to tell them apart from photos licensed by stock photo companies.
Deepfakes have become a pretty hot topic in the world of visual AI over the last couple of years, and the technology has come a very long way in a short amount of time. It’s incredible and terrifying at the same time. And now Samsung has jumped on the bandwagon.
Researchers at the Samsung AI Center in Moscow and the Skolkovo Institute of Science and Technology have published a new paper detailing software that generates 3D animated heads from a still image. And while it’s not perfect, being able to do this from a handful of images, or even a single one, is pretty mind-blowing.
We’ve seen NVIDIA’s impressive content-aware tool and noise-removal tool. They have recently developed a generative adversarial network (GAN) that easily customizes the styles of realistic faces and creates new ones. That’s right, the super-realistic faces you can see in the lead image are not real at all!
Shutterstock has introduced a new search tool that helps you narrow down search results even further. Composition Aware Search lets you search images by the position of the objects within them. The tool features a canvas on which you place keywords. You can then move them around and get photos that contain specific objects in a specific arrangement.
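Shutterstock hasn’t published how the tool works internally, but the basic idea of position-aware search can be sketched simply: index each image with its detected objects and their normalized positions, then rank images by how closely those positions match where you dropped the keywords on the canvas. Everything below (the index, filenames, and scoring) is a hypothetical illustration, not Shutterstock’s implementation.

```python
# Hypothetical sketch of position-aware image search. Each indexed image
# maps object labels to normalized (x, y) centers; a query does the same
# for keywords placed on a canvas. Lower score = better layout match.

def score(image_objects, query):
    """Sum of distances between requested and actual object positions."""
    total = 0.0
    for label, (qx, qy) in query.items():
        if label not in image_objects:
            return float("inf")  # image lacks a requested object
        ix, iy = image_objects[label]
        total += ((qx - ix) ** 2 + (qy - iy) ** 2) ** 0.5
    return total

# Toy index: object positions as fractions of image width/height.
index = {
    "beach.jpg": {"person": (0.2, 0.6), "dog": (0.7, 0.7)},
    "park.jpg":  {"person": (0.8, 0.5), "dog": (0.3, 0.8)},
}

# Query: "person" on the left of the canvas, "dog" on the right.
query = {"person": (0.25, 0.55), "dog": (0.75, 0.7)}
best = min(index, key=lambda name: score(index[name], query))
print(best)  # beach.jpg matches the requested layout more closely
```

A real system would get the object positions from an object-detection model run over the whole library at indexing time, so the query itself stays cheap.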
Thanks to Google Street View, we can see many corners of the world we may never visit. Some artists even use it to create photos of remote places from their own home. Now, Google itself has found a way to produce professional-looking photos from its Street View shots. They have created Creatism, a deep-learning system that analyzes Street View scenes in search of a beautiful composition. The algorithm finds scenes it can turn into shots worthy of professional photographers.
OK, imaging AI is just getting ridiculous now. By now we all know of the Prisma app. It lets you take photos and turn them into images that resemble paintings by artists such as Van Gogh, Monet, etc. A team of researchers at UC Berkeley has come up with a system that does the exact opposite: it looks at paintings and turns them into something that resembles a photograph.
Using the same principles, though, the team has also taken things in an entirely different direction, allowing you to map individual objects between each other, such as turning horses into zebras or vice versa. And now we really can compare apples to oranges.
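The trick that makes unpaired mappings like horses-to-zebras work is a cycle-consistency constraint: one network G translates domain A to domain B, a second network F translates back, and training penalizes how far F(G(x)) drifts from the original x. The real generators are deep convolutional networks; this toy sketch stands them in with simple functions on vectors just to show the loss itself.

```python
# Toy illustration of a cycle-consistency loss. G and F below are stand-in
# functions; in the actual system they are learned image-to-image networks.

def l1(a, b):
    """Mean absolute difference between two equal-length vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_loss(G, F, x):
    # ||F(G(x)) - x||_1: translating to the other domain and back
    # should reconstruct the original input.
    return l1(F(G(x)), x)

G = lambda v: [2.0 * x for x in v]       # "horse -> zebra" stand-in
F_good = lambda v: [x / 2.0 for x in v]  # consistent inverse mapping
F_bad = lambda v: list(v)                # ignores the translation entirely

x = [0.1, 0.5, 0.9]
print(cycle_loss(G, F_good, x))  # 0.0 - consistent pair
print(cycle_loss(G, F_bad, x))   # 0.5 - penalized during training
```

Because the constraint only needs an image and its round-trip reconstruction, no paired horse/zebra examples are required, which is exactly why the technique works on unpaired photo collections.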
Neural networks, deep learning, artificial intelligence: whatever term you prefer, it seems to be a very hot topic lately. Every couple of weeks there seem to be new developments showing off what it can do. We’ve seen CSI-like enhancements, facial recognition that can see through obscuration, converting 2D to 3D, and plenty more. This one, while it might initially seem quite gimmicky, is actually pretty cool.
Research Engineer Roland Meertens has been working with neural networks and the Nintendo Game Boy Camera. His goal is to produce photorealistic results from the 190×144 pixel images the camera produces. Released in 1998, it held the world record at the time for the “smallest digital camera”. You could also get a little printer to go along with it. It was a very cool toy for its time, but it wasn’t exactly broadcast quality.
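To appreciate what the neural network adds, it helps to see the baseline it has to beat. The simplest way to enlarge an image is nearest-neighbor upscaling, which just repeats pixels and adds no detail at all; a learned super-resolution model like Meertens’ instead hallucinates plausible detail that was never captured. This is a generic baseline sketch, not his code.

```python
# Naive nearest-neighbor upscaler: every pixel is simply repeated
# `factor` times in both directions. The result is bigger but no
# sharper, which is why learned super-resolution is interesting.

def upscale_nearest(img, factor):
    """img: 2D list of grayscale values; returns img enlarged by `factor`."""
    out = []
    for row in img:
        wide = [px for px in row for _ in range(factor)]   # repeat columns
        out.extend([list(wide) for _ in range(factor)])    # repeat rows
    return out

tiny = [[0, 255],
        [255, 0]]
big = upscale_nearest(tiny, 2)
print(len(big), len(big[0]))  # 4 4
```

A neural approach is trained on pairs of low-resolution and high-resolution images, so at inference time it can fill each enlarged block with learned texture instead of a flat copy of one pixel.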
Well, this could be quite the problem for privacy advocates everywhere. In locations such as the UK, it could even cause legal issues under things like the Data Protection Act. Pixelating faces is used throughout the world to conceal identities on camera. Sometimes the subjects need protection, sometimes they’re caught in the background of footage about something else, and sometimes you just forgot to get a release signed.
Wired reports that researchers at the University of Texas at Austin and Cornell Tech claim to have trained a piece of software to see through it. The scariest thing about this is that the technology is already out there. They didn’t actually have to develop any new tools; they simply fed a new set of data into existing facial recognition methods, and the software figured the rest out by itself.
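The key insight is that the researchers never had to undo the pixelation. Pixelation replaces each block of pixels with its average, which destroys the detail a human relies on, but the coarse per-block values that survive can still be distinctive. A recognizer trained on pixelated examples can therefore match a pixelated face to a known identity. The toy sketch below illustrates the idea with tiny synthetic “faces” and nearest-neighbor matching; it is a conceptual demo, not the researchers’ actual pipeline.

```python
# Why pixelation isn't safe against a model trained on pixelated data:
# block averaging keeps coarse statistics that remain distinctive.

def pixelate(img, block):
    """Replace each block x block region of a square image with its mean."""
    n = len(img)
    out = [[0.0] * n for _ in range(n)]
    for bi in range(0, n, block):
        for bj in range(0, n, block):
            vals = [img[i][j] for i in range(bi, bi + block)
                              for j in range(bj, bj + block)]
            mean = sum(vals) / len(vals)
            for i in range(bi, bi + block):
                for j in range(bj, bj + block):
                    out[i][j] = mean
    return out

def distance(a, b):
    """Sum of squared differences between two 2D grids."""
    return sum((x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb))

# Synthetic 4x4 "faces" standing in for real images.
alice = [[9, 9, 0, 0], [9, 9, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
bob   = [[0, 0, 9, 9], [0, 0, 9, 9], [1, 1, 0, 0], [1, 1, 0, 0]]

# "Train" on pixelated references, then match a pixelated unknown face.
gallery = {"alice": pixelate(alice, 2), "bob": pixelate(bob, 2)}
unknown = pixelate(alice, 2)
match = min(gallery, key=lambda name: distance(gallery[name], unknown))
print(match)  # alice - identified despite the pixelation
```

This is exactly the “no new tools” point: the defeat comes entirely from the choice of training data, so any off-the-shelf recognition system could in principle be retrained the same way.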