We’ve already seen that AI-generated faces can look so realistic that it’s sometimes difficult to distinguish them from real ones. And if you want to put a fake headshot to use, Generated Photos lets you choose from 100,000 AI-generated faces. They’re all free to download, and you can use them however you want. What’s more, many of them look so good that it’s hard to tell them apart from photos licensed by stock photo companies.
From relighting images to removing backgrounds, the applications of AI tools in photography are many. A new AI-powered tool introduced by Chinese scientists can accurately fill in the blank spaces in all kinds of photos. Be it the front of a building, a landscape, or even a portrait, the AI is trained to fill in the gaps surprisingly accurately.
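The researchers’ tool uses a trained network, which hasn’t been published as code here. To illustrate the basic idea of inpainting, though, here’s a toy sketch that fills a masked-out hole by repeatedly averaging each missing pixel with its neighbors (a simple diffusion fill, nowhere near as capable as a learned model — the function name and parameters are my own, not from the paper):

```python
import numpy as np

def naive_inpaint(image, mask, iterations=50):
    """Fill masked pixels by repeatedly averaging their neighbors.

    image: 2D float array (grayscale); mask: boolean array, True where
    pixels are missing. A toy diffusion-based stand-in for learned
    inpainting -- it can only smooth in surrounding tones, not invent
    texture the way a trained network can.
    """
    filled = image.copy()
    filled[mask] = 0.0          # zero out the unknown region to start
    for _ in range(iterations):
        # Average of the 4-connected neighbors (edges reuse themselves).
        padded = np.pad(filled, 1, mode="edge")
        neighbors = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        # Only update the missing pixels; known pixels stay untouched.
        filled[mask] = neighbors[mask]
    return filled

# A flat gray image with a 3x3 hole punched in the middle.
img = np.full((9, 9), 0.5)
hole = np.zeros((9, 9), dtype=bool)
hole[3:6, 3:6] = True
result = naive_inpaint(img, hole)
```

On this flat image the hole converges back to the surrounding gray; real photos, of course, need the learned approach the researchers describe.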
NVIDIA’s researchers came up with an impressive algorithm that’s able to generate realistic faces. Some of them are so realistic that you may have a hard time figuring out that they were computer-generated. If you’re up for a challenge, there’s now a website where you can test how many fake faces you can distinguish from real ones. It can get more difficult than you may think.
Chinese company Xiaomi is working on an algorithm that will improve low-quality images. The company wants to compete with Apple in smartphone photography, and it has just published a new paper on an AI network called “DeepExposure.” It uses machine learning to improve low-quality images by adding detail to them while enhancing colors and brightness.
We’ve seen some of the algorithms that can enhance low-quality photos. Researchers from Oxford University and the Skolkovo Institute of Science and Technology in Moscow have developed a new approach for restoring damaged or low-quality images. Instead of training the neural network on thousands of photos, their system, called Deep Image Prior, works everything out from the original image alone. And without any previous learning, it turns a pixelated or damaged photo into a high-res one.
Shutterstock has introduced a new search tool, which helps you narrow down the search results even further. Composition Aware Search lets you search images by the position of the objects in them. The tool features a canvas on which you place the keywords. Then you can move them around and get the photos that contain specific objects in a specific arrangement.
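Shutterstock hasn’t published how Composition Aware Search works under the hood, but the core idea — matching a keyword to where an object sits in the frame — can be sketched with a simple 3×3 grid over pre-tagged bounding boxes. All of the field names and the helper functions below are hypothetical, not Shutterstock’s API:

```python
def region_of(box, width, height):
    """Map a bounding box's center to a cell in a 3x3 grid,
    e.g. ('top', 'left'). Boxes use pixel coordinates x1/y1/x2/y2."""
    cx = (box["x1"] + box["x2"]) / 2 / width
    cy = (box["y1"] + box["y2"]) / 2 / height
    cols = ["left", "center", "right"]
    rows = ["top", "middle", "bottom"]
    return rows[min(int(cy * 3), 2)], cols[min(int(cx * 3), 2)]

def composition_search(images, wanted):
    """Return ids of images whose tagged objects sit in the requested cells.

    images: list of dicts with 'id', 'width', 'height' and 'objects'
    (a keyword -> bounding-box mapping, assumed to come from some
    object detector). wanted: keyword -> (row, col) cell.
    """
    hits = []
    for img in images:
        ok = all(
            kw in img["objects"] and
            region_of(img["objects"][kw], img["width"], img["height"]) == cell
            for kw, cell in wanted.items()
        )
        if ok:
            hits.append(img["id"])
    return hits

catalog = [
    {"id": "a", "width": 300, "height": 300,
     "objects": {"dog": {"x1": 0, "y1": 0, "x2": 90, "y2": 90}}},
    {"id": "b", "width": 300, "height": 300,
     "objects": {"dog": {"x1": 200, "y1": 200, "x2": 290, "y2": 290}}},
]
matches = composition_search(catalog, {"dog": ("top", "left")})
```

Dragging a keyword around Shutterstock’s canvas is, conceptually, just changing the `(row, col)` cell you ask for.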
Stock photo search engine Everypixel is a tool that should make the quest for perfect stock photos easier. But what’s even more interesting is their tool called Everypixel Aesthetics. It uses neural networks to tell you how “awesome” your photo is.
According to the developers, this tool sees the beauty of stock photos in the same way humans do. So before you buy a stock image or upload one of your own, you can run it through this quick test and see what the neural network has to say about it. I tested it out, and the results were surprising, to say the least.
Ok, imaging AI is just getting ridiculous now. By now, we all know of the Prisma app. It lets you take photos and turn them into images that resemble paintings by artists such as Van Gogh and Monet. A team of researchers at UC Berkeley has come up with a system that does the exact opposite of that. It looks at paintings and turns them into something that resembles a photograph.
Using the same principles, though, the team has also taken things in an entirely different direction, allowing you to map individual objects to one another, such as turning horses into zebras, or vice versa. And now, we really can compare apples to oranges.
An Adobe Research paper titled Deep Image Matting might just put an end to green and blue screen techniques. Adobe collaborated with the Beckman Institute for Advanced Science and Technology to develop a new system based on deep convolutional neural networks. The system extracts foreground content from its background accurately and intelligently, without any kind of blue or green screen behind the subject.
Eliminating the green screen isn’t a completely new idea. Lytro’s cinema cameras are able to do this based on depth perception. But this solution is 100% software-based. The paper outlines the process used to evaluate images, determining what needs to be cut from the background, and how.
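The hard part, and what the deep network actually estimates, is the alpha matte: a per-pixel value saying how much of each pixel belongs to the foreground. Once you have that matte, swapping in a new backdrop is just the classic compositing blend. Here’s a minimal numpy sketch of that last step (the matte here is hand-written for illustration; in the paper it would come from the network):

```python
import numpy as np

def composite(foreground, background, alpha):
    """Blend foreground over background with a per-pixel alpha matte.

    alpha is 0 where a pixel is pure background, 1 where it is pure
    foreground, and fractional along soft edges like hair. Images are
    float RGB arrays in [0, 1]; alpha has shape HxWx1 so it broadcasts.
    """
    return alpha * foreground + (1.0 - alpha) * background

fg = np.ones((2, 2, 3)) * np.array([1.0, 0.0, 0.0])   # red "subject"
bg = np.ones((2, 2, 3)) * np.array([0.0, 0.0, 1.0])   # blue backdrop
alpha = np.array([[[1.0], [0.5]],
                  [[0.0], [1.0]]])                     # one soft-edge pixel
out = composite(fg, bg, alpha)
```

The 0.5 pixel comes out as an even red/blue mix, which is exactly why fractional mattes matter for hair and other soft edges, where a hard cutout would leave fringing.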