We’ve seen several uses of AI aimed at improving photos. Whether they’re boosting resolution or turning selfies into decent portraits, they usually work on a single, existing photo. But a method from NVIDIA generates photos of people who don’t actually exist. And it’s interesting and kinda creepy at the same time.
The team of scientists behind the project has released a paper on using a generative adversarial network (GAN) to progressively grow both the generator (which produces a sample) and the discriminator (an “adaptive loss function that gets discarded once the generator has been trained”). They sourced the photos from the CelebA database – a database of celebrity faces.
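If you’re wondering what “generator” and “discriminator” actually look like in code, here’s a deliberately tiny sketch of a single adversarial training step. It’s just an illustration in PyTorch with made-up layer sizes and random stand-in data, not NVIDIA’s actual model:

```python
import torch
import torch.nn as nn

# Toy generator: maps a random latent vector to a flattened "image".
G = nn.Sequential(
    nn.Linear(64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 32 * 32 * 3), nn.Tanh(),
)

# Toy discriminator: scores how "real" a flattened image looks.
D = nn.Sequential(
    nn.Linear(32 * 32 * 3, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

real = torch.rand(16, 32 * 32 * 3)   # stand-in for real photos
z = torch.randn(16, 64)               # random latent codes

# Discriminator step: learn to tell real photos apart from generated ones.
fake = G(z).detach()
d_loss = loss_fn(D(real), torch.ones(16, 1)) + loss_fn(D(fake), torch.zeros(16, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: learn to fool the discriminator into calling fakes "real".
g_loss = loss_fn(D(G(z)), torch.ones(16, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The two networks push against each other: as the discriminator gets better at spotting fakes, the generator has to produce more convincing faces to fool it.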
They trained the neural network on low-res photos of celebrities, adding layers progressively until it generates 1024 x 1024 images. They then trained the network on these CelebA-HQ images. From these photos, the neural network is able to generate numerous faces – none of which actually exist. In addition to faces, the AI is also able to generate different objects and scenes as photorealistic images, as you can see in the video above. It doesn’t give particularly impressive results with dogs and cats, though.
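To make the layer-by-layer idea concrete, here’s a plain-Python sketch of what that growing schedule looks like. The 4 x 4 to 1024 x 1024 doubling reflects the paper’s description; the step counts and the fade-in variable `alpha` are purely illustrative:

```python
# Sketch of the "progressive growing" schedule: training starts at a tiny
# resolution, and new layers are blended in as the resolution doubles.
resolution = 4
while resolution <= 1024:
    print(f"Training stage at {resolution}x{resolution}")
    if resolution < 1024:
        # The next, higher-resolution layer is faded in gradually:
        # alpha ramps from 0 (old layers only) to 1 (new layer fully active).
        for step in range(5):
            alpha = step / 4
            print(f"  fading in {resolution * 2}x{resolution * 2} layer, alpha={alpha:.2f}")
    resolution *= 2
```

Starting small and growing the networks this way is what lets the training stay stable all the way up to full-resolution faces.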
The scientists write in their paper that the quality of their results is “generally high compared to earlier work on GANs.” They note that the training is also “stable in large resolutions,” but they add that “there is a long way to true photorealism.” Still, many of these generated photos look very realistic, and if we disregard the occasional irregularities, the results are pretty impressive. Then again, when I remember that these aren’t actual people, it makes me a bit uneasy.
[Progressive Growing of GANs for Improved Quality, Stability, and Variation via Engadget]