Thanks to Google Street View, we can see many corners of the world we may never visit. Some artists even use it to create photos of remote places from their own homes. Now, Google itself has found a way to produce professional-looking photos from its Street View shots. The company has created Creatism, a deep-learning system that analyzes Street View scenes in search of beautiful compositions, then turns the most promising scenes into shots worthy of professional photographers.
This week, Google’s AlphaGo program beat the world’s best Go player, Ke Jie, in two straight games of their three-game match. Go is considered to be the world’s hardest board game, and some AI experts didn’t think that a machine would be able to best humans for another decade.
In the area of photography, companies like Google have already introduced various aspects of machine learning, allowing users to search their photos by keyword without ever having tagged them. Combined with other features like facial recognition, this gives the user surprisingly accurate and useful results. It’s clear that AI has reached a powerful inflection point.
Colorizing a black and white image in Photoshop requires a huge amount of time, not to mention that you need exceptional skill to do it. A year ago, Richard Zhang and a team at the University of California, Berkeley revealed Algorithmia, an app that does it automatically. It was fun to play with, but there was still plenty of room for improvement. Now, a year later, they have found a new approach. And this time, the results are way more impressive.
Lots of interesting news has been coming from Google lately. They seem to be very devoted to the development of AI, and there is another novelty they may implement. Soon, Google may be able to remove unwanted objects from your photos. In other words, if you take a photo through glass or a fence, the algorithm will automatically remove the obstruction and produce a clean photo.
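Google hasn’t published code for this, but the classic trick behind obstruction removal gives a feel for how it can work: if you capture several slightly shifted frames, the obstruction lands on different pixels in each one, and a per-pixel median recovers the clean background. The toy NumPy sketch below illustrates only that idea, not Google’s actual method.

```python
import numpy as np

# Toy scene: a 4x4 "image" captured in 5 frames. The true background is
# all 10s; in each frame a small obstruction (value 99) covers a
# different pixel, as it might when the camera moves slightly between shots.
frames = np.full((5, 4, 4), 10.0)
for i in range(5):
    frames[i, i % 4, i % 4] = 99.0  # obstruction lands somewhere new each frame

# Because the obstruction never covers the same pixel in most frames,
# a per-pixel median across the frame stack recovers the clean background.
clean = np.median(frames, axis=0)
print(clean)  # every pixel is back to the background value of 10.0
```

A learned system can do far better than a median, of course, but the underlying intuition is the same: parallax makes the obstruction inconsistent across views, while the scene behind it stays put.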
This time last year, there was a pretty big fuss about FindFace, an app that uses facial recognition to discover people’s identities with pretty high reliability. But for 33-year-old Fu Gui from China, facial recognition technology turned out to be life-changing. It helped him find his family and reunite with them after 27 years apart.
A few decades ago, it was impossible to imagine a camera without film. It was also hard to imagine a gadget such as a smartphone. Now, the two are merged together and becoming better all the time. But what would happen if you took the camera away from a smartphone but could still take photos with it? One theory is that this could await us in the future, thanks to artificial intelligence. The endgame for cameras could be having no camera at all.
DeepDream is a computer vision AI created by Google which utilises a convolutional neural network. It looks for and enhances patterns in images through a process of algorithmic pareidolia. Essentially, it sees things that aren’t really there, like the face we may see on the surface of Mars or bunny rabbits and dragons in clouds.
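The pattern-enhancing trick at DeepDream’s core is gradient ascent: instead of adjusting the network to fit an image, you adjust the image to excite the network. The real system does this on layers of a trained CNN; the sketch below shrinks that to a single hypothetical “neuron” with a preferred pattern `k`, purely to show the mechanics.

```python
import numpy as np

# Toy analogue of DeepDream's gradient ascent. A single "neuron" with
# weight vector k stands in for a whole trained CNN layer.
rng = np.random.default_rng(0)

k = np.array([1.0, -1.0, 1.0, -1.0])   # the pattern this neuron responds to
k = k / np.linalg.norm(k)

x = rng.normal(scale=0.1, size=4)       # a noisy "image" with no clear pattern

# Gradient ascent on 0.5 * (x . k)^2. The gradient is (x . k) * k, so
# each step nudges the image along the neuron's preferred pattern,
# amplifying whatever faint trace of it was already present.
for _ in range(50):
    a = x @ k
    x = x + 0.5 * (a * k)

# The "image" is now dominated by the pattern the neuron was looking
# for -- the same mechanism that makes dogs and eyes emerge in DeepDream.
alignment = abs((x / np.linalg.norm(x)) @ k)
print(round(alignment, 3))
```

Run on a real network, the same loop amplifies textures, eyes, and animal faces, because those are the patterns the network’s filters were trained to respond to.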
We’ve seen it used on still images for a while, and you can make your own here. But this video takes things to a whole new level. Based on a 5-minute clip from Bob Ross’ The Joy of Painting, the visuals in this are just plain ridiculous. And if it wasn’t creepy enough already, the sequence is played backwards. So, have a watch of Bob Ross unpainting a picture on LSD.
Stock photo search engine Everypixel is a tool that should make the quest for perfect stock photos easier. But what’s even more interesting is their tool called Everypixel Aesthetics. It uses neural networks to tell you how “awesome” your photo is.
According to the developers, this tool sees the beauty of stock photos the same way humans do. So before you buy a stock image or upload one of your own, you can run it through this quick test and see what the neural network has to say about it. I tested it out, and the results were surprising, to say the least.
Ok, imaging AI is just getting ridiculous now. By now we all know of the Prisma app. It lets you take photos and turn them into images that resemble paintings by artists such as Van Gogh, Monet, etc. A team of researchers at UC Berkeley have come up with a system that does the exact opposite: it looks at paintings and turns them into something that resembles a photograph.
Using the same principles, though, the team have also taken things in an entirely different direction, allowing you to map individual objects between each other, such as turning horses into zebras, or vice versa. And now, we really can compare apples to oranges.
An Adobe Research paper titled Deep Image Matting might just put an end to green and blue screen techniques. Adobe collaborated with the Beckman Institute for Advanced Science and Technology to develop a new system based on deep convolutional neural networks. This system extracts foreground content from its background accurately and intelligently, without any kind of blue or green screen background.
Eliminating the green screen isn’t a completely new idea. Lytro’s cinema cameras are able to do this based on depth perception. But this solution is 100% software based. The paper outlines a process for evaluating images, determining what needs to be cut from the background, and how.
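The problem the paper attacks can be stated with the standard compositing equation: each observed pixel `I` is a blend of foreground `F` and background `B` weighted by an alpha matte, `I = alpha * F + (1 - alpha) * B`. A green screen makes this easy because `B` is known and uniform; Deep Image Matting instead trains a network to estimate alpha against natural backgrounds. The NumPy toy below shows only the forward equation, plus the trivial inversion when `F` and `B` happen to be known constants.

```python
import numpy as np

F, B = 200.0, 50.0                       # known foreground / background values
alpha = np.array([0.0, 0.25, 0.5, 1.0])  # a made-up matte for four pixels

# Forward compositing: blend foreground over background per pixel.
I = alpha * F + (1 - alpha) * B

# With F and B known, alpha can be recovered in closed form. The hard
# part -- which the paper's network solves -- is estimating it when
# the background is arbitrary and unknown.
alpha_hat = (I - B) / (F - B)
print(alpha_hat)  # recovers the original matte
```

That closed-form recovery is essentially what a chroma keyer does; the contribution of Deep Image Matting is getting a usable alpha when no such clean known background exists.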