Chinese company Xiaomi is working on an algorithm that improves low-quality images. The company wants to compete with Apple in smartphone photography, and it has just published a new paper on an AI network called “DeepExposure.” It uses machine learning to improve low-quality images, adding detail while enhancing colors and brightness.
We’ve seen several algorithms that can enhance low-quality photos. Researchers from Oxford University and the Skolkovo Institute of Science and Technology in Moscow have developed a new approach for restoring damaged or low-quality images. Instead of training a neural network on thousands of photos, their system, called Deep Image Prior, works everything out from the original image alone. Without any previous learning, it turns a pixelated or damaged photo into a high-res one.
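The principle can be illustrated with a toy version of the idea: parametrize the reconstruction through a smoothing operator and fit it to the corrupted signal by gradient descent, stopping early so that structure is fitted before noise. This is a minimal sketch of the concept in plain Python, not the paper’s actual network; the sine-wave “image” and all parameter values are made up for illustration:

```python
import math
import random

def smooth(v, passes=4):
    # Circular 3-point moving average applied several times.
    # Expressing the reconstruction through this operator biases
    # gradient descent toward smooth signals, the way a network's
    # architecture biases Deep Image Prior toward natural images.
    n = len(v)
    for _ in range(passes):
        v = [(v[(i - 1) % n] + v[i] + v[(i + 1) % n]) / 3 for i in range(n)]
    return v

def deep_prior_denoise(noisy, iters=50, lr=1.0):
    # Fit parameters theta so that smooth(theta) matches the noisy
    # signal, then stop early: smooth structure is fitted quickly,
    # while high-frequency noise is fitted much more slowly.
    n = len(noisy)
    theta = [0.0] * n
    for _ in range(iters):
        x = smooth(theta)
        resid = [x[i] - noisy[i] for i in range(n)]
        grad = smooth(resid)  # the operator is symmetric, so A^T = A
        theta = [theta[i] - lr * grad[i] for i in range(n)]
    return smooth(theta)

# Toy demo: denoise a noisy sine wave with no training data at all.
random.seed(0)
clean = [math.sin(2 * math.pi * i / 64) for i in range(64)]
noisy = [c + random.uniform(-0.5, 0.5) for c in clean]
denoised = deep_prior_denoise(noisy)
```

The reconstruction ends up closer to the clean signal than the noisy input is, even though the clean signal was never shown to the optimizer.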
Shutterstock has introduced a new search tool that helps you narrow down results even further. Composition Aware Search lets you search images by the position of the objects within them. The tool features a canvas on which you place keywords. You can then move them around to find photos that contain specific objects in a specific arrangement.
Stock photo search engine Everypixel is a tool that should make the quest for perfect stock photos easier. But what’s even more interesting is their tool called Everypixel Aesthetics. It uses neural networks to tell you how “awesome” your photo is.
According to the developers, this tool sees the beauty of stock photos the same way humans do. So before you buy a stock image or upload one of your own, you can run it through this quick test and see what the neural network has to say about it. I tested it out, and the results were surprising, to say the least.
Ok, imaging AI is just getting ridiculous now. By now, we all know the Prisma app, which lets you take photos and turn them into images that resemble paintings by artists such as Van Gogh and Monet. A team of researchers at UC Berkeley has come up with a system that does the exact opposite: it looks at paintings and turns them into something that resembles a photograph.
Using the same principles, though, the team has also taken things in an entirely different direction, allowing you to map individual objects onto each other, such as turning horses into zebras, or vice versa. And now we really can compare apples to oranges.
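The key technical ingredient behind this kind of unpaired translation is a cycle-consistency constraint: mapping a horse to a zebra and back again should return the original horse. A toy numeric sketch of that loss, where the scalar “translators” are invented for illustration rather than being the team’s actual networks:

```python
def cycle_consistency_loss(G, F, samples):
    # G maps domain A -> B (say, horses -> zebras), F maps B -> A.
    # The loss penalizes round trips F(G(x)) that drift away from
    # the original x, forcing the two mappings to stay invertible.
    return sum(abs(F(G(x)) - x) for x in samples) / len(samples)

# Hypothetical scalar "translators": F is the exact inverse of G,
# so the round trip is perfect and the loss is zero.
G = lambda x: 2 * x + 1
F = lambda y: (y - 1) / 2
```

With a mismatched pair of mappings the loss is positive, which is the training signal that keeps unpaired translation honest.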
An Adobe Research paper titled Deep Image Matting might just put an end to green and blue screen techniques. Adobe collaborated with the Beckman Institute for Advanced Science and Technology to develop a new system based on deep convolutional neural networks. The system accurately and intelligently extracts foreground content from its background without any kind of blue or green screen.
Eliminating the green screen isn’t a completely new idea. Lytro’s cinema cameras can already do this based on depth perception, but this solution is 100% software-based. The paper outlines the process used to evaluate images, determining what needs to be cut from the background, and how.
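Once a matte is extracted, dropping the subject onto a new background uses the standard compositing equation C = αF + (1 − α)B at every pixel. A minimal single-channel sketch, with made-up pixel values:

```python
def composite(alpha, foreground, background):
    # Per-pixel alpha compositing: C = alpha * F + (1 - alpha) * B.
    # Alpha values near 1 keep the foreground, fractional values
    # blend soft edges such as hair, and 0 shows the new background.
    return [a * f + (1 - a) * b
            for a, f, b in zip(alpha, foreground, background)]

# Three pixels: fully foreground, a soft edge, fully background.
result = composite([1.0, 0.5, 0.0], [200, 200, 200], [50, 50, 50])
# result is [200.0, 125.0, 50.0]
```

The hard part, and the contribution of the paper, is estimating that alpha channel from an ordinary image; the compositing step itself is this simple.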
Neural networks, deep learning, artificial intelligence: whatever term you prefer, it seems to be a very hot topic lately. Every couple of weeks there are new developments to show off what it can do. We’ve seen CSI-like enhancements, facial recognition that can see through obscuration, 2D-to-3D conversion and plenty more. This one, while it might initially seem quite gimmicky, is actually pretty cool.
Research engineer Roland Meertens has been working with neural networks and the Nintendo Game Boy Camera. His goal is to produce photorealistic results from the tiny 128×112 pixel images the Game Boy Camera captures. Released in 1998, it became the world record holder at the time for the “smallest digital camera”. You could also get a little printer to go along with it. It was a very cool toy for its time, but it wasn’t exactly broadcast quality.
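For context, the naive baseline any learned approach has to beat is plain nearest-neighbour upscaling, which simply repeats pixels and adds no detail. A quick sketch:

```python
def nearest_neighbor_upscale(image, factor):
    # Repeat each pixel `factor` times horizontally and vertically.
    # This enlarges the image but invents nothing new, which is why
    # tiny sources like Game Boy Camera shots need a learned model
    # to look photographic rather than blocky.
    return [[row[x // factor] for x in range(len(row) * factor)]
            for row in image for _ in range(factor)]
```

A 2×1 image upscaled by 2 becomes a 4×2 image of repeated blocks; a neural approach instead has to hallucinate plausible texture to fill those blocks in.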
Detailed and accurate face mapping is a complex task. It requires a series of photos with ideal, consistent lighting from different angles. If you want to capture all the details and imperfections of a face, you need professional lighting and multiple shots. However, a group of researchers is on its way to changing this.
Well, this could be quite the problem for privacy advocates everywhere. In places such as the UK, it could even cause legal issues under laws like the Data Protection Act. Pixelating faces is used throughout the world to conceal people’s identities on camera. Sometimes those people need protection, sometimes they’re caught in the background of footage about something else, and sometimes you just forgot to get a release signed.
Wired reports that researchers at the University of Texas at Austin and Cornell Tech claim to have trained software to see through it. The scariest thing is that the technology is already out there. They didn’t have to develop any new tools; they simply fed a new set of data into existing facial recognition methods, and the software figured the rest out all by itself.
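The weakness is easy to demonstrate in miniature: mosaic pixelation averages each block of pixels, but those averages still differ from face to face, so a matcher trained on pixelated examples can re-identify them. A toy sketch with tiny invented “faces”, standing in for the real recognition models the researchers used:

```python
def pixelate(image, block):
    # Replace each block x block region with its average, as mosaic
    # censoring does. The per-block averages still carry identity cues.
    n = len(image)
    out = [[0.0] * n for _ in range(n)]
    for by in range(0, n, block):
        for bx in range(0, n, block):
            vals = [image[y][x] for y in range(by, by + block)
                    for x in range(bx, bx + block)]
            avg = sum(vals) / len(vals)
            for y in range(by, by + block):
                for x in range(bx, bx + block):
                    out[y][x] = avg
    return out

def identify(probe, gallery):
    # Nearest-neighbour match between a pixelated probe and pixelated
    # gallery entries, a crude stand-in for a recognition model that
    # was retrained on obscured images.
    def dist(a, b):
        return sum((a[y][x] - b[y][x]) ** 2
                   for y in range(len(a)) for x in range(len(a)))
    return min(gallery, key=lambda name: dist(probe, gallery[name]))
```

Even though each censored “face” is reduced to a handful of averages, the matcher picks the right person every time in this toy setup.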
Facebook has announced that it is releasing three of its main image identification algorithms to the public. It’s not the first time Facebook has opened its research to the public, and it likely won’t be the last. In this particular instance, Facebook says it hopes the work will “rapidly advance the field of machine vision”.
Such technology has already come a long way in just the last few years. It’s a bit like what you see on Google when you search by uploading an image: it attempts to identify the person, place, or object in the image and offers similar or related results. It’s also similar to the technology coming in the iOS 10 update that automatically categorises your photos.
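Under the hood, systems like these reduce each image to a feature vector and return the images whose vectors point in a similar direction. A hedged sketch using cosine similarity; the vectors and the little library here are invented, since real systems compute them with a trained network:

```python
import math

def cosine_similarity(a, b):
    # Similarity of two feature vectors: 1.0 for identical directions,
    # near 0 for unrelated ones.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def most_similar(query, library):
    # Return the name of the library image whose feature vector is
    # closest in direction to the query's vector.
    return max(library, key=lambda name: cosine_similarity(query, library[name]))
```

A query vector resembling the “beach” entry retrieves it ahead of the others, which is the basic mechanism behind reverse image search and automatic photo categorisation alike.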