Lots of interesting news has been coming from Google lately. They seem very devoted to AI development, and there is another novelty they may implement. Google could soon be able to remove unwanted objects from your photos. In other words, if you take a photo through glass or a fence, the algorithm will automatically remove the obstruction and produce a clean photo.
Google seems to be turning into an AI company above all else right now. AI has long been a necessary part of Google’s search, helping it serve the most suitable results. But it’s gone far beyond that now. At Google’s I/O developer conference, CEO Sundar Pichai announced Google Lens.
It’s a new technology designed to leverage Google’s computer vision and AI work. The goal is to make your phone’s camera “smarter”. Your camera won’t just see what you see; it will understand what it’s looking at, in real time. This data can then be used in a multitude of ways, including search.
We are all witnessing rapid technological advancement, and it’s fun to watch how it can be used for art. Artist Damien Henry seems to think so as well, so he wanted to see what happens when he uses machine learning to create a video – from a single image.
He used a prediction algorithm and fed it a single photo at the start. From then on, the machine calculated each following frame, predicting what it would look like based on the one before. The result is an almost hour-long video composed of more than 100,000 frames, and it’s pretty impressive.
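As a rough illustration of that autoregressive idea (each frame predicted from the previous one), here’s a minimal NumPy sketch. The `predict_next_frame` function is a hypothetical stand-in that just shifts the image sideways, vaguely mimicking the train-window motion in Henry’s video; the real model is a learned predictor, not this toy.

```python
import numpy as np

def predict_next_frame(frame):
    # Hypothetical stand-in for the learned model: shift the scene one
    # pixel sideways, loosely mimicking train-window motion.
    return np.roll(frame, shift=1, axis=1)

# The single starting image (random pixels here, for the sketch).
seed = np.random.default_rng(0).random((64, 64))

frames = [seed]
for _ in range(99):          # the real video has more than 100,000 frames
    frames.append(predict_next_frame(frames[-1]))
```

Feed the model its own output long enough and you get a video from one photo; the interesting part is how a learned predictor hallucinates plausible new scenery rather than just shifting pixels.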
Printing photos is gaining popularity even in the digital age, and it undoubtedly has many advantages. Still, if you want to reverse the process and turn an old print into digital format, Google’s PhotoScan now makes it easier than ever.
The app now makes it possible to get glare-free photos without taking multiple images. You can take a snap of a print, and the app will remove the glare from that single photo, in one tap.
DeepDream is a computer vision AI created by Google which utilises a convolutional neural network. It looks for and enhances patterns in images through a process of algorithmic pareidolia. Essentially, it sees things that aren’t really there – like the face we may see on the surface of Mars, or bunny rabbits and dragons in clouds.
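The core trick, stripped of the deep network, is gradient ascent on the image itself: nudge the pixels so that whatever a filter already responds to gets amplified. Here’s a toy pure-NumPy sketch with a single hand-made edge filter standing in for a CNN layer – the kernel, image size, and step size are all invented for illustration, and DeepDream proper does this against real network activations, not one filter.

```python
import numpy as np

def correlate2d(img, k):
    # 'valid' cross-correlation with a 3x3 kernel (plain NumPy loops)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

# A horizontal-edge "feature detector" standing in for a CNN layer.
kernel = np.array([[-1., -1., -1.],
                   [ 0.,  0.,  0.],
                   [ 1.,  1.,  1.]])

img = np.random.default_rng(0).random((16, 16))

def loss(img):
    # How "excited" the filter is by the image.
    return np.sum(correlate2d(img, kernel) ** 2)

losses = [loss(img)]
for _ in range(10):
    r = correlate2d(img, kernel)
    # Gradient of the loss w.r.t. the image: a full convolution of the
    # response map with the kernel (zero-pad, then flipped correlation).
    grad = correlate2d(np.pad(r, 2), kernel[::-1, ::-1])
    img = img + 0.01 * grad   # gradient ascent: enhance what the filter sees
    losses.append(loss(img))
```

Iterating this on a deep layer’s activations is what turns faint cloud-shapes into very definite dogs and eyes.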
We’ve seen it used on still images for a while, and you can make your own here. But this video takes things to a whole new level. Based on a 5-minute clip from Bob Ross’ The Joy of Painting, the visuals in this are just plain ridiculous. And if that wasn’t creepy enough already, the sequence is played backwards. So, have a watch of Bob Ross unpainting a picture on LSD.
Google has developed and launched a new encoder named Guetzli. It’s an open-source algorithm that can reduce the size of JPEG files by up to 35% while keeping the quality unchanged. Alternatively, you can increase the image quality while leaving the file size unchanged.
Guetzli gives you high compression density at good image quality, which can be immensely helpful when saving images for the web. Using it means a website serves less data and therefore loads faster.
RAISR stands for Rapid and Accurate Image Super-Resolution. It’s Google’s prototype software which utilises machine learning to provide better-quality upsampling of low-resolution images. They first showed off the technology in November last year, but now Google have announced that RAISR has been implemented into Google+ for Android.
The point of the technology is to save bandwidth. Many mobile users have fairly limited bandwidth: either their data caps are low, or their connections are just slow. Google see RAISR as a way to save it. The idea is to scale images down before sending them out, so they’re smaller and quicker to transfer. RAISR then blows them back up to their original size on the receiving end, with minimal impact on quality.
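The transport scheme can be sketched in a few lines of NumPy. This toy uses simple block averaging to shrink the image and nearest-neighbour repetition to blow it back up; RAISR’s actual contribution is replacing that naive upscaling with learned per-patch filters, which this sketch does not attempt.

```python
import numpy as np

def downscale2x(img):
    # Average 2x2 blocks: a quarter of the pixels go over the wire.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale2x(img):
    # Naive nearest-neighbour blow-up on the receiving end; RAISR would
    # instead apply learned filters here to recover detail.
    return img.repeat(2, axis=0).repeat(2, axis=1)

original = np.random.default_rng(1).random((128, 128))
sent = downscale2x(original)      # 1/4 of the data is transmitted
restored = upscale2x(sent)        # back to the original dimensions
```

The bandwidth saving comes entirely from the downscale; the quality of the result then hinges on how clever the upscaler is.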
It may be DXOMark’s highest-scoring mobile device camera ever, but the Google Pixel is not without its photographic flaws. Quite a few users have reported flare or “halo effect” issues even when the light source isn’t in the shot. The thing with lens flare, though, is that it’s a physical hardware issue. This is why DSLR and mirrorless lenses come with hoods: they block stray light from entering the lens and reflecting inside the optics, causing flare.
While Google acknowledge that the problem exists and say they will be addressing it, they are combating this hardware problem with a software solution. The general idea is that an algorithm will recognise the flare and then mathematically subtract it from the image. So it’s not really eliminating the flare, just faking its removal in software.
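Under the big assumption that the flare pattern can be modelled, the subtraction step is simple arithmetic. This NumPy sketch fakes everything – a synthetic scene, a Gaussian blob as the halo, and an estimate that conveniently matches it exactly – so it only illustrates the principle, not Google’s actual algorithm, whose hard part is recognising and modelling the flare in the first place.

```python
import numpy as np

h = w = 64
yy, xx = np.mgrid[0:h, 0:w]

# Invented stand-ins: a smooth "scene" and a Gaussian blob as the halo.
scene = 0.3 + 0.2 * np.sin(xx / 5.0)
flare = 0.5 * np.exp(-((xx - 20) ** 2 + (yy - 20) ** 2) / 50.0)
shot = np.clip(scene + flare, 0.0, 1.0)   # what the camera captured

# If the algorithm can estimate the flare (here we cheat and reuse the
# known halo), subtracting it recovers something close to the clean scene.
estimated_flare = 0.5 * np.exp(-((xx - 20) ** 2 + (yy - 20) ** 2) / 50.0)
recovered = np.clip(shot - estimated_flare, 0.0, 1.0)
```

In the real case the estimate will never be perfect – and wherever the flare clipped the sensor to pure white, the underlying detail is gone for good, which is why this is faking removal rather than eliminating it.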
Panoramio has been one of my go-to location scouting tools for quite a while now. It’s an invaluable resource which combines Google Maps with user-contributed imagery. Each of these images is GPS-tagged, showing the exact spot where it was taken. Its design makes it fantastic for finding hidden-gem locations nearby, or for checking out areas you’re visiting before you go.
In recent years, Google have been adding similar functionality into the main Google Maps service. Indeed, whenever you search for a location, there’s an “Explore” button at the bottom, and when you click it, a strip of images appears. But for photographers, or others scouting locations, it doesn’t offer the most efficient workflow.
Google is definitely giving Apple a run for its money. Their new Pixel phone (formerly Nexus) just scored a full 89 on DXO’s mobile camera test. This is the highest score any smartphone has ever received from DXO.
It’s true the iPhone 7 Plus has dual lenses and some pretty awesome features, but as far as camera quality goes, Google is setting a high bar. Not that the iPhone 7 scored badly: it earned 86 with a stellar review, but that’s still three points short.