The news that Google has abandoned the Nik Collection affected users all over the world. Over the years, many photographers have embraced it as part of their workflow, and one such user has started a petition to bring the collection back. Sascha Rheker from Germany started the petition on Change.org in an attempt to make Google continue developing the Nik Collection.
Google haven’t so much announced as “slipped in” that they’ve ceased development of the Nik Collection, via a banner. Google acquired Nik Software, the company behind the Collection, in 2012. It was only just over a year ago that Google announced it was making the Nik Collection a completely free download for all users. Now, it seems, the new price tag doesn’t justify continued development.
The Nik Collection contains seven applications: Analog Efex Pro, Color Efex Pro, Silver Efex Pro, Viveza, HDR Efex Pro, Sharpener Pro and Dfine. For now it’s still available, compatible with Mac OS X 10.7 through 10.10, Windows Vista, 7 & 8, Adobe Photoshop CS4 through CC2015 and Lightroom 3 through 6/CC. Unofficially, the Nik Collection also seems to run fine on Windows 10, at least for now.
Lots of interesting news has been coming from Google lately. They seem to be very devoted to the development of AI, and there is another novelty they may implement. Soon, Google could be able to remove unwanted objects from your photos. In other words, if you take a photo through glass or a fence, the algorithm will automatically remove the obstruction and produce a clean photo.
Google seems to be turning into an AI company above all else right now. AI is already a core part of Google’s search, helping surface the most suitable results. But it’s gone far beyond that now. At Google’s I/O developer conference, CEO Sundar Pichai announced Google Lens.
It’s a new technology designed to leverage Google’s computer vision and AI technology. The goal is to make your phone’s camera “smarter”. Now your camera won’t only see what you see, but it’ll understand what it is, in real time. This data can then be used in a multitude of ways, including search.
We are all witnessing rapid technological advancement, and it’s fun to watch how it can be used for art. Artist Damien Henry seems to think so as well, so he wanted to see what happens when he uses machine learning to create a video – from a single image.
He used a prediction algorithm and fed it a single photo to start with. From then on, the model predicted each following frame from the one before it. The result is an almost hour-long video composed of more than 100,000 frames, and it’s pretty impressive.
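The loop behind this kind of project is simple to sketch: each new frame is generated from the previous one, over and over. Here’s a minimal illustration in Python, where `predict` is a stand-in for whatever trained next-frame model the artist actually used (the real model and its details are not public in this article):

```python
def generate_video(first_frame, predict, n_frames):
    """Autoregressive frame generation: start from one frame and
    repeatedly feed the latest frame back into the predictor.
    'predict' is a placeholder for a trained next-frame model."""
    frames = [first_frame]
    for _ in range(n_frames - 1):
        frames.append(predict(frames[-1]))
    return frames
```

With a trivial stand-in predictor you can see the chaining at work: `generate_video(0, lambda f: f + 1, 5)` returns `[0, 1, 2, 3, 4]`. In the real project, each "frame" would be an image tensor and `predict` a neural network, but the control flow is the same.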
Printing photos is gaining popularity even in the digital age, and it undoubtedly has many advantages. Still, if you want to reverse the process and turn an old print into a digital format, Google’s PhotoScan now makes it easier than ever.
The app now makes it possible to get glare-free photos without taking multiple images. You can take a snap of a print, and the app will remove the glare from that single photo, in one tap.
DeepDream is a computer vision AI created by Google which utilises a convolutional neural network. It looks for and enhances patterns in images, a process known as algorithmic pareidolia. Essentially, it’s seeing things that aren’t really there – like the face we may see on the surface of Mars, or bunny rabbits and dragons in clouds.
We’ve seen it used on still images for a while, and you can make your own here. But this video takes things to a whole new level. Based on a 5-minute clip from Bob Ross’ The Joy of Painting, the visuals in this are just plain ridiculous. And if it wasn’t creepy enough already, the sequence is played backwards. So, have a watch of Bob Ross unpainting a picture on LSD.
Google has developed and launched a new encoder named Guetzli. It’s an open source algorithm that allows you to reduce the size of JPEG files by up to 35% while keeping the quality unchanged. Alternatively, you can increase the image quality while leaving the file size unchanged.
Guetzli offers high compression density at good image quality. It can be immensely helpful when saving images for the web: using it will make a website consume less data and thus load faster.
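Guetzli ships as a command-line tool that takes a PNG or JPEG input and writes a re-encoded JPEG. A hedged sketch of driving it from Python is below; the file names are placeholders, and the helper simply bails out if the `guetzli` binary isn’t installed or the input file doesn’t exist:

```python
import os
import shutil
import subprocess

def compress_with_guetzli(src, dst, quality=90):
    """Re-encode src (PNG/JPEG) as a smaller JPEG via the guetzli CLI.
    Returns the fractional size saving (e.g. 0.3 for 30% smaller),
    or None if guetzli isn't installed or src doesn't exist.
    Note: Guetzli refuses quality settings below 84."""
    if shutil.which("guetzli") is None or not os.path.exists(src):
        return None
    subprocess.run(["guetzli", "--quality", str(quality), src, dst],
                   check=True)
    return 1 - os.path.getsize(dst) / os.path.getsize(src)
```

One caveat worth knowing before adopting it in a build pipeline: Guetzli is notoriously slow and memory-hungry compared with a standard JPEG encoder, so it’s best suited to one-off optimisation of assets rather than on-the-fly compression.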
RAISR stands for Rapid and Accurate Image Super-Resolution. It’s Google’s prototype software which utilises machine learning to provide better quality upsampling of low resolution images. They first showed off the technology in November last year, but now Google have announced that RAISR has been implemented into Google+ for Android.
The point of the technology is to save bandwidth. Many mobile users have fairly limited bandwidth, whether through low data caps or slow connections. The idea is to scale images down before sending them out, so they’re smaller and faster to transmit. RAISR then blows them back up to their original size on the receiving end, with minimal impact on quality.
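The pipeline described above can be sketched in a few lines. The sketch below uses a plain 2×2 box average for the downscale and nearest-neighbour for the upscale; RAISR itself replaces that naive upscale with learned per-patch filters, which is the part this toy version does not attempt. Images here are just lists of rows of grayscale values:

```python
def downsample(img, factor=2):
    """Shrink the image with a simple box average.
    A 4x4 image becomes 2x2, i.e. a quarter of the data to send."""
    out = []
    for r in range(0, len(img), factor):
        row = []
        for c in range(0, len(img[0]), factor):
            block = [img[r + i][c + j]
                     for i in range(factor) for j in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def upsample(img, factor=2):
    """Blow the image back up with nearest-neighbour duplication.
    RAISR would instead apply machine-learned filters here to
    restore detail that plain duplication cannot recover."""
    out = []
    for row in img:
        big = [v for v in row for _ in range(factor)]
        out.extend([list(big) for _ in range(factor)])
    return out
```

Sending the downsampled image means transmitting a quarter of the pixels; the receiving device does the `upsample` step locally, which is exactly the bandwidth trade RAISR is built around.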
It may be DXOMark’s highest scoring mobile device camera ever, but the Google Pixel is not without its photographic flaws. Quite a few users have reported flare or “halo effect” issues with their camera even when the light source isn’t in the shot. The thing with lens flare, though, is that it’s a physical hardware issue. This is why DSLR and mirrorless lenses come with hoods: they block light from entering the lens and reflecting inside the optics, causing flare.
While Google acknowledge that the problem exists and will be addressing it, they are combating this hardware problem with a software solution. The general idea is that an algorithm will recognise the flare and then mathematically subtract it from the image. So, it’s not really eliminating the flare, just removing it in software after the fact.
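The subtraction step itself is conceptually simple. Here’s a minimal sketch, assuming the flare has already been estimated as a per-pixel brightness map (the estimation is the hard, Google-internal part; this toy version just takes the map as an input and works on lists of 8-bit grayscale rows):

```python
def remove_flare(image, flare_estimate):
    """Subtract an estimated flare component from each pixel,
    clamping results to the valid 0-255 range. 'flare_estimate'
    is assumed to be the same size as 'image'; producing it is
    the actual detection problem and is not modelled here."""
    return [
        [max(0, min(255, p - f)) for p, f in zip(img_row, flare_row)]
        for img_row, flare_row in zip(image, flare_estimate)
    ]
```

For example, `remove_flare([[200, 120]], [[50, 130]])` gives `[[150, 0]]` – and that second pixel shows the catch: wherever the flare has fully washed out the sensor data, subtraction clamps to black rather than recovering detail, which is why this is "faking" removal rather than truly eliminating the flare.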