Not long ago, Google introduced Clips, an AI-powered camera trained to capture the best moments of your life. It has no LCD screen, and its only control is a shutter button, which is completely optional. Google Clips uses artificial intelligence to recognize and save your "perfect moments" on its own. But how is that possible? According to Google, it's because they hired "a documentary filmmaker, a photojournalist, and a fine arts photographer" to help train the camera's neural network.
Thanks to Google Street View, we can see many corners of the world we may never visit. Some artists even use it to create photos of remote places without leaving home. Now Google itself has found a way to produce professional-looking photos from its Street View imagery. The company created Creatism, a deep-learning system that analyzes Street View scenes in search of beautiful compositions. The algorithm identifies promising scenes and turns them into shots worthy of a professional photographer.
Post-production focusing has gotten a good amount of attention in the past two months, thanks to the new HTC One and Google's latest Camera update. But those companies weren't the first to experiment with the technology. Two years ago, a company named Lytro introduced the world's first light-field camera, which allowed pictures to be refocused after they had been shot. Its first camera, however, was nothing more than a nice gadget with no real use. Today, the company announced its second entry into the game, and it's nothing like what it released back in 2012.