Virtual reality can take you to places you otherwise couldn’t see, and there’s still plenty of room for improving and experimenting with VR technology. In a recent blog post, Google announced that it’s experimenting with light field photography to create a more realistic VR experience. To make this possible, the company is using a solution that seems both simple and clever: a rotating rig of 16 GoPro cameras.
Portrait Mode has been simultaneously one of the biggest jokes and one of the coolest advancements in smartphone camera technology. Google’s version ships on the Pixel 2 and Pixel 2 XL smartphones, and the company has just released its latest version as open source, available to any developer who can make use of it.
It’s detailed in Semantic Image Segmentation with DeepLab in TensorFlow on the Google Research blog, and reading how it works is quite interesting even if you have no idea how to actually do it. Semantic image segmentation is the process of assigning a label, such as “road”, “sky”, “person” or “dog”, to every pixel in an image. It’s what allows apps to figure out what to keep sharp and what to blur.
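To make the idea concrete, here is a toy sketch (not Google’s DeepLab, and with made-up pixel values) of how a per-pixel label map can drive a portrait-mode effect: pixels labeled “person” stay sharp, and everything else gets a naive box blur.

```python
def box_blur(image, x, y):
    """Average a pixel with its in-bounds neighbors (3x3 box blur)."""
    h, w = len(image), len(image[0])
    vals = [image[j][i]
            for j in range(max(0, y - 1), min(h, y + 2))
            for i in range(max(0, x - 1), min(w, x + 2))]
    return sum(vals) / len(vals)

def portrait_effect(image, labels, keep="person"):
    """Blur every pixel whose label is not `keep`."""
    return [[image[y][x] if labels[y][x] == keep else box_blur(image, x, y)
             for x in range(len(image[0]))]
            for y in range(len(image))]

# 3x3 grayscale "image": a bright "person" column against a dark "sky".
image  = [[10, 200, 10],
          [10, 200, 10],
          [10, 200, 10]]
labels = [["sky", "person", "sky"],
          ["sky", "person", "sky"],
          ["sky", "person", "sky"]]

result = portrait_effect(image, labels)
# Person pixels keep their value (200); sky pixels are smoothed.
```

A real pipeline would of course produce the label map with a trained segmentation network and use a far better blur, but the division of labor is the same: segmentation decides, the blur executes.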
A few days ago, Getty and Google announced upcoming changes as a result of a licensing deal. Those changes have now arrived: the “View Image” button is gone from Google image search. Instead, if you want to see a photo, you’ll have to go directly to the website where it’s hosted.
Not long ago, Google introduced Clips, an AI-powered camera trained to capture the best moments of your life. It has no LCD screen, and its only control is a shutter button, which is completely optional. Google Clips uses artificial intelligence to recognize and save your “perfect moments” by itself. But how is that possible? According to Google, it’s because they hired “a documentary filmmaker, a photojournalist, and a fine arts photographer” to help train the camera’s neural network.
Google’s Art & Culture app has an amusing new feature. If you take a selfie within the app, it finds your look-alike in a work of art. Google compares your face to over 70,000 artworks in their Art Project database and then tries to find your doppelgänger. Sometimes the results are stunningly accurate. But at other times they’re just hilarious.
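Google hasn’t published how the matching works, but a common approach for look-alike search is to encode each face as an embedding vector and return the artwork whose embedding is closest, for example by cosine similarity. The sketch below assumes hypothetical 3-dimensional embeddings (real systems use hundreds of dimensions and a trained face-recognition network).

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def find_doppelganger(selfie_embedding, artwork_embeddings):
    """Return the artwork whose face embedding is most similar to the selfie's."""
    return max(artwork_embeddings,
               key=lambda name: cosine_similarity(selfie_embedding,
                                                  artwork_embeddings[name]))

# Hypothetical embeddings for two artworks.
artworks = {
    "Girl with a Pearl Earring": [0.9, 0.1, 0.2],
    "American Gothic":           [0.1, 0.8, 0.3],
}
match = find_doppelganger([0.85, 0.15, 0.25], artworks)
```

At the scale of 70,000 artworks, a production system would use an approximate nearest-neighbor index rather than the linear scan shown here.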
Although artificial intelligence can be impressive, sometimes we’re reminded that it isn’t always. You may remember the time the Google Photos app tagged a couple of African Americans as “gorillas.” After an apology and a promise to fix it, Google indeed “fixed it”: it simply removed the label “gorilla” from its lexicon, along with some other words.
In a recent blog post, Google introduced a new AI that can judge your photos on both technical and aesthetic quality. According to Google researchers, the network “sees” photos much the way humans do. With time it could become even more accurate, and it could find applications in image editing, judging competition entries, and more.
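Aesthetic-scoring models of this kind typically don’t output a single number directly; a common design (used, for example, by Google’s NIMA model) is to predict a probability distribution over rating buckets from 1 to 10 and report the mean of that distribution as the score. A minimal sketch of that final step, with a made-up network output:

```python
def mean_score(distribution):
    """Mean of a predicted rating distribution over buckets 1..10."""
    assert abs(sum(distribution) - 1.0) < 1e-6, "must be a probability distribution"
    return sum(bucket * p for bucket, p in enumerate(distribution, start=1))

# Hypothetical network output: most probability mass around ratings 6-8.
predicted = [0.0, 0.0, 0.0, 0.05, 0.10, 0.25, 0.35, 0.20, 0.05, 0.0]
score = mean_score(predicted)  # ~6.7 out of 10
```

Predicting a full distribution rather than one number lets the model express disagreement between human raters, which matters because aesthetic judgments are inherently subjective.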
Google has now announced the availability of the final Developer Preview of Android 8.1. While the finalised version won’t roll out until December, the new preview features “near-final” system images. It also activates the Pixel Visual Core chipset in both the Pixel 2 and Pixel 2 XL.
Essentially, this is an eight-core system on a chip (SoC) that can run three trillion operations per second. Android Central reports that Google’s HDR+ routines will run five times faster while using less than a tenth of the energy of the standard image processor. This means better dynamic range and reduced noise through computational imaging.
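The noise-reduction half of that claim comes from burst merging: averaging N aligned frames of the same scene cuts random sensor noise by roughly a factor of √N. This toy simulation is not Google’s HDR+ pipeline (which aligns and merges underexposed frames with far more sophistication), just an illustration of why merging a burst helps.

```python
import random

def merge_burst(frames):
    """Average aligned frames pixel-wise; noise std drops ~1/sqrt(N)."""
    n = len(frames)
    return [sum(pixels) / n for pixels in zip(*frames)]

def rms_error(img, reference):
    """Root-mean-square deviation from the noise-free reference."""
    return (sum((a - b) ** 2 for a, b in zip(img, reference)) / len(img)) ** 0.5

random.seed(0)
true_scene = [100.0] * 1000  # a flat gray patch, 1000 "pixels"

def noisy_frame():
    """Simulate one capture with Gaussian sensor noise (std = 10)."""
    return [p + random.gauss(0, 10) for p in true_scene]

burst = [noisy_frame() for _ in range(9)]  # a 9-frame burst
merged = merge_burst(burst)

# Merging 9 frames should cut RMS noise to roughly a third (1/sqrt(9)).
single_rms = rms_error(burst[0], true_scene)
merged_rms = rms_error(merged, true_scene)
```

The catch is that real scenes and hands move between frames, so the merge must align frames first; doing that quickly and efficiently is exactly the kind of workload a dedicated chip like the Pixel Visual Core is built for.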