Scientists at MIT and Adobe Research have developed a new way to identify materials using AI vision. They say that the system is not affected by shadows or inconsistent lighting and that it could one day be used to help guide robots that interact with objects in the real world. It may also ultimately come to Lightroom and Photoshop to make image editing easier.
The machine-learning system lets you specify just a single pixel of a material, and it detects all pixels in the scene that match that material. It was trained using “synthetic” data, essentially 3D renders, although the researchers say it’s very effective at picking out materials in real scenes, too. There’s still a long way to go, but it looks like the team has taken a very big first step.
MIT says that the method is accurate even when objects vary in size and shape. The system differs from past methods, which sometimes reference entire objects that can be made of multiple materials. The example provided is a chair with wooden arms and a leather seat. Other methods are limited to a pre-defined set of materials with pretty broad labels, like “wood”. This new system dynamically evaluates all of the pixels in an image to determine material similarities without having to actually know what those materials are.
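To illustrate the core idea, here is a minimal sketch of pixel-similarity material selection. It assumes a hypothetical network has already produced a per-pixel feature embedding; the function names, the cosine-similarity metric, and the threshold are illustrative assumptions, not the authors’ actual implementation.

```python
import numpy as np

def material_selection_mask(features, query_xy, threshold=0.9):
    """Given per-pixel feature embeddings of shape (H, W, D) from some
    hypothetical material-feature network, return a boolean mask of pixels
    whose features are cosine-similar to the feature at the query pixel."""
    # Normalize each pixel's feature vector so dot products become cosine similarity.
    norms = np.linalg.norm(features, axis=-1, keepdims=True)
    unit = features / np.clip(norms, 1e-8, None)
    qx, qy = query_xy
    query = unit[qy, qx]          # feature vector at the clicked pixel, shape (D,)
    similarity = unit @ query     # (H, W) map of cosine similarities
    return similarity >= threshold

# Toy example: a 2x2 "image" with two distinct material embeddings.
feats = np.array([[[1.0, 0.0], [1.0, 0.0]],
                  [[0.0, 1.0], [1.0, 0.0]]])
mask = material_selection_mask(feats, query_xy=(0, 0))
print(mask)  # three pixels share the query pixel's material
```

The key property this sketch shares with the described system is that no material label is ever assigned: selection is driven purely by similarity in feature space to the single pixel the user picks.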
The primary purpose of the machine-learning system appears to be aiding robotics, helping robots recognise the world around them. But the researchers have other potential uses in mind, too, such as image editing. Being able to easily select just a subject’s shirt, or every pair of jeans worn by multiple people in an image, would make for some very quick and easy corrections.
Knowing what material you are interacting with is often quite important. Although two objects may look similar, they can have different material properties. Our method can facilitate the selection of all the other pixels in an image that are made from the same material.
– Prafull Sharma, Lead Author
It’s a pretty cool technique that, although already quite advanced, is still in its very early stages of development. I expect we’ll probably see this come up in some Adobe MAX sneak peek video within the next year or two.
You can read the full report here.