Ok, imaging AI is just getting ridiculous now. By now we all know of the Prisma app. It lets you take photos and turn them into images that resemble paintings by artists such as Van Gogh, Monet, etc. A team of researchers at UC Berkeley have come up with a system that does the exact opposite. It looks at paintings and turns them into something that resembles a photograph.
Using the same principles, though, the team have also taken things in an entirely different direction, allowing you to map one type of object onto another, such as turning horses into zebras, or vice versa. And now, we really can compare apples to oranges.
It is very cool, though. Being able to turn a painting into something that looks like a real photograph can offer some great insight. Admirers of the work will get some idea of what the artist was actually seeing when they created the piece. Landscape photographers will be able to see just how the light may have fallen on a location. Then maybe try to recreate it themselves.
As for the whole object/body swapping thing, though, that’s really kinda freaky, but surprisingly convincing, too.
Well, unless you’re a cat turning into a dog. Although that might come in handy if you want to make your own version of the Resident Evil movies.
One thing that is particularly cool is the AI’s ability to change seasons. It can go from summer to winter, or from winter back to summer, in no time at all. This, again, may benefit landscape photographers immensely. Personally, I do a lot of my location scouting during the not-so-great weather months, usually just before and just after winter. So, there are few leaves on the trees, no sunshine, and no snow (this is England).
Running some of my off-season location scouting photos through this might show me what I can expect a location to look like during the nice warm summer months, or the cold snowy (lol – like I said, this is England) winter. Although the examples shown on the website suggest the system might still have a little more learning to do in this respect.
Most AI systems like this use paired data sets to figure out how an image gets from Point A to Point B, then use that data to turn a new image into the desired result. This one, however, uses what’s called “unpaired data”. So, it doesn’t have comparison photos to learn how to go from one image to another. The researchers just feed it a whole lot of images from sources such as Flickr, along with some human refinement to train the system.
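If you’re curious what “unpaired” actually means in practice, here’s a rough sketch, not the Berkeley team’s actual code, of a dataset that simply grabs one image from each of two unrelated folders (say, photos and paintings) with no matching between them. The class name and folder arguments are made up purely for illustration.

```python
import os
import random
from PIL import Image
from torch.utils.data import Dataset

class UnpairedImageDataset(Dataset):
    """Illustrative sketch: yields one image from domain A (e.g. photos)
    and one from domain B (e.g. paintings), with no correspondence
    between the two -- that's all "unpaired" means here."""

    def __init__(self, dir_a, dir_b, transform=None):
        self.paths_a = sorted(os.path.join(dir_a, f) for f in os.listdir(dir_a))
        self.paths_b = sorted(os.path.join(dir_b, f) for f in os.listdir(dir_b))
        self.transform = transform

    def __len__(self):
        # One pass covers the larger of the two collections.
        return max(len(self.paths_a), len(self.paths_b))

    def __getitem__(self, index):
        img_a = Image.open(self.paths_a[index % len(self.paths_a)]).convert("RGB")
        # The B-side image is picked at random -- there is no matching pair.
        img_b = Image.open(random.choice(self.paths_b)).convert("RGB")
        if self.transform:
            img_a, img_b = self.transform(img_a), self.transform(img_b)
        return {"A": img_a, "B": img_b}
```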
The system is definitely not perfect, and the team admits that the methods aren’t as reliable as using paired training data. The ultimate goal is to make the system “cycle consistent”. This means that when you send an original image into the system and generate something new, feeding that new image back in and reversing the process should give you back the original.
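That round-trip idea can be written down as a simple penalty during training. The sketch below is only an illustration of the concept, not the researchers’ implementation. It assumes two generator networks already exist, G (photo to painting) and F (painting to photo), and the 10.0 weighting is an arbitrary choice for the example.

```python
import torch

def cycle_consistency_loss(G, F, real_a, real_b, weight=10.0):
    """Penalise the round trip: A -> B -> A and B -> A -> B should
    each land back on the original image. G and F are assumed to be
    generator networks mapping between the two domains."""
    recon_a = F(G(real_a))   # e.g. photo -> painting -> photo
    recon_b = G(F(real_b))   # e.g. painting -> photo -> painting
    loss = torch.mean(torch.abs(recon_a - real_a)) + \
           torch.mean(torch.abs(recon_b - real_b))
    return weight * loss
```

The smaller this penalty, the closer the system is to being truly “cycle consistent”.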
It might be a way off that goal just yet, but it’s amazing just how good it’s getting.
If you run Linux and want to try it out for yourself, you can download the code here.
[via Engadget]