Google’s DeepDream AI turns Bob Ross into an LSD-fuelled nightmare
DeepDream is a computer vision AI created by Google which utilises a convolutional neural network. It looks for and enhances patterns in images, an effect known as algorithmic pareidolia. Essentially, it’s seeing things that aren’t really there, like the face some see on the surface of Mars, or bunny rabbits and dragons in clouds.
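The real DeepDream runs gradient ascent on the activations of a trained network, repeatedly nudging the image so that whatever a layer already responds to gets amplified. As a rough illustration of that core loop (this is not Google’s code, and the fixed kernel, function names and learning rate are all my own illustrative choices), here’s a minimal 1-D sketch in NumPy, with a hand-picked filter standing in for a learned CNN filter:

```python
import numpy as np

def filter_energy(signal, kernel):
    """Forward pass: valid cross-correlation, then the activation 'energy'."""
    response = np.correlate(signal, kernel, mode="valid")
    return 0.5 * np.sum(response ** 2), response

def dream_step(signal, kernel, lr=0.1):
    """One gradient-ascent step: nudge the signal to excite the filter more.

    The gradient of 0.5 * ||correlate(x, k)||^2 with respect to x is
    convolve(response, k, mode='full') -- the adjoint of the correlation.
    """
    _, response = filter_energy(signal, kernel)
    grad = np.convolve(response, kernel, mode="full")  # same length as signal
    return signal + lr * grad

rng = np.random.default_rng(0)
signal = rng.standard_normal(64)        # stand-in for one row of an image
kernel = np.array([-1.0, 2.0, -1.0])    # fixed "edge" filter; a real CNN learns these

before, _ = filter_energy(signal, kernel)
for _ in range(20):
    signal = dream_step(signal, kernel)
after, _ = filter_energy(signal, kernel)

# The filter's response grows with every step -- the 1-D analogue of DeepDream
# enhancing the patterns a network layer already "sees" in a picture.
print(after > before)  # True
```

DeepDream proper does this across many layers of a deep network and at several image scales, which is where the eyes, dogs and scorpions come from, but the ascend-the-activation loop is the same idea.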
We’ve seen it used on still images for a while, and you can make your own here. But this video takes things to a whole new level. Based on a 5-minute clip from Bob Ross’s The Joy of Painting, the visuals in this are just plain ridiculous. And if it wasn’t creepy enough already, the sequence is played backwards. So, have a watch of Bob Ross unpainting a picture on LSD.
Created by Vimeo user artBoffin, and titled Deeply Artificial Trees, this video is the stuff of nightmares. But it’s like a train wreck; once you start watching, you just can’t look away.
AI-generated still images are now quite widespread, and we’ve also seen simple animations showing the iteration process. But this is the first time I’ve seen the technique used so effectively on live action footage.
There’s nothing quite as surreal as watching Bob unpaint a tree that randomly turns into some kind of alien scorpion, with other strange-looking creatures making an appearance for a frame or two here and there.
What I find particularly interesting about this one, though, is what it’s done to the audio. The sound is also generated by AI. The original audio went through a similar process that looks for patterns in speech and other sounds, and the AI then generated its own track based on what it “hears”. What makes it so creepy is that because the original video runs backwards, the generated audio also appears to play backwards.
It’s not the only time that artBoffin has played around with audio AI, though. He also uploaded this video showing how it “hears” certain popular voices. And while they don’t say a single intelligible word between them, you can quickly figure out who they’re supposed to be.
The videos above are amusing and entertaining, with no real practical purpose. But the future potential of the technology is both amazing and scary. As the code evolves and learns from more content, I can see it creating completely believable images and sound from nothing other than its own “thoughts”.
John Aldred is a photographer with over 20 years of experience in the portrait and commercial worlds. He is based in Scotland and has been an early adopter – and occasional beta tester – of almost every digital imaging technology in that time. As well as his creative visual work, John uses 3D printing, electronics and programming to create his own photography and filmmaking tools and consults for a number of brands across the industry.