Question: can AI vision systems from Microsoft and Google, which are available for free to anybody, identify NSFW (not safe for work, nudity) images? Can this identification be used to automatically censor images by blacking out or blurring NSFW areas of the image?
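Both vendors do ship moderation endpoints (Google Cloud Vision's SafeSearch detection, Microsoft's Azure content moderation service) that return a likelihood that an image contains adult content. The censoring half of the question is simpler to sketch locally: once some detection step has produced bounding boxes for the areas to hide, blurring them is a few lines of Pillow. The boxes below are hypothetical placeholders standing in for API output, not anything a real service returned.

```python
# Sketch: blur flagged regions of an image with Pillow.
# Assumes a separate detection step (e.g. a cloud moderation API)
# has already returned (left, top, right, bottom) boxes to censor;
# the box used in the demo below is a made-up placeholder.
from PIL import Image, ImageFilter

def censor_regions(img, boxes, radius=25):
    """Return a copy of img with each bounding box heavily blurred."""
    out = img.copy()
    for box in boxes:
        region = out.crop(box)
        out.paste(region.filter(ImageFilter.GaussianBlur(radius)), box)
    return out

if __name__ == "__main__":
    photo = Image.new("RGB", (200, 200), "white")
    # pretend a moderation API flagged this 50x50 area
    censored = censor_regions(photo, [(50, 50, 100, 100)])
```

Swapping the Gaussian blur for a solid black rectangle (paste a black `Image.new` patch instead of the blurred crop) gives the "blacking out" variant with the same structure.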
Search Results for: artificial intelligence
I’m going to get it out of the way right at the start. Having watched the trailer, I want to go and see this movie. This trailer tells us absolutely nothing about what the movie’s about, but every cut pulls me in more. Until the guy starts talking at the end of the trailer, I’ve no idea that the movie is “an AI horror thriller”. But these are the kind of trailers I grew up with.
The trailer, for new movie Morgan, was created by artificial intelligence. Specifically, IBM’s Watson. Does it matter that the movie only scored 42% on Rotten Tomatoes? No, of course it doesn’t. A trailer’s job isn’t to tell us how good or bad a film is. A trailer’s job is to make us want to go and see the film no matter how good or bad it may ultimately be. For me, Morgan’s trailer does exactly that.
According to their website, “Prisma transforms your photos into artworks using the styles of famous artists”. District 7 Media have taken it to the extreme by re-shooting a timelapse of China using the Prisma app.
The future has never seemed less exciting.
A new video released by DJI presents the Phantom X Concept drone, including a bunch of new(ish) and (kinda) useful technology.
Claiming to “turn wide-eyed dreams of future possibilities into fact”, the Phantom X includes multi-angle shooting, artificial intelligence, obstacle avoidance and free-flight object tracking.
Enlisting help from companies and brands such as Adobe, Lexar, House of Cards and Agents of S.H.I.E.L.D., DJI also presents what I predict could become the next biggest PITA – drone sky painting.
Google, apparently, is not the most politically correct mind on the planet. As a recent incident with the Google Photos app illustrates, the artificial intelligence engine is still learning…and making giant mistakes along the way.
Computer programmer and hobbyist photographer Jacky Alciné recently tweeted, “Google Photos, y’all f@#ked up. My friend’s not a gorilla,” along with a screen shot. Jacky had uploaded a photo of himself and a friend to Google Photos, and the automatic tagging feature got it completely wrong.
I got the Think Tank Airport International V 2.0 a few months back when I had a shoot planned that required me to hop on a plane. If you want the long story, you can find it in the review and video below. The short story is that it replaced my old Lowepro CompuTrekker Plus AW as my go-to small-shoot bag even when there are no airplanes involved. (And I think I may have accidentally slept with it once or twice.)
Chinese company Xiaomi is working on an algorithm that will improve low-quality images. The company wants to compete with Apple on smartphone photography, and it has just published a new paper on an AI network called “DeepExposure.” It uses machine learning to improve low-quality images, adding detail while enhancing colors and brightness.
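DeepExposure itself learns local exposure adjustments with a neural network; a much simpler classical stand-in for the "enhancing brightness" part is a gamma curve applied through a lookup table. This is emphatically not the paper's method, just a sketch of the kind of per-pixel tone adjustment such models learn to apply.

```python
# Sketch: gamma-based brightness adjustment via a lookup table.
# NOT DeepExposure -- a classical illustration only.
from PIL import Image

def adjust_gamma(img, gamma=0.6):
    """gamma < 1 lifts shadows (brightens); gamma > 1 darkens."""
    lut = [round(255 * (i / 255) ** gamma) for i in range(256)]
    # Image.point needs one 256-entry table per band (3 for RGB)
    return img.point(lut * len(img.getbands()))
```

A learned model effectively replaces the single global `gamma` with spatially varying, content-aware curves.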
Before the proliferation of speedlights and portable strobes over the last few years, people always asked me why I’d take flash out in the daytime. It was often difficult to formulate an answer that they’d accept. They never really “got it” unless I took them on a shoot with me so they could see first hand.
This video from photographer Manny Ortiz embodies the answer in my head, though. Essentially it’s about having options. Sometimes the natural light will give me exactly what I want, and sometimes it won’t. In the horrible British weather, for me it’s more often won’t. So, I take flash with me.
DeepDream is a computer vision AI created by Google which utilises a convolutional neural network. It looks for and enhances patterns in images, a process called algorithmic pareidolia. Essentially, it sees things that aren’t really there, like the face we may see on the surface of Mars or bunny rabbits & dragons in clouds.
We’ve seen it used on still images for a while, and you can make your own here. But this video takes things to a whole new level. Based on a 5-minute clip from Bob Ross’ The Joy of Painting, the visuals in this are just plain ridiculous. And if it wasn’t creepy enough already, the sequence is played backwards. So, have a watch of Bob Ross unpainting a picture on LSD.
With facial recognition technology you can take pictures of people in the street, run them through publicly available photographs online, and get a match.
You would have heard this statement if you had been listening to the 20 September 2016 episode of Seriously on BBC Radio 4, called ‘The Online Identity Crisis’. I only heard it yesterday, though, as I caught up with it by podcast. It did, however, set me thinking. Just how likely, or easy, is it that someone could take a photo of me in the street, run said image through facial recognition software, and identify me?