I got the Think Tank Airport International V 2.0 a few months back when I had a shoot planned that required me to hop on a plane. If you want the long story, you can find it in the review and video below. The short story is that it replaced my old Lowepro CompuTrekker Plus AW as my go-to bag for small shoots, even when there are no airplanes involved. (And I think I may have accidentally slept with it once or twice.)
Chinese company Xiaomi is working on an algorithm that will improve low-quality images. The company wants to compete with Apple on smartphone photography, and it has just published a new paper on an AI network called “DeepExposure.” It uses machine learning to enhance low-quality images, adding detail while boosting colors and brightness.
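DeepExposure itself learns exposure adjustments with a neural network, but the underlying idea of remapping pixel brightness can be sketched in a few lines. The snippet below is a hypothetical, simplified stand-in (plain gamma correction), not the DeepExposure algorithm:

```python
# Toy illustration of exposure adjustment via gamma correction.
# NOTE: this is NOT DeepExposure -- just a minimal classical stand-in
# showing how per-pixel brightness can be remapped.

def adjust_exposure(pixels, gamma):
    """Apply gamma correction to 8-bit pixel values (0-255).

    gamma < 1.0 brightens shadows; gamma > 1.0 darkens them.
    """
    return [round(255 * (p / 255) ** gamma) for p in pixels]

dark_row = [10, 40, 90, 160, 255]
print(adjust_exposure(dark_row, 0.5))  # brighten an underexposed row
```

A learned method like DeepExposure effectively chooses such adjustments per region of the image rather than applying one global curve.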
Before the proliferation of speedlights and portable strobes over the last few years, people always asked me why I’d take flash out in the daytime. It was often difficult to formulate an answer that they’d accept. They never really “got it” unless I took them on a shoot with me so they could see firsthand.
This video from photographer Manny Ortiz embodies the answer in my head, though. Essentially, it’s about having options. Sometimes the natural light will give me exactly what I want, and sometimes it won’t. In the horrible British weather, for me it’s more often “won’t.” So, I take flash with me.
DeepDream is a computer vision AI created by Google which utilises a convolutional neural network. It looks for and enhances patterns in images through a process called algorithmic pareidolia. Essentially, it sees things that aren’t really there, like the face we may see on the surface of Mars, or bunny rabbits and dragons in clouds.
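Under the hood, DeepDream works by gradient ascent: it repeatedly nudges the input image in the direction that increases a chosen network layer’s activations, so whatever patterns the layer faintly “sees” get amplified. The real thing backpropagates through a convolutional network; this toy sketch uses a made-up quadratic function as a stand-in for a layer’s response:

```python
# Toy sketch of the DeepDream loop: gradient ascent on an "activation".
# A real implementation computes gradients through a trained CNN; here a
# hypothetical quadratic stands in for how strongly a layer responds.

def activation(x):
    # Stand-in layer response: peaks when the pixel "looks like" the
    # pattern the layer detects (arbitrarily placed at 0.8).
    return -(x - 0.8) ** 2

def numeric_grad(f, x, eps=1e-5):
    # Central-difference gradient estimate.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

def dream_step(pixel, lr=0.1, steps=50):
    """Repeatedly move the pixel value to increase the activation."""
    for _ in range(steps):
        pixel += lr * numeric_grad(activation, pixel)
    return pixel

print(round(dream_step(0.2), 3))  # drifts toward the pattern the "layer" likes
```

Run on a whole image through a real network, the same loop is what turns clouds into bunny rabbits and dragons.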
We’ve seen it used on still images for a while, and you can make your own here. But this video takes things to a whole new level. Based on a 5-minute clip from Bob Ross’s The Joy of Painting, the visuals in this are just plain ridiculous. And if it wasn’t creepy enough already, the sequence is played backwards. So, have a watch of Bob Ross unpainting a picture on LSD.
With facial recognition technology you can take pictures of people in the street, run them through publicly available photographs online, and get a match.
You would have heard this statement if you had been listening to the 20 September 2016 episode of Seriously on BBC Radio 4, called ‘The Online Identity Crisis’. I only heard it yesterday, though, as I caught up with it by podcast. It did, however, set me thinking. Just how likely, or how easy, would it be for someone to take a photo of me in the street, run said image through facial recognition software, and identify me?