The conspiracy theory about “the Giant of Kandahar” has been making the rounds more than ever lately, thanks to a ridiculous AI-generated image that has been fooling the internet over the past few weeks. The image has gone viral and spawned a bunch of comical memes, but it also serves as a reminder of the potential dangers of AI-generated content.
AI images of Paris drowning under garbage go viral
A series of AI-generated images falsely portraying Paris submerged under heaps of garbage has gone viral. The pictures were shared through a TikTok video that has garnered an astonishing 450,000 views. These images depict iconic Parisian landmarks, including the Eiffel Tower, the Louvre Museum, and the Arc de Triomphe, all overshadowed by towering piles of trash.
Accompanying the images is a Thai-language text sticker conveying the message, “This is what the French capital city, Paris, looks like. The dream city… now turned into this in reality.” The caption adds, “The government invested money in war,” alluding to the aftermath of the civil unrest that erupted in late June after a police officer fatally shot a 17-year-old boy during a traffic stop.
Battling misinformation: Tech giants to watermark AI-generated content
AI-generated images and videos are a significant threat to proper and accurate information. Fake imagery causes confusion, panic, hate, and bullying, and it’s now easier to create than ever. But there’s a step forward to resolving, or at least minimizing, the issue. Seven major tech companies, including OpenAI, Microsoft, and Google, have promised to create watermarks for AI-generated content. This way, it could become safer to share AI content without misleading people about its authenticity.
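None of the seven companies has published the technical details of its watermarking scheme yet, so here’s just a flavor of the general idea: a minimal Python sketch that hides a marker string in the least significant bits of an image’s pixels. Real, production-grade watermarks are far more sophisticated and designed to survive compression, cropping, and re-encoding; everything below (the tag string, the file names) is made up for illustration.

```python
# A minimal LSB (least-significant-bit) watermark sketch, for illustration only.
# Production schemes are more robust; this one breaks under any lossy re-encode.
from PIL import Image

TAG = "AI-GENERATED"  # hypothetical marker string


def embed_watermark(src_path: str, dst_path: str, tag: str = TAG) -> None:
    """Hide `tag` in the least significant bits of the blue channel."""
    img = Image.open(src_path).convert("RGB")
    bits = [int(b) for byte in tag.encode() for b in f"{byte:08b}"]
    w, h = img.size
    assert len(bits) <= w * h, "image too small to hold the tag"
    px = img.load()
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        r, g, b = px[x, y]
        px[x, y] = (r, g, (b & ~1) | bit)  # overwrite the blue LSB
    img.save(dst_path, "PNG")  # lossless format, so the hidden bits survive


def read_watermark(path: str, length: int = len(TAG)) -> str:
    """Read `length` characters back out of the blue-channel LSBs."""
    img = Image.open(path).convert("RGB")
    px = img.load()
    w = img.size[0]
    bits = [px[i % w, i // w][2] & 1 for i in range(length * 8)]
    return bytes(
        int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8)
    ).decode(errors="replace")


embed_watermark("generated.png", "tagged.png")
print(read_watermark("tagged.png"))  # -> AI-GENERATED
```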
Viral rescue dog photo shared after Turkey and Syria earthquake is a 2018 stock image
On the night of 6 February 2023, a devastating magnitude-7.8 earthquake hit south-eastern Turkey, near the border with Syria. The epicenter was near the Turkish city of Gaziantep, and both countries were severely hit: at the time of writing, the overall death toll has risen to nearly 10,000 people and is still climbing.
As often happens in times like this, many heartbreaking photos have appeared, and some have become more viral than the news itself. Perhaps you’ve seen a photo of a Labrador guarding someone’s hand under the rubble? While it’s certainly a gut-wrenching shot that will get a reaction from your friends if you share it, it was taken in 2018 and has nothing to do with the recent catastrophe.
New Google Chrome extension helps you spot fake photos with 99.28% accuracy
With the recent rise of AI-generated imagery, creating fake images, even highly realistic human faces, has become more widespread and readily available than ever, especially thanks to services like This Person Does Not Exist, which uses NVIDIA tech that we’ve featured here before. To our eyes, it’s often almost impossible to tell what’s real and what’s not anymore.
But a new Chrome extension from V7 Labs wants to take the guesswork out of figuring out which faces on the web are fake and which are legit, and it does it with a claimed 99.28% accuracy. It’s designed to help rid us of misleading content and fake profiles for people who simply do not exist in the real world, as V7 Labs founder Alberto Rizzoli explains in a video published to Loom.
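V7 hasn’t revealed exactly what’s under the extension’s hood, but at its core, a fake-face detector like this is a binary classifier trained on real and GAN-generated portraits. Here’s a rough sketch of that idea in PyTorch; the checkpoint file and the image path are hypothetical stand-ins, not anything V7 actually ships.

```python
# Conceptual sketch of fake-face detection as binary classification.
# "fake_face_detector.pt" is a hypothetical checkpoint, not V7's model.
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=None)                 # plain backbone
model.fc = torch.nn.Linear(model.fc.in_features, 1)   # single real-vs-fake logit
model.load_state_dict(torch.load("fake_face_detector.pt"))
model.eval()


def fake_probability(path: str) -> float:
    """Estimate the probability that a face photo is GAN-generated."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return torch.sigmoid(model(x)).item()


print(f"P(fake) = {fake_probability('profile_photo.jpg'):.2%}")
```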
New tech is coming to help fight the spread of misinformation in photographs online
The Content Authenticity Initiative (CAI), launched in 2019 as a collaboration between Adobe, Twitter and the New York Times, looks like it might finally be making some visible progress. Its initial goal of helping to verify the authenticity of images shared online, particularly on social media, appears to be bearing early fruit.
Adobe has already shown some technology for preventing the manipulation of images. But a new article on the NYT R&D website shows how they’re fighting misinformation on social media, where photographs are often misused. One example: a photograph of a McDonald’s that burned down in a 2016 grease fire was passed off as damage caused by rioters in 2020.
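The core idea behind the CAI spec (which has since grown into the C2PA standard) is cryptographically binding provenance metadata, who made an image and what was done to it, to the image bytes themselves, so any tampering is detectable. Here’s a heavily simplified sketch of that principle in Python, using a detached Ed25519 signature instead of the spec’s embedded manifests; the file name is made up.

```python
# Simplified provenance check in the spirit of CAI/C2PA: sign the image's
# hash at publish time, so any later pixel change invalidates the signature.
# The real spec embeds signed manifests inside the file; this is a sketch.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_image(image_bytes: bytes, key: Ed25519PrivateKey) -> bytes:
    """Sign the SHA-256 digest of the image bytes."""
    return key.sign(hashlib.sha256(image_bytes).digest())


def verify_image(image_bytes: bytes, sig: bytes, pub: Ed25519PublicKey) -> bool:
    """True if the image still matches the publisher's signature."""
    try:
        pub.verify(sig, hashlib.sha256(image_bytes).digest())
        return True
    except InvalidSignature:
        return False


key = Ed25519PrivateKey.generate()              # the publisher's signing key
data = open("mcdonalds_2016.jpg", "rb").read()  # hypothetical image file
sig = sign_image(data, key)

print(verify_image(data, sig, key.public_key()))              # True
print(verify_image(data + b"edited", sig, key.public_key()))  # False: tampered
```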
Google adds fact-checking to help you spot fake images
The speed of information flow on the Internet is a double-edged sword. While it lets us get informed about anything in no time, it also helps fake news spread like wildfire. This is why Google has joined the battle against doctored images: Google Images now shows fact-check labels on certain results, letting you know when independent fact-checkers have debunked a picture or the claim attached to it.
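These labels aren’t Google judging images on its own: they’re pulled from ClaimReview structured data that fact-checking organizations embed in their articles, which Google then surfaces next to matching images. Here’s a minimal example of that markup, built as a Python dict and serialized to the JSON-LD that Google crawls; the URL and organization name are invented.

```python
# Minimal ClaimReview markup, the structured data behind Google's
# fact-check labels. The URL and organization here are invented examples.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.com/fact-check/paris-garbage",
    "claimReviewed": "Photos show Paris buried under mountains of garbage",
    "author": {"@type": "Organization", "name": "Example Fact Checkers"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "alternateName": "False",
    },
}

# Publishers drop this JSON-LD into a <script type="application/ld+json"> tag.
print(json.dumps(claim_review, indent=2))
```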
Fox News pulls a Photoshop fail reporting fake news
Faking news and doctoring images is easier than ever nowadays, but sadly, it’s not reserved for “light” stuff that people share on Instagram. Fox News was recently accused of reporting fake news, and they got busted thanks to badly photoshopped images. And I mean really badly!
Twitter soon to start labeling manipulated photos and deepfake videos
In an attempt to stop fake news from spreading, Twitter will soon start labeling deceptive content. This includes “deceptively edited” photos, deepfake videos, and manipulated content that could cause “harm to physical safety, widespread civil unrest, voter suppression or privacy risks.”
Instagram now hides photoshopped images, flags them as “false information”
Not long ago, Instagram rolled out a feature that flags fake photos. The main goal is to remove misinformation and fake news, but the feature seems to have gone too far: it’s now hiding photoshopped photos behind a “false information” label, too. This could have implications for everyone who uses Instagram to showcase their digital artwork and image composites.