This Is Why We Still Can’t Trust AI Image Detectors 100%

Alysa Gavilan

Alysa Gavilan has spent years exploring photography through photojournalism and street scenes. She enjoys working with both film and mirrorless cameras, and her fascination with the craft has grown over the decades. Inspired by Vivian Maier, she is drawn to capturing everyday moments that often go unnoticed.


Artificial intelligence has made creating lifelike images and videos easier than ever, but distinguishing reality from AI-generated content is proving far more difficult. 

In a report by The Guardian, AI tools flagged images from Iran showing mass graves as manipulated or fake. In fact, the photographs were authentic and documented a real atrocity.

This misidentification underscores a critical problem: AI detection tools, often touted as solutions to misinformation, are far from foolproof. For photographers, journalists, and everyday users, the tools you might rely on to verify authenticity can be misleading, creating false confidence or unnecessary doubt.


When AI Detection Gets Reality Wrong

The Guardian report highlights a scenario that should concern anyone using AI detection software. Images from Minab in Iran, showing graves and human remains, were dismissed by AI systems as fake. 

But experts and on-the-ground investigations later confirmed that the photos were real. The AI models failed because they rely on subtle patterns, composition irregularities, or metadata anomalies, criteria that do not reliably separate real, complex scenes from synthetic ones.

The risk extends both ways. False negatives occur when AI content is mistaken for reality, while false positives label authentic images as AI-generated. In high-stakes situations, like reporting conflicts or documenting human rights abuses, these errors can undermine public trust. The Guardian report shows the danger clearly: when AI tools question the authenticity of evidence of a tragedy, awareness and action can be delayed.


Tests Show Mixed Performance from AI Detectors

In February 2026, The New York Times investigated more than a dozen AI detectors and chatbots designed to identify fake video, audio, and images.

Their tests revealed that while some tools were adept at spotting certain types of AI content, none could offer complete confidence. 

For example, a digitally generated seaside port with minor inconsistencies confounded nearly all detectors. Conversely, real images, including a harrowing photograph taken during the Israel-Hamas conflict, were questioned or flagged as fake by some AI models.

The takeaway is clear. AI detectors can assist in identifying suspicious content, but they cannot replace careful human verification. You cannot rely solely on these tools to declare an image authentic or manipulated.

How Industry Players Are Responding

In response to the shortcomings of existing tools, major tech companies are developing more advanced verification methods. 

OpenAI launched a tool claiming 99 percent precision in identifying AI-generated images. The system analyzes hidden patterns and metadata to assess authenticity, offering users a technical layer of verification. 

Google’s Gemini AI and SynthID platforms, meanwhile, provide similar capabilities, incorporating watermarking and provenance tracking to help determine if an image is fully synthetic or partially altered.


Sony has also entered the verification space with camera‑linked authentication. Sony’s new system embeds verifiable metadata at capture, enabling later confirmation that an image or video originates from a specific camera. This approach integrates authentication at the source, rather than relying solely on post‑hoc detection of artifacts.
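The core idea behind capture-time authentication is easy to sketch. The snippet below is an illustration only, not Sony's actual protocol: real systems use hardware-protected private keys and public-key signatures, while this sketch substitutes a standard-library HMAC with a made-up camera secret to show the sign-at-capture, verify-later flow.

```python
import hashlib
import hmac

# Hypothetical per-camera secret for this sketch; real cameras use a
# hardware-protected private key and publish a public key for verification.
CAMERA_KEY = b"example-camera-secret"

def sign_at_capture(image_bytes: bytes) -> str:
    """Camera side: compute a tag over the image at the moment of capture."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_later(image_bytes: bytes, tag: str) -> bool:
    """Verifier side: any edit to the bytes invalidates the tag."""
    expected = hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"\xff\xd8...raw jpeg bytes..."
tag = sign_at_capture(original)

print(verify_later(original, tag))         # True: untouched image
print(verify_later(original + b"x", tag))  # False: any alteration breaks it
```

The point of the design is that authenticity is established when the light hits the sensor, so later verification does not depend on guessing from visual artifacts at all.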

Despite these advancements, no tool is perfect. 

The New York Times’ tests show that AI detectors perform better at confirming real images than detecting fakes. Hybrid images that mix AI-generated elements with real photography remain a major blind spot. Even with advanced detection, human judgment, contextual knowledge, and cross-referencing with credible sources remain essential.

Verification should combine multiple approaches: careful observation, metadata checks, comparisons with trusted sources, and professional assessment when documenting sensitive or newsworthy content.
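A first-pass metadata check is something anyone can script. The sketch below is an illustration, not a forensic tool: it walks a JPEG's header segments looking for an Exif APP1 block. Missing metadata is not proof of manipulation (many pipelines strip it), only a prompt for closer scrutiny.

```python
import struct

def has_exif(jpeg_bytes: bytes) -> bool:
    """Walk JPEG marker segments looking for an APP1/Exif block."""
    if jpeg_bytes[:2] != b"\xff\xd8":  # SOI marker: not a JPEG at all
        return False
    pos = 2
    while pos + 4 <= len(jpeg_bytes):
        if jpeg_bytes[pos] != 0xFF:
            break
        marker = jpeg_bytes[pos + 1]
        if marker == 0xDA:  # start of scan: no more header segments
            break
        # Segment length is big-endian and includes the two length bytes.
        length = struct.unpack(">H", jpeg_bytes[pos + 2:pos + 4])[0]
        if marker == 0xE1 and jpeg_bytes[pos + 4:pos + 10] == b"Exif\x00\x00":
            return True
        pos += 2 + length
    return False

# Minimal hand-built JPEG with one Exif APP1 segment, for demonstration.
sample = b"\xff\xd8" + b"\xff\xe1" + struct.pack(">H", 8) + b"Exif\x00\x00" + b"\xff\xd9"
print(has_exif(sample))  # True
```

In practice you would pair a check like this with side-by-side comparison against trusted sources rather than treating it as a verdict on its own.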

Critical Thinking Is Important

For photographers and content creators, AI detection tools can give a false sense of security. You may assume an image cleared as authentic is trustworthy, or that an image flagged as fake is fabricated, but both assumptions can be wrong.

AI image detection tools are an evolving technology, but they remain unreliable for making definitive claims about authenticity. In real-world scenarios, such as the Minab graves images reported by The Guardian, AI models can misclassify genuine photographs as fake. 

The most reliable approach is still critical thinking, careful observation, and multiple layers of verification. AI detection can provide guidance, but you cannot trust it blindly. In a world flooded with synthetic and manipulated media, maintaining human judgment and scrutiny remains your most essential tool for distinguishing fact from fiction.





One response to “This Is Why We Still Can’t Trust AI Image Detectors 100%”

  1. Charles Haacker

    I am a longtime working photographer, dating back to the mid-1970s. I had a small studio with a “general practice,” portraits, weddings, groups, events, the usual mix, including some product and commercial/industrial. We sometimes worked with 8×10 cameras, then went blind cutting red frisket to mask off everything the client didn’t want seen. In color, we would make an 11×14 master print on matte paper and send it to a full-time professional airbrusher to erase unwanted things. The master would then have to be rephotographed to get a negative for the full run, but if you’ve ever seen this expensive multi-stage process, you know the quality was compromised, and we could see it. We hoped the client didn’t.

I lost the studio and hung up my guns for years, refusing even to pick up a camera until 2007, when my wife spotted me a tiny $100 USD 7 MP P&S. I was over the moon. In rapid succession, I was upgrading cameras, learning Photoshop Elements, finally graduating to Lightroom Classic and full Photoshop, and am to this day flabbergasted at what I can do that was either impossible or fabulously expensive in the Good Old Days (when, by the way, we were dumping toxic chemicals into the waterways).

    My big thing is disclosure. I am ethical. I know fellow photographers grow enraged if they discover they’ve been fooled. If I publish a picture where I’ve changed the background and maybe the sky, I make a point of saying so in the title, the caption, or both. Lately, I’ve been writing, “A.I. Assisted.”

    I am now only a hobbyist with pretensions. Sometimes I will see a picture, but if the weather is iffy, or whatever else makes it sub-par, I can, if I choose, improve it in post. I was once at Devils Tower in Wyoming, USA, and found a terrific angle on the monument, from the parking lot! But, cars. This was when Adobe first had Generative Fill in Photoshop Beta. I made the picture, and deleted the cars. I also told viewers I had. Anyway, I don't sell anything anymore, but ethics is ethics.

    I hope the detection technology catches up with the generation technology and defeats the bad guys.