What is Content Authenticity in photos, and why does it matter?

Dec 17, 2023

Sagiv Gilburd

News Editor

Sagiv Gilburd is an Israel-based commercial photographer and videographer with extensive expertise in studio work, event photography, and managing large-scale photography projects.



With the announcement of the Leica M11-P, “Content Authenticity” is finally moving from theory to practice. With many apps and cameras promising support soon, it’s worth taking a closer look at the feature. What does it mean, how is it implemented, and how could it help fight fake news generated by AI?

What is Content Authenticity?

Content Authenticity is built on the new C2PA (Coalition for Content Provenance and Authenticity) standard, championed by the CAI (Content Authenticity Initiative). It is, in short, a technology that tracks image provenance: specifically, the image creator’s name and the date the image was created. It does so by “stamping” the photo with this data, and it can also track edits introduced to the image later on. If you want the technical explanation, the CAI delves into the details on its FAQ page.

The name of this “stamp” is Content Credentials.

What are the problems that Content Authenticity solves?

Knowing a photo’s original data makes it harder to lie about whether the photo was edited, and harder to lie about who created it. More importantly, with AI now roaming about, it helps establish whether the image was even real to begin with.

In the age of AI generation, such fakes become more and more common because AI is so easy to use. You could always go and edit an image in Photoshop, but now it’s just a matter of writing a line of text. Faster, easier, and, unlike Photoshop, something literally everyone knows how to do.

In terms of theft, most users aren’t even aware that their AI-generated content was created from someone else’s work, and that it was generated without permission. That’s because many AI tools simply take whatever image data they can find. By the same token, if someone uses your image this way, you sadly won’t know.

(Sure, you can keep using watermarks to try to protect yourself instead, but free AI tools already exist to get rid of those. And sadly, they are very effective… although that is a different topic.)

Content Authenticity workflow

Supporting cameras will give you the option to include credentials like your name and image capture date within the image data. From there, supported editing apps like Adobe Photoshop will keep track of the embedded data and register a new info layer. That new layer will contain the edit info and editor credentials.

This seems to be a straightforward enough workflow. In theory, it solves the two problems we talked about:

  • It allows editors to keep track of who shot an image and the edits it went through.
  • It also helps reduce imagery theft, a topic of growing concern in the age of AI generation.
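To make that chain of custody concrete, here is a minimal Python sketch of the capture-then-edit workflow described above. It is only an illustrative model: real C2PA manifests are signed with X.509 certificates and COSE public-key signatures, not the shared-secret HMAC used here for brevity, and every name and value below is made up.

```python
import hashlib
import hmac
import json

# Hypothetical signing keys; real C2PA uses public-key certificates.
CAMERA_KEY = b"camera-secret"
EDITOR_KEY = b"editor-secret"

def sign(record: dict, key: bytes) -> dict:
    """Attach a signature computed over the record's canonical JSON form."""
    payload = json.dumps(record, sort_keys=True).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {**record, "signature": sig}

# 1. The camera stamps the capture with the photographer's credentials.
origin = sign({
    "creator": "Jane Doe",               # hypothetical photographer
    "captured": "2023-12-17T10:00:00Z",
    "image_hash": hashlib.sha256(b"raw image bytes").hexdigest(),
}, CAMERA_KEY)

# 2. The editing app registers a new info layer describing the edit,
#    chained to the origin record by its hash.
edit = sign({
    "parent": hashlib.sha256(
        json.dumps(origin, sort_keys=True).encode()).hexdigest(),
    "editor": "Photo Desk",              # hypothetical editor
    "actions": ["crop", "exposure +0.3"],
    "image_hash": hashlib.sha256(b"edited image bytes").hexdigest(),
}, EDITOR_KEY)

# The provenance chain travels with the image.
provenance = [origin, edit]
```

The key property is that each layer is signed and chained to the previous one by hash, so the full history can travel with the file from camera to editor to publisher.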

This initiative will help content creators or photojournalists, as it aims to increase the credibility and transparency of digital content. This is relevant across social media platforms like Twitter and digital news sites like the New York Times, BBC, or the Wall Street Journal.

Can you fake Content Credentials?

Content Credentials may sound like just more metadata attached to an image, not much different from the EXIF data we’ve had for years. A type of data we can just… edit. Easily! This raises the question: can’t someone simply edit the Content Credentials and fake the embedded proof?

Not quite. The C2PA technology at the base of the CAI initiative is cryptographically signed, which means you can’t (in theory) alter the content-credential records without invalidating them.

So, in theory, no matter what happens to the image after it was shot, you will be able to drop it into the CAI verification tool and see its history. At the root, you will see the original image, with the date and credentials of the photographer. From there, you will be able to see the edited image, who edited it, and what exact edits were made.
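The core of what such a verification tool does can be sketched in a few lines: recompute the signature over the record and compare. Again, this is a simplified stand-in; real Content Credentials are checked against public-key certificates rather than the shared secret assumed here, and the record fields are invented for illustration.

```python
import hashlib
import hmac
import json

KEY = b"signer-secret"  # stand-in for the signer's real credentials

def sign(record: dict, key: bytes) -> dict:
    """Attach a signature computed over the record's canonical JSON form."""
    payload = json.dumps(record, sort_keys=True).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {**record, "signature": sig}

def verify(record: dict, key: bytes) -> bool:
    """Recompute the signature over everything except the signature itself."""
    body = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

stamped = sign({"creator": "Jane Doe", "date": "2023-12-17"}, KEY)
print(verify(stamped, KEY))    # True: untouched record checks out

# Editing the record after the fact invalidates the signature:
tampered = {**stamped, "creator": "Someone Else"}
print(verify(tampered, KEY))   # False: tampering is detected
```

Because any change to the record changes its canonical bytes, editing the credentials after signing breaks the signature, which is what makes this “stamp” harder to fake than plain EXIF.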

But you could say the same about many other things. Take Photoshop, for example. You are not supposed to be able to use a pirated copy for free. Yet, you can. This is wrong, of course, but it’s a wildly common phenomenon. That raises a concern about the strength of the Content Authenticity technology. It has only just arrived, and hardly anyone is using it yet. Hence, no one is trying to break it… yet. History is full of encryption schemes that were eventually broken, from DVD encryption to Sony’s PlayStation and even iPhones. Eventually, someone may well succeed in breaking Content Authenticity too. While I hope that previous experience has strengthened this protocol, I am not optimistic.

I mean, people are presumably using AI to “launder” copyrighted images. Who is to say those very same AI tools won’t just evolve to get rid of the content authenticity info as well?

Why does this sound familiar?

Canon and Nikon have both tried Content Authenticity technology before. Both failed. But why? What happened? When did that even happen?

In December 2010, ElcomSoft, a Russian security firm, announced it had cracked Canon’s image-verification software, which was used to prove photographs were genuine and unmodified. It did the same to Nikon’s image authentication software in 2011, and that time, it even posted ridiculous doctored images that still passed validation.

Companies that already support Content Authenticity

Leica just released the first camera on the market that supports this technology, the Leica M11-P. But they are not the only ones. Their competitors (or partners, in this case) have announced similar plans.

Sony, for example, announced that its upcoming Sony a9 III will support the initiative. It will also update some of its existing cameras, and even some of its phone cameras. Similarly, Nikon has announced that the Nikon Z9 will be updated with the CAI technology, but we still don’t have a date for the update.

In terms of software that will support Content Authenticity, we already mentioned Adobe Photoshop, but Lightroom and Firefly support it as well. These are all Adobe applications, but as the technology is open source, any application can be updated to support it down the line.

Is Content Authenticity mature enough to succeed?

There is a chance the new Content Authenticity initiative will become an industry standard. A lot still depends on how wide initial adoption is. It’s a chicken-and-egg situation: for Content Authenticity to succeed, all stakeholders need to embrace it. We need cameras with Content Authenticity from more brands, but we also need media outlets to include credential checking in their workflows.

Two other major factors are ease of use and how strong this protocol is. It is hard to make a definite call until we see some real-world adoption.

Will Content Authenticity solve everything?

No. As CAI puts it: “No – there are generally three ways to address misinformation: education, detection, and attribution. The CAI is creating an attribution-focused solution.” In other words, this technology is just a tool that will help the problem, not solve it on its own. Misinformation can exist even with real, unedited images, all depending on the context.

Take this photo of a collapsed building after an earthquake in Nepal from 2019 for example:

Did you check whether that building really is in Nepal before moving on? It might have been demolished on purpose for construction work. Maybe it did collapse in an earthquake, but not in Nepal, or not in 2019. You won’t know unless you check, and in most cases, that’s exactly what people don’t do: check. News these days is delivered fast. Most people see the headline and the image and carry on.

Few verify the contents. Fewer check the bigger picture. And even among them, some may elect to believe what they themselves want to believe. As such, info like content credentials – a tool made for info verification – won’t help unless you choose to do some digging yourself.


So, it’s not going to solve everything, but the new technology by the CAI is exciting nonetheless. It’s a method to fight misinformation, as long as you’re willing to use it. Well, that’s what it’s supposed to become once enough companies support it.

It’s an open-source project, but it’s still new. Time will tell how accessible it will become and how popular it will get. Time will also tell if it can be hacked or not. You might not be optimistic about CAI, considering the failure of past attempts, but I still recommend keeping an eye out for this technology. It might become a useful and reliable tool in the future. Maybe even a standard protocol in the photography industry.


