Are you worried about AI image generators stealing your artwork and spitting it out as their own? If so, you’re in luck. Scientists from the University of Chicago are fighting fire with fire, so to speak. They have developed a sort of AI invisibility cloak that prevents image generators from sampling your images.
The software is called Glaze, and it “cloaks” images so that models incorrectly learn the unique features that define an artist’s style, making AI plagiarism much less likely.
According to the webpage, Glaze was created by “an academic research group of PhD students and CS professors interested in protecting Internet users from invasive uses of machine learning.”
So how does it work? Apparently, the tool adds very small changes to any original artwork before it is posted online. According to the website, these changes are barely visible to the human eye. This means that the artwork still appears nearly identical to the original while preventing AI models from copying the artist’s style.
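The core idea of small, bounded pixel changes can be sketched in a few lines. Note that this is only a conceptual illustration using random noise: Glaze’s actual method optimizes its perturbations adversarially against the feature extractors used by AI models, which is far more involved. The `epsilon` bound here is a hypothetical perceptibility threshold, not a value from Glaze.

```python
import numpy as np

# Conceptual sketch (NOT Glaze's actual algorithm): add a tiny,
# bounded perturbation to an image so that every pixel changes by
# at most epsilon on a 0-255 scale -- small enough to be hard to see.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float32)

epsilon = 2.0  # hypothetical max per-pixel change; barely visible
perturbation = rng.uniform(-epsilon, epsilon, size=image.shape)
cloaked = np.clip(image + perturbation, 0.0, 255.0)

# The cloaked image stays within epsilon of the original everywhere,
# so to a human it looks nearly identical.
max_change = np.abs(cloaked - image).max()
assert max_change <= epsilon
```

The real trick, of course, is choosing the perturbation so that it specifically disrupts what a model learns about the artist’s style rather than being random noise, which is what Glaze’s research contributes.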
While this sounds like an amazing step for artists fighting back against AI image generators, the team behind Glaze admits that it’s not a permanent solution to the problem. AI evolves at such an alarming rate that the tool will be difficult to future-proof completely.
However, we should all be happy that at least someone out there is addressing the issue in a practical way. “Glaze is not panacea,” the website says, “but a necessary first step towards artist-centric protection tools to resist AI mimicry. We hope that Glaze and follow-up projects will provide some protection to artists while longer-term (legal, regulatory) efforts take hold.”
Already, several artists and stock photo companies have launched lawsuits against Stable Diffusion, Midjourney, and DALL-E for copyright infringement. The generative image companies freely admitted to training their machine learning models on thousands of copyrighted images, many of which were works by living artists and photographers.