I don’t know many photographers these days who aren’t comfortable using post-processing software such as Photoshop or Lightroom. Many of these programs are beginning to introduce AI features, with Adobe launching its own text-to-image generator, Firefly, just last month.
I can see the allure of using AI to fix skin retouching issues or make masking easier, for example. But how could we use text-to-image generators such as Midjourney to our advantage without undermining ourselves as photographers? In this video, photographer Andrea Pizzini attempts to answer that question by demonstrating the new ‘describe’ feature.
For those of you who don’t know, the Describe feature in Midjourney is a sort of reverse-engineering approach to creating variations of an image. In simple terms, it’s an image-to-text generator. You input a photo, and the software translates it into four different prompts, which you can then edit to your liking and use to generate more images.
It seems like a strange idea at first, but as Pizzini explains in the video, it could actually be quite interesting for photographers, particularly as it develops over the next few months (or days!).
Instead of spending hours creating variations on set-ups, for example, you could hone one set-up and then generate countless variations based on that image.
Pizzini does address the fact that this will be anathema to many photographers. AI has a lot of haters at the moment, and with good reason. We are all a little bit scared of the possibility of it replacing us, and in some respects, it most definitely will.
However, by understanding it, perhaps we can develop ways to use it as a tool to create more and use our time better. Is it really that different from the dawn of Photoshop, for example?