Did you ever wonder where the Girl with a Pearl Earring might have been when Vermeer ‘captured’ her in his famous masterpiece? Well, OpenAI has announced that its text-to-image AI DALL·E will now let users extend their creativity beyond the borders of an image using outpainting.
The example given shows the subject of Vermeer’s masterpiece Girl with a Pearl Earring standing in an old-fashioned kitchen. The kitchen is depicted in a painted style similar to the original and preserves the same tones and lighting. All in all, it’s pretty impressive.
Outpainting lets users continue an image beyond its original borders, adding visual elements in the same style or taking a story in new directions, simply by typing a natural-language description.
In the video, you can see how the image was assembled, piece by piece like a jigsaw puzzle. Finally, the entire scene is realised with superb accuracy.
DALL·E’s Edit feature already enables changes within a generated or uploaded image — a capability known as Inpainting. DIYP showed some examples of this previously. Photographer Nicolas Sherlock was able to fix out-of-focus elements in a macro photograph of a ladybug using this technique.
Now, with Outpainting, users can extend the original image, creating large-scale images in any aspect ratio. Outpainting takes the image’s existing visual elements into account, including shadows, reflections, and textures, to maintain the original context.
Photoshop has offered something similar in its Content-Aware Fill tools for quite some time. It works pretty well, although it obviously depends on what you’re trying to achieve. Filling simple areas works fine; anything much more complicated and it isn’t so dependable.
You certainly couldn’t use it to create something along the lines of the Vermeer image above. For that, you would need sample images and more traditional compositing methods. And that’s just with photographs. Trying to perfectly replicate the brushstrokes of an old master in the same painted style would take a lot of time and expertise.
Now, though, you don’t need to learn any particular skills. You just need to be able to type a prompt to produce some pretty impressive results. I can already see some useful applications for this. It could be used for storyboarding, for example.
Honestly, I really don’t know how I feel about this. On the one hand, it’s an incredible technology that can open up whole new ways of working. On the other, as we have discussed in recent articles on DIYP, it’s going to disrupt the creative industries like nothing before.
Where do you fall on the wonder versus fear scale when it comes to this kind of AI?