An interesting controversy flared up on social media over the weekend. I’m not sure where it stands now, but apparently Instagram was…well, let me start with a photo of my own. I made this photo a few weeks ago while walking at the Buffalo Outer Harbor, and I posted this edit to Flickr:
It’s a simple composition, really: a woman walking away down the pedestrian path that runs along the water. There are streetlights to her left, and up ahead is the edifice of an abandoned grain elevator, with a faded corporate logo and, at the bottom, a more recent giant piece of graffiti (I can’t get past my feeling that the graffiti has the interrobang backwards–it should be ?!, not !?). I increased the saturation of a few specific colors, bringing out the green both for contrast and to heighten the places where weeds are coming up between the sidewalk slabs, and I dialed up the contrast a little, because I like the sun on the woman’s shoulders and the slight hint of reddish-brown in her hair. Oh, and I cropped it down, because I wanted the distant background to be entirely the old grain elevator, not any of the sky above it.
But that’s not all I did to edit this photo.
Here’s the original JPEG that came out of the camera (I now shoot in both RAW and JPEG):
Do you see it? Or, more precisely, them?
Two trash cans, at lower left. I didn’t want them there. One was easy: I cropped the photo so it was gone completely. But the other? For that I used a tool in Lightroom called “Generative Fill,” which removes things you don’t want in your photo. After you use a “brush” in Lightroom to paint over what needs to come out, an AI-driven engine analyzes the photo and substitutes in what it thinks the scene would look like if the selected thing weren’t there. It isn’t always an ideal tool–weird artifacts can remain that make it clear something was done there–but for removing simple stuff, it does surprisingly well. My edited photo, to my eye, looks like there was never a trash can in that spot at all.
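Adobe doesn’t say much about the engine itself, but the workflow has the classic “mask, then fill” shape of inpainting. If you want a feel for the mechanics, here’s a rough sketch using OpenCV’s classical (non-generative) inpainting as a stand-in for Lightroom’s AI engine; the filenames and coordinates are made up for illustration:

```python
# A rough illustration of the "mask, then fill" idea, using OpenCV's
# classical (non-generative) inpainting as a stand-in for Lightroom's
# AI engine. Filenames and coordinates are hypothetical.
import cv2
import numpy as np

img = cv2.imread("outer-harbor.jpg")              # original frame
mask = np.zeros(img.shape[:2], dtype=np.uint8)    # single-channel mask

# "Brush over" the trash can: mark its pixels in the mask.
cv2.rectangle(mask, (120, 830), (210, 960), color=255, thickness=-1)

# Fill the masked region from surrounding pixels. Telea's method
# propagates neighboring texture inward; a generative model would
# instead synthesize plausible new content for the hole.
result = cv2.inpaint(img, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
cv2.imwrite("outer-harbor-no-can.jpg", result)
```

The difference, of course, is that classical inpainting can only recycle nearby pixels, while the generative version invents new ones–which is exactly why the metadata question below matters.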
What came up over the weekend was this: apparently, when Lightroom generates a new photo from your edits, it saves something in the photo’s metadata indicating that an AI-assisted edit was made, and when such a photo is uploaded to Instagram, the service affixes a “Made with AI” tag to it.
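If you’re curious whether one of your own files carries that flag, you can look for it yourself. Reports at the time pointed to an IPTC “digital source type” value that Adobe’s generative tools write into a photo’s XMP metadata. Here’s a minimal Python sketch that does a crude byte-search for that value; the filename is hypothetical, and a real tool would parse the XMP packet properly:

```python
# Minimal sketch: check whether a JPEG's embedded XMP metadata contains
# the IPTC digital-source-type value that Adobe's generative tools were
# reported to write (and that Instagram reportedly keyed its label on).
# The filename is hypothetical; a byte search is crude but enough for
# a yes/no check.
from pathlib import Path

# IPTC term for media composited with the output of a trained model;
# it appears as part of a cv.iptc.org digitalsourcetype URI in the XMP.
AI_COMPOSITE = b"compositeWithTrainedAlgorithmicMedia"

def flags_ai_edit(jpeg_path: str) -> bool:
    """Return True if the file's bytes contain the AI-composite marker."""
    return AI_COMPOSITE in Path(jpeg_path).read_bytes()

if __name__ == "__main__":
    print(flags_ai_edit("outer-harbor-edit.jpg"))  # hypothetical filename
```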
The reaction to this was, naturally, one of annoyance, because surely there’s a difference between using an AI engine to remove a single minor element from an otherwise “real” photo and the kind of wholly AI-generated imagery we associate with the phrase “Made with AI”–the weirdly plastic-looking photos that take on a creepy tone as we look at the details and notice things like: that person seems to have three hands, or that lady has six fingers while this guy only has three, or wow, look how tight that person’s clothes are, it’s almost as if they’re painted on. (By the way, you know what AI still doesn’t get right at all? The buckles on bib overalls!)
I did post my photo to Instagram, but so far it hasn’t been marked “Made with AI.” I’m not sure whether that’s because IG has rethought the policy, or because I posted the photo by sharing from Flickr, so maybe the metadata didn’t go along for the ride. I did note my edit in the image description, though, because that seems like the right thing to do. But maybe I’m wrong.
Anyway, I’ll be following this issue with great interest.