An edited photograph of Catherine, Princess of Wales, and her family that several news agencies have since pulled, citing evidence of manipulation, raises a thorny question: In an age when digital editing tools are more widespread and easier to use than ever, what even is a photo anymore?
From camera stabilization to advanced, AI-driven image filters and other tools, it has never been simpler to create stunning, high-quality images using basic consumer hardware. Smartphone makers and app developers increasingly promote this technology as helpful for creators and everyday users.
But it also introduces fresh opportunities for even the well-intentioned to misdirect viewers.
The image released Sunday by Kensington Palace shows the Princess of Wales surrounded by her children and appears intended to put to rest speculation about her health. But outlets including Agence France-Presse, Reuters and the Associated Press pointed to a misaligned sleeve and hand as evidence the photo had been doctored. (CNN is continuing to use the photo, appropriately captioned to reflect the debate around its authenticity.)
In a statement, Kate acknowledged that she used an editing tool or tools to alter the image.
“Like many amateur photographers, I do occasionally experiment with editing,” she said, apologizing for any confusion the photo may have caused.
The princess did not say what changes she made to the photo or what tools were used.
Although celebrities have altered photos with editing tools for years, images of the royal family carry historical importance and are expected to be authentic.
The use of image editing to present a false narrative goes back almost to the dawn of photography itself. Many older fake images, however, required the use of niche techniques or specialized knowledge of photography that wasn’t available to the public.
More recently, programs such as Adobe Photoshop have allowed editors to make changes to photos; some may be minor, editorially acceptable adjustments. But these tools can also be used to manipulate parts of an image, such as making objects disappear or extending a photo's background with just a few clicks. Still, the high cost and complexity of the software long presented barriers to entry.
Today, much of the technical work of editing an image is done automatically. When users select "portrait mode" from a list of camera options, sophisticated sensors and software running in the background calculate how far away objects are, enabling features like Apple's Focus Point, which lets users change which parts of an image are stylistically blurred, even after the photo has been taken.
Meanwhile, Google’s “Best Take” feature on the Pixel 8 and 8 Pro smartphones uses AI to blend a series of photos taken in quick succession into a single image aimed at capturing everyone’s best expression (i.e., no one blinking).
There are three types of photo manipulation, according to Ramesh Raskar, an associate professor at the MIT Media Lab who has worked on photography and imaging projects for Apple and Facebook over the years: appearance edits, which change tones or lighting; geometric edits, which move subjects or objects around; and generative AI, a more recent approach that lets users write text prompts to generate completely new images.
He said Kate’s picture appeared to have both appearance and geometric changes, and was likely not AI-generated.
“The manipulation in this image is very unique,” he said, noting changes that impacted the placement of her children’s clothing. “It’s unlikely that it was ever one single image. A photo editing app probably introduced these errors.”
He said he believes the “subtle” edits were made very finely, potentially with a third-party app or editing software such as Photoshop rather than with built-in smartphone tools.
He noted that similar errors can occur if a user starts splicing and merging different images together. “It all comes from real pieces of photos, but it’s more like a collage.”
But even Photoshop is becoming easier to use now that generative-AI tools are built directly into the software.
Although the errors in Kate’s photo are apparent, Raskar said it will soon become harder to judge the authenticity of images. Generative-AI tools go far beyond traditional photo manipulation, allowing people to create images of entirely new scenarios.
For example, a user of an AI tool could, in theory, request and receive a picture of the royal wearing a blue sweater and sitting on a chair with her arms around her children, in seconds, without the scene ever occurring in real life.
Reece Hayden, a senior analyst at tech research firm ABI Research, said he believes the public will become increasingly skeptical and aware that images can be “faked” or “altered” in the months ahead.
As AI images hit the mainstream, they will open up more discussions around transparency and the need to regulate usage so news outlets and the public can better spot AI-generated and augmented images, he said.
Although AI image watermarks are already being implemented on certain platforms, mandating them will be a challenge, Hayden said.
“Our expectation, for the medium term at least, is that ‘watermarks’ will not be forced but governments will rely on both business and users to add them in to bring transparency,” he said.