How hard is it to check whether an image is an AI deepfake? Well, Amazon claims it has some sort of answer with its own watermarking system for Titan, the new AI art generator it announced today. Of course, it’s not that easy. Each business that builds on the model will have to tell its own users how to check whether that image they found online was actually just another AI deepfake.
Amazon offered few, if any, details about Titan’s capabilities, let alone its training data. Amazon Web Services VP for machine learning Swami Sivasubramanian told attendees at the company’s re:Invent conference that it can create a “realistic” image of an iguana, change an image’s background, and use a generative fill-style tool to extend an image’s borders. It’s nothing we haven’t seen before, but the model is restricted to AWS customers through the Bedrock platform: it’s a foundation model for businesses to incorporate into their own products.
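For developers, getting an image out of Titan looks like any other Bedrock request. Here’s a minimal sketch using boto3, the AWS SDK for Python; the request fields follow Amazon’s published Titan image schema, but treat the exact field names and values as assumptions rather than gospel:

```python
import base64
import json

import boto3  # AWS SDK for Python

# Bedrock runtime client; the region is an assumption for this sketch.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request body following Amazon's documented TEXT_IMAGE task format.
body = json.dumps({
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {"text": "a realistic iguana on a branch"},
    "imageGenerationConfig": {"numberOfImages": 1, "width": 1024, "height": 1024},
})

response = client.invoke_model(
    modelId="amazon.titan-image-generator-v1",
    body=body,
)

# Titan returns generated images as base64-encoded strings.
payload = json.loads(response["body"].read())
with open("iguana.png", "wb") as f:
    f.write(base64.b64decode(payload["images"][0]))
```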
The cloud service-providing giant is, like many other companies, putting the onus on individual users to figure out whether an image was created by AI. The company said this new AI image generator will put an “invisible watermark” on every image created through the model, one “designed to help reduce the spread of misinformation by providing a discreet mechanism to recognize AI-generated images.”
Just what kind of watermark that is, or how outside users can identify a marked image, isn’t immediately clear. Other companies, like Google DeepMind, have claimed they have found ways to perturb the pixels of an AI-generated image in order to create an unalterable watermark. Gizmodo reached out to Amazon for clarification, but we did not immediately hear back.
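Neither DeepMind nor Amazon has published how its watermark actually works, so there’s no real implementation to show. As a toy stand-in only, here’s the crudest possible pixel watermark, hiding a repeating bit pattern in each pixel’s least significant bit; production systems exist precisely because this naive approach dies the moment an image is compressed or cropped:

```python
import numpy as np
from PIL import Image

# Toy illustration, NOT DeepMind's or Amazon's method: a fixed bit
# pattern written into every pixel's least significant bit. Easy to
# detect, and just as easy to destroy with JPEG compression.
WATERMARK = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=np.uint8)

def embed(pixels: np.ndarray) -> np.ndarray:
    flat = pixels.flatten()
    pattern = np.resize(WATERMARK, flat.shape)  # repeat to fill the image
    return ((flat & np.uint8(0xFE)) | pattern).reshape(pixels.shape)

def detect(pixels: np.ndarray) -> bool:
    flat = pixels.flatten()
    pattern = np.resize(WATERMARK, flat.shape)
    # An unmarked image's LSBs match the pattern only ~50% of the time.
    return np.mean((flat & 1) == pattern) > 0.99

img = np.array(Image.open("iguana.png").convert("RGB"))  # file from the sketch above
print(detect(embed(img)))  # True
print(detect(img))         # almost certainly False
```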
AWS VP of generative AI Vasi Philomin told The Verge that the watermark doesn’t impact image quality and can’t be “cropped or compressed out.” It’s also not a mere metadata tag, apparently. Adobe uses metadata tags to signify whether an image was created with its Firefly model, and it requires users to go to a separate site to find out whether an image contains the tag.
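The distinction matters: a metadata tag travels alongside the pixels and can be stripped, while a pixel watermark lives in the image itself. Adobe’s Firefly tags are Content Credentials, i.e. C2PA manifests embedded in the file, and the sanctioned check verifies cryptographic signatures on Adobe’s site. Purely as a rough illustration of how shallow the metadata sits, a crude byte scan can often reveal whether a manifest is present at all:

```python
# Crude heuristic, not a real C2PA validator: scan the raw file for the
# "c2pa" manifest label. False positives and negatives are possible; a
# proper check parses and cryptographically verifies the manifest.
def has_c2pa_marker(path: str) -> bool:
    with open(path, "rb") as f:
        return b"c2pa" in f.read()

print(has_c2pa_marker("firefly_output.jpg"))  # hypothetical filename
```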
To figure out whether an image is AI-generated, users will need to connect to a separate API. It will be up to each individual company that makes use of the model to tell its users how to access that detection tool.
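Amazon hasn’t said what that API looks like yet, so any code can only guess at the shape of it. A hypothetical sketch of the check a business might wrap around it, with the endpoint, field names, and response format all invented for illustration:

```python
import base64

import requests  # third-party HTTP library

# Hypothetical workflow only: the real AWS detection API was not public
# at announcement time, so this endpoint and schema are placeholders.
DETECT_ENDPOINT = "https://example.com/titan-watermark/detect"

def looks_like_titan_image(image_path: str) -> bool:
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    resp = requests.post(DETECT_ENDPOINT, json={"image": encoded}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("watermarkDetected", False)  # assumed field name
```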
Simply put, it’s going to be incredibly annoying to figure out which images are AI, especially considering every company is creating a separate watermarking system for its own AI.
Amazon’s AI models may be the most closed-off of any major company’s released so far. On Tuesday, the company revealed “Q,” a customizable chatbot made for businesses, though we know nothing about its underlying models or training data. In a blog post, the leading AI chip maker Nvidia said Amazon used its NeMo framework to train the new model. NeMo includes pre-trained models meant to speed up the development of new AI, but that doesn’t offer many hints about just what kind of content went into the new AI art generator.
There’s a reason the company wouldn’t want to talk much about what went into this AI. A host of artists have criticized, and even sued, other companies making AI art generators, alleging their work was used for AI training without permission. Amazon has already promised to shield companies that use its AI coding assistant, CodeWhisperer, if anybody tries to sue. On Wednesday, Sivasubramanian told attendees that the indemnification policy would also apply to the Titan model.
AWS claims the Titan model has “built-in support for the responsible use of AI by detecting and removing harmful content from the data, rejecting inappropriate user inputs, and filtering model outputs.” Just what that means is up in the air without a chance to assess the model. As with all the AI announced at its re:Invent conference, it’s locked up tight except for paying enterprise customers.