The world’s biggest cloud computing companies that have pushed new artificial intelligence tools to their business customers are offering only limited protections against potential copyright lawsuits over the technology.
Amazon, Microsoft and Google are competing to offer new services such as virtual assistants and chatbots as part of a multibillion-dollar bet on generative AI — systems that can spew out humanlike text, images and code in seconds.
AI models are “trained” on data, such as photographs and text found on the internet. This has led to concern that rights holders, from media companies to image libraries, will make legal claims against third parties who use the AI tools trained on their copyrighted data.
The big three cloud computing providers have pledged to defend business customers from such intellectual property claims. But an analysis of the indemnity clauses published by the cloud computing companies shows that the legal protections only extend to the use of models developed by or with oversight from Google, Amazon and Microsoft.
“The indemnities are quite a smart bit of business . . . and make people think ‘I can use this without worrying’,” said Matthew Sag, professor of law at Emory University.
But Brenda Leong, a partner at Luminos Law, said it was “important for companies to understand that [the indemnities] are very narrowly focused and defined”.
Google, Amazon and Microsoft declined to comment.
The indemnities provided to customers do not cover use of third-party models, such as those developed by AI start-up Anthropic, which counts Amazon and Google as investors, even if these tools are available for use on the cloud companies’ platforms.
In the case of Amazon, only content produced by its own models, such as Titan, and by a range of the company’s AI applications is covered.
Similarly, Microsoft only provides protection for the use of tools that run on its in-house models and those developed by OpenAI, the start-up with which it has a multibillion-dollar alliance.
“People needed those assurances to buy, because they were hyper aware of [the legal] risk,” said one IP lawyer working on the issues.
The three cloud providers, meanwhile, have been adding safety filters to their tools that aim to screen out any potentially problematic content that is generated. The tech groups had become “more satisfied that instances of infringements would be very low”, but did not want to provide “unbounded” protection, the lawyer said.
While the indemnification policies announced by Microsoft, Amazon and Alphabet are similar, their customers may want to negotiate more specific indemnities in contracts tailored to their needs, though that is not yet common practice, people close to the cloud companies said.
OpenAI and Meta are among the companies fighting the first generative AI test cases brought by prominent authors and the comedian Sarah Silverman. The cases have focused in large part on allegations that the companies developing models unlawfully used copyrighted content to train them.
Indemnities were being offered as an added layer of “security” to users who might be worried about the prospect of more lawsuits, especially since the test cases could “take significant time to resolve”, which created a period of “uncertainty”, said Angela Dunning, a partner at law firm Cleary Gottlieb.
However, Google’s indemnity does not extend to models that have been “fine-tuned” by customers using their internal company data — a practice that allows businesses to train general models to produce more relevant and specific results — while Microsoft’s does.
Amazon’s covers Titan models that have been customised in this way, but if the alleged infringement is due to the fine-tuning, the protection is voided.
Legal claims brought against the users — rather than the makers — of generative AI tools may be challenging to win, however.
When dismissing part of a claim brought by three artists a year ago against AI companies Stability AI, DeviantArt and Midjourney, US Judge William Orrick said one “problem” was that it was “not plausible” that every image generated by the tools had relied on “copyrighted training images”.
For copyright infringement to apply, the AI-generated images must be shown to be “substantially similar” to the copyrighted images, Orrick said.