According to a new paper published by 23 AI researchers, academics and creatives, ‘safe harbor’ legal and technical protections are essential to allow researchers, journalists and artists to do “good-faith” evaluations of AI products and services.

Despite the need for independent evaluation, the paper says, conducting research related to these vulnerabilities is often legally prohibited by the terms of service for popular AI models, including those of OpenAI, Google, Anthropic, Inflection, Meta, and Midjourney. The paper’s authors called on tech companies to indemnify public interest AI research and protect it from account suspensions or legal reprisal.

“While these terms are intended as a deterrent against malicious actors, they also inadvertently restrict AI safety and trustworthiness research; companies forbid the research and may enforce their policies with account suspensions,” said a blog post accompanying the paper.

Two of the paper’s co-authors, Shayne Longpre of MIT Media Lab and Sayash Kapoor of Princeton University, explained to VentureBeat why this is particularly important, pointing to a recent effort to dismiss parts of the New York Times’ lawsuit in which OpenAI characterized the Times’ evaluation of ChatGPT as “hacking.” The Times’ lead counsel responded by saying, “What OpenAI bizarrely mischaracterizes as ‘hacking’ is simply using OpenAI’s products to look for evidence that they stole and reproduced the Times’s copyrighted works.”

Longpre said that the idea of a ‘safe harbor’ was first proposed by the Knight First Amendment Institute for social media platform research in 2022. “They asked social media platforms not to ban journalists from trying to investigate the harms of social media, and then similarly for researcher protections as well,” he said, pointing out that there had been a history of academics and journalists being sued, or even spending time in jail, as they fought to expose weaknesses in platforms.

“We tried to learn as much as we could from this past effort to propose a safe harbor for AI research,” he said. “With AI, we essentially have no information about how people are using these systems, what sorts of harms are happening, and one of the only tools we have is research access to these platforms.”

Independent evaluation and red teaming are ‘critical’

The paper, A Safe Harbor for AI Evaluation and Red Teaming, says that to the authors’ knowledge, “account suspensions in the course of public interest research” have taken place at companies including OpenAI, Anthropic, Inflection, and Midjourney, with “Midjourney being the most prolific.” They cited artist Reid Southen, who is listed as one of the paper’s co-authors and whose Midjourney account was suspended after he shared Midjourney images that appeared nearly identical to original copyrighted versions. His investigation found that Midjourney could infringe on owners’ copyrights with simple prompts, even when the user did not explicitly intend to do so.

“Midjourney has banned me three times now at a personal expense approaching $300,” Southen told VentureBeat by email. “The first ban happened within 8 hours of my investigation and posting of results, and shortly thereafter they updated their ToS without informing their users to pass the blame for any infringing imagery onto the end user.”

The type of model behavior he found, he continued, “is exactly why independent evaluation and red teaming should be permitted, because [the companies have] shown they won’t do it themselves, to the detriment of rights owners everywhere.”

Transparency is key

Ultimately, said Longpre, the issues around safe harbor protections have to do with transparency.

“Do independent researchers have the right, if they can prove that they’re not doing any misuse or harm, to investigate the capabilities and/or flaws of a product?” he said. But he added that, in general, “we want to send a message that we want to work with companies, because we believe that there’s also a path where they can be more transparent and use the community to their advantage to help seek out these flaws and improve them.”

Kapoor added that companies may have good reasons to ban some types of use of their services. However, it shouldn’t be a “one-size-fits-all” policy, “with the terms of the service the same whether you are a malicious user versus a researcher conducting safety-critical research,” he said.

Kapoor also said that the paper’s authors have been in conversation with some of the companies whose terms of use are at issue. “Most of them have just been looking at the proposal, but our approach was very much to start this dialogue with companies,” he said. “So far most of the people we’ve reached out to have been willing to sort of start that conversation with us, even though as of yet I don’t think we have any firm commitments from any companies on introducing the safe harbor.” He pointed out, however, that after OpenAI read the first draft of the paper, it modified the language in its terms of service to accommodate certain types of safe harbor.

“So to some extent, that gave us a signal that companies might actually be willing to go some of the way with us,” he said.
