In the new lawsuit filed by Elon Musk last week against OpenAI, its CEO Sam Altman, and its president Greg Brockman, the word “nonprofit” appears 17 times. “Board” comes up a whopping 62 times. “AGI”? 66 times.

The lawsuit’s claims, which include breach of contract, breach of fiduciary duty, and unfair competition, all circle around the idea that OpenAI put profits and commercial interests in developing artificial general intelligence (AGI) ahead of the duty of its nonprofit arm (under the leadership of its nonprofit board) to protect the public good.

This is an issue, of course, that exploded after OpenAI’s board suddenly fired Sam Altman on November 17, 2023 — followed by massive blowback from investors including Microsoft and hundreds of OpenAI employees posting heart emojis indicating they were on Altman’s side. Altman was quickly reinstated, while several OpenAI board members got the boot.

Plenty of people have pointed out that Musk, as an OpenAI co-founder who is now competing with the company with his own startup X.ai, is hardly an objective party. But I’m far more interested in one important question: How did nerdy nonprofit governance issues tied to the rise of artificial general intelligence spark a legal firestorm?


OpenAI’s unusual nonprofit structure is in hot water — again

Well, it all winds back to the beginning of OpenAI, which Musk’s lawsuit lays out in more detail than we have previously seen: In 2015, Musk, Altman and Brockman joined forces to form a nonprofit AI lab that would try to catch up to Google in the race for AGI — developing it “for the benefit of humanity, not for a for-profit company seeking to maximize shareholder profits.”

But in 2023, the lawsuit claims, Altman, Brockman and OpenAI “set the Founding Agreement aflame” with “flagrant breaches,” including breach of the nonprofit board’s fiduciary duty and breach of contract — among them what transpired in the days after Altman was fired by the nonprofit board on November 17, 2023, and subsequently reinstated.

Much of the controversy winds back to the fact that OpenAI isn’t just any old nonprofit. In fact, I reported on OpenAI’s unusual and complex nonprofit/capped-profit structure just a few days before Altman’s firing.

In that piece, I pointed to the “Our structure” page on OpenAI’s website that says OpenAI’s for-profit subsidiary is “fully controlled” by the OpenAI nonprofit. While the for-profit subsidiary is “permitted to make and distribute profit,” it is subject to the nonprofit’s mission. 

Elon Musk’s lawsuit, however, shed even more light on the confusing alphabet soup of companies that are parties in the case. While OpenAI, Inc. is the nonprofit, OpenAI, LP; OpenAI LLC; OpenAI GP, LLC; OpenAI Opco, LLC; OpenAI Global, LLC; OAI Corporation, LLC and OpenAI Holdings, LLC, all appear to be for-profit subsidiaries.

Microsoft is now a non-voting member of OpenAI’s nonprofit board

As I wrote in November, according to OpenAI, the members of its nonprofit board of directors will determine when the company has “attained AGI” — which it defines as “a highly autonomous system that outperforms humans at most economically valuable work.” Because the for-profit arm is “legally bound to pursue the Nonprofit’s mission,” once the board decides AGI has been reached, such a system will be “excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.”

But as the very definition of AGI is far from agreed upon, what does it mean to have a half-dozen people deciding whether AGI has been reached? What does the timing and context of that possible future decision mean for OpenAI’s biggest investor, Microsoft, which is now a non-voting member of the nonprofit board? Isn’t that a massive conflict of interest?

Musk certainly seems to think so. The lawsuit says: “Mr. Altman and Mr. Brockman, in concert with Microsoft, exploited Microsoft’s significant leverage over OpenAI, Inc. and forced the resignation of a majority of OpenAI, Inc.’s Board members, including Chief Scientist Ilya Sutskever. Mr. Altman was reinstated as CEO of OpenAI, Inc. on November 21. On information and belief, the new Board members were hand-picked by Mr. Altman and blessed by Microsoft. The new Board members lack substantial AI expertise and, on information and belief, are ill equipped by design to make an independent determination of whether and when OpenAI has attained AGI—and hence when it has developed an algorithm that is outside the scope of Microsoft’s license.”

Others have pushed back on OpenAI’s nonprofit status

Musk is not the first to push back on OpenAI’s nonprofit status. “I think the story that Musk tells in his complaint validates and deepens the case we’re making in California,” said Robert Weissman, president of Public Citizen, a nonprofit consumer advocacy organization that wrote a letter on January 9 requesting that the California Attorney General investigate OpenAI’s nonprofit status. The letter raised concerns that OpenAI “may have failed to carry out its non-profit purposes and is instead acting under the effective control of its for-profit subsidiary affiliate.”

And legal experts I spoke to say that Musk has a strong point in this regard. James Denaro, attorney and chief technologist at the Washington, D.C.-based CipherLaw, told me that Musk makes a strong policy argument: if a company can launch as a nonprofit working for the public benefit, collect pre-tax donations, and then transfer the IP into a for-profit venture, that would be a “highly problematic paradigm shift” for technology companies.

Musk’s lawsuit is not surprising given the nonprofit vs. for-profit structural issues that have plagued OpenAI, added Anat Alon-Beck, associate professor at Case Western Reserve University School of Law, who focuses on corporate law and governance and recently wrote a paper about “shadow governance” by board observers at tech companies.

According to the paper, “It was not until November 2023 that mainstream media started paying more attention to the concept of board observers, after OpenAI, the corporate entity that brought the world ChatGPT, gave Microsoft a board observer seat following the drama in OpenAI’s boardroom. But what the mainstream media did not explore in its coverage of the board observer concept was its seemingly less interesting nature as a non-voting board membership, which was an important element in the complex relationship between OpenAI and Microsoft. This signaled deepening ties between the two companies that also eventually got the attention of the DOJ and FTC, as well as the influential role of CVC [corporate venture capital] in funding and governing the research and development of OpenAI.”

“This lawsuit was due because of OpenAI’s structure,” she said, adding that OpenAI should be worried.

“You should always be worried, because when you pick such a weird structure like OpenAI did, there’s uncertainty,” she said. “In law, when we’re representing large companies, we want to have efficiency, low transaction costs and predictability. We don’t know how courts are going to look at fiduciary duties. We don’t know, because courts haven’t decided on that. I’m sorry, but it’s a bad structure. They could have accomplished [what they wanted] using a different type of structure.”
