Back in January, I spoke to Mark Beall, co-founder and CEO of Gladstone AI, the consulting firm that yesterday released a bombshell report commissioned by the State Department. The news was first covered by TIME, which highlighted the report’s AI safety action-plan recommendations: that is, “how the US should respond to what it argues are significant national security risks posed by advanced AI.”

When I first spoke to Beall, we chatted for a story I was writing about the debate among AI and policy leaders over a “web” of effective altruism adherents in AI security circles in Washington, DC. There was no doubt that Beall, who told me he formerly served as head of AI policy at the U.S. Department of Defense, felt strongly about the need to manage the potentially catastrophic threats of AI. In a post on X sharing my story, Beall wrote that “common sense safeguards are needed urgently before we get an AI 9/11.”

For many, the term “AI safety” is synonymous with tackling the “existential” risks of AI. Some may be drawn to those concerns through belief systems such as effective altruism (EA) or, as the report maintained, through work at “frontier” AI labs like OpenAI, Google DeepMind, Anthropic and Meta. The report’s Gladstone AI authors said they spoke with more than 200 government employees, experts, and workers at frontier AI companies as part of their year-long research.

However, others pushed back on the report’s findings on social media: Communication researcher Nirit Weiss-Blatt pointed out that Gladstone AI co-author Edouard Harris has weighed in on what many consider a far-out, unlikely “doomer” scenario called the “paperclip maximizer” problem. On the community blog LessWrong, Edouard Harris wrote that the paperclip maximizer is “a very deep and interesting question.”

Aidan Gomez, CEO and co-founder of Cohere, declined to comment on the report but pointed to a post on X that he said “sums my take up,” presumably a reference to the survey’s unscientific nature. The comments on the post included someone who said “you could get more representative data than this with a twitter poll.”

And William Falcon, CEO of open source AI development platform Lightning AI, wrote on X that “According to TIME, open source AI will cause an ‘extinction level event to humans.’ First of all, very silly claim. but if you really want to put on your tin hat, close source AI is more likely to cause this.”

Beall left Gladstone to launch “the first AI safety Super PAC”

As the debate swirled on X, I was particularly interested to read in the TIME piece that Beall, one of three co-authors of the Gladstone AI report (which was commissioned for $250,000), had recently left the firm to run what he described in a message to me as, “to our knowledge, the first AI safety Super PAC.” He said the PAC, which launched yesterday (the same day the Gladstone report came out), plans “to run a national voter education campaign on AI policy. The public will be directly impacted by this issue. We want to help them be as informed as possible.”

That, of course, will take money — and Beall told me that “we have secured initial investments to launch the Super PAC, and we plan to raise millions of dollars in the weeks and months to come.”

I also thought it was interesting that Beall’s Super PAC co-founder, Brendan Steinhauser, is a Republican consultant with a long history of working on conservative causes, including school choice and the early Tea Party movement.

Beall emphasized the bipartisan nature of the Super PAC. “We are bipartisan and want to see lawmakers from left and right get together to promote innovation and protect national security,” he said. “Brendan has worked for nearly two decades in national politics and policy, and he is on the conservative side of the aisle. He has built strong, functional bipartisan and diverse coalitions on issues like education and criminal justice reform.”

Super PAC launched same day as Gladstone report

Still, it seemed strange that the Super PAC, called “Americans for AI Safety,” would launch the same day as what appeared to be a non-political report, which Gladstone co-founder Jeremie Harris told me was commissioned by the State Department’s Bureau of International Security and Nonproliferation in October 2022.

“It is standard practice for these sorts of reports to be commissioned by government national security agencies, particularly to address fast-moving emerging tech issues when the government lacks the internal capacity to fully understand them,” Jeremie Harris said. “In this instance, the State Department asked us to serve as a neutral source of expert technical analysis.” He added that Gladstone had not taken any outside funding and “aren’t affiliated with any organizations with a vested interest in the outcome of our assessment.”

As to Beall’s Super PAC, Harris said that “we’re delighted for Mark that he’s launched his Super PAC successfully. That said, Mark’s PAC is run entirely independently from Gladstone, so we were not involved in decisions around the timing of his launch.”

‘Now, the real work begins’

But Beall did seem to draw a connection between the Gladstone report and the Super PAC. He said that Gladstone “did a great service with its academic paper,” but that “now, the real work begins”: “We need Congress to work to get that first law passed that points us toward a flexible, long-term approach that can adapt to the speed and technical realities of AI development.”

When I asked Beall whom he would be soliciting donations from (effective altruism organizations? AI leaders like Geoffrey Hinton and Yoshua Bengio, who have famously called for “policy action to avoid extreme risks” of AI?), he said, “we aim to build as big of a coalition as we possibly can.”

“We expect to see a diverse group of funders invest in the organization, because the one issue that brings them together is AI safety and security,” he said. “If you pay attention to the political debates surrounding AI right now, you see that a vast majority of Americans are concerned about catastrophic risks. That means that there is an incredibly diverse array of people who are with us on the issue, and could decide to invest in Americans for AI Safety.”

I wasn’t sure about that; I would venture that present-day risks like deepfakes and election disinformation, while less catastrophic, are on many minds. But what does seem inarguable is that when it comes to policy around AI safety and security, money and politics will continue to merge.
