On the heels of Microsoft’s investment in and partnership with French large language model startup Mistral AI, the company continues to work to dispel the perception that it is blocking competition through its deep partnership with (and financial stake in) OpenAI. Today the company launched a new framework it calls “AI Access Principles”: an eleven-point plan that Microsoft said will “govern how we will operate our AI datacenter infrastructure and other important AI assets around the world.”

The points cover areas such as building and operating an app store that lets businesses pick and choose among different LLMs and other AI products, and a commitment to keeping customers’ proprietary data out of Microsoft’s training models. They also include a commitment to let customers change cloud providers, or services within the cloud, if they choose to; a focus on building cybersecurity around AI services; attention to building data centers and other infrastructure in an environmentally sound way; and education investments.

Brad Smith, the president and vice chair of Microsoft, announced the framework today at Mobile World Congress in Barcelona. Although the implication is that Microsoft is open to dialogue with stakeholders, Smith, ironically, delivered the news in a keynote speech, with no scope for follow-up questions.

The announcement comes at the same time that Microsoft is coming under increasing regulatory scrutiny for its $13 billion investment in OpenAI, which currently gives it a 49% stake in the startup that is leading the charge for generative AI services globally. In January, the European competition watchdog said that it was assessing whether the investment falls under antitrust rules.

The principles take specific aim at how third parties might use Microsoft’s platforms and services to develop AI products, a critical enterprise business area that the company hopes to grow in the coming years, not just with the carriers who attend MWC but with businesses and organizations from a much wider array of industries.

“If they are training a model on our infrastructure, if they are deploying it on our infrastructure, we recognise that their data is their data, we will not access it and use it to compete with the companies that are relying on our infrastructure,” Smith said.

These AI Access Principles, to be clear, are not binding rules for Microsoft, nor is there any detail laid out about how the commitments might be verified or tracked, but they serve a purpose nonetheless. In the event of a formal regulatory investigation, the company will likely use them to argue that it is taking proactive steps to ensure competition in the market.

“In fact, as of today, we have almost 1600 models running in our data centres, 1500 of which are open source models,” said Smith on stage today, “showing how we as a company … focus on proprietary and open source models, companies, large and small.”

On the other hand, by laying out the principles publicly, Microsoft has turned them into a public commitment that the public, its competitors, and pointedly regulators could use as a reference point if they believe the company has failed to measure up.
