A whole bunch of big tech companies, have joined a US-based effort to advance responsible AI practices. The US AI Safety Institute Consortium (AISIC) will count Meta, Google, Microsoft and Apple as members. Commerce Secretary Gina Raimondo just announced the group’s numerous new members and said that they’ll be tasked with carrying out actions indicated by on artificial intelligence.
“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence,” Raimondo said in a statement.
Biden’s October executive order was far-reaching, so this consortium will focus on developing guidelines for “red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.”
Red-teaming that dates back to the Cold War. It refers to simulations in which the enemy was called the “red team.” In this case, the enemy would be an Those engaged in this practice will try to trick the AI into doing bad things, like exposing credit card numbers, via prompt hacking. Once people know how to break the system, they can build better protections.
Watermarking synthetic content is another important aspect of Biden’s original order. Consortium members will develop guidelines and actions to ensure that users can easily identify AI-generated materials. This will hopefully and AI-enhanced misinformation. Digital watermarking has yet to be widely adopted, though this program will “facilitate and help standardize” underlying technical specifications behind the practice.
The consortium’s work is just beginning, though the Commerce Department says it represents the largest collection of testing and evaluation teams in the world. Biden’s executive order and this affiliated consortium are pretty much all we’ve got for now. Congress meaningful