US Announces New AI Safety Consortium


The Biden administration has announced the creation of the US AI Safety Institute Consortium (AISIC), which it calls "the first-ever consortium dedicated to AI safety." More than 200 leading AI stakeholders will take part in the consortium.

The move was announced by US Secretary of Commerce Gina Raimondo. According to the administration, the initiative will unite AI creators and users, academics, government and industry researchers, and civil society organizations in support of the development and deployment of safe and trustworthy AI technology. The new consortium will sit under the US AI Safety Institute (USAISI).

In October 2023, US President Biden signed an Executive Order to promote the “safe, secure, and trustworthy” use and development of AI. This order establishes new standards for AI safety and security. The US says that with this Executive Order, the President directs the “most sweeping actions ever taken” to protect Americans from the potential risks of AI systems. The new consortium will contribute to the priority actions outlined in the Executive Order.

Leading AI companies, including Amazon, Google, Apple, Anthropic, Microsoft, OpenAI, and NVIDIA, will be part of the consortium. "The US government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence," Raimondo said in a statement. Bruce Reed, White House Deputy Chief of Staff, said the newly formed consortium provides a critical forum for working together to manage the risks posed by AI.

The administration said the consortium also includes state and local governments, as well as non-profits. The group will additionally work with organizations from like-minded nations.