As part of efforts to keep artificial intelligence development safe, 18 countries, including the United States and Britain, jointly unveiled new guidelines on AI cybersecurity. The guidelines for secure AI system development were created by the UK's National Cyber Security Centre (NCSC) and the US's Cybersecurity and Infrastructure Security Agency (CISA), and were formulated in cooperation with 21 other agencies from across the world. The signatories include all members of the G7 group of nations as well as countries from the Global South.
The guidelines, published on November 27, are mainly focused on raising the cybersecurity standards of AI and on helping ensure that AI is designed, developed, and deployed securely. Germany, Australia, Singapore, Italy, Estonia, Poland, Chile, Nigeria, and Israel were among the nations that signed on. According to the NCSC, the guidelines will help developers of any system that uses AI make informed cybersecurity decisions at every stage of development. The guidelines focus on four key areas within the AI system development life cycle: secure design, secure development, secure deployment, and secure operation and maintenance.
The secure design guidelines apply to the design stage of the AI system development life cycle and cover understanding risks and threat modeling. The secure deployment guidelines apply to the deployment stage and include protecting infrastructure and models from compromise, as well as responsible release. According to NCSC CEO Lindy Cameron, the new guidelines are an important step in shaping a truly global, common understanding of cyber risks and mitigation strategies around artificial intelligence. "This joint effort reaffirms our mission to protect critical infrastructure and reinforces the importance of international partnership in securing our digital future," said CISA Director Jen Easterly in a statement.