The U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the U.K.’s National Cyber Security Centre (NCSC), along with a group of 21 security agencies and ministries worldwide, have endorsed the publication of a set of guidelines for the development of secure and safe artificial intelligence (AI).
The theme of the 20-page Guidelines for Secure AI System Development, a joint effort of CISA and the NCSC, is to keep AI safe from rogue actors, pushing for companies to create AI systems that are "secure by design."
An additional 19 organizations, including tech giants Amazon, Google, IBM and Microsoft and all members of the Group of 7 major industrial economies, have also given their nod of approval to the document’s contents. The document provides essential recommendations for AI system development and emphasizes the importance of adhering to "Secure by Design" principles, the agencies said.
The participating agencies and countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and users safe from abuse, officials said.
CISA Director Jen Easterly told Reuters that it is significant that so many countries have put their names to the idea that AI systems should put safety first.
"This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs," she said, adding that the guidelines cement the principle of ensuring security at the design phase.
A Closer Look at the AI Guidelines
The guidelines are broken down into four key areas within the AI system development lifecycle:
- Secure design guidelines apply to the design stage of the AI system development lifecycle, covering risk assessment and threat modelling, as well as specific topics and trade-offs to consider in system and model design.
- Secure development guidelines apply to the development stage of the AI system development lifecycle, including supply chain security, documentation, and asset and technical debt management.
- Secure deployment guidelines apply to the deployment stage of the AI system development lifecycle, including protecting infrastructure and models from compromise, threat or loss, developing incident management processes and responsible release.
- Secure operation and maintenance guidelines apply to the operation and maintenance stage of the AI system development lifecycle, covering actions that are particularly relevant once a system has been deployed, including logging and monitoring, update management and information sharing.
The AI guidelines mark a stepped-up strategy by the U.S. and other countries to manage the impact of AI systems.
CISA Releases AI Roadmap
Earlier this month, CISA released its inaugural Roadmap for Artificial Intelligence (AI). In October, President Joe Biden issued an Executive Order (EO) that reflects ongoing efforts to address various aspects of AI, including safety standards, national security, critical infrastructure protection, prevention of AI-related threats, intellectual property theft and talent retention.
“Cybersecurity is key to building AI systems that are safe, secure, and trustworthy,” said DHS Secretary Alejandro Mayorkas. “The guidelines jointly issued today by CISA, NCSC, and our other international partners, provide a commonsense path to designing, developing, deploying, and operating AI with cybersecurity at its core. By integrating ‘secure by design’ principles, these guidelines represent an historic agreement that developers must invest in, protecting customers at each step of a system’s design and development.”