Guidelines for Secure AI System Development

About This Document

This document is published by the UK National Cyber Security Centre (NCSC), the US Cybersecurity and Infrastructure Security Agency (CISA), and the following international partners:

  • National Security Agency (NSA)
  • Federal Bureau of Investigation (FBI)
  • Australian Signals Directorate’s Australian Cyber Security Centre (ACSC)
  • Canadian Centre for Cyber Security (CCCS)
  • New Zealand National Cyber Security Centre (NCSC-NZ)
  • Chile’s Government CSIRT
  • National Cyber and Information Security Agency of the Czech Republic (NUKIB)
  • Information System Authority of Estonia (RIA)
  • National Cyber Security Centre of Estonia (NCSC-EE)
  • French Cybersecurity Agency (ANSSI)
  • Germany’s Federal Office for Information Security (BSI)
  • Israeli National Cyber Directorate (INCD)
  • Italian National Cybersecurity Agency (ACN)
  • Japan’s National center of Incident readiness and Strategy for Cybersecurity (NISC)
  • Japan’s Secretariat of Science, Technology and Innovation Policy, Cabinet Office
  • Nigeria’s National Information Technology Development Agency (NITDA)
  • Norwegian National Cyber Security Centre (NCSC-NO)
  • Poland Ministry of Digital Affairs
  • Poland’s NASK National Research Institute (NASK)
  • Republic of Korea National Intelligence Service (NIS)
  • Cyber Security Agency of Singapore (CSA)

Acknowledgements

The following organisations contributed to the development of these guidelines:

  • Alan Turing Institute
  • Anthropic
  • Databricks
  • Georgetown University’s Center for Security and Emerging Technology
  • Google
  • Google DeepMind
  • IBM
  • Imbue
  • Inflection
  • Microsoft
  • OpenAI
  • Palantir
  • RAND
  • Scale AI
  • Software Engineering Institute at Carnegie Mellon University
  • Stanford Center for AI Safety
  • Stanford Program on Geopolitics, Technology and Governance

Disclaimer

The information in this document is provided “as is” by the NCSC and the authoring organisations who shall not be liable for any loss, injury or damage of any kind caused by its use save as may be required by law. The information in this document does not constitute or imply endorsement or recommendation of any third party organisation, product, or service by the NCSC and authoring agencies. Links and references to websites and third party materials are provided for information only and do not represent endorsement or recommendation of such resources over others.

This document is made available on a TLP:CLEAR basis (https://www.first.org/tlp/).

Contents

Executive summary

Introduction

Why is AI security different?

Who should read this document?

Who is responsible for developing secure AI?

Guidelines for secure AI system development

1. Secure design

2. Secure development

3. Secure deployment

4. Secure operation and maintenance

Further reading

Executive Summary

This document recommends guidelines for providers of any systems that use artificial intelligence (AI), whether those systems have been created from scratch or built on top of tools and services provided by others. Implementing these guidelines will help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties.

This document is aimed primarily at providers of AI systems, whether based on models hosted by an organisation or making use of external application programming interfaces (APIs). We urge all stakeholders (including data scientists, developers, managers, decision-makers and risk owners) to read these guidelines to help them make informed decisions about the design, development, deployment and operation of their AI systems.

About the Guidelines

AI systems have the potential to bring many benefits to society. However, for the opportunities of AI to be fully realised, it must be developed, deployed and operated in a secure and responsible way.

AI systems are subject to novel security vulnerabilities that need to be considered alongside standard cyber security threats. When the pace of development is high – as is the case with AI – security can often be a secondary consideration. Security must be a core requirement, not just in the development phase, but throughout the life cycle of the system.

For this reason, the guidelines are broken down into four key areas within the AI system development life cycle: secure design, secure development, secure deployment, and secure operation and maintenance. For each section we suggest considerations and mitigations that will help reduce the overall risk to an organisational AI system development process.

  1. Secure design
    This section contains guidelines that apply to the design stage of the AI system development life cycle. It covers understanding risks and threat modelling, as well as specific topics and trade-offs to consider on system and model design.
  2. Secure development
    This section contains guidelines that apply to the development stage of the AI system development life cycle, including supply chain security, documentation, and asset and technical debt management.
  3. Secure deployment
    This section contains guidelines that apply to the deployment stage of the AI system development life cycle, including protecting infrastructure and models from compromise, threat or loss, developing incident management processes, and responsible release.
  4. Secure operation and maintenance
    This section contains guidelines that apply to the secure operation and maintenance stage of the AI system development life cycle. It provides guidelines on actions particularly relevant once a system has been deployed, including logging and monitoring, update management and information sharing.

The guidelines follow a ‘secure by default’ approach, and are aligned closely to practices defined in the NCSC’s Secure development and deployment guidance, NIST’s Secure Software Development Framework, and ‘secure by design principles’ published by CISA, the NCSC and international cyber agencies. They prioritise:

  • taking ownership of security outcomes for customers
  • embracing radical transparency and accountability
  • building organisational structure and leadership so secure by design is a top business priority
