AI/ML, Government Regulations

Governments Issue AI Security Guidance to Safeguard Critical Infrastructure

As AI adoption accelerates across industries, the U.S., U.K., Australia, and New Zealand have jointly released new guidelines to help organizations build more secure AI systems, reports CyberScoop. The advisory underscores the importance of protecting training data and controlling access to AI infrastructure, particularly in environments tied to critical infrastructure like energy, healthcare, and water systems.

The document, developed by national cybersecurity agencies including the FBI, NSA, and CISA, offers practical recommendations spanning the AI lifecycle—from data collection to deployment and operations. It highlights the need for secure digital practices such as digital signatures, trusted infrastructure, and periodic risk assessments to detect and manage potential threats before they can impact real-world systems.

A major focus of the guidance is on preventing data-related risks that can compromise AI model reliability. Recommended techniques include cryptographic hashing to verify data integrity, as well as anomaly detection algorithms to identify and remove suspicious or harmful data before model training. These steps aim to reduce the likelihood of both accidental and intentional data corruption.
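The two techniques named above can be sketched in a few lines of Python. This is a minimal illustration of the general approach, not code from the advisory itself; the function names and the z-score threshold are assumptions chosen for the example:

```python
import hashlib
import statistics

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a dataset file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(path: str, expected_digest: str) -> bool:
    """Return True only if the file's hash matches a trusted record,
    flagging accidental or deliberate tampering before training."""
    return sha256_of_file(path) == expected_digest

def filter_outliers(values: list[float], z_threshold: float = 3.0) -> list[float]:
    """Drop data points whose z-score exceeds the threshold --
    a simple stand-in for the anomaly-detection step."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return list(values)
    return [v for v in values if abs(v - mean) / stdev <= z_threshold]
```

In practice the trusted digest would come from a signed manifest rather than a hardcoded string, and production pipelines would use more robust outlier detectors than a single z-score pass, but the shape of the check is the same: verify provenance first, then screen the contents.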

The advisory also calls attention to broader challenges like bias, misinformation, and data drift—factors that can skew model behavior over time. With AI systems becoming embedded in operational technology, the guidance urges companies to treat security as a core design principle, rather than a feature added later. The recommendations signal a growing urgency among Western governments to close security gaps before adversaries can exploit them.
