On Wednesday, the US Cybersecurity and Infrastructure Security Agency (CISA) and the Australian Signals Directorate's Australian Cyber Security Centre (ASD's ACSC), along with other partners, released joint cybersecurity guidance for critical infrastructure owners and operators using AI in their operational technology (OT) systems. The document presents four main principles to help organizations benefit from AI in OT while managing risks. It highlights machine learning, large language models, and AI agents because of their complex security challenges. The guidance also covers systems that use traditional statistical models and logic-based automation.  

The document, “Principles for the Secure Embedding of Artificial Intelligence in Operational Technology,” outlines key steps for safely integrating AI into OT systems. It highlights staff AI risk training, secure development, and careful consideration of business needs. The guidance urges organizations to address short- and long-term data security, implement strong governance to comply with regulations, and regularly test AI models. It also emphasizes ongoing oversight, transparency, and the inclusion of AI in incident response plans to protect safety and security.  

The Purdue model is still a common way to organize OT and IT devices and networks. The guidance gives examples of current and possible AI uses in critical infrastructure based on this model. Predictive machine learning models are usually in operational layers (0-3). Large language models are more often in business layers (4-5) and often work with OT data.  

Level zero covers field devices such as sensors, actuators, and other components that interact directly with physical processes. These devices generate OT data that can be used to train AI models, particularly predictive machine learning models, or to flag marked deviations that may signal anomalies or emerging issues.  

Level one includes local controllers, which are systems designed to provide automated regulation for a process cell or production line. This category includes devices such as programmable logic controllers and remote terminal units. Some modern PLCs and edge controllers can run lightweight, pre-trained predictive systems that support tasks like anomaly detection, load balancing, and maintaining a known safe state.  

Level two covers local supervisory systems that manage a specific process line or cell. These include SCADA systems, distributed control systems, and human-machine interfaces. AI models, mostly predictive machine learning, analyze data from these systems to spot early equipment anomalies and notify operators when corrective action is needed.  

Level three involves site-wide supervisory systems that oversee an entire facility or major sections of it. These include manufacturing execution systems and historians. Predictive machine learning models analyze aggregated historian data to predict maintenance needs and plan repairs. These models can also be used in local supervisory tools to offer recommendations for operator decision-making on performance and measurements.  

Levels 4 and 5 refer to enterprise and business networks, which include IT systems that manage corporate processes and support decision-making in critical infrastructure settings. This can involve OT data analysis and autonomous security capabilities that span both OT and IT environments. AI systems, including agents and large language models, can be applied to improve business workflows, especially where engineering needs intersect with wider business objectives. AI can also analyze OT data alongside IT data to measure operations, detect anomalies and threats, identify hardening opportunities, and generate insights that help enterprises prioritize resiliency decisions.  
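The anomaly detection that recurs across these layers, flagging marked deviations from a learned baseline in sensor or historian data, can be illustrated with a minimal sketch. The rolling window size and z-score threshold below are illustrative assumptions, not values from the guidance, and real deployments use trained predictive models rather than simple statistics:

```python
from collections import deque
import statistics

def make_anomaly_detector(window=50, z_threshold=3.0):
    """Flag sensor readings that deviate sharply from a rolling baseline.

    A simplified stand-in for the predictive ML models the guidance
    describes at Purdue levels 0-3; window and threshold are
    illustrative assumptions.
    """
    history = deque(maxlen=window)

    def check(reading):
        if len(history) >= 10:  # wait for a minimal baseline
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history)
            if stdev > 0 and abs(reading - mean) / stdev > z_threshold:
                history.append(reading)
                return True  # anomalous: notify the operator
        history.append(reading)
        return False

    return check

detector = make_anomaly_detector()
# Stable cyclical readings, then one sharp deviation.
readings = [20.0 + 0.1 * (i % 5) for i in range(40)] + [35.0]
flags = [detector(r) for r in readings]
# Only the final reading is flagged.
```

In practice the flag would feed a human-machine interface or alarm system at level two, keeping an operator in the loop rather than acting autonomously.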

The first principle focuses on understanding AI’s impact on operational technology. It describes the distinctive risks posed by integrating AI into OT systems and outlines potential impacts. Key risks for critical infrastructure owners and operators are presented, though organizations are advised that the list is not exhaustive and should be supplemented with their own assessments. Later sections of the guidance explain how to address these risks, providing cross-references and mitigation strategies.  

Principle two urges organizations to assess how AI fits in OT. Before adding AI to OT, owners and operators should check if AI fits their needs and offers any advantages over other technologies. They should also consider whether AI’s existing capabilities meet their needs before using more complex AI solutions.  

AI delivers unique benefits, but as principle two reminds us, it is still developing and needs continuous risk assessment. Organizations should consider factors such as security, performance, complexity, cost, and impact on OT safety before deployment. For each use case, they should weigh the pros and cons of using AI against the application’s needs.  

Owners and operators must assess their ability to manage AI in their OT environment. They should understand how AI could introduce risks, such as the need for additional hardware, software, or security measures. If AI is used, they must follow secure development practices and a risk management framework, such as the NIST AI Risk Management Framework, to keep the system safe.  

The guidance notes how OT vendors influence the entry of AI into OT. Some devices now have built-in AI features that sometimes require an internet connection. Vendors mainly add AI tools, such as models that predict grid frequency, and develop smart devices for engineering and control tasks.  

Critical infrastructure owners should ask vendors for transparency about AI in their products. Vendors must commit to strong security. Contracts should clearly state AI features and operation. Vendors should explain their AI use, share a software bill of materials, and provide insight into their supply chain. If a vendor finds that an AI feature could cause errors, they should notify operators.  

Operators might not want vendors to train AI on operational data as it could contain intellectual property or sensitive information. A data usage policy should state where data is stored, how it is sent, and how it is encrypted. Buyers should check if the product can run on-site or without the vendor’s cloud. Operators should decide when and how to enable or disable AI features. These actions help organizations control and manage AI risks in OT systems.  

The third principle stresses the need for strong governance to safely integrate AI into OT. This includes clear policies, procedures, and accountability for AI decisions. The governance structure must involve key stakeholders and AI vendors. This ensures oversight across procurement, development, design, deployment, and operations.  

Each key stakeholder helps build effective AI governance. Senior leaders such as the CEO and CISO must support the effort. Their backing is essential to strong governance and to addressing AI security risks. On the functional side, experts in OT, IT, and AI should be involved, as their knowledge reveals risks and obstacles that others might miss.  

Cybersecurity teams add protection by making policies to keep OT data used by AI models safe. They find vulnerabilities and suggest ways to reduce risks. This helps secure systems and information.  

Principle four urges strong oversight and reliable backup practices for AI in OT systems. People remain responsible for safety. AI tools should support oversight and safe operation. This principle calls for AI systems that can be monitored, checked, and fixed when needed. The guidance explains that organizations should set up monitoring and oversight for AI in OT. This ensures operators always have control as systems change.  

Critical infrastructure owners should track all AI components and dependencies. They should log and monitor their inputs and outputs. It is important to set and maintain clear standards for safe OT operations so they know when maintenance or backup is needed.  
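The logging and safe-operating-standards practice described above can be sketched as a thin wrapper around any AI component: record every input and output, and fall back to a known safe value when the output leaves defined bounds. The function name, safe range, and fallback value here are illustrative assumptions, not from the guidance:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-oversight")

def guarded_inference(model, reading, safe_range=(0.0, 100.0), fallback=50.0):
    """Log every model input/output and enforce safe operating bounds.

    If the model's output falls outside the defined safe range, revert
    to a known safe value and record the event for operator review.
    All names and limits here are illustrative assumptions.
    """
    output = model(reading)
    log.info("input=%s output=%s", reading, output)
    lo, hi = safe_range
    if not (lo <= output <= hi):
        log.warning("output %s outside safe range %s; using fallback",
                    output, safe_range)
        return fallback
    return output

# Hypothetical model: doubles the setpoint reading.
result_ok = guarded_inference(lambda x: x * 2, 30.0)   # in range
result_bad = guarded_inference(lambda x: x * 2, 80.0)  # out of range -> fallback
```

The point of the pattern is that the fallback path is deterministic and auditable, so operators retain control even when the model misbehaves.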

The document sets key performance indicators (KPIs) to track AI results. Owners and operators should meet regularly with stakeholders, such as vendors and boards. These meetings help review results, discuss issues, and identify opportunities for improvement.  
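KPI tracking of this kind can be as simple as summarizing logged AI alerts for stakeholder reviews. The metric names below are illustrative assumptions, not KPIs taken from the guidance:

```python
def kpi_summary(alerts):
    """Compute simple oversight KPIs from logged AI alerts.

    `alerts` is a list of (flagged, confirmed) booleans per event:
    whether the model raised an alert, and whether an operator
    confirmed it. Metric names are illustrative assumptions.
    """
    total = len(alerts)
    flagged = sum(1 for f, _ in alerts if f)
    true_pos = sum(1 for f, c in alerts if f and c)
    return {
        "alert_rate": flagged / total,          # share of events alerted on
        "precision": true_pos / flagged if flagged else None,
    }

stats = kpi_summary([(True, True), (True, False),
                     (False, False), (False, False)])
```

Reviewing such metrics regularly with vendors and boards, as the guidance suggests, turns abstract oversight into a concrete agenda item.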

Commenting on the guidance, Hugh Carroll, Vice President of Corporate and Government Affairs at Fortinet, wrote in a written statement, “Leading global cybersecurity agencies, including the US’s CISA and the UK’s NCSC and Canada’s CCCS, have released much-needed guidance outlining principles for the secure deployment of artificial intelligence in operations technologies. Fortinet is honored to have the privilege to contribute to this important effort as we collectively work to best safeguard OT environments from today and tomorrow’s threats.”  

These new principles deliver timely and practical guidance to safeguard resilience and security as AI becomes central to OT. Marcus Fowler, CEO of Darktrace Federal, said, “It’s encouraging to see a strong focus on behavioral analytics, anomaly detection, and safe operating limits. These can identify AI drift, model changes, or emerging security risks before they influence operations.” This move from static thresholds to behavior-based oversight is vital. It helps defend cyber-physical systems, even when small deviations carry great risk.  

Fowler highlighted that the guidance also urges caution with LLM-first approaches to safety decision-making in OT environments. These approaches are unreliable and hard to explain. They create unacceptable risk when human safety and process continuity are at stake. It is important to use the right AI for the right job.  

Taken together, these principles reflect a maturing understanding that AI in OT must be paired with uninterrupted monitoring and transparent, separate identity controls. Fowler said his team welcomes the guidance and remains committed to helping operators implement these safeguards to strengthen resilience across critical infrastructure. He also pointed to growing recognition of AI’s operational value in cybersecurity, evidenced by recent NDAA provisions from bipartisan members of the House Armed Services Committee that emphasize AI-driven anomaly detection, securing operational technology, and incorporating AI into cybersecurity training, calling it an active step toward strengthening US cyber readiness.  

Floris Dankaart, Lead Product Manager at the cybersecurity consulting firm NCC Group, said the worldwide coordination is noteworthy: CISA, Australia’s ACSC, the NSA, and other partners coming together to address a shared challenge is rare and signals the importance of the issue. Equally important, he noted, most AI guidance addresses IT rather than OT, so it is refreshing and necessary to see regulators acknowledge OT-specific risks and provide actionable principles for safely integrating AI in these environments.  

A major challenge will be addressing skill gaps in audit teams, especially those related to AI. OT environments are typically much more structured and deterministic than IT environments, which might be at odds with many modern LLM-based AI applications, according to Dankaart. At the same time, anomaly detection based on machine learning models has been commonplace in OT threat identification and monitoring for some time and remains a key component of the defender’s arsenal.  

He added that balancing these factors and getting to the heart of what we really mean by AI will be key for critical infrastructure owners. Luckily, some of the best practices in OT and AI use overlap, such as requiring a manual fallback procedure, the ability to operate in island mode, and human-in-the-loop controls, to name a few.  

In conclusion, the guidance makes clear that adopting AI in OT presents both opportunities and risks for critical infrastructure owners and operators. While AI can improve efficiency, productivity, and decision-making, it also introduces new challenges that require diligent management to support the safety, security, and dependability of OT systems.  

To successfully manage the risks of adding AI to OT systems, critical infrastructure owners and operators must follow the guidance’s principles, understand AI, consider its use in OT, set up governance and assurance frameworks, and build safety and security into AI and AI-enabled OT systems. By adhering to these steps and frequently monitoring, testing, and improving AI models, organizations can achieve a balanced, secure integration of AI into OT systems that support vital public services. 

Source: Global security agencies issue joint guidance to help critical infrastructure integrate AI into OT systems 
