CISA, in partnership with the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC), the U.S. National Security Agency, the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre, and the UK National Cyber Security Centre, has released Careful Adoption of Agentic AI Services. The publication is the first multinational, government-issued guidance focused specifically on the security of agentic AI: AI systems that do not just generate content, but reason, plan, and take action on their own.
For federal mission owners, integrators, and contracting officers, the document is worth reading in full. The short version is this: agentic AI is not a new flavor of generative AI, and it cannot be secured with the same controls organizations have been building around chatbots and copilots. It demands a layered defense rooted in established cybersecurity disciplines, applied to a new class of systems that act autonomously across tools, data sources, and other agents.
What makes agentic AI different
A generative AI model produces an answer. An agentic AI system uses that answer to do something. As the guidance describes it, agentic systems combine an LLM with external tools, data sources, memory, and planning workflows so they can perceive their environment and take action toward a goal. The same authoring agencies note that some agentic systems are even capable of spawning sub-agents to handle sub-tasks without human intervention.
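The perceive-plan-act loop described above can be reduced to a few lines. The sketch below is an illustrative toy, not any agency's reference design: the model call is a stub standing in for an LLM, and the tool names are invented for the example.

```python
# Minimal sketch of an agentic loop: a planner selects and invokes tools
# until the goal is met. "fake_model" is a stub for an LLM planning call;
# the "search" tool is hypothetical.

def fake_model(goal, memory):
    """Stand-in for an LLM planning step: returns the next action."""
    if "searched" not in memory:
        return {"tool": "search", "args": {"query": goal}}
    return {"tool": "finish", "args": {"answer": memory["searched"]}}

TOOLS = {
    "search": lambda query: f"results for {query!r}",
}

def run_agent(goal, max_steps=5):
    memory = {}
    for _ in range(max_steps):       # hard step budget: a basic safety control
        action = fake_model(goal, memory)
        if action["tool"] == "finish":
            return action["args"]["answer"]
        result = TOOLS[action["tool"]](**action["args"])
        memory["searched"] = result  # persist tool output to agent memory
    raise RuntimeError("step budget exhausted")

print(run_agent("agentic AI guidance"))
```

Even this toy shows where the new attack surface lives: the loop acts on whatever the model returns, so anything that influences the model's output influences the actions taken.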
That shift from output to action is the entire security story. A hallucination from a chatbot embarrasses an organization. A hallucination from an agent that has write access to a production system, an inbox, or a contract repository can cause real, durable harm before anyone in the loop notices.
The four risk categories worth memorizing
The guidance organizes agentic AI risk into four categories that map cleanly onto how federal cybersecurity teams already think about defense in depth:
Privilege risks. Agents inherit permissions. When those permissions are too broad, evaluated only at startup, or shared across components, a single compromise becomes a privilege escalation event. The guidance specifically calls out the confused deputy pattern: a low-privileged actor manipulating a trusted, high-privileged agent into performing actions the actor could not perform directly. The audit logs in that scenario look entirely legitimate, which is part of why these incidents are hard to detect.
Design and configuration risks. Static role checks, unvetted third-party components, cached authorization decisions, and weak segmentation between agent environments all create conditions where an agent operates outside its intended privilege envelope. These are not novel cybersecurity problems. They are familiar problems amplified by autonomy.
Behavioral risks. This is where agentic systems diverge most sharply from traditional IT. The guidance describes deceptive behavior (agents adapting their behavior under evaluation), emergent capabilities (designers cannot fully anticipate what their own systems will do), and malicious exploitation through prompt injection, jailbreaks, data poisoning, and adversarial inputs. A compromised agent can function as an insider threat using legitimate access while appearing to operate normally.
Structural risks. Tool and agent squatting, compromised third-party components, sensitive data aggregation, and rogue agents in multi-agent systems all introduce systemic exposure. In multi-agent environments, a single compromised agent can spread incorrect information, exploit consensus mechanisms, and propagate malicious plans peer-to-peer at machine speed.
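The confused deputy pattern called out under privilege risks is easy to reproduce in miniature. The sketch below is a hypothetical illustration, with invented identities and a toy `delete_record` tool: a deputy that authorizes actions against its own identity hands its privileges to any requester that can reach it, while the hardened version authorizes against the original requester.

```python
# Illustration of the confused deputy pattern and one mitigation.
# Identities, privileges, and the delete_record "tool" are invented.

PRIVILEGES = {
    "agent": {"read", "delete"},   # trusted, high-privileged deputy
    "intern": {"read"},            # low-privileged requester
}

def delete_record(record_id):
    return f"deleted {record_id}"

def vulnerable_deputy(requester, record_id):
    # BUG: checks only the deputy's own privileges, so any requester
    # who can reach the deputy effectively inherits its "delete" right.
    if "delete" in PRIVILEGES["agent"]:
        return delete_record(record_id)

def hardened_deputy(requester, record_id):
    # FIX: authorize against the original requester, not the deputy.
    if "delete" not in PRIVILEGES.get(requester, set()):
        raise PermissionError(f"{requester} may not delete records")
    return delete_record(record_id)

print(vulnerable_deputy("intern", 42))   # succeeds: privilege escalation
# hardened_deputy("intern", 42)          # raises PermissionError
```

Note why the audit trail looks clean in the vulnerable case: the deletion is performed by a legitimate, high-privileged identity doing something it is allowed to do.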
The guidance’s central principle
One sentence from the document deserves to be on every federal AI governance committee’s wall: AI security should be addressed within established cybersecurity frameworks, not as a separate discipline. Agentic systems run on software and hardware, operate over networks, and interact with other digital services. They are exposed to the same threats as traditional IT, plus additional ones. Secure by Design, defense in depth, identity and access management, continuous monitoring, and incident response all still apply. They simply have to be applied to a class of system that can take initiative.
That framing matters because it is also the path to compliance. Federal authorities are increasingly going to expect that agentic AI deployments live inside the same RMF, FedRAMP, and CMMC structures that govern the rest of the enterprise. Treating agentic AI as a side project under a separate governance track is a path to both security gaps and audit findings.
Best practices, distilled
The guidance organizes its recommendations across four lifecycle phases. The most operationally consequential items include:
- Least privilege, enforced at runtime. Permissions evaluated only at deployment are stale by the second tool call. The guidance recommends just-in-time credentials, fresh cryptographic proofs before privileged calls, and continuous identity and authorization verification at runtime.
- Cryptographic attestation of agent integrity. Agents should be able to prove they are running expected, unmodified code before performing sensitive actions. This is a concept federal cybersecurity teams will recognize from secure boot and TPM-anchored device attestation, now applied at the agent layer.
- Isolation and segmentation by blast radius. Separate high-risk agents into distinct domains, deny agent enclaves write access to logs, and limit cascading failure paths.
- Continuous monitoring of internal processes, not just inputs and outputs. Agentic processes can outpace human oversight. Logs need to capture tool calls, memory interactions, internal reasoning, decisions, and actions, not just the prompt and the response.
- Human control points throughout the workflow. Live monitoring, mandatory approval for decision-making steps, auditability, and reversibility. The guidance is explicit that agents approved for low-risk tasks should not be able to autonomously progress into higher-risk activities.
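Several of these practices compose naturally into a single gate that every tool call must pass through. The sketch below is a simplified, hypothetical illustration of that idea, not a reference implementation: each call is re-authorized at runtime, high-risk tools require explicit human approval, and every decision is logged before the action executes. A real deployment would back this with an identity provider, short-lived credentials, and tamper-evident log storage.

```python
import time

# Hypothetical runtime gate applying several recommended controls:
# per-call authorization, human approval for high-risk tools, and an
# audit log of every decision. Policy entries and tool names are invented.

POLICY = {
    "read_ticket": {"risk": "low"},
    "send_email": {"risk": "high"},   # requires human approval on every call
}
AUDIT_LOG = []

def approve(agent_id, tool):
    """Stand-in for a human approval step; a real system would block here."""
    return False  # default-deny for the sketch

def gated_call(agent_id, tool, fn, *args):
    entry = {"ts": time.time(), "agent": agent_id, "tool": tool}
    rule = POLICY.get(tool)
    if rule is None:                   # unknown tool: deny, never allow-by-default
        entry["decision"] = "denied:unregistered"
        AUDIT_LOG.append(entry)
        raise PermissionError(f"tool {tool!r} not registered")
    if rule["risk"] == "high" and not approve(agent_id, tool):
        entry["decision"] = "denied:no-approval"
        AUDIT_LOG.append(entry)
        raise PermissionError(f"{tool!r} requires human approval")
    entry["decision"] = "allowed"
    AUDIT_LOG.append(entry)            # log before acting, not after
    return fn(*args)

result = gated_call("triage-agent", "read_ticket", lambda tid: f"ticket {tid}", 7)
```

The design choice worth noting is default-deny: an agent approved for low-risk tools cannot drift into high-risk ones, because anything not explicitly registered and approved is refused, which is exactly the escalation boundary the guidance calls for.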
For the longer horizon, the document points toward system-theoretic approaches such as STPA, STPA-Sec, and CAST (developed at MIT) to analyze agentic AI systems holistically rather than component by component. This is a meaningful signal: the authoring agencies are preparing the field for a generation of agentic systems where component-level analysis is no longer sufficient.
Why this matters for federal mission owners
Federal agencies are already deploying agents into procurement workflows, customer support triage, IT operations, and decision support. The economics are too compelling to ignore. The guidance does not push back on that adoption. It pushes back on doing it without security in mind.
The practical takeaway for federal mission owners is to treat every agentic AI deployment the way a mature program treats a new mission system: with a clear authority to operate, a documented threat model, runtime privilege controls, continuous monitoring, and a tested incident response plan. The novelty of the technology is not a reason to relax the discipline.
Where S2i2 fits
S2i2 has spent more than a decade building, securing, and operating high-assurance environments for the Department of Defense and Federal Civilian agencies. The work in our portfolio that matters most for agentic AI security is not new: Zero Trust architecture, Risk Management Framework execution at scale, endpoint security and least-privilege enforcement, and continuous monitoring across NIPRNet, SIPRNet, and JWICS.
The CISA and ASD ACSC guidance does not require federal organizations to invent a new cybersecurity discipline. It requires them to apply the discipline they already have, with clear eyes, to a class of system that takes action on its own. That is the work S2i2 was built to do.
The full guidance is available at cyber.gov.au and on CISA’s website. It is worth reading.
S2i2, Inc. is an SBA-certified 8(a) Small Disadvantaged Business headquartered in Oakton, Virginia. S2i2 holds CMMI ML3 appraisals for both Services and Development, ISO 9001:2015, ISO/IEC 27001:2022, and ISO/IEC 20000-1:2018 certifications, CMMC Level 2 status, and a Top Secret Facility Clearance with SCI eligibility. CAGE 7N8U5 | UEI DQKRJT2C7AB5.