The rise of AI agents in healthcare: A breakthrough—or a breach waiting to happen?

Healthcare’s newest workforce may be AI agents, but their autonomy introduces serious security concerns.

As AI agents gain system access and autonomy, healthcare must consider identity management, security, and oversight.

For a healthcare industry under relentless pressure, AI agents promise efficiency, automation, and new insights.

Autonomous systems can analyze data, make decisions, trigger actions, and coordinate across clinical systems, all with minimal human oversight.

But there’s a darker side.

The same capabilities that make agentic AI powerful also make it potentially dangerous—especially in healthcare, where a single system error, security breach, or automated decision can ripple directly into patient safety risks.

For example, with a few clever prompt injections, researchers compromised the clinical AI tool Doctronic in several potentially devastating ways, including compelling it to prescribe triple the standard dose of OxyContin.

Examples like this force healthcare leaders to confront a difficult truth: The rise of agentic AI could fundamentally reshape the healthcare threat landscape.

The New Attack Surface in Healthcare

Healthcare already faces relentless cyber threats. Agentic AI dramatically expands the attack surface because these systems typically require: broad system permissions, access to protected health information (PHI), and/or integration with EHRs, clinical systems, and operational platforms.

AI agents can initiate actions on their own, interact with APIs, query electronic health records, schedule workflows, and even coordinate tasks across multiple digital systems. In effect, they behave less like software and more like digital employees. But unlike people, they operate at machine speed, can interact with thousands of systems simultaneously, and, if compromised, can become lethal insider threats.

In some cases, an AI agent may execute thousands of API calls per minute across critical healthcare infrastructure. If an attacker hijacks that agent—or manipulates its instructions—the consequences could be catastrophic.

Security researchers warn that attackers don’t need to breach the entire hospital network anymore.

They just need to compromise the AI agent controlling it. That’s because AI agents effectively become high-privilege machine identities embedded deep inside healthcare infrastructure. As such, once an attacker controls the agent’s authentication token, they gain a mechanized insider capable of scraping data and executing actions at machine speed.

The unique high-stakes nature of healthcare further complicates the challenge of agentic AI. A workflow failure in retail might cause inconvenience, perhaps some financial loss. But a workflow failure in healthcare could delay medication administration, misroute clinical alerts, or disrupt care coordination.

If an AI agent misinterprets instructions, receives manipulated input, or drifts from its objective, it could:

  • Trigger incorrect workflows
  • Route sensitive patient data to unauthorized systems
  • Generate inaccurate clinical recommendations
  • Silently propagate errors across interconnected platforms

The healthcare industry has decades of safety protocols for human clinicians, but what about autonomous software acting on their behalf? Who is responsible when AI makes a mistake?

As AI systems become more embedded in healthcare decisions, legal and ethical accountability becomes murky. Experts warn that determining liability in AI-driven medical failures could become extremely difficult, as responsibility may be shared across clinicians, hospitals, and technology vendors.

In other words, if an AI agent makes the wrong call, the patient may be the one left holding the consequences.

The Identity Crisis of AI Agents

At the center of this problem lies identity. Every AI agent is a new digital identity inside the healthcare environment. Agents need credentials, permissions, and access to sensitive systems, just like human clinical users.

Yet many security frameworks were designed for humans logging into applications—not autonomous machines performing complex tasks across systems. Without strong identity management, organizations risk creating an environment where AI agents accumulate excessive privileges, credentials persist beyond their intended use, and oversight becomes fragmented across departments.

Researchers have already identified a looming risk of “agent sprawl,” where organizations deploy numerous AI agents with unclear ownership and inconsistent controls. Left unmanaged, these agents could become the next generation of unmanaged privileged accounts.

The Path Forward: Security by Design

None of this means healthcare should stop adopting AI. Agentic AI has enormous potential to improve efficiency, reduce clinician burnout, and enhance patient care. But if healthcare wants the benefits, it must confront the risks head-on.

That starts with building security and identity controls directly into the foundation of agentic AI systems, including:

  • Strong identity and access management for AI agents
  • Continuous monitoring of machine identities and behaviors
  • Explicit governance for autonomous system actions
  • Human oversight and “kill switches” for agent operations
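The "kill switch" idea above can be sketched as a gate that every agent action must pass through, giving operators a single point to halt autonomous activity. This is a hypothetical illustration, not a reference implementation; a production version would back the flag with a shared store so every node sees the halt immediately:

```python
import threading

class AgentKillSwitch:
    """A process-wide gate: operators can halt all agent actions at once."""

    def __init__(self):
        self._enabled = threading.Event()
        self._enabled.set()  # agents start enabled

    def halt(self) -> None:
        """Stop all guarded agent actions immediately."""
        self._enabled.clear()

    def resume(self) -> None:
        """Re-enable guarded agent actions after operator review."""
        self._enabled.set()

    def guard(self, action, *args, **kwargs):
        """Run the action only while the switch is enabled."""
        if not self._enabled.is_set():
            raise RuntimeError("agent operations halted by operator")
        return action(*args, **kwargs)

switch = AgentKillSwitch()
result = switch.guard(lambda x: x * 2, 21)  # runs normally while enabled
switch.halt()
try:
    switch.guard(lambda x: x * 2, 21)
    halted = False
except RuntimeError:
    halted = True  # the gate refused to run the action
```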

Once autonomous systems are embedded into clinical workflows, retrofitting security becomes exponentially harder.

So, unless the industry prioritizes identity, security, and governance from the start, the transformational power of agentic AI could also introduce some of the most serious digital risks healthcare has ever faced.

Learn more about how Imprivata addresses agentic AI identity management.
