RSA recap, the LiteLLM breach, and the quest to fix AI agent security
Learn more about solving agentic AI identity and access gaps → https://www.hashicorp.com/en/blog/agentic-runtime-security-solving-agentic-ai-identity-and-access-gaps
LiteLLM is a nifty little Python library that gives you access to about 100 different AI services through one API. It gets an estimated 3.4 million downloads a day.
And last week, it was turned into a Trojan horse, distributing infostealers to hundreds of thousands of devices. (At least, that’s what TeamPCP says—the hackers behind the LiteLLM breach and a slew of other high-profile software supply chain attacks in recent weeks.)
To quote Andrej Karpathy: this is "basically the scariest thing imaginable in modern software."
On this episode of Security Intelligence, Suja Viswesan, Dave McGinnis, and Jeff Crume help us break down the LiteLLM breach and the broader campaign TeamPCP is waging.
We’re also joined in the first segment by HashiCorp Field CTO Jake Lundberg for a discussion of how organizations are trying, with varying degrees of success, to tackle the agentic AI problem.
AI agents are identities, but identities our existing frameworks weren’t built to house. Simply porting existing human and non-human identity management practices onto them won’t cut it.
But the question remains: What do we need instead?
All that and more on Security Intelligence.
Segments
00:00 -- Intro
01:13 -- Who will fix AI agent security?
21:17 -- RSAC 2026 recap
29:31 -- 2026's most dangerous cyberattacks
40:45 -- The LiteLLM breach
The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
Explore the podcast → https://www.ibm.com/think/podcasts/security-intelligence