95 Million Users at Risk: The LiteLLM Supply Chain Hack and Your AI Security

If you feel like every time you open a news tab there’s a new "critical vulnerability" in the AI world, you aren’t alone. But the recent news surrounding LiteLLM and the TeamPCP supply chain attack isn’t just another headline you can scroll past. This one hits the very foundation of how modern businesses are building and using AI.
At B&R Computers, we’ve been tracking the rapid evolution of the "AI Supply Chain." It’s a fancy term for a simple problem: your AI tools rely on other people’s code, and if that code is compromised, your business is compromised too. The LiteLLM hack is a textbook example of why we need to stop treating AI security like a "future problem" and start treating it as a "right now" emergency.
What is LiteLLM and Why Should You Care?
Before we dive into the hack, let’s talk about what LiteLLM actually does. If your business uses AI, whether for a custom chatbot, data analysis, or internal automation, you’re likely using multiple models (like GPT-4, Claude, or Gemini).
LiteLLM is a popular open-source library that acts as a "universal translator" or proxy. Instead of writing separate code for every AI model you want to use, you just use LiteLLM, and it handles the rest. It’s incredibly efficient, which is why thousands of developers and millions of users rely on it daily.
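To see why that "universal translator" role matters, here is a minimal sketch of the routing idea: one completion interface dispatched to the right provider based on the model name. The prefixes and provider labels below are illustrative assumptions for this article, not LiteLLM's actual internals.

```python
# Sketch of the "universal translator" pattern: one call signature,
# routed to a provider by model-name prefix. Illustrative only.
PROVIDER_PREFIXES = {
    "gpt-": "openai",
    "claude-": "anthropic",
    "gemini-": "google",
}

def route_model(model: str) -> str:
    """Return the provider a given model name would be dispatched to."""
    for prefix, provider in PROVIDER_PREFIXES.items():
        if model.startswith(prefix):
            return provider
    raise ValueError(f"No provider configured for model: {model!r}")

print(route_model("gpt-4"))          # openai
print(route_model("claude-3-opus"))  # anthropic
```

Because every request funnels through this one layer, so do your API keys and your prompts, which is exactly why compromising that layer is so valuable to an attacker.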
But here’s the catch: because LiteLLM sits in the middle of your data and the AI models, it has access to your API keys and your sensitive prompts. When a tool this central gets hit, the ripples turn into a tsunami.

The Anatomy of the TeamPCP Supply Chain Attack
In March 2026, a group known as TeamPCP successfully executed a supply chain attack targeting the LiteLLM repository. They didn’t hack a specific company; they hacked the source code that everyone uses.
The attack was sophisticated in execution but hinged on a common weakness: a vulnerable GitHub Actions workflow. Through that workflow, the attackers stole a personal access token belonging to a service account. With that token in hand, they had the "keys to the kingdom": they could potentially inject malicious code into the LiteLLM library itself.
This wasn’t just a LiteLLM issue, either. The attackers also targeted Trivy, a widely used security scanner. Think about the irony there: the tool businesses use to scan for vulnerabilities was itself targeted to spread them.
When we talk about a "Supply Chain Attack," this is exactly what we mean. You might have the best firewalls in the world, but if the software you download and trust comes "pre-hacked" from the source, those firewalls won't save you. This is a massive escalation from previous incidents, such as The Langflow Hijack, showing that attackers are moving deeper into the infrastructure of AI development.
The 95 Million User Risk: Is Your Data Gone?
The figure of 95 million users at risk sounds staggering, and it is. While not every single user may have had their data exfiltrated, the potential for exposure is what keeps IT departments up at night.
If you are an SMB owner and your developers integrated LiteLLM into your tools, here is what was potentially at risk:
- API Keys: Your keys to OpenAI, Anthropic, and Google. If these are stolen, attackers can run up massive bills on your account or use your access to scrape data.
- Proprietary Data: The prompts your employees send to AI often contain sensitive business logic, client info, or trade secrets.
- Customer Privacy: If your AI tool interacts with customers, their personal information could have been intercepted.
The reality is that many businesses are operating in a state of Shadow AI. This happens when teams implement these powerful tools without a formal security review. They see the efficiency of LiteLLM, they plug it in, and they forget that every piece of external code is a potential doorway for a hacker.

Why AI Supply Chain Attacks are the New Frontier
For years, cybersecurity focused on protecting the "perimeter": your office network and your email. But AI has moved the goalposts. AI development moves so fast that developers often prioritize speed over security. They grab open-source libraries like LiteLLM or Trivy because they are "standard," assuming someone else has checked the locks.
TeamPCP proved that assumption wrong. By targeting the repository security workflows, they bypassed traditional defenses. They didn't need to guess your password; they just waited for you to update your software.
This is why at B&R Computers, we advocate for a "Zero Trust" approach to AI. You cannot assume a tool is safe just because it’s popular on GitHub. You need active monitoring, credential rotation, and a clear understanding of your software bill of materials (SBOM).
How to Protect Your Business Moving Forward
If you’re reading this and thinking, "We use AI, but I have no idea if we use LiteLLM," you’re in the majority. Here is the roadmap for SMBs to navigate this new threat landscape:
1. Audit Your AI Stack
Ask your IT team or your external developers a simple question: "What libraries are we using to connect to AI models?" If LiteLLM or Trivy are on that list, you need an immediate audit of the versions being used.
2. Rotate Your API Keys
If there is even a 1% chance your proxy was compromised, your API keys for OpenAI, Anthropic, Google, and any other provider should be considered compromised. Rotate them immediately. It’s a minor inconvenience that prevents a major disaster.
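While you rotate, it's also worth checking that no old keys are hard-coded in source files. Here is a quick sketch that scans a codebase for likely key material. The regex patterns are illustrative assumptions (OpenAI-style keys currently begin with "sk-" and Google API keys with "AIza", but formats change over time), so treat any hit as a lead, not proof.

```python
import re
from pathlib import Path

# Illustrative patterns only -- provider key formats change over time.
KEY_PATTERNS = {
    "openai-style": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}"),
    "google-style": re.compile(r"\bAIza[A-Za-z0-9_-]{30,}"),
}

def scan_for_keys(root: str) -> list[tuple[str, str]]:
    """Return (file, pattern-name) pairs for Python files that appear
    to contain hard-coded API keys."""
    hits = []
    for path in Path(root).rglob("*.py"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than crash the scan
        for name, pattern in KEY_PATTERNS.items():
            if pattern.search(text):
                hits.append((str(path), name))
    return hits
```

Anything this flags should move into a secrets manager or environment variable, and the exposed key should be rotated regardless.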
3. Implement a Security Framework
You don't have to reinvent the wheel. Adhering to established frameworks like NIST CSF 2.0 provides a structured way to identify, protect, detect, and respond to these kinds of supply chain threats. It’s the "gold standard" for a reason.
4. Monitor GitHub Workflows
If your company develops its own software, ensure your GitHub Actions and workflows are locked down. Use "least privilege" tokens that expire quickly rather than long-lived personal access tokens.
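As a concrete starting point, GitHub Actions lets you default the workflow's built-in GITHUB_TOKEN to read-only at the top of the workflow file. The job name and build step below are placeholders; the permissions key itself is GitHub's documented mechanism.

```yaml
# .github/workflows/ci.yml -- least-privilege sketch.
# Default the GITHUB_TOKEN to read-only for every job; grant write
# scopes per job only where a step genuinely needs them.
permissions:
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/build.sh   # placeholder build step
```

Pair this with short-lived, fine-grained tokens for any service accounts, and an attacker who steals a credential gets hours of narrow access instead of months of broad access.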

The B&R Take: Efficiency vs. Security
We love AI. We use it, we consult on it, and we help our clients implement it to stay competitive. But we also know that the "Wild West" era of AI development is coming to a close. The LiteLLM hack is a signal that professional hackers have moved into the AI space in a big way.
You don't need to stop using AI; you just need to stop using it blindly.
Managed IT and cybersecurity isn't just about fixing broken printers anymore: it's about understanding the deep, interconnected web of code that powers your business. Whether it’s protecting you from the latest supply chain attack or helping you build a secure AI roadmap, B&R Computers is here to make sure your technology is an asset, not a liability.
Don't wait for the next "95 million" headline to include your company's name. Let's get your security posture where it needs to be.
Want to make sure your AI tools aren't leaving the back door open?
Book a BRC Cyber Strategy Session today and let’s take a look under the hood of your technology stack. Whether you’re in Allentown, Reading, or anywhere in between, we’ve got your back.
