When the scanner becomes the weapon: inside the TeamPCP supply chain attack
March 27, 2026 · 9 min read · Merakey Team
This week, a coordinated threat actor group known as TeamPCP did something that deserves attention from every organization that writes software. They did not break into a company. They did not phish an employee. They published malicious versions of three widely-used open-source tools to the Python Package Index (PyPI) and waited for developers to install them.
The three targets were LiteLLM, one of the most widely used Python libraries for integrating with AI model APIs like Claude, GPT-4, and Gemini; Trivy, an open-source container and filesystem vulnerability scanner used by thousands of development teams; and Checkmarx KICS, an infrastructure-as-code security scanner. Each one is a tool that security-conscious teams use deliberately. That is exactly the point.
What TeamPCP actually did
The attack combined two well-documented techniques: typosquatting and dependency confusion. Malicious package versions were uploaded to PyPI with names close enough to the legitimate packages that automated dependency resolution, CI/CD pipeline installs, and developer workflows would pull the compromised version without any explicit decision being made.
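One mitigation for this failure mode is an explicit pin check that compares what is actually installed against what the team intended to install. Below is a minimal sketch, assuming you maintain a pinned allowlist (in practice this would come from a lock file); the package name and version shown are illustrative, not the real compromised versions.

```python
from importlib import metadata

# Illustrative pin list -- in a real pipeline this comes from a lock file,
# ideally with hashes enforced via `pip install --require-hashes`.
PINNED = {"litellm": "1.0.0"}

def check_pins(pinned):
    """Return (name, installed, expected) for every pin that does not match."""
    mismatches = []
    for name, expected in pinned.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            continue  # not installed in this environment
        if installed != expected:
            mismatches.append((name, installed, expected))
    return mismatches
```

Hash-pinned installs close the same gap at install time rather than after the fact: a substituted or typosquatted artifact fails the hash check instead of resolving silently.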
Once installed, the packages deployed a multi-stage credential stealer. The word "multi-stage" matters here. The initial install appeared clean to standard static analysis. The malicious payload was activated by a secondary mechanism, in some cases a post-install hook, in others a delayed execution pattern triggered on specific import calls. Security scanners checking files at rest would not catch this. Detection required behavioral monitoring or runtime analysis at the point of execution.
The credential stealer targeted authentication tokens, API keys, and environment variables, the full set of secrets that a developer machine or CI environment typically holds. Multiple sources, including Sonatype, ReversingLabs, Arctic Wolf, and Palo Alto Networks, documented the attack mechanics in detail as it unfolded across the week.
Why security tooling is a better target than application code
There is a specific logic to targeting Trivy and Checkmarx KICS rather than a general-purpose library. Security tools are installed with elevated permissions because they need them. A vulnerability scanner has to read filesystems, inspect environment variables, and examine network configurations. A credential stealer delivered through a security scanner has immediate access to exactly the data it is looking for, without any privilege escalation required.
The attack surface created by trusting a security tool is larger than the attack surface created by trusting most application libraries. And the psychological effect compounds the technical one: developers who see a tool listed as a security scanner are less likely to question whether it is itself a threat. Trivy is supposed to find problems. The assumption that it does not create them is exactly what TeamPCP exploited.
This is not a new insight. Researchers have documented the theoretical risk of weaponizing security tools for years. What TeamPCP demonstrated this week is that the theory is now operational practice, and that the security community's response time, however fast, is still measured in days while the attacker's installation window is measured in hours.
The LiteLLM angle: AI tooling as credential surface
The LiteLLM compromise deserves separate attention because of what it exposes specifically. LiteLLM is the abstraction layer that many AI-first applications use to route requests to multiple model providers through a single interface. Organizations building on top of Claude, GPT-4, or Gemini frequently rely on LiteLLM as the integration point between their application code and their AI infrastructure.
The credential stealer in the compromised LiteLLM versions was designed to target API keys in particular: the keys that authenticate your application to your AI model provider, your cloud infrastructure, and your data pipelines. A compromised LiteLLM install does not just leak credentials in the abstract. It exposes the keys to AI-powered production systems, keys that typically carry broad permissions and that rotate slowly because rotating them requires updating every service that depends on them.
This is the second high-profile AI library to be targeted in a supply chain attack in the past year. As AI-first development workflows become standard and organizations manage API keys across multiple model providers, cloud services, and data infrastructure, the credential surface that attackers can reach through AI tooling continues to grow. LiteLLM is a signal, not an outlier.
What defenders caught and what they missed
The security industry's response to the TeamPCP campaign was substantive. GitGuardian published a case study demonstrating how secret scanning would have detected the credential extraction. Palo Alto Networks broke down the Trivy attack mechanics. Arctic Wolf issued a comprehensive threat assessment covering all three packages. ReversingLabs traced the malicious versions back to the threat actor group. The documentation is thorough and the detection vendors are legitimate.
But the detection story has a significant gap. The malicious packages were live on PyPI before any registry-level block was applied. Organizations that had already installed the compromised versions received no automatic notification. The burden of determining whether your environment was affected fell entirely on your own team: audit your CI logs, check which package versions your pipelines pulled, scan for signs of credential extraction, rotate potentially compromised keys. For a small engineering team, that is a meaningful incident response exercise triggered by external reporting rather than internal detection.
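The CI-log audit step can be sketched in a few lines. This assumes pinned installs appear in pip output as `Collecting name==version` lines (unpinned installs log differently), and the bad-version list here is invented for illustration; real indicators came from the vendor advisories.

```python
import re

# Hypothetical indicator list -- real known-bad versions come from
# the published advisories, not from this sketch.
KNOWN_BAD = {("litellm", "9.9.9"), ("trivy", "0.61.0")}

COLLECT_LINE = re.compile(r"Collecting (?P<name>[A-Za-z0-9_.\-]+)==(?P<version>\S+)")

def affected_installs(pip_log: str):
    """Return (name, version) pairs from a pip log that match known-bad versions."""
    return [
        (m["name"], m["version"])
        for m in COLLECT_LINE.finditer(pip_log)
        if (m["name"].lower(), m["version"]) in KNOWN_BAD
    ]
```

Even a crude script like this turns "were we affected?" from a multi-day archaeology exercise into a grep over retained CI logs, which is the argument for keeping those logs in the first place.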
The difference between an organization that knew it was in the blast radius within hours and one that found out weeks later came down to whether they had the monitoring infrastructure in place before the attack, not after it.
Where SBOM practices actually matter
This attack is a concrete illustration of why Software Bill of Materials documentation is a practical security tool, not a compliance checkbox. An organization with an accurate, current SBOM knows exactly which version of LiteLLM its production systems are running at any given point. The moment a malicious version is publicly identified, the check is immediate: are we running that version? If yes, when did we install it? What systems were affected?
Organizations without SBOM practices start from scratch. They are pulling logs, tracing pipeline executions, and manually checking package lock files across multiple environments. The investigation is slower, the scope assessment is less reliable, and the window between compromise and containment is longer. Policy mandates for SBOM, which are advancing through both US federal procurement and Canadian government contracting discussions, are increasingly grounded in exactly this kind of practical incident response scenario.
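The "are we running that version?" check described above reduces to a lookup against the SBOM document. A minimal sketch, assuming a CycloneDX-style JSON file with a top-level "components" array (versions shown are illustrative):

```python
import json

def compromised_components(sbom_json: str, name: str, bad_versions: set):
    """Return (name, version) pairs in the SBOM matching a known-bad version."""
    sbom = json.loads(sbom_json)
    return [
        (c.get("name"), c.get("version"))
        for c in sbom.get("components", [])
        if c.get("name") == name and c.get("version") in bad_versions
    ]
```

With one SBOM per deployed environment, the same query answers the scope question (which systems, which install dates) that otherwise requires manual log tracing.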
The pattern TeamPCP is following
TeamPCP is not a new actor. The group has run multiple supply chain campaigns, and the Trivy, Checkmarx KICS, and LiteLLM operation fits a consistent profile: target developer tooling rather than end-user software, use the trusted context of that tooling to gain privileged access at install time, extract credentials for downstream compromise, and move on before the initial infection is widely detected.
The specific focus on tools used in security and AI workflows reflects a deliberate targeting logic. These tools sit at the intersection of broad adoption, elevated runtime permissions, and credential-rich environments. They are used by the exact teams most likely to push back on security alerts as false positives. And they are updated frequently, which means the window for a malicious version to be installed is tied to normal, expected developer behavior rather than a detectable anomaly.
The campaign is expanding, not winding down. The progression from Trivy to Checkmarx KICS to LiteLLM across a short window suggests an active operation testing the boundaries of PyPI registry detection and organizational response time. Treating this as a one-time event rather than an active campaign is the wrong posture.
What monitoring actually catches this class of attack
The TeamPCP campaign required a specific set of monitoring capabilities to catch before credentials were extracted. Static package name checking is necessary but not sufficient, since the malicious versions used names and metadata that passed initial review. Behavioral analysis at install time, specifically watching for post-install scripts and deferred execution patterns, catches the multi-stage loader before the payload activates. Secret scanning in CI environments detects when credentials are being read or transmitted outside normal application flows. And dependency graph monitoring, knowing not just what you have installed but when each version was pulled and from where, provides the forensic baseline for rapid scope assessment when a malicious version is identified.
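The last capability, the forensic baseline, is the simplest to start with. A minimal sketch of a dependency snapshot (a real system would also record the source index and the installing pipeline, which this sketch omits):

```python
import json
import time
from importlib import metadata

def snapshot_dependencies() -> str:
    """Serialize the installed package set with a UTC timestamp,
    giving a forensic baseline to check later advisories against."""
    packages = sorted(
        (dist.metadata["Name"], dist.version)
        for dist in metadata.distributions()
        if dist.metadata["Name"]  # skip broken installs with no name
    )
    return json.dumps({
        "taken_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "packages": packages,
    })
```

Archiving one snapshot per CI run is enough to answer "when did we first pull that version?" the day an advisory lands, without reconstructing history from logs.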
Sentinel is built around this monitoring model. The premise is that the open-source dependency graph is an active attack surface, not a trusted baseline, and that the security tooling layer is itself part of the attack surface, not exempt from it. The LiteLLM and Trivy compromises are textbook examples of the threat class Sentinel was designed to monitor: coordinated malicious packages targeting developer and security tooling, with credential extraction as the primary objective.
Existing scanners like Trivy and Snyk remain useful for vulnerability detection in application code. What Sentinel adds is a monitoring layer that treats those tools and the broader dependency graph as potential vectors, not just instruments. When the scanner becomes the weapon, you need something watching the scanner.
Related articles
The $7.4M question: what a healthcare data breach actually costs
IBM's Cost of a Data Breach Report puts the average healthcare breach at $7.4 million. We look at where the costs come from and what Canadian agencies should prioritize.
Self-hosted vs. cloud AI: why 43% of healthcare orgs are choosing local
Healthcare organizations are moving AI workloads off the cloud. Data sovereignty, PIPEDA compliance, and breach costs are driving the shift.
PIPEDA and AI chatbots: what healthcare organizations need to know in 2026
The Personal Information Protection and Electronic Documents Act applies the moment a chatbot touches patient data. Here is what PIPEDA actually requires.
See what Sentinel monitors in your stack
The TeamPCP campaign is still active. Sentinel monitors your open-source dependency graph, flags malicious package versions, and detects credential extraction before it becomes a breach. Request early access.