Malicious Code Found in Popular LiteLLM AI Package
- Malicious credential stealer discovered in LiteLLM version 1.82.8 on PyPI
- Compromised package targets local SSH keys, cloud credentials, and cryptocurrency wallets
- Attack linked to a security scanner exploit and unauthorized PyPI package publishing
A critical supply chain attack has struck LiteLLM, a widely used library designed to simplify interactions with various Large Language Models. In version 1.82.8, developers discovered a sophisticated credential stealer embedded in a litellm_init.pth file. Unlike conventional malicious packages, whose payload runs only when the victim imports the library or calls a function, code placed in a .pth file is executed by Python's site module at every interpreter startup. Once the package is installed, the payload runs automatically on the next Python invocation with no import required, making it particularly dangerous for automated environments and continuous integration pipelines.
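The .pth mechanism can be demonstrated in a few lines. This is a hypothetical sketch, not the actual malicious code: the file name, marker value, and directory are stand-ins, and Python's site.addsitedir() is used here to process the .pth file the same way the interpreter processes site-packages at startup.

```python
import os
import site
import tempfile

# Hypothetical demo of the .pth auto-execution mechanism (stand-in payload,
# not the real attack code). Python's site module exec()s any line in a
# *.pth file that begins with "import", which is why a file like
# litellm_init.pth can run code without the victim ever importing litellm.
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, "demo_init.pth"), "w") as f:
    # A real attack hides a long base64 blob here instead of this harmless marker.
    f.write('import sys; sys.modules.setdefault("_pth_demo_ran", sys)\n')

# addsitedir() scans the directory for .pth files and executes their
# import lines, exactly as site-packages is handled at interpreter start:
site.addsitedir(demo_dir)
print("_pth_demo_ran" in __import__("sys").modules)  # → True
```

Because the trigger lives in the site machinery rather than in the package's own modules, it survives even if the victim never touches the library's API.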
The data exfiltration targets a wide array of local secrets, including SSH keys, Git configurations, cloud provider credentials for AWS and Azure, and even local database passwords and cryptocurrency wallet files. Base64 encoding allowed the malicious code to slip past initial automated checks before being flagged and quarantined by the Python Package Index (PyPI). The breach effectively allowed attackers to harvest sensitive developer data directly from victims' machines.
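The obfuscation trick is simple to illustrate. The strings below are hypothetical stand-ins, not the recovered payload: the point is that a static scanner grepping package sources for suspicious paths like `~/.ssh` sees only an opaque blob, while the decoded code runs unchanged.

```python
import base64

# Hypothetical sketch of base64 obfuscation (stand-in payload, not the real one).
payload_source = 'targets = ["~/.ssh/id_rsa", "~/.aws/credentials", "~/.gitconfig"]'
blob = base64.b64encode(payload_source.encode()).decode()

assert "~/.ssh" not in blob            # the literal paths are no longer visible
exec(base64.b64decode(blob).decode())  # ...but the code executes unchanged
print(targets[0])  # → ~/.ssh/id_rsa
```

Encoding hides signatures only from naive pattern matching; decoding plus exec() at import time is itself a strong heuristic signal, which is how such payloads are eventually flagged.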
Investigations suggest the breach began with an exploit of Trivy, a security scanning tool used in LiteLLM's internal pipeline. By compromising the scanner, attackers likely obtained the publishing secrets needed to push releases to PyPI under the official LiteLLM account. The incident is a stark reminder of the fragility of the AI software ecosystem, where even security-focused tools can become vectors for high-impact supply chain attacks.