CVE-2026-42208
BerriAI LiteLLM contains a SQL injection vulnerability that allows an attacker to read data from the proxy's database and potentially modify it, leading to unauthorised access to the proxy and the credentials it manages.
Analysis
LiteLLM, a popular AI gateway used to proxy LLM APIs, contains a critical SQL injection vulnerability in its API key validation logic. An unauthenticated attacker can exploit this to read or modify the proxy database, potentially compromising all managed LLM credentials. This vulnerability is currently being exploited in the wild, and users should update to version 1.83.7 immediately.
Severity
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
CWE-89 (SQL Injection)
CISA KEV
Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.
EPSS
Affects
litellm:litellm
Technical description
LiteLLM is a proxy server (AI Gateway) for calling LLM APIs in OpenAI (or native) format. From version 1.81.16 to before version 1.83.7, a database query used during proxy API key checks interpolated the caller-supplied key value into the query text instead of passing it as a bound parameter. An unauthenticated attacker could send a specially crafted Authorization header to any LLM API route (for example POST /chat/completions) and reach this query through the proxy's error-handling path. An attacker could read data from the proxy's database and may be able to modify it, leading to unauthorised access to the proxy and the credentials it manages. This issue has been patched in version 1.83.7.
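To illustrate the vulnerability class (CWE-89) rather than LiteLLM's actual code, the sketch below contrasts a key check that interpolates the caller-supplied value into the query text with one that binds it as a parameter. The table name, column names, and key values here are invented for the example; only the injection pattern mirrors the advisory.

```python
# Hypothetical sketch of CWE-89 in an API key check, using sqlite3
# for self-containment. Schema and names are assumptions, not LiteLLM's.
import sqlite3

def setup_db() -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE verification_tokens (token TEXT, user_id TEXT)")
    conn.execute("INSERT INTO verification_tokens VALUES ('sk-real-key', 'admin')")
    return conn

def check_key_vulnerable(conn: sqlite3.Connection, api_key: str):
    # VULNERABLE: the caller-supplied key becomes part of the SQL text,
    # so a crafted Authorization header value can rewrite the query.
    query = f"SELECT user_id FROM verification_tokens WHERE token = '{api_key}'"
    return conn.execute(query).fetchone()

def check_key_fixed(conn: sqlite3.Connection, api_key: str):
    # FIXED: the key travels as a bound parameter; the driver never
    # interprets its contents as SQL.
    query = "SELECT user_id FROM verification_tokens WHERE token = ?"
    return conn.execute(query, (api_key,)).fetchone()

conn = setup_db()
payload = "' OR '1'='1"  # classic tautology; real attacks would exfiltrate data

print(check_key_vulnerable(conn, payload))  # ('admin',) -- check bypassed
print(check_key_fixed(conn, payload))       # None -- injection neutralized
```

The fix in 1.83.7 corresponds to the second pattern: the key value is passed to the database driver separately from the query text, so its contents can no longer change the query's meaning.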