Familiar Security Failures, AI Acceleration
The first wave of AI security writing focused on model behavior. Prompt injection. Unsafe outputs. Data leakage through chat interfaces. Those risks are real, and they predate the current moment, but it is now clear they are not the whole problem. Long-standing security weaknesses are being amplified by the speed of AI adoption.
The recent TeamPCP supply-chain campaign, which hit Trivy and LiteLLM and led to follow-on compromises of Checkmarx GitHub Actions and downstream projects, points to something more concrete. A separate Langflow exploitation event reinforced the same operational lesson from another angle. These tools sit in privileged positions: they hold credentials, route requests, or execute workflow logic on behalf of other systems.
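The "privileged position" point can be made concrete with a minimal sketch. This is a hypothetical illustration, not LiteLLM's or any real tool's code: a request-routing proxy that alone holds provider credentials, so every secret and every request flows through one component that an attacker would love to compromise.

```python
import os

class LLMProxy:
    """Hypothetical sketch of a request router in a privileged position.

    The proxy, not its callers, holds the provider API keys, and every
    request passes through it, so compromising this one component exposes
    all credentials and all traffic.
    """

    def __init__(self):
        # Credentials live here, invisible to callers. The env var names
        # and "sk-demo" fallbacks are illustrative placeholders.
        self.credentials = {
            "openai": os.environ.get("OPENAI_API_KEY", "sk-demo"),
            "anthropic": os.environ.get("ANTHROPIC_API_KEY", "sk-demo"),
        }

    def route(self, provider: str, prompt: str) -> dict:
        # A single tampered function here would see every key and prompt.
        key = self.credentials[provider]
        return {
            "provider": provider,
            "auth": key[:4] + "...",  # only a redacted hint leaves the proxy
            "prompt": prompt,
        }

proxy = LLMProxy()
print(proxy.route("openai", "summarize this document"))
```

The sketch is deliberately trivial; the point is structural. Any dependency pulled into a component like this inherits its privileges, which is why a supply-chain compromise of a scanner, proxy, or CI action is worth far more to an attacker than a compromise of ordinary application code.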

