Familiar Security Failures, AI Acceleration

The first wave of AI security writing focused on model behavior. Prompt injection. Unsafe outputs. Data leakage through chat interfaces. Those risks are real, but it’s now clear they’re not the whole problem: the speed of AI adoption is exacerbating security failures that predate it.

The recent TeamPCP supply-chain campaign against Trivy and LiteLLM, with follow-on compromises of Checkmarx GitHub Actions and downstream projects, points to something more concrete. A separate Langflow exploitation event taught the same operational lesson from another angle. These tools sit in privileged positions: they hold credentials, route requests, or execute workflow logic on behalf of other systems.

The Loop is Closed. The Oversight is Not.

Adding a human review step to AI-assisted development is the right immediate response. The problem is what happens when organizations treat it as the destination.

security AI governance engineering agents

Musings

I remembered that I still have a website, and captured some observations since I last posted.

personal insight

Enjoying TypeScript

I remembered I like this stuff.

tech typescript

Yet Another Refresh

Finally landed some significant updates.

site