Recently, Anthropic demonstrated with Project Glasswing how AI can uncover vulnerabilities in software that have gone unnoticed for years. Not just finding them, but analyzing them and translating them into concrete ways they could be exploited in cyberattacks.
More background: https://www.anthropic.com/glasswing
The potential impact is huge.
Not just for security teams, but for how your processes and systems are structured.
The problem is often in your processes, not your code
In practice, vulnerabilities rarely start with “bad code.”
They usually start with:
- Excel files functioning as systems
- Manual handoffs between tools
- Logic that exists only in someone’s head
- Integrations that were set up quickly and never revisited
Inefficient? Yes. But also vulnerable.
Because the less control and visibility you have, the harder it becomes to detect risks or respond quickly when something goes wrong.
AI is changing the speed of the game
Where it used to take significant time and expertise to find vulnerabilities, AI lowers that barrier dramatically.
That means more vulnerabilities are being discovered in less time, without requiring deep expertise. (In other words: almost anyone can do it.)
Within Project Glasswing, vulnerabilities were even found in major operating systems and browsers, issues that had been hidden in code for decades.
The biggest shift isn’t just what is possible, but how fast it happens.
The time between discovery and exploitation is shrinking fast.
And that shifts security from a control problem to a speed and response problem.
The other side of the same story
That same development also works in your favor.
AI is extremely good at:
- Analyzing existing systems
- Detecting inconsistencies
- Exposing weak points in processes and data flows
But this only works if your foundation is somewhat in order.
If your processes are fragmented and logic is scattered everywhere, it becomes much harder to turn those insights into action.
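To make that concrete: much of this "detecting inconsistencies" work can start with something very simple, like automatically cross-checking two exports of the same data. A minimal sketch, in which the systems, record IDs, and fields are all invented for illustration:

```python
# Hypothetical sketch: an automated consistency check between two systems.
# The systems ("billing", "fulfilment"), IDs, and amounts are made up.

# Orders as exported from a billing system
billing = {
    "A-100": {"amount": 250.0},
    "A-101": {"amount": 99.0},
    "A-103": {"amount": 40.0},
}

# The same orders as known to the fulfilment system
fulfilment = {
    "A-100": {"amount": 250.0},
    "A-101": {"amount": 95.0},   # amount differs between systems
    "A-102": {"amount": 10.0},   # missing on the billing side
}

def find_inconsistencies(a: dict, b: dict) -> list[str]:
    """Return human-readable descriptions of mismatches between two exports."""
    issues = []
    for key in sorted(a.keys() | b.keys()):
        if key not in a:
            issues.append(f"{key}: only in second system")
        elif key not in b:
            issues.append(f"{key}: only in first system")
        elif a[key] != b[key]:
            issues.append(f"{key}: records differ ({a[key]} vs {b[key]})")
    return issues

for issue in find_inconsistencies(billing, fulfilment):
    print(issue)
```

The point is not the code itself but the prerequisite: a check like this is only possible once both systems expose their data in a comparable, centralized form. If the "real" data lives in scattered spreadsheets, there is nothing to compare.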
Security is part of how you design your systems
What we’re seeing more and more is that security isn’t something you “add later.”
It’s embedded in:
- How processes are structured
- How systems communicate
- Where decisions and logic live
That also means improvements often don’t start with a security tool, but with redesigning a process.
Smaller. Clearer. More consistent. And often faster and more efficient as a result.
Starting small beats planning big
Instead of launching large, complex initiatives, it’s often more effective to make things concrete:
- Pick one process
- Make it transparent
- Remove manual steps
- Centralize data and logic
Not as a final solution, but as a first step toward more control and speed.
Why this matters now
AI in cybersecurity shows one thing clearly:
The bar is getting lower for attackers and higher for system owners. And system owners have far less time to respond.
That makes it critical to look beyond tools and measures, and instead focus on how your organization and systems actually operate.
Where are the blind spots?
Where are you dependent on manual work?
Where does logic exist without being properly documented?
Those are often the exact places where both risk and opportunity live.
Back to basics
AI makes it easier to analyze systems and improve them, but also to attack them.
The difference is no longer just in technology. It’s in how fast and how well you have your fundamentals in place.
And that usually doesn’t start with something big.
It starts with one process that simply works the way it should.