Can AI Replace Security Engineers? A Realistic Answer
What Happens When Security Stops Being a Task and Becomes a System
Over the past few months, something unusual has been happening.
Cybersecurity stocks, companies that should be thriving in a world of increasing threats, have taken a hit. The trigger? AI.
When Anthropic started showcasing more advanced capabilities in tools like Claude Code Security and hinted at upcoming models like Mythos, the market reacted fast. There were headlines about billions being wiped off cybersecurity companies almost overnight.
That reaction wasn’t really about the tools themselves. It was about the question those tools forced everyone to ask:
If AI can find vulnerabilities, write fixes, and automate large parts of security work… what happens to cybersecurity professionals?
It’s a fair question. But it leads to the wrong conclusion.
The Short Answer
No, AI will not replace security engineers. But it will fundamentally change what it means to be one.
And that distinction is where most people are getting it wrong.
What Just Happened (And Why It Matters)
What we’re seeing now is not incremental improvement. It’s a shift in capability.
Let’s look at what triggered the panic.
Anthropic released new capabilities where AI can:
Scan entire codebases
Identify complex vulnerabilities
Suggest and even generate fixes
Operate semi-autonomously
This isn’t theoretical. This is already happening.
Some reports suggest AI systems are now capable of performing 80–90% of cyberattack operations in certain scenarios.
At the same time, AI-powered tools are being positioned to defend systems just as aggressively.
That creates a strange dynamic: AI is accelerating both attackers and defenders. And that's where the confusion begins.
What Gets Automated
To understand what’s really happening, you have to be honest about something.
A large portion of cybersecurity work has always been repetitive.
Checking code for known patterns. Reviewing dependencies. Writing reports. Triaging alerts. These are important tasks, but they are also structured and predictable.
And that’s exactly what AI is good at.
If your day-to-day work mostly involves spotting known issues, following checklists, or producing standardised outputs, AI is going to take over a significant part of that workload.
Not because it’s smarter. But because it’s faster, more consistent, and doesn’t get tired.
That’s the uncomfortable reality many people are starting to feel.
What Remains Human
But cybersecurity was never just about finding vulnerabilities.
It’s about understanding context, making decisions, and managing risk.
And this is where things change.
AI can tell you that something is vulnerable. What it struggles with is deciding what actually matters.
It doesn’t truly understand business priorities. It doesn’t carry accountability. It doesn’t weigh tradeoffs the way a human does.
Should we fix this now or later?
What is the business impact?
What risk are we willing to accept?
These are not technical questions. They are judgment calls. And they remain human.
The same applies to adversarial thinking. Real attackers don’t follow neat patterns. They adapt, combine techniques, and exploit gaps in logic and process. AI can simulate this to some extent, but it still lacks the unpredictability and intent that human attackers bring.
And then there’s architecture. Designing secure systems is not about checking boxes. It’s about understanding constraints, anticipating future risks, and making decisions under uncertainty. That level of thinking is still very much human-driven.
Additionally, when something does go wrong, you cannot say, "The AI made the decision."
Organizations need accountability and ownership. And that still sits with humans.
The Role Is Changing
So if the work isn’t disappearing, what is actually happening?
The role is shifting. We are moving away from people doing security work manually, toward people designing systems that perform security work continuously.
Instead of being the person who runs a security review, you become the person who builds a system where security reviews happen automatically.
Instead of checking for issues, you define the rules that ensure those issues never make it into production in the first place.
This is the shift from security engineer to security system designer.
And it’s already happening. You can see this change in the types of work that are becoming more valuable.
There’s a growing need for people who can design how AI fits into security workflows. People who understand not just security, but how to structure systems that enforce it.
There’s also a new layer of governance. As AI becomes part of the decision-making process, questions around control, trust, and risk become more important. Who defines the rules? What data does the system rely on? What happens when it gets something wrong?
These are not problems you solve with more scanning tools.
They are problems you solve with better system design and oversight.
At the same time, human involvement doesn’t disappear. It shifts upward. Instead of doing everything manually, you guide the system, review critical decisions, and step in where judgment is required.
Here’s what that looks like.
1. AI Security Architect
Instead of doing reviews manually, you design systems that:
Enforce policies automatically
Integrate with AI tools
Continuously monitor and adapt
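To make "enforce policies automatically" concrete, here is a minimal sketch of a policy gate in Python. The scanner output format, severity levels, and threshold are all hypothetical, assumptions for illustration, not any real tool's API:

```python
# A minimal sketch of automated policy enforcement, assuming a scanner
# that returns findings as dicts with "id" and "severity" fields.
# The field names and thresholds here are illustrative, not a real API.

SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(findings, block_at="high"):
    """Return (allowed, blocking_findings) under the given policy.

    A finding blocks the change when its severity meets or exceeds
    the block_at threshold; anything below is logged but allowed.
    """
    threshold = SEVERITY_ORDER[block_at]
    blocking = [f for f in findings
                if SEVERITY_ORDER[f["severity"]] >= threshold]
    return (len(blocking) == 0, blocking)

# Example: one medium and one critical finding against a "high" policy.
findings = [
    {"id": "DEP-101", "severity": "medium"},
    {"id": "INJ-007", "severity": "critical"},
]
allowed, blocking = gate(findings)
print(allowed)                          # False: the critical finding blocks
print([f["id"] for f in blocking])      # ['INJ-007']
```

The point of the design is that the human writes the policy once (the threshold), and the system applies it on every change without anyone re-running a review by hand.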
This is exactly what tools like Claude Code enable.
2. Security Workflow/Orchestration Engineer
Your job is no longer:
“Did we check this code?”
Your job becomes:
“How do we build a system where insecure code never gets committed?”
This includes:
Defining workflows
Creating automation pipelines
Embedding security into development
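A pipeline like that can be sketched as a chain of checks that a commit must pass before it lands. The stage names and checks below are hypothetical placeholders; in practice each stage would wrap a real scanner or AI tool:

```python
# A hypothetical commit-gating pipeline: each stage inspects the diff
# and returns a failure reason, or None if the stage passes.
# Stage names and checks are illustrative examples only.

def check_secrets(diff):
    # Naive illustration: flag anything that looks like a hardcoded key.
    if "API_KEY=" in diff:
        return "possible hardcoded secret"
    return None

def check_dependencies(diff):
    # Placeholder for a dependency/vulnerability scan stage.
    return None

PIPELINE = [("secrets", check_secrets), ("dependencies", check_dependencies)]

def run_pipeline(diff):
    """Run every stage and collect failures; an empty list means
    the commit may proceed."""
    return [(name, reason) for name, check in PIPELINE
            if (reason := check(diff))]

print(run_pipeline('API_KEY="abc123"'))  # [('secrets', 'possible hardcoded secret')]
print(run_pipeline("print('hello')"))    # []
```

Wired into a pre-commit hook or CI job, a structure like this is what turns "did we check this code?" into "insecure code cannot get committed."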
3. AI Governance and Risk Specialist
As AI becomes part of security:
Who controls it?
What data does it access?
What decisions can it make?
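Those governance questions can themselves be expressed as code. The sketch below, with entirely hypothetical action names, shows one way to encode which actions an AI tool may take on its own and which require a named human approver:

```python
# A sketch of governance rules as code: which actions an AI security
# tool may take autonomously versus which require human sign-off.
# The action names and the split between sets are hypothetical.

AUTONOMOUS = {"open_ticket", "annotate_pr", "run_scan"}
HUMAN_REVIEW = {"merge_fix", "rotate_credentials", "block_deploy"}

def authorize(action, approved_by=None):
    """Decide whether an AI-initiated action may proceed."""
    if action in AUTONOMOUS:
        return True
    if action in HUMAN_REVIEW:
        return approved_by is not None  # accountability stays with a person
    return False  # unknown actions are denied by default

print(authorize("run_scan"))                           # True
print(authorize("block_deploy"))                       # False: needs a human
print(authorize("block_deploy", approved_by="alice"))  # True
```

Note the default: anything not explicitly classified is denied, which is exactly the kind of rule a governance specialist defines and owns.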
These are governance problems. And they are growing fast.
How to Get Ready
The real disruption is not that AI replaces people. It’s that it changes the baseline.
AI compresses the gap between average and expert.
Someone with decent security knowledge and strong systems thinking can now operate at a level that previously required years of experience. Not because they know more, but because they’ve learned how to leverage AI effectively.
Meanwhile, someone who relies only on manual processes, even with deep expertise, will start to fall behind.
That’s the shift. And it’s happening faster than most people expect. You don’t need to panic. But you do need to adjust how you think about your role.
The biggest mistake right now is staying focused on tasks. If you define your value by the work you do manually, you are tying yourself to the part of the job that is most likely to be automated.
The better approach is to start thinking in systems.
Instead of asking how to do a better security review, ask how to design a system where security reviews happen automatically and consistently.
Instead of focusing on tools, focus on how those tools connect, how they enforce rules, and how they scale beyond you.
And most importantly, invest in your ability to make decisions. Risk prioritisation, tradeoff analysis, and architectural thinking are becoming more valuable, not less.
Because those are the parts AI cannot easily replace.
Final Thought
The fear that AI will replace cybersecurity professionals is understandable. The speed of change is real. The capabilities are real. And the impact is already visible.
But the conclusion most people are drawing is too simplistic.
AI is not replacing security engineers. It is replacing how security work gets done.
And the people who understand that early won’t just adapt. They’ll be the ones defining what cybersecurity looks like in the next decade.
The market pricing in AI replacements for security engineers feels like selling your home insurance because you bought a really good lock.