Let’s tackle the age-old question: can new technology fix broken or missing processes?
And then let’s add: do AI and AI agents change the answer you would give?

This is a question I recently debated with some friends, with a few AIs, and with myself. The context was of course cybersecurity, but I suspect some lessons apply more broadly.
Starting point: given all my experience in information security first and then cybersecurity (ha!), my default answer is “NO, technology cannot fix process failures.” It is my “head” answer (observation based on past data) but it also matches my “gut” answer (intuition based on my “lived” experiences).
But is this a correct answer in 2025?
Let’s have a debate.
Position: No New Technology Can Ever Fix a Broken/Missing Process
Technology implements a process; it is always the servant, with the process as the leader. This may not be factually true, but reality largely behaves like that.
People who automated a bad process ended up with a bad, automated process rather than an improvement. Sometimes they ended in a worse place. The tool faithfully and beautifully executes the underlying brokenness.
If your incident response process is a chaotic mess, buying a SOAR platform will just help you execute your chaotic mess at machine speed. You’ll send the wrong alerts to the wrong people, isolate the wrong machines, and create tickets in the wrong systems — all with breathtaking efficiency. The technology faithfully automates the underlying broken logic.
Many people who bought tools that support the implementation of a process — without actually having said process — ended up with tools sitting on the shelf (SIEM, SOAR, IT GRC, CSPM, etc.).
Some tools are designed to optimize a process and make it run faster, such as IT GRC for risk management. These tools largely do nothing if there is nothing to speed up or optimize.
People who purchased tools for which they were not ready in terms of process maturity — for example, tools that support threat hunting when they are barely logging — have found the tools unused and not delivering value (example).
Also, a broken process is rarely a technical problem. It’s a people, culture, and political problem. It’s about siloed teams that don’t communicate, a lack of accountability, and a culture of “that’s not my job.” No GRC platform, no matter how slick, can fix a business unit that refuses to own its security risks. Technology does not solve human problems. A broken process is often a symptom of a dysfunctional culture. The tool becomes a digital monument to a human-centric failure (good one, Gemini!).
Sometimes a tool is purchased to improve the process, but the old process wins: the most common outcome of a new technology purchase is not process transformation, but forcing the new tool to mimic the old, broken workflow. For example, buying a cutting-edge SIEM with AI and then using it for basic log searches and daily PDF reports, just like the old log management tool it replaced. “Process is gravity” strikes again!
Cherry on top: when the new tool — shackled by a broken process and a toxic culture — fails to deliver miracles, the organization blames … the tool (“The SIEM is generating too many false positives!”).
Position: New Technology May Improve a Broken Process or Replace a Missing Process
There are definite examples where a manual process and a tool-supported process are dramatically different. You can say that the manual process was impossible, but tools enabled a new process. This is a reality.
You simply cannot implement certain modern technologies without changing your process. For example, trying to secure a CI/CD pipeline with a Change Advisory Board that meets once a month is laughably impossible. Adopting DevSecOps tools forces you to embed security checks directly into the pipeline, thus shattering the old, slow process.
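To make the contrast concrete, here is a minimal, hypothetical sketch of what “embedding a security check directly into the pipeline” looks like in code: a gate that fails the build within minutes instead of waiting for a monthly board. The function and severity policy are illustrative assumptions, not any specific product’s API.

```python
# Hypothetical pipeline security gate: the "process" (what blocks a release)
# now lives inside the pipeline, not in a monthly CAB meeting.
BLOCKING_SEVERITIES = {"critical", "high"}  # illustrative policy, not a standard

def security_gate(findings):
    """Fail the pipeline stage if any scanner finding is at a blocking severity."""
    blockers = [f for f in findings if f["severity"] in BLOCKING_SEVERITIES]
    if blockers:
        return ("fail", blockers)  # pipeline stops here, within minutes
    return ("pass", [])

# Example: scanner output that would previously sit in a CAB queue for a month
findings = [
    {"id": "CVE-2025-0001", "severity": "high"},
    {"id": "LINT-42", "severity": "low"},
]
status, blockers = security_gate(findings)
```

The point of the sketch is the shift in where the decision lives: the check runs on every commit, so the old slow approval step has nothing left to approve.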
Perhaps technology can create a new operational reality, and the process has no choice but to adapt or be completely bypassed and ignored? The old way becomes untenable.
Outdated processes often survive because their brokenness is hidden. A new, superior technology can shine a harsh light on the inefficiencies. When a CSPM tool shows you have thousands of critical misconfigurations that your manual audit process missed for years, it provides the undeniable evidence — and political capital — needed to kill the old process and build a better, automated one (but humans change the process in this example, not the tools…)
There’s another case where tools shortcut the steps of a process, for example, where you have to do five things manually, and now you can do two and the tool does the rest. The tool essentially transforms a process.
A great process might be too complex for a junior analyst to follow. A new technology, like a well-designed SOAR platform with good playbooks (or AI SOC with dynamic ones), can encapsulate that best-practice process. The technology becomes the vessel for the best-practice process, transforming excellence from an artisanal craft into a reliable, industrialized output (thanks again, Gemini!)
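A rough sketch of what “the technology becomes the vessel for the process” means: the best-practice triage sequence is encoded as an ordered playbook, so a junior analyst gets the senior analyst’s steps by default. The step names and scoring logic below are toy assumptions, not a real SOAR product’s API.

```python
# Hypothetical triage playbook: the best-practice process is encoded as an
# ordered list of steps, so anyone running it follows the same sequence.

def enrich(alert):
    alert["enriched"] = True  # e.g., add threat intel, asset context
    return alert

def score(alert):
    alert["score"] = 90 if alert.get("enriched") else 10  # toy scoring rule
    return alert

def route(alert):
    alert["queue"] = "tier2" if alert["score"] >= 80 else "tier1"
    return alert

TRIAGE_PLAYBOOK = [enrich, score, route]  # the process, now living in the tool

def run_playbook(alert, playbook=TRIAGE_PLAYBOOK):
    for step in playbook:
        alert = step(alert)
    return alert
```

The design choice worth noting: the process is data (a list of steps), so improving it means editing the playbook once, not retraining every analyst.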
A good example is again security in CI/CD pipelines. Sometimes there is an elaborate process to achieve outcome X, but a dramatically improved tool can just give you X right away without the need for a process. In this sense, the tool replaces the process. So?
Conclusion
Upon reviewing the arguments, I am still voting “no” on whether technology can fix a broken/missing process. Ultimately, the process emerges victorious. Let me be clear: this is often a tragic, Pyrrhic victory, but it’s a victory nonetheless.
Process is gravity. Technology is an engine trying to achieve escape velocity. While a powerful engine can break free, gravity is relentless, constant, and it never gets tired. The moment the engine sputters — the project champion leaves, the budget gets cut, attention shifts to the next crisis — gravity pulls the shiny new technology right back down into the orbit of the old, comfortable way of doing things.
Also, inertia is the strongest force at many (most?) large organizations: an organization at rest will stay at rest. A broken process, for all its flaws, is a known quantity. People have built their habits, their little silos or vast empires, and their workarounds on top of it. A new tool requires effort, learning, and changing behavior. In a fight between “doing something new” and “doing what I did yesterday,” the latter wins, and wins often.
The Role of AI and AI Agents
The new question now is whether AI, and specifically AI agents, can put their “mechanical finger on the scale” and change the balance toward tools delivering the process.
I think the answer here is really interesting. For the impatient ones: In theory, agents can replace a process because you can ask an agent to plan a process, execute it, and then stick to it.
In theory.
What is real here? Do AI agents and agentic AI fundamentally change this balance, strengthening the “YES” side? Do they strengthen it enough for it to actually happen? In a battle of agentic AI vs organizational inertia, who wins?
So, while a traditional tool automates a task within a human-defined process, an AI agent can — in theory — create, refine and then automate the entire process, including the reasoning and decision-making. This is the crucial difference. An agent doesn’t just follow the old path; its entire purpose is to analyze the terrain and find the absolute fastest way to the destination, even if that means blazing a new trail.
“Automating Stupidity” risk — in theory — decreases as the AI can check for these risks and include them in process design: A simple script will automate a bad process without question. An AI agent, however, can be designed to reason about its goal. If you give it a goal that seems illogical or counterproductive based on its training, it can flag it or ask for clarification. The risk shifts from automating a bad process to the agent learning the wrong lessons or having a badly defined goal.
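A toy sketch of that “reason about the goal before executing it” step, to make the argument concrete. The sanity rules here are hardcoded stand-ins for an agent’s reasoning; everything in this snippet is an illustrative assumption, not a real agent framework.

```python
# Hypothetical goal-review step: before automating a goal, the agent checks it
# against sanity rules and asks for clarification instead of blindly executing.
# Hardcoded rules stand in for what would really be a model's reasoning step.
SANITY_RULES = [
    ("isolate all hosts", "would take the whole fleet offline"),
    ("disable logging", "removes the evidence the response process depends on"),
]

def review_goal(goal):
    """Return ('clarify', reason) for a suspect goal, else ('execute', None)."""
    for pattern, reason in SANITY_RULES:
        if pattern in goal.lower():
            return ("clarify", reason)
    return ("execute", None)
```

This is the shift the paragraph describes: a script would have run the bad instruction; the agent’s loop has a checkpoint where a bad goal can be caught — which also shows where the new risk lives, in the quality of the rules and the goal definition itself.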
“Paving the Cow Path” risk almost disappears: An AI agent is often designed for optimization. Forcing it to follow a clunky, inefficient, human-centric workflow is like trying to make a self-driving car obey the whims of a backseat driver who only trusts backroads. The agent will constantly try to find and use the most efficient path, which is almost always through an API, not a series of manual approvals.
However, two new risks stand in the way of answering “YES, with AI agents, technology CAN fix a broken or missing process”:
- Capability — everything we say above is theory; will it work in a (messy, legacy-infested) real world?
- Trust — assuming the above happens (it does work!), will people trust it?
While AI agents have — in theory — the raw power to obliterate old processes and build-then-automate new ones, they may mess up given wrong or incomplete data (common in “layered cake” legacy environments), and they introduce a new, far more potent, human-centric obstacle: trust.
Anyhow, ask me again in 2 years?
The post The Gravity of Process: Why New Tech Never Fixes Broken Process and Can AI Change It? appeared first on Security Boulevard.
Anton Chuvakin
Source: Security Boulevard
Source Link: https://securityboulevard.com/2025/09/the-gravity-of-process-why-new-tech-never-fixes-broken-process-and-can-ai-change-it/