More AI Agents Usually Make the Workflow Worse
-
Taylor Brooks - 26 Apr, 2026
My current contrarian take on AI workflows is pretty simple.
Most people do not need more agents.
They need a better control layer.
I keep seeing the same pattern. A workflow feels messy, so the answer becomes adding another model, another prompt chain, another autonomous step, another clever routing layer. It looks like progress because the diagram gets more impressive. In practice, the system usually gets harder to trust.
That has been the big lesson for me building with AI every day.
When a workflow is already shaky, adding more intelligence on top rarely fixes the real problem. It just gives the confusion more places to hide.
The real bottleneck is usually coordination
Most failures I run into are not about raw model capability.
They are about handoffs.
A step runs too early. A tool gets the wrong input shape. A model returns something technically valid but useless for the next step. Nobody is totally sure which part is responsible, so the default move is to bolt on another layer and hope the system smooths itself out.
That move feels modern. I think it is usually wrong.
I wrote recently about AI demos not surviving random Tuesdays. This is the same problem from a different angle. The workflow does not get stronger because it has more moving parts. It gets stronger when the moving parts are easier to inspect.
More agents can be a trap
I am not anti-agent. I use Claude and ChatGPT constantly. I think agentic workflows are real. I also think a lot of people reach for them before they have earned the complexity.
If one agent cannot do useful work inside a clear sequence, five agents probably will not save you.
They might make the output look smarter in a demo. They might even improve the happy path. But they also create more state, more retries, more weird edge cases, and more places where responsibility gets blurry.
That is a bad trade unless the underlying workflow is already solid.
Even Anthropic’s guide to building effective agents makes a similar point in practice. Start with simple patterns. Add complexity when it is justified. That advice gets ignored because simple systems do not sound exciting.
What I want instead
I want a boring control layer.
I want to know (see the sketch after this list):
- what triggered the workflow
- what each step received
- what each step produced
- where a failure happened
- what should happen next if something breaks
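As a rough sketch of what that boring control layer could look like: plain Python, no framework, with hypothetical step names ("fetch", "summarize") and a made-up trigger payload that stand in for whatever your workflow actually does.

```python
import json
import logging
from dataclasses import dataclass
from typing import Any, Callable

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("workflow")

@dataclass
class StepResult:
    step: str
    ok: bool
    output: Any
    error: str | None = None

def run_workflow(
    trigger: dict,
    steps: list[tuple[str, Callable[[Any], Any]]],
) -> list[StepResult]:
    """Run steps in order, recording what each one received and produced."""
    log.info("triggered by: %s", json.dumps(trigger))
    results: list[StepResult] = []
    payload: Any = trigger
    for name, fn in steps:
        log.info("step %s received: %r", name, payload)
        try:
            payload = fn(payload)
        except Exception as exc:
            # Stop at the first failure so responsibility stays unambiguous.
            log.error("step %s failed: %s", name, exc)
            results.append(StepResult(name, False, None, str(exc)))
            break
        log.info("step %s produced: %r", name, payload)
        results.append(StepResult(name, True, payload))
    return results

# Hypothetical usage: two trivial steps standing in for real work.
results = run_workflow(
    {"id": 42, "source": "webhook"},
    [
        ("fetch", lambda t: f"ticket {t['id']} body"),
        ("summarize", lambda body: body.upper()),
    ],
)
```

Stopping at the first failure is a deliberate design choice here. Retrying and routing around errors is exactly the kind of smoothing that lets confusion hide.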
That is the part I trust.
For me, that usually means keeping the workflow visible in a repo like GitHub, making the steps explicit, and resisting the urge to hide sloppy process behind smarter prompts.
If the sequence is unclear, I try to fix the sequence.
If the handoff is weak, I try to fix the handoff.
If the output is inconsistent, I try to tighten the contract before I add another model to clean it up.
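To make "tighten the contract" concrete, here is a minimal sketch of a typed boundary between two steps. The field names (ticket_id, category, confidence) are invented for illustration, not from any real system; the point is that the handoff validates before the next step runs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TriageResult:
    """Contract for what the triage step must hand to the next step."""
    ticket_id: int
    category: str
    confidence: float

    def __post_init__(self) -> None:
        allowed = {"billing", "bug", "feature", "other"}
        if self.category not in allowed:
            raise ValueError(f"category {self.category!r} not in {allowed}")
        if not 0.0 <= self.confidence <= 1.0:
            raise ValueError(f"confidence {self.confidence} outside [0, 1]")

def parse_triage(raw: dict) -> TriageResult:
    # Fail here, at the handoff, instead of three steps later.
    return TriageResult(
        ticket_id=int(raw["ticket_id"]),
        category=str(raw["category"]).strip().lower(),
        confidence=float(raw["confidence"]),
    )
```

When a model returns something technically valid but useless, this is where it surfaces, with the blame attached to the right step.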
This sounds less ambitious than building a swarm of agents.
I think it is actually more ambitious because it forces you to understand the work.
The thing I have started watching for
When someone shows me an AI workflow now, the main thing I ask is not how smart the model is.
I am asking whether the control layer makes sense.
Can a normal person trace the job from start to finish?
Can they tell what failed without turning the whole thing into a detective story?
Can they change one step without breaking three others?
If the answer is no, I do not think the main problem is model quality.
I think the system design is doing too much improvising.
That is why I am increasingly skeptical of workflows where the fix for every rough edge is “add another agent.”
Sometimes the grown-up answer is less magic.
Fewer agents. Better boundaries. Clearer steps. More boring control.
That is usually the version that survives real work.