There’s a particular kind of anxiety spreading through senior engineering teams right now, one that doesn’t announce itself loudly so much as show up in the way experienced architects go quiet during AI discussions, or pivot a little too quickly toward risk framing. It’s the fear of a specific kind of obsolescence: not being outpaced by a machine, but being outpaced by a junior developer holding one.
I believe that fear deserves a serious answer, not just reassurance. So, I wanted to talk about it.
The Abstraction Ladder
Software engineering has always progressed by climbing levels of abstraction, and the pattern at each rung has been remarkably consistent.
When we moved from writing Assembly to writing C, the underlying activity didn’t change: we were still translating human intent into machine-executable instructions. What changed was the distance between the idea and its manifestation, with C letting us think in closer proximity to the problem and further from the hardware. From C to higher-level languages, from raw SQL to ORMs, from imperative to declarative, from hand-rolled infrastructure to cloud primitives, each jump compressed the distance between concept and execution, and each jump triggered the same anxiety: will this replace us?
It never did, though. Not the good ones, anyway.
AI-assisted development is the next rung, and the abstraction shifts from “write this code” to “design the task such that an AI can write the code correctly.” That’s a meaningful shift, and worth looking carefully at in terms of who it actually threatens.
The Craftsman Test
When power tools became widely available, there was genuine concern that apprentices wielding them would replace master craftsmen, and what actually happened was more precise than a simple yes or no: power tools replaced mediocre craftsmen, the ones whose primary advantage was speed or physical output. The masters, the ones carrying deep knowledge of materials, joinery, proportion, and structural integrity, found their leverage increased because the tool closed the gap on execution while being entirely unable to close the gap on judgment.
The same dynamic is playing out now. AI is a power tool for code generation, and a junior developer with a good AI assistant is genuinely more capable than one without, specifically on isolated, well-specified, low-context tasks: the kind that were never really what senior engineers were being paid for anyway.
The architect, the person who carries the system in their head and reasons across NFRs and integration surfaces and failure modes and organizational constraints, isn’t competing on that terrain and never was.
One intent. Many hands.
I ran into a clean illustration of this while building Tekhton, an agentic pipeline project I’ve been developing as both a practical tool and a portfolio artifact. Watching the pipeline work through successive changes to a codebase, I noticed that the broader the change, the more the outputs drifted from the original architectural intent, in the same way a large human-authored refactor can quietly violate assumptions that were never written down anywhere. No prompt had asked me to look for that pattern; I accumulated observations across multiple runs, connected them to something I recognized from years of watching human teams work, and introduced a mechanism for tracking architectural drift as changes occurred. That insight didn’t come from the pipeline. It came from someone who had seen the problem before in a different form and recognized its shape when it appeared in a new one.
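To make "tracking architectural drift" concrete, here is a minimal, hypothetical sketch of one form such a mechanism could take; it is not Tekhton's actual implementation, and all names in it are illustrative. The idea is to write down one piece of architectural intent, allowed dependency directions between layers, so that a broad change which quietly violates it becomes mechanically detectable:

```python
# Hypothetical sketch: making one slice of architectural intent checkable.
# Declares which layers may depend on which, then flags any import that
# crosses a boundary the rules forbid. Layer names are illustrative.
ALLOWED = {
    "api": {"service"},         # api may depend on service
    "service": {"repository"},  # service may depend on repository
    "repository": set(),        # repository depends on no internal layer
}

def layer_of(module: str) -> str:
    """Map a dotted module path to its top-level layer name."""
    return module.split(".")[0]

def drift_violations(imports: list[tuple[str, str]]) -> list[str]:
    """Return a description of each (importer, imported) pair that
    crosses a layer boundary not permitted by ALLOWED."""
    violations = []
    for importer, imported in imports:
        src, dst = layer_of(importer), layer_of(imported)
        if src != dst and dst not in ALLOWED.get(src, set()):
            violations.append(f"{importer} -> {imported} crosses {src} -> {dst}")
    return violations

# A refactor that makes the repository layer reach back into the
# service layer is exactly the kind of unwritten-assumption violation
# that drifts in quietly during broad changes:
print(drift_violations([
    ("api.users", "service.accounts"),         # allowed
    ("repository.users", "service.accounts"),  # drift
]))
```

The point isn't the import checker itself; it's that someone had to recognize "assumptions that were never written down anywhere" as a failure mode worth encoding before any rule like `ALLOWED` could exist.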
AI doesn’t change what a great architect is. It just makes it harder to fake being one.
That last point is the uncomfortable one. Not every senior title reflects senior thinking, and for those running on tenure rather than judgment, the threat is real. For the rest, your natural habitat just became the most valuable real estate in the field.
The Bottleneck Shifts, It Doesn’t Disappear
There’s a version of AI optimism that goes roughly: eventually the models will be good enough that anyone with a vague idea can produce a perfect artifact. This view is widely held, but it misses something fundamental about the nature of the communication problem.
Programming has always been a lesson in precision; to get an exacting output, you must supply an exacting input. The compiler doesn’t infer your intent, it executes your instruction. AI models are meaningfully more forgiving, but they operate on the same fundamental dependency: the quality of the output is bounded by the quality of the specification. Garbage in; garbage out. That law didn’t retire with the transformer architecture.
A fair counterargument is that models are getting genuinely better at inferring intent from imprecise input, asking clarifying questions, making assumptions explicit, and iterating toward what you meant rather than what you literally said, and this is real progress that does erode the GIGO problem at the margins. The rigorous version of the argument, rather than the merely reassuring one, comes in the next step.
Ambiguity Is Not a Bug in the System
The improvement of AI at resolving imprecision is a limit problem, something you can approach asymptotically but never actually reach, and the reason is that ambiguity isn’t a technical obstacle waiting to be engineered away. It’s a fundamental property of language, intent, and human cognition itself, present not because we’re imprecise communicators but because ideas often aren’t fully formed when we begin to express them. The act of articulation is itself part of the thinking, and you cannot resolve what hasn’t yet cohered.
No model, however capable, can infer an intent the human hasn’t yet had. It can surface the ambiguity, ask the right questions, make the gap visible; it cannot collapse it from the outside. That work belongs to the human.
A concrete version of this shows up in UX work, where I’ve repeatedly had the experience of looking at a screen that was technically correct, meeting every specified requirement, and knowing immediately that something was wrong before being able to say what. The information hierarchy felt off, or the navigation pulled attention in the wrong direction, or the real estate allocation communicated the wrong priorities to the user. The ambiguity wasn’t in the spec, because no spec had captured the feeling of moving through the interface as a person rather than as a process. Resolving it required experiencing it first, sitting with the discomfort of something being not quite right, and then doing the work of translating that felt sense into precise, actionable changes. That’s a loop that begins in a place no prompt can reach.
The more obvious the ambiguity, the easier it is to name and specify away. The subtler the ambiguity, the greater the challenge, and the greater the skill required to resolve it.
This is the asymptote that matters. As AI gets better at handling rough input, the value of truly precise thinking doesn’t decrease, it concentrates, because the floor rises for everyone while the ceiling on clear articulation stays where it always was, having never been about tools in the first place.
What this means practically is that the bottleneck doesn’t disappear, it shifts upward, and the person who can sit at the ambiguity frontier, pulling structure out of chaos and externalizing a mental model with enough fidelity that something else can execute against it accurately, compounds in value as everything below them gets automated.
What This Actually Requires
None of this is passive, and the abstraction advantage only activates through engagement.
The architects who will thrive are those who get hands-on with the tools, who learn to decompose a system design into AI-executable tasks, who develop the instinct for when a model’s output is architecturally wrong and why, and who understand how to constrain a prompt the same way they’d constrain a system boundary. The ones watching from a safe distance, delegating “the AI stuff” to junior team members while continuing to design systems the way they always have, are letting an advantage decay before they’ve ever learned to use it.
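What "constrain a prompt the same way they’d constrain a system boundary" might look like in practice is worth a sketch. This is a hypothetical illustration, not a real tool’s API; every field and name is an assumption. The shape of the idea is that an AI-executable task, like a well-designed interface, states its goal, its permitted surface area, its invariants, and its acceptance criteria explicitly:

```python
# Hypothetical sketch: treating an AI task like a bounded interface.
# All fields and names are illustrative, not any real tool's API.
from dataclasses import dataclass, field

@dataclass
class AITask:
    goal: str               # the single outcome this task must produce
    touches: list[str]      # files or modules the AI may modify
    invariants: list[str]   # properties the change must not break
    acceptance: list[str]   # checks that define "done"
    forbidden: list[str] = field(default_factory=list)  # explicit non-goals

    def to_prompt(self) -> str:
        """Render the constraints as an explicit, reviewable prompt."""
        lines = [
            f"Goal: {self.goal}",
            "Modify only: " + ", ".join(self.touches),
            "Preserve: " + "; ".join(self.invariants),
            "Done when: " + "; ".join(self.acceptance),
        ]
        if self.forbidden:
            lines.append("Do not: " + "; ".join(self.forbidden))
        return "\n".join(lines)

task = AITask(
    goal="Add pagination to the orders endpoint",
    touches=["api/orders.py"],
    invariants=["response schema stays backward compatible"],
    acceptance=["existing tests pass", "page size capped at 100"],
    forbidden=["changing the repository layer"],
)
print(task.to_prompt())
```

Writing the `invariants` and `forbidden` lists is exactly the judgment work the essay describes: it requires already carrying the system in your head.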
The mental model you’ve spent a career building is the asset. AI is the new leverage on that asset, and leverage requires contact.
The Frontier
A frontier isn’t a destination; it’s the edge of what’s currently mappable, and as AI extends the reach of execution it simultaneously pushes the frontier of what requires human judgment further out, not away from us but ahead of us.
The ambiguity frontier is where the interesting work will always live: at the boundary between what can be made precise and what hasn’t been thought through yet, between the artifact that exists and the vision that hasn’t fully cohered, between the system that runs and the one that should have been built instead.
For the people who’ve spent their careers learning to think at that level, that’s not a threat.
That’s a job description.
If this resonated, or if you think I got something wrong, I’d genuinely like to hear it: Come find me at LinkedIn.com/in/geoffgodwin or GitHub.com/geoffgodwin