AI Engineering Practice

Not the productivity-influencer version. The actual, somewhat-deflating version, from someone who ships every day.

By Tomiwa Folorunso · Published November 30, 2025 · 7 min read

Every six months for the last three years, someone has told me that AI is about to replace engineers. Every six months, I have shipped more code than the six months before. These two facts are related in a way that almost nobody who talks about AI on the internet describes honestly.

The thing that actually changed

The honest version is this: AI didn't make me faster at the hard parts. It made me faster at the boring parts, which freed me up to spend more time on the hard parts, which now feel harder than ever.

Two years ago, a typical day for me involved maybe forty minutes of staring at a problem and four hours of typing out things I already knew how to type. Today, it's the inverse. I spend most of my day staring at problems. The typing happens in bursts, mediated by a model that gets the shape right but almost never the details.

This sounds like a win, and it is. It's also exhausting in a way the old work wasn't.

What it didn't change

It didn't change the part where you have to decide what to build. It didn't change the part where you have to know whether the thing the model wrote is correct. It didn't change the part where you have to hold five interacting systems in your head and predict how they'll behave at 3am under load nobody simulated.

If anything, AI made all of these parts more important, because the cost of getting them wrong has gone down — and when cost goes down, volume goes up, and now you're reviewing four times as much code as before, all of it superficially plausible.

The model is very good at producing code that looks right. It is not yet good at producing code that is right. The gap between those two is where the entire job has migrated.

A small scene from last Tuesday

I was building a rate limiter. The model wrote it in fifteen seconds. The code compiled. The tests it generated passed. It looked, to a casual reader, like a perfectly reasonable rate limiter.

It was wrong in a specific way: it used wall-clock time for the window boundary, which meant that under sustained traffic at the edge of the limit, requests would cluster at the start of each second and the system would burst-fail in a pattern that looked random in monitoring.

I knew this would happen because I have, twice in my career, been on-call for the consequences of exactly this mistake. The model has not been on-call for anything. It has read a lot of rate limiter code. It has never had a rate limiter wake it up at 3am.

I rewrote the boundary logic. The model and I shipped the feature in an afternoon. Two years ago this would have been a two-day task. The afternoon felt harder than the two days used to.
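The failure mode in that scene can be sketched in a few lines. This is a minimal illustration, not the author's actual code: the class and parameter names are hypothetical, and the clock is injected so the burst is reproducible. A fixed-window limiter keyed to wall-clock seconds resets every counter at the same instant, so a client can spend its full budget just before a boundary and again just after it; a sliding window over the trailing interval does not have that edge.

```python
from collections import deque

class FixedWindowLimiter:
    """The buggy pattern: the window boundary is tied to the wall clock,
    so every counter resets at the top of each second and traffic
    clusters there."""
    def __init__(self, limit, clock):
        self.limit = limit
        self.clock = clock          # injected clock, for deterministic demos
        self.window = None
        self.count = 0

    def allow(self):
        now = int(self.clock())     # wall-clock second boundary
        if now != self.window:      # new second: reset the counter
            self.window, self.count = now, 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False

class SlidingWindowLimiter:
    """One common fix: count requests in the trailing interval instead
    of resetting at clock-aligned boundaries."""
    def __init__(self, limit, interval, clock):
        self.limit = limit
        self.interval = interval
        self.clock = clock
        self.hits = deque()         # timestamps of allowed requests

    def allow(self):
        now = self.clock()
        while self.hits and now - self.hits[0] >= self.interval:
            self.hits.popleft()     # drop hits outside the window
        if len(self.hits) < self.limit:
            self.hits.append(now)
            return True
        return False

# Ten requests straddling a second boundary, limit of five per second.
t = [0.95]
clock = lambda: t[0]
fixed = FixedWindowLimiter(limit=5, clock=clock)
sliding = SlidingWindowLimiter(limit=5, interval=1.0, clock=clock)

fixed_allowed = sliding_allowed = 0
for _ in range(5):                  # burst just before the boundary
    fixed_allowed += fixed.allow()
    sliding_allowed += sliding.allow()
t[0] = 1.05                         # burst just after the boundary
for _ in range(5):
    fixed_allowed += fixed.allow()
    sliding_allowed += sliding.allow()

print(fixed_allowed, sliding_allowed)   # prints: 10 5
```

The fixed-window version admits all ten requests in a 100 ms span, twice the intended rate, which is exactly the burst-fail pattern described above. Note too that both sketches key off a single global counter; a real limiter would also be per-client and, in production, should prefer a monotonic clock over wall time.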

The skill that's appreciating

The skill that's appreciating in value is taste — the ability to look at a plausible solution and know, in your gut, whether it's the right one. Not whether it works on the happy path. Whether it works in February at 4am when half the cluster is restarting.

Taste is the hardest thing to learn from a model, because the model can't teach you what it doesn't have. You learn taste the way you've always learned it: by being on the wrong end of decisions, in production, with users watching. There's no shortcut, and AI hasn't created one. If anything it's raised the bar for entry into the work where taste develops at all.

I am not anxious about AI replacing me. I am anxious about the people three years behind me, who are skipping the boring typing that used to be how you accidentally built intuition. The boring parts were never the point — but they were where the point got made.

— Filed under: AI, Engineering, Practice