The Pragmatic Engineer
The impact of AI on software engineers in 2026: key trends
Source: The Pragmatic Engineer · Authors: Gergely Orosz & Elin Nilsson · Date: 2026-04-14 · Original article
⚠️ Note: This article is a paid subscriber post. The freely-accessible portion covers sections 1–4 (and previews of 5–7). This summary reflects the accessible content; the deep dives into "Coasters," changing roles, and other craft impacts sit behind the paywall.
This piece distills 900+ survey responses from Pragmatic Engineer readers about AI coding tools. The big-picture takeaway: AI does not flatten the engineering profession into one new shape — it amplifies the tendencies that engineers already had. Costs are climbing, usage limits are biting, and the same tool can feel like a superpower to one engineer and a frustration to another, depending on what kind of engineer they are.
The article identifies seven trends. The publicly available portion covers four of them in depth.
1. Costs: the bill keeps growing, and finance teams are noticing
Around 15% of all respondents brought up AI tool costs unprompted — a strong signal in survey data, where people usually only mention what is genuinely on their mind.
Who pays?
Most AI tooling is paid for by employers, not engineers personally. Two rough tiers show up repeatedly:
- Company-paid: ~$100–$200/month per engineer. These are the "max" tiers of Claude Code, Cursor, Codex, often as enterprise deals — sometimes heavily discounted in exchange for vendor lock-in. Some companies layer usage-based (API) spending on top of the monthly base.
- Personally paid: ~$20/month or free tiers. Engineers stack these (e.g., Copilot + a free ChatGPT). About 5% maintain both a work and a personal subscription, often to keep personal projects separate or to use a tool their employer hasn't approved.
A common posture from leadership at small US-based companies right now is: "don't worry about the cost, we're still figuring out best practices." One US CTO said openly that they aren't sweating costs because the team is still evolving its practices — though with some devs "blowing through budget," they may start instituting caps.
Breaking the budget
Real anecdotes from the survey illustrate how easily things spiral:
- A CPTO (Chief Product & Technology Officer) at a mid-sized company said they personally racked up $600/month Cursor bills while their team was on the standard ~$100/month plan. They're now migrating the team to Claude Code because, dollar-for-dollar, it gives them more.
- Heavy users sometimes get dedicated higher budgets. One senior C++ engineer in the video game industry called himself the team's "AI champion" — he has uncapped limits but deliberately sticks to what teammates have, so he can demo realistic workflows to them.
A cultural split: US vs. UK/EU
Push-back on AI spending is disproportionately European. Most of the "finance won't approve $30–50/month per engineer" stories come from UK and EU companies. One memorable example: a 10-person seed-stage startup whose CEO questioned spending £25/month per engineer — one of the cheapest tools on the market.
The pattern, according to the authors:
- European companies want to see clear value before increasing tool spend.
- US companies invest first and measure later.
This matters because the productivity gains from these tools are genuinely hard to quantify right now, so a "show me the ROI first" stance effectively means slower adoption.
Educating devs to use cheaper models
A niche but interesting practice from a 1,000+ person digital transformation company in Europe: their AI Enablement team trains new joiners on which model to pick for which task (e.g., Sonnet for routine work, Opus only when needed). After several incidents of users overshooting limits, model literacy became part of onboarding.
Why the cost trajectory worries people
Three reinforcing pressures:
- Heavy users hit limits, which forces upgrades to higher plans.
- API-priced usage trends only upward as adoption grows.
- Subsidies are doing heavy lifting. Many of the cheapest enterprise plans are clearly being subsidized by vendors. Experienced engineering leaders in the survey explicitly compare this to the early cloud era, when AWS/GCP/Azure offered loss-leading pricing for years before raising prices on customers who were now locked in. They expect the same playbook here.
A principal engineer at a fintech put it bluntly: "The AI hype has created a special, generous budget for AI tools, and there's no effective budget – yet!" — meaning the hype is currently insulating AI tooling from the kind of scrutiny every other line item gets.
But CFOs are starting to push back. One CTO at a sports-tech company said the argument that finally landed with their CFO wasn't productivity gains — it was the loss of productivity when engineers hit daily limits and have to stop work mid-task.
A founder at a European seed-stage company did rough math on the unit economics:
Claude Code's Max 100 plan is $100/month. A single small task using Kimi K2.5 via OpenCode can cost $5, mostly in input tokens. If third-party inference providers are running at sustainable prices, then the more expensive Opus model cannot possibly be sustainable, let alone profitable, at current plan prices.
The implication: prices will have to rise. When that happens, European companies (already squeamish about current prices) will feel it first and hardest.
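The founder's back-of-envelope math can be sanity-checked in a few lines. The $100 plan price and the ~$5-per-small-task figure come from the article; the tasks-per-month number is an assumption added here purely for illustration.

```python
# Sanity check of the founder's unit-economics claim.
# plan_price and cost_per_task are from the article; tasks_per_month is an
# ASSUMPTION (a moderately heavy agentic user, for illustration only).
plan_price = 100.0       # Claude Code Max plan, $/month
cost_per_task = 5.0      # one small task via Kimi K2.5 on a third-party provider, $
tasks_per_month = 200    # assumed workload of a heavy user

breakeven_tasks = plan_price / cost_per_task          # tasks the plan price covers
implied_cost = tasks_per_month * cost_per_task        # what that workload costs at market rates

print(f"The plan price covers only ~{breakeven_tasks:.0f} tasks at third-party rates")
print(f"{tasks_per_month} tasks/month would cost ~${implied_cost:,.0f} at those rates")
# Opus tokens are priced well above K2.5-class tokens, so the real gap is
# larger than this — which is the founder's point.
```

At these assumed rates the flat plan covers roughly 20 small tasks before the vendor is underwater, which is why the founder concludes the pricing is subsidized.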
2. Usage limits: ~30% of engineers are hitting walls
The second dominant theme. Roughly:
- ~30% of respondents regularly hit usage limits (token caps, request caps, or daily/hourly resets). The pain is sharpest when the limit hits mid-task or in flow state — you can't just "pause" a half-built mental model.
- ~20% of respondents comfortably stay under limits — typically because they're on premium plans, have a lot of non-coding work, or do enough manually that AI usage isn't dominant.
Most complaints come from people on cheap (~$20/month) plans, but even the expensive plans are not immune.
Why people hit limits
- Two opposite groups blow through credits for opposite reasons. An engineering manager at a Canadian mid-sized company described the bind: "new users" still learning burn tokens through inefficient prompting, while power users burn tokens through legitimate heavy usage. Both push for higher limits, but raising them is expensive — "a tough balance."
- Using Opus (the heavyweight model) for everything. Opus is markedly more expensive per token than Sonnet/Composer-class models. One European engineer described a disciplined workflow that emerged from getting burned: start in "plan" mode with Opus (paste the acceptance criteria + issue description, let Opus produce a plan), then switch to Sonnet or Composer to actually execute the plan. This uses Opus's reasoning strength sparingly and lets the cheaper model do the bulk typing.
- Token-eating mistakes. Things like attacking a problem from the wrong angle, using AI for something a 10-line script would solve, or experimenting with new agentic techniques like OpenClaw and Ralph Loops that can devour tokens before you realize what's happening.
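The plan-with-Opus, execute-with-Sonnet workflow described above is just a two-phase routing pattern. A minimal sketch follows; `call_model`, `fake_client`, the model names, and the prompts are all hypothetical stand-ins, not any vendor's actual API.

```python
# Hedged sketch of the two-phase workflow: use the expensive reasoning model
# only for planning, then hand off to a cheaper model for execution.
# `call_model` is a hypothetical stand-in for whatever client/CLI you use.

def plan_then_execute(task: str, call_model) -> str:
    # Phase 1: the heavyweight model produces a plan, not code.
    plan = call_model("opus", f"Produce a step-by-step plan. Do not write code.\n\n{task}")
    # Phase 2: the cheaper model does the bulk of the typing.
    return call_model("sonnet", f"Implement this plan:\n\n{plan}")

# Tiny fake client to show the control flow (and that Opus is called once).
calls = []
def fake_client(model: str, prompt: str) -> str:
    calls.append(model)
    return f"<{model} output>"

result = plan_then_execute("Add pagination to the /users endpoint", fake_client)
# calls is now ["opus", "sonnet"]: one expensive call, one cheap one.
```

The design point is the token budget, not the code: per the article, Opus-class tokens cost markedly more than Sonnet/Composer-class tokens, so confining the expensive model to the short planning step is what keeps users under their limits.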
What people do when they hit a limit
Three common responses:
- Switch tool or model. About a quarter of those who hit limits do this. An Atlassian engineer described having Cursor, Windsurf, and an internal tool called "codelassian" — when one's exhausted, they hop to another.
- Upgrade the plan. A no-brainer at most companies because the alternative is paying engineers to wait for a reset. One senior EM said upgrading their team to Claude's Max 20x plan eliminated limit-hitting.
- Move to API pricing. The smoothest way to keep working without abandoning a half-finished task. One senior engineer admitted that when corporate Claude/Copilot limits are hit, "I tend to use API keys that my teammates give me."
3. Impact on "Builders": more leverage, more slop, and an identity crisis
The article introduces three engineer archetypes that the survey data sorted respondents into. Understanding them is the heart of the piece, because the same AI tool feels completely different to each archetype.
- Builders — care deeply about quality, architecture, good practices, and the craft of software engineering. They talk about code as a thing that should be elegant and durable.
- Shippers — focus on outcomes: features in production, experiments with users, time-to-value. Many leaders, managers, and product engineers fall here.
- Coasters — engineers who aren't standouts but get work done. Less concern for taste or quality; they execute what they're told.
Crucially, the authors argue: AI doesn't change which archetype someone is — it amplifies it. A builder becomes a more productive builder. A shipper ships even faster. A coaster… coasts faster, often producing more "AI slop" along the way.
What builders gain from AI
- Large, mechanical changes are now cheap. Refactors, framework migrations, raising test coverage, sweeping codebase rewrites. These are tasks that are tedious but not intellectually hard — perfect for AI agents, because the builder already knows exactly what should happen and just needs the labor.
- "Quality of life" fixes that used to be uneconomical. The barrier to fixing a small nagging bug or polishing a rough edge drops dramatically. The article uses a vivid example from DHH (Ruby on Rails creator), retold from a recent podcast: at 37signals, an engineer wondered, "What about P1? Can we fix the floor?" — i.e., the fastest 1% of requests, currently at 4ms. Without AI agents, nobody would have bothered. With agents, he ran a side project for a couple of days, opened 12 pull requests, ~2,500 lines of code, and pushed the floor below 0.5ms. DHH's framing is the key insight: "the explosion of the pie suddenly lets us look at problems we would never have contemplated looking at before." AI doesn't just speed up known work — it widens the universe of work that's economically worth doing.
- Typing stops being the bottleneck. Several builders report falling more in love with coding because they can stay at the conceptual layer — designing, deciding, debugging — and let the agent handle keystrokes. A staff engineer at a large US tech company captured it: "the AI can read and write 100x faster than me. I get to stay at the conceptual level of shipping a product, and I can dive into debugging with the agent as needed."
What builders lose
- AI slop overload. Builders are the most derailed by reviewing AI-generated code from colleagues — much of which they consider low-quality. Code review becomes higher-volume and lower-signal.
- More debugging. AI-generated code introduces a steady stream of subtle bugs, and builders — being the ones who actually care about correctness — end up doing the disproportionate work of finding and fixing them.
- Loss of professional identity. This is the most poignant theme. Some builders report genuine grief at no longer doing hands-on coding. They can't justify typing code themselves when an agent writes "pretty decent" code faster than they can. The thing that defined them professionally for years has been quietly automated out from under them, even though they're nominally still in charge.
4. Shippers: the biggest fans, but with hidden costs
Shippers are the archetype most enthusiastic about AI tools — and also the ones doing most of the public hyping, because their personal experience genuinely is dramatically faster shipping.
The accessible portion previews shipper upsides without the full deep-dive (which is paywalled), but the trade-offs the authors flag are important:
- Shippers add tech debt faster because shipping speed is what they optimize for.
- Shippers can build the wrong things faster, since AI removes the natural friction that used to force a pause-and-think moment.
In other words, AI is a force multiplier on a shipper's strengths and their blind spots.
5–7. Previewed but paywalled
The remaining sections are introduced but locked. Brief previews from the table of contents:
- Coasters: learning faster while generating AI slop. Less-skilled engineers can level up quickly with AI, but they generate a lot of low-quality output along the way — which is precisely what frustrates the builders who have to review it.
- Engineer and EM roles are converging. Engineers now have to orchestrate and context-switch more (running multiple agents, managing what each is doing). Engineering managers can be more hands-on again because agents lower the cost of writing code yourself. The two roles, historically distinct, are starting to look more alike.
- Other impacts on the craft. A shift from "how to build" to "what to build"; solo developers seeing outsized gains; workloads (counter-intuitively) increasing with AI tools rather than decreasing.
The through-line
If you read only one idea from this piece, make it this: AI tools are an amplifier, not an equalizer. They make builders more leveraged builders (with more slop to wade through), shippers faster shippers (with more debt), and coasters faster coasters (with more slop). They make managers more hands-on and engineers more managerial. And they're doing all of this on top of a cost structure that survey respondents — including the people writing the checks — broadly believe is not yet sustainable, propped up by vendor subsidies that experienced leaders expect to evaporate the same way early-cloud subsidies did.
The interesting question for 2026 isn't "will AI change software engineering?" — it clearly already has. It's "what happens when the subsidies end and prices rise to cover real inference costs?" The companies that used the cheap-tools window to figure out where AI actually creates value will absorb the price hike. Those that used it to spray spend across every engineer with no measurement will be in trouble.