
AI coding tools are everywhere. Many teams see faster local throughput: more code drafted, more ideas explored, less friction to start.
And yet, many leaders are seeing an uncomfortable reality: delivery doesn’t speed up proportionally—and sometimes it gets harder.
This is the core lesson behind a simple thesis: software development is a team sport—and in the AI‑first era, the “team parts” (coordination, review, integration, decision‑making) become the constraint.
We’ll focus on what matters for executives: how to redesign the operating model so AI translates into business outcomes—not just activity.
The research mirrors what many teams experience. In a controlled setting, Google found AI assistance made individual coding tasks ~21% faster. Yet METR's study of experienced developers doing real‑world work on their own projects found them ~19% slower.
The contradiction is only apparent: the studies measured different realities.
In Goldratt’s terms: speeding up a non‑bottleneck doesn’t speed up the system. If AI reduces time spent writing code, the constraint moves to communication, review, integration, and decision latency.
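To make the Goldratt point concrete, here is a minimal sketch with purely illustrative numbers (not measured data): even if AI halves drafting time, end‑to‑end delivery barely moves when review, integration, and decisions dominate.

```python
# Hypothetical stage durations for one change, in hours.
baseline = {"write_code": 4, "review": 8, "integrate": 6, "decide": 6}

# Suppose AI makes drafting 50% faster but leaves everything else untouched.
with_ai = dict(baseline, write_code=baseline["write_code"] * 0.5)

total_before = sum(baseline.values())  # 24 hours end to end
total_after = sum(with_ai.values())    # 22 hours end to end

speedup = total_before / total_after
print(f"End-to-end speedup: {speedup:.2f}x")  # ~1.09x, despite 2x faster coding
```

The coding stage sped up 2x; the system sped up 9%. The constraint simply moved downstream.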
DORA’s 2025 and 2026 research (see Balancing AI tensions) adds a critical nuance: AI’s impact on the SDLC is not a linear improvement—it’s a set of tradeoffs.
In a thematic deep dive of 1,110 open‑ended responses from Google engineers (Q3 2025), DORA found AI is most visible in code generation, information seeking, code review, and testing. Engineers broadly reported higher velocity—but also a growing verification tax: time saved generating drafts gets re‑spent auditing, prompting, and checking correctness.
At the macro level, DORA reports that higher AI adoption is associated with both higher delivery throughput and higher delivery instability. AI behaves like an amplifier: strong internal platforms, good testing, and clear workflows get stronger; fragmented tooling and fragile systems accumulate debt faster.
DORA frames three tensions leaders must explicitly manage:
The implication is clear: the fix is not “faster code generation.” The fix is redesigning how teams coordinate, review, integrate, and learn.
Most organizations implicitly treat coordination as overhead:
Those are good instincts—until they become a denial of reality.
Modern software delivery is a network: services, platforms, security, data, product, design, SRE, compliance. In a networked system, coordination is a first‑class engineering activity.
In AI‑first environments, the need rises:
AI‑first teams don’t need less teamwork. They need higher‑quality teamwork at higher speed.
Every team should be able to answer:
Leadership move: treat team interfaces the way you treat service interfaces: document them, version them, keep them stable.
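As a hedged illustration of that analogy (every name and field here is hypothetical, not a prescribed schema), a team interface can be captured as a small, versioned record checked into the repo, exactly like an API contract:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class TeamInterface:
    """A versioned 'contract' describing how other teams work with this one."""
    team: str
    version: str                                  # bump when the interface changes
    owns: list = field(default_factory=list)      # systems this team owns
    request_channel: str = ""                     # where to file requests
    decision_owner: str = ""                      # who can say yes or no
    response_sla_hours: int = 24                  # expected decision latency

# Hypothetical example entry.
payments = TeamInterface(
    team="payments",
    version="2.1.0",
    owns=["billing-service", "invoicing"],
    request_channel="#payments-intake",
    decision_owner="payments-lead",
    response_sla_hours=48,
)
print(payments.team, payments.version)  # documented, versioned, stable
```

The point is not the format: it is that the interface is written down, has an owner, and changes deliberately rather than by drift.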
Dependency load forces waiting, increases “people sync,” and amplifies ambiguity. AI can make changes feel cheap—but dependencies still exist (ownership, architecture, risk, operations).
Leadership move: invest in boundaries that reduce negotiation (clear ownership, paved roads, stable platforms, standard release paths).
When teams don’t know what “good” looks like, what tradeoff is acceptable, or who can decide, they stall—or push risk downstream.
AI can worsen this by producing many plausible options.
Leadership move: clarify decision rights and set decision latency SLAs (e.g., product decisions within 24h; security exceptions within 48h).
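A minimal sketch of how such an SLA could be tracked (the thresholds are the ones suggested above; the function and category names are hypothetical):

```python
from datetime import datetime, timedelta

# Decision-latency SLAs from the example above:
# product decisions within 24h, security exceptions within 48h.
SLA_HOURS = {"product": 24, "security_exception": 48}

def is_overdue(category: str, opened_at: datetime, now: datetime) -> bool:
    """True if a pending decision has exceeded its latency SLA."""
    return now - opened_at > timedelta(hours=SLA_HOURS[category])

opened = datetime(2025, 1, 6, 9, 0)
print(is_overdue("product", opened, opened + timedelta(hours=30)))             # True
print(is_overdue("security_exception", opened, opened + timedelta(hours=30)))  # False
```

Even a crude check like this makes decision latency visible, which is the precondition for managing it.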
DORA’s guidance matches what high‑performing orgs are converging on:
Leadership move: optimize the end‑to‑end review + integration loop, not the speed of code drafting.
The hardest constraints to spot are often not in systems:
This is where measurement becomes a competitive advantage.
DevEx Surveys help you quantify delivery constraints that system-data dashboards miss.
Most teams aren’t slow — they’re unclear.
With a lightweight cadence (monthly or quarterly), you can measure:
For AI‑first teams, the question isn’t “Are we writing more code?” It’s:
If you’re rolling out AI tools, run DevEx Surveys to establish a baseline and track whether changes to your operating model actually improve delivery.
Most teams aren’t slow — they’re unclear and their time is fragmented.
Most productivity loss doesn’t show up in tooling data. It lives in meetings and in interruptions within and across teams.
WorkSmart AI reveals how time and attention flow — and where they’re lost:
This gives leaders an additional lens on delivery constraints that typical engineering dashboards miss, and helps teams protect focus while improving coordination.