AI Coding Tools: Are They Slowing Down Experienced Developers?

Hold On… AI Coding Assistants Might Actually Be Slowing Experts Down?

Hey everyone, John here! Welcome back to the blog where we break down the latest in AI without the confusing jargon. Today, we’re looking at a story that made me do a double-take. We’re always hearing that AI tools are making computer programmers faster and more efficient. But what if, for the most experienced pros, the opposite is true? A new, detailed study suggests just that, and the results are pretty stunning.

Let’s dive in and unpack what’s going on.

The Shocking Discovery: A 19% Slowdown

Imagine you give a master chef a fancy new kitchen gadget that promises to make dicing vegetables faster than ever. You’d expect them to fly through their prep work, right? Well, a new study did something similar with expert software developers. They looked at 16 seasoned pros who work on huge, well-known software projects.

The researchers, from a group called METR, had them work on their usual, complex tasks. Sometimes they could use popular AI coding assistants (like Cursor Pro and Claude), and other times they couldn’t. The result? When using the AI tools, these expert developers took 19% longer to finish their work. That’s right—the AI made them slower!

Lila: “Wait a minute, John. You mentioned they worked on ‘mature repositories.’ What exactly is a repository in this context?”

John: “Great question, Lila! Think of a repository as a massive, shared digital folder for a software project. It’s like a giant library that holds all the blueprints (the code), the history of every change, and all the notes. For a big project with millions of lines of code, it’s the single source of truth that all the developers work from. So these weren’t simple, small-scale tests; they were happening in the real, complicated world of software development.”

Our Brains vs. The Clock: A Tale of Two Realities

Here’s where it gets even stranger. Before the study, the developers predicted the AI would make them about 24% faster. And even after the study, when they were actually 19% slower, they felt like the AI had made them 20% faster! There’s a huge gap between how productive they felt and how productive they actually were.

One expert quoted in the article warns that companies might be “mistaking developer satisfaction for developer productivity.” It’s a bit like buying a new car that’s incredibly comfortable and fun to drive. You might enjoy the ride so much that you don’t realize you’re getting stuck in more traffic and taking longer to get to your destination. The AI tools made the experience of coding feel better, but that good feeling didn’t translate into more speed for these experts.
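If you like to see the numbers laid out, here’s a quick back-of-the-envelope calculation in Python. The percentages come straight from the study; the two-hour baseline task is just an illustrative assumption:

```python
# Back-of-the-envelope numbers from the study, applied to a hypothetical task.
baseline_minutes = 120  # assumed: a task that takes 2 hours without AI

predicted = baseline_minutes * (1 - 0.24)  # beforehand: expected ~24% faster
felt = baseline_minutes * (1 - 0.20)       # afterwards: felt ~20% faster
measured = baseline_minutes * (1 + 0.19)   # measured: actually 19% slower

print(f"Predicted: {predicted:.0f} min")  # ~91 min
print(f"Felt:      {felt:.0f} min")       # ~96 min
print(f"Measured:  {measured:.0f} min")   # ~143 min
```

On this hypothetical task, the developers felt they were saving about 24 minutes while actually losing about 23. That’s a gap of nearly 47 minutes between perception and reality.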

How Did They Figure This Out? A Peek Inside the Experiment

You might be wondering how the researchers were so sure about their findings. They used a very reliable scientific method called a randomized controlled trial (or RCT).

Lila: “Okay, ‘randomized controlled trial’ sounds super technical. Can you break that down for me?”

John: “Absolutely! It’s a fancy name for a simple idea. Imagine you want to test if a new vitamin helps people run faster. You’d get two groups of runners. You give the vitamin to one group, and a fake pill (a placebo) to the other group, without them knowing which is which. Then you have them all run a race. By comparing the results, you can see if the vitamin really worked. The study did the same thing: one group of tasks was done with AI, and the other group (the ‘control’ group) was done without it. This way, they could directly measure the AI’s true impact.”
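For readers who think better in code, here’s a minimal Python sketch of that design. The task names and completion times below are made up purely to show the randomize-then-compare structure; they are not the study’s data:

```python
import random
import statistics

# A minimal sketch of the randomized-controlled-trial idea, with made-up data.
# In the real study, each task a developer took on was randomly assigned to
# "AI allowed" or "AI not allowed" before work began.

tasks = [f"issue-{i}" for i in range(20)]     # hypothetical task list
random.shuffle(tasks)
ai_allowed, control = tasks[:10], tasks[10:]  # random split into two arms

# Completion times in minutes, one per task (fabricated here purely to show
# the comparison step -- these are NOT the study's numbers):
times = {t: random.gauss(120, 20) for t in tasks}

mean_ai = statistics.mean(times[t] for t in ai_allowed)
mean_control = statistics.mean(times[t] for t in control)
print(f"AI arm: {mean_ai:.0f} min vs. control arm: {mean_control:.0f} min")
```

Because the assignment is random, any consistent difference between the two arms can be attributed to the AI tools rather than to the tasks or the developers.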

The tasks weren’t easy, either. They took about two hours on average to complete, and the developers were working on projects they’d known for years. That realism is what makes the results such a valuable reality check against the hype you might see elsewhere.

So, Why Would AI Make an Expert Slower?

This is the million-dollar question. The slowdown wasn’t because the AI was “bad,” but because of how the experts had to interact with it. Here are a few key reasons the study uncovered:

  • The Trust-But-Verify Problem: The developers didn’t blindly trust the AI. In fact, 75% of them said they read every single line of code the AI suggested, and in the end they accepted fewer than half of those suggestions.
  • The “Fixer-Upper” Code: Even when they did accept the AI’s code, it often wasn’t perfect. Over half of the developers reported having to make major changes to clean up the AI’s work and make it fit into the complex project.
  • Lack of Deep Context: Big software projects are like intricate spiderwebs. Every piece is connected. The AI, as smart as it is, struggled to understand all these deep connections and specific rules of the project. It was offering suggestions without knowing the full history and architecture.
  • The “Ooh, Shiny!” Effect: Some developers admitted they spent time just experimenting with the tool, playing around with its features beyond what was strictly necessary to get the job done.

One expert in the article described this friction perfectly. He said it’s about integrating “probabilistic suggestions into deterministic workflows.”

Lila: “Whoa, John, that’s a mouthful! ‘Probabilistic suggestions into deterministic workflows’… what on earth does that mean?”

John: “Haha, I know it sounds complex, but the idea is simple! ‘Probabilistic’ means the AI is making a highly educated guess. It’s saying, ‘Based on everything I’ve seen, this is probably what you want.’ But software development is ‘deterministic’—the code has to work in one specific, predictable way, every single time. There’s no room for ‘probably’. So, the expert developer has to take the AI’s ‘maybe this will work’ suggestion and spend time carefully checking and molding it until it becomes a ‘this will definitely work’ solution. That extra step is where the time gets lost.”
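To make that “molding a maybe into a definitely” step a little more concrete, here’s a loose Python sketch of what such a gatekeeping routine could look like. The patch-file workflow and the function itself are assumptions for illustration, not the actual tooling the study’s developers used:

```python
import subprocess

def accept_ai_suggestion(patch_file: str) -> bool:
    """Hypothetical gate: an AI suggestion only gets in if the project's
    deterministic checks (here, a test suite) still pass with it applied.
    The commands are illustrative, not taken from the study."""
    # Apply the suggested change (assumes a git checkout and a patch file).
    subprocess.run(["git", "apply", patch_file], check=True)

    # Run the project's tests: the deterministic yes/no verdict.
    result = subprocess.run(["pytest", "-q"])
    if result.returncode != 0:
        # "Probably correct" wasn't good enough; roll the change back.
        subprocess.run(["git", "apply", "--reverse", patch_file], check=True)
        return False
    return True
```

Every trip through a loop like this is review time the developer wouldn’t have spent on code they wrote themselves, and that’s where the minutes add up.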

This Isn’t Just a One-Off Finding

The METR study isn’t alone. The article also points to Google’s huge 2024 DORA report, which surveyed over 39,000 tech professionals. It found that as companies used more AI, their speed of delivering software actually went down slightly, and the stability of their systems dropped. A striking 39% of people in that survey said they had little or no trust in AI-generated code.

One developer compared it to the early days of Stack Overflow (a popular Q&A site for programmers). You’d find a solution, copy-paste it into your project, and then everything would explode because you didn’t fully understand it. The same caution applies to AI.

Now, to be fair, other studies have shown that AI tools like GitHub Copilot can make developers much faster. However, those studies often used simpler, more isolated coding problems, not the massive, interconnected projects that the METR study focused on.

The Path Forward: Is AI a Co-Pilot or a Gimmick?

After hearing all this, you might think these AI tools are on their way out. But that’s not the case at all. Interestingly, 69% of the developers in the study continued to use the AI tool even after the experiment was over. Why? Most likely because they value something beyond pure speed, such as reduced mental strain or having the AI take over boring, repetitive tasks.

The takeaway here isn’t to ditch AI. It’s to be smarter about how we use it. An expert in the article recommends a “portfolio mindset.” Think of it like a toolbox. You wouldn’t use a hammer to saw a piece of wood. Similarly, you should use AI for the jobs it’s good at:

  • Writing documentation
  • Creating simple, repetitive “boilerplate” code
  • Generating basic tests

For the really tricky, creative, and complex parts of a project, the human expert’s knowledge and experience are still irreplaceable. The goal is to treat AI as a “contextual co-pilot,” not as the captain of the ship.

Our Final Thoughts

John: “For me, this is a fascinating and much-needed reality check. We’re constantly bombarded with hype about how AI is going to automate everything. This study shows that for high-level, expert work, human oversight, experience, and deep context are more important than ever. The tool is only as good as the person wielding it, and we’re all still learning the best way to work together.”

Lila: “As someone just learning about all this, I find it really encouraging! It means the future isn’t about humans vs. AI, but humans with AI. It shows there’s immense value in developing deep expertise that a machine can’t replicate. It makes the field feel more approachable, knowing that the goal is to become a smart partner for the technology, not to be replaced by it.”

This is a perfect example of how the real-world impact of AI is often more nuanced and interesting than the headlines suggest. It’s a journey of discovery for all of us!

This article is based on the following original source, summarized from the author’s perspective:
AI coding tools can slow down seasoned developers by 19%
