An explosion in software engineers using AI coding tools?

👋 Hi, this is Gergely with a bonus, free issue of the Pragmatic Engineer Newsletter. In every issue, I cover topics related to Big Tech and high-growth startups through the lens of engineering managers and senior engineers. In this article, we cover one out of four topics from today’s subscriber-only The Scoop issue. To get full issues twice a week, subscribe here.

GitHub just published a survey about developer productivity and AI coding tools. The company hired an external research agency to survey 500 US-based software developers who work at large, 1,000+ person organizations.

I reached out to GitHub to get more details on how the survey was conducted. Some details about the population surveyed, which were not published in the original survey:

  • All respondents work full-time and are individual contributors
  • Specialization: 32% fullstack, 23% frontend, 17% backend, 18% DevOps, 4% mobile, 6% operations
  • Age: 41% 30-39, 47% 40-49, 6% aged 29 or below, and 6% aged 50 or above
  • Gender: 70% male, 30% female

The industry split was also pretty evenly distributed:

Answering the question “How would you best describe the area in which your company operates?” and “Which of the following categories includes your company’s annual sales revenue?”

One finding really jumps out: 92% of developers say they use AI coding tools at work:

92% of respondents use AI coding tools at work

Back in April, we covered the productivity impact of AI tools, based on a survey of engineers who’ve been using AI coding tools for some time. While I know few developers who don’t at least occasionally use ChatGPT, or who haven’t tried an AI coding assistant, I always assumed this was just my bubble. But this research indicates these tools have spread far and wide.

What do AI coding tools help the most with? The survey lists the top 3 areas mentioned by developers:

  1. Learn: develop coding language skills (57%)
  2. Productivity: become more productive (53%)
  3. Focus: spend more time building and creating, less on repetitive tasks (51%)

These findings chime with the biggest productivity gains developers mentioned in our earlier article, The productivity impact of AI coding tools, which covered AI coding tools like Copilot and generative AI tools like ChatGPT. Here are the most common use cases from that article, organized into the three categories from the GitHub survey:

Learning (Generative AI)

  • New concepts, tools, frameworks, and research: interestingly, learning unfamiliar topics was mentioned most frequently in the context of ChatGPT. Researching new topics is also a common use case.
  • Improving code quality: asking ChatGPT to improve a piece of code, or to critique it. Another use case is to paste in code and ask ChatGPT to refactor it.
  • Getting started: kicking off new projects and tasks, and overcoming initial barriers. As one respondent shared: “it breaks the initial barrier of ‘where to begin?’” Generating code for greenfield tasks or projects is also a common use case.

Productivity (AI coding tools & Generative AI)

AI coding tools:

  • Scaffolding: “I have been using it mainly to get basic structure ready without typing it all out by myself. Helps me to do my task in a very short amount of time in most cases.”
  • Autocomplete: "I have it always on, suggesting autocomplete in the IDE."

Generative AI tools:

  • Debugging: one use case is to give ChatGPT some code and ask it why the code isn’t behaving as expected.
  • Prototyping: several engineers mention they use ChatGPT to throw together a prototype quickly.

Focus (mostly AI coding tools)

AI coding tools:

  • Boilerplate code: “Integrating with a very inconsistent SOAP service is an example of when I would go mad if I had to type it all out. Copilot is able to use bits of notes I have in IDE to generate reasonably-looking skeletons while adjusting types/names/literals.”
  • Generating repetitive code: "It generates a lot of code I would type anyway. It speeds up my work."
  • Generating tests and documentation: "I use it for everything – writing code, writing tests, writing documentation" and "generation of (block) comments, generation of tests, documentation."

Generative AI tools:

  • Routine, boring tasks: for example, SQL or Elastic queries and understanding what JSON responses mean.

We should expect even more heated competition between AI coding tools off the back of this data. If close to 90% of developers are already using something to help them code, then most of the market is already experimenting with these tools, but is still early in adopting them. This means the future market leaders are likely tools that are available today, or that will launch very soon.

Now is a good time to recap AI coding tool alternatives. We previously covered these:

In the 2 months since publishing that list, several new tools have launched, including:

Visual Studio Code plugins:

None of the above are endorsements: do your own due diligence on matters like which code models these companies use and their policies for keeping your code secure. See a comparison of the first 9 tools here. This space is evolving rapidly: it’s exciting, and hard to keep up with!

Developers in this survey seem to feel they are already evaluated roughly as they think they should be. One interesting question asked how these devs think their managers should rate their performance, versus how they actually do. Unfortunately, the survey does not make apples-to-apples comparisons possible for most categories. I took the categories where a direct comparison is possible, and the result is pretty surprising:

How developers in the survey think they should be evaluated, vs how they are. Data source: GitHub

I’ll add that I find it problematic that GitHub has not released the full survey results: it feels like cherry-picking, and it makes apples-to-apples comparisons hard to do. For example, the report says “developers want more collaboration (...) developers want collaboration to be a top metric in performance reviews.” But when looking at the apples-to-apples data, this doesn’t ring true: 35% of developers said they’d like collaboration and communication to be measured for performance, and 33% said their company already does this.

So how will evaluation criteria change if everyone uses AI coding tools? Developers think it won’t change much, based on directly comparable data from the survey.

How developers think they should be evaluated, if using AI coding tools. The results are almost identical: 2% more think that quality and time to complete a task will be a bit more important. Data source: GitHub

There are interesting takeaways from the survey, but I feel the results were parsed to fit a narrative that AI coding tools lead to more collaboration. Looking at the raw data – the limited amount that was released – I did not reach the same conclusion. Sure, people will “collaborate” more with the AI itself, but I got no sense from the data that using AI tools results in more collaboration between software engineers.

However, despite my suspicion about data being cherry-picked, I do agree with this takeaway of the survey:

“As AI technology continues to advance, it is likely these coding tools will have an even greater impact on developer performance and upskilling.”

Generative AI and within-IDE coding tools still seem to be distinct categories. In my observation, there’s still a big divide between two types of AI coding helpers:

  1. Generative AI: chat “buddies” you can ask things like “explain how generics work in Go, and how they differ from generics in Java.” These tools greatly help with learning, and can also help scaffold or prototype ideas.
  2. Within-IDE AI coding tools: these aid the coding workflow by letting users focus more on “interesting” work, and make coding more productive.
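To give a flavor of the first category, here is the kind of minimal answer a chat assistant might produce for the generics prompt above. This is my own sketch, not output from any specific tool:

```go
package main

import "fmt"

// Max returns the larger of two values. Go's type parameters use
// interface constraints (here, a union of types that support ">"),
// and the compiler instantiates concrete versions at compile time.
// Java's generics, by contrast, are erased at compile time and only
// work with reference types, so a comparison like this would need a
// Comparable bound and compareTo() instead of the ">" operator.
func Max[T int | float64 | string](a, b T) T {
	if a > b {
		return a
	}
	return b
}

func main() {
	fmt.Println(Max(3, 7))         // int instantiation: prints 7
	fmt.Println(Max("go", "java")) // string instantiation: prints "java"
}
```

Whether an answer like this is accurate enough is exactly the kind of thing developers report checking: these assistants are great for a first explanation, less so as a final authority.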

This was one out of the four topics covered in this week’s The Scoop. A lot of what I share in The Scoop is exclusive to this publication, meaning it’s not been covered in any other media outlet before and you’re the first to read about it.

The full The Scoop edition additionally covers:

  • AWS’s us-east-1 outage: a deep dive. Amazon’s most important region went down for 3 hours, and the whole of the web felt it. Which services and companies were impacted and what really caused this incident? I spoke with engineers at AWS to get answers. Exclusive.
  • Why Meta is reducing its number of managers. On a recent podcast, Meta’s founder and CEO shared his reasoning for why the tech giant now has fewer managers. I talked with current Meta engineers for their reaction – and give my two cents as well. Analysis.
  • HashiCorp’s ‘optimized’ layoff process. The infrastructure provider cut 8% of staff, and seems to have ‘optimized’ this process from a business perspective. How did the company go about communicating redundancies, and what would a more humane process have been? Exclusive.

Read it here.


Featured Pragmatic Engineer Jobs

  1. Senior DevOps Engineer at Polarsteps. Amsterdam.
  2. Senior Software Engineer at Ladder. $150-175K + equity. Palo Alto (CA) or Remote (US).
  3. Senior Software Engineer at GetYourGuide. Berlin, Germany.
  4. Senior MLOps Engineer at GetYourGuide. Berlin, Germany.
  5. Senior Software Engineer (Reporting) at CAST.AI. €72-96K + equity. Remote (Europe).
  6. Senior Software Engineer (Security) at CAST.AI. €60-90K + equity. Remote (Europe).
  7. Senior Sales Engineer at CAST.AI. Remote (Europe, US).
  8. Senior Frontend Developer at TalentBait. €60-80K + equity. Barcelona, Spain.
  9. Technical Lead at Ably. £95-120K + equity. London or Remote (UK).
  10. Senior Software Engineer, Missions at Ably. £80-100K + equity. Remote (UK).
  11. Software Engineer at Freshpaint. $130-210K + equity. Remote (US).
  12. Senior Software Engineer, Developer Ecosystems at Ably. £80-100K. Remote (UK).
  13. Senior Web Engineer, Activation at Ably. £75-85K. Remote (UK).
  14. Web Engineer at Ably. £70-75K. Remote (UK).
  15. Staff Software Engineer at Onaroll. $170-190K + equity. Remote (US).
  16. Staff Software Engineer at Deepset. Remote (US, Europe).

The above jobs score at least 10/12 on The Pragmatic Engineer Test. Browse more senior engineer and engineering leadership roles with great engineering cultures, or add your own on The Pragmatic Engineer Job board and apply to join The Pragmatic Engineer Talent Collective.

Want to get interesting opportunities from vetted tech companies? Sign up to The Pragmatic Engineer Talent Collective and get sent great opportunities – similar to the ones above – without any obligation. You can be public or anonymous, and I’ll be curating the list of companies and people.

Are you hiring senior+ engineers or engineering managers? Apply to join The Pragmatic Engineer Talent Collective to contact world-class senior and above engineers and engineering managers/directors. Get vetted drops twice a month, from software engineers - full-stack, backend, mobile, frontend, data, ML - and managers currently working at Big Tech, high-growth startups, and places with strong engineering cultures. Apply here.