Generative AI providers targeted coding skills in updates and new tools last year, pitching enterprises on the potential productivity gains of equipping developers with coding companions.
Interest has given way to experimentation as enterprise leaders look more closely at the implications of these tools for software security and quality. Code churn is one commonly tracked metric: it indicates that code has been rewritten or deleted shortly after being written.
It's a normal part of the development process, but one recent study found the metric has been rising as AI-powered code generation tools come online.
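GitClear's exact methodology isn't public, but the basic idea behind the metric can be sketched in a few lines. The snippet below is a simplified illustration, not GitClear's implementation: given hypothetical line-level records of when code was authored and when (if ever) it was next revised, it counts the share of lines rewritten or deleted within a short window of being written.

```python
from datetime import datetime, timedelta

# Hypothetical line-level history: when each line was first authored,
# and when (if ever) it was next rewritten or deleted.
changes = [
    {"line": "a", "authored": datetime(2024, 1, 1), "revised": datetime(2024, 1, 5)},
    {"line": "b", "authored": datetime(2024, 1, 1), "revised": None},
    {"line": "c", "authored": datetime(2024, 1, 2), "revised": datetime(2024, 3, 1)},
]

def churn_rate(changes, window_days=14):
    """Share of lines rewritten or deleted within `window_days` of authorship."""
    window = timedelta(days=window_days)
    churned = sum(
        1 for c in changes
        if c["revised"] is not None and c["revised"] - c["authored"] <= window
    )
    return churned / len(changes)

# Only line "a" was revised within 14 days of being written.
print(churn_rate(changes))
```

In practice, this kind of data would be derived from a repository's commit history rather than hand-built records, and the window length is a tunable judgment call.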
The amount of code generated by AI rose significantly in 2023, according to Bill Harding, CEO of developer tool provider GitClear. “The percentages of copy/pasted, moved and churned code had been relatively stable in 2020 and 2021 with a slight uptick in 2022,” Harding said.
GitClear collected and analyzed 153 million changed lines of code, and projects code churn will double in 2024 compared to its 2021 pre-AI baseline.
AI coding tool vendors like GitHub reject the link between AI tools and declining code quality.
“There's no way of knowing whether the changed lines of code GitClear analyzed were authored by developers using AI tools," a GitHub spokesperson said in an email. "The study does not account for any other factors that impact code quality."
As CIOs feel the pressure to adopt generative AI quickly, code-generation tools can provide a seemingly instant boost to operations, but analysts say the story is more complex. Adopting AI-powered coding tools is unlikely to be as transformative, or harmful, as some would suggest.
Philip Walsh, senior principal analyst at Gartner, said he has found both “boomers” and “doomers” in his discussions with enterprise engineering leaders about AI coding assistants.
“Boomers are like, ‘All your developers are going to be 50% more productive,’ and that’s pretty misleading and we’re not seeing that,” Walsh said. Doomers suggest code generation tools will lead to “spaghetti code” and unsustainable technical debt.
Despite enterprise interest, adoption of coding tools is growing but not yet widespread, Walsh said, making it hard to discern their actual impact.
In the first half of 2023, only 13% of engineering leaders said their teams were using AI coding assistants, such as GitHub Copilot or CodeWhisperer, according to a Gartner survey. By October, 18% of engineering leaders confirmed they had rolled out a coding companion and 25% said they were in the deployment process.
“That’s still not enough, I think, to really have a significant sense of whether the negative impact [of AI-powered coding assistants] on code quality is an established, legitimate problem,” Walsh said. “On the flip side, it’s also too early to tell if it will lead to transformative productivity gains, acting as that silver bullet.”
Incremental improvements
Businesses across industries are intrigued by the potential productivity gains of AI-written code, including leaders at Vanguard, Bank of America, General Motors and Papa Johns.
The key, as adoption moves forward, is viewing the technology as an incremental improvement to the developer experience.
“We should not see AI coding assistance as a significant cost reduction mechanism that’s going to allow you to reduce headcount because your developers are going to become radically more productive,” Walsh said. “Developers like the tools, they do save time and developer experience is improved, but it’s not a transformative event.”
Bret Greenstein, data and AI leader at PwC, echoed the sentiment.
“When you treat the tool as a partner, to help you make sure you’re not missing things, to properly document and identify better patterns, it’s great,” Greenstein said. “But if you’re not a great coder, you’re not going to be able to have the judgment to know if the code that comes out is going to do what you want in the way that you want.”
PwC is in the process of executing a $1 billion, three-year roadmap to bolster its AI offerings for customers and employees.
Currently, all employees have access to generative AI through a variety of tools, and coding assistants are in the hands of all the company’s developers, Greenstein said. The company is leaning on several existing mitigation techniques and a few new ones to ensure generated code is up to par.
“It’s not that different, it’s just now it’s a little easier to inject things so it also has to be easier to detect things,” Greenstein said. “The constant elevating battle of security is going to continue at its pace.”
More than half of organizations encounter security issues with AI-generated code sometimes or frequently, according to a Snyk survey. Nearly 9 in 10 developers are concerned about the security implications of using AI coding tools.
PwC uses generative AI tools and automated scans to validate generated outputs. Human reviewers remain integral to the process as well.