Dive Brief:
- AI is not yet widely integrated into the software development lifecycle, but those who have adopted it use it regularly, according to a GitLab survey of 1,000 individual contributors and leaders in IT operations, development and security.
- Just one-quarter of organizations are using AI in software development, and nearly half of them use it multiple times a day, according to the report published Tuesday. Adopters turned to AI to improve efficiency, shorten cycle times and increase innovation.
- Software developers are using the technology to power natural language chatbots and automated testing, as well as for generating summaries of changes made to code, GitLab survey data shows.
Dive Insight:
While generative AI adoption has put pressure on tech leaders to find high-value use cases in IT operations, the integration of AI into software development predates the rise of generative AI. Tech leaders can't ignore AI's ability to shorten the time it takes to write code and to develop and test systems.
More than half of developers said their organizations are interested in AI-powered code generation and code suggestions, according to the GitLab report.
PwC fine-tuned OpenAI’s technology to serve as a conversational AI assistant for employees. The professional services company expects the tool to help workers throughout the organization with daily tasks as it is being rolled out in phases.
“My excitement is around generating code, code reviews, testing or quality assurance,” Scott Likens, global AI and innovation technology leader at PwC, told CIO Dive in August. “We can accelerate what our engineers are doing.”
Generative AI upgrades have improved popular tools, like GitHub’s Copilot. The AI coding assistant generated an average of 46% of developers’ code in February, up from 27% the previous June. However, advancements in the technology have raised new questions.
Apple reportedly restricted internal use of GitHub’s Copilot in May, citing concerns over confidential corporate data. Fears of copyright infringement also persist when it is unclear which data sets were used to train a model and how the model arrived at its generated output.
“Some customers are concerned about the risk of IP infringement claims if they use the output produced by generative AI,” Microsoft Vice Chair and President Brad Smith and Hossein Nowbar, corporate vice president and chief legal officer at Microsoft, said in a blog post Thursday. “This is understandable, given recent public inquiries by authors and artists regarding how their own work is being used in conjunction with AI models and services.”
So long as customers use guardrails and content filters built into its products, Microsoft said it will assume responsibility for the potential legal risks involved. The added protection comes on the heels of an inquiry from the U.S. Copyright Office last week as it considers whether legislative or regulatory action on AI-generated content is warranted.