Banks were a proving ground for the value of digital transformation and cloud migration. Now, the sector is helping make a compelling case for generative AI, thanks to its mountains of ingestible data siloed away in cavernous legacy systems.
The success of banking rests on its ability to rapidly and reliably process millions of transactions each day — an undertaking that inherently benefits from a combination of cloud and AI.
“At the core of the bank, technology is increasing in importance, as it has been for the last five decades,” Michael Abbott, Accenture senior managing director and global banking lead, said.
The challenge as the next wave of modernization crests is to leverage complex technologies to simplify systems and processes, according to Accenture’s 2025 banking trends report.
“Generative AI is going to impact every function within a bank — every single part of the job,” Abbott said, pointing to agentic solutions tailored to manage dispute resolutions and coding assistants trained to refactor legacy applications.
The impacts are already apparent across the industry.
Citigroup armed 30,000 developers with generative AI coding tools and rolled out a pair of generative AI-powered productivity enhancement platforms to its broader workforce last year. Goldman Sachs aims to furnish roughly 10,000 employees with an AI assistant by the end of the year, the bank’s CIO Marco Argenti told CNBC this week.
A technology step-change
As enterprises worked through generative AI pilots, accuracy and reliability emerged as concerns, particularly for customer-facing use cases. While issues persist, LLM-based coding assistants have come a long way in the last year, Abbott said.
Tools that can help engineers refactor COBOL applications were, by Abbott’s estimation, 70% to 75% accurate a year ago. Now, they achieve closer to 98% or 99% accuracy in reverse engineering legacy code.
“It’s not just translating from COBOL to Java — now you can transform the application, move it into an API, and forward engineering is going to improve in leaps and bounds this year,” Abbott said. “You’ll be able to forward engineer the application into any code base you want.”
The economics of generative AI are becoming more favorable, as well, even as many companies struggle to achieve returns on their AI investments.
JPMorgan Chase executives expect the bank’s LLM Suite and other AI capabilities to create $2 billion in value, President and COO Daniel Pinto said last fall.
“CIOs should understand that the cost of this stuff is collapsing — the API cost for OpenAI services has come way down,” Abbott said.
Accenture tracked a 74% annualized cost decline between GPT-3 and GPT-3.5 Turbo from December 2021 to December 2024. GPT-4’s cost fell at an annualized rate of 58% between March 2023 and the end of last year, the firm found.
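The declines Accenture cites are annualized — compound rates, not simple averages. A minimal sketch of that arithmetic (the prices below are hypothetical, chosen purely to illustrate the formula, not Accenture's underlying data):

```python
def annualized_decline(start_price, end_price, years):
    """Compound (geometric) annual rate of price decline."""
    return 1 - (end_price / start_price) ** (1 / years)

# Hypothetical example: a per-token price falling from $1.00 to about
# $0.0176 over three years works out to a 74% annualized decline,
# because 26% of the price survives each year (0.26 ** 3 ≈ 0.0176).
rate = annualized_decline(1.00, 0.26 ** 3, 3)
print(f"{rate:.0%}")  # → 74%
```

Note that a 74% annualized decline over three years compounds to a total drop of roughly 98% — far steeper than the headline figure suggests at a glance.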
Hyperscale cloud providers have continued to pour tens of billions of dollars into infrastructure to support model training and inference. Abbott compared the economics of a data center gold rush to the 19th-century U.S. transcontinental railroad boom.
“It costs an enormous amount of money to build the first model and train it, just as it takes a lot of capital to lay a railway track,” Abbott said. “But once you got the track laid, moving a ton of coal a mile is marginally cheap.”
Abbott pointed to the January release of DeepSeek-R1, an open-source model family with greatly reduced usage fees compared to ChatGPT, as a harbinger of cost reductions to come.
“It’s way too early to tell if large banks will use DeepSeek, but it is open and freely available so they can analyze it and understand everything in there,” Abbott said.