Accenture Banking Blog

In previous blogs in this series, I explored how leading financial services (FS) organizations are scaling AI for value and using a human-led approach, including agentic architectures that reimagine work.

In this blog, I focus on how to lead that change well: pursuing business outcomes, setting the right investment and risk appetite, managing rapid and distributed change and keeping the shift human-led.

A growing divide between leaders and laggards 

Our research shows that 86% of executives plan to increase their generative AI (GenAI) investments in 2025; 80% expect AI’s value to exceed expectations. Yet only 34% of organizations have scaled AI for a core process. Those that have are three times more likely to exceed their expected ROI.

These leaders share clear traits: strong CEO sponsorship, value-first strategies, and solid foundations — secure digital cores, quality data, responsible AI frameworks, skilled teams, and enterprise education. This pattern is especially strong in FS.

With agentic AI, the gap is widening faster. Leaders are 4.5x more likely to invest strategically in agentic architectures and 6x more likely to increase GenAI investment significantly in 2025. They are pulling ahead and accelerating.

Barriers to scaling

Organizations stuck in proof-of-concept stages or hesitating are falling behind. What are some of the barriers to scaling AI in financial services?

• Leadership uncertainty, constrained investment and undefined risk appetite
• Overly rigid, one-size-fits-all risk governance
• Limited business engagement, treating AI as only a tech change
• Legacy platforms and fragmented data
• Business-case models that focus narrowly on cost, not enterprise change or reuse

But the biggest barrier is underinvestment in talent, change and adoption. To be successful with AI, we must take the workforce with us on the journey.

At Davos 2026, Accenture emphasized this gap directly: while companies today invest $3 in technology for every $1 in people, those that balance both are 4x more likely to achieve long-term profitable growth. The Pulse of Change survey also revealed that although executives see AI as a growth engine, only 43% say they are prioritizing workforce reskilling for AI roles. This underinvestment in human capability, particularly in change management and upskilling, is the critical brake on AI scaling.

Value pools

The next question is where to invest. Prioritization must reflect feasibility, risk appetite, speed to value and overall business outcomes. While each firm’s priorities are unique, clear patterns are emerging. Using proprietary task-level analytics across 220 banks, we identified a 29% uplift in profit before tax (PBT), a $255B opportunity over three years. The richest value pools lie in customer servicing, sales, IT engineering, software delivery, product development, pricing, and risk.

Investment appetite

If we know where to start, the next question is whether we can afford it. This is a fascinating question because AI has evolved so quickly and has traditionally represented quite a small share of investment for most banks and insurers. Leading CEOs are working with CFOs, CDOs/CIOs and boards to set a new investment appetite for GenAI and agentic AI.

We have seen significant redirection of existing change and technology funding (30% in some cases), as well as additional one-off funding and the acceleration of existing change investments using AI. Some AI leaders are already starting to self-fund their investments through early returns.

The investment portfolio

A balanced portfolio of AI investment is useful (Hosanger, 2025). It spreads investment across:

    • Business areas and functions
    • Cost reduction, growth and reinvention
    • Quick wins that build consensus, momentum and learning
    • Longer-term change that unlocks deeper, more meaningful gains

What change should these investments be directed to? Clearly, there needs to be investment in AI itself, from the procurement and training of agents from ecosystem partners through to the development and testing of in-house agents. What tends to be missed are the ongoing investments in ‘good foundations’ such as:

    • Unified AI platforms to reduce sprawl
    • Monitoring capabilities
    • Data fabric and readiness
    • Responsible AI tooling
    • Process and value stream reinvention

The biggest multiplier of ROI is investment in people, leadership, adoption, skills and new ways of working. Yet today, organizations are spending $3 on technology for every $1 on people, leaving significant value untapped.

Real world example: Rapidly getting to the right investments

For a large Asian bank, we partnered with the CEO to establish responsible AI practices, build capability and stand up the AI CoE and delivery platform, alongside rapidly assessing hundreds of ideas for desirability, feasibility and viability. The result: 35 GenAI changes delivered in 18 months, unlocking $200 million of annual productivity benefits, halving customer query handling times and cutting credit assessment time by 80%. A structured way to assess and manage the portfolio is essential.

Reuse and repeatable investment patterns

Reuse is essential. Modular AI components reduce deployment costs and increase speed. Utility agents that extract, summarize, research, review and test can be used across multiple workflows. For example, a document extraction agent can support KYC, applications, underwriting and servicing.

Each reuse still requires contextual testing, adoption design and monitoring. Like employees, agents need both corporate “induction” and role specific training.
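The reuse pattern described above can be sketched in code. This is a minimal illustration under stated assumptions, not a real framework: the agent class, workflow names, fields and validation rules are all hypothetical, and a production agent would call a model API rather than a dictionary lookup.

```python
# Minimal sketch of a reusable "utility agent" configured per workflow.
# All names here are hypothetical illustrations.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DocumentExtractionAgent:
    """One shared extraction capability, reused across workflows."""
    workflow_config: dict = field(default_factory=dict)

    def register_workflow(self, name: str, fields: list,
                          validator: Callable) -> None:
        # Each reuse gets its own contextual config and checks — the
        # "role-specific training" on top of the shared capability.
        self.workflow_config[name] = {"fields": fields, "validator": validator}

    def extract(self, workflow: str, document: dict) -> dict:
        cfg = self.workflow_config[workflow]
        result = {f: document.get(f) for f in cfg["fields"]}
        # Contextual validation runs on every reuse, not just the first.
        result["valid"] = cfg["validator"](result)
        return result

agent = DocumentExtractionAgent()
# The same agent serves KYC and underwriting with different configs.
agent.register_workflow("kyc", ["name", "date_of_birth"],
                        lambda r: all(r.values()))
agent.register_workflow("underwriting", ["income", "employer"],
                        lambda r: r.get("income") is not None)

kyc_result = agent.extract(
    "kyc", {"name": "A. Client", "date_of_birth": "1990-01-01"})
```

The point of the sketch is the split: one capability built once, with per-workflow configuration, testing and monitoring added at each reuse.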

Risk appetite and responsible AI

So far, so good? Value and investment are rising, but scaling AI requires clear decisions about where and how to use it responsibly.

A core leadership and board responsibility is shaping the “where to play” risk appetite for AI: deciding which decisions, processes, and customer interactions are appropriate for AI — and which are not. These choices should be explicit, reviewed periodically, and aligned to business strategy, regulatory expectations, and cultural values.

Risk appetite must be continuous, not episodic. Leading FS organizations deploy real-time monitoring and AI control rooms to track model drift, data flows, agent chains, adoption quality, and responsible AI outcomes.
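As one illustration of what “continuous, not episodic” monitoring can mean in practice, the sketch below flags drift when a model’s recent score distribution diverges from a reference baseline, using a population stability index (PSI). The bucket scheme and thresholds are common rules of thumb, assumed here for illustration; a real control room would track many more signals than one index.

```python
# Illustrative drift check: compare recent model scores (in [0, 1])
# to a baseline using a population stability index (PSI).
import math

def psi(baseline, recent, buckets=10):
    """Population stability index between two score samples."""
    def bucket_shares(scores):
        counts = [0] * buckets
        for s in scores:
            counts[min(int(s * buckets), buckets - 1)] += 1
        # Small floor avoids log(0) for empty buckets.
        return [max(c / len(scores), 1e-6) for c in counts]

    b, r = bucket_shares(baseline), bucket_shares(recent)
    return sum((ri - bi) * math.log(ri / bi) for bi, ri in zip(b, r))

def drift_alert(baseline, recent, warn=0.1, act=0.25):
    """Assumed thresholds: PSI > 0.1 warns, > 0.25 demands action."""
    value = psi(baseline, recent)
    if value > act:
        return "escalate"
    return "monitor" if value > warn else "ok"

# Identical distributions produce no drift signal.
stable = [i / 100 for i in range(100)]
status = drift_alert(stable, stable)
```

Running checks like this on a schedule, with escalation wired to accountable owners, is what turns risk appetite from a policy document into a live control.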

Boards, regulators, and leadership teams need clear transparency: where AI is deployed, what it does, and whether it is performing as intended. Strong accountability frameworks must reinforce this transparency, ensuring ownership, escalation, and timely intervention when risks emerge.

Making AI a responsible change at scale

Let’s break down the key aspects of responsible AI further and look at the change they require.

Bias and harm prevention are critical for both colleagues and customers. Our aim is simple: avoid harm and do good with AI. We reduce bias through good design, high quality training data, equitable treatment across groups, and rigorous testing and monitoring. Even well tested models can drift, so they need ongoing checks and escalation when issues arise, particularly around protected attributes. Humans carry biases too, so we must design interactions where people and agents can co-learn and counteract bias together.

In HR, responsible AI requires special care. The EU AI Act limits how employers can use AI in decisions that affect people’s careers and lives, such as hiring, promotions and pay. Even when AI provides predictions or recommendations about the workforce, its use must be ethical, clearly scoped, scientifically grounded and fair across all employee groups.

Transparency, explainability, and accuracy matter. Customers and colleagues want to know when and how AI is being used, and in many FS processes, they expect 100% accuracy. We must disclose AI use, especially as interfaces become more conversational and humanlike. AI outputs must be interpretable, traceable, and supported with citations (for example, an investment summary should point back to source documents). Reliability is essential. GenAI can hallucinate or produce flawed outputs, which undermines trust. While accuracy is improving, we must train people to spot errors and act as vigilant “humans in the lead,” especially for customer facing decisions supported by AI agents.

AI also raises privacy, confidentiality, and cybersecurity risks. AI systems must follow data protection rules, including “minimum necessary” data use. Colleagues should be able to explain what customer or employee data was used. At a basic level, they need clear guidance on safe prompting — for instance, never entering client, colleague or confidential data into public tools.

Trust enables faster scaling

For a more in-depth view, take a look at Rethinking Responsibility with Generative AI. We help clients set the right direction on risk appetite and responsible AI, moving from principles to practice. One example is our work with the Monetary Authority of Singapore (MAS), where we helped build industrywide frameworks, tools and methods as part of Project Veritas. Building on this collaboration, MAS and its industry partners, including Accenture, have translated that work into practical guidance with examples that aim to help financial institutions put responsible AI into action and accelerate value safely at scale.

Without responsible AI to build trust, adoption stalls and the value case collapses. In regulated industries like FS, a clear risk appetite and responsible AI practices enable organizations to scale with confidence, fast and safely. Like navigation systems and seatbelts in a car, they let us move quickly without losing control.

Leading a rapidly evolving change

It is vital to treat AI as a rapidly evolving change, not a linear change with a fixed goal.

AI is advancing two to three times faster than past technology waves, and those earlier waves are themselves not yet finished. Regulatory, customer, and societal expectations are shifting just as fast.

AI’s technical capabilities are expanding: higher accuracy with fewer errors, stronger reasoning and logic, and improved multimodal inputs and outputs. Tasks once beyond reach (reasoning, calculation, action) are becoming strengths. Yet AI does not affect all tasks equally. Dell’Acqua et al. (2023) show a jagged frontier where some tasks benefit greatly while others do not. That frontier is moving as capabilities grow.

Hundreds of new models are emerging, each with different strengths and costs. Choosing the right model matters. Agentic architecture often relies on smaller, specialist models. In Accenture Refinery and with our ecosystem partners, we use model “switchboards” to select the best option while embedding the right controls.
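A model “switchboard” of this kind can be sketched as a simple router: each task goes to the cheapest registered model that declares the needed capability, with a larger general model as fallback. The model names, capabilities and costs below are invented placeholders, not real products, and a real switchboard would also embed the controls mentioned above.

```python
# Hypothetical model "switchboard": route each task to the cheapest
# registered model that covers it. Names and costs are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Model:
    name: str
    capabilities: frozenset
    cost_per_call: float  # illustrative relative cost

class Switchboard:
    def __init__(self, models):
        # Cheapest-first ordering means specialists win when they can.
        self.models = sorted(models, key=lambda m: m.cost_per_call)

    def route(self, capability: str) -> Model:
        for model in self.models:
            if capability in model.capabilities:
                return model
        raise ValueError(f"no model offers {capability!r}")

board = Switchboard([
    Model("general-large", frozenset({"summarize", "reason", "extract"}), 10.0),
    Model("extract-small", frozenset({"extract"}), 1.0),
    Model("summarize-small", frozenset({"summarize"}), 2.0),
])

chosen = board.route("extract")  # the small specialist handles extraction
```

The design choice to illustrate: routing is a policy layer separate from the models themselves, so new specialist models can be added, swapped or retired without touching the workflows that call them.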

This pace should not trigger inaction. It demands an adaptive, continuous approach to change. AI is not a linear program with a start and end. It is ongoing. It needs persistent funding and enduring teams, not ‘stand up and stand down’ projects. It requires openness to unexpected developments and signals from competitors and partners, iterative learning, feedback from customers and colleagues, and a growth mindset — not a fixed mindset. Rapid change creates first-mover advantages and compounding effects in value realization and learning.

Leading a distributed change

AI must be treated as a distributed business change, not a centralized project.

AI is spreading rapidly across the enterprise and accelerating work everywhere. What once took years now takes months; what took months now takes weeks. This creates significant opportunity, but only if leaders and teams align around a shared vision and operate within clear guardrails.

Without alignment, distributed change leads quickly to duplication and disorder. Leaders across the business need support to identify high‑value opportunities and access delivery capability. Larger AI initiatives also require strong business product owners, not just the CDO or CIO, to ensure outcomes are grounded in real customer needs and commercial value.

As AI becomes pervasive, the HR function must evolve. HR teams in banks and insurers need to help the workforce get AI ready and transition to new work — but also get ready for the human x agentic workforce at scale. HR’s mandate now extends to orchestrating a combined workforce of people and intelligent agents. This includes building AI skills at speed, reshaping job architectures, expanding learning pathways with partners, and fostering a culture of curiosity and co‑learning. By embedding AI training, ethics, and change resilience into core talent practices, HR can help employees work confidently with AI rather than fear it.

Many financial services organizations already operate AI centers of excellence or federated AI networks. To be effective, these teams must be multidisciplinary, spanning data engineering, model development, prompt engineering, testing, work design, colleague engagement, adoption, and change management. We have built such teams for clients of all sizes. Their success depends on access to the right tools, models, infrastructure, data foundations, and strong responsible‑AI guardrails.

Leading a human change

Value, investment, responsible scaling, and distributed change all matter, but AI succeeds only when treated as a human-centered change. AI must work for customers and colleagues. A human-led approach is part of responsible business and delivers far greater returns. It builds trust, addresses concerns, supports human-agent interaction, drives adoption and enables new ways of working.

Trust

Trust sits at the heart of every transformation. Teams need psychological safety to experiment and adopt new ways of working (Edmondson, 2018). Fear, conflict, and low trust account for 85% of failed transformations (Accenture Transformation GPS, 2025).

Workers hold mixed views about AI. Many want to learn and use it, often trusting it already in their personal lives. At the same time, they worry about job security, work intensity, adoption, and ethics. Leaders must respond with a clear workforce strategy, honest communication, and integrity in how change is managed.

Job security

This is an area of polarized commentary. AI will automate some jobs, augment many more, and create new ones. The effects will be uneven and will unfold over time.

Only 29% of CXOs point to workforce resistance as a GenAI barrier, while 40.8% of employees fear job redundancy—an adoption risk leaders cannot afford to ignore (Accenture, Learning, Reinvented survey, 2025).

Leaders must reinforce that people who embrace AI will thrive. As Andrew Ng put it at Davos, “a person that uses AI will be so much more productive, they will replace someone that doesn’t.” The goal isn’t to replace people, but to help them outperform through augmentation. Clear reskilling paths, visible investment in employees, and practical opportunities to learn turn anxiety into adoption.

Workforce plans aligned to AI investments can reduce unnecessary redundancies by managing hiring, reskilling, and redeployment. People need time to build skills and adapt, and a plan helps direct those efforts. Leaders must communicate authentically and frame changes positively wherever possible.

Work intensity and autonomy

AI is reshaping human-machine boundaries and the psychology of work. Generative and agentic AI can threaten a worker’s sense of competence, autonomy, and connection. Sixty percent of workers fear AI will increase stress and burnout, yet only 37% of executives expect this. We must respond through leadership and thoughtful work design.

Participatory design

MIT’s Acemoglu and Johnson (2023) highlight the critical role of worker involvement in tech and AI development — especially in problem definition and co-design of work. This leads to better solutions, adoption and use, resulting in greater value realization.

Good work design keeps humans in control of pace and style. It enables performance and preserves space for creativity. For example, when AI reduces time spent compiling investment proposals, relationship managers can focus on deep client work — advice, relationship building, and decision support.

Testing AI with experts also improves explainability and trust. A DeepMind–Moorfields study showed that breaking down AI’s reasoning increased expert understanding and confidence.

Real world example: Building trust

At a large bank, we designed a series of AI-enabled commercial banking process solutions with the relationship managers and their teams. These were a tough crowd: expert, tenured professionals, typically quite skeptical of technology and naturally protective of their clients. We built trust by involving them in testing the AI during development, and their ongoing feedback during pilot and scale-up drove improvements that led to better work, greater trust and stronger adoption.

Adoption and transition

Adopting AI means starting new work — prompting, using agents, checking outputs — and stopping old work. Both can provoke discomfort. And this will happen repeatedly over the coming years.

AI may feel intuitive, but adoption is not automatic. Organizations need repeatable patterns for readiness and value realization. When leaders frame AI as a catalyst for creativity, workers are 20% more confident adapting their habits.

What helps:

    • Peer stories
    • Celebrating wins
    • Clear, authentic leadership messages
    • Practical steps and small experiments
    • Safe spaces to try and learn

Motivation varies. Early adopters want access and continuous learning. The majority need guidance and time. Late adopters need reassurance and trust. Evidence consistently shows a sharp divide between employees already using AI and those who are not.

Measurement matters. Track access, usage, popular prompts and agents, and depth of integration into workflows. Measure changes in time spent, quality, and outcomes. Analyze patterns across roles, teams, and locations. Keep measurement “cool, not creepy”—focus on group insights, not surveillance.
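One way to keep measurement “cool, not creepy” is to aggregate usage by team and suppress any group below a minimum size, so no individual’s behavior is exposed. The field names, event shape and threshold below are illustrative assumptions, not a prescribed schema.

```python
# Illustrative group-level adoption metrics with small-group suppression:
# report team aggregates, never individuals. Schema is hypothetical.
from collections import defaultdict

MIN_GROUP_SIZE = 5  # assumed suppression threshold

def team_adoption(usage_events):
    """usage_events: [{'team': str, 'user': str, 'sessions': int}, ...]"""
    users = defaultdict(set)
    sessions = defaultdict(int)
    for e in usage_events:
        users[e["team"]].add(e["user"])
        sessions[e["team"]] += e["sessions"]
    report = {}
    for team, members in users.items():
        if len(members) < MIN_GROUP_SIZE:
            report[team] = "suppressed"  # too small to report safely
        else:
            report[team] = {"active_users": len(members),
                            "avg_sessions": sessions[team] / len(members)}
    return report

events = (
    [{"team": "ops", "user": f"u{i}", "sessions": 4} for i in range(6)]
    + [{"team": "legal", "user": "u99", "sessions": 2}]
)
report = team_adoption(events)
```

The same aggregation-and-suppression idea extends to the other signals mentioned above, such as popular prompts or depth of workflow integration.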

Early deployments set the tone. Adoption fails when AI is framed primarily as cost cutting, when promises are broken, when training is overly technical, when tools sit outside workflows, or when AI is released before it is ready.

Real world example: Building AI adoption at a global bank

At a global bank, we increased adoption of ChatGPT Enterprise and Microsoft Copilot by more than 400%. Our approach encouraged exploration first rather than pressure. Three groups emerged:

    • Early adopters (10%)
    • Followers (80%)
    • Late adopters and skeptics (10%)

By empowering the early adopters and helping the 80% see practical value and take the first steps, we accelerated both adoption and results. Most of the late adopters eventually came on board once they saw the benefits for their colleagues.

Ethical use of AI

Many workers question whether AI will be used ethically: 53% worry about output quality and unclear accountability, yet only 21% of executives see this as a concern.

If organizations have clear risk appetite frameworks and responsible AI practices, they must make them visible. Leaders should show how these practices guide decisions, ensure transparency, manage risk, and clarify accountability. Clear channels for raising concerns are essential.

Real world example: Creating accountability for responsible AI

At one FS institution, we addressed ethical concerns through both action and communication. We established responsible AI guidelines, trained product owners with clear accountabilities, involved employees in co‑design, built explainability tools, created reporting and whistleblowing channels, and set up a second line of defense focused on AI monitoring. Leaders communicated these measures clearly and consistently.

Takeaway questions for leaders

Some key points you may want to reflect on — let me know your thoughts and ideas:

    1. Scaling: Are you scaling AI for value, or stuck in pilots?
    2. Investment: Are your investments aligned to value pools and risk appetite?
    3. Responsible: Have you defined and communicated responsible AI clearly?
    4. Continual change: Are you treating AI as ongoing evolution, not a one‑off program?
    5. Distributed: Are you ready to use and reuse agents across the enterprise?
    6. Human-led: Are you leading your people into the AI journey, empowering adoption and supporting human concerns and needs?

Looking ahead

In my next blog, I’ll explore how we can reimagine the workforce with AI as a co-worker.

To connect these ways of leading distributed, fast-moving change to the wider market signals, read our Top Banking Trends for 2026 report.