Accenture Banking Blog

In my last blog, I discussed how leading financial services (FS) firms are scaling AI for business value and how a widening gap is emerging between the leaders and laggards. I also introduced agentic architecture, sharing real-world examples from software engineering, KYC and claims.

In this blog, I first explore the value opportunity at stake by taking a human-centered approach to agentic AI. Leading financial services firms design AI around clear business intents and customer outcomes, then decide what humans and agents must do together to realize them.

How a human-centered approach to AI can generate value for your business

Our Work, Workers, Workforce research identified a $10.3 trillion gap in GDP growth over the next 15 years between treating AI as a pure technology change and taking a human-led approach that realizes its full potential with workers.

Let that number sink in. That’s about 10% of global GDP at stake based on how we approach AI.

At the heart of this opportunity is financial services, with banking (first), insurance (second), and capital markets (third) among the industries with the most work hours (up to 90%) that could potentially be automated or augmented through AI.

Why? Earlier waves of automation already addressed structured data tasks. But with more than 80% of the world’s data unstructured, much of the remaining work in FS is language-rich and inherently unstructured. We operate in an environment defined by information, service and knowledge work.

It’s no surprise that generative AI — built to work with unstructured data and mimic human language — has seen rapid adoption. And as we enter the era of agentic AI, we gain something more: a system that acts not only as a tool, but as a true coworker to do work with us.

Taking a human-centered approach to AI

The same research shows that the workforce has mixed feelings about AI and my experience confirms it. While around 95% of people say they want to work with generative AI and reskill — unsurprising given how widely these tools are used in everyday life — concerns about security, work intensity and ethics remain. Employees are moving through this change at different speeds.

That tension is clear in the data. According to Accenture’s latest workforce survey, only 20% of employees feel like active cocreators in shaping how AI changes their work, and just 17% say they enjoy using AI tools or actively seek new ways to apply them. This gap underscores the need to involve employees early. Designing AI adoption with people, not for them, is essential to building trust, ownership and adoption at scale.

In response, we help clients design AI tools alongside their colleagues and guide adoption in ways that feel inclusive and empowering. We work with leadership teams to engage employees in meaningful conversations about AI and equip people to help lead the change.

We’ve invested heavily in our LearnVantage capability to upskill leaders and entire workforces. For example, every employee at S&P Global can now complete foundational GenAI training. At a large bank, we helped reskill more than a thousand data, cloud and full stack engineers to advanced and expert levels of role readiness through Udacity.

Our work in responsible AI, work design, and workforce planning also helps organizations transition from old to new skills, mitigating redundancies, improving work quality and ensuring people can thrive as their roles evolve.

This commitment extends beyond individual clients. At Davos 2026, Accenture and 24 other organizations announced the “Creating Opportunities for All in the Intelligent Age” skills pledge, collectively committing to provide technology training to 120 million people by 2030. Accenture’s own commitment is to equip more than 10 million people worldwide with job-relevant AI and digital skills by 2030. To support this ambition, the firm showcased LearnVantage, its AI-native learning platform, and launched an affordable AI Master’s degree open to all.

Supporting a just transition for society

A human-centered approach considers not only workers, but also our communities and society. Against a backdrop of improving social inclusion in financial services, we must mitigate unintended consequences and amplify opportunities for mobility.

Our recent study with Progress Together, Rise with AI, shows the scale of the challenge. In the UK, people from lower socioeconomic backgrounds (SEB) remain underrepresented in financial services by about 30%. The research also found a 10–15% gap in access to AI, AI-related skills, confidence to reskill, trust in employers and other factors that would help people from lower SEBs navigate the AI transition.

AI’s impact on the workforce demands an intersectional view. Many of the roles most affected by AI in financial services are held by women. We are working with partners such as Tech She Can to strengthen AI skills and inclusion.

AI can also be liberating, creating new opportunities and improving access for people with disabilities, including those with visual impairments or who are neurodiverse.

Ultimately, we must ensure that AI drives a just transition — one that is inclusive and equitable — with financial services playing a central role in shaping positive outcomes for people, communities and society.

Reinventing work using AI

The real unlock lies in reinventing work around business intents and customer outcomes. It starts with value streams and end-to-end processes, not individual tasks or roles. Single use cases are too narrow, typically delivering only fractional savings. Today’s jobs are the wrong ‘units of analysis’, as they contain a mix of work needing different treatments.

True reinvention means rethinking how work across the value stream can be done fundamentally differently to deliver better customer and business outcomes. This includes removing low-value work and toil upfront. Instead, we should redirect effort toward the high-value work that drives client outcomes and growth.

Making this shift requires three ingredients:

    1. Leaders with a reinvention mindset.
    2. Colleagues empowered to reinvent the work they know best.
    3. A strategic approach to enabling these reinvention decisions.

Real-world example — Underwriting

We partnered with a global insurer to reinvent its underwriting function using AI. Working directly with underwriters, we first simplified the underwriting standards in one area — reducing roughly 130 varied assessment criteria to 70 consistent factors.

With the process streamlined, we applied AI to handle the heavy lifting of reviewing complex broker submissions, often 200–300 pages long. The AI extracted and summarized unstructured information into a structured decision framework that underwriters could use immediately. It performed this work more accurately than an underwriting assistant and provided source document citations so underwriters could quickly validate the content.
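The pattern described above (unstructured text in, structured factors with source citations out) can be sketched as follows. This is an illustrative sketch, not the insurer's actual system: the factor names, the `extract_factors` stub and the citation fields are all hypothetical, and a real implementation would replace the keyword matching with an LLM call that is prompted to return citations.

```python
from dataclasses import dataclass, field

@dataclass
class Factor:
    """One assessment factor, traceable back to the submission."""
    name: str
    value: str
    source_page: int   # citation: where in the submission this came from

@dataclass
class DecisionFramework:
    """Structured summary an underwriter can review directly."""
    factors: list = field(default_factory=list)

    def add(self, factor: Factor) -> None:
        self.factors.append(factor)

    def citations(self) -> list:
        return [(f.name, f.source_page) for f in self.factors]

def extract_factors(pages: dict) -> DecisionFramework:
    """Stub extractor: scan each page for known factor keywords.
    In production this step would be an LLM call, not string matching."""
    keywords = {"turnover": "Turnover", "claims history": "Claims history"}
    framework = DecisionFramework()
    for page_no, text in pages.items():
        lowered = text.lower()
        for key, label in keywords.items():
            if key in lowered:
                framework.add(Factor(label, text.strip(), page_no))
    return framework

# Hypothetical 200+ page broker submission, keyed by page number.
submission = {
    12: "Turnover: GBP 40m, stable over three years.",
    118: "Claims history: two property claims since 2019.",
}
framework = extract_factors(submission)
print(framework.citations())  # each factor carries its source page
```

The point of the citation field is the validation step the underwriters valued: every extracted factor points back to the page it came from, so checking the AI's work takes seconds rather than a re-read.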

Previously, this process took days, and the insurer only had the capacity to review about 20% of submissions, meaning viable business was being turned away. With AI, the review time dropped to hours, enabling the team to assess all submissions and do so with greater accuracy. This unlocked more than a 50% increase in revenue without expanding the team. Underwriters gained time to make better decisions and build stronger relationships with brokers.

This transformation worked because leaders committed to true reinvention and a human-centered approach. We redesigned the end-to-end underwriting process, clarified where value was created and codesigned the new workflows and AI capabilities with the underwriters themselves. By shifting the “drudge” work to AI agents, underwriters were freed to focus on the high value judgment, interaction and decision making tasks where human expertise makes the biggest difference.

Real-world example — Credit sales and lending in commercial banking

Our agentic architecture for credit sales and lending in commercial banking supports relationship managers (RMs) in handling lending applications. This process typically involves lots of unstructured data, documents and administrative tasks.

Built on the Accenture AI Refinery, the architecture brings together three coordinated layers of AI agents:

    • Orchestration agent: Manages the end-to-end process, directing work across the system.
    • Super agents: Conduct business evaluation, financial analysis and risk assessment.
    • Utility agents: Extract, analyze, summarize and generate recommendations based on complex data.
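The three layers above can be sketched as plain functions to show how responsibility divides: utility agents do narrow tasks, a super agent composes them into an assessment, and the orchestration agent directs the end-to-end flow. This is a minimal illustration, not the AI Refinery API; the agent names, the leverage rule and the recommendation logic are all hypothetical stand-ins.

```python
def extract_financials(application: dict) -> dict:
    """Utility agent: pull structured figures out of the application."""
    return {"revenue": application["revenue"], "debt": application["debt"]}

def summarize_risk(figures: dict) -> str:
    """Utility agent: turn figures into a one-line risk summary."""
    leverage = figures["debt"] / figures["revenue"]
    return "high leverage" if leverage > 0.5 else "moderate leverage"

def financial_analysis_super_agent(application: dict) -> dict:
    """Super agent: compose utility agents into a financial assessment."""
    figures = extract_financials(application)
    return {"figures": figures, "risk_note": summarize_risk(figures)}

def orchestration_agent(application: dict) -> dict:
    """Orchestration agent: run the stages and assemble the RM-facing result."""
    assessment = financial_analysis_super_agent(application)
    recommendation = ("refer to RM" if "high" in assessment["risk_note"]
                      else "proceed to pricing")
    return {**assessment, "recommendation": recommendation}

result = orchestration_agent({"revenue": 1_000_000, "debt": 650_000})
print(result["recommendation"])  # the RM sees the call, plus the workings
```

Note the division of labor: only the orchestration layer decides what happens next, which keeps the narrow agents reusable across other workflows.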

Together, these agents streamline the lending workflow, enabling RMs to:

    • Serve more clients efficiently.
    • Deliver a significantly improved client experience.
    • Make higher quality, data driven credit decisions.
    • Accelerate funding for approved loans.

This architecture shifts time away from manual review and administrative drag, freeing RMs to focus on higher value activities like advising clients, shaping better lending decisions and strengthening relationships — all while increasing throughput and reducing risk.

Where to start and how to create the right insights for reinvention

Reinvention starts by focusing on the most valuable, scalable value streams in your organization, not on isolated functions or technologies. In banking, these usually include fraud prevention, client onboarding and KYC, lending, relationship management and investment advice. In insurance, they include underwriting, claims and servicing. In markets, they include trading and post trade processing. Reinventing these processes end-to-end unlocks significant value and creates the foundation for enterprise-wide transformation.

A key lesson from the era of robotic process automation is clear: we should not “patch” broken processes with AI. Leaders must make it easier to do high value work by removing complexity, waste and friction — adding time for what truly creates value (Sutton and Rao 2024).

Most organizations already have pockets of process maturity and better insights into efficiency and value, especially where they have built Global Capability Centers. But many other value streams remain unclear or fragmented.

Where processes and value are not well understood, we use our proprietary Process Value Explorer (PVE) to uncover the work and its value, often alongside process mining tools such as Celonis. PVE allows us to analyze effort, cost, value, issues and other dimensions across thousands of workers simultaneously. Creating this visibility of work and value gives us the insights for reinvention.

Can I explore what the future workforce could look like?

For organizations seeking a broader view, we use proprietary analytics and planning tools to model the future workforce at scale. These tools enable rapid assessment of the enterprise and help prioritize AI and reskilling investments.

At a large retirement and investments provider, we are using this analysis to develop a top-down workforce strategy for the board. The approach models capacity released or redeployed through agentic AI and other automation, providing a clear picture of future workforce needs and skills. This, in turn, informs better workforce decisions today and guides where to invest in AI.

We break down current work and workforce data quickly and in depth, identifying where AI can have different types of impact, considering in-flight AI investments and forming a clear view of the future workforce: the new roles needed, the skills required, the capacity implications and the cost profile. This helps our clients make better, more informed workforce decisions, informs their change narrative and guides their AI investment strategy.

We are doing this as an initial strategy exercise for the CHRO, CEO or board and embedding it as an enduring capability within our client’s strategic workforce planning, ensuring they can continuously anticipate, design and adapt their future workforce as AI adoption scales.

What people want from their AI tools

The 2025 Accenture Life Trends research found 44% of people felt AI tools increased efficiency and 38% felt they increased quality. However, there were some negative perceptions too — 16% felt AI tools made work feel more transactional and 14% felt they limited their creativity.

People want AI tools that absorb tedious, repetitive aspects of their role, so that they can better do the work they enjoy most. Drudge work dominates the working week of many people, even of highly paid and skilled workers.

People want to protect the human characteristics and interesting aspects of their work, and they want to maintain some control and freedom over how they work. Critically, they want to preserve the opportunity to feel meaning, purpose and satisfaction in their work.

Designing effective human-agent interaction into work

As we design human-AI interaction, especially interaction with agents, there are some important things to get right.

It begins with clearly defining the goal and value of the work and setting explicit expectations for what humans and AI agents are responsible for. Leaders must be deliberate about where AI’s strengths are best applied, where human capabilities are essential and where a combination delivers the most value.

This is why we take a human-in-the-lead approach, where people guide the judgement, decisions and oversight, and AI agents provide the support that strengthens the work. This requires complementary roles and responsibilities, with humans retaining clear accountability for workflows and decisions.

Both AI agents and human workers must be trained to perform their roles well. AI should produce accurate, consistent results; minimize bias; adapt to varied contexts; maintain security and privacy; generate high quality outputs; and be explainable. Human workers must be able to use and assess AI outputs — iterating, improving and knowing when to challenge. In some cases, this means giving people the time and space to apply the capabilities we rely on humans for most: strategic thinking, judgment, empathy, relationship building, and creativity. We want human agent interactions that promote continued learning and improvement for both. We return to this idea of co-learning later.

Good interaction also requires simple, intuitive interfaces. This includes embedding agents directly into the flow of work — such as in a relationship manager workbench or case handler queue — and using conversational interfaces to increase usability. Humans should stay in control: able to turn off, override, or edit AI output. It should always be clear when someone is interacting with AI, what the AI has done, and how it produced its results.
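One way to make these controls concrete is to wrap every agent suggestion in a record the human stays in control of: the AI's provenance and rationale are always visible, and the human explicitly accepts, edits or discards the draft. A minimal sketch with hypothetical agent and field names:

```python
from dataclasses import dataclass

@dataclass
class AgentOutput:
    """An AI suggestion under human control: visible provenance,
    editable content, and an explicit accept/edit/discard decision."""
    content: str
    produced_by: str          # which agent, so it is clear this is AI output
    rationale: str            # how the agent produced its result
    status: str = "draft"     # draft -> accepted / edited / discarded

    def accept(self) -> None:
        self.status = "accepted"

    def edit(self, new_content: str) -> None:
        self.content = new_content   # the human's edit replaces the AI draft
        self.status = "edited"

    def discard(self) -> None:
        self.status = "discarded"    # equivalent to turning the agent off

suggestion = AgentOutput(
    content="Recommend standard KYC refresh.",
    produced_by="kyc-utility-agent",
    rationale="No change in beneficial ownership since last review.",
)
suggestion.edit("Recommend enhanced KYC refresh; ownership data is stale.")
print(suggestion.status)  # the human's decision, not the agent's
```

Nothing leaves the workflow while `status` is still `"draft"`, which is the design point: the agent proposes, the human disposes, and the record of who did what survives for audit.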

Many financial services workers want more time for deep work — advising clients, solving complex problems, or developing creative propositions. Well designed AI tools can reduce cognitive load and create the conditions for deep work (Newport, 2016): focus, flow and creative problem solving. When agents filter noise, surface the right insights and automate routine tasks, they free workers to concentrate on the thinking that drives client outcomes. This is cognitive ergonomics in action — shaping technology around the rhythm of human attention and motivation (Sudiarta, 2023), rather than around the machine.

Put simply, if agents are now our coworkers, let’s make sure we have good teammates.

Real-world example — Customer support and contact center

We worked with one of the largest insurance and retirement providers in the US to reinvent their contact centers with agentic AI. The solution used four Super Agents and 12 reusable Utility Agents built on Accenture’s AI Refinery. It formed a fully connected system: 16 APIs integrated into claims, policy and underwriting systems, supported by two years of customer interaction memory.

The client invested heavily in testing to build confidence — more than two million training and test calls, reviewed by 30 experts over three months. One result was a personal digital assistant that offered relatable guidance (not advice) to customers, reducing basic call volumes and increasing digital leads.

For contact center representatives, the agents detect caller intent and sentiment, access customer data, surface contextual guidance, and recommend the next best action. This improved NPS, strengthened call resolution and cut training needs by 50%. Human reps can now focus on empathy, judgment and higher value client service.

We are seeing similar results across US insurers, including a group life carrier and a life and pensions provider.

Real-world examples — Marketing

What about highly skilled professional work? At Accenture, we’ve already applied 14 specialized AI agents across the campaign lifecycle to support our 2,000 marketers. Campaigns once took up to 150 days. Using SynOps to analyze workflows, we identified where time was wasted and where quality could improve. The results were decisive: effort dropped sharply — 67% for creative briefs, 90% for first draft copy — and speed to market improved by 25–35%. The work also delivered an $80 million cost release. These agents amplify marketers’ creativity and impact; they do not replace it.

We are delivering similar value across financial services. In a large Asian bank, a US-based life insurer and several global banks, agentic AI is reshaping marketing work. One global bank now supports 50% of its campaigns with AI, boosting creative velocity by 50% and increasing total campaigns by 20%, with a target of 35% growth. Another large Asian bank achieved 50 times more micro-segmented campaigns, increased message speed by 80% and cut creation time from 30 days to 3. These approaches elevate the marketer and create campaigns with more relevance, faster market penetration and more effective customer engagement.

Reinventing work using agentic architectures

Agentic AI opens powerful new possibilities for reinventing work in financial services. To use it well, we must return to the fundamentals of work design and make disciplined decisions about how humans and AI should interact. Key questions include:

    1. When should humans trigger the agent?
      Some processes start with a human action (as in the software development example from my first blog). Others trigger automatically (as in the underwriting example, where a broker email hits an inbox).
    2. How much work should agents do on their own?
      In the software development example, agents collaborate visibly, with the developer overseeing their output. We must define the right level of autonomy.
    3. Who decides when the output is “good enough”?
      Agents can iterate until they reach an acceptable threshold, but humans often treat the result as a first draft, not a final product.
    4. How is the action or decision delivered?
      Financial services requires high accuracy and compliance — for instance in the claims or KYC examples. For now, humans must stay in the loop for many decisions.
    5. How independently should agents learn?
      We must set boundaries for autonomous learning and oversee agent learning and change. This also means helping employees and agents learn together.
    6. How will we prove and monitor agent performance?
      We need clear methods to test, review and improve both human and agent performance over time.
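Question 3 above can be expressed as a simple control loop: the agent iterates until it clears a quality threshold or runs out of attempts, and the result is always handed over as a first draft for human review. This is an illustrative sketch only; the `refine` and `quality_score` functions are hypothetical stand-ins for whatever revision step and evaluation rubric a real system would use.

```python
def refine(draft: str) -> str:
    """Stand-in for an agent revision step: here, just mark a revision."""
    return draft + " [revised]"

def quality_score(draft: str) -> float:
    """Stand-in evaluator: in practice a rubric, checks or a judge model."""
    return min(1.0, 0.4 + 0.2 * draft.count("[revised]"))

def agent_iterate(draft: str, threshold: float = 0.8, max_rounds: int = 5) -> dict:
    """Iterate until 'good enough', but always deliver as a human-reviewable draft."""
    rounds = 0
    while quality_score(draft) < threshold and rounds < max_rounds:
        draft = refine(draft)
        rounds += 1
    return {
        "draft": draft,
        "rounds": rounds,
        "needs_human_review": True,  # a first draft, never a final product
    }

result = agent_iterate("Initial claims summary.")
print(result["rounds"], result["needs_human_review"])
```

The two knobs, `threshold` and `max_rounds`, are exactly the autonomy decisions questions 2 and 3 ask leaders to make explicitly rather than leave to the tooling's defaults.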

Knowing when not to use AI

AI — especially agentic AI — is powerful, but not every problem needs it. Many reinvention efforts require a mix of process changes, simpler technologies and lighter-weight forms of AI. Often, straightforward work or calculations on structured data are better served by basic algorithms or traditional tech. Simple point tasks rarely justify agentic AI.

AI also carries real cost. Agentic architectures are token intensive, making them expensive and energy heavy. Costs for large language models are falling by roughly 50% annually, and reuse and improved models are making agentic AI more affordable. Even so, we should use AI only when the value case is clear.
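The cost trajectory above is easy to model: if unit cost falls by roughly 50% a year, cost after n years is the starting cost times 0.5 to the power n. A quick sketch, using a hypothetical starting figure of $2.00 per case rather than any real benchmark:

```python
def projected_cost(cost_today: float, years: int, annual_decline: float = 0.5) -> float:
    """Unit cost after `years`, if it falls by `annual_decline` each year."""
    return cost_today * (1 - annual_decline) ** years

# Hypothetical: an agentic workflow costing $2.00 per case today.
for year in range(4):
    print(year, round(projected_cost(2.00, year), 3))
# At a 50% annual decline, the per-case cost after 3 years is $0.25.
```

The projection cuts both ways: a value case that is marginal today may clear the bar in a year or two, but a workflow with no clear value case stays unjustified at any price.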

The principle is simple: apply AI where it adds meaningful value and avoid it where it doesn’t.

Takeaway points

Key ideas for reflection — I’d welcome your thoughts:

    • Human-centered: Do you see AI as tech only — or as a business and human change as well?
    • Business intents: What is the customer outcome and business value?
    • Reinvention: Are you redesigning value streams, processes and work for value?
    • Work design: Are you designing new human and agent work intentionally?

Looking ahead

In my next blog, I will look at how organizations can lead this rapidly evolving change.

To see how a human‑centered approach to AI scales across banking with practical actions, read our Top Banking Trends for 2026 report.