Accenture Banking Blog

In my previous blogs in this series, I’ve looked at how leaders are scaling AI, how we are reinventing work using agentic AI and how to lead this change. In this penultimate blog, I look at how agentic AI is reshaping the workforce and creating a critical role for the HR function. 

How agentic AI is reshaping financial services work

Across banking and insurance, AI is automating routine tasks and augmenting complex work, but the real shift is the redeployment of human judgement to higher-value decisions. As one insurance client put it, the goal is to “take the robot out of the human”. Agentic AI does exactly that: semi-independent, specialist agents work alongside employees like capable coworkers.

The impact is neither uniform nor inevitable. AI does not affect every task in a role and outcomes are not predetermined. Leaders make deliberate choices about what should remain human work and what should shift to AI. This is less about redesigning roles and more about designing work for outcomes and intent, not job titles.

Sector-by-sector impact

    • Insurance: Underwriting, claims, and servicing see the deepest change. Complex cases are augmented, while high-volume work becomes increasingly automated. Human effort shifts toward judgement, decision-making and relationships with clients and brokers.
    • Banking: Relationship managers and specialist advisors — particularly in wealth, private, corporate, and institutional banking — benefit from deep augmentation. In retail banking, automation concentrates on transactions and back office work, accelerating the move to digital channels and elevating judgement and exception handling.
    • Investment banking and capital markets: AI supports sharper judgement across dealmaking, execution, and risk management. Routine processing, compliance checks, and documentation are automated, freeing bankers to focus on creativity, negotiation and trust building.
    • Asset and wealth management: Research, portfolio management and client services are transforming. AI improves market insight and risk management, while automation absorbs reporting and regulatory administration, allowing more time for meaningful client conversations.

Agentic AI will reshape how value is created across the financial services workforce, not just the workforce itself. The right strategy is therefore highly organization-specific.

Considering individuals, teams and networks

Reinventing work means designing for individuals and for the teams and networks that create value.

Most knowledge work in financial services is team based. Yet few teams are adapting effectively. Our Talent Reinventors survey found that only 19% of employees say their team experiments with AI together, and just 17% feel psychologically safe to share new ideas.

Talent Reinventors address this gap by using AI-driven analytics to strengthen team dynamics. Their teams report lower stress, improved wellbeing and faster, higher-quality decisions, driven by psychological safety, experimentation and shared learning.

This matters in financial services, where multidisciplinary teams, cross-firm networks, and entire ecosystems such as exchanges, financial centers and industry partnerships define how work gets done.

Research supports this shift. A recent study from Ethan Mollick and his colleagues found that teams with AI outperform both individuals with AI and teams without it. AI helps teams bridge knowledge and language gaps, generate more diverse ideas, and foster positive emotions. Human creativity still drives novelty, but together, humans and AI outperform either alone.

Learning new skills and ways of working

As our work changes, skills must change with it. Our recent Learning Reinvented research shows that while 84% of executives expect AI agents to work alongside humans within three years, and 80% of workers see AI as an opportunity, only 26% report receiving training on how to collaborate with AI. Progress is real, but too slow.

Skills in decline

    • Manual data entry
    • Summarizing correspondence
    • Drafting routine reports and proposals
    • Basic calculations and modelling
    • Simple customer interactions
    • Basic compliance checks

Many of these trends pre‑date AI, but AI accelerates them.

Skills in demand

    • Building, integrating, testing, monitoring, and explaining AI
    • Working effectively with agents — from basic prompting to advanced collaboration
    • Deep human skills: empathy, communication, judgement, negotiation, leadership, and trust building

Even operational areas such as fraud, payments, claims, KYC and lending are shifting toward smaller, more expert teams focused on complex cases and situational judgement.

This is not a one-time transition. Continuous learning enables workforce mobility and long-term employability. For some, this means upskilling within a role. For others, it means reskilling into entirely new careers.

Leading banks and insurers are already moving toward skills-based workforce models, supported by workforce planning, learning platforms and internal talent marketplaces.

Building enterprise-wide skills

To help organizations respond, we launched LearnVantage, our flexible AI-enabled learning ecosystem for future skills. It includes our AI Academy for boards, executives and employees, ranging from educational sessions to deeper learning including nano-degrees and external certification with Stanford and others.

At S&P Global we trained all 40,000 employees, including 7,500 people leaders and 200 board members and senior leaders, on GenAI. Participation reached 100%, NPS was very positive and there was a 4X increase in adoption of AI tools. S&P Global is a clear role model for using learning to drive enterprise-wide change.

I am particularly impressed by some of the national efforts in the Middle East around reskilling. These have included the Emirates Institute of Finance, cloud skills training in the UAE and KSA, and AI upskilling for the Commercial Bank of Dubai.

In East Asia, we upskilled 36,000 front office employees and 300 leaders at a large insurer, enabling them to work effectively with digital assistants and deliver more personalized customer outcomes.

From learning to co-learning

Co-learning is when people teach technology and simultaneously learn from it; it is a focus of this research. Early examples include AI coaches and enterprise copilots. In an agentic architecture, these systems adapt to individuals, improve with feedback and support learning in the flow of work.

Consider a contact center. A human agent leads the call; an AI agent transcribes, suggests compliant responses and summarizes outcomes. When the human agent skips or rates suggestions, the AI learns. After the call, it provides reflective feedback. Capability improves for both human and machine.
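To make the feedback loop in this contact-center example concrete, here is a minimal Python sketch of how an AI agent's suggestions and the human agent's accept/skip/rating signals might be logged as training feedback. All names and fields are hypothetical illustrations, not any specific product's API.

```python
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    """One AI suggestion and the human agent's reaction to it."""
    text: str
    accepted: bool = False   # did the human use the suggestion?
    rating: int = 0          # optional 1-5 human rating (0 = unrated)

@dataclass
class CoLearningLoop:
    """Hypothetical co-learning loop: the AI suggests, the human
    accepts/skips/rates, and the logged feedback drives later tuning."""
    feedback_log: list = field(default_factory=list)

    def record(self, suggestion: Suggestion) -> None:
        self.feedback_log.append(suggestion)

    def acceptance_rate(self) -> float:
        """Share of suggestions the human actually used: the simplest
        signal for deciding where the AI needs retraining."""
        if not self.feedback_log:
            return 0.0
        return sum(s.accepted for s in self.feedback_log) / len(self.feedback_log)

loop = CoLearningLoop()
loop.record(Suggestion("Compliant refund wording", accepted=True, rating=5))
loop.record(Suggestion("Upsell script", accepted=False, rating=2))
print(loop.acceptance_rate())  # 0.5
```

In practice the log would feed both directions of the loop: low-rated suggestions flag where the AI should improve, while the post-call reflective feedback draws on the same record to coach the human.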

This addresses several challenges in financial services: a) how to raise the capability and performance of both human and agent in the real world, not just in testing; b) how to improve the usability, adoption and usage of AI, as great AI products that are not used do not generate value; c) how to genuinely put learning into the flow of work, where it can best support employee and customer outcomes; and d) how to release time for deeper learning. Organizations using this approach are developing skills four times faster and doubling confidence in collaborating with AI.

At a large European insurer, this model reduced employee handled calls by 5–10%, cut average handling time by 10%, and released around 20% capacity, while improving skill levels and time to competence.

Developing AI specialists

Organizations also need specialist talent to build, govern and explain AI. These roles increasingly combine technical depth with business domain expertise.

AI job postings have more than tripled since late 2023, intensifying competition. Banks and insurers must differentiate themselves. For one large bank, we repositioned its employer brand in India, enabling the rapid buildout of a 1,000 person data organization operating in a modern, global model.

Reskilling the specialist workforce

Through LearnVantage we are also addressing deeper specialist reskilling for data and AI professionals, particularly following our acquisition of Udacity and Ascendient.

Examples include:

    • At a tier one investment bank, we built a comprehensive learning curriculum and real world simulation to strengthen data management capability across 15,000 operations professionals.
    • At a large national bank, we launched a data immersion program for 5,000 leaders, followed by a full data education pathway for more than 23,000 professionals.
    • At a global tier one investment bank, we retrained an entire 2,500 person data workforce on data policy, standards, and management.
    • At another large national bank, we reskilled more than 1,300 data, cloud and AI engineers to advanced and expert levels, achieving a 98% increase in skills proficiency and measurable gains in delivery and business value.

These efforts go beyond foundational training, where many reskilling programs stop. We consistently build advanced and expert capability through structured digital pathways, real projects, mentoring, nano‑degrees, and collaborative learning.

AI and performance

As AI quality increases, there is a risk that workers become over-reliant on AI and effectively ‘fall asleep at the wheel’. If we have a human in the lead, we want them to be effective and active in that role. In a study of recruiters, Dell’Acqua found that recruiters who over-relied on AI made worse selection decisions and missed brilliant candidates. The increasing performance of the AI was substituting for, rather than elevating, their human performance.

We are creatures of habit and seek to minimize cognitive load (the ‘path of least resistance’). We need a reason to pay attention: to check AI results carefully, maintain a healthy skepticism and avoid over-trusting. We need to treat AI outputs as a draft and combine them with our own human ingenuity. Being a good human in the lead requires cognitive load.

If we do not understand the fundamentals of the work or how a decision should be made, we cannot properly review AI outputs, spot errors, or apply judgement when an override is needed. As AI speeds up work and improves its results, there should be more time for good decision-making. The risk is that we instead rush decisions and over-trust the machine. We have found ways to counter this: targeted training, structured decision prompts, breaking complex judgements into logical steps, requiring AI explanations and citations, anomaly detection, and ethical monitoring of both AI and human performance.
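One of these countermeasures, structured decision prompts, can be sketched very simply: before accepting an AI recommendation, the human in the lead confirms a short set of review checkpoints, and anything unconfirmed is escalated rather than waved through. The checkpoint wording and function names below are illustrative assumptions, not a standard.

```python
# Illustrative structured decision gate. The checkpoints force the
# reviewer to spend deliberate cognitive effort before accepting.
CHECKPOINTS = [
    "Read the AI's explanation and citations",
    "Output matches my own estimate or domain knowledge",
    "No anomalies or missing data that need escalation",
]

def gate_decision(confirmed: dict) -> str:
    """Accept only if every checkpoint is confirmed; otherwise escalate.

    `confirmed` maps checkpoint text to True/False; missing entries
    count as unconfirmed, which biases the gate toward escalation.
    """
    if all(confirmed.get(q, False) for q in CHECKPOINTS):
        return "accepted"
    return "escalated"

print(gate_decision({q: True for q in CHECKPOINTS}))  # accepted
print(gate_decision({CHECKPOINTS[0]: True}))          # escalated
```

The design choice worth noting is the default: an unanswered checkpoint escalates rather than accepts, which counters the ‘path of least resistance’ described above.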

Once AI is scaled in the workplace, we need to embed AI into workbenches and tools in a way that promotes good use. We need to train employees how to use AI well and on the human skills we want them to apply (e.g. judgement). We need to provide incentives and disincentives to promote the right behaviors and create healthy patterns of self-reflection (e.g. team retrospectives, individual reflection, peer review etc.).

Maximizing AI return on investment is not about maximizing AI performance in isolation but about improving human and agent performance together. This means raising human skills as well. For instance, my personal experience with my creative teams is that AI in the hands of skilled workers augments and enriches their work, producing greater creative diversity and faster results rather than blunting their craft.

Amplifying human intelligence

How AI is trained to interact with human workers really matters to learning as well. A recent study from Wharton deployed GPT-based tutors to a thousand students. Access to this AI significantly improved student performance, but when the AI was removed, performance dropped: the AI had become a crutch. Where the AI had been designed to behave like a tutor rather than simply provide answers, this performance drop was largely mitigated. We need to design and train agents to support employee skills development, like a good teammate.

This is particularly important in entry level and early years roles. Base skills, situational judgement and foundational domain knowledge have traditionally been developed through on-the-job learning. As these entry level roles are reduced and we have less basic work to ‘learn the ropes’ on, we need to find different ways to help the next generation of talent learn.

I love my colleague Karalee Close’s view on this — we need to use this moment to amplify human intelligence, not just augment or artificially replicate it. One area of continued interest for me is collective intelligence and how AI can power up the development of shared knowledge, create active dialogue, can lower language barriers and help us connect and collaborate with colleagues and content around the world.

Managing the hybrid workforce

Across my previous blogs in this series, we’ve seen agentic AI working alongside humans. For many people (me included), AI is already a co‑worker. Fast forward three years and a bank or insurer may operate with a workforce of a million, human and agent combined.

How do we manage this new hybrid workforce? Some thoughts regarding the agents:

    • Many organizations and individuals are giving their AI and AI agents names.
    • We already talk about ‘onboarding’ a new agent or set of agents.
    • Agents are not paid, but their training, operation and governance carry real cost.
    • Some organizations already include agent capacity in workforce plans. For now, agents do not hold formal positions or lines of delegation in financial services, but that may change.
    • We are seeing early use of ‘agent descriptions,’ similar to job descriptions.
    • Accountability for agent performance needs to be clear. We are just starting to see management of the agentic workforce (e.g. ‘Agent Ops’) coming together with wider workforce management. For example, a large financial services firm has made its agents ‘digital employees’ with logins, email access, systems access and human managers.
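The ‘agent description’ idea above can be made concrete with a small sketch: a structured record that mirrors a job description, plus a simple escalation rule marking the boundary between agent and human work. Every field name and threshold here is a hypothetical illustration, not an industry standard.

```python
# Hypothetical 'agent description', analogous to a job description.
# Field names are illustrative only.
agent_description = {
    "name": "claims-triage-agent-01",
    "purpose": "Triage incoming motor claims and route complex cases to humans",
    "capabilities": ["document extraction", "fraud flagging", "routing"],
    "boundaries": ["no settlement authority", "escalate ambiguous liability"],
    "accountable_owner": "Head of Claims Operations",  # a named human, as above
    "review_cadence_days": 90,
}

def needs_human(case_complexity: float, threshold: float = 0.7) -> bool:
    """Simple boundary rule: cases above the complexity threshold
    fall outside the agent's remit and go to a human."""
    return case_complexity >= threshold

print(needs_human(0.9))  # True: complex case escalates to a human
print(needs_human(0.3))  # False: routine case stays with the agent
```

Writing the accountability and boundaries down in a machine-readable form is what lets ‘Agent Ops’ tooling audit agents the way HR systems audit roles.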

We also need to adapt how we support colleagues:

    • Attracting and selecting more AI-ready talent, especially at early career levels.
    • Supporting teams as new agents are introduced.
    • Using agents to enable co-learning and improve human performance.
    • Addressing performance disparities between colleagues with and without AI access.
    • Creating the right incentives for effective AI use.
    • Managing non‑use or misuse of AI.
    • Deliberately designing human connection into work, and monitoring isolation or lone worker risks.
    • Clarifying whether agents trained by an employee move with them when they change roles.
    • Using digital twins to support knowledge transfer when people leave or change roles.

This discussion about the hybrid workforce is just emerging and needs careful exploration. While AI has human-like capabilities and it can be comforting to give it names, we should not anthropomorphise AI or treat it as equal to human workers. We will need better language. We need to value the unique intrinsic worth of humans and maintain a clear distinction between human and agentic workers.

Voice and measurement matter

An effective AI workforce strategy must combine employee voice with measurement. Organizations that actively listen to employees and reflect that input in AI decisions see higher adoption, stronger trust, and more sustainable performance (SHRM, 2025).

Employee voice can take many forms: employee representation on AI ethics boards, consultation with unions and ERGs, participatory design, structured feedback during pilots, ongoing feedback in use, and strong channels for raising concerns and whistleblowing. Financial services organizations have made progress over the past decade in listening and responding to concerns; this now extends to concerns about agent accuracy and performance.

Measurement makes progress visible. Tracking AI adoption, employee sentiment, skills growth, and digital fluency helps leaders and HR course-correct and sustain momentum.

The role of HR in reinventing work

The Chief People Officer or CHRO has a key role as a change leader, helping the whole leadership team lead the people change well, navigating how to manage the new hybrid workforce and supporting their own HR function in embracing AI responsibly.

HR teams must support leader education, workforce wide reskilling, and the acquisition and development of specialist data and AI talent. Business partners also need deeper understanding of how work and skills are changing within their functions. This requires breaking down silos.

Nearly all Talent Reinventors (96%) align HR, IT, and business leaders around a single talent and technology strategy, compared with just 16% of other organizations. Similarly, 93% have redefined their talent strategy to support AI adoption, positioning HR as a copilot of change rather than a reactive function.

To deliver this shift, talent teams must be able to grow, redeploy, and reshape hybrid workforces quickly. Reward teams need new economic models that reflect a hybrid workforce. At the same time, AI creates significant opportunity within HR, improving outcomes, experience, and service effectiveness.

For HR professionals, this is an exciting moment. Their role has rarely been more critical.

Real world examples: AI in HR

Our Responsible AI Programme for HR, launched in 2016, was designed to accelerate, govern and monitor AI use across HR. We now apply AI across the full talent lifecycle — recruitment, onboarding, development, mobility, reward and colleague support.

The impact has been material: around 45% productivity gains within HR, a 30% improvement in proficiency for priority skills, a 40% increase in internal fill rates and a 35% reduction in time to fill. These outcomes empower colleagues. For example, we use AI to infer skills from work and learning experience, surface them with employees and help them identify future roles, career paths, and learning options.

Many financial services clients are also moving from reactive hiring to proactive talent strategies, using platforms such as Eightfold.ai and Beamery to support sourcing, role matching, pipeline building and internal mobility.

Yet our Talent Reinventors survey found only around 7% of organizations use AI to drive an “internal first” mobility strategy. Most still rely heavily on external hiring or isolated internal moves due to poor skills visibility. Talent Reinventors take a different path. They are 4.4X more likely to have an adaptable workforce and 7.2X more likely to fill roles internally.

A large U.S. bank provides a strong example. By adopting a skills-driven model and embedding AI-enabled skills visibility into workforce planning, leaders can anticipate gaps and redeploy talent quickly, building a more agile and resilient organization.

At another large banking client, we equipped line managers with AI tools to draft performance summaries, deliver better feedback and support compensation decisions, saving time, improving quality and creating space for stronger judgement.

Take away points

Some key points you may want to reflect on — let me know your thoughts and ideas:

    1. As work is reinvented, roles, teams, and ways of working must change too. Are you designing for today’s jobs or are you shaping roles and teams for the future?
    2. Continuous learning is essential. Human skills such as relationships, communication, judgement and creativity must be deliberately designed into new roles. Are you reskilling at scale?
    3. Managing a workforce of people supported by AI agents requires new approaches and elevates HR’s role. How are you thinking about the hybrid workforce? How ready is your HR team?

Conclusion

In this penultimate blog, we explored how financial services firms must reshape their workforce. Agentic AI requires us to think differently about humans and agents working together. Banks and insurers are likely to employ smaller workforces with far stronger digital and human skills, driving sustained demand for reskilling. HR must be central to this transition, helping leaders navigate change and manage the hybrid workforce.

In the final blog in this series, I will explore how leadership, culture and operating models must also evolve to sustain this transformation.