
In their first post, “It’s now or never: Time for central banks to embrace change,” my colleagues Rohit Mathew and Oliver Reppel explored why central banks need to transform digitally to ensure they can fulfil their mandate. In this post, they explain how central banks can change the way they use data and analytics to achieve their goal; they also argue that training the workforce for this data-rich environment is crucial.

A major reason why central banks have struggled to manage inflation—especially recently—is that their way of working is often reactive and delayed. At a time when economies demand proactivity and near- or real-time data collection and analysis, many banks still assess reams of siloed data on a periodic basis. What’s needed is the ambition to become a data-driven digital regulator.

Although attaining that is easier said than done, central banks can take comfort from the fact that transitioning to a data-centric future—and bringing their staff, internal capabilities and regulated entities along with them—is a journey. And as with any journey, the hardest part can be getting started.

Inside and out: Leveraging internal and external data

Many organisations in the financial services industry are having to undergo compressed transformations, with some following the journey encompassed by Total Enterprise Reinvention (TER). Just as organisations in the private sector need continuous and dynamic reinvention, so too do central banks.

Digitisation sits at the heart of any transformation, and TER is no different. Its cornerstone is a strong digital core with access to, and centralised storage of, all relevant data (including non-regulatory data) and appropriate data governance for secure data-handling. Among the first tasks in this process is to ensure clear definitions and agreement across the central bank on the data and AI operating models to be used. Those will be based on the bank’s defined data and AI strategy and, among other aspects, should cover the organisation, its people, processes and technology.

The bank should not only specify which data should be used; it should make clear its aims for the internal and external use of that data. It should also implement internal policies for the use and management of AI, and craft regulations that target the use of AI in the market. Additionally, although a global standard to regulate the use of AI is unlikely, central banks should at a minimum seek agreement on universal principles.

The strength of TER is that its digital core leverages the latest technology and tools, including data analytics, machine learning, natural-language processing and AI, to generate insights that meet the varying requirements of all departments. These tools make the central bank far more efficient and strengthen its role. Used wisely, they can make supervision, compliance and other processes in regulated entities more efficient and help central banks act in a timely manner.

Data, one of the five pillars that underpin the reinvention of central banks, is at the heart of this. Although leveraging data is not a new concept for these banks, what they currently use is typically static, unstructured and siloed. Their data is often not real-time, and external and internal data are seldom combined. The approach we outline here solves these problems.

Data: From source to use

When it comes to data, it is crucial to have clarity on the source. The first step, then, is to recognise where data is held: internally within the central bank’s departments, often siloed; or externally, at government ministries, public bodies, credit bureaus, banks, non-banking financial institutions and others, including telcos and retailers.

Examples of external data include information on mortgages and other loans, real estate statistics, banking transactions and retail/consumer prices. These data elements are often interlinked—for example, interest rate changes affect the demand for lending, which affects the real estate market through mortgages. This chain can have direct and indirect impacts on, for instance, the management of long-term price stability and liquidity, and can also give rise to regulatory and supervisory considerations.

Combining this external data with other data available at the central bank can be extremely insightful—for example, measuring the Sectoral Stress Index in near-real time, or adjusting the probability of default on existing credit exposures. This approach can also help to identify consumer stress early, and can generate other metrics—for instance, GDP now-casting. What’s more, it can identify operational inefficiencies in the financial services sector and provide proactive guidance to entities.
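As a purely illustrative sketch of what this could look like in practice, the Python snippet below combines a hypothetical external arrears feed with a central bank’s internal exposure data to produce a simple sector-level stress indicator. The file names, column names, benchmark and weighting are assumptions made for the example, not a prescribed methodology.

```python
import pandas as pd

# Hypothetical inputs: an external credit-bureau arrears feed and the central
# bank's internal record of banks' credit exposures, both keyed by sector.
arrears = pd.read_csv("credit_bureau_arrears.csv")    # columns: sector, arrears_rate
exposures = pd.read_csv("internal_exposures.csv")     # columns: sector, exposure_amount, baseline_pd

combined = exposures.merge(arrears, on="sector", how="left")

# Illustrative adjustment: scale each exposure's baseline probability of default
# by how far the sector's arrears rate deviates from an assumed long-run benchmark.
BENCHMARK_ARREARS = 0.02
combined["adjusted_pd"] = combined["baseline_pd"] * (
    combined["arrears_rate"] / BENCHMARK_ARREARS
).clip(lower=0.5, upper=3.0)

# A simple exposure-weighted stress score per sector.
combined["weighted_pd"] = combined["adjusted_pd"] * combined["exposure_amount"]
stress_index = (
    combined.groupby("sector")["weighted_pd"].sum()
    / combined.groupby("sector")["exposure_amount"].sum()
)
print(stress_index.sort_values(ascending=False))
```

A production version would draw on far richer, near-real-time feeds, but the underlying pattern—merging external and internal data and aggregating it to the level supervisors care about—remains the same.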

After determining where data is held, the second step is to decide how the central bank will capture and curate it, and then leverage it, consume it and generate insights.

By combining internal and external data, and analysing it with the right tools (including AI), central banks can generate near-real-time information on, for example, inflation, consumer indebtedness, defaults or sectoral health. The data can also be parsed on a sub-national basis—by region, province or city, for instance.
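To illustrate the sub-national angle, the sketch below (again with invented file and column names) turns a hypothetical feed of retail transactions into a crude month-on-month price proxy by region; it is a stand-in for the idea, not a substitute for a proper price index.

```python
import pandas as pd

# Hypothetical feed of retail transactions: date, region, item_category, unit_price.
tx = pd.read_csv("retail_transactions.csv", parse_dates=["date"])
tx["month"] = tx["date"].dt.to_period("M")

# Average unit price per month and region; months as rows, regions as columns.
monthly = tx.groupby(["month", "region"])["unit_price"].mean().unstack("region")

# Month-on-month percentage change as a crude, near-real-time regional price proxy.
mom_change = monthly.pct_change() * 100
print(mom_change.round(2))
```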

Authorities in Singapore and Germany are currently testing interesting solutions to deliver more useful data.

However, not all data is well-structured and accessible. Consider, for example, banks’ regulatory reporting data. Currently, this arrives in a variety of templates, making it cumbersome, costly and inefficient for banks to generate and regulators to process. It also lacks proportionality, which disadvantages smaller banks.

The process should be digitised through regulation that leverages common data standards (the BIRD standard is one prominent example) and encourages banks to use the regulator’s API to submit more granular data in real time—a concept we refer to as “Open Central Banking”. (This approach would also circumvent the challenge that each bank has its own data standards.) An additional benefit is that it would create a channel for two-way communication of data between financial institutions and the central bank.
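To make the idea tangible, here is a minimal sketch of what a granular submission over such an API could look like from a reporting bank’s side. The endpoint, token, payload schema and field names are entirely hypothetical; a real implementation would follow the regulator’s published schema (for example, one aligned with BIRD concepts) and its authentication requirements.

```python
import requests

# Hypothetical regulator endpoint and credential, for illustration only.
REGULATOR_API = "https://api.example-centralbank.org/v1/granular-reports"
API_TOKEN = "replace-with-issued-credential"

# Granular, record-level data instead of an aggregated reporting template.
payload = {
    "reporting_entity": "BANK-001",
    "reference_date": "2024-03-31",
    "records": [
        {"loan_id": "L-1001", "sector": "real_estate", "outstanding": 250000.0, "days_past_due": 0},
        {"loan_id": "L-1002", "sector": "retail", "outstanding": 12000.0, "days_past_due": 45},
    ],
}

response = requests.post(
    REGULATOR_API,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
response.raise_for_status()

# The two-way channel: the regulator can return validation results or
# follow-up requests directly in the response body.
print(response.json())
```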

Similarly, regulators could provide a tool that validates banks’ first-level prudential reports and offers feedback, so banks can address issues proactively before their audit. Additionally, regulations, policies and circulars could be made available as machine-readable code that banks could assess for adherence, rather than having to interpret what is relevant.
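As a minimal sketch of the “regulation as code” idea, the snippet below publishes two invented rules as machine-readable definitions that a bank could evaluate automatically against its own first-level figures; the rule identifiers, metrics and thresholds are illustrative only, not actual regulatory requirements.

```python
# A regulation published as data rather than prose: each rule names the metric
# it constrains, the comparison and the threshold. Values are invented examples.
machine_readable_rules = [
    {"rule_id": "LIQ-01", "metric": "liquidity_coverage_ratio", "operator": ">=", "threshold": 1.0},
    {"rule_id": "CAP-01", "metric": "tier1_capital_ratio", "operator": ">=", "threshold": 0.085},
]

# A bank's own first-level figures, e.g. extracted from its prudential report.
bank_report = {"liquidity_coverage_ratio": 1.12, "tier1_capital_ratio": 0.079}

OPERATORS = {">=": lambda value, threshold: value >= threshold}

def check_adherence(report, rules):
    """Return a pass/fail result for each machine-readable rule."""
    results = []
    for rule in rules:
        value = report.get(rule["metric"])
        passed = value is not None and OPERATORS[rule["operator"]](value, rule["threshold"])
        results.append({"rule_id": rule["rule_id"], "value": value, "passed": passed})
    return results

for result in check_adherence(bank_report, machine_readable_rules):
    print(result)
```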

Importantly, these examples reflect only a sample of what is possible. There are many other scenarios that could be useful to central banks, depending on their requirements and focus.

Understanding generative AI

On central banks’ TER journey, the use of artificial intelligence will be increasingly crucial. The emergence of large language models (LLMs) like ChatGPT and Google Bard has made generative AI (gen AI) something of a buzzword, but this shouldn’t detract from the importance of understanding this technology, even if regulators do not immediately leverage it.

Gen AI must also be regulated. This is partly because banks are starting to adopt it, but mainly because the technology, people and data sources behind gen AI must be trusted.

To encourage banks to use these tools responsibly and in ways that are fair to customers and society, regulators need at minimum a clear AI policy, framework, strategy and regulations—and, again linked to the core subject of trust, must ensure staff review and judge gen AI’s output. (For more on how best to achieve this, please see: Responsible AI in Financial Services by Accenture, the Monetary Authority of Singapore (MAS) and Elevandi; MAS’s fairness, ethics, accountability and transparency principles on the responsible use of AI and data analytics in the financial sector, which is part of its Veritas Initiative; MAS’s Veritas Toolkit 2.0 for the responsible use of AI in the financial sector; and the EU’s AI Act, the world’s first comprehensive AI law.)

Regulators should also provide a sandbox where banks and others can experiment with AI tools and solutions.

Proactive central banks can deploy gen AI themselves in a range of areas. Some examples include: strengthening prudential oversight to improve risk surveillance; supporting e-licences, e-supervision, e-enforcement and e-regulation; conducting “fit and proper” checks on individuals and entities prior to manual validation; assessing the compliance of new banking products; creating a conversational AI agent for their leadership; and even deploying it to minimise the response times of customer service centres.

Given the many potential use cases, it would be understandable if central banks set their sights on a large general-purpose AI model. However, these are costly to develop, train and test, which makes a strong argument for a more balanced approach that utilises smaller models for specific use cases.

Finally, before deploying gen AI, it is essential to build staff awareness of where it can help. One solution is to use a heat-map that highlights specific issues and shows how gen AI could be leveraged to resolve them. The regulator should also adopt policies and security protocols to govern gen AI’s use.

Data governance

Governance is a fundamental data issue. Data is typically siloed and must be cleaned before use—and while specific implementation details will depend on the regulator, one option is a single central unit that owns the data in its entirety, end-to-end, is responsible for analysing it, and reports directly to the regulator’s leadership.

Other options include a fragmented model (one that distributes data ownership and analysis across several departments) or a hub-and-spoke model. While the final choice should depend on the central bank’s requirements, the key is to ensure a well-defined governance structure is in place, with stakeholders across the organisation, clear connections with other departments, and flexibility on resourcing so that higher demand can be addressed when needed.

Regardless of the model chosen—and this decision must be driven from the top—each does away with data silos. Each also brings specific considerations: a centralised model, for instance, will house most of the core data and AI skills, such as data scientists; however, its staff must still work with departments across the central bank and share the insights they generate with the relevant teams.

Lastly, it is essential to adhere to confidentiality and data privacy standards, and to set up systems in advance to ensure this is done. The idea, after all, is not to use data at an individual level, but to aggregate it to generate insights.

Power to the people: Building a future-ready workforce

A common belief is that adding high-tech capabilities leads to redundancies, which disincentivises staff. This, however, is a misconception, and so the first message should be that headcount won’t decline. What will change are staff roles and capabilities as work shifts towards value-adding activities.

Success, then, rests as much on building the right workforce as it does on technology and data.

Staff are needed to leverage data and must therefore be reskilled. This includes understanding the algorithms and data sources that together produce the outcomes people need to trust; it also requires a new working culture. Both are best achieved by making employees aware of data and AI trends and showing how these can help them in their roles. In this way regulators can change ways of working and inculcate a data-driven culture.

There will also be a greater need for data scientists, and they are scarce. Attracting them, however, isn’t just about compensation; it’s as much—and arguably more—about the culture of the working environment. Consequently, to attract the best, regulators should ensure their data operation is highly regarded.

This focus on data, technology and people sits at the heart of Total Enterprise Reinvention, in which a strong digital core leverages the power of cloud, data and AI to rapidly build new capabilities through an interoperable set of systems, puts talent strategy and people at the centre of the process, breaks down organisational silos, and delivers end-to-end capabilities.

By taking this approach to data, technology and people, central banks will have made important steps on their transformation journey.

In our third post, we will explore the last three pillars of this transformation—innovation, efficiency and communication—to show how central banks can complete their transition to becoming digital regulators.