AI Regulation in Finance: Where Next?

  • Artificial Intelligence
  • 14.02.2022 03:15 pm

In the last three years, financial regulators worldwide have been actively highlighting the need for responsible use of Artificial Intelligence/Machine Learning (AI/ML). What have they been saying? What common underlying concerns and regulatory themes are emerging? What can the industry expect in the coming years, and how can it start responding now?

By Shameek Kundu, Head of Financial Services and Chief Strategy Officer at TruEra.

What have regulators actually done so far?

To date, no major financial regulator has introduced explicit regulations dedicated to the use of AI/ML. Recent regulatory activity has mostly taken the form of guidelines, consultation papers, clarifications around existing rules for Model Risk (e.g. SR 11-7 in the US), data management and anti-discrimination considerations, and the occasional public pronouncement on high-profile cases of potentially unethical algorithms (e.g., the Apple credit card or discrimination on the basis of religiosity in the US).

Globally, the Monetary Authority of Singapore kicked things off in 2018 with its Fairness, Ethics, Accountability and Transparency (FEAT) guidelines. Similar guidelines have since been issued by financial regulators in Hong Kong, Canada, the United Arab Emirates and many other jurisdictions. In the US, five banking regulators ran a large-scale joint consultation exercise in 2021. In the UK, the Bank of England (BOE) and the Financial Conduct Authority (FCA) have been running an AI Public-Private Forum since 2020 (a report is expected soon).

In Europe, the European Central Bank (ECB) recently provided the European Commission with suggestions on the obligations that the (industry-agnostic) 2021 draft AI law would place on banks. There have also been a few sector-specific exercises outside banking, such as those around insurance in Europe and the US, and securities in the US.

(Why) Does AI/ML require a specific regulatory lens?

Banks and insurers are no strangers to statistical models (e.g., actuarial or capital calculations), or to automated decision-making (e.g., insurance pricing, credit decisioning) based on such models. Regulators and industry participants have extensive experience of understanding and managing the risks associated with such models. Existing risk management frameworks around the management of model and data risks and fair treatment of customers provide a robust foundation.

However, the use of AI/ML techniques to build such models calls for an enhanced approach. AI/ML models have several unique characteristics (Figure 1). Collectively, these can result in increased levels of model risk, as well as heightened data risk (e.g., disclosing or using personal data inappropriately), operational resilience challenges (e.g., being unable to ensure that critical systems remain available), conduct and regulatory risk (e.g., being unable to meet obligations to treat customers and staff fairly), information security risk (e.g., creating additional vulnerability points for critical systems) and reputational risk (e.g., bad press from poorly communicated use of AI models with customers).

What common themes have emerged?

For Financial Institutions (FIs), particularly those operating under multiple regulatory regimes, there is a risk of being overwhelmed by overlapping and/or divergent requirements. Luckily, there appears to be a remarkably high level of alignment among regulators worldwide around their key objectives.

  1. First, they expect robustness in AI/ML models. Predictions should be reliable, not just on the data used to test the model initially, but also over time as internal and external circumstances change (stability). The model should perform well across all the segments of the population to which it is meant to apply, and should not be overly dependent on a small number of training data points (overfitting). A simple illustrative check along these lines is sketched after this list.

  2. Second, they expect FIs to be sensitive to the potential for AI/ML models to introduce or worsen unfair biases against particular groups, and to have mechanisms in place to detect, investigate and mitigate such unfair bias (one basic example of such a check is sketched after this list). A related expectation is that FIs be ethical in their use of personal data in data-hungry ML models.

  3. Third, they expect FIs to be able to understand, explain and justify the model’s decisions, both internally and to regulators. Most often, this can be achieved through ML explainability techniques such as feature influences (illustrated after this list), and by allowing human experts to test the model’s behaviour and explanations for conceptual soundness.

  4. Fourth, where appropriate, they expect FIs to provide transparency to data subjects impacted by the decision (e.g., Is my insurance claim being decided by an algorithm? Why was it refused? What data was used to arrive at the decision?), and avenues for redress (e.g., the right to request a manual or automated review, and to correct any incorrect data used by the FI for the decision-making). 

  5. Fifth, they expect FIs to demonstrate accountability for their use of AI/ML. One key aspect is Board and Senior Management awareness of AI/ML use and risks. Another is putting in place appropriate policies, standards, procedures, tools and training to operationalise AI/ML governance. A third is applying appropriate levels of human oversight to final decision-making, depending on materiality and on confidence in the AI/ML solution. Many regulators also explicitly expect a complexity-versus-benefit assessment when using AI/ML. Finally, regulated FIs are expected to remain responsible for any third-party AI/ML models they use (e.g., external credit scores, anti-fraud and anti-money laundering software).
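
To make the robustness expectation more concrete, here is a minimal sketch (in Python, using pandas and scikit-learn) of how a bank might compare a model’s discriminatory power across customer segments and across time windows. The model, datasets and column names are hypothetical, and regulators do not prescribe any particular test.

```python
# Hypothetical robustness check: compare a model's AUC across customer
# segments and across time windows to spot instability or overfitting.
import pandas as pd
from sklearn.metrics import roc_auc_score

def auc_by_segment(model, df: pd.DataFrame, features, label_col, segment_col):
    """AUC of `model` within each segment of the population it is applied to."""
    results = {}
    for segment, group in df.groupby(segment_col):
        scores = model.predict_proba(group[features])[:, 1]
        results[segment] = roc_auc_score(group[label_col], scores)
    return results

# Example usage (model and data are assumed to exist):
# baseline = auc_by_segment(model, holdout_at_launch, FEATURES, "defaulted", "region")
# recent   = auc_by_segment(model, recent_applications, FEATURES, "defaulted", "region")
# Large gaps between segments, or between the two windows, would be flagged
# for investigation as potential instability or overfitting.
```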
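
In the same spirit, the sketch below shows one widely used bias check: a demographic-parity style ratio of approval rates across groups. The column names are hypothetical, and no single fairness metric is mandated by regulators.

```python
# Hypothetical bias check: approval rate of each group relative to the
# most favourably treated group (a demographic-parity style ratio).
import pandas as pd

def approval_rate_ratios(decisions: pd.DataFrame, decision_col: str, group_col: str) -> pd.Series:
    rates = decisions.groupby(group_col)[decision_col].mean()  # share of positive decisions per group
    return rates / rates.max()                                 # 1.0 means parity with the best-treated group

# Example usage (assumed data):
# ratios = approval_rate_ratios(loan_decisions, "approved", "gender")
# Ratios well below 1.0 would trigger investigation and, where the gap is
# not justifiable, mitigation.
```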
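
Finally, as an illustration of the ‘feature influences’ mentioned under explainability, the sketch below uses permutation importance, one common model-agnostic technique available in scikit-learn; more sophisticated influence methods exist. The fitted model, test data and feature names are assumed.

```python
# Hypothetical explainability sketch: rank features by how much randomly
# shuffling each one degrades model performance (permutation importance).
from sklearn.inspection import permutation_importance

def top_feature_influences(model, X_test, y_test, feature_names, n=10):
    """Return the n features whose permutation most degrades performance."""
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    ranked = sorted(zip(feature_names, result.importances_mean),
                    key=lambda pair: pair[1], reverse=True)
    return ranked[:n]

# Example usage (assumed model and data):
# for name, influence in top_feature_influences(model, X_test, y_test, FEATURES):
#     print(f"{name}: {influence:.3f}")
# Human experts can then judge whether the most influential features are
# conceptually sound for the decision being made.
```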

What to expect next?

Predicting regulatory moves is difficult, but based on regulators’ public consultations over the last three years, FIs can reasonably expect the following over the next two years:

  • A continued desire to balance AI/ML innovation and risk

  • Closer collaboration with non-financial regulators (e.g., data privacy, antitrust)

  • Incremental enhancements to existing rules (e.g., those around model risk, data risk and conduct/fairness), rather than brand-new ones dedicated to AI/ML

  • An appetite to work together with the industry to flesh out the details. The Veritas initiative in Singapore is a recent example of a regulator-led industry consortium translating AI/ML guidelines into detailed ‘how-to’ guides and toolkits

  • A level of strategic ambiguity around contentious topics like the definition of fairness metrics or minimum technical standards for algorithmic transparency (reflecting the immaturity of industry thinking in this space)

  • Tolerance for a materiality-based approach (e.g., one regulator intends to limit fairness considerations to situations impacting natural persons or small businesses)

Notwithstanding the above, FIs should be prepared for supervisory examinations around the responsible use of AI/ML. Indeed, in several jurisdictions, regulators have already begun such exercises within the remit of existing model risk or conduct regulation.

Leading adopters of AI/ML have responded by pulling together an umbrella framework for managing AI/ML risk (Figure 2), which they can adjust continuously as regulatory and industry thinking evolves. They have also started to embed responsible AI/ML considerations into the end-to-end model lifecycle: for example, through tools for AI/ML model transparency, quality assessment and monitoring; updates to customer-facing communication and third-party engagement protocols; and dedicated training for staff. Finally, they have been actively engaging industry bodies to contribute to and shape the conversation around responsible AI/ML.

About Shameek Kundu

Shameek is a leading expert in AI from both a technology and a business strategy perspective, and has spent most of his career driving responsible adoption of data analytics and AI in the financial services industry. He is Chief Strategy Officer and Head of Financial Services at TruEra. He sits on the Bank of England’s AI Public-Private Forum and the OECD Global Partnership on AI. In 2018 he was part of the Monetary Authority of Singapore’s Steering Committee on Fairness, Ethics, Accountability and Transparency (FEAT) in AI, and is currently part of the MAS-led industry consortium set up to develop the FEAT methodologies, toolkit and business use case studies for the banking and insurance sectors.

Prior to TruEra, Shameek was Group Chief Data Officer at Standard Chartered Bank, where he helped the bank explore and adopt AI in multiple areas (e.g., credit, financial crime compliance, customer analytics, surveillance). 
