US financial regulators signal their focus on AI


Here’s our take on key questions financial regulators have about AI:

  • Greater attention on AI from US financial regulators seems imminent, based on the recent multi-agency RFI (OCC, CFPB, Federal Reserve, FDIC, NCUA)
  • Governance, risk management, fairness, and overall AI Quality management are key concerns
  • Explainability is foundational to addressing these issues; tools for managing these concerns are available today

Regulators in the financial sector have been at the forefront of issuing guidelines and conducting industry consultations on the potential risks from the use of Artificial Intelligence/Machine Learning (AI/ML). Initiatives began in Asia and then spread to Europe. For example, regulators in Singapore, Hong Kong, and the Netherlands have published specific guidelines on the use of AI. The Bank of England and the Financial Conduct Authority in the UK formed a public-private forum in October 2020 with the aim of gathering industry input.

Unsurprisingly, the regulators for the largest financial market in the world (the US) have also been weighing the benefits and risks of AI, sharing their thoughts individually over the past few years. In March 2021, they came together in one of the biggest industry consultation exercises ever on this topic: the Office of the Comptroller of the Currency, the Federal Reserve System, the Federal Deposit Insurance Corporation, the Consumer Financial Protection Bureau, and the National Credit Union Administration jointly issued a Request for Information and Comment (RFI) on the use of AI/ML by financial institutions.

The purpose was to understand respondents’ views on the use of AI by financial institutions, in both customer-facing and internal applications. The RFI included a total of 17 questions around:

  • appropriate governance, risk management, and controls over AI
  • challenges in developing, adopting, and managing AI
  • areas in which greater regulatory clarity would be welcome

The questions covered the full range of potential risks related to the use of AI/ML, including explainability, fairness, overfitting, data quality and representativeness, and third-party risk.

Responses were submitted at the end of June 2021. TruEra’s full commentary on the use of AI in financial services, “Financial Institutions’ Use of Artificial Intelligence, including Machine Learning,” is available here.

Our key messages were:

  1. Explainability should be the backbone of assuring high-quality and trustworthy AI/ML models. It should be embedded throughout the model lifecycle from development to validation to continuous monitoring in production.
  2. We expect both inherently/structurally interpretable models and algorithmically interpretable ones (with post-hoc explanation methods) to coexist. We recommend viewing these approaches to explainability as complementary and mutually reinforcing rather than as a mutually exclusive choice. Constraining the industry to a small set of models viewed as inherently explainable (e.g., linear models and Bayesian rulesets) could have a detrimental impact on innovation. Instead, it would be better to identify a set of questions about models that should be answered as part of explainability and model risk management activities, and map them to appropriate technical tools and processes for responsible governance.
  3. We believe that customer and regulatory expectations around AI transparency and explainability can be met with the current ‘state of the art.’ Post-hoc explanation methods, if applied correctly, can provide explanations that are accurate enough to meet customer and regulatory expectations adequately. However, this requires significant focus by Financial Institutions (FIs) and their partners on appropriate design and implementation of their explanation methodology.
  4. Internal standards about the accuracy and consistency of explanation methods are necessary. Explanation outputs can dramatically and meaningfully differ based on the comparison group and the output type being explained (see the first sketch after this list). FIs – particularly those with extensive adoption of AI/ML – may want to consider introducing internal standards around the accuracy and consistency of explanation methods.
  5. To ensure compliance with equal opportunity and anti-discrimination requirements, regulators should consider providing greater clarity on appropriate measures of fairness for specific use cases (beyond credit); the second sketch after this list shows two common group-fairness measures. FIs should consider employing root cause analysis to understand and justify any sources of bias before declaring a model “fair” or otherwise.
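To make point 4 concrete, here is a minimal sketch, assuming a toy linear scoring model and synthetic data (the weights, applicant vector, and comparison groups below are illustrative, not from TruEra’s commentary or product). For a linear model f(x) = w·x + b, the exact Shapley attribution of feature i relative to a comparison group with mean m is w_i(x_i − m_i), so merely switching the comparison group changes the explanation of the same decision:

```python
# Minimal sketch: for a linear model f(x) = w.x + b, the exact Shapley
# attribution of feature i relative to a comparison group with mean m
# is w_i * (x_i - m_i). All weights and data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)

w = np.array([0.8, -0.5, 0.3])         # weights of a toy scoring model
applicant = np.array([0.9, 0.2, 0.7])  # the single decision being explained

# Two defensible comparison groups: the full applicant pool vs. approved applicants only.
all_applicants      = rng.normal(loc=0.5, scale=0.2, size=(1000, 3))
approved_applicants = rng.normal(loc=0.7, scale=0.1, size=(1000, 3))

for name, group in [("all applicants", all_applicants),
                    ("approved applicants only", approved_applicants)]:
    baseline = group.mean(axis=0)
    attributions = w * (applicant - baseline)  # exact Shapley values for a linear model
    print(f"comparison group = {name:25s} attributions = {np.round(attributions, 3)}")
```

The model and the applicant are identical in both runs; only the comparison group differs, yet the attributions (and potentially the top reason codes reported to a customer) change. An internal standard would pin down which comparison group is appropriate for which question.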
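Similarly for point 5, different fairness measures can give different verdicts on the same decisions, which is why clarity on the appropriate measure per use case matters. Here is a minimal sketch with synthetic approval data (the groups and approval rates are hypothetical) comparing two common group-fairness measures:

```python
# Minimal sketch: two common group-fairness measures for a binary
# approval decision, computed on synthetic data.
import numpy as np

rng = np.random.default_rng(1)

group = rng.integers(0, 2, size=10_000)  # protected attribute (0 or 1), synthetic
# Synthetic decisions: group 1 is approved less often by construction.
approved = rng.random(10_000) < np.where(group == 1, 0.55, 0.70)

rate_0 = approved[group == 0].mean()
rate_1 = approved[group == 1].mean()

print(f"approval rate, group 0: {rate_0:.3f}")
print(f"approval rate, group 1: {rate_1:.3f}")
print(f"demographic parity difference: {rate_0 - rate_1:.3f}")
# The 'four-fifths rule' flags ratios below 0.8 in some US employment contexts.
print(f"adverse impact ratio: {min(rate_0, rate_1) / max(rate_0, rate_1):.3f}")
```

Neither measure is universally “correct”; root cause analysis is what connects an observed gap back to the data or modeling choices that can justify or remediate it.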

We at TruEra have been working closely with financial services firms to help them improve overall AI Quality, including explainability, model performance, and fairness. We believe that, with the right tools, financial services firms can meet the governance requirements that seem imminent in all major markets.

Read more about how TruEra helped Standard Chartered achieve greater fairness.

Authors: 

Anupam Datta, Co-founder, President, and Chief Scientist

Shameek Kundu, Head of Financial Services

Divya Gopinath, Research Engineer
