The CFPB has reiterated the need for specific and accurate explanations when denying credit applications based on complex algorithms. What are the implications for lenders, particularly those using Artificial Intelligence/Machine Learning (AI/ML) models?
On May 26, 2022, the US Consumer Financial Protection Bureau (CFPB) reminded lenders in a circular that federal anti-discrimination law requires companies to explain to applicants the specific reasons for denying an application for credit or taking other adverse actions, even if the creditor is relying on credit models using complex algorithms.
The circular should not come as a surprise to the industry: it follows a January 2022 warning from the Director of the CFPB about unfair discrimination, the emphasis on the explainability of AI/ML models in the August 2021 Comptroller’s Handbook on Model Risk Management from the Office of the Comptroller of the Currency (OCC), and the March 2021 consultation exercise by five US regulators on the use of AI/ML by financial institutions.
Nevertheless, it does serve as a timely reminder for lenders looking to expand their use of AI/ML models, many of which can be complex and opaque. The good news is that the industry is well aware of the risk. Our work with multiple US lenders suggests that transparency is a key area of focus when building and deploying AI/ML models.
We believe that, as a lender, you can do five things to stay ahead of the game:
1. Understand your input data
Do you understand the data that goes into your model? Many AI/ML models are complex and difficult to understand because they use a large number of input data elements drawn from diverse sources, many of them unfamiliar to the organisation.
For example, one bank discovered, after a few iterations, that one in four of the 200+ input features used to train its credit model were not influencing the model’s predictions at all. Dropping them spared the bank from having to maintain their metadata, monitor their quality on an ongoing basis, or justify their use in the model.
Another lender had to drop two features (“under-banked flag” and “economic stability indicator”) from its new model, despite their significant predictive power. No one in the bank understood how the third-party ‘alternative’ data provider had calculated these scores. Having invested in making its own in-house model explainable, the bank did not want to be tripped up by the use of these two externally sourced ‘black-box’ data elements.
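As a simple illustration of the first example, the sketch below uses permutation importance on a held-out validation set to flag features that have no measurable influence on predictions. It is a minimal sketch, not a prescribed method: the model, data, scoring metric and tolerance are all illustrative assumptions.

```python
# Illustrative only: flag features whose shuffled values leave model performance unchanged,
# i.e. candidates for removal. Assumes a fitted scikit-learn-compatible classifier and a
# held-out validation set; the "roc_auc" metric and tolerance are assumptions for illustration.
from sklearn.inspection import permutation_importance

def find_uninfluential_features(model, X_val, y_val, feature_names, tol=1e-4, seed=0):
    """Return features whose permutation importance is indistinguishable from zero."""
    result = permutation_importance(
        model, X_val, y_val, scoring="roc_auc", n_repeats=20, random_state=seed
    )
    return [
        name
        for name, mean_imp, std_imp in zip(
            feature_names, result.importances_mean, result.importances_std
        )
        if mean_imp - 2 * std_imp <= tol  # performance barely moves when this feature is shuffled
    ]

# e.g. candidates = find_uninfluential_features(credit_model, X_val, y_val, list(X_val.columns))
```

Any feature flagged this way would still warrant a human review before removal, since importance estimates depend on the validation sample chosen.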
2. Make sure you can stand behind your explanations
Is the approach you use to explain your AI/ML models fit for purpose? A host of explanation approaches are available – some rely on building inherently explainable models, others on post-hoc explanation techniques. Not all of them are created equal.
In an ideal world, lenders would be able to depend on industry-level standards – perhaps blessed by regulators – with quantifiable metrics (e.g., explanations must be accurate within an X% margin of error, Y% of the time). In practice, the relative immaturity of explanation approaches for AI/ML models has precluded any such industry-level standards.
However, there is still a case for setting internal standards against which all models built or bought by a lender can be consistently assessed. Without such standards, the lender cannot even be internally confident about the reliability of its models and the explanations around them (and that is before regulators or customers demand them!).
How would you go about defining such a standard? As we outline in a previous blog post and the ‘transparency’ methodology we co-authored for a regulator-led initiative in Singapore, a good explanation approach must meet four key tests:
- Does it explain the outcomes that matter?
- Is it internally consistent?
- Is it able to perform at the speed and scale required?
- Can it satisfy rapidly evolving expectations?
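To make the earlier “accurate within an X% margin of error, Y% of the time” idea concrete, here is a minimal sketch of the kind of internal check a lender could run. It assumes an additive (SHAP-style) attribution method where the model score is approximately the base value plus the sum of per-feature attributions; the function name, margin and pass rate are illustrative assumptions, not a regulatory benchmark.

```python
# Illustrative internal standard only: "explanations must reconstruct the model score within
# an X% margin of error, Y% of the time". Assumes an additive attribution method (SHAP-style).
import numpy as np

def explanation_fidelity(model_scores, base_values, attributions,
                         margin=0.05, required_pass_rate=0.95):
    """Return the share of explanations that reproduce the model score within `margin`."""
    reconstructed = np.asarray(base_values) + np.asarray(attributions).sum(axis=1)
    rel_error = np.abs(reconstructed - np.asarray(model_scores)) / np.maximum(
        np.abs(np.asarray(model_scores)), 1e-9
    )
    pass_rate = float(np.mean(rel_error <= margin))
    return pass_rate, pass_rate >= required_pass_rate

# e.g. rate, meets_standard = explanation_fidelity(scores, base_vals, attribution_matrix)
```

A check of this kind can be run consistently across models that were built in-house and models that were bought, which is precisely what an internal standard is for.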
3. Distinguish between ‘technical’ and customer-facing explanations
When explaining adverse decisions to customers, have you paid enough attention to the human element? An explanation can be comprehensive and accurate, yet still completely useless to a layperson if it has not been designed and shared appropriately.
A good customer-facing explanation must be consistent with human intuition, and its level of complexity must match the expertise of the affected customers. Research suggests that such explanations should not be too lengthy, but that customers still need to understand the core factors that underpin the decision. Customers generally prefer explanations that are simple, cite likely causes, and appeal to causal structure rather than correlations.
In addition, customers ideally need a mix of reasoning (a general explanation of their decision) and action (an understanding of what they can do to change the outcome). Adding counterfactuals that demonstrate how a decision could be improved or changed can help data subjects grasp model behaviour (e.g., “had I done X, would my outcome Y have changed?”). This 2021 video of an entry from HSBC and Microsoft at a CFPB Hackathon on adverse actions provides a great example of some of these principles.
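The “had I done X, would Y have changed?” structure can be illustrated with a deliberately naive, single-feature counterfactual search. This is a sketch only: the column name, approval threshold, step size and search range are assumptions, and a real counterfactual engine would search over multiple features and enforce plausibility constraints.

```python
# Illustrative only: a naive single-feature counterfactual search answering
# "how much higher would this applicant's income need to be for approval?".
# Assumes a fitted model exposing predict_proba and an applicant record as a pandas Series.
def income_counterfactual(model, applicant, income_col="income",
                          approve_threshold=0.5, step=100, max_steps=500):
    candidate = applicant.copy()
    for _ in range(max_steps):
        prob = model.predict_proba(candidate.to_frame().T)[0, 1]
        if prob >= approve_threshold:
            return candidate[income_col]  # smallest income found that flips the decision
        candidate[income_col] += step
    return None  # no counterfactual found within the search range
```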
4. Prevent unintended consequences of customer-facing explanations
Have you fully thought through the implications of the explanations you are providing to customers? Counterfactual explanations, for example, are a powerful way to empower those who have been negatively affected by an algorithmic decision. However, they carry the risk of inadvertently leaking important information about the training data and/or exposing gaps in the model. One way of addressing this risk is to avoid overly specific recommendations (e.g., not “increase your income to $3,220” but “increase your income to be greater than $3,000”).
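One simple way to implement this is to round the exact counterfactual to a coarser bucket before it is shown to the customer. A minimal sketch follows; the granularity is an illustrative assumption, and rounding up (rather than down) keeps the coarse figure sufficient to change the decision.

```python
import math

# Illustrative only: present the counterfactual as a coarse threshold rather than the exact
# value, so the customer-facing message does not expose the model's precise decision boundary.
def coarsen_threshold(exact_value, granularity=500):
    """Round an exact counterfactual value up to the next coarse bucket."""
    return math.ceil(exact_value / granularity) * granularity

# coarsen_threshold(3220) -> 3500; with granularity=1000 -> 4000
```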
Another unintended consequence is that a customer who, in a future application, manages to ‘reverse’ some of the reasons behind their adverse result may still not receive a positive decision, because of factors beyond their personal circumstances (e.g., the lender may have reduced its risk appetite for that particular line of business). Even if this is within the law, it can still cause poor customer experience and a loss of trust.
5. Tool up!
Finally, have you invested in the tools needed to ensure that your models remain explainable over time, rather than relying on one-off manual efforts? A comprehensive set of tools is needed to accurately explain the outcomes of AI/ML models.
In particular, such tools must:
- Guarantee reliable, accurate explanations in an acceptable time frame
- Work across different modelling techniques and platforms
- Be accessible to a broad range of stakeholders – e.g., technical and business
- Be able to work throughout the model lifecycle – during development, testing and validation, and in production
- Be able to pinpoint the root causes of potential issues with broader model quality – e.g., apparent instances of bias against particular groups, or an unexpected drop in model approval rates and/or accuracy (see the sketch after this list for one simple example of such a check)
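As a simple illustration of the last point, the sketch below flags features whose influence on decisions has drifted materially between development and production. The attribution inputs and the alert threshold are illustrative assumptions; a real toolkit would combine checks like this with data-quality, bias and performance monitoring.

```python
# Illustrative only: compare per-feature attributions in production against a development
# baseline. Attribution arrays (rows = applications, columns = features) are assumed to come
# from whichever explainer the lender has standardised on; the alert ratio is an assumption.
import numpy as np

def attribution_drift(baseline_attr, production_attr, feature_names, alert_ratio=2.0):
    """Flag features whose mean absolute attribution has shifted materially since development."""
    base_mean = np.abs(np.asarray(baseline_attr)).mean(axis=0)
    prod_mean = np.abs(np.asarray(production_attr)).mean(axis=0)
    ratio = (prod_mean + 1e-9) / (base_mean + 1e-9)
    return {
        name: round(float(r), 2)
        for name, r in zip(feature_names, ratio)
        if r > alert_ratio or r < 1.0 / alert_ratio
    }

# e.g. attribution_drift(dev_attributions, prod_attributions, feature_names)
# returns the features whose influence has roughly doubled or halved since development
```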
***
Poorly explained decisions are bad for lenders’ reputations and invite regulatory sanctions, but, more importantly, they are also bad for customer experience, trust and relationships. Lenders realize this: what is holding them back is not a lack of desire to act but a set of practical difficulties in operationalizing such transparency. The industry must now shift attention from “wanting to do the right thing” to “making it easier to do the right thing.”