This article was originally posted on TechCrunch on Aug 18, 2020.
COVID-19 has disrupted the lives of millions of people and affected businesses across the world. Its impact has been particularly significant on many machine learning (ML) models that companies use to predict human behavior. Companies need to take steps to deeply examine ML models, and acquire the insights needed to effectively update models and surrounding business rules.
The economic disruption of COVID-19 has been unprecedented in its swiftness, upsetting supply lines, temporarily closing retail stores, and changing online customer behaviors. It has also driven unemployment up virtually overnight, heightening financial stress and systemic risk for both individuals and businesses. Forecasts suggest global GDP could be reduced by as much as 0.9%, on par with the 2008 financial crisis. While the nature of our recovery is unknown, if the 2008 crisis is any indicator, the impact of COVID-19 could be felt for years, through both short-term adjustments and long-term shifts in consumer and business behaviors and attitudes.
This disruption impacts machine learning models because the concepts and relationships the models learned when they were trained may no longer hold. This phenomenon is called “concept drift.” ML models may become unstable and underperform in the face of concept drift, and that is precisely what is happening now with COVID-19. The effects of these drifts will be felt for quite some time, and models will need to be adjusted to keep up. The good news is that there have been significant developments in Model Intelligence technology, and, used judiciously, it can help models adjust nimbly to these drifts.
As the effects of COVID-19 (and economic closure and reopening) play out, there will be distinct stages in the impact on social and economic behaviors. Updates to business rules and models will need to be done in sync with overall behavior shifts in each of these stages. Companies need to adopt an approach of Measure-Understand-Act and to constantly examine, assess, and adjust ML models in production or development and surrounding business rules.
Examining how ML models have been impacted means going through an exercise to both Measure and Understand how the models behaved prior to the coronavirus, how they are behaving now, why they are behaving differently (i.e. what inputs and relationships are the drivers of change), and then to determine if the new behavior is expected and accurate, or is no longer valid. Once this is determined, the next step is naturally to Act – “So, what can we do about it?”
For examples of the Measure-Understand-Act process we’re going to share two types of ML models. One is a credit model where feedback cycles are slow and you may not know if someone defaults on a loan right away. Therefore it’s hard to determine whether or not the model has gone bad. Another is a product search model at a retailer where feedback on search results such as clicks and purchases is rapid, and businesses will know much faster if the model is still working or not. The insights shared here apply much more broadly, including to ML models running marketing, inventory management, and fraud detection.
As an illustrative example, consider a ML model used by a bank to determine risk for personal loans. The model was trained and put into production in 2019. The bank was concerned about the model’s reliability in the coronavirus era. Here’s how a data science team would execute on Measure-Understand-Act.
The data science team determined that the model was showing significant change in risk scores for users from early 2020 to mid-2020, i.e. before and after the start of the coronavirus outbreak. The increase in risk is particularly pronounced for a segment of users with high outstanding debt. Standard metrics can automatically quantify how different the risk assessment scores are between the two time periods. For example, one metric measures the difference in the average risk scores for the two time periods. If the metric is above a threshold, indicating that the drift is significant, the data science team is alerted to dig deeper.
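As a minimal sketch of the average-score comparison described above (the risk scores, threshold value, and `mean_score_drift` function name are all hypothetical, not a reference to any particular vendor's tooling):

```python
import numpy as np

def mean_score_drift(scores_before, scores_after, threshold=0.05):
    """Flag drift when the average risk score shifts by more than `threshold`."""
    before = np.asarray(scores_before, dtype=float)
    after = np.asarray(scores_after, dtype=float)
    drift = abs(after.mean() - before.mean())
    return drift, drift > threshold

# Hypothetical risk scores (0 = low risk, 1 = high risk) for the two periods.
pre_covid = [0.21, 0.18, 0.25, 0.22, 0.19]
mid_covid = [0.31, 0.35, 0.28, 0.33, 0.30]

drift, alert = mean_score_drift(pre_covid, mid_covid)
```

In practice a richer statistic, such as the population stability index, often replaces a plain mean difference, but the alert-on-threshold pattern is the same.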
The next step is for the data science team to understand why the model assesses higher risk for that segment. This step can leverage recent technical advances in understanding the inner workings of ML models to answer this “why” question. The data science team can thus quickly pinpoint the features contributing to the drift. The data science team discovers that the Loan Purpose feature was the main driver of drift.
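As a rough sketch of how a team might rank features by how much they shifted between the two windows (the feature encodings and values here are hypothetical, and real explainability tooling would use more sophisticated attribution than a standardized mean shift):

```python
import numpy as np

def feature_drift_ranking(before, after):
    """Rank features by how far their mean shifted between two time windows,
    measured in units of the pre-period standard deviation."""
    shifts = {}
    for name in before:
        b = np.asarray(before[name], dtype=float)
        a = np.asarray(after[name], dtype=float)
        std = b.std() or 1.0  # guard against zero-variance features
        shifts[name] = abs(a.mean() - b.mean()) / std
    return sorted(shifts.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical encodings: loan_purpose = 1 if the application is for debt
# consolidation or a small business, debt = outstanding balance in $1,000s.
before = {"loan_purpose": [0, 0, 1, 0, 1], "debt": [12, 9, 15, 11, 10]}
after = {"loan_purpose": [1, 1, 1, 0, 1], "debt": [13, 10, 16, 12, 11]}

ranking = feature_drift_ranking(before, after)
```

With these illustrative numbers, Loan Purpose tops the ranking, which is the signal that would point the team at it as the main driver of drift.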
When they drill down even further, they find that the Loan Purpose feature has shifted because more loans are being requested to pay off debt for credit cards and operate small businesses. This shift makes sense since with COVID-19 there has been a large increase in people applying for personal loans to pay off credit cards and fund their small businesses. Loan Purpose, which used to not be a very important feature when the model was trained, has grown in importance in the COVID-19 era.
Armed with this actionable information, the bank can act to make decisions with greater confidence. The data science team can leverage this understanding to assess if the model is degrading and update the model to reflect the new world by better leveraging features, such as Loan Purpose, that are early indicators of risk. Often different models will be created to predict late payments and defaults farther into the future, and looking at the battery of models and forecasts together will help provide a better understanding of trends and economic health.
The bank could also put in place business rules to support customer relief efforts and direct dunning while responsibly managing risk. For instance, knowing the key features driving model changes (higher debt levels and requesting small business loans) might inform targeted customer outreach efforts around reducing debt levels and informing small businesses about both banking resources and government aid they could avail themselves of during these challenging times.
A similar Measure-Understand-Act process can be followed for other models, such as those used for product search at a retailer. The Measure step indicates that the conversion metric (i.e. the percentage of customers who click on the product and actually purchase it) for a popular but expensive health product has gone down significantly. The Understand step surfaces that the model is no longer showing the product in the top three results because of COVID-19-related drift in spending behavior.
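A minimal sketch of that Measure step; the click and purchase counts and the 30% alert threshold are hypothetical:

```python
def conversion_rate(clicks, purchases):
    """Share of product clicks that led to a purchase."""
    return purchases / clicks if clicks else 0.0

# Hypothetical counts for one product, before and after the drift.
pre = conversion_rate(clicks=2000, purchases=240)   # 12% conversion
post = conversion_rate(clicks=1800, purchases=90)   # 5% conversion

relative_drop = (pre - post) / pre
alert = relative_drop > 0.3  # flag relative drops of more than 30%
```

Because clicks and purchases arrive continuously, a check like this can run daily per product, which is exactly the fast feedback cycle that distinguishes the search model from the credit model.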
This insight enables smarter ways of updating models and surrounding business rules. For example, the retailer might decide to track which customer segments have changed their preferences for this expensive health product in order to be able to run experiments on what products they now prefer. One such experiment might be to add a business rule to boost the ranking of budget-minded health related products to see if price sensitivity was the primary change. Another action might be to retrain the model on data collected since COVID-19 first affected the market, and then test it in an A/B test in production, perhaps starting with this flagged customer segment or just for queries related to health products. These actions need to be approached holistically to carefully track dependencies with other key business functions supported by ML models, including sales forecasting and inventory management.
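One way to sketch such a boosting rule as a post-processing step on the model's output (the product titles, relevance scores, and the `apply_boost` helper are all hypothetical):

```python
def apply_boost(results, boost_terms, boost=0.2):
    """Re-rank search results, adding a fixed score bonus to items whose
    title contains any boosted term (e.g. 'budget')."""
    def boosted_score(item):
        title, score = item
        bonus = boost if any(t in title.lower() for t in boost_terms) else 0.0
        return score + bonus

    return sorted(results, key=boosted_score, reverse=True)

# Hypothetical (title, relevance score) pairs from the base search model.
results = [
    ("Premium air purifier", 0.90),
    ("Budget air purifier", 0.75),
    ("Vitamin C tablets", 0.60),
]

reranked = apply_boost(results, boost_terms=["budget"])
```

Keeping the rule outside the model makes the experiment easy to switch on for one customer segment in an A/B test and to remove once the model is retrained.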
While the economy continues to experience the fallout from COVID-19 and ML models for predicting behavior need to change with it, companies should take a proactive approach and leverage new technology that can help with these fluctuations. Advances in technology that explain the inner workings of ML models will help businesses keep up with changing consumer behaviors during this unprecedented time and help AI systems manage concept drift effectively.