TruEra Research: Trustworthy ML Systems

It’s not just about being explainable. ML systems must be debuggable, understandable, and trustworthy.

To trust ML systems, we need to be able to understand them and have confidence in their real-world performance. Our research on the trustworthiness of ML systems falls into three categories:

Data & Model Quality
Can we ensure that a model's behavior matches intuition? Is it fair, robust, and stable? Does it produce consistent and accurate explanations?
System Performance
How can we compute explanations quickly yet accurately at scale? (A minimal sketch of one explanation method appears after this list.) What are early indicators of how a model will perform in unfamiliar circumstances?
Human-centric ML
Explanations need to be understandable to humans. How do people interact with measures of model intelligence? How do we visualize the inner workings of complex models?
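
To make the notion of an "explanation" concrete, here is a minimal sketch of occlusion-based feature attribution, one simple way to compute per-feature explanations. This is an illustrative example only, not TruEra's method; the names (occlusion_attributions, model_fn) and the toy linear model are hypothetical.

    import numpy as np

    def occlusion_attributions(model_fn, x, baseline):
        # Score each feature by how much the prediction changes when
        # that feature is replaced with a baseline ("occluded") value.
        base_pred = model_fn(x)
        attributions = np.zeros_like(x, dtype=float)
        for i in range(len(x)):
            perturbed = x.copy()
            perturbed[i] = baseline[i]
            attributions[i] = base_pred - model_fn(perturbed)
        return attributions

    # Hypothetical toy model: a linear scorer, so each attribution
    # exactly recovers that feature's contribution w_i * (x_i - b_i).
    weights = np.array([0.5, -1.0, 2.0])
    model_fn = lambda x: float(weights @ x)

    x = np.array([1.0, 2.0, 3.0])
    baseline = np.zeros_like(x)
    print(occlusion_attributions(model_fn, x, baseline))
    # -> [ 0.5 -2.   6. ]

Methods such as SHAP or integrated gradients refine this basic idea; the scalability question above concerns computing such attributions efficiently across many features and instances.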

Publications

In the media