TruEra Research: Explainable ML

A core research direction at TruEra is robustly explaining ML models so that practitioners can understand, introspect, and trust them.

TruEra solutions are based on years of explainability research conducted at Carnegie Mellon University. We continue to view explainability as the backbone of trust in ML systems.

Publications

In the media