Friends Don’t Let Friends Deploy Black-Box Models: The Importance of Intelligibility in Machine Learning for Healthcare by Dr. Rich Caruana

Oct 24, 2017

9:30 am - 10:30 am

Orchard View Room, Discovery Building


Rich Caruana, PhD

Microsoft Research

Abstract:  In machine learning, a tradeoff must often be made between accuracy and intelligibility: the most accurate models usually are not very intelligible (e.g., deep nets, boosted trees, and random forests), and the most intelligible models usually are less accurate (e.g., linear or logistic regression).  This tradeoff often limits the accuracy of models that can be safely deployed in mission-critical applications such as healthcare, where being able to understand, validate, edit, and ultimately trust a learned model is important.  We have been working on a learning method based on generalized additive models (GAMs) that is often as accurate as full-complexity models, yet even more intelligible than linear models.  This not only makes it easy to understand what a model has learned, but also makes it easier to edit the model when it learns inappropriate things because of unexpected landmines in the data.  Making it possible for experts to understand a model and interactively repair it is critical because most data has these landmines.  In the talk I'll present two healthcare case studies where these high-accuracy GAMs discover surprising patterns in the data that would have made deploying a black-box model risky.
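A GAM of the kind the abstract describes predicts via a sum of per-feature shape functions, y ≈ β₀ + Σⱼ fⱼ(xⱼ), so each feature's learned effect can be plotted and inspected on its own. As a minimal illustrative sketch (not the speaker's actual method, which fits the shape functions with more powerful learners), the additive structure can be fit with plain NumPy using binned step functions and cyclic backfitting; the synthetic data and all names below are this example's own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic additive ground truth: y = x0^2 + sin(3*x1) + noise.
n = 2000
X = rng.uniform(-1, 1, size=(n, 2))
y = X[:, 0] ** 2 + np.sin(3 * X[:, 1]) + rng.normal(0, 0.1, n)

# Represent each shape function f_j as a step function over quantile bins.
n_bins = 20
edges = [np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)) for j in range(2)]
bins = [np.clip(np.searchsorted(edges[j], X[:, j]) - 1, 0, n_bins - 1)
        for j in range(2)]

intercept = y.mean()
shape = [np.zeros(n_bins) for _ in range(2)]  # per-bin values of f_j

# Cyclic backfitting: refit each f_j against the partial residual that
# remains after subtracting the intercept and the other shape function.
for _ in range(20):
    for j in range(2):
        partial = y - intercept - shape[1 - j][bins[1 - j]]
        for b in range(n_bins):
            mask = bins[j] == b
            if mask.any():
                shape[j][b] = partial[mask].mean()
        shape[j] -= shape[j].mean()  # center f_j so the intercept is identifiable

# Each shape[j] is now a directly plottable, editable 1-D effect curve --
# the intelligibility (and repairability) the abstract emphasizes.
pred = intercept + shape[0][bins[0]] + shape[1][bins[1]]
mse_gam = np.mean((y - pred) ** 2)
mse_mean = np.mean((y - intercept) ** 2)
```

Because each fⱼ is one-dimensional, an expert who spots a landmine (say, an artifactual dip in the risk curve) can edit that shape function directly without retraining the rest of the model.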
Bio: Rich Caruana is a Senior Researcher at Microsoft Research. Before joining Microsoft, Rich was on the faculty in the Computer Science Department at Cornell University and at UCLA's Medical School.  Rich's Ph.D. is from Carnegie Mellon University, where he worked with Tom Mitchell and Herb Simon.  His thesis on Multi-Task Learning helped create interest in a new subfield of machine learning called Transfer Learning.  Rich received an NSF CAREER Award in 2004 (for Meta Clustering), best paper awards in 2005 (with Alex Niculescu-Mizil), 2007 (with Daria Sorokina), and 2014 (with Todd Kulesza, Saleema Amershi, Danyel Fisher, and Denis Charles), co-chaired KDD in 2007 (with Xindong Wu), and serves as area chair for NIPS, ICML, and KDD.  His current research focus is on learning for medical decision making, deep learning, and computational ecology.

Registration Information

Registration not required