Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance

Authors

  • Marco Túlio Ribeiro
  • Sameer Singh
  • Carlos Guestrin
Abstract

At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model’s behavior. Assumed in this question are three properties of the interpretable output: coverage, precision, and effort. Coverage refers to how often humans think they can predict the model’s behavior, precision to how accurate humans are in those predictions, and effort to either the up-front cost of interpreting the model or the cost of making predictions about its behavior.
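
For concreteness, coverage and precision can be read as empirical quantities over a sample of instances. The following minimal Python sketch (not from the paper; `explanation_applies`, `explanation_predicts`, and `model` are hypothetical callables) shows one way to estimate them:

```python
def coverage_and_precision(explanation_applies, explanation_predicts, model, instances):
    """Estimate coverage and precision of an explanation on a sample.
    explanation_applies(x) -> bool: would a human venture a prediction on x?
    explanation_predicts(x): the behavior the explanation implies for x.
    model(x): the black-box model's actual output on x."""
    covered = [x for x in instances if explanation_applies(x)]
    coverage = len(covered) / len(instances)   # how often a prediction is attempted
    if not covered:
        return coverage, 0.0
    hits = sum(explanation_predicts(x) == model(x) for x in covered)
    return coverage, hits / len(covered)       # precision: how often it is right
```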

Similar articles

Plucked String “Nothing Else Matters” using Karplus-Strong Synthesis

With the advent of physical modeling, the implementation of instrument synthesis has become easier. Ironically, the KS algorithm was discovered as a simple computational technique that seemingly had nothing to do with physics [4]. It was Julius Smith and David Jaffe who first realized the potential of the KS algorithm and its relation to the physics of the plucked string. With the KS filter an ...

Full text
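
As a rough illustration of the algorithm this entry describes (a generic Karplus-Strong sketch, not the authors' implementation), a noise-filled delay line fed back through a two-point average reproduces the plucked-string decay:

```python
from collections import deque
import random

def karplus_strong(frequency, duration, sample_rate=44100):
    """Karplus-Strong pluck: a delay line initialized with noise, whose
    output is fed back through a two-point averaging (lowpass) filter."""
    n = int(sample_rate / frequency)                           # delay length sets the pitch
    buf = deque(random.uniform(-1.0, 1.0) for _ in range(n))   # the "pluck" is white noise
    out = []
    for _ in range(int(duration * sample_rate)):
        first = buf.popleft()
        sample = 0.5 * (first + buf[0])   # averaging filter damps high partials over time
        buf.append(sample)                # feed the filtered sample back into the delay line
        out.append(sample)
    return out

samples = karplus_strong(440.0, 1.0)      # one second of an A4 pluck
```

The delay-line length sets the pitch, and the averaging filter supplies the frequency-dependent damping that, as the entry notes, ties this originally ad-hoc computational trick to the physics of a vibrating string.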

Local Interpretable Model-Agnostic Explanations for Music Content Analysis

The interpretability of a machine learning model is essential for gaining insight into model behaviour. While some machine learning models (e.g., decision trees) are transparent, the majority of models used today are still black-boxes. Recent work in machine learning aims to analyse these models by explaining the basis of their decisions. In this work, we extend one such technique, called local...

Full text
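
For orientation, the core of the LIME technique this entry extends can be sketched as a locally weighted linear surrogate. The snippet below is a simplified rendering under stated assumptions (`model` is a hypothetical black box scoring a batch of binary feature vectors), not the paper's music-specific extension:

```python
import numpy as np

def lime_weights(model, x, n_samples=1000, kernel_width=0.75, rng=None):
    """LIME-style sketch: perturb a binary feature vector x, weight each
    perturbation by proximity to x, and fit a weighted linear surrogate
    whose coefficients act as per-feature explanation weights."""
    rng = rng or np.random.default_rng(0)
    d = len(x)
    masks = rng.integers(0, 2, size=(n_samples, d))        # random on/off perturbations
    z = masks * np.asarray(x)                              # zero out the masked features
    distances = (masks == 0).mean(axis=1)                  # fraction of features removed
    weights = np.exp(-(distances ** 2) / kernel_width**2)  # exponential proximity kernel
    y = model(z)                                           # black-box scores for each sample
    # Weighted least squares on the binary masks: (M^T W M) beta = M^T W y
    mw = masks * weights[:, None]
    beta, *_ = np.linalg.lstsq(mw.T @ masks, mw.T @ y, rcond=None)
    return beta                                            # one weight per feature
```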

Anchors: High-Precision Model-Agnostic Explanations

We introduce a novel model-agnostic system that explains the behavior of complex models with high-precision rules called anchors, representing local, “sufficient” conditions for predictions. We propose an algorithm to efficiently compute these explanations for any black-box model with high-probability guarantees. We demonstrate the flexibility of anchors by explaining a myriad of different mode...

Full text
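
The quantity an anchor must satisfy can be sketched directly: hold the anchored features fixed, perturb everything else, and measure how often the model's prediction is unchanged. This naive Monte Carlo sketch (with a hypothetical `sample_fn` perturbation distribution) only illustrates the estimated quantity; the paper's algorithm adds bandit-style sampling to obtain the high-probability guarantees:

```python
import numpy as np

def estimate_anchor_precision(model, anchor_features, x, sample_fn, n_samples=1000):
    """Estimate the precision of a candidate anchor on instance x.
    anchor_features: indices of the features the anchor holds fixed.
    sample_fn(x): hypothetical perturbation distribution returning a
    mutable array; model(z): the black-box prediction for z."""
    target = model(x)
    agree = 0
    for _ in range(n_samples):
        z = sample_fn(x)                          # draw a perturbed instance
        z[anchor_features] = np.asarray(x)[anchor_features]  # clamp the anchor's conditions
        agree += (model(z) == target)             # did the prediction survive?
    return agree / n_samples                      # the anchor's estimated precision
```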

Machine Learning Model Interpretability for Precision Medicine

Interpretability of machine learning models is critical for data-driven precision medicine efforts. However, highly predictive models are generally complex and are difficult to interpret. Here using Model-Agnostic Explanations algorithm, we show that complex models such as random forest can be made interpretable. Using MIMIC-II dataset, we successfully predicted ICU mortality with 80% balanced ...

Full text

Cortical Learning via Prediction

What is the mechanism of learning in the brain? Despite breathtaking advances in neuroscience, and in machine learning, we do not seem close to an answer. Using Valiant’s neuronal model as a foundation, we introduce PJOIN (for “predictive join”), a primitive that combines association and prediction. We show that PJOIN can be implemented naturally in Valiant’s conservative, formal model of corti...

Full text


Journal:
  • CoRR

Volume abs/1611.05817  Issue

Pages  -

Publication date 2016