Explaining Machine Learning Decisions
Abstract
The operations of deep networks are widely acknowledged to be inscrutable. The growing field of Explainable AI (XAI) has emerged in direct response to this problem. However, owing to the nature of the opacity in question, XAI has been forced to prioritise interpretability at the expense of completeness, and even realism, so that its explanations are frequently interpretable without being underpinned by a more comprehensive account faithful to the way a network computes its predictions. While this has been taken to be a shortcoming of XAI, I argue that it is broadly the right approach.
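To make the abstract's contrast concrete, here is a minimal, purely illustrative sketch (not the author's example) of the kind of post-hoc explanation XAI commonly offers: a local linear surrogate fitted around one input to an opaque model, whose coefficients are presented as feature attributions. The names `black_box` and `explain_locally` are assumptions introduced for illustration; the surrogate's weights are interpretable, but nothing guarantees they are faithful to how the underlying function actually computes its output.

```python
# Illustrative sketch of a surrogate-style post-hoc explanation: a local
# linear model fitted around one input, whose coefficients are offered as
# the "explanation" even though they need not mirror the black box's
# internal computation. All names here are illustrative assumptions.
import numpy as np

def black_box(X):
    # Stand-in for an opaque network: a nonlinear function of three inputs.
    return np.tanh(2.0 * X[:, 0] - X[:, 1] ** 2 + 0.5 * X[:, 2])

def explain_locally(f, x, n_samples=500, scale=0.1, seed=0):
    """Fit a linear surrogate to f in a neighbourhood of x; its weights
    are the per-feature attributions returned to the user."""
    rng = np.random.default_rng(seed)
    X = x + scale * rng.standard_normal((n_samples, x.shape[0]))
    y = f(X)
    # Least-squares fit with an intercept column appended.
    A = np.hstack([X, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1]  # drop the intercept, keep per-feature weights

x0 = np.array([0.3, -0.2, 1.0])
print("local attributions:", explain_locally(black_box, x0))
```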
Similar resources
Explaining machine learning models in sales predictions
The complexity of business dynamics often forces decision-makers to make decisions based on subjective mental models, reflecting their experience. However, research has shown that companies perform better when they apply data-driven decision-making. This creates an incentive to introduce intelligent, data-based decision models, which are comprehensive and support the interactive evaluation of dec...
Explaining Complex Scheduling Decisions
The work presented in this paper describes the explanation facility of an intelligent scheduling software framework that has been customized and deployed in a variety of domains. The customizability of the framework allows the software to develop a valid schedule that reflects each domain’s specific preferences and constraints. In all domains, the software quickly solves a complex scheduling pr...
Explaining Best Decisions via Argumentation
This paper presents an argumentation-based multi-attribute decision making model, where decisions made can be explained in natural language. More specifically, an explanation for a decision is obtained from a mapping between the given decision framework and an argumentation framework, such that best decisions correspond to admissible sets of arguments, and the explanation is generated automatic...
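For readers unfamiliar with the terminology, the following is a brief sketch of the standard Dung-style notion of admissibility the abstract invokes: a set of arguments is admissible when it is conflict-free and defends each of its members against every attacker. The tiny framework and the helper names (`is_conflict_free`, `defends`, `admissible_sets`) are invented for illustration and do not reproduce the paper's decision-to-argumentation mapping.

```python
# Minimal sketch of admissible sets in an abstract argumentation framework.
# An admissible set is conflict-free and defends each of its members.
from itertools import combinations

def is_conflict_free(S, attacks):
    # No argument in S attacks another argument in S.
    return not any((a, b) in attacks for a in S for b in S)

def defends(S, a, arguments, attacks):
    # S defends a if every attacker of a is attacked by some member of S.
    attackers = [b for b in arguments if (b, a) in attacks]
    return all(any((c, b) in attacks for c in S) for b in attackers)

def admissible_sets(arguments, attacks):
    result = []
    for r in range(len(arguments) + 1):
        for S in map(set, combinations(arguments, r)):
            if is_conflict_free(S, attacks) and all(
                defends(S, a, arguments, attacks) for a in S
            ):
                result.append(S)
    return result

# Invented example: a attacks b, b attacks c.
args = ["a", "b", "c"]
attacks = {("a", "b"), ("b", "c")}
print(admissible_sets(args, attacks))
# -> [set(), {'a'}, {'a', 'c'}]; {'b'} is not admissible (attacked by a, undefended)
```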
Auto Classifier Explaining Customers a Machine-Learning Model
When we explain to customers that the artificial intelligence approach of our products automatically adapts document classifiers to training documents by applying statistical machine learning, their reaction is similar to what it would be if we told them about an artificial intelligence in car brakes. Most likely they would dislike it, because they want full control over their data processors. Hence, we sell...
Journal
Journal title: Philosophy of Science
Year: 2022
ISSN: 0031-8248, 1539-767X
DOI: https://doi.org/10.1017/psa.2021.13