Interpretable Encoding of Densities Using Possibilistic Logic
Abstract
Probability density estimation from data is a widely studied problem. Often, the primary goal is to faithfully mimic the underlying empirical density; having an interpretable model that allows insight into why certain predictions were made is of secondary importance. Using logic-based formalisms, such as Markov logic, can help with interpretability, but even in Markov logic it can be difficult to gain insight into a model's behavior due to interactions between the logical formulas used to specify the model. This paper explores an alternative approach to representing densities that makes use of possibilistic logic. Concretely, we propose a novel way to transform a learned density tree into a possibilistic logic theory. An advantage of our transformation is that it permits performing both MAP and, surprisingly, marginal inference with the converted possibilistic logic theory. At the same time, we retain the benefits conferred by using possibilistic logic, such as the ability to compact the theory and the interpretability of the model.
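To make the idea concrete, the following is a minimal sketch of one way such a conversion could look; the weighting scheme and data structures here are illustrative assumptions, not the transformation defined in the paper. Each leaf of a binary density tree corresponds to a conjunction of literals (its path from the root), and we emit one possibilistic clause per leaf, weighted so that worlds falling into higher-probability leaves are penalized less. MAP inference then amounts to finding a world of maximal possibility.

```python
from itertools import product

# Toy density tree over binary variables a, b: each leaf is the path of
# (variable, value) tests leading to it, mapped to its probability mass.
leaves = {
    (("a", True), ("b", True)): 0.5,
    (("a", True), ("b", False)): 0.3,
    (("a", False),): 0.2,  # the tree need not split on b when a is False
}

def tree_to_possibilistic_theory(leaves):
    """Encode each leaf as a (clause, necessity-weight) pair.

    The clause is the negation of the leaf's path ("the world is not in
    this leaf"); its weight is 1 - p_leaf / p_max, an illustrative
    normalization (not the paper's), so the most probable leaf incurs
    no penalty at all.
    """
    p_max = max(leaves.values())
    theory = []
    for path, p in leaves.items():
        clause = tuple((var, not val) for var, val in path)  # negated path
        theory.append((clause, 1.0 - p / p_max))
    return theory

def possibility(theory, world):
    """pi(world) = 1 - max weight among violated clauses (1.0 if none).

    A clause (a disjunction of literals) is violated when every one of
    its literals is false in the world.
    """
    violated = [w for clause, w in theory
                if all(world.get(var) != val for var, val in clause)]
    return 1.0 - max(violated, default=0.0)

def map_inference(theory, variables):
    """MAP by brute-force enumeration: return a world of maximal possibility."""
    best = None
    for values in product([True, False], repeat=len(variables)):
        world = dict(zip(variables, values))
        pi = possibility(theory, world)
        if best is None or pi > best[1]:
            best = (world, pi)
    return best

theory = tree_to_possibilistic_theory(leaves)
world, pi = map_inference(theory, ["a", "b"])
print(world, pi)  # the tree's mode, a=True b=True, with possibility 1.0
```

Note that the induced possibility ranking of worlds mirrors the probability ranking of the tree's leaves, which is why MAP inference on the converted theory recovers the mode of the original density; marginal inference, as the abstract notes, requires more care than this sketch shows.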
Similar papers
Induction of Interpretable Possibilistic Logic Theories from Relational Data
The field of Statistical Relational Learning (SRL) is concerned with learning probabilistic models from relational data. Learned SRL models are typically represented using some kind of weighted logical formulas, which make them considerably more interpretable than those obtained by e.g. neural networks. In practice, however, these models are often still difficult to interpret correctly, as they...
Encoding Markov logic networks in Possibilistic Logic
Markov logic uses weighted formulas to compactly encode a probability distribution over possible worlds. Despite the use of logical formulas, Markov logic networks (MLNs) can be difficult to interpret, due to the often counter-intuitive meaning of their weights. To address this issue, we propose a method to construct a possibilistic logic theory that exactly captures what can be derived from a ...
A NOTE TO INTERPRETABLE FUZZY MODELS AND THEIR LEARNING
In this paper we turn the attention to a well-developed theory of fuzzy/linguistic models that are interpretable and, moreover, can be learned from data. We present four different situations demonstrating both the interpretability and the learning abilities of these models.
Product-based Causal Networks and Quantitative Possibilistic Bases
In possibility theory, there are two kinds of possibilistic causal networks, depending on whether possibilistic conditioning is based on the minimum or on the product operator. Similarly, there are two kinds of possibilistic logic: standard (min-based) possibilistic logic and quantitative (product-based) possibilistic logic. Recently, several equivalent transformations between standard possibilistic...
Introducing possibilistic logic in ILP for dealing with exceptions
In this paper we propose a new formalization of the inductive logic programming (ILP) problem for a better handling of exceptions. It is now encoded in first-order possibilistic logic. This allows us to handle exceptions by means of prioritized rules, thus taking lessons from non-monotonic reasoning. Indeed, in classical first-order logic, the exceptions of the rules that constitute a hypothesi...