Efficient Decomposition of Bayesian Networks With Non-graded Variables

Authors

Abstract

Elicitation, estimation and exact inference in Bayesian Networks (BNs) are often difficult because the dimension of each Conditional Probability Table (CPT) grows exponentially with the number of parent variables. The Noisy-MAX decomposition has been proposed to break down a large CPT into several smaller CPTs by exploiting the assumption of causal independence, i.e., the absence of interaction among parent variables. In this way, conditional probabilities can be elicited or estimated more easily, and the computational burden of the join tree algorithm for exact inference is reduced. Unfortunately, the Noisy-MAX decomposition is suited to graded variables only, i.e., ordinal variables with the lowest state as reference, but real-world applications of BNs may also involve non-graded variables, such as those with a reference state in the middle of the sample space (double-graded variables) or with two or more unordered non-reference states (multi-valued nominal variables). In this paper, we propose the causal independence decomposition, which includes generalizations of the Noisy-MAX suited to double-graded and multi-valued nominal variables. While the general definition of a BN implicitly assumes the presence of all possible interactions among parent variables, our proposal treats interaction as a feature that can be added upon need. The impact of our proposal is investigated on a published BN for the diagnosis of acute cardiopulmonary diseases.
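
As a rough illustration of the causal-independence idea summarized in the abstract, the Python sketch below composes a full CPT from per-parent parameters under the standard Noisy-MAX combination (child state = maximum of the states produced independently by each parent). The names noisy_max_cpt, parent_params and the numerical values are illustrative assumptions, not taken from the paper; the point is only that the per-parent parameters grow linearly in the number of parents while the composed CPT grows exponentially.

import itertools
import numpy as np

def noisy_max_cpt(parent_params, n_child_states):
    """Compose P(child | parents) from per-parent parameters via Noisy-MAX.

    parent_params[i][s] is the distribution over the child's (ordered)
    states produced by parent i alone when it is in state s. Under causal
    independence the child takes the maximum of these individual effects,
    so P(child <= y) factorises over the parents.
    """
    parent_cards = [len(p) for p in parent_params]
    cpt = {}
    for combo in itertools.product(*[range(c) for c in parent_cards]):
        cum = np.ones(n_child_states)
        for i, s in enumerate(combo):
            cum *= np.cumsum(parent_params[i][s])   # P(effect_i <= y)
        cpt[combo] = np.diff(np.concatenate(([0.0], cum)))
    return cpt

# Two ternary parents, binary child: 2 * 3 * 2 = 12 per-parent parameters
# describe a composed CPT with 3 * 3 * 2 = 18 entries; the gap widens
# exponentially as parents are added.
params = [
    [[1.0, 0.0], [0.7, 0.3], [0.2, 0.8]],   # parent 1 (state 0 = inactive)
    [[1.0, 0.0], [0.6, 0.4], [0.1, 0.9]],   # parent 2
]
full_cpt = noisy_max_cpt(params, n_child_states=2)
print(full_cpt[(2, 1)])   # P(child | parent1 = 2, parent2 = 1)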


Related Articles

Learning Bayesian Networks with Thousands of Variables

We present a method for learning Bayesian networks from data sets containing thousands of variables without the need for structure constraints. Our approach is made of two parts. The first is a novel algorithm that effectively explores the space of possible parent sets of a node. It guides the exploration towards the most promising parent sets on the basis of an approximated score function that...


Learning Linear Bayesian Networks with Latent Variables

This work considers the problem of learning linear Bayesian networks when some of the variables are unobserved. Identifiability and efficient recovery from low-order observable moments are established under a novel graphical constraint. The constraint concerns the expansion properties of the underlying directed acyclic graph (DAG) between observed and unobserved variables in the network, and it...


Hybrid Bayesian Networks with Linear Deterministic Variables

When a hybrid Bayesian network has conditionally deterministic variables with continuous parents, the joint density function for the continuous variables does not exist. Conditional linear Gaussian distributions can handle such cases when the continuous variables have a multi-variate normal distribution and the discrete variables do not have continuous parents. In this paper, operations require...


Lossless Decomposition of Bayesian Networks

In this paper, we study the problem of information preservation when decomposing a single Bayesian network into a set of smaller Bayesian networks. We present a method that losslessly decomposes a Bayesian network so that no conditional independency information is lost and no extraneous conditional independency information is introduced during the decomposition.


Learning Treewidth-Bounded Bayesian Networks with Thousands of Variables

We present a method for learning treewidth-bounded Bayesian networks from data sets containing thousands of variables. Bounding the treewidth of a Bayesian network greatly reduces the complexity of inferences. Yet, being a global property of the graph, it considerably increases the difficulty of the learning process. Our novel algorithm accomplishes this task, scaling both to large domains and ...



Journal

Journal title: International Journal of Statistics and Probability

Year: 2021

ISSN: 1927-7032, 1927-7040

DOI: https://doi.org/10.5539/ijsp.v10n2p52