Hyperplane Priors
Author
Abstract
The requirement of transformation invariance of a probability distribution is employed to derive prior probabilities for the coefficients of the equation describing a hyperplane. In two dimensions this is a straight line, in three dimensions an ordinary plane, and so on. We treat the general case of n dimensions and propose a procedure for normalizing the resulting distributions in order to make them proper and appropriate for model comparison problems.
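For orientation, here is a minimal sketch of the familiar two-dimensional (straight-line) case, based on the standard invariance argument rather than quoted from this paper: parametrizing the line y = ax + b by its angle and its perpendicular distance from the origin, and taking the prior uniform in these rotation- and translation-invariant quantities, induces a non-uniform prior on the slope and intercept.

\[
a = \tan\theta, \qquad
b_\perp = \frac{b}{\sqrt{1+a^{2}}}, \qquad
\left|\frac{\partial(\theta,\,b_\perp)}{\partial(a,\,b)}\right| = \frac{1}{\bigl(1+a^{2}\bigr)^{3/2}}, \qquad
p(a,b)\,\mathrm{d}a\,\mathrm{d}b \;\propto\; \frac{\mathrm{d}a\,\mathrm{d}b}{\bigl(1+a^{2}\bigr)^{3/2}} .
\]

The resulting density is improper (it integrates to a finite value over the slope a but diverges over the intercept b), which illustrates why a normalization step, such as the one proposed in the abstract, is needed before such priors can be used in model comparison.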
Similar articles
Bayesian Sample Size Determination for Longitudinal Studies with Continuous Response using Marginal Models
Longitudinal study designs are common in many areas of scientific research, especially in the medical, social, and economic sciences, because longitudinal studies allow researchers to measure changes in each individual over time and often have higher statistical power than cross-sectional studies. Choosing an appropriate sample size is a crucial step in a successful study. A st...
Tutte polynomials of hyperplane arrangements and the finite field method
The Tutte polynomial is a fundamental invariant associated to a graph, matroid, vector arrangement, or hyperplane arrangement, which answers a wide variety of questions about its underlying object. This short survey focuses on some of the most important results on Tutte polynomials of hyperplane arrangements. We show that many enumerative, algebraic, geometric, and topological invariants of a h...
Generalized Thomas hyperplane sections and relations between vanishing cycles
R. Thomas (with a remark of B. Totaro) proved that the Hodge conjecture is essentially equivalent to the existence of a hyperplane section, called a generalized Thomas hyperplane section, such that the restriction to it of a given primitive Hodge class does not vanish. We study the relations between the vanishing cycles in the cohomology of a general fiber, and show that each relation between t...
Bayes Optimal Hyperplanes! Maximal Margin Hyperplanes
Maximal margin classifiers are a core technology in modern machine learning. They have strong theoretical justifications and have shown empirical success. We provide an alternative justification for maximal margin hyperplane classifiers by relating them to Bayes optimal classifiers that use Parzen window estimates with Gaussian kernels. For any value of the smoothing parameter (the width o...
Hyperplane Neural Codes and the Polar Complex
Hyperplane codes are a class of convex codes that arise as the output of a one layer feed-forward neural network. Here we establish several natural properties of nondegenerate hyperplane codes, in terms of the polar complex of the code, a simplicial complex associated to any combinatorial code. We prove that the polar complex of a non-degenerate hyperplane code is shellable and show that all cu...