Search results for: probability vector

Number of results: 408,882

2015
Srikrishna Karanam Yang Li Richard J. Radke

The basic idea of KLD-sampling [3] is to choose the number of particles at each iteration such that the error between the true posterior probability density and the probability density approximated by the particle filter is less than ν with probability 1 − δ. At any particular iteration, suppose we draw n particles from a discrete probability distribution that has k distinct bins. Defining the ...
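The bound sketched in this abstract can be made concrete. A minimal sketch follows, assuming the Wilson-Hilferty chi-square approximation used in Fox's KLD-sampling derivation; the function name and the example parameter values are illustrative, not taken from the paper above.

```python
import math

def kld_sample_count(k, epsilon, z_quantile):
    """Particles needed so that, with probability 1 - delta, the KL
    divergence between the particle approximation and the true posterior
    stays below epsilon. k is the number of occupied histogram bins;
    z_quantile is the upper (1 - delta) quantile of the standard normal."""
    if k <= 1:
        return 1
    c = 2.0 / (9.0 * (k - 1))
    return math.ceil((k - 1) / (2.0 * epsilon)
                     * (1.0 - c + math.sqrt(c) * z_quantile) ** 3)

# Illustrative call: k = 50 occupied bins, epsilon = 0.05,
# z = 2.326 corresponds to delta = 0.01.
print(kld_sample_count(50, 0.05, 2.326))
```

As expected, the required particle count grows roughly linearly in the number of occupied bins k, which is what lets KLD-sampling shrink the particle set when the posterior is concentrated.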

2000
Roseanna M. Neupauer Brian Borchers

The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upp...

2005
Michael Collins

• Σ is a set of output symbols, for example Σ = {a, b}
• Θ is a vector of parameters. It contains three types of parameters:
  – π_j for j = 1 … N is the probability of choosing state j as an initial state. Note that ∑_{j=1}^{N} π_j = 1.
  – a_{j,k} for j = 1 … (N − 1), k = 1 … N, is the probability of transitioning from state j to state k. Note that for all j, ∑_{k=1}^{N} a_{j,k} = 1.
  – b_j(o) for j = 1...
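The parameter vector Θ described above can be sketched in code. This is a minimal illustration, assuming N = 3 hidden states and the two-symbol alphabet Σ = {a, b} from the example; the row-normalization checks mirror the two sum-to-one constraints stated in the snippet.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3
sigma = ["a", "b"]

def random_stochastic(rows, cols):
    """Rows drawn uniformly at random, then normalized to sum to 1."""
    m = rng.random((rows, cols))
    return m / m.sum(axis=1, keepdims=True)

pi = random_stochastic(1, N)[0]       # pi[j]: initial-state probabilities
A = random_stochastic(N, N)           # A[j, k]: transition j -> k
B = random_stochastic(N, len(sigma))  # B[j, o]: emit symbol o in state j

# The constraints from the definition of Theta:
assert np.isclose(pi.sum(), 1.0)        # sum_j pi_j = 1
assert np.allclose(A.sum(axis=1), 1.0)  # for all j, sum_k a_{j,k} = 1
assert np.allclose(B.sum(axis=1), 1.0)  # emission rows are distributions
```

Note that the snippet restricts a_{j,k} to j = 1 … (N − 1), which suggests state N is treated as final; the square matrix here is a simplification.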

2008
Usha Devi

The quantum analogue of the classical characteristic function for a spin 1/2 assembly is considered and the probability mass function of the random vector associated with the assembly is derived. It is seen that the positive regions of Wigner and Margenau-Hill quasi distributions for the three components of spin, correspond to a trivariate probability mass function. We identify the domain of th...

2001
David Horn Assaf Gottlieb

We propose a novel clustering method that is an extension of ideas inherent to scale-space clustering and support-vector clustering. Like the latter, it associates every data point with a vector in Hilbert space; like the former, it puts emphasis on their total sum, which equals the scale-space probability function. The novelty of our approach is the study of an operator in Hilbert space,...

Journal: :Information Fusion 2016
Ronald R. Yager Frederick E. Petry

Our objective here is to obtain quality-fused values from multiple sources of probabilistic distributions, where quality is related to the lack of uncertainty in the fused value and the use of credible sources. We first introduce a vector representation for a probability distribution. With the aid of the Gini formulation of entropy, we show how the norm of the vector provides a measure of the c...

Journal: :CoRR 2018
Maiara F. Bollauf Vinay A. Vaishampayan Sueli I. Rodrigues Costa

We consider the problem of finding the closest lattice point to a vector in n-dimensional Euclidean space when each component of the vector is available at a distinct node in a network. Our objectives are (i) to minimize the communication cost and (ii) to obtain the error probability. The approximate closest lattice point considered here is the one obtained using the nearest-plane (Babai) algorithm. ...
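The nearest-plane (Babai) algorithm named in this abstract can be sketched in a few lines. This is a centralized sketch (the distributed-network setting of the paper is not modeled here), assuming the lattice is given by a full-rank basis matrix whose QR factorization supplies the upper-triangular system to back-substitute and round.

```python
import numpy as np

def babai_nearest_plane(basis, target):
    """Approximate closest lattice point to `target`, where the lattice
    is generated by the columns of `basis` (Babai's nearest-plane rule)."""
    Q, R = np.linalg.qr(basis)
    y = Q.T @ target
    n = basis.shape[1]
    z = np.zeros(n)
    # Back-substitute from the last coordinate, rounding at each step.
    for i in range(n - 1, -1, -1):
        z[i] = round((y[i] - R[i, i + 1:] @ z[i + 1:]) / R[i, i])
    return basis @ z

# For the integer lattice Z^2 the rule reduces to componentwise rounding.
print(babai_nearest_plane(np.eye(2), np.array([0.7, -1.2])))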

Journal: :Digital Signal Processing 2013
V. N. Hari G. V. Anand A. Benjamin Premkumar

This paper presents the formulation and performance analysis of four techniques for detection of a narrowband acoustic source in a shallow range-independent ocean using an acoustic vector sensor (AVS) array. The array signal vector is not known due to the unknown location of the source. Hence all detectors are based on a generalized likelihood ratio test (GLRT) which involves estimation of the ...

Journal: :Electr. Notes Theor. Comput. Sci. 2002
Marta Z. Kwiatkowska Rashid Mehmood Gethin Norman David Parker

Despite considerable effort, the state-space explosion problem remains an issue in the analysis of Markov models. Given structure, symbolic representations can result in very compact encoding of the models. However, a major obstacle for symbolic methods is the need to store the probability vector(s) explicitly in main memory. In this paper, we present a novel algorithm which relaxes these memor...

2008
Philip Schniter Justin Ziniel

A low-complexity recursive procedure is presented for model selection and minimum mean squared error (MMSE) estimation in linear regression. Emphasis is given to the case of a sparse parameter vector and fewer observations than unknown parameters. A Gaussian mixture is chosen as the prior on the unknown parameter vector. The algorithm returns both a set of high posterior probability models and ...
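The Gaussian-mixture prior mentioned above has a simple scalar analogue that illustrates the MMSE shrinkage behavior. A hedged sketch, assuming a two-component Bernoulli-Gaussian prior (coefficient is zero with probability 1 − p, else Gaussian) and a single noisy scalar observation; the variances and sparsity rate are illustrative values, not from the paper:

```python
import math

def mmse_estimate(y, p_active=0.1, var_x=1.0, var_n=0.01):
    """E[x | y] for y = x + noise, where x = 0 with prob 1 - p_active
    and x ~ N(0, var_x) otherwise; noise ~ N(0, var_n)."""
    def gauss(v, var):
        return math.exp(-v * v / (2 * var)) / math.sqrt(2 * math.pi * var)
    # Posterior probability that the coefficient is active (nonzero).
    num = p_active * gauss(y, var_x + var_n)
    post = num / (num + (1 - p_active) * gauss(y, var_n))
    # Conditional mean given "active" is the usual Wiener shrinkage of y.
    return post * var_x / (var_x + var_n) * y
```

Small observations are shrunk almost to zero (the "inactive" model dominates the posterior), while large observations pass through nearly unchanged, which is the qualitative behavior that makes mixture priors attractive for sparse vectors.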
