Search results for: probability vector

Number of results: 408882

2008
Sonjoy Das

Contents (excerpt): Chapter 1: Introduction (1.1 Outline; 1.2 Notation and Terminology); Chapter 2: Asymptotic Distribution for Polynomial Chaos Representation from Data (2.1 Motivation and Problem Description ...)

2009
Ushio Sumita

In a recent paper, Sumita and Rieders (1990) developed a new algorithm for computing the ergodic probability vector of large Markov chains. Decomposing the state space into M lumps, the algorithm generates a sequence of replacement processes on individual lumps in such a way that, in the limit, the ergodic probability vector of a replacement process on one lump is proportional ...
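The replacement-process algorithm itself is not reproduced in this excerpt; a minimal sketch of the object it computes, the ergodic (stationary) probability vector π satisfying πP = π, via plain power iteration on a small hypothetical chain (the Sumita-Rieders method targets chains far too large for this direct approach):

```python
import numpy as np

# Hypothetical 3-state transition matrix (each row sums to 1)
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

pi = np.full(3, 1 / 3)      # start from the uniform distribution
for _ in range(1000):       # power iteration: pi <- pi P
    pi = pi @ P

# pi now satisfies pi P ~= pi and sums to 1
```

For an ergodic chain this iteration converges to the unique stationary distribution regardless of the starting vector.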

2022

Designing a Comparative Model of Bank Credit Risk Using Neural Network Models, Survival Probability Function and Support Vector Machine

The sufficient number of noisy linear measurements for exact and approximate sparsity pattern/support set recovery in the high-dimensional setting is derived. Although this problem has been addressed in the recent literature, there are still considerable gaps between those results and the exact limits of perfect support set recovery. To reduce this gap, in this paper, the sufficient con...
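A minimal synthetic instance of the support recovery problem described above; the dimensions, sparsity level, and the naive correlation-based decoder are illustrative assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(3)
p, n, k = 100, 60, 3                 # ambient dimension, measurements, sparsity
support = {4, 17, 42}                # true support set (hypothetical)
x = np.zeros(p)
for i in support:
    x[i] = 1.0

A = rng.normal(size=(n, p)) / np.sqrt(n)   # random Gaussian measurement matrix
y = A @ x + 0.01 * rng.normal(size=n)      # n noisy linear measurements

# Naive decoder: keep the k indices whose columns correlate most with y
scores = np.abs(A.T @ y)
est = set(int(i) for i in np.argsort(scores)[-k:])
```

The question the paper studies is how large n must be, relative to p and k, for decoders of this kind (and optimal ones) to recover the support exactly or approximately.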

2014
Yu-Hao Chin Chang-Hong Lin Ernestasia Siahaan Jia-Ching Wang

For music emotion detection, this paper presents a music emotion verification system based on hierarchical sparse kernel machines. With the proposed system, we intend to verify whether a music clip conveys the happiness emotion. There are two levels in the hierarchical sparse kernel machines. In the first level, a set of acoustic features is extracted, and principal component analysis (PCA) ...
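A minimal sketch of the first-level PCA step mentioned above, assuming a matrix of per-clip acoustic feature vectors (the data, shapes, and component count are synthetic placeholders, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 20))   # 100 clips x 20 acoustic features (synthetic)

# Principal component analysis via SVD of the centered data matrix
centered = features - features.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

k = 5                                   # keep the top-5 principal components
reduced = centered @ Vt[:k].T           # reduced features fed to the kernel machines
```

The reduced matrix has shape (100, 5), one compressed feature vector per clip.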

2009
Vlasta Kaňková

Let ξ := ξ(ω) be an (s × 1) random vector defined on a probability space (Ω, S, P), and let F and PF be the distribution function and the probability measure corresponding to the random vector ξ. Let, moreover, g0(x, z), g10(y, z) be functions defined on Rn × Rs and Rn1 × Rs; fi(x, z), gi(y), i = 1, ..., m, functions defined on Rn × Rs and Rn1; and h := h(z) an (m × 1) vector function defined on Rs, h′(z) = (...

1999
Naoki Abe Philip M. Long

We consider the problem of maximizing the total number of successes while learning about a probability function determining the likelihood of a success. In particular, we consider the case in which the probability function is represented by a linear function of the attribute vector associated with each action/choice. In the scenario we consider, learning proceeds in trials and in each trial, th...
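A minimal sketch of this trial-by-trial setting, assuming the success probability is linear in the attribute vector; the weights, the five-choice structure, and the ε-greedy exploration rule are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([0.6, 0.3])                    # unknown linear probability function

X, y = [], []
for t in range(500):
    arms = rng.uniform(0, 1, size=(5, 2))        # attribute vectors of 5 choices
    if t < 50 or rng.random() < 0.1:             # explore at random
        a = int(rng.integers(5))
    else:                                        # exploit a least-squares estimate
        w_hat, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
        a = int(np.argmax(arms @ w_hat))
    p = arms[a] @ w_true                         # true success probability of choice a
    X.append(arms[a])
    y.append(float(rng.random() < p))            # observe success (1.0) or failure (0.0)
```

Each trial presents several choices, the learner picks one, observes a Bernoulli outcome, and refines its estimate of the linear probability function while trying to accumulate successes.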

Journal: CoRR 2011
Massimo Melucci

According to the probability ranking principle, the document set with the highest values of probability of relevance optimizes information retrieval effectiveness, provided the probabilities are estimated as accurately as possible. The key point of this principle is the separation of the document set into two subsets with a given level of fallout and the highest recall. If subsets of set measu...
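The principle can be illustrated in a few lines: given (hypothetical) estimated probabilities of relevance, presenting documents in descending order of probability maximizes expected recall at every cutoff:

```python
# Hypothetical estimated probabilities of relevance for five documents
docs = {"d1": 0.9, "d2": 0.2, "d3": 0.7, "d4": 0.4, "d5": 0.1}

# Probability ranking principle: rank by descending probability of relevance
ranking = sorted(docs, key=docs.get, reverse=True)
# ranking == ["d1", "d3", "d4", "d2", "d5"]
```

Any top-k prefix of this ranking is the k-document subset with the highest expected number of relevant documents, which is the optimality the principle asserts.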

2005
Mark A. Davenport

Standard classification algorithms aim to minimize the probability of making an incorrect classification. In many important applications, however, some kinds of errors are more important than others. In this report we review cost-sensitive extensions of standard support vector machines (SVMs). In particular, we describe cost-sensitive extensions of the C-SVM and the ν-SVM, which we denote the 2...
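The 2C-SVM and 2ν-SVM formulations are not reproduced in this excerpt; a minimal sketch of the underlying idea, penalizing the two error types with different costs via a weighted hinge loss minimized by subgradient descent on synthetic data (the costs, data, and optimizer are illustrative, not the report's formulation):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic two-class data; errors on class +1 will be penalized more heavily
X = np.vstack([rng.normal(1.0, 1.0, (50, 2)), rng.normal(-1.0, 1.0, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

C_pos, C_neg = 5.0, 1.0                 # asymmetric misclassification costs
cost = np.where(y > 0, C_pos, C_neg)    # per-example weight on the hinge loss

w, b = np.zeros(2), 0.0
for t in range(1, 2001):                # subgradient descent, decaying step size
    lr = 1.0 / t
    margin = y * (X @ w + b)
    viol = margin < 1                   # examples violating the margin
    grad_w = w - (cost[viol] * y[viol]) @ X[viol] / len(X)
    grad_b = -np.sum(cost[viol] * y[viol]) / len(X)
    w -= lr * grad_w
    b -= lr * grad_b
```

The asymmetric costs shift the decision boundary away from the expensive class, trading more errors of the cheap kind for fewer of the costly kind.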
