Search results for: separation hyperplanes
Number of results: 124,957
The concept of linear separability is used in the theory of neural networks and in pattern recognition methods. The term refers to examining whether learning sets (classes) can be separated by hyperplanes in a given feature space. A family of K disjoint learning sets can be transformed into K linearly separable sets by a ranked layer of binary classifiers. Problems of ranked-layer designi...
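The abstract above does not give the ranked-layer construction itself, but the underlying notion of linear separability can be tested with the classical perceptron rule: a minimal sketch, assuming two labeled point sets and a finite epoch budget (the function name and budget are illustrative, not from the paper).

```python
import numpy as np

def linearly_separable(X, y, epochs=1000, lr=1.0):
    """Perceptron-based test (illustrative helper): returns True if a
    separating hyperplane w.x + b = 0 is found within the epoch budget."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):          # labels yi in {-1, +1}
            if yi * (xi @ w + b) <= 0:    # point on wrong side (or on the plane)
                w += lr * yi * xi         # perceptron update
                b += lr * yi
                errors += 1
        if errors == 0:
            return True                   # every point correctly classified
    return False                          # inconclusive within the budget

# Two clearly separated clusters in the plane
X = np.array([[0., 0.], [0., 1.], [3., 3.], [4., 3.]])
y = np.array([-1, -1, 1, 1])
print(linearly_separable(X, y))  # True
```

By the perceptron convergence theorem the loop terminates for any separable set; for a non-separable set (e.g. XOR-labeled points) it exhausts the budget and returns False.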
We study the rest-frame instant form of a new formulation of relativistic perfect fluids in terms of Eulerian coordinates. After the separation of the relativistic center of mass from the relative variables on the Wigner hyperplanes, we define orientational and shape variables for the fluid, viewed as a relativistic extended deformable body, by introducing dynamical body frames. Finally we defi...
This paper investigates a procedure for identifying all efficient hyperplanes of the production possibility set (PPS). The procedure is based on a method recommended by Pekka J. Korhonen [8], who proposed using lexicographic parametric programming to recognize all efficient units in data envelopment analysis (DEA). In this paper we find efficient hyperplanes via the para...
Maximal margin classifiers are a core technology in modern machine learning. They have strong theoretical justifications and have shown empirical success. We provide an alternative justification for maximal margin hyperplane classifiers by relating them to Bayes optimal classifiers that use Parzen window estimates with Gaussian kernels. For any value of the smoothing parameter (the width o...
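The paper's margin argument is not reproduced in the truncated abstract, but the Parzen-window side of the comparison is easy to sketch: a minimal Gaussian-kernel Parzen classifier with equal class priors (the function and the sample data are illustrative assumptions, not the authors' construction).

```python
import numpy as np

def parzen_classify(x, X_pos, X_neg, sigma):
    """Classify x by comparing Gaussian Parzen-window density estimates
    of the two classes (equal priors assumed; illustrative sketch)."""
    def density(x, X):
        # mean of isotropic Gaussian kernels centred on the samples
        sq = np.sum((X - x) ** 2, axis=1)
        return np.mean(np.exp(-sq / (2.0 * sigma ** 2)))
    return 1 if density(x, X_pos) >= density(x, X_neg) else -1

X_pos = np.array([[2., 2.], [3., 2.]])
X_neg = np.array([[-2., -2.], [-3., -2.]])
print(parzen_classify(np.array([2.5, 2.0]), X_pos, X_neg, sigma=1.0))  # 1
```

As the abstract indicates, the interesting regime is how this rule behaves as the smoothing parameter sigma varies; the sketch only fixes one value.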
A new derivation is presented for the bounds on the size of a multilayer neural network needed to exactly implement an arbitrary training set; namely, the training set can be implemented with zero error with two layers and with the number of hidden-layer neurons N1 satisfying N1 >= p - 1. The derivation does not require the separation of the input space by particular hyperplanes, as in previous derivation...
We consider decision tables with real-valued conditional attributes, and we present a method for extraction of features defined by hyperplanes in a multi-dimensional affine space. These new features are often more relevant for object classification than features defined by hyperplanes parallel to the axes. The method generalizes an approach presented in [18] in case of hyperplanes not necessarily paral...
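The advantage of oblique over axis-parallel hyperplane features can be seen on a toy decision table: a minimal sketch (the helper name and the data are hypothetical, not from the paper) where no threshold on a single coordinate separates the labels, but one oblique feature does.

```python
import numpy as np

def hyperplane_feature(X, w, b=0.0):
    """New feature f(x) = w.x + b; thresholding f at 0 splits the table
    by an oblique hyperplane instead of an axis-parallel one (sketch)."""
    return X @ w + b

# Neither coordinate alone separates these labels (both classes share
# the values x1 = 1 and x2 = 1), but the oblique feature x1 - x2 does.
X = np.array([[0., 1.], [1., 2.], [1., 0.], [2., 1.]])
y = np.array([-1, -1, 1, 1])
f = hyperplane_feature(X, np.array([1., -1.]))
print(np.sign(f))  # matches y: [-1. -1.  1.  1.]
```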
A topological hyperplane is a subspace of R^n (or a homeomorph of it) that is topologically equivalent to an ordinary straight hyperplane. An arrangement of topological hyperplanes in R^n is a finite set H such that for any nonvoid intersection Y of topological hyperplanes in H and any H ∈ H that intersects but does not contain Y, the intersection is a topological hyperplane in Y. (We also assume...
We propose a class of new double projection algorithms for solving the variational inequality problem; the class can be viewed as a framework extending the method of Solodov and Svaiter by adopting a class of new hyperplanes. By the separation property of the hyperplane, our method is proved to be globally convergent under very mild assumptions. In addition, we propose a modified version of our algorithm that fin...
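The paper's specific hyperplanes are not given in the truncated abstract; as a point of reference, a minimal sketch of the related extragradient iteration (a classical two-projection scheme for monotone variational inequalities over R^n, not the authors' algorithm), with a monotone affine operator as a hypothetical test case:

```python
import numpy as np

def extragradient(F, x0, tau=0.1, iters=200):
    """Extragradient sketch for VI(F, R^n): a predictor step followed by
    a corrector step, each using one evaluation of F (illustrative only)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = x - tau * F(x)   # predictor: provisional step
        x = x - tau * F(y)   # corrector: re-step using F at the predictor
    return x

# Monotone affine operator F(x) = A x - b; the VI solution solves A x = b.
A = np.array([[2., 1.], [1., 2.]])
b = np.array([1., 1.])
sol = extragradient(lambda x: A @ x - b, np.zeros(2))
print(sol)  # close to [1/3, 1/3]
```

Hyperplane projection methods of the Solodov-Svaiter type replace the fixed corrector step with a projection onto a separating hyperplane built from the predictor point, which is what the abstract's "separation property" refers to.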