Search results for: separation hyperplanes
Number of results: 124957
We show that there are 6 isomorphism classes of hyperplanes of the dual polar space ∆ = DW(5, 2^h) which arise from the Grassmann embedding. If h ≥ 2, then these are all the hyperplanes of ∆ arising from an embedding. If h = 1, then there are 6 extra classes of hyperplanes, as shown by Pralle [23] with the aid of a computer. We give a computer-free proof of this fact. The hyperplan...
The es-splitting operation on binary matroids is a natural generalization of Slater's n-line splitting of graphs. In this paper, we characterize the closure operator of the matroid M^e_X in terms of that of the original matroid M. We also describe the flats and hyperplanes of the binary matroid M^e_X in terms of the flats and hyperplanes of M, respectively.
Abstract: Rather than iteratively and manually examining a variety of pre-specified architectures, a constructive learning algorithm dynamically creates a problem-specific neural network architecture. Here we present a revised version of our parallel constructive neural network learning algorithm, which constructs such an architecture. The three steps of searching for points on separating hyperplanes...
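A minimal sketch of the constructive idea behind abstracts like this one, assuming a binary classification task: hidden units (separating hyperplanes) are added one at a time, each trained on the points the previous units still misclassify. This is not the authors' parallel algorithm; the helper names and the perceptron update are placeholders.

```python
import numpy as np

def train_perceptron(X, y, epochs=50, lr=0.1):
    """Fit a single separating hyperplane (w, b) with the perceptron rule."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):            # labels in {-1, +1}
            if yi * (xi @ w + b) <= 0:      # misclassified -> update
                w += lr * yi * xi
                b += lr * yi
    return w, b

def constructive_fit(X, y, max_units=10):
    """Grow a set of hyperplanes, each trained on points the previous ones
    still get wrong (a hypothetical, simplified constructive scheme)."""
    units = []
    active = np.ones(len(y), dtype=bool)    # points not yet separated
    for _ in range(max_units):
        if not active.any():
            break
        w, b = train_perceptron(X[active], y[active])
        units.append((w, b))
        correct = y * (X @ w + b) > 0       # handled once classified correctly
        active &= ~correct
    return units

# toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
print(len(constructive_fit(X, y)), "hyperplane(s) added")
```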
We introduce a scheme for optimally allocating a variable number of bits per LSH hyperplane. Previous approaches assign a constant number of bits per hyperplane. This neglects the fact that a subset of hyperplanes may be more informative than others. Our method, dubbed Variable Bit Quantisation (VBQ), provides a data-driven non-uniform bit allocation across hyperplanes. Despite only using a frac...
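A toy illustration of the general idea of variable bit allocation across LSH hyperplanes, not the VBQ algorithm itself: here more bits go to hyperplanes whose projections have higher variance, used as a stand-in for "informativeness", and each hyperplane is quantised with quantile thresholds.

```python
import numpy as np

def variable_bit_hash(X, n_hyperplanes=8, total_bits=16, seed=0):
    """Toy variable-bit LSH: allocate more bits to hyperplanes whose
    projections vary more (a placeholder criterion, not VBQ's)."""
    rng = np.random.default_rng(seed)
    H = rng.normal(size=(X.shape[1], n_hyperplanes))   # random hyperplane normals
    proj = X @ H                                       # projections onto each normal

    # crude allocation: bits proportional to projection variance, at least 1
    var = proj.var(axis=0)
    bits = np.maximum(1, np.round(total_bits * var / var.sum()).astype(int))

    codes = []
    for j, b in enumerate(bits):
        # b bits -> 2**b buckets via quantile thresholds on this projection
        qs = np.quantile(proj[:, j], np.linspace(0, 1, 2**b + 1)[1:-1])
        codes.append(np.searchsorted(qs, proj[:, j]))
    return np.stack(codes, axis=1), bits

X = np.random.default_rng(1).normal(size=(100, 32))
codes, bits = variable_bit_hash(X)
print("bits per hyperplane:", bits)
```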
We consider packings of radius-r collars about hyperplanes in H. For such packings, we prove that the Delaunay cells are truncated ultra-ideal simplices which tile H. If we place n+1 hyperplanes in H, each at a distance of exactly 2r from the others, we can place radius-r collars about these hyperplanes. The density of these collars within the corresponding Delaunay cell is an upper bound on den...
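A schematic form of the density bound being described, with notation assumed rather than taken from the abstract: D is the truncated ultra-ideal Delaunay simplex, N_r(P_i) the radius-r collar about hyperplane P_i, and the local density within D bounds the density of the whole packing.

```latex
\[
  \delta(D) \;=\;
  \frac{\operatorname{vol}\!\bigl(D \cap \bigcup_{i} N_r(P_i)\bigr)}{\operatorname{vol}(D)},
  \qquad
  \delta(\text{packing}) \;\le\; \delta(D).
\]
```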
Online learning is important for processing sequential data and also helps alleviate the computational burden on large-scale data. In particular, one-pass online learning predicts the label of each newly arriving sample and updates the model based on the prediction, where each sample is used only once and never stored. So far, existing one-pass online learning methods are globally modeled and...
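A minimal sketch of the one-pass protocol this abstract describes: predict first, then update, and never revisit the sample. The perceptron update is only a placeholder model, not the method proposed in the paper.

```python
import numpy as np

def one_pass_online(stream, n_features, lr=0.1):
    """One-pass online learning: each (x, y) is seen exactly once;
    the model predicts before updating and the sample is then discarded."""
    w = np.zeros(n_features)
    mistakes = 0
    for x, y in stream:                  # labels in {-1, +1}
        pred = 1 if x @ w >= 0 else -1   # prediction made before the update
        if pred != y:
            mistakes += 1
            w += lr * y * x              # single update, sample never stored
    return w, mistakes

# toy stream
rng = np.random.default_rng(0)
data = [(x, 1 if x[0] - x[1] > 0 else -1) for x in rng.normal(size=(200, 2))]
w, errs = one_pass_online(iter(data), n_features=2)
print("mistakes during the single pass:", errs)
```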