Search results for: and euclidean nearest neighbor distance with applying cross tabulation method
Number of results: 18,636,211
A key question in vision is how to represent our knowledge of previously encountered objects to classify new ones. The answer depends on how we determine the similarity of two objects. Similarity tells us how relevant each previously seen object is in determining the category to which a new object belongs. Here a dichotomy emerges. Complex notions of similarity appear necessary for cognitive mo...
This paper proposes a novel pattern classification approach, called the nearest linear combination (NLC) approach, for eigenface-based face recognition. Assume that multiple prototypical vectors are available per class, each vector being a point in an eigenface space. A linear combination of prototypical vectors belonging to a face class is used to define a measure of distance from the query vect...
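A minimal sketch of how such a nearest-linear-combination distance can be computed, assuming it is the residual of an unconstrained least-squares projection of the query onto the span of a class's prototypes; the function names and this exact formulation are illustrative assumptions, not the paper's precise method.

```python
import numpy as np

def nlc_distance(query, prototypes):
    """Distance from `query` to the linear span of one class's prototype vectors.

    `prototypes` is a (k, d) array of k prototypical eigenface-space vectors.
    The best linear combination is found by least squares, and the residual
    norm is returned as the distance to that class.
    """
    # Solve min_w || prototypes.T @ w - query ||_2
    w, *_ = np.linalg.lstsq(prototypes.T, query, rcond=None)
    reconstruction = prototypes.T @ w
    return np.linalg.norm(query - reconstruction)

def classify_nlc(query, class_prototypes):
    """Assign `query` to the class whose prototype span is nearest."""
    distances = {label: nlc_distance(query, P) for label, P in class_prototypes.items()}
    return min(distances, key=distances.get)
```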
A recently proposed product quantization method is efficient for large-scale approximate nearest neighbor search; however, its performance on unstructured vectors is limited. This paper introduces residual vector quantization based approaches that are appropriate for unstructured vectors. Database vectors are quantized by a residual vector quantizer. The reproductions are represented by short cod...
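A minimal sketch of residual vector quantization in general, where each stage quantizes the residual left by the previous one and a vector becomes one centroid index per stage; the use of k-means, the stage count, and the codebook size here are illustrative assumptions, not the paper's specific variants.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_rvq(X, n_stages=4, n_centroids=256):
    """Train a simple residual vector quantizer on data matrix X of shape (n, d).

    Each stage runs k-means on the residuals of the previous stage, so every
    vector is later represented by one 8-bit code (centroid index) per stage.
    """
    codebooks, residual = [], np.asarray(X, dtype=float).copy()
    for _ in range(n_stages):
        km = KMeans(n_clusters=n_centroids, n_init=4).fit(residual)
        codebooks.append(km.cluster_centers_)
        residual = residual - km.cluster_centers_[km.labels_]
    return codebooks

def encode_rvq(x, codebooks):
    """Encode one vector as a tuple of centroid indices (its short code)."""
    code, residual = [], np.asarray(x, dtype=float).copy()
    for C in codebooks:
        idx = int(np.argmin(np.linalg.norm(residual - C, axis=1)))
        code.append(idx)
        residual = residual - C[idx]
    return tuple(code)
```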
We propose an approach to embed time series data in a vector space based on the distances obtained from Dynamic Time Warping (DTW), and to classify them in the embedded space. Under the problem setting in which both labeled data and unlabeled data are given beforehand, we consider three embeddings: embedding in a Euclidean space by MDS, embedding in a pseudo-Euclidean space, and embedding in a ...
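A minimal sketch of the DTW distance that such embeddings start from, using the standard dynamic-programming recurrence; the commented MDS step and the `series` variable are illustrative assumptions about how a Euclidean embedding could be built from the pairwise distances, not the paper's exact pipeline.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences, O(len(a) * len(b))."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Example of one of the embeddings mentioned above (hypothetical `series` list):
# from sklearn.manifold import MDS
# D = np.array([[dtw_distance(s, t) for t in series] for s in series])
# X = MDS(n_components=2, dissimilarity="precomputed").fit_transform(D)
```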
In this paper, we develop a novel index structure to support efficient approximate k-nearest neighbor (KNN) query in high-dimensional databases. In high-dimensional spaces, the computational cost of the distance (e.g., Euclidean distance) between two points contributes a dominant portion of the overall query response time for memory processing. To reduce the distance computation, we first propo...
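The snippet is cut off before the proposed index is named, so the sketch below shows only a generic way to cut Euclidean distance cost in high dimensions, partial-distance computation with early abandoning; it is a stand-in illustration, not the index structure the paper develops.

```python
import numpy as np

def knn_partial_distance(query, database, k):
    """Brute-force k-NN with early abandoning of the Euclidean distance sum.

    The squared distance is accumulated dimension by dimension, and a candidate
    is abandoned as soon as its partial sum exceeds the current k-th best,
    avoiding many full distance computations.
    """
    best = []  # sorted list of (squared_distance, index), length <= k
    for idx, vec in enumerate(database):
        bound = best[-1][0] if len(best) == k else np.inf
        acc = 0.0
        for q, v in zip(query, vec):
            acc += (q - v) ** 2
            if acc > bound:          # partial sum already worse than the k-th best
                break
        else:
            best.append((acc, idx))
            best.sort()
            best = best[:k]
    return [idx for _, idx in best]
```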
A fundamental question of machine learning is how to compare examples. If an algorithm could perfectly determine whether two examples were semantically similar or dissimilar, most subsequent machine learning tasks would become trivial (i.e., the 1-nearest-neighbor classifier will achieve perfect results). A common choice for a dissimilarity measurement is an uninformed norm, like the Euclidean d...
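A minimal sketch of the uninformed baseline the abstract refers to, a 1-nearest-neighbor classifier under the plain Euclidean norm; the function name and interface are illustrative assumptions.

```python
import numpy as np

def one_nn_predict(X_train, y_train, X_test):
    """1-nearest-neighbor classification under the plain Euclidean norm."""
    X_train = np.asarray(X_train, dtype=float)
    preds = []
    for x in np.asarray(X_test, dtype=float):
        d = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to every training point
        preds.append(y_train[int(np.argmin(d))])  # label of the closest training example
    return preds
```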
Sensitive and precise voltammetric methods for the determination of trace amounts of furaldehydes, mainly as furfural (F) and 5-hydroxymethyl-2-furaldehyde (HMF), in waste waters and other matrices are described. Determination of total furaldehyde at < μg g-1 levels in alkaline buffered aqueous media was individually investigated. By the use of ordinary SWV and adsorptive square wave ...
Nearest neighbor retrieval can be defined as the task of finding the objects that are most similar to a query from a given database of objects. It finds application in areas such as the medical domain, the financial sector, computer vision, computational sciences, computational geometry, and information retrieval. With the expansion of the internet, the amount of digitized data is increasing by ...
The nearest-neighbor problem arises in clustering and other applications. It requires us to define a function to measure differences among items in a data set, and then to compute the closest items to a query point with respect to this measure. Recent work suggests that the conventional Euclidean measure does not adequately model high-dimensional data. We present a new, data-driven difference me...
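The snippet is cut off before the proposed measure is described, so the sketch below only illustrates the general idea of a data-driven difference measure, using a Mahalanobis distance estimated from the sample covariance as a stand-in; it is not the measure presented in the paper.

```python
import numpy as np

def mahalanobis_factory(X):
    """Build a data-driven difference measure from a sample X of shape (n, d).

    The measure is the Mahalanobis distance induced by the pseudo-inverse of the
    sample covariance, a simple example of a measure learned from the data rather
    than the fixed Euclidean norm.
    """
    cov = np.cov(np.asarray(X, dtype=float), rowvar=False)
    VI = np.linalg.pinv(cov)   # pseudo-inverse, robust to a singular covariance

    def diff(a, b):
        delta = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
        return float(np.sqrt(delta @ VI @ delta))

    return diff

# Usage: rank database items by diff(query, item) and keep the smallest values.
```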