Search results for: nearest neighbor classification
Number of results: 524866
The nearest neighbor technique is a simple and appealing method to address classification problems. It relies on the assumption of locally constant class conditional probabilities. This assumption becomes invalid in high dimensions with a finite number of examples due to the curse of dimensionality. Severe bias can be introduced under these conditions when using the nearest neighbor rule. The e...
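For readers unfamiliar with the rule this abstract analyzes, here is a minimal sketch of plain k-nearest-neighbor classification; the toy data and all names below are illustrative assumptions, not drawn from the paper.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k=5):
    """Classic k-NN rule: assign the query the majority label among its k
    closest training points. This implicitly assumes the class-conditional
    probabilities are locally constant inside that neighborhood, which is
    the assumption the abstract says breaks down in high dimensions."""
    dists = np.linalg.norm(X_train - x_query, axis=1)  # Euclidean distances
    nearest = np.argsort(dists)[:k]                    # indices of the k closest points
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Toy usage: two Gaussian blobs in 2-D
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(knn_predict(X, y, np.array([2.5, 2.5])))  # almost certainly prints 1
```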
We present new nearest neighbor methods for text classification and an evaluation of these methods against the existing nearest neighbor methods as well as other well-known text classification algorithms. Inspired by the language modeling approach to information retrieval, we show improvements in k-nearest neighbor (kNN) classification by replacing the classical cosine similarity with a KL dive...
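A rough sketch of the idea described here: kNN over documents where a KL divergence between smoothed unigram language models stands in for cosine similarity. The smoothing scheme, parameter names, and toy data are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions over the same vocabulary."""
    mask = p > 0  # ignore zero-probability terms in p
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def knn_text_predict(train_tf, train_labels, query_tf, k=3, alpha=0.01):
    """kNN for text where the cosine score is replaced by a KL divergence
    between smoothed unigram language models, in the spirit of the
    language-modeling approach the abstract mentions."""
    vocab = train_tf.shape[1]
    # Additive smoothing so that q has no zero entries
    def to_lm(tf):
        return (tf + alpha) / (tf.sum() + alpha * vocab)
    query_lm = to_lm(query_tf)
    divs = np.array([kl_divergence(query_lm, to_lm(row)) for row in train_tf])
    nearest = np.argsort(divs)[:k]           # smallest divergence = most similar
    labels, counts = np.unique(train_labels[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy term-frequency matrix: 4 documents over a 5-word vocabulary
tf = np.array([[5, 1, 0, 0, 0],
               [4, 2, 1, 0, 0],
               [0, 0, 1, 4, 3],
               [0, 1, 0, 5, 2]], dtype=float)
labels = np.array([0, 0, 1, 1])
print(knn_text_predict(tf, labels, np.array([3, 1, 0, 0, 1], dtype=float)))  # prints 0
```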
Geostatistics have become the dominant tool for probabilistic estimation of properties of heterogeneous formations at points where data are not available. Ordinary kriging, the starting point in development of other geostatistical techniques, has a number of serious limitations, chief among which is the intrinsic hypothesis of the (second order) stationarity of the underlying random field. Atte...
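For context, a minimal sketch of the ordinary-kriging estimator that the abstract takes as its starting point; the exponential variogram and the toy data are assumptions chosen only to make the example run.

```python
import numpy as np

def ordinary_kriging(coords, values, target, variogram):
    """Ordinary-kriging estimate at `target` from scattered observations.
    Solves the standard kriging system with a Lagrange multiplier enforcing
    that the weights sum to one; the (second-order) stationarity assumption
    mentioned in the abstract enters through the single variogram model
    used for all point pairs."""
    n = len(values)
    G = np.zeros((n + 1, n + 1))
    for i in range(n):
        for j in range(n):
            G[i, j] = variogram(np.linalg.norm(coords[i] - coords[j]))
    G[:n, n] = 1.0          # Lagrange-multiplier column
    G[n, :n] = 1.0          # unbiasedness (weights sum to one) row
    g = np.append([variogram(np.linalg.norm(c - target)) for c in coords], 1.0)
    w = np.linalg.solve(G, g)
    return float(w[:n] @ values)

# Toy usage with an exponential variogram (sill 1, range 10)
variogram = lambda h: 1.0 - np.exp(-h / 10.0)
coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
values = np.array([1.0, 3.0, 2.0])
print(ordinary_kriging(coords, values, np.array([3.0, 3.0]), variogram))
```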
This paper considers how to conduct k-nearest neighbor classification in the following scenario: multiple parties, each having a private data set, want to collaboratively build a k-nearest neighbor classifier without disclosing their private data to each other or any other parties. Specifically, the data are vertically partitioned in that all parties have data about all the instances involved, ...
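One reason vertical partitioning fits kNN naturally is that squared Euclidean distances decompose additively over each party's feature block. The sketch below shows that decomposition with plain addition where the paper's protocol would use a privacy-preserving combination step; all names and data are assumptions.

```python
import numpy as np

# Vertically partitioned toy data: both parties hold the same 6 instances,
# but party A holds features 0-1 and party B holds features 2-3.
rng = np.random.default_rng(1)
full = rng.normal(size=(6, 4))
labels = np.array([0, 0, 0, 1, 1, 1])
party_a, party_b = full[:, :2], full[:, 2:]

def partial_sq_dists(local_features, local_query):
    """Each party computes, locally, its share of the squared Euclidean
    distance between the query and every training instance."""
    return ((local_features - local_query) ** 2).sum(axis=1)

# The query is split across the parties in the same way as the features.
query_a, query_b = np.array([0.1, -0.2]), np.array([0.0, 0.5])

# Combining the partial sums reveals only totals; in the paper's setting this
# step would be done with a secure-summation protocol rather than the plain
# addition shown here (an assumption made for the sketch).
total = partial_sq_dists(party_a, query_a) + partial_sq_dists(party_b, query_b)
k = 3
nearest = np.argsort(total)[:k]
vals, counts = np.unique(labels[nearest], return_counts=True)
print("predicted class:", vals[np.argmax(counts)])
```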
Combining multiple classifiers is an effective technique for improving accuracy. There are many general combining algorithms, such as Bagging, Boosting, or Error Correcting Output Coding, that significantly improve classifiers like decision trees, rule learners, or neural networks. Unfortunately, these combining methods do not improve the nearest neighbor classifier. In this paper, we present MFS, a...
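MFS itself is not spelled out in the snippet; the sketch below only illustrates the general idea of voting nearest-neighbor classifiers built on random feature subsets, and the parameter names and sampling details are assumptions rather than the paper's exact procedure.

```python
import numpy as np
from collections import Counter

def nn_predict(X, y, x_query):
    """1-nearest-neighbor prediction with plain Euclidean distance."""
    return y[np.argmin(np.linalg.norm(X - x_query, axis=1))]

def feature_subset_ensemble_predict(X, y, x_query, n_members=10, subset_size=None, seed=0):
    """Combine nearest-neighbor classifiers, each restricted to a random
    subset of the features, by majority vote."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    subset_size = subset_size or max(1, d // 2)
    votes = []
    for _ in range(n_members):
        feats = rng.choice(d, size=subset_size, replace=False)
        votes.append(nn_predict(X[:, feats], y, x_query[feats]))
    return Counter(votes).most_common(1)[0][0]

# Toy usage: two Gaussian blobs in 6-D
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 6)), rng.normal(2, 1, (30, 6))])
y = np.array([0] * 30 + [1] * 30)
print(feature_subset_ensemble_predict(X, y, np.full(6, 1.8)))  # most likely prints 1
```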
Abstract: As technology improves, how to extract information from vast datasets is becoming more urgent. As is well known, k-nearest neighbor classifiers are simple to implement and conceptually easy to understand. The method is not without its shortcomings, however, as follows: (1) there is still a sensitivity to the choice of k-values even when representative attributes are considered in each class; (2) in some cases, proximity betwe...
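Regarding shortcoming (1), a common generic remedy is to pick k by cross-validation. The leave-one-out sketch below illustrates that remedy only; it is not the method this paper proposes, and all names are assumptions.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k):
    nearest = np.argsort(np.linalg.norm(X_train - x_query, axis=1))[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

def choose_k_by_loocv(X, y, candidate_ks=(1, 3, 5, 7, 9)):
    """Pick k by leave-one-out cross-validation: classify every point using
    the remaining points and keep the k with the highest accuracy."""
    best_k, best_acc = candidate_ks[0], -1.0
    for k in candidate_ks:
        correct = 0
        for i in range(len(y)):
            mask = np.arange(len(y)) != i   # hold out instance i
            if knn_predict(X[mask], y[mask], X[i], k) == y[i]:
                correct += 1
        acc = correct / len(y)
        if acc > best_acc:
            best_k, best_acc = k, acc
    return best_k

# Toy usage: two Gaussian blobs in 3-D
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (40, 3)), rng.normal(2, 1, (40, 3))])
y = np.array([0] * 40 + [1] * 40)
print("selected k:", choose_k_by_loocv(X, y))
```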
Chart: number of search results per year