Search results for: and euclidean nearest neighbor distance with applying cross tabulation method
Number of results: 18,636,211
In most Information Retrieval (IR) applications, Euclidean distance is used for similarity measurement. It is adequate in many cases, but this distance metric is not very accurate when different local data distributions exist in the database. We propose a Gaussian mixture distance for performing accurate nearest-neighbor search for Information Retrieval (IR). Under an established Gaus...
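The snippet above is truncated, but the general idea of a mixture-aware distance can be sketched as follows. This illustrative form weights a per-component Mahalanobis distance by the posterior responsibility of each Gaussian component at the query point, using diagonal covariances for simplicity; it is an assumption for illustration, since the paper's exact definition is cut off:

```python
import math

def gm_distance(x, y, comps):
    """Mixture-aware distance sketch: per-component Mahalanobis
    distances between x and y, weighted by the posterior
    responsibility of each component at x.

    `comps` is a list of (weight, mean, diag_var) triples with
    diagonal covariances. Illustrative form only.
    """
    def pdf(p, mean, var):
        # Diagonal-covariance Gaussian density at point p.
        return math.prod(
            math.exp(-(pi - mi) ** 2 / (2 * vi)) / math.sqrt(2 * math.pi * vi)
            for pi, mi, vi in zip(p, mean, var)
        )
    # Posterior responsibility of each component at x.
    post = [w * pdf(x, m, v) for w, m, v in comps]
    total = sum(post) or 1.0
    post = [p / total for p in post]
    # Mahalanobis distance between x and y under each component.
    maha = [
        sum((xi - yi) ** 2 / vi for xi, yi, vi in zip(x, y, v)) ** 0.5
        for _, _, v in comps
    ]
    return sum(r * d for r, d in zip(post, maha))
```

With a single unit-variance component, the sketch reduces to plain Euclidean distance, which is the degenerate case the abstract contrasts against.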
K-nearest neighbor (k-NN) classification is a powerful and simple method for classification. k-NN classifiers approximate a Bayesian classifier for a large number of data samples. The accuracy of a k-NN classifier relies on the distance metric used to find nearest neighbors and on the features used to represent instances in the training and testing data. In this paper we use deep neural networks (DNNs) as a f...
Abstract: In the paper of Black and Scholes (1973), a closed-form solution for the price of a European option is derived. As an extension to the Black-Scholes model with constant volatility, option pricing models with time-varying volatility have been suggested within the framework of generalized autoregressive conditional heteroskedasticity (GARCH). These processes can explain a number of em...
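For reference, the Black-Scholes (1973) closed-form price for a European call mentioned in this abstract can be written in a few lines; this is the standard constant-volatility formula, not anything specific to the GARCH extension the paper develops:

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes (1973) closed-form price of a European call.

    S: spot price, K: strike, T: time to maturity (years),
    r: risk-free rate, sigma: constant volatility.
    """
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    # Standard normal CDF via the error function.
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

# At-the-money one-year call, 5% rate, 20% volatility:
print(round(bs_call(100, 100, 1.0, 0.05, 0.2), 4))  # → 10.4506
```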
The K-nearest-neighbor decision rule assigns an object of unknown class to the plurality class among the K labeled "training" objects that are closest to it. Closeness is usually defined in terms of a metric distance on the Euclidean space with the input measurement variables as axes. The metric chosen to define this distance can strongly affect performance. An optimal choice depends on the proble...
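The decision rule described in this result, a plurality vote among the K nearest training points under the Euclidean metric, can be sketched in a few lines (a minimal illustration, not the authors' code):

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` as the plurality class among its k nearest
    training points under the Euclidean metric."""
    # Sort all training points by Euclidean distance to the query.
    dists = sorted(
        (math.dist(x, query), label) for x, label in zip(train_X, train_y)
    )
    # Plurality vote among the k closest labels.
    top_labels = [label for _, label in dists[:k]]
    return Counter(top_labels).most_common(1)[0][0]
```

Swapping `math.dist` for another metric is exactly the design choice the abstract flags as performance-critical.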
During the last decade, tremendous effort has been devoted to research on time series classification. Indeed, many previous works suggest that simple nearest-neighbor classification is effective and difficult to beat. However, one usually needs to determine the distance metric (e.g., Euclidean distance or Dynamic Time Warping) for different domains, and current evidence shows that there ...
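Of the two metrics this abstract names, Euclidean distance compares series point-by-point and requires equal lengths, while Dynamic Time Warping aligns series elastically. A textbook O(nm) DTW recursion (a generic sketch, not this paper's implementation) looks like:

```python
def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D series,
    with squared pointwise cost and the classic O(n*m) recursion."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = cost of the best warping path aligning a[:i] with b[:j].
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m] ** 0.5
```

Unlike Euclidean distance, DTW happily reports zero distance between `[1, 2, 3]` and `[1, 2, 2, 3]`, since the repeated point is absorbed by the warping path.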
We consider improving the performance of k-Nearest Neighbor classifiers. A regularized kNN is proposed to learn an optimal dissimilarity function to replace the Euclidean metric. The learning process employs hyperkernels and shares a similar regularization framework with support vector machines (SVM). Its performance is shown to be consistently better than kNN, and is competitive with SVM.
In this paper, we consider the problem of feature selection and classifier fusion and discuss how they should be reflected in the fusion system architecture. We employed a genetic algorithm with a novel coding to search for the best-performing fusion strategy. The proposed algorithm tunes itself between the feature and matching-score levels, and improves the final performance over the original on tw...
In the solvo-hydrothermal method, the XRD pattern indicated the presence of the intermediates ammonium octamolybdate and ammonium tetramolybdate, which changed to the stable phase of α-MoO3 on annealing. SEM images showed nanoparticles with semispherical morphology, and the TEM image demonstrated nanoparticles with a diameter of 25 nm. In the impregnation method, the XRD pattern showed high crys...