Search results for: LVQ model
Number of results: 120544
We propose to use learning vector quantization (LVQ) for novelty detection when a few outliers exist in the training data. The codebook update of the original LVQ is modified, and a scheme to determine a threshold for each codebook is proposed. Experimental results on artificial and real-world problems are quite promising.
In this article, we propose batch-type learning vector quantization (LVQ) segmentation techniques for magnetic resonance (MR) images. Magnetic resonance imaging (MRI) segmentation is an important technique for differentiating abnormal from normal tissues in MR image data. The proposed LVQ segmentation techniques are compared with generalized Kohonen's competitive learning (GKCL) methods, wh...
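The abstract does not spell out the modified codebook update or the threshold rule, so the sketch below is only an illustration of the general idea: a plain LVQ1 update followed by a per-prototype novelty threshold taken as a distance quantile. The function names, the quantile choice, and the threshold rule are all assumptions, not the paper's method.

```python
import numpy as np

def lvq1_with_thresholds(X, y, prototypes, proto_labels, lr=0.05, epochs=20, quantile=0.95):
    """Illustrative LVQ1 training plus per-prototype novelty thresholds (assumed scheme)."""
    W = prototypes.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            d = np.linalg.norm(W - x, axis=1)
            j = np.argmin(d)                      # best-matching codebook vector
            sign = 1.0 if proto_labels[j] == label else -1.0
            W[j] += sign * lr * (x - W[j])        # attract if same class, repel otherwise
    # Threshold for each prototype: a distance quantile over the samples it wins.
    winners = np.argmin(np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2), axis=1)
    thresholds = np.array([
        np.quantile(np.linalg.norm(X[winners == j] - W[j], axis=1), quantile)
        if np.any(winners == j) else 0.0
        for j in range(len(W))
    ])
    return W, thresholds

def predict_with_novelty(x, W, proto_labels, thresholds):
    """Flag x as a novelty if it lies beyond the winner's threshold."""
    d = np.linalg.norm(W - x, axis=1)
    j = np.argmin(d)
    return ("novelty", None) if d[j] > thresholds[j] else ("known", proto_labels[j])
```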
Learning vector quantization (LVQ) schemes constitute intuitive, powerful classification heuristics with numerous successful applications but, so far, limited theoretical background. We study LVQ rigorously within a simplifying model situation: two competing prototypes are trained from a sequence of examples drawn from a mixture of Gaussians. Concepts from statistical physics and the theory of ...
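As a rough illustration of the model situation described (not the authors' statistical-physics analysis), the sketch below trains two competing prototypes with a basic LVQ1 rule on examples drawn from a mixture of two Gaussians. The dimension, learning rate, and class means are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed model situation: one unit-variance Gaussian cluster per class,
# one prototype per class, online LVQ1 updates.
N, dim, lr = 5000, 50, 0.01
means = np.stack([+0.5 * np.ones(dim), -0.5 * np.ones(dim)])

labels = rng.integers(0, 2, size=N)
X = means[labels] + rng.normal(size=(N, dim))

w = rng.normal(scale=0.1, size=(2, dim))           # two competing prototypes
for x, s in zip(X, labels):
    d = np.linalg.norm(w - x, axis=1)
    j = np.argmin(d)                               # winner-takes-all
    sign = 1.0 if j == s else -1.0
    w[j] += lr * sign * (x - w[j])                 # attract / repel the winner

err = np.mean(np.argmin(np.linalg.norm(X[:, None] - w[None], axis=2), axis=1) != labels)
print(f"training error after one sweep: {err:.3f}")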
In this paper, several neural network classification algorithms have been applied to a real-world case of electron microscopy image classification in which the existence of two differentiated views of the same specimen was known a priori. Using several labeled sets as a reference, the parameters and architecture of the classifier (both LVQ-trained codebooks and BP-trained neural nets) were o...
Neuro-fuzzy approaches have attracted considerable attention in computational intelligence, and segmentation algorithms have been increasingly developed to improve the accuracy of medical diagnosis. Fuzzy sets attempt to represent human perception, whereas neural networks attempt to emulate the architecture and information-representation scheme of the human brain. In this paper a comparativ...
In image classification, there are no labeled training instances for some classes, which are therefore called unseen classes or test classes. To classify these, zero-shot learning (ZSL) was developed; it typically attempts to learn a mapping from the (visual) feature space to the semantic space, represented by a list of semantically meaningful attributes. However, the fact that this mapping is learned without using ... affects per...
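For context, here is a minimal sketch of the generic ZSL pipeline the abstract refers to, not the specific method it proposes: a linear visual-to-attribute mapping fitted by ridge regression on seen classes, with unseen-class prediction by nearest attribute vector. The function names and the ridge formulation are assumptions.

```python
import numpy as np

def fit_visual_to_attribute_map(X_seen, A_seen, reg=1.0):
    """Ridge-regression mapping from visual features to class attributes.

    Illustration of a generic ZSL baseline: X_seen holds features of
    seen-class images, A_seen the attribute vectors of their classes.
    """
    d = X_seen.shape[1]
    W = np.linalg.solve(X_seen.T @ X_seen + reg * np.eye(d), X_seen.T @ A_seen)
    return W  # shape (d_visual, d_attributes)

def classify_unseen(x, W, unseen_attr, unseen_class_ids):
    """Project a test image into attribute space, pick the closest unseen class."""
    a_hat = x @ W
    dists = np.linalg.norm(unseen_attr - a_hat, axis=1)
    return unseen_class_ids[np.argmin(dists)]
```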
Input feature ranking and selection represent a necessary preprocessing stage in classification, especially when one is required to manage large quantities of data. We introduce a weighted LVQ algorithm, called Energy Relevance LVQ (ERLVQ), based on Onicescu’s informational energy [10]. ERLVQ is an incremental learning algorithm for supervised classification and feature ranking.
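ERLVQ's Onicescu informational-energy weighting cannot be reconstructed from the abstract alone, so the sketch below only illustrates the underlying idea with a generic relevance-weighted LVQ (RLVQ-style) step: a per-feature relevance vector enters the distance and doubles as a feature-ranking score. Every name and update constant here is an assumption.

```python
import numpy as np

def rlvq_step(x, label, W, proto_labels, lam, lr_w=0.05, lr_lam=0.01):
    """One relevance-weighted LVQ update (illustrative stand-in for ERLVQ)."""
    d = np.sum(lam * (W - x) ** 2, axis=1)        # relevance-weighted distances
    j = np.argmin(d)                              # winning prototype
    sign = 1.0 if proto_labels[j] == label else -1.0
    diff = x - W[j]
    W[j] += sign * lr_w * diff                    # LVQ1-style prototype update
    lam -= sign * lr_lam * diff ** 2              # adjust per-feature relevances
    lam = np.clip(lam, 0.0, None)
    lam /= lam.sum() if lam.sum() > 0 else 1.0    # keep relevances normalised
    return W, lam
```

After a few passes over the training data, sorting `lam` in descending order yields a feature ranking analogous in spirit, though not in formulation, to the one ERLVQ produces.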
OBJECTIVE A self-organizing map (SOM) is a competitive artificial neural network with unsupervised learning. To increase the SOM learning effect, a fuzzy-soft learning vector quantization (FSLVQ) algorithm has been proposed in the literature, using fuzzy functions to approximate lateral neural interaction of the SOM. However, the computational performance of FSLVQ is still not good enough, espe...
We propose a new learning method, "Generalized Learning Vector Quantization (GLVQ)," in which reference vectors are updated based on the steepest descent method in order to minimize the cost function. The cost function is determined so that the obtained learning rule satisfies the convergence condition. We prove that Kohonen's rule as used in LVQ does not satisfy the convergence condition and ...
We compare the performance of five algorithms for vector quantisation and clustering analysis: the Self-Organising Map (SOM) and Learning Vector Quantization (LVQ) algorithms of Kohonen, the Linde-Buzo-Gray (LBG) algorithm, the MultiLayer Perceptron (MLP), and the GMM/EM algorithm for Gaussian Mixture Models (GMM). We propose that the GMM/EM provides a better representation of the speech space an...
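A minimal sketch of a GLVQ-style steepest-descent step, assuming squared Euclidean distances and a logistic cost function of the relative distance; the abstract does not specify these choices, nor the learning-rate schedule, so they are illustrative.

```python
import numpy as np

def glvq_step(x, label, W, proto_labels, lr=0.05, slope=1.0):
    """One GLVQ update on example (x, label); assumes prototypes for every class exist."""
    d = np.sum((W - x) ** 2, axis=1)
    same = proto_labels == label
    j = np.argmin(np.where(same, d, np.inf))       # closest correct prototype
    k = np.argmin(np.where(~same, d, np.inf))      # closest wrong prototype
    d1, d2 = d[j], d[k]
    mu = (d1 - d2) / (d1 + d2)                     # relative distance in [-1, 1]
    f_prime = slope * np.exp(-slope * mu) / (1.0 + np.exp(-slope * mu)) ** 2
    denom = (d1 + d2) ** 2
    W[j] += lr * f_prime * (d2 / denom) * (x - W[j])   # pull correct prototype in
    W[k] -= lr * f_prime * (d1 / denom) * (x - W[k])   # push wrong prototype away
    return W
```

Driving mu below zero for as many examples as possible is what the sigmoid-shaped cost encourages, which is the intuition behind the convergence argument mentioned in the abstract.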
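Of the five methods compared, the LBG algorithm is the simplest to sketch. The illustrative implementation below (centroid splitting plus Lloyd iterations) stands in for the unsupervised VQ baseline; the codebook size, input features, and stopping criteria used in the study are not known and are assumed here.

```python
import numpy as np

def lbg_codebook(X, size=8, eps=1e-3, iters=20):
    """Linde-Buzo-Gray codebook training: split every centroid, then refine with Lloyd steps.

    `size` is assumed to be a power of two, since each split doubles the codebook.
    """
    codebook = X.mean(axis=0, keepdims=True)
    while len(codebook) < size:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):
            assign = np.argmin(((X[:, None] - codebook[None]) ** 2).sum(-1), axis=1)
            for j in range(len(codebook)):
                if np.any(assign == j):
                    codebook[j] = X[assign == j].mean(axis=0)
    return codebook
```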