Search results for: clustering error

Number of results: 353239

1996
Dan Judd, Philip K. McKinley, Anil K. Jain

Algorithmic enhancements are described that enable large computational reduction in mean square-error data clustering. These improvements are incorporated into a parallel data-clustering tool, P-CLUSTER, designed to execute on a network of workstations. Experiments involving the unsupervised segmentation of standard texture images were performed. For some data sets, a 96 percent reduction in co...
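The excerpt does not describe the enhancements themselves; for orientation, a minimal sketch of the baseline mean square-error (k-means style) clustering that such tools build on, with illustrative array shapes and a fixed iteration count, might look like this:

```python
import numpy as np

def mse_clustering(X, k, iters=20, seed=0):
    """Plain mean square-error (k-means style) clustering: assign each
    point to its nearest centroid, then recompute centroids as means."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # squared Euclidean distance from every point to every centroid
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    # final assignment and mean square error of the clustering
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    mse = d2[np.arange(len(X)), labels].mean()
    return labels, centroids, mse

X = np.random.default_rng(1).normal(size=(500, 8))   # toy feature vectors
labels, centroids, mse = mse_clustering(X, k=4)
print("mean square error:", mse)
```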

2010
Chih-Hao Chen, Hsing-Chung Lee, Qingdong Ling, Hsiao-Rong Chen, Yi-An Ko, Tsong-Shan Tsou, Sun-Chong Wang, Li-Ching Wu, H. C. Lee

Inferences acquired by applying clustering analysis of microarrays cannot be reliably assessed before data-originated errors are quantified, an exacting task that is often not performed. Here, we present a novel and fast clustering technique, pair-wise Gaussian merging (PGM), suited for this purpose. Designed for systems with normally distributed error, PGM treats each observation as a Gaussian...
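PGM is only summarized in the excerpt; a loose sketch of the general idea of treating each observation as a Gaussian and greedily merging the closest pair, assuming one-dimensional measurements with a known error variance and an illustrative merge rule and stopping threshold (not the published algorithm), could be:

```python
import numpy as np

def merge(g1, g2):
    """Combine two Gaussians (mean, variance, weight) by weight-averaging
    the means and pooling the variances around the merged mean."""
    m1, v1, w1 = g1
    m2, v2, w2 = g2
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    v = (w1 * (v1 + (m1 - m) ** 2) + w2 * (v2 + (m2 - m) ** 2)) / w
    return (m, v, w)

def pairwise_gaussian_merging(values, noise_var, max_sep=2.0):
    """Start with one Gaussian per observation and repeatedly merge the
    closest pair until the closest pair is further apart than `max_sep`
    combined standard deviations."""
    gs = [(float(x), noise_var, 1.0) for x in values]
    while len(gs) > 1:
        best, best_d = None, np.inf
        for i in range(len(gs)):
            for j in range(i + 1, len(gs)):
                d = abs(gs[i][0] - gs[j][0]) / np.sqrt(gs[i][1] + gs[j][1])
                if d < best_d:
                    best, best_d = (i, j), d
        if best_d > max_sep:
            break
        i, j = best
        gs[i] = merge(gs[i], gs[j])
        del gs[j]
    return gs  # each surviving Gaussian stands for one cluster

data = np.concatenate([np.random.normal(0, 0.3, 20),
                       np.random.normal(5, 0.3, 20)])
print(len(pairwise_gaussian_merging(data, noise_var=0.09)), "clusters")
```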

1997
Dan Judd, Philip K. McKinley, Anil K. Jain

This paper presents the results of a performance study of parallel data clustering on Network of Workstations (NOW) platforms. The clustering program, P-CLUSTER, is based on the mean square-error clustering algorithm and is applied to the problem of image segmentation. The parallel implementation uses a client-server model, in which the clustering task is divided among a set of clients that rep...
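The client-server division is described only at this level of detail; a simplified single-process sketch of how per-client partial sums could be aggregated by a server in each iteration (the function names and the equal-sized partitioning are assumptions for illustration, not P-CLUSTER's actual protocol) is:

```python
import numpy as np

def client_partial_sums(chunk, centroids):
    """Each client assigns its chunk of points to the nearest centroid and
    returns per-cluster sums and counts instead of raw data."""
    d2 = ((chunk[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    k, dim = centroids.shape
    sums, counts = np.zeros((k, dim)), np.zeros(k)
    for j in range(k):
        sel = labels == j
        sums[j] = chunk[sel].sum(axis=0)
        counts[j] = sel.sum()
    return sums, counts

def server_update(partials, centroids):
    """The server adds up the clients' partial sums and recomputes centroids."""
    total_sums = sum(p[0] for p in partials)
    total_counts = sum(p[1] for p in partials)
    new = centroids.copy()
    nonempty = total_counts > 0
    new[nonempty] = total_sums[nonempty] / total_counts[nonempty, None]
    return new

rng = np.random.default_rng(0)
X = rng.normal(size=(1200, 4))
centroids = X[:3].copy()
chunks = np.array_split(X, 4)            # four "clients"
for _ in range(10):                      # ten synchronous iterations
    partials = [client_partial_sums(c, centroids) for c in chunks]
    centroids = server_update(partials, centroids)
print(centroids)
```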

2001
Constantinos Boulis

In this paper I consider the problem of clustering the cepstrum coefficients of an acoustic vector into a number of disjoint sets (subvectors) using mutual information as the clustering criterion. I then quantize each of the subvectors independently, using a different quantization step for each. I compare the performance of the clustering scheme with a heuristic one where neighboring coefficients a...
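Neither the mutual-information estimator nor the grouping procedure is given in the excerpt; a rough sketch, with a histogram MI estimate, a hypothetical fixed grouping of coefficients, and uniform scalar quantization with a different step per subvector, might be:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of mutual information between two coefficients."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def quantize(subvector, step):
    """Uniform scalar quantization of one subvector with its own step size."""
    return np.round(subvector / step) * step

rng = np.random.default_rng(0)
cepstra = rng.normal(size=(1000, 12))          # toy "cepstrum" frames

# pairwise MI between coefficients (the grouping criterion in the paper)
mi = np.zeros((12, 12))
for i in range(12):
    for j in range(i + 1, 12):
        mi[i, j] = mi[j, i] = mutual_information(cepstra[:, i], cepstra[:, j])

# hypothetical grouping into subvectors and per-subvector step sizes
groups = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
steps = [0.1, 0.2, 0.4]
quantized = np.hstack([quantize(cepstra[:, g], s) for g, s in zip(groups, steps)])
print(quantized.shape)
```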

Journal: Pattern Recognition Letters, 1997
Lei Xu

It is shown that a particular case of the Bayesian Ying–Yang learning system and theory reduces to the maximum likelihood learning of a finite mixture, from which we have obtained not only the EM algorithm for its parameter estimation (and its various approximate but fast algorithms for clustering in general cases, including Mahalanobis distance clustering or elliptic clustering), but also cr...
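Since the excerpt appeals to maximum likelihood learning of a finite mixture, a compact reminder of the standard EM iteration for a one-dimensional Gaussian mixture (a generic illustration, not the Bayesian Ying–Yang derivation itself) is:

```python
import numpy as np

def em_gaussian_mixture(x, k, iters=50, seed=0):
    """Standard EM for a 1-D Gaussian mixture: the E-step computes posterior
    responsibilities, the M-step re-estimates weights, means and variances."""
    rng = np.random.default_rng(seed)
    means = rng.choice(x, size=k, replace=False)
    vars_ = np.full(k, x.var())
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities r[n, j] proportional to w_j * N(x_n | mu_j, var_j)
        r = weights * np.exp(-0.5 * (x[:, None] - means) ** 2 / vars_) \
            / np.sqrt(2 * np.pi * vars_)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted re-estimation of the parameters
        nk = r.sum(axis=0)
        weights = nk / len(x)
        means = (r * x[:, None]).sum(axis=0) / nk
        vars_ = (r * (x[:, None] - means) ** 2).sum(axis=0) / nk
    return weights, means, vars_

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 1.0, 700)])
print(em_gaussian_mixture(x, k=2))
```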

2003
Takashi Morie, Tomohiro Matsuura, Atsushi Iwata

The clustering algorithm employing “stochastic association”, which we have already proposed, offers a simple and efficient soft-max adaptation rule. The adaptation process is the same as the on-line K-means clustering method except for adding random fluctuation in the distortion error evaluation process. This paper describes VLSI implementation of this new clustering algorithm based on a pulse ...
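The published adaptation rule is only paraphrased in the excerpt; a sketch of on-line K-means in which a random fluctuation is added to the distortion before the winner is selected, where the noise scale, learning rate, and annealing schedule are illustrative assumptions, could be:

```python
import numpy as np

def online_kmeans_stochastic(X, k, lr=0.05, noise=1.0, decay=0.995, seed=0):
    """On-line K-means where a random fluctuation is added to each squared
    distortion before the winner is chosen; the noise is gradually annealed
    so the rule approaches ordinary winner-take-all updating."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for x in X:
        d2 = ((centroids - x) ** 2).sum(axis=1)
        d2_noisy = d2 + noise * rng.standard_normal(k)   # perturbed distortion
        j = int(d2_noisy.argmin())
        centroids[j] += lr * (x - centroids[j])          # move the winner toward x
        noise *= decay                                   # anneal the fluctuation
    return centroids

X = np.random.default_rng(2).normal(size=(2000, 2))
print(online_kmeans_stochastic(X, k=3))
```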

Journal: IJCLCLP, 2010
Heng Lu, Zhen-Hua Ling, Li-Rong Dai, Ren-Hua Wang

This paper presents a decision tree pruning method for the model clustering of HMM-based parametric speech synthesis by cross-validation (CV) under the minimum generation error (MGE) criterion. Decision-tree-based model clustering is an important component in the training process of an HMM-based speech synthesis system. Conventionally, the maximum likelihood (ML) criterion is employed to choose...
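The MGE/CV pruning procedure is only summarized in the excerpt; as a very rough sketch of the cross-validation part, one could imagine choosing a pruning threshold by held-out generation error, where `build_tree`, `prune` and `generation_error` are hypothetical stand-ins for the actual HTS training components:

```python
import numpy as np

def choose_pruning_threshold(data, thresholds, n_folds, build_tree, prune,
                             generation_error):
    """K-fold cross-validation over candidate pruning thresholds: pick the
    threshold whose pruned trees give the lowest held-out generation error
    (rather than the highest training likelihood)."""
    folds = np.array_split(np.arange(len(data)), n_folds)
    avg_error = []
    for t in thresholds:
        errs = []
        for held_out in folds:
            held = set(int(i) for i in held_out)
            train = [d for i, d in enumerate(data) if i not in held]
            test = [data[i] for i in held_out]
            tree = prune(build_tree(train), threshold=t)
            errs.append(generation_error(tree, test))
        avg_error.append(np.mean(errs))
    return thresholds[int(np.argmin(avg_error))]

# toy stand-ins just to make the sketch executable
dummy = list(np.random.default_rng(0).normal(size=50))
best = choose_pruning_threshold(
    dummy, thresholds=[0.1, 0.5, 1.0], n_folds=5,
    build_tree=lambda train: float(np.mean(train)),
    prune=lambda tree, threshold: tree,
    generation_error=lambda tree, test: float(np.mean((np.array(test) - tree) ** 2)),
)
print(best)
```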

Journal: J. Instruction-Level Parallelism, 2005
Greg Hamerly, Erez Perelman, Jeremy Lau, Brad Calder

This paper describes the new features available in the SimPoint 3.0 release. The release provides two techniques for drastically reducing the run-time of SimPoint: faster searching to find the best clustering, and efficiently clustering large numbers of intervals. SimPoint 3.0 also provides an option to output only the simulation points that represent the majority of execution, which can reduce...
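SimPoint's real inputs are basic block vectors gathered per execution interval; the sketch below only illustrates the general recipe of clustering per-interval feature vectors and emitting, for each cluster, the interval nearest its centroid together with a weight proportional to cluster size (the toy feature matrix, the choice of k, and the use of SciPy's kmeans2 are assumptions for illustration):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def pick_simulation_points(features, k, seed=0):
    """Cluster per-interval feature vectors with k-means, then return, for
    each cluster, the index of the interval closest to its centroid plus a
    weight equal to the fraction of intervals that cluster covers."""
    centroids, labels = kmeans2(features, k, minit='++', seed=seed)
    points, weights = [], []
    for j in range(k):
        members = np.flatnonzero(labels == j)
        if len(members) == 0:
            continue
        d2 = ((features[members] - centroids[j]) ** 2).sum(axis=1)
        points.append(int(members[d2.argmin()]))
        weights.append(len(members) / len(features))
    return points, weights

intervals = np.random.default_rng(3).random((400, 32))  # toy interval features
print(pick_simulation_points(intervals, k=6))
```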

Journal: Journal of computational biology: a journal of computational molecular cell biology, 2002
Edward R. Dougherty, Junior Barrera, Marcel Brun, Seungchan Kim, Roberto Marcondes Cesar Junior, Yidong Chen, Michael L. Bittner, Jeffrey M. Trent

There are many algorithms to cluster sample data points based on nearness or a similarity measure. Often the implication is that points in different clusters come from different underlying classes, whereas those in the same cluster come from the same class. Stochastically, the underlying classes represent different random processes. The inference is that clusters represent a partition of the sa...
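In that setting, one common way to report a clustering error against known underlying classes is to score the best one-to-one matching between cluster labels and class labels; a small sketch follows (this misclassification-under-matching metric is a generic choice, not necessarily the error measure studied in the paper):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_error(true_labels, cluster_labels):
    """Misclassification rate under the best one-to-one matching of cluster
    labels to true class labels (Hungarian assignment on the contingency table)."""
    true_labels = np.asarray(true_labels)
    cluster_labels = np.asarray(cluster_labels)
    classes = np.unique(true_labels)
    clusters = np.unique(cluster_labels)
    # contingency[i, j] = how many points of class i fell into cluster j
    contingency = np.zeros((len(classes), len(clusters)), dtype=int)
    for i, c in enumerate(classes):
        for j, g in enumerate(clusters):
            contingency[i, j] = np.sum((true_labels == c) & (cluster_labels == g))
    row, col = linear_sum_assignment(-contingency)   # maximize matched points
    matched = contingency[row, col].sum()
    return 1.0 - matched / len(true_labels)

truth = [0, 0, 0, 1, 1, 1, 2, 2]
found = [1, 1, 0, 0, 0, 0, 2, 2]
print(clustering_error(truth, found))   # fraction of points left unmatched
```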

Chart: number of search results per year
