Search results for: feature reduction
Number of results: 713021
This study is concerned with whether it is possible to detect what information contained in the training data and background knowledge is relevant for solving the learning problem, and whether irrelevant information can be eliminated in preprocessing before starting the learning process. A case study of data preprocessing for a hybrid genetic algorithm shows that the elimination of irrelevant f...
We show that the relevant information about a classification problem in feature space is contained up to negligible error in a finite number of leading kernel PCA components if the kernel matches the underlying learning problem. Thus, kernels not only transform data sets such that good generalization can be achieved even by linear discriminant functions, but this transformation is also performe...
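A minimal sketch of the idea (illustrative only, using scikit-learn rather than anything from the abstract): when the kernel matches the problem, a small number of leading kernel PCA components can already support a linear discriminant.

```python
# Minimal sketch: project data onto a few leading kernel PCA components and
# check that a linear classifier generalizes well on them.
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy problem that is not linearly separable in the input space.
X, y = make_circles(n_samples=500, factor=0.3, noise=0.05, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Keep only a small number of leading components of an RBF kernel PCA.
kpca = KernelPCA(n_components=4, kernel="rbf", gamma=2.0).fit(X_tr)
Z_tr, Z_te = kpca.transform(X_tr), kpca.transform(X_te)

# A linear discriminant on the reduced kernel feature space.
clf = LogisticRegression().fit(Z_tr, y_tr)
print("test accuracy on 4 kernel PCA components:", clf.score(Z_te, y_te))
```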
DNA microarray technologies are useful for addressing a broad range of biological problems - including the measurement of mRNA expression levels in target cells. These studies typically produce large data sets that contain measurements on thousands of genes under hundreds of conditions. There is a critical need to summarize this data and to pick out the important details. The most common activi...
Optimal attributes are useful in the interpretation of seismic data. Two methods are proposed in this paper for finding optimal attributes. Regularized discriminant analysis (RDA) is based on two parameters, λ and γ, which are called regularization parameters. The other method is principal component analysis (PCA). In this paper, gas chimney detection is defined as the subject of study for ranking relev...
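As a hedged illustration of what the two regularization parameters do, here is a simplified, equal-weight variant of Friedman-style RDA (my own sketch, not the paper's implementation): λ shrinks each class covariance toward the pooled covariance, and γ shrinks further toward a scaled identity.

```python
import numpy as np

def rda_covariance(X_k, X_pooled, lam, gam):
    """Regularized class covariance for one class.

    X_k      : samples of the class, shape (n_k, d)
    X_pooled : samples of all classes, shape (n, d)
    lam, gam : regularization parameters in [0, 1]
    """
    d = X_k.shape[1]
    S_k = np.cov(X_k, rowvar=False)
    S_pooled = np.cov(X_pooled, rowvar=False)
    S_lam = (1.0 - lam) * S_k + lam * S_pooled                 # lambda: class -> pooled
    return (1.0 - gam) * S_lam + gam * np.trace(S_lam) / d * np.eye(d)  # gamma: -> identity

# Example: two Gaussian classes in 5 dimensions.
rng = np.random.default_rng(0)
X0, X1 = rng.normal(size=(40, 5)), rng.normal(loc=1.0, size=(60, 5))
S0 = rda_covariance(X0, np.vstack([X0, X1]), lam=0.5, gam=0.1)
print(S0.shape)  # (5, 5)
```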
Abstract— Feature selection is a term commonly used in data mining to describe the tools and techniques available for reducing inputs to a manageable size for processing and analysis. Feature selection implies not only cardinality reduction, which means imposing an arbitrary or predefined cutoff on the number of attributes that can be considered when building a model, but also the choice of att...
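A small example of the "cardinality reduction" half of that definition (an assumed illustration using scikit-learn's univariate filter; the dataset is a placeholder): keep only the k highest-scoring attributes.

```python
# Cardinality reduction: retain the k attributes with the highest filter score.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_breast_cancer(return_X_y=True)
selector = SelectKBest(score_func=f_classif, k=10).fit(X, y)

kept = selector.get_support(indices=True)   # indices of the chosen attributes
print("kept", len(kept), "of", X.shape[1], "attributes:", kept)
X_reduced = selector.transform(X)           # data restricted to those attributes
```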
Nowadays, the growing volume of data and number of attributes in datasets reduces the accuracy of learning algorithms and increases their computational complexity. Feature selection is a dimensionality reduction method, carried out through filter and wrapper approaches. Wrapper methods are more accurate than filter methods, but filter methods run faster and carry a lower computational burden. With ...
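A hedged sketch of that contrast (scikit-learn assumed; the dataset and estimator are placeholders): the filter scores every attribute once, while the wrapper retrains a model for each candidate subset, which is why filters are cheaper and wrappers tend to be more accurate.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif, SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)

# Filter: rank attributes by mutual information with the class label.
mi = mutual_info_classif(X, y, random_state=0)
filter_top5 = np.argsort(mi)[::-1][:5]

# Wrapper: forward selection driven by cross-validated model accuracy.
sfs = SequentialFeatureSelector(
    LogisticRegression(max_iter=5000), n_features_to_select=5, cv=3
).fit(X, y)
wrapper_top5 = np.flatnonzero(sfs.get_support())

print("filter choice :", sorted(filter_top5))
print("wrapper choice:", sorted(wrapper_top5))
```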
In this paper we describe an HMM-based sign language recognition (SLR) system for isolated signs. In the first part we describe the image parametrization method that produces the features used for recognition. Our goal was to find the best combination of a feature-space dimension reduction method and an HMM structure.
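A rough sketch of one such combination (hypothetical; PCA and hmmlearn stand in for the paper's actual parametrization and HMM structure, and the frame features are simulated):

```python
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn import hmm

rng = np.random.default_rng(0)
# Stand-in for image-derived frame features: 20 sequences of 30 frames x 64 dims.
sequences = [rng.normal(size=(30, 64)) for _ in range(20)]

# Dimension reduction fitted on all frames, applied per sequence.
pca = PCA(n_components=8).fit(np.vstack(sequences))
reduced = [pca.transform(s) for s in sequences]

# One Gaussian HMM trained on all sequences of a single sign class.
model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=20)
model.fit(np.vstack(reduced), lengths=[len(s) for s in reduced])

# Recognition would pick the class whose HMM gives the highest log-likelihood.
print(model.score(reduced[0]))
```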
Methods for estimating the ratio of two probability density functions have been actively explored recently since they can be used for various data processing tasks such as non-stationarity adaptation, outlier detection, feature selection, and conditional probability estimation. In this paper, we propose a new density-ratio estimator which incorporates dimensionality reduction into the density-ra...
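For context, a simplified least-squares density-ratio estimator in the uLSIF style (my own sketch; the paper's estimator additionally builds dimensionality reduction into the fit, which is omitted here):

```python
import numpy as np

def gaussian_kernel(X, centers, sigma):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_density_ratio(x_nu, x_de, sigma=1.0, lam=1e-3, n_centers=50):
    """Estimate r(x) = p_nu(x) / p_de(x) with Gaussian basis functions."""
    rng = np.random.default_rng(0)
    centers = x_nu[rng.choice(len(x_nu), size=min(n_centers, len(x_nu)), replace=False)]
    Phi_de = gaussian_kernel(x_de, centers, sigma)
    Phi_nu = gaussian_kernel(x_nu, centers, sigma)
    H = Phi_de.T @ Phi_de / len(x_de)
    h = Phi_nu.mean(axis=0)
    alpha = np.linalg.solve(H + lam * np.eye(len(centers)), h)
    return lambda X: np.maximum(gaussian_kernel(X, centers, sigma) @ alpha, 0.0)

# Example: ratio between two shifted Gaussians.
rng = np.random.default_rng(1)
x_nu, x_de = rng.normal(0.0, 1.0, (500, 2)), rng.normal(0.5, 1.0, (500, 2))
ratio = fit_density_ratio(x_nu, x_de)
print(ratio(np.zeros((1, 2))))  # > 1 where the numerator density dominates
```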
Transfer learning provides an approach to solving target tasks more quickly and effectively by using previously acquired knowledge learned from source tasks. Most transfer learning approaches extract knowledge of the source domain in the given feature space. The issue is that a single perspective cannot fully mine the relationship between the source domain and the target domain. To deal with this issue, this paper...
High time cost is the bottleneck of video scene segmentation. In this paper we use a heuristic method called Sort-Merge feature selection to automatically construct a hierarchy of small subsets of features that are progressively more useful for segmentation. A novel combination of Fastmap for dimensionality reduction and Mahalanobis distance for likelihood determination is used as induction al...
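As a brief illustration of the Mahalanobis-distance component (illustrative only; the Fastmap-reduced features are simulated here), distances are computed against the covariance of the reduced feature vectors rather than with plain Euclidean distance.

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 3))          # stand-in for Fastmap-reduced shot features
mean = features.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(features, rowvar=False))

# A frame whose distance is large is unlikely to belong to the current scene.
print(mahalanobis(features[0], mean, cov_inv))
print(mahalanobis(features[0] + 5.0, mean, cov_inv))
```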
[Chart: number of search results per year]