Unsupervised Active Learning via Subspace Learning
Authors
Abstract
Unsupervised active learning has been an active research topic in the machine learning community, with the purpose of choosing representative samples to be labelled in an unsupervised manner. Previous works usually take the minimization of the data reconstruction loss as the criterion, selecting samples that can best approximate the original inputs. However, in many scenarios the data are drawn from low-dimensional subspaces embedded in an arbitrary high-dimensional space, so attempting to precisely reconstruct every entry of an observation can introduce severe noise and lead to a suboptimal solution. In view of this, this paper proposes a novel Active Learning model via Subspace Learning, called ALSL. In contrast to previous approaches, ALSL aims to discover the low-rank structures of the data, and then performs sample selection based on the learnt representations. To this end, we devise two different sample selection strategies and propose corresponding formulations under the learnt representations, respectively. Since the proposed formulations involve several non-smooth regularization terms, we develop a simple but effective optimization procedure to solve them. Extensive experiments are performed on five publicly available datasets, and the experimental results demonstrate that the first formulation achieves performance comparable to the state-of-the-art, while the second significantly outperforms it, achieving at most a 13% improvement over the best baseline.
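To make the selection criterion in the abstract concrete, here is a minimal sketch of the general idea: learn a low-rank embedding of the data, then greedily pick the samples whose span best reconstructs that embedding. This is my own illustration, not the ALSL algorithm; the function name is hypothetical, and a truncated SVD stands in for the paper's subspace-learning step.

```python
import numpy as np

def select_representatives(X, k, rank):
    """Greedy reconstruction-based sample selection on a low-rank embedding.

    Sketch only: truncated SVD replaces the paper's learnt low-rank
    representation, and selection greedily minimizes the least-squares
    reconstruction error of all embedded samples from the chosen ones.
    """
    # Low-rank embedding (stand-in for subspace learning)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Z = U[:, :rank] * s[:rank]          # shape (n_samples, rank)

    selected, remaining = [], list(range(len(Z)))
    for _ in range(k):
        best, best_err = None, np.inf
        for i in remaining:
            S = Z[selected + [i]]       # candidate dictionary, shape (m, rank)
            # Reconstruct every embedded sample from the candidate set
            coef, *_ = np.linalg.lstsq(S.T, Z.T, rcond=None)
            err = np.linalg.norm(Z.T - S.T @ coef)
            if err < best_err:
                best, best_err = i, err
        selected.append(best)
        remaining.remove(best)
    return selected
```

The greedy loop is quadratic in the number of candidates, which is acceptable for an illustration but far from the scalable formulations the paper optimizes directly.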
Similar resources
High-Dimensional Unsupervised Active Learning Method
In this work, a hierarchical ensemble of projected clustering algorithms for high-dimensional data is proposed. The basic concept of the algorithm is based on the active learning method (ALM), a fuzzy learning scheme inspired by some behavioral features of human brain functionality. The high-dimensional unsupervised active learning method (HUALM) is a clustering algorithm which blurs the da...
Unsupervised Slow Subspace-Learning from Stationary Processes
We propose a method of unsupervised learning from stationary, vector-valued processes. A low-dimensional subspace is selected on the basis of a criterion which rewards data-variance (like PSA) and penalizes the variance of the velocity vector, thus exploiting the short-time dependencies of the process. We prove error bounds in terms of the -mixing coefficients and consistency for absolutely regul...
Active Subspace: Toward Scalable Low-Rank Learning
We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexities if based on existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic ...
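The NNROP solvers referenced above typically build on singular value thresholding (SVT), the proximal operator of the nuclear norm. A minimal NumPy sketch of that building block (my own illustration, not the paper's active-subspace method):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink each singular value of M by tau.

    This is the proximal operator of tau * ||.||_* (nuclear norm), the core
    step in standard solvers for nuclear-norm-regularized problems.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

The full SVD at every iteration is exactly the cost that motivates scalable alternatives such as the active-subspace approach above.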
Deep Unsupervised Domain Adaptation for Image Classification via Low Rank Representation Learning
Domain adaptation is a powerful technique when a large amount of labeled data with similar attributes is available in a different domain. In real-world applications there is a huge amount of data, but most of it is unlabeled. Domain adaptation is effective in image classification, where obtaining adequate labeled data is expensive and time-consuming. We propose a novel method named DALRRL, which consists of deep ...
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2021
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v35i9.17013