Subspace Learning from Image Gradient Orientations
Authors
Abstract
Similar Resources
On the Subspace of Image Gradient Orientations
We introduce the notion of Principal Component Analysis (PCA) of image gradient orientations. Because image data is typically corrupted by noise that is substantially non-Gaussian, traditional PCA of pixel intensities very often fails to reliably estimate the low-dimensional subspace of a given data population. We show that replacing intensities with gradient orientations and the l2 norm with a ...
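As a rough illustration of the idea, one can encode each pixel's gradient orientation as a point on the unit circle and run ordinary PCA on that representation. The helper names, image sizes, and exact encoding below are illustrative assumptions, not the paper's precise pipeline:

```python
import numpy as np

def igo_features(images):
    """Map each image to a gradient-orientation representation.

    Each pixel's orientation phi is encoded as (cos phi, sin phi), so that
    subsequent PCA works with a cosine-based similarity between images
    rather than raw intensity differences. A minimal sketch of the idea.
    """
    feats = []
    for img in images:
        gy, gx = np.gradient(img.astype(float))   # per-pixel gradients
        phi = np.arctan2(gy, gx)                  # orientation at each pixel
        feats.append(np.concatenate([np.cos(phi).ravel(),
                                     np.sin(phi).ravel()]))
    return np.array(feats)

def pca(X, k):
    """Plain PCA via SVD on mean-centred rows of X; returns top-k components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k]

images = [np.random.rand(8, 8) for _ in range(10)]  # illustrative data
F = igo_features(images)        # shape (10, 128): cos+sin per pixel
components = pca(F, k=3)
print(components.shape)         # (3, 128)
```

The point of the encoding is that two orientation fields that differ by random, uniformly distributed angles are nearly orthogonal in this representation, which is what makes the resulting subspace estimate robust to non-Gaussian corruption.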
Image Recognition Using a Subspace Learning Method
Subspace learning is an effective image feature extraction and classification technique, which generally includes supervised and unsupervised subspace learning methods. Parallel computing is an effective technique for solving large-scale learning problems: it splits the original task into several subtasks and thereby reduces the time complexity. Parallel computing reduces the time complexity by splittin...
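The split-and-merge pattern described here can be sketched for one common subtask: accumulating the Gram matrix of the data over parallel chunks, which is the building block of many subspace learners. Function names and chunk counts are illustrative assumptions; threads are used for simplicity, since NumPy releases the GIL during matrix products:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def chunk_gram(chunk):
    """Gram-matrix contribution of one row-wise data chunk (one subtask)."""
    return chunk.T @ chunk

def parallel_gram(X, n_chunks=4):
    """Split X row-wise, compute each chunk's Gram matrix concurrently,
    then sum the parts: the split/merge pattern described above.
    A minimal sketch; real parallel subspace learners differ in detail."""
    chunks = np.array_split(X, n_chunks)
    with ThreadPoolExecutor(max_workers=n_chunks) as ex:
        parts = list(ex.map(chunk_gram, chunks))
    return sum(parts)

X = np.random.rand(200, 30)
G = parallel_gram(X)
print(np.allclose(G, X.T @ X))   # True: chunk sums match the full product
```

Because the Gram matrix is a sum over samples, the merge step is a plain addition, which is why this particular subtask parallelizes so cleanly.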
Image retrieval based on incremental subspace learning
Many problems in information processing involve some form of dimensionality reduction, such as face recognition, image/text retrieval, and data visualization. Typical linear dimensionality reduction algorithms include principal component analysis (PCA), random projection, and locality-preserving projection (LPP). These techniques are generally unsupervised, which allows them to model data ...
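To make the contrast between these reductions concrete, here is a minimal sketch of two of the unsupervised methods mentioned, PCA (data-dependent) and Gaussian random projection (data-independent); the dimensions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))        # 100 samples, 50 features

# PCA: project mean-centred data onto the top-5 principal directions.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X_pca = Xc @ Vt[:5].T                 # (100, 5)

# Random projection: a Gaussian matrix scaled so pairwise distances are
# roughly preserved (Johnson-Lindenstrauss style), with no training step.
R = rng.normal(size=(50, 5)) / np.sqrt(5)
X_rp = X @ R                          # (100, 5)

print(X_pca.shape, X_rp.shape)
```

The practical trade-off: PCA must see the data to fit its projection, whereas random projection is fixed in advance, which is what makes it attractive for incremental or streaming settings like the retrieval task above.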
Estimation of 3D shape from image orientations.
One of the main functions of vision is to estimate the 3D shape of objects in our environment. Many different visual cues, such as stereopsis, motion parallax, and shading, are thought to be involved. One important cue that remains poorly understood comes from surface texture markings. When a textured surface is slanted in 3D relative to the observer, the surface patterns appear compressed in t...
Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace
Gradient-based meta-learning has been shown to be expressive enough to approximate any learning algorithm. While previous such methods have been successful in meta-learning tasks, they resort to simple gradient descent during meta-testing. Our primary contribution is the MT-net, which enables the meta-learner to learn on each layer’s activation space a subspace that the task-specific learner pe...
Journal
Journal title: IEEE Transactions on Pattern Analysis and Machine Intelligence
Year: 2012
ISSN: 0162-8828,2160-9292
DOI: 10.1109/tpami.2012.40