Supplemental Material for the paper “Learning Separable Filters”
Authors
Abstract
Figure 1 illustrates some of the filter banks learned on the DRIVE dataset. In particular, it shows how an example learned filter bank can be replaced by its rank-1 approximation obtained with the SVD decomposition (SEP-SVD). Figure 2 shows examples of 3D filter banks learned on the OPF dataset, and Figure 3 shows their central slices for easier comparison. We report the detailed pixel classification results in Table 1. To provide a fair comparison of the different approaches, we evaluated the quality of the classification obtained on both the 2D and the 3D datasets using several analytic measures. In particular, we considered the following:
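As a concrete illustration of the SEP-SVD idea mentioned above, the sketch below takes a single 2D filter, keeps only its largest singular value and the corresponding singular vectors, and checks that convolving with the resulting rank-1 filter is equivalent to applying two 1D filters in sequence. This is a minimal sketch assuming NumPy and SciPy; the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def rank1_separable(filter_2d):
    """Rank-1 (separable) approximation of a 2D filter via SVD.

    Keeps only the largest singular value/vector pair, so the filter
    factorizes into a 1D column filter and a 1D row filter. This is a
    sketch of the SEP-SVD idea, not the paper's implementation.
    """
    U, s, Vt = np.linalg.svd(filter_2d, full_matrices=False)
    vertical = np.sqrt(s[0]) * U[:, 0]     # 1D column filter
    horizontal = np.sqrt(s[0]) * Vt[0, :]  # 1D row filter
    return vertical, horizontal

# Hypothetical example: approximate a 7x7 filter and apply it separably.
rng = np.random.default_rng(0)
f = rng.standard_normal((7, 7))            # stand-in for a learned filter
v, h = rank1_separable(f)
f_rank1 = np.outer(v, h)                   # separable approximation of f

img = rng.standard_normal((64, 64))        # stand-in for an input image
full_2d = convolve2d(img, f_rank1, mode="full")
two_1d = convolve2d(convolve2d(img, v[:, None], mode="full"),
                    h[None, :], mode="full")
assert np.allclose(full_2d, two_1d)        # two 1D passes == one 2D pass

rel_err = np.linalg.norm(f - f_rank1) / np.linalg.norm(f)  # approximation error
```

For a k × k filter, applying the two 1D filters costs roughly 2k multiply-adds per pixel instead of k², which is the computational motivation for replacing learned filters by separable approximations.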
Similar Resources
Learning Separable Filters with Shared Parts
Learned image features can provide great accuracy in many Computer Vision tasks. However, when the convolution filters used to learn image features are numerous and not separable, feature extraction becomes computationally demanding and impractical for real-world use. In this thesis work, a method for learning a small number of separable filters to approximate an arbitrary non-separable...
STABILIZER TOPOLOGY OF HOOPS
In this paper, we introduce the concepts of right, left, and product stabilizers on hoops, study some of their properties, and examine the relations between them. We investigate when they coincide and under what conditions they form filters, implicative filters, fantastic filters, or positive implicative filters. We also prove that right and product stabilizers are filters and, if they are ...
Learning of Separable Filters by Stacked Fisher Convolutional Autoencoders
Learning convolutional filters in deep neural networks has proven highly effective for producing sparse representations for image recognition. The computational cost of these networks can be reduced by focusing on separable filters, which lowers the number of parameters to learn. Autoencoders are a family of powerful deep networks for building scalable generative models for automatic feature...
Learning Rotation-Aware Features: From Invariant Priors to Equivariant Descriptors Supplemental Material
The R-FoE model of Sec. 3 of the main paper was trained on a database of 5000 natural images (50 × 50 pixels) using persistent contrastive divergence [12] (also known as stochastic maximum likelihood). Learning was done with stochastic gradient descent using mini-batches of 100 images (and model samples) for a total of 10000 (exponentially smoothed) gradient steps with an annealed learning rate...
Supplementary Material for Learning Sparse High Dimensional Filters: Image Filtering, Dense CRFs and Bilateral Neural Networks
A core technical contribution of this work is the generalization of the Gaussian permutohedral lattice convolution proposed in [1] to the full non-separable case, with the ability to perform backpropagation. Although the conceptual differences between Gaussian and general parameterized filters are minor, there are non-trivial practical differences in terms of the algorithmic implementation...