Search results for: total variation regularizer
Number of results: 1,064,242
Dropout and other feature noising schemes control overfitting by artificially corrupting the training data. For generalized linear models, dropout performs a form of adaptive regularization. Using this viewpoint, we show that the dropout regularizer is first-order equivalent to an L2 regularizer applied after scaling the features by an estimate of the inverse diagonal Fisher information matrix....
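As a sketch of the first-order equivalence this abstract describes (the notation here is assumed: δ is the dropout rate, A the log-partition function of the generalized linear model), the quadratic approximation to the dropout regularizer takes the form:

```latex
% Sketch of the first-order equivalence described above; notation assumed:
% \delta is the dropout rate, A the GLM log-partition function, and
% \hat I = \sum_i A''(x_i^{\top}\beta)\, x_i x_i^{\top} the estimated
% Fisher information.
R_{\mathrm{dropout}}(\beta) \;\approx\; \frac{\delta}{2(1-\delta)}\,
  \beta^{\top} \operatorname{diag}\!\big(\hat I\big)\, \beta
```

Equivalently, this is an ordinary L2 penalty once the features have been rescaled by diag(Î)^{-1/2}, which matches the abstract's phrasing.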
The success of regularized risk minimization approaches to classification with linear models depends crucially on selecting a regularization term that matches the learning task at hand. If the necessary domain expertise is rare or hard to formalize, it may be difficult to find a good regularizer. On the other hand, if plenty of related or similar data is available, it is a natural a...
NLP models have many, sparse features, and regularization is key to balancing overfitting against underfitting. A recently repopularized form of regularization is to generate fake training data by repeatedly adding noise to real data. We reinterpret this noising as an explicit regularizer, and approximate it with a second-order formula that can be used during training without actually ...
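A minimal numerical sketch of this idea (my own illustration, not the paper's formula): for linear least squares with additive Gaussian feature noise, the noising regularizer is exactly an L2 penalty, σ²‖w‖², so it can be applied in closed form instead of repeatedly corrupting the data.

```python
import numpy as np

# For linear regression with additive Gaussian feature noise, the expected
# noised loss equals the clean loss plus an explicit L2 penalty
# sigma^2 * ||w||^2, so the "fake data" regularizer has a closed form.
rng = np.random.default_rng(0)
x = rng.normal(size=5)          # one example's features
w = rng.normal(size=5)          # model weights
y, sigma = 1.0, 0.3             # target and noise scale

noised = np.mean([(y - (x + sigma * rng.normal(size=5)) @ w) ** 2
                  for _ in range(200_000)])
explicit = (y - x @ w) ** 2 + sigma ** 2 * (w @ w)
print(noised, explicit)         # the two estimates agree closely
```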
In this paper we present a novel way of combining the process of k-means clustering with image segmentation by introducing a convex regularizer for segmentation-based optimization problems. Instead of separating the clustering process from the core image segmentation algorithm, this regularizer allows the direct incorporation of clustering information in many segmentation algorithms. Besides in...
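One plausible reading of this construction, sketched below under loud assumptions (the function `segmentation_energy`, the coupling of k-means distances to the label field, and the total-variation term are my illustrative choices, not the paper's exact regularizer): per-pixel distances to k-means centroids enter a convex energy over a soft label field, alongside a TV smoothness term.

```python
import numpy as np

# Hypothetical sketch of a clustering-informed segmentation energy
# (illustrative only; not the paper's exact formulation).
def segmentation_energy(img, centers, u, lam=0.1):
    # img: (H, W) grayscale image; centers: (K,) k-means centroids;
    # u: (H, W, K) soft label assignments in [0, 1]
    dist = (img[..., None] - centers[None, None, :]) ** 2  # k-means distances
    data = np.sum(u * dist)                                # clustering-informed term
    du_y = np.diff(u, axis=0, append=u[-1:])               # forward differences
    du_x = np.diff(u, axis=1, append=u[:, -1:])
    tv = np.sum(np.sqrt(du_y ** 2 + du_x ** 2))            # total variation of u
    return data + lam * tv                                 # convex in u
```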
We present a variational multi-label segmentation algorithm based on a robust Huber loss for both the data and the regularizer, minimized within a convex optimization framework. We introduce a novel constraint on the common areas, to bias the solution towards mutually exclusive regions. We also propose a regularization scheme that is adapted to the spatial statistics of the residual at each ite...
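For reference, the robust Huber function this abstract builds on is quadratic near zero and linear in the tails; a minimal sketch (the threshold name `delta` and the vectorized form are my choices):

```python
import numpy as np

# Huber function: quadratic for small residuals, linear for large ones,
# which is what makes it robust to outliers in data and regularizer terms.
def huber(r, delta=1.0):
    a = np.abs(r)
    return np.where(a <= delta,
                    0.5 * r ** 2,               # quadratic near zero
                    delta * (a - 0.5 * delta))  # linear in the tails

print(huber(np.array([-3.0, -0.5, 0.0, 0.5, 3.0])))
```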
We consider a learning algorithm generated by a regularization scheme with a concave regularizer for the purpose of achieving sparsity and good learning rates in a least squares regression setting. The regularization is induced for linear combinations of empirical features, constructed in the literature on kernel principal component analysis and kernel projection machines, based on kernels and...
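One standard instance of such a concave, sparsity-inducing penalty (the paper's exact choice may differ) is the ℓ_q quasi-norm with 0 < q < 1, applied to the coefficients of the empirical features:

```latex
% A standard concave sparsity penalty (illustrative; the paper's exact
% regularizer may differ): \ell_q quasi-norm on coefficients \alpha_j of
% empirical features \phi_j, e.g. from kernel PCA.
\min_{\alpha}\; \frac{1}{n}\sum_{i=1}^{n}\Big(y_i-\sum_{j=1}^{m}\alpha_j\,\phi_j(x_i)\Big)^{2}
\;+\;\lambda\sum_{j=1}^{m}\lvert\alpha_j\rvert^{q},\qquad 0<q<1 .
```

Since t ↦ t^q is concave on [0, ∞) for q < 1, this penalty is concave in |α_j| and drives coefficients to exact zero more aggressively than the convex ℓ_1 norm.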
Neural networks are composed of multiple layers arranged in a hierarchical structure and jointly trained with gradient-based optimization, where errors are back-propagated from the last layer to the first. At each optimization step, neurons at a given layer receive feedback only from neurons higher in the hierarchy. In this paper, we propose to complement this traditional 'between-layer' feedback with an additional 'within-layer' feedback to encourage dive...
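A hedged sketch of what a 'within-layer' diversity regularizer could look like (my own illustrative variant, not necessarily the paper's formula): penalize the pairwise cosine similarity of hidden units' activation patterns, pushing neurons in the same layer toward diverse responses.

```python
import numpy as np

# Illustrative 'within-layer' diversity penalty: discourage hidden units
# in the same layer from producing similar activation patterns.
def within_layer_diversity_penalty(h):
    # h: (batch, units) activations of one hidden layer
    hn = h / (np.linalg.norm(h, axis=0, keepdims=True) + 1e-8)
    sim = hn.T @ hn                         # (units, units) cosine similarities
    off_diag = sim - np.diag(np.diag(sim))  # ignore self-similarity
    m = h.shape[1]
    return np.sum(off_diag ** 2) / (m * (m - 1))

h = np.random.default_rng(0).normal(size=(32, 8))
print(within_layer_diversity_penalty(h))  # added to the task loss, scaled by a weight
```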
[Chart: number of search results per year]