Search results for: total variation regularizer

Number of results: 1064242

2013
Stefan Wager Sida I. Wang Percy Liang

Dropout and other feature noising schemes control overfitting by artificially corrupting the training data. For generalized linear models, dropout performs a form of adaptive regularization. Using this viewpoint, we show that the dropout regularizer is first-order equivalent to an L2 regularizer applied after scaling the features by an estimate of the inverse diagonal Fisher information matrix....
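The first-order equivalence described in this snippet can be sketched numerically. The code below is an illustrative sketch, not the paper's implementation: it assumes a logistic-regression setting and uses the quadratic (second-order) approximation of the dropout noising penalty, where each weight is penalized in proportion to an estimate of the diagonal Fisher information. The function name, the toy data, and the exact scaling constant are assumptions for illustration.

```python
import numpy as np

def dropout_quadratic_penalty(X, w, delta=0.5):
    """Quadratic approximation of the dropout noising penalty for
    logistic regression, viewed as an adaptive L2 regularizer:
    weight w_j is penalized by an estimate of the diagonal Fisher
    information sum_i p_i (1 - p_i) x_ij^2, scaled by the odds of
    the drop probability delta.  (Illustrative sketch only.)
    """
    p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
    fisher_diag = (p * (1 - p)) @ (X ** 2)  # per-feature curvature, shape (d,)
    return 0.5 * delta / (1 - delta) * np.sum(fisher_diag * w ** 2)

# Unlike a plain L2 penalty, which treats all features alike, this
# penalty weights each coordinate by the model's local uncertainty.
X = np.array([[1.0, 0.0], [0.5, 2.0], [1.5, 1.0]])
w = np.array([0.3, -0.2])
penalty = dropout_quadratic_penalty(X, w, delta=0.5)
```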

2011
Ulrich Rückert Marius Kloft

The success of regularized risk minimization approaches to classification with linear models depends crucially on the selection of a regularization term that matches with the learning task at hand. If the necessary domain expertise is rare or hard to formalize, it may be difficult to find a good regularizer. On the other hand, if plenty of related or similar data is available, it is a natural a...

2013
Sida I. Wang Mengqiu Wang Stefan Wager Percy Liang Christopher D. Manning

NLP models have many and sparse features, and regularization is key for balancing model overfitting versus underfitting. A recently repopularized form of regularization is to generate fake training data by repeatedly adding noise to real data. We reinterpret this noising as an explicit regularizer, and approximate it with a second-order formula that can be used during training without actually ...

2015
Benjamin Hell Marcus A. Magnor

In this paper we present a novel way of combining the process of k-means clustering with image segmentation by introducing a convex regularizer for segmentation-based optimization problems. Instead of separating the clustering process from the core image segmentation algorithm, this regularizer allows the direct incorporation of clustering information in many segmentation algorithms. Besides in...

Journal: :CoRR 2017
Byung-Woo Hong Ja-Keoung Koo Stefano Soatto

We present a variational multi-label segmentation algorithm based on a robust Huber loss for both the data and the regularizer, minimized within a convex optimization framework. We introduce a novel constraint on the common areas, to bias the solution towards mutually exclusive regions. We also propose a regularization scheme that is adapted to the spatial statistics of the residual at each ite...
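To make the robust-regularizer idea from this snippet concrete, here is a minimal sketch of the standard Huber function applied to finite differences of a 1-D signal, giving a Huber-smoothed total-variation-style penalty. The 1-D setting, function names, and parameter values are assumptions for illustration; this is not the paper's multi-label segmentation algorithm.

```python
import numpy as np

def huber(r, delta=1.0):
    """Standard Huber function: quadratic for small residuals,
    linear for large ones, so outliers are penalized less harshly
    than under a pure squared loss."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

def huber_tv(u, delta=0.05):
    """Huber-smoothed total-variation-style penalty on a 1-D signal:
    apply the Huber function to finite differences, which keeps the
    edge-preserving behavior of TV while avoiding the kink of the
    absolute value at zero.  (Illustrative sketch only.)"""
    return np.sum(huber(np.diff(u), delta))

# A single sharp jump is charged roughly linearly (edge preserved),
# while small ramps and noise are charged quadratically.
u = np.array([0.0, 0.0, 1.0, 1.0, 1.02])
tv_pen = huber_tv(u)
```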

Journal: :Journal of Machine Learning Research 2016
Xin Guo Jun Fan Ding-Xuan Zhou

We consider a learning algorithm generated by a regularization scheme with a concave regularizer for the purpose of achieving sparsity and good learning rates in a least squares regression setting. The regularization is induced for linear combinations of empirical features, constructed in the literatures of kernel principal component analysis and kernel projection machines, based on kernels and...

Journal: :Proceedings of the ... AAAI Conference on Artificial Intelligence 2023

Neural networks are composed of multiple layers arranged in a hierarchical structure and jointly trained with gradient-based optimization, where errors are back-propagated from the last layer back to the first. At each optimization step, neurons at a given layer receive feedback from neurons belonging to higher layers of the hierarchy. In this paper, we propose to complement the traditional 'between-layer' feedback with additional 'within-layer' feedback to encourage dive...

Journal: :Proceedings of the ISCIE International Symposium on Stochastic Systems Theory and its Applications 1998

[Chart: number of search results per year]