Search results for: non convex function
Number of results: 2416187
Minimizing a convex function of matrices regularized by the nuclear norm arises in many applications such as collaborative filtering and multi-task learning. In this paper, we study the general setting where the convex function could be non-smooth. When the size of the data matrix, denoted by m×n, is very large, existing optimization methods are inefficient because in each iteration, they need t...
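A common building block in nuclear-norm-regularized problems like the one above (not necessarily the method of this particular paper) is the proximal operator of the nuclear norm, which soft-thresholds the singular values. A minimal sketch; the function name and example matrix are illustrative:

```python
import numpy as np

def prox_nuclear_norm(X, tau):
    """Proximal operator of tau * ||.||_* : soft-threshold the singular values of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)      # shrink each singular value toward zero
    return U @ np.diag(s_shrunk) @ Vt

# For a diagonal matrix, this simply shrinks the diagonal entries:
Y = prox_nuclear_norm(np.diag([3.0, 1.0]), 0.5)   # singular values 3, 1 -> 2.5, 0.5
```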
As a generalization of the geodesic convex function, this paper introduces the notion of the φE-convex function. Some properties of this function are established. The concepts of the φE-convex set and the φE-epigraph are also given, and a characterization of φE-convex functions in terms of their φE-epigraphs is obtained.
Consider the problem of minimizing the sum of a smooth (possibly non-convex) and a convex (possibly nonsmooth) function involving a large number of variables. A popular approach to solve this problem is the block coordinate descent (BCD) method whereby at each iteration only one variable block is updated while the remaining variables are held fixed. With the recent advances in the developments ...
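The cyclic update scheme described above can be sketched in a few lines. This is an illustrative toy, not the paper's algorithm: each pass takes a gradient step on one block while the other blocks are held fixed (all names and the smooth test objective are assumptions):

```python
import numpy as np

def bcd(grads, x0, step=0.1, iters=200):
    """Cyclic block coordinate descent sketch.
    grads[i](x) returns the partial gradient of the objective w.r.t. block i;
    each inner step updates only block i, holding the other blocks fixed."""
    x = [np.array(b, dtype=float) for b in x0]
    for _ in range(iters):
        for i, g in enumerate(grads):
            x[i] = x[i] - step * g(x)
    return x

# Toy smooth objective f(u, v) = (u - 1)^2 + (v + 2)^2, split into two blocks.
grads = [lambda x: 2 * (x[0] - 1), lambda x: 2 * (x[1] + 2)]
u, v = bcd(grads, [np.zeros(1), np.zeros(1)])   # converges toward u = 1, v = -2
```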
In the present paper, we first prove the logarithmic convexity of the elementary function (b^x − a^x)/x, where x ≠ 0 and b > a > 0. Based on this, we then provide a simple proof of the Schur-convexity of the extended mean values and, finally, discover some convexity results related to the extended mean values.
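The claimed log-convexity can be sanity-checked numerically: log-convexity of f implies the midpoint inequality f((x₁+x₂)/2) ≤ √(f(x₁)·f(x₂)). A quick check at one midpoint, with illustrative values a = 1, b = 2:

```python
import math

def f(x, a, b):
    """The function (b**x - a**x) / x from the abstract, for x != 0."""
    return (b**x - a**x) / x

# Midpoint test for log-convexity: f((x1 + x2)/2) <= sqrt(f(x1) * f(x2)).
a, b = 1.0, 2.0
x1, x2 = 1.0, 2.0
mid = f((x1 + x2) / 2, a, b)                 # ~1.2190
geo = math.sqrt(f(x1, a, b) * f(x2, a, b))   # ~1.2247
```

A single midpoint check is of course not a proof, only a consistency check of the stated result.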
Khrapchenko’s classical lower bound n² on the formula size of the parity function f can be interpreted as designing a suitable measure of sub-rectangles of the combinatorial rectangle f⁻¹(0) × f⁻¹(1). Trying to generalize this approach we arrived at the concept of convex measures. We prove the negative result that convex measures are bounded by O(n²) and show that several measures considered fo...
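Khrapchenko’s n² bound for parity can be computed directly on small instances: taking A = f⁻¹(0), B = f⁻¹(1), and C = the pairs in A × B at Hamming distance 1, the quantity |C|²/(|A||B|) evaluates to exactly n². A brute-force sketch (function name is illustrative):

```python
from itertools import product

def khrapchenko_bound(n):
    """Khrapchenko's quantity |C|^2 / (|A| |B|) for parity on n bits, with
    A = even-parity inputs, B = odd-parity inputs, and
    C = pairs (a, b) in A x B differing in exactly one coordinate."""
    A = [v for v in product((0, 1), repeat=n) if sum(v) % 2 == 0]
    B = [v for v in product((0, 1), repeat=n) if sum(v) % 2 == 1]
    C = sum(1 for a in A for b in B
            if sum(x != y for x, y in zip(a, b)) == 1)
    return C * C / (len(A) * len(B))
```

Every one-bit flip changes parity, so each of the 2^(n−1) strings in A has exactly n neighbors in B, giving |C| = n·2^(n−1) and a bound of n².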
Learning with non-modular losses is an important problem when sets of predictions are made simultaneously. The main tools for constructing convex surrogate loss functions for set prediction are margin rescaling and slack rescaling. In this work, we show that these strategies lead to tight convex surrogates iff the underlying loss function is increasing in the number of incorrect predictions. Ho...
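For intuition, the two surrogates mentioned above can be written, for a discrete output space with scores s(y) and task loss Δ(y) where Δ(y*) = 0, as max_y [Δ(y) + s(y) − s(y*)] (margin rescaling) and max_y Δ(y)·(1 + s(y) − s(y*)) (slack rescaling). A toy comparison with illustrative numbers, not tied to this paper's experiments:

```python
import numpy as np

def margin_rescaling(scores, losses, y_true):
    """Structured hinge with margin rescaling: max_y [loss(y) + s(y)] - s(y*)."""
    return np.max(losses + scores) - scores[y_true]

def slack_rescaling(scores, losses, y_true):
    """Slack rescaling: max_y loss(y) * (1 + s(y) - s(y*))."""
    return np.max(losses * (1.0 + scores - scores[y_true]))

# Three candidate outputs; index 0 is the true one (loss 0).
scores = np.array([2.0, 1.0, 0.5])
losses = np.array([0.0, 1.0, 2.0])
m = margin_rescaling(scores, losses, 0)   # penalizes large loss + large score
s = slack_rescaling(scores, losses, 0)    # scales the margin violation by the loss
```

Note how the two surrogates can disagree on the same scores: here margin rescaling is positive while slack rescaling is zero, since the high-loss output already has a comfortable score margin.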
In this paper we study optimization from samples of convex functions. There are many scenarios in which we do not know the function we wish to optimize but can learn it from data. In such cases, we are interested in bounding the number of samples required to optimize the function. Our main result shows that in general, the number of samples required to obtain a non-trivial approximation to the ...