Search results for: nonconvex optimization
Number of results: 320278
We study a class of large-scale, nonsmooth, and nonconvex optimization problems. In particular, we focus on nonconvex problems with composite objectives. This class includes the extensively studied class of convex composite objective problems as a subclass. To solve composite nonconvex problems we introduce a powerful new framework based on asymptotically nonvanishing errors, avoiding the commo...
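The snippet above does not include the paper's framework; purely as a generic illustration of composite nonconvex optimization, the following sketch runs the standard proximal-gradient iteration on an objective of the form g(x) + λ‖x‖₁, where g is smooth but possibly nonconvex. All function names, the test problem, and the step size are illustrative assumptions, not the authors' method.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_gradient(grad_g, x0, step, lam, n_iters=500):
    """Proximal-gradient iteration for min g(x) + lam*||x||_1,
    where g is smooth (possibly nonconvex) with gradient grad_g."""
    x = x0.copy()
    for _ in range(n_iters):
        x = soft_threshold(x - step * grad_g(x), step * lam)
    return x

# Illustrative nonconvex g(x) = 0.5*||A x - b||^2 - 0.1*||x||^2 plus an L1 term
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
grad_g = lambda x: A.T @ (A @ x - b) - 0.2 * x
x_hat = prox_gradient(grad_g, np.zeros(10), step=0.01, lam=0.1)
```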
Abstract. Nonsmooth nonconvex regularization has remarkable advantages for the restoration of piecewise constant images. Constrained optimization can improve the image restoration using a priori information. In this paper, we study regularized nonsmooth nonconvex minimization with box constraints for image restoration. We present a computable positive constant θ for using nonconvex nonsmooth re...
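The abstract does not show the paper's model or the constant θ; the sketch below is only a generic projected-gradient loop for a box-constrained 1-D restoration problem with a smoothed nonconvex penalty on finite differences. Every name, penalty choice, and parameter value here is an assumption made for illustration.

```python
import numpy as np

def dphi(t, eps):
    """Derivative of the smoothed nonconvex penalty phi(t) = log(1 + |t|/eps)."""
    return np.sign(t) / (eps + np.abs(t))

def restore_1d(y, lam=0.1, lo=0.0, hi=1.0, eps=0.1, step=0.1, n_iters=500):
    """Projected gradient for min 0.5*||x - y||^2 + lam * sum(phi(x_{i+1} - x_i))
    subject to the box constraints lo <= x <= hi (1-D piecewise-constant sketch)."""
    x = y.copy()
    for _ in range(n_iters):
        d = np.diff(x)                    # forward differences of the signal
        g = lam * dphi(d, eps)
        grad = x - y                      # gradient of the data-fidelity term
        grad[:-1] -= g                    # adjoint of the difference operator
        grad[1:] += g
        x = np.clip(x - step * grad, lo, hi)   # projection onto the box
    return x

# Illustrative noisy piecewise-constant signal
rng = np.random.default_rng(1)
truth = np.repeat([0.2, 0.8, 0.4], 30)
x_hat = restore_1d(truth + 0.05 * rng.standard_normal(truth.size))
```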
In this paper, we consider a stochastic distributed nonconvex optimization problem in which the cost function is distributed over n agents that have access only to zeroth-order (ZO) information of the cost. This setting has various machine learning applications. As a solution, we propose two ZO algorithms, in which at each iteration each agent samples its local oracle at points determined by a time-varying smoothing parameter. We show that the proposed algor...
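The two ZO algorithms themselves are not given in the snippet; as a hedged, single-agent illustration of zeroth-order optimization with a time-varying smoothing parameter, here is a minimal two-point Gaussian-smoothing gradient estimator driving a plain descent loop. The decay schedule, step size, and test function are assumptions.

```python
import numpy as np

def zo_gradient(f, x, mu, rng):
    """Two-point zeroth-order gradient estimate with Gaussian smoothing
    parameter mu; only function values of f are queried."""
    u = rng.standard_normal(x.size)
    return (f(x + mu * u) - f(x)) / mu * u

def zo_descent(f, x0, step=0.01, n_iters=2000, seed=0):
    """Zeroth-order descent with a decaying, time-varying smoothing parameter."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for t in range(1, n_iters + 1):
        mu_t = 1.0 / np.sqrt(t)          # time-varying smoothing parameter
        x -= step * zo_gradient(f, x, mu_t, rng)
    return x

# Illustrative nonconvex test function
f = lambda x: np.sum(x**2) + 0.5 * np.sum(np.sin(3 * x))
x_hat = zo_descent(f, np.ones(5))
```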
Distributed multi-agent optimization finds many applications in distributed learning, control, estimation, etc. Most existing algorithms assume knowledge of first-order information of the objective and have been analyzed for convex problems. However, there are situations where the objective is nonconvex and one can only evaluate function values at finitely many points. In this paper we consider derivative-free nonconve...
The success of deep learning has led to a rising interest in the generalization property of the stochastic gradient descent (SGD) method, and stability is one popular approach to study it. Existing stability-based bounds do not incorporate the interplay between the optimization of SGD and the underlying data distribution, and hence cannot even capture the effect of randomized labels on performance. In this paper, we establish error bounds for SGD by char...
In this paper, we present a conditional gradient type (CGT) method for solving a class of composite optimization problems where the objective function consists of a (weakly) smooth term and a strongly convex term. While including this strongly convex term in the subproblems of the classical conditional gradient (CG) method improves its convergence rate for solving strongly convex problems, it d...
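The exact CGT subproblem and step-size rule are not shown in the snippet; the sketch below is a generic conditional-gradient-type loop that assumes the strongly convex term is (μ/2)‖u‖² and the feasible set is a box, so the subproblem that retains this term has a closed form. The classical 2/(k+1) step size and the quadratic test problem are illustrative choices.

```python
import numpy as np

def cgt(grad_g, mu, x0, n_iters=200):
    """Conditional-gradient-type sketch for min g(x) + (mu/2)||x||^2 over
    the box [-1, 1]^n, keeping the strongly convex term in the subproblem."""
    x = x0.copy()
    for k in range(1, n_iters + 1):
        c = grad_g(x)
        # subproblem: argmin_{u in [-1,1]^n} <c, u> + (mu/2)||u||^2
        u = np.clip(-c / mu, -1.0, 1.0)
        gamma = 2.0 / (k + 1)            # classical conditional-gradient step size
        x = (1 - gamma) * x + gamma * u
    return x

# Illustrative smooth term g(x) = 0.5*||A x - b||^2
rng = np.random.default_rng(2)
A, b = rng.standard_normal((15, 8)), rng.standard_normal(15)
grad_g = lambda x: A.T @ (A @ x - b)
x_hat = cgt(grad_g, mu=1.0, x0=np.zeros(8))
```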
The paper deals with three numerical approaches that allow one to construct computational technologies for solving nonconvex optimization problems. We propose to use developed algorithms based on modifications of the tunnel search algorithm, the Luus–Jaakola method, and an expert algorithm. The presented techniques are implemented within the framework of a software package used to solve problems of various classes, in particular...
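The paper's modifications are not given in the snippet; for orientation only, here is a minimal sketch of the basic Luus–Jaakola random search (uniform sampling in a box around the incumbent with a shrinking search region). The shrink factor, iteration counts, and test objective are illustrative assumptions.

```python
import numpy as np

def luus_jaakola(f, lo, hi, n_outer=50, n_inner=100, shrink=0.95, seed=0):
    """Luus-Jaakola random search: sample uniformly around the incumbent
    and contract the search region after each outer pass."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x_best = rng.uniform(lo, hi)
    f_best = f(x_best)
    radius = (hi - lo) / 2.0
    for _ in range(n_outer):
        for _ in range(n_inner):
            x = np.clip(x_best + rng.uniform(-radius, radius), lo, hi)
            fx = f(x)
            if fx < f_best:
                x_best, f_best = x, fx
        radius *= shrink                 # shrink the search region
    return x_best, f_best

# Illustrative nonconvex (Rastrigin-type) objective on [-5, 5]^3
f = lambda x: np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)
x_hat, f_min = luus_jaakola(f, [-5] * 3, [5] * 3)
```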