Search results for: nonsmooth convex optimization problem
Number of results: 1,134,849
BACKGROUND The challenge of reconstructing a sparse medical magnetic resonance image based on compressed sensing from undersampled k-space data has been investigated in recent years. As total variation (TV) performs well in preserving edges, one type of approach uses TV regularization as a sparsity-promoting structure and solves a convex optimization problem. Nevertheless, this convex optimization pr...
We seek to solve convex optimization problems in composite form: minimize_{x∈R^n} f(x) := g(x) + h(x), where g is convex and continuously differentiable and h : R^n → R is a convex but not necessarily differentiable function whose proximal mapping can be evaluated efficiently. We derive a generalization of Newton-type methods to handle such convex but nonsmooth objective functions. We prove such met...
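The snippet describes Newton-type methods, whose details are not given here; the composite model itself is most easily illustrated with the simpler proximal gradient iteration x_{k+1} = prox_{t·h}(x_k − t·∇g(x_k)). A minimal sketch for a lasso-type instance (g a least-squares term, h the ℓ1 norm, whose proximal mapping is soft-thresholding) — the function names and the lasso choice are illustrative assumptions, not the paper's method:

```python
import numpy as np

def soft_threshold(v, t):
    # Closed-form proximal mapping of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_g, prox_h, x0, step, n_iters=500):
    # x_{k+1} = prox_{step*h}(x_k - step * grad g(x_k))
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = prox_h(x - step * grad_g(x), step)
    return x

# Illustrative instance: g(x) = 0.5*||Ax - b||^2, h(x) = lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam = 0.1
grad_g = lambda x: A.T @ (A @ x - b)
prox_h = lambda v, t: soft_threshold(v, t * lam)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L the Lipschitz constant of grad g
x_star = proximal_gradient(grad_g, prox_h, np.zeros(5), step)
```

With step 1/L the composite objective decreases monotonically; the Newton-type methods the paper derives replace the gradient step with a scaled (second-order) step but reuse the same proximal-mapping oracle.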
In this monograph we develop the function space method for optimization problems and operator equations in Banach spaces. Optimization is one of the key components of mathematical modeling of real-world problems, and the solution method provides an accurate and essential description and validation of the mathematical model. Optimization problems are encountered frequently in engineering and sci...
In this work we consider the stochastic minimization of nonsmooth convex loss functions, a central problem in machine learning. We propose a novel algorithm called Accelerated Nonsmooth Stochastic Gradient Descent (ANSGD), which exploits the structure of common nonsmooth loss functions to achieve optimal convergence rates for a class of problems including SVMs. It is the first stochastic algori...
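ANSGD's acceleration scheme is not given in the snippet; the problem class it targets can be illustrated with a plain stochastic subgradient method for the SVM hinge loss, using Pegasos-style 1/(λt) step sizes. The function names and the toy data below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def hinge_subgradient(w, x, y, lam):
    # Subgradient of lam/2*||w||^2 + max(0, 1 - y*<w, x>) at w
    g = lam * w
    if y * (w @ x) < 1.0:   # margin violated: hinge term is active
        g = g - y * x
    return g

def stochastic_subgradient_svm(X, y, lam=0.1, n_iters=4000, seed=0):
    # One uniformly sampled example per iteration, step size 1/(lam*t)
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(n)
        w = w - hinge_subgradient(w, X[i], y[i], lam) / (lam * t)
    return w

# Toy data: the label is the sign of the first coordinate
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
y_lab = np.where(X[:, 0] > 0, 1.0, -1.0)
w = stochastic_subgradient_svm(X, y_lab)
acc = np.mean(np.sign(X @ w) == y_lab)
```

Plain stochastic subgradient descent attains O(1/√t) rates on general nonsmooth convex losses; the snippet's claim is that exploiting the structure of losses like the hinge allows a faster, accelerated rate.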
The analysis of flow in water-distribution networks with several pumps by the Content Model may be turned into an uncertain non-convex optimization problem with multiple solutions. Newton-based methods such as GGA are not able to capture a global optimum in these situations. On the other hand, evolutionary methods designed to use a population of individuals may find a global solution even for ...
This thesis is concerned with the study of algorithms for approximately solving large-scale linear and nonsmooth convex minimization problems within a prescribed relative error δ of the optimum. The methods we propose converge in O(1/δ^2) or O(1/δ) iterations of a first-order type. While the theoretical lower iteration bound for approximately solving (in the absolute sense) nonsmooth convex min...
In this paper, a one-layer recurrent projection neural network is proposed for solving pseudoconvex optimization problems with general convex constraints. The proposed network splits the constraints into two parts, which gives the network a simpler structure and better properties. By a Tikhonov-like regularization method, the proposed network does not need to estimate the exact penal...
In this paper we characterize nonsmooth convex vector functions by first and second order generalized derivatives. We also prove optimality conditions for convex vector problems involving nonsmooth data.
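In the scalar convex case, the first-order condition that such generalized derivatives extend to vector problems is the classical subdifferential criterion (stated here for context; the vector-valued versions in the paper are not reproduced in the snippet):

```latex
x^\star \in \operatorname*{arg\,min}_{x \in \mathbb{R}^n} f(x)
\quad \Longleftrightarrow \quad
0 \in \partial f(x^\star),
```

where ∂f(x⋆) denotes the subdifferential of the convex function f at x⋆, i.e. the set of vectors g with f(y) ≥ f(x⋆) + ⟨g, y − x⋆⟩ for all y.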
In this paper, we investigate a steepest descent neural network for solving general nonsmooth convex optimization problems. Convergence to the optimal solution set is proved analytically. We apply the method to several numerical tests, which confirm the effectiveness of the theoretical results and the performance of the proposed neural network.
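The paper studies a (continuous-time) neural-network dynamics; its discrete-time analogue is the classical subgradient method with diminishing step sizes. A minimal sketch on a toy piecewise-linear objective — the objective, names, and step rule below are illustrative assumptions, not the paper's network:

```python
import numpy as np

def subgradient_descent(f, subgrad, x0, n_iters=5000):
    # Classical subgradient method with steps 1/sqrt(t); since f need not
    # decrease monotonically along subgradient steps, track the best iterate.
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for t in range(1, n_iters + 1):
        x = x - subgrad(x) / np.sqrt(t)
        fx = f(x)
        if fx < best_f:
            best_f, best_x = fx, x.copy()
    return best_x, best_f

# Toy nonsmooth convex objective f(x) = |x_1 - 1| + |x_2 + 2|,
# minimized at (1, -2); np.sign gives a valid subgradient (0 at kinks).
c = np.array([1.0, -2.0])
f = lambda x: np.abs(x - c).sum()
subgrad = lambda x: np.sign(x - c)
best_x, best_f = subgradient_descent(f, subgrad, np.array([5.0, 5.0]))
```

The diminishing, non-summable step sizes 1/√t are what guarantee convergence of the best iterate to the optimal set for nonsmooth convex f, mirroring the convergence-to-solution-set statement in the snippet.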