Search results for: nonsmooth convex optimization problem

Number of results: 1,134,849

2017
Shanshan Chen Hongwei Du Linna Wu Jiaquan Jin Bensheng Qiu

BACKGROUND The challenge of reconstructing a sparse medical magnetic resonance image from undersampled k-space data, based on compressed sensing, has been investigated in recent years. As total variation (TV) performs well in preserving edges, one class of approaches uses TV regularization as the sparsity-promoting structure and solves a convex optimization problem. Nevertheless, this convex optimization pr...
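A typical instance of the TV-regularized reconstruction problem referred to above can be written as follows (the measurement operator A, k-space data b, and weight \lambda are illustrative assumptions, not taken from the abstract):

    \min_x \; \tfrac{1}{2}\,\|A x - b\|_2^2 + \lambda\,\mathrm{TV}(x), \qquad \mathrm{TV}(x) = \sum_i \|(\nabla x)_i\|_2,

where A is the undersampled Fourier sampling operator, b the measured k-space data, and TV(x) the (isotropic) total variation of the image x.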

2012
Jason D. Lee Yuekai Sun Michael A. Saunders

We seek to solve convex optimization problems in composite form: minimize_{x ∈ R^n} f(x) := g(x) + h(x), where g is convex and continuously differentiable and h : R^n → R is a convex but not necessarily differentiable function whose proximal mapping can be evaluated efficiently. We derive a generalization of Newton-type methods to handle such convex but nonsmooth objective functions. We prove such met...
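To make the Newton-type generalization concrete, a standard proximal Newton step for f = g + h (a textbook formulation consistent with the abstract, not a verbatim statement of the paper's method) uses the scaled proximal mapping

    \mathrm{prox}_h^{H}(y) = \arg\min_x \; h(x) + \tfrac{1}{2}(x - y)^{\top} H\,(x - y),

and iterates

    x_{k+1} = \mathrm{prox}_h^{H_k}\!\big(x_k - H_k^{-1}\nabla g(x_k)\big),

where H_k is the Hessian of g at x_k or a quasi-Newton approximation to it.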

2012
Kazufumi Ito

In this monograph we develop the function space method for optimization problems and operator equations in Banach spaces. Optimization is one of the key components of mathematical modeling of real-world problems, and the solution method provides an accurate and essential description and validation of the mathematical model. Optimization problems are encountered frequently in engineering and sci...

Journal: :CoRR 2012
Hua Ouyang Alexander G. Gray

In this work we consider the stochastic minimization of nonsmooth convex loss functions, a central problem in machine learning. We propose a novel algorithm called Accelerated Nonsmooth Stochastic Gradient Descent (ANSGD), which exploits the structure of common nonsmooth loss functions to achieve optimal convergence rates for a class of problems including SVMs. It is the first stochastic algori...
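The abstract does not spell out ANSGD itself; as context for the problem class it targets, a minimal (non-accelerated) stochastic subgradient step for the nonsmooth hinge-loss SVM objective might look like the sketch below. All names and the diminishing step size are illustrative assumptions, not taken from the paper.

    import numpy as np

    def svm_stochastic_subgradient(X, y, lam=0.01, n_epochs=10):
        # Plain stochastic subgradient descent for the L2-regularized hinge loss
        #   min_w  lam/2 * ||w||^2 + (1/n) * sum_i max(0, 1 - y_i * <w, x_i>).
        # Baseline sketch only; NOT the accelerated ANSGD method from the paper.
        n, d = X.shape
        w = np.zeros(d)
        t = 0
        for _ in range(n_epochs):
            for i in np.random.permutation(n):
                t += 1
                eta = 1.0 / (lam * t)            # diminishing step size
                margin = y[i] * X[i].dot(w)
                grad = lam * w                   # subgradient of the regularizer
                if margin < 1:                   # hinge term is active here
                    grad = grad - y[i] * X[i]
                w -= eta * grad
        return w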

The analysis of flow in water-distribution networks with several pumps by the Content Model may be turned into a non-convex, uncertain optimization problem with multiple solutions. Newton-based methods such as GGA are not able to capture a global optimum in these situations. On the other hand, evolutionary methods, which operate on a population of individuals, may find a global solution even for ...

2007
Peter Richtárik

This thesis is concerned with the study of algorithms for approximately solving large-scale linear and nonsmooth convex minimization problems within a prescribed relative error δ of the optimum. The methods we propose converge in O(1/δ^2) or O(1/δ) iterations of a first-order type. While the theoretical lower iteration bound for approximately solving (in the absolute sense) nonsmooth convex min...
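For context on why rates of the form O(1/δ^2) arise (a standard complexity fact, not a result specific to the thesis): if f is convex with subgradients bounded by G and \|x_0 - x^*\| \le R, the projected subgradient method with suitable step sizes guarantees

    \min_{i \le k} f(x_i) - f^* \;\le\; \frac{G R}{\sqrt{k}},

so reaching an absolute accuracy ε requires on the order of (G R / ε)^2 iterations; the thesis studies the analogous question when accuracy is measured relative to the optimum.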

Journal: :Neurocomputing 2014
Qingfa Li Yaqiu Liu Liangkuan Zhu

In this paper, a one-layer recurrent projection neural network is proposed for solving pseudoconvex optimization problems with general convex constraints. The proposed network splits the constraints into two parts, which gives the network a simpler structure and better properties. By means of a Tikhonov-like regularization method, the proposed network does not need to estimate the exact penal...
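As background on how projection-type networks of this kind operate, the sketch below shows an illustrative continuous-time model and its Euler discretization; it is not the specific one-layer network proposed in the paper, and all names are assumptions.

    import numpy as np

    def projection_dynamics(grad_f, project, x0, step=1e-2, n_steps=5000):
        # Forward-Euler discretization of the projection dynamics
        #   dx/dt = project(x - grad_f(x)) - x,
        # a common continuous-time model behind projection neural networks.
        x = np.asarray(x0, dtype=float)
        for _ in range(n_steps):
            x = x + step * (project(x - grad_f(x)) - x)
        return x

    # Illustrative usage: minimize ||x - c||^2 over the box [0, 1]^2.
    c = np.array([1.5, -0.5])
    x_star = projection_dynamics(grad_f=lambda x: 2.0 * (x - c),
                                 project=lambda x: np.clip(x, 0.0, 1.0),
                                 x0=np.zeros(2))
    # x_star is approximately [1.0, 0.0], the projection of c onto the box.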

2004
CLAUDIO CUSANO MATTEO FINI DAVIDE LA TORRE

In this paper we characterize nonsmooth convex vector functions by first- and second-order generalized derivatives. We also prove optimality conditions for convex vector problems involving nonsmooth data.
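A standard first-order condition of the kind such results build on (stated here for the unconstrained case as background, not as the paper's theorem): for a convex vector function f = (f_1, \dots, f_m) on R^n, a point x^* is weakly efficient if and only if there exists λ ∈ R^m_+ with λ ≠ 0 such that

    0 \in \partial\Big(\sum_{i=1}^{m} λ_i f_i\Big)(x^*).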

Journal: :J. Optimization Theory and Applications 2013
Alireza Hosseini Seyed Mohammad Hosseini

In this paper, we investigate a steepest descent neural network for solving general nonsmooth convex optimization problems. Convergence to the optimal solution set is proved analytically. We apply the method to several numerical tests, which confirm the theoretical results and demonstrate the performance of the proposed neural network.
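Such steepest descent networks for a nonsmooth convex objective f are commonly modeled by the differential inclusion (the generic model class, not necessarily the exact dynamics analyzed in the paper)

    \dot{x}(t) \in -\partial f(x(t)), \qquad x(0) = x_0,

whose trajectories decrease f along time and, under standard assumptions, converge to the set of minimizers \{x : 0 \in \partial f(x)\}.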

Chart of the number of search results per year
