Search results for: non convex optimization
Number of results: 1,637,507
We propose a new view of active learning algorithms as optimization. We show that many online active learning algorithms can be viewed as stochastic gradient descent on non-convex objective functions. Variations of some of these algorithms and objective functions have been previously proposed without noting this connection. We also point out a connection between the standard min-margin offline ...
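As a hedged illustration of the connection this abstract describes (the ramp loss, margin rule, and step size below are illustrative assumptions, not the paper's construction), a margin-based online active learner's query-and-update step can be read as an SGD step on a non-convex clipped loss:

```python
import numpy as np

# Sketch: an online margin-based active learner whose label-query-and-update
# step is literally a stochastic (sub)gradient step on a non-convex "ramp"
# loss. Everything here is an illustrative assumption.

def ramp_loss_grad(w, x, y, margin=1.0):
    """A subgradient of the clipped loss min(1, max(0, 1 - y*(w@x)/margin)).
    Non-convex: flat at 1 for badly misclassified points, flat at 0 outside
    the margin, sloped in between."""
    score = y * np.dot(w, x)
    if 0.0 <= score < margin:            # sloped region (subgradient at the score=0 kink)
        return -(y / margin) * x
    return np.zeros_like(w)

def active_sgd(stream, dim, lr=0.1, margin=1.0):
    w = np.zeros(dim)
    queries = 0
    for x, y in stream:
        if abs(np.dot(w, x)) < margin:   # query the label only near the boundary
            queries += 1
            w -= lr * ramp_loss_grad(w, x, y, margin)
    return w, queries

# Toy usage: a linearly separable 2-D stream.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = np.sign(X @ np.array([1.0, -0.5]))
w, q = active_sgd(zip(X, y), dim=2)
print(f"queried {q}/500 labels, w = {w}")
```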
Many challenging problems in automatic control may be cast as optimization programs subject to matrix inequality constraints. Here we investigate an approach which converts such problems into non-convex eigenvalue optimization programs and makes them amenable to non-smooth analysis techniques like bundle or cutting plane methods. We prove global convergence of a first-order bundle method for pr...
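For context, a standard penalty reformulation of this kind (assumed here for illustration; the paper's exact construction is truncated above) converts a matrix inequality constraint into a non-smooth eigenvalue term:

```latex
% Assumed penalty reformulation (illustrative): a matrix-inequality-constrained
% program becomes a non-smooth eigenvalue program via the maximum-eigenvalue
% function.
\[
\min_{x}\; f(x) \quad \text{s.t.} \quad F(x) \preceq 0
\qquad \Longrightarrow \qquad
\min_{x}\; f(x) + \rho\, \max\{0,\; \lambda_{\max}(F(x))\},
\]
% Here F(x) is a symmetric-matrix-valued map and \rho > 0 a penalty weight.
% \lambda_{\max}(F(\cdot)) is non-smooth, and non-convex whenever F depends
% nonlinearly on x -- exactly the setting for bundle / cutting-plane methods.
```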
Mobile cloud computing (MCC) is a technology developed to overcome the resource restrictions of smart mobile devices (e.g. battery, processing power, storage capacity) by offloading the computation-intensive parts of a program to a cloud server (CS). In this paper, we study a multi-cell, multi-input multi-output (MIMO) system in which the cell-interior users request service...
The graduated optimization approach, also known as the continuation method, is a popular heuristic for solving non-convex problems that has received renewed interest over the last decade. Despite its popularity, very little is known about its theoretical convergence. In this paper we describe a new first-order algorithm based on graduated optimization and analyze its performance. W...
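A minimal sketch of the generic graduated-optimization template (the Monte-Carlo Gaussian smoothing, schedule, and step sizes are assumptions for illustration, not the paper's algorithm): minimize a sequence of smoothed versions of a non-convex f, warm-starting each stage from the previous one.

```python
import numpy as np

# Sketch of graduated optimization / continuation: descend on increasingly
# less-smoothed versions of f, warm-starting each stage. The score-function
# gradient estimator and all parameters are illustrative assumptions.

def smoothed_grad(f, x, sigma, rng, n_samples=128):
    """Score-function estimate of the gradient of E_u[f(x + sigma*u)], u ~ N(0, I)."""
    u = rng.normal(size=(n_samples, x.size))
    fx = np.array([f(x + sigma * ui) for ui in u])
    return (fx[:, None] * u).mean(axis=0) / sigma

def graduated_descent(f, x0, sigmas=(2.0, 1.0, 0.5, 0.1), lr=0.05, steps=200):
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    for sigma in sigmas:                       # coarse-to-fine smoothing schedule
        for _ in range(steps):
            x -= lr * smoothed_grad(f, x, sigma, rng)
    return x

# Toy usage: a 1-D non-convex function with spurious local minima.
f = lambda z: float(z[0]**4 - 3*z[0]**2 + 0.5*np.sin(8*z[0]))
print(graduated_descent(f, x0=[1.8]))
```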
We consider robust optimization problems, where the goal is to optimize in the worst case over a class of objective functions. We develop a reduction from robust improper optimization to Bayesian optimization: given an oracle that returns α-approximate solutions for distributions over objectives, we compute a distribution over solutions that is α-approximate in the worst case. We show that deran...
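A hedged sketch of such a reduction over a finite family of objectives (the multiplicative-weights schedule and oracle interface are assumptions, not the paper's exact algorithm): feed distributions over objectives to the Bayesian oracle, up-weight objectives on which the returned solution does badly, and play the uniform mixture of the returned solutions.

```python
import numpy as np

# Hedged sketch of a robust-to-Bayesian reduction: for losses in [0, 1],
# the uniform mixture of the oracle's answers is approximately worst-case
# optimal (up to the usual regret term). All details are assumptions.

def robust_via_bayesian(losses, oracle, T=200, eta=0.1):
    """losses: functions x -> [0, 1]; oracle(p) -> (approximate) minimizer
    of the p-weighted average loss."""
    w = np.ones(len(losses))
    solutions = []
    for _ in range(T):
        p = w / w.sum()
        x = oracle(p)                      # Bayesian oracle on the mixture p
        solutions.append(x)
        ell = np.array([L(x) for L in losses])
        w *= np.exp(eta * ell)             # up-weight objectives where x is bad
    return solutions                       # play the uniform distribution over these

# Toy usage: two clipped quadratics on the line; the oracle minimizes the
# (unclipped) weighted mixture in closed form.
losses = [lambda x: min(1.0, (x - 1)**2), lambda x: min(1.0, (x + 1)**2)]
oracle = lambda p: p[0] - p[1]             # argmin of p0*(x-1)^2 + p1*(x+1)^2
sols = robust_via_bayesian(losses, oracle)
print("worst-case average loss:", max(np.mean([L(x) for x in sols]) for L in losses))
```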
In this note, we address the theoretical properties of ∆p, a class of compressed sensing decoders that rely on ℓp minimization with p ∈ (0, 1) to recover estimates of sparse and compressible signals from incomplete and inaccurate measurements. In particular, we extend the results of Candès, Romberg and Tao [3] and Wojtaszczyk [30] regarding the decoder ∆1, based on ℓ1 minimization, to ∆p wi...
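For reference, the usual noise-aware formulation of these decoders (assuming the standard setup with measurement matrix A and measurements y = Ax + e, ‖e‖₂ ≤ ε) is:

```latex
% Standard noise-aware formulation; for p in (0,1) the objective is
% non-convex, and p = 1 recovers the convex basis-pursuit decoder Delta_1.
\[
\Delta_p(y) \;=\; \operatorname*{arg\,min}_{x \in \mathbb{R}^N} \;\|x\|_p^p
\quad \text{subject to} \quad \|Ax - y\|_2 \le \epsilon,
\qquad \|x\|_p^p = \sum_{i=1}^{N} |x_i|^p .
\]
```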
This paper considers the distributed optimization of a sum of locally observable, non-convex functions. The optimization is performed over a multi-agent networked system, and each local function depends only on a subset of the variables. An asynchronous and distributed alternating direction method of multipliers (ADMM) that allows the nodes to defer or skip the computation and transmissi...
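For orientation, a minimal synchronous consensus-ADMM sketch for minimizing a sum of local functions (the asynchronous variant described above additionally lets nodes defer or skip these steps; the quadratic local objectives and parameters here are illustrative assumptions):

```python
import numpy as np

# Consensus ADMM sketch for min_x sum_i f_i(x). Each local objective is
# f_i(x) = 0.5*||x - a_i||^2 so the x-update has a closed form; everything
# here is an illustrative assumption, not the paper's method.

def consensus_admm(a, rho=1.0, iters=100):
    n, d = a.shape
    u = np.zeros((n, d))        # scaled dual variables, one per node
    z = np.zeros(d)             # global consensus variable
    for _ in range(iters):
        # local x-update: argmin_x f_i(x) + (rho/2)*||x - z + u_i||^2
        x = (a + rho * (z - u)) / (1.0 + rho)
        # global averaging step
        z = (x + u).mean(axis=0)
        # dual ascent on the consensus constraint x_i = z
        u += x - z
    return z

# Toy usage: the consensus solution of sum_i 0.5*||x - a_i||^2 is the mean.
a = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, -1.0]])
print(consensus_admm(a), "vs mean", a.mean(axis=0))
```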
Stochastic gradient descent (SGD) and its variants have attracted much attention in machine learning due to their efficiency and effectiveness for optimization. To handle large-scale problems, researchers have recently proposed several lock-free parallel SGD (LF-PSGD) methods for multi-core systems. However, existing works have only proved the convergence of these LF-PSGD methods ...
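A hedged Hogwild!-style sketch of the lock-free idea behind such methods (the least-squares problem, step size, and thread count are illustrative assumptions; this is not any specific LF-PSGD method from the paper):

```python
import threading
import numpy as np

# Lock-free parallel SGD sketch: threads read and write shared weights with
# no locks, relying on mostly non-conflicting updates. Note that CPython's
# GIL serializes bytecode, so this only illustrates the access pattern.

def worker(w, X, y, idx, lr=0.01, epochs=5):
    rng = np.random.default_rng(int(idx[0]))
    for _ in range(epochs):
        for i in rng.permutation(idx):
            grad = (w @ X[i] - y[i]) * X[i]   # single-sample least-squares gradient
            w -= lr * grad                    # unsynchronized in-place update

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
true_w = np.arange(10, dtype=float)
y = X @ true_w
w = np.zeros(10)

chunks = np.array_split(np.arange(len(X)), 4)   # one index block per thread
threads = [threading.Thread(target=worker, args=(w, X, y, c)) for c in chunks]
for t in threads: t.start()
for t in threads: t.join()
print("distance to true_w:", np.linalg.norm(w - true_w))
```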