Zeroth-Order Regularized Optimization (ZORO): Approximately Sparse Gradients and Adaptive Sampling
Authors
Abstract
We consider the problem of minimizing a high-dimensional objective function, which may include a regularization term, using only (possibly noisy) evaluations of the function. Such optimization is also called derivative-free, zeroth-order, or black-box optimization. We propose a new zeroth-order regularized optimization method, dubbed ZORO. When the underlying gradient is approximately sparse at an iterate, ZORO needs very few objective function evaluations to obtain a new iterate that decreases the objective function value. We achieve this with an adaptive, randomized gradient estimator, followed by an inexact proximal-gradient scheme. Under a novel approximately sparse gradient assumption and various different convex settings, we show that the (theoretical and empirical) convergence rate of ZORO is only logarithmically dependent on the problem dimension. Numerical experiments show that ZORO outperforms existing methods with similar assumptions, on both synthetic and real datasets.
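The pipeline the abstract describes (randomized finite-difference measurements of the gradient, sparse recovery, then an inexact proximal-gradient step) can be sketched as follows. This is a simplified illustration under stated assumptions, not the paper's exact algorithm: ZORO recovers the gradient with CoSaMP, whereas this sketch substitutes iterative hard thresholding plus a least-squares debiasing step, and all function names and parameter choices here are illustrative.

```python
import numpy as np

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

def sparse_grad_estimate(f, x, s, m, delta=1e-4, iters=100, seed=0):
    """Estimate an approximately s-sparse gradient of f at x.

    Takes m finite-difference measurements along random Rademacher
    directions, y_i ~= <z_i, grad f(x)>, then recovers a sparse vector
    by iterative hard thresholding (a stand-in for the CoSaMP solver
    used in the paper), followed by least-squares debiasing.
    """
    rng = np.random.default_rng(seed)
    d = x.size
    Z = rng.choice([-1.0, 1.0], size=(m, d))   # random measurement directions
    fx = f(x)
    y = np.array([(f(x + delta * z) - fx) / delta for z in Z])
    g = np.zeros(d)
    for _ in range(iters):                      # IHT: g <- H_s(g + Z^T r / m)
        g = hard_threshold(g + Z.T @ (y - Z @ g) / m, s)
    supp = np.flatnonzero(g)
    if supp.size > 0:                           # debias on the detected support
        g[supp] = np.linalg.lstsq(Z[:, supp], y, rcond=None)[0]
    return g

def zoro_step(f, x, step, lam, s, m):
    """One inexact proximal-gradient step for min_x f(x) + lam * ||x||_1."""
    g = sparse_grad_estimate(f, x, s, m)
    z = x - step * g
    return np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
```

Note the query budget: each step uses only m + 1 function evaluations, and sparse recovery lets m scale with the gradient sparsity s (up to log factors) rather than with the ambient dimension d, which is the source of the logarithmic dimension dependence claimed above.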
Similar resources
Stochastic Zeroth-order Optimization in High Dimensions
We consider the problem of optimizing a high-dimensional convex function using stochastic zeroth-order queries. Under sparsity assumptions on the gradients or function values, we present two algorithms: a successive component/feature selection algorithm and a noisy mirror descent algorithm using Lasso gradient estimates, and show that both algorithms have convergence rates that depend only loga...
On Zeroth-Order Stochastic Convex Optimization via Random Walks
We propose a method for zeroth-order stochastic convex optimization that attains the suboptimality rate of Õ(n^7 T^{-1/2}) after T queries for a convex bounded function f : R^n → R. The method is based on a random walk (the Ball Walk) on the epigraph of the function. The randomized approach circumvents the problem of gradient estimation, and appears to be less sensitive to noisy function evaluations c...
Adaptive Sampling for Sparse Recovery
Consider n data sequences, each consisting of independent and identically distributed elements drawn from one of the two possible zero-mean Gaussian distributions with variances A0 and A1. The problem of quickly identifying all of the sequences with variance A1 is considered, and an adaptive two-stage experimental design and testing procedure is proposed. The agility and reliability gains in comp...
Zeroth Order Nonconvex Multi-Agent Optimization over Networks
In this paper we consider distributed optimization problems over a multi-agent network, where each agent can only partially evaluate the objective function, and it is allowed to exchange messages with its immediate neighbors. Differently from all existing works on distributed optimization, our focus is given to optimizing a class of difficult non-convex problems, and under the challenging setti...
Measurement-adaptive Sparse Image Sampling and Recovery
This paper presents an adaptive and intelligent sparse model for digital image sampling and recovery. In the proposed sampler, we adaptively determine the number of required samples for retrieving image based on space-frequency-gradient information content of image patches. By leveraging texture in space, sparsity locations in DCT domain, and directional decomposition of gradients, the sampler ...
Journal
Journal title: SIAM Journal on Optimization
Year: 2022
ISSN: 1095-7189, 1052-6234
DOI: https://doi.org/10.1137/21m1392966