Asynchronously parallel optimization solver for finding multiple minima
Authors
Abstract
Similar Resources
Asynchronously Parallel Optimization Solver for Finding Multiple Minima
This paper proposes and analyzes an asynchronously parallel optimization algorithm for finding multiple, high-quality minima of nonlinear optimization problems. Our multistart algorithm considers all previously evaluated points when determining where to start or continue a local optimization run. Theoretical results show that, under certain assumptions, the algorithm almost surely starts a finit...
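The start rule described above can be illustrated with a minimal sketch. This is not the paper's solver: the descent routine, the start criterion (launch a local run from a sample only when no better evaluated point lies within a radius `r`), and all constants here are invented for illustration.

```python
import random

def local_descent(f, x, step=0.05, iters=200):
    """Crude derivative-free local descent: step left or right while it
    improves f, halving the step when neither direction helps."""
    for _ in range(iters):
        for cand in (x - step, x + step):
            if f(cand) < f(x):
                x = cand
                break
        else:
            step *= 0.5
    return x

def multistart(f, lo, hi, n_samples=200, radius=0.2, seed=0):
    """Illustrative multistart: start a local run from a sample only if no
    better evaluated point lies within `radius` of it, so each basin is
    explored by roughly one run.  Not the paper's exact criterion."""
    rng = random.Random(seed)
    samples = [rng.uniform(lo, hi) for _ in range(n_samples)]
    minima = []
    for x in samples:
        if any(abs(y - x) < radius and f(y) < f(x) for y in samples if y != x):
            continue  # a nearby evaluated point is better: skip this start
        m = local_descent(f, x)
        if not any(abs(m - known) < radius for known in minima):
            minima.append(m)  # keep only distinct minima
    return sorted(minima)

# Example: (x**2 - 1)**2 has two local minima, at x = -1 and x = +1
f = lambda x: (x * x - 1) ** 2
found = multistart(f, -2.0, 2.0)
```

The point of the start rule is efficiency: without it, every one of the 200 samples would launch its own local run, while with it only the locally best points do.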
Finding Approximate Local Minima for Nonconvex Optimization in Linear Time
We design a non-convex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time which is linear in the input representation. The previously fastest methods run in time proportional to matrix inversion or worse. The time complexity of our algorithm to find a local minimum is even faster than that of gradient descent to find a critical point (which can...
A Batch, Derivative-Free Algorithm for Finding Multiple Local Minima
We propose a derivative-free algorithm for finding high-quality local minima for functions that require significant computational resources to evaluate. Our algorithm efficiently utilizes the computational resources allocated to it and also has strong theoretical results, almost surely starting a finite number of local optimization runs and identifying all local minima. We propose metrics for m...
A Polylog-Time and O(n√(lg n))-Work Parallel Algorithm for Finding the Row Minima in Totally Monotone Matrices
We give a parallel algorithm for computing all row minima in a totally monotone n × n matrix which is simpler and more work efficient than previous polylog-time algorithms. It runs in O(lg n · lg lg n) time doing O(n√(lg n)) work on a CRCW PRAM, in O(lg n · (lg lg n)²) time doing O(n√(lg n)) work on a CREW PRAM, and in O(lg n · √(lg n · lg lg n)) time doing O(n√(lg n · lg lg n)) work on an EREW PRAM.
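The structure these parallel algorithms exploit is that, in a totally monotone matrix, the row-minima column indices are monotone down the rows. As a point of reference, here is a sketch of the standard sequential divide-and-conquer that uses the same property; it is not the paper's PRAM algorithm, and the example matrix is invented.

```python
def row_minima(M):
    """Column index of each row's minimum in a totally monotone matrix M.

    Total monotonicity implies the argmin column is nondecreasing down the
    rows, so after locating the middle row's minimum we can restrict the
    column range searched for the rows above and below it.  This sequential
    divide-and-conquer uses O((m + n) log m) comparisons."""
    m = len(M)
    out = [0] * m

    def solve(top, bot, lo, hi):
        if top > bot:
            return
        mid = (top + bot) // 2
        # scan only the allowed column range for the middle row's minimum
        j = min(range(lo, hi + 1), key=lambda c: M[mid][c])
        out[mid] = j
        solve(top, mid - 1, lo, j)   # rows above: argmin column <= j
        solve(mid + 1, bot, j, hi)   # rows below: argmin column >= j

    solve(0, m - 1, 0, len(M[0]) - 1)
    return out

# (j - i)**2 gives a Monge (hence totally monotone) matrix whose row-i
# minimum sits at column i
M = [[(j - i) ** 2 for j in range(6)] for i in range(4)]
idx = row_minima(M)
```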
Third-order Smoothness Helps: Even Faster Stochastic Optimization Algorithms for Finding Local Minima
We propose stochastic optimization algorithms that can find local minima faster than existing algorithms for nonconvex optimization problems, by exploiting the third-order smoothness to escape non-degenerate saddle points more efficiently. More specifically, the proposed algorithm only needs Õ(ε^(-10/3)) stochastic gradient evaluations to converge to an approximate local minimum x, which satisfies...
متن کاملذخیره در منابع من
با ذخیره ی این منبع در منابع من، دسترسی به آن را برای استفاده های بعدی آسان تر کنید
Journal
Journal title: Mathematical Programming Computation
Year: 2018
ISSN: 1867-2949,1867-2957
DOI: 10.1007/s12532-017-0131-4