Search results for: non convex optimization
Number of results: 1,637,507
In this paper, first we study the weak and strong convergence of solutions to the following first-order nonhomogeneous gradient system $$\begin{cases}-x'(t)=\nabla\phi(x(t))+f(t), & \text{a.e. on } (0,\infty)\\ x(0)=x_0\in H\end{cases}$$ to a critical point of $\phi$, where $\phi$ is a $C^1$ quasi-convex function on a real Hilbert space $H$ with ${\rm Argmin}\,\phi\neq\varnothing$ and $f\in L^1(0...
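A minimal numerical sketch of such a gradient system, assuming an illustrative choice of $\phi(x)=\tfrac12\|x\|^2$ (convex, hence quasi-convex) and an integrable forcing term $f$; the explicit Euler discretization below is only meant to show how trajectories of $-x'(t)=\nabla\phi(x(t))+f(t)$ drift toward a critical point of $\phi$, not the paper's convergence analysis.

```python
import numpy as np

def grad_phi(x):
    # Assumed example: phi(x) = 0.5*||x||^2, so grad phi(x) = x.
    return x

def f(t):
    # Assumed forcing term in L^1(0, inf): exponentially decaying.
    return np.exp(-t) * np.array([1.0, -0.5])

def integrate_gradient_system(x0, h=1e-2, T=50.0):
    """Explicit Euler discretization of -x'(t) = grad_phi(x(t)) + f(t), x(0) = x0."""
    x = np.array(x0, dtype=float)
    t = 0.0
    while t < T:
        x = x - h * (grad_phi(x) + f(t))
        t += h
    return x  # approaches the critical point of phi (here, the origin)

print(integrate_gradient_system([2.0, 3.0]))
```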
This report investigates and examines the greedy alternating optimization procedures used to solve the non-convex optimization problem underlying our Factorized High-order Interactions Model (FHIM).
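The following sketch only illustrates the general greedy alternating pattern on a hypothetical low-rank factorization objective (alternating least squares); the FHIM objective and its actual block updates are different and are not reproduced here.

```python
import numpy as np

def alternating_ls(M, rank=2, iters=50):
    """Alternating least squares for M ~ U @ V.T: fix one factor, solve for the other.

    Each subproblem is a convex least-squares problem even though the joint
    factorization problem is non-convex.
    """
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    for _ in range(iters):
        U = np.linalg.lstsq(V, M.T, rcond=None)[0].T   # update U with V fixed
        V = np.linalg.lstsq(U, M, rcond=None)[0].T     # update V with U fixed
    return U, V

M = np.outer(np.arange(1.0, 7.0), np.arange(1.0, 5.0))  # a rank-1 test matrix
U, V = alternating_ls(M, rank=1)
print(np.round(U @ V.T - M, 3))                          # residual near zero
```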
The author writes in the preface: “Discrete Convex Analysis is aimed at establishing a novel theoretical framework for solvable discrete optimization problems by means of a combination of the ideas in continuous optimization and combinatorial optimization.” Thus the reader may conclude that the book presents a new theory (the name “discrete convex analysis” was, apparently, coined by the author...
This paper presents optimization techniques for solving convex programming problems with hybrid constraints. Based on the saddle point theorem, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle, a neural network model is constructed. The equilibrium point of the proposed model is proved to be equivalent to the optima...
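A minimal sketch of the underlying idea, assuming the simplest setting of an equality-constrained convex program: a primal-dual gradient flow whose equilibrium is a KKT (saddle) point. The paper's network handles more general hybrid constraints, so this is only illustrative.

```python
import numpy as np

# Solve min 0.5*||x||^2 subject to x1 + x2 = 1 via a primal-dual flow whose
# equilibrium is the saddle point of the Lagrangian (assumed toy problem).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
grad_f = lambda x: x            # assumed objective f(x) = 0.5*||x||^2

x = np.zeros(2)
lam = np.zeros(1)
h = 1e-2
for _ in range(20000):
    x = x - h * (grad_f(x) + A.T @ lam)   # primal descent step
    lam = lam + h * (A @ x - b)           # dual ascent step
print(x)   # converges to [0.5, 0.5], the constrained optimum
```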
Node cooperation can protect wireless networks from eavesdropping by using the physical characteristics of wireless channels rather than cryptographic methods. Allocating the proper amount of power to cooperative nodes is a challenging task. In this paper, we use three cooperative nodes, one as relay to increase throughput at the destination and two friendly jammers to degrade the eavesdropper's...
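A toy sketch of the power-allocation trade-off described above: a brute-force grid search over relay and jammer powers maximizing a secrecy rate $[\log_2(1+\mathrm{SNR}_D)-\log_2(1+\mathrm{SNR}_E)]^+$ under a total power budget. The channel gains, the assumption that jamming is nulled at the destination, and the grid search itself are illustrative assumptions, not the paper's system model or its optimization method.

```python
import numpy as np

g_rd, g_re = 1.0, 0.6      # relay -> destination / eavesdropper gains (assumed)
g_j1e, g_j2e = 0.8, 0.7    # jammer -> eavesdropper gains (assumed)
noise, P_total = 1.0, 10.0

best = (-np.inf, None)
for p_r in np.linspace(0, P_total, 101):
    for p_j1 in np.linspace(0, P_total - p_r, 101):
        p_j2 = P_total - p_r - p_j1
        snr_d = g_rd * p_r / noise
        snr_e = g_re * p_r / (noise + g_j1e * p_j1 + g_j2e * p_j2)
        rate = max(0.0, np.log2(1 + snr_d) - np.log2(1 + snr_e))
        if rate > best[0]:
            best = (rate, (p_r, p_j1, p_j2))
print(best)   # best secrecy rate and the (relay, jammer1, jammer2) power split
```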
The application of classical optimization techniques to Graphical Models has led to specialized derivations of powerful paradigms such as the class of EM algorithms, variational inference, max-margin and maximum entropy learning. This view has also sustained a conceptual bridge between the research communities of Graphical Models, Statistical Physics and Numerical Optimization. The optimization...
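As a concrete instance of the EM paradigm mentioned above, here is a compact EM loop for a two-component 1-D Gaussian mixture; the data and initialization are illustrative, and the example is not taken from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 300)])

pi, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: posterior responsibilities under the current parameters.
    dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: closed-form maximization of the expected complete-data log-likelihood.
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
print(pi, mu, var)   # recovers mixing weights near (0.4, 0.6) and means near (-2, 3)
```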
In this paper we introduce a discrete-time, distributed optimization algorithm executed by a set of agents whose interactions are subject to a communication graph. The algorithm can be applied to optimization problems where the cost function is expressed as a sum of functions, and where each function is associated with an agent. In addition, the agents can be subject to equality constraints. The a...
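A minimal consensus-plus-gradient sketch of this class of methods, assuming a ring of four agents with simple quadratic costs; the paper's algorithm additionally handles per-agent equality constraints, which this sketch omits.

```python
import numpy as np

targets = np.array([1.0, 2.0, 3.0, 4.0])          # f_i(x) = 0.5*(x - targets[i])**2
W = np.array([[0.5, 0.25, 0.0, 0.25],             # doubly stochastic ring weights
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
x = np.zeros(4)                                    # each agent keeps its own estimate
for k in range(2000):
    alpha = 1.0 / (k + 2)                          # diminishing step size
    x = W @ x - alpha * (x - targets)              # mix with neighbours, then descend
print(x)   # agents approach consensus at the global minimizer, mean(targets) = 2.5
```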
Online optimization has been a successful framework for solving large-scale problems under computational constraints and partial information. Current methods for online convex optimization require either a projection or exact gradient computation at each step, both of which can be prohibitively expensive for large-scale applications. At the same time, there is a growing trend of non-convex opti...
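To make the per-step projection cost concrete, here is a standard online gradient descent sketch with a Euclidean-ball projection at every round; the linear losses and the feasible-set radius are illustrative, and avoiding exactly this projection (or the exact gradient) is what the passage refers to.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto the ball of the given radius.
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

d, T = 5, 1000
rng = np.random.default_rng(0)
x = np.zeros(d)
total_loss = 0.0
for t in range(1, T + 1):
    c = rng.standard_normal(d)          # round-t linear loss f_t(x) = c . x
    total_loss += c @ x
    eta = 1.0 / np.sqrt(t)              # standard O(sqrt(T))-regret step size
    x = project_ball(x - eta * c)       # gradient step followed by projection
print(total_loss)
```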
The rise of deep learning in recent years has brought with it increasingly clever optimization methods to deal with complex, non-linear loss functions [13]. These methods are often designed with convex optimization in mind, but have been shown to work well in practice even for the highly non-convex optimization associated with neural networks. However, one significant drawback of these methods ...
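As one example of such a method, here is a self-contained Adam-style update applied to a small non-convex toy loss; this is a generic illustration of the optimizer family, not the specific method or reference in the snippet.

```python
import numpy as np

def grad(w):
    # Gradient of the non-convex toy loss f(w) = sum(sin(w) + 0.1*w**2).
    return np.cos(w) + 0.2 * w

w = np.array([3.0, -2.0])
m = np.zeros_like(w); v = np.zeros_like(w)
lr, b1, b2, eps = 1e-2, 0.9, 0.999, 1e-8
for t in range(1, 5001):
    g = grad(w)
    m = b1 * m + (1 - b1) * g            # first-moment (momentum) estimate
    v = b2 * v + (1 - b2) * g * g        # second-moment estimate
    m_hat = m / (1 - b1 ** t)            # bias correction
    v_hat = v / (1 - b2 ** t)
    w -= lr * m_hat / (np.sqrt(v_hat) + eps)
print(w)   # settles near a local minimum of the non-convex loss
```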
We propose a new majorization-minimization (MM) method for non-smooth and non-convex programs, which is general enough to include the existing MM methods. Besides the local majorization condition, we only require that the difference between the directional derivatives of the objective function and its surrogate function vanishes when the number of iterations approaches infinity, which is a very...
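The sketch below only illustrates the basic majorize-then-minimize pattern on a non-smooth penalty: $\lambda|x_i|$ is majorized at the current iterate by the quadratic $\lambda\bigl(x_i^2/(2|x_i^{(k)}|)+|x_i^{(k)}|/2\bigr)$ and the smooth surrogate is minimized in closed form (an IRLS step). The problem instance is assumed for illustration; the paper's relaxed directional-derivative condition covers far more general non-convex surrogates.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
xs = np.zeros(10); xs[:3] = [2.0, -1.5, 1.0]        # sparse ground truth (assumed)
b = A @ xs + 0.05 * rng.standard_normal(50)
lam, eps = 1.0, 1e-8

x = np.ones(10)
for _ in range(200):
    w = lam / (2 * (np.abs(x) + eps))               # curvatures of the quadratic majorizer
    # Minimize the surrogate ||A x - b||^2 + sum_i w_i * x_i^2 in closed form.
    x = np.linalg.solve(A.T @ A + np.diag(w), A.T @ b)
print(np.round(x, 3))   # approximately recovers the sparse coefficients
```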