Generalized Majorization-Minimization
Authors
Abstract
Non-convex optimization is ubiquitous in machine learning. The Majorization-Minimization (MM) procedure systematically optimizes non-convex functions through an iterative construction and optimization of upper bounds on the objective function. The bound at each iteration is required to touch the objective function at the optimizer of the previous bound. We show that this touching constraint is unnecessary and overly restrictive. We generalize MM by relaxing this constraint, and propose a new framework for designing optimization algorithms, named Generalized Majorization-Minimization (G-MM). Compared to MM, G-MM is much more flexible. For instance, it can incorporate application-specific biases into the optimization procedure without changing the objective function. We derive G-MM algorithms for several latent variable models and show that they consistently outperform their MM counterparts in optimizing non-convex objectives. In particular, G-MM algorithms appear to be less sensitive to initialization.
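To make the construction concrete, here is a minimal sketch of a classical MM iteration in Python. The L-smooth one-dimensional toy objective and the quadratic upper bound are assumptions made for this sketch, not constructions from the paper; the G-MM relaxation is only indicated in the comments.

```python
import numpy as np

# Minimal sketch of a classical MM iteration on a smooth 1-D objective.
# Assumptions made here for illustration (not constructions from the paper):
# f is L-smooth, so the quadratic
#     g(x | x_t) = f(x_t) + f'(x_t) (x - x_t) + (L / 2) (x - x_t)**2
# upper-bounds f and touches it at x = x_t.  G-MM keeps the upper-bound
# requirement but drops the touching constraint; that relaxation is not
# implemented below.

def f(x):                      # a simple non-convex toy objective
    return np.log(1.0 + x**2) + 0.1 * x**2

def grad_f(x):
    return 2.0 * x / (1.0 + x**2) + 0.2 * x

L = 2.2                        # any L >= sup |f''| makes the bound valid here

def mm(x0, iters=100):
    x = x0
    for _ in range(iters):
        # Minimizing the quadratic majorizer in closed form gives this update;
        # since the bound touches f at x, f(x_new) <= g(x_new | x) <= f(x).
        x = x - grad_f(x) / L
    return x

x_star = mm(3.0)
print(x_star, f(x_star))
```

Each surrogate upper-bounds the objective and agrees with it at the current iterate, so the objective value never increases; G-MM retains the upper-bound property while letting the surrogate sit strictly above the objective at that point, which is what creates room for the application-specific biases mentioned above.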
Similar Resources
Majorization-minimization generalized Krylov subspace methods for lp-lq optimization applied to image restoration
A new majorization-minimization framework for lp-lq image restoration is presented. The solution is sought in a generalized Krylov subspace that is built up during the solution process. Proof of convergence to a stationary point of the minimized lp-lq functional is provided for both convex and nonconvex problems. Computed examples illustrate that high-quality restorations can be determined with...
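As a rough illustration of the kind of majorization this line of work builds on (not of the generalized Krylov subspace solver itself), the sketch below uses the standard weighted-quadratic bound on |r|^p to turn each lp-lq step into a weighted least-squares problem, solved with dense normal equations on a small synthetic problem; A, b, L, lam, p, and q are illustrative placeholders.

```python
import numpy as np

# Weighted-quadratic (IRLS-type) majorization often used for lp-lq problems:
# for 0 < p <= 2 and r_t != 0,
#     |r|**p <= (p / 2) * |r_t|**(p - 2) * r**2 + (1 - p / 2) * |r_t|**p,
# with equality at r = r_t.  Each MM step is then a weighted least-squares
# problem, solved here with dense normal equations (NOT the generalized
# Krylov subspace solver of the cited paper).

def lp_lq_mm(A, b, L, lam, p=1.0, q=0.5, iters=30, eps=1e-8):
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # least-squares start
    for _ in range(iters):
        wr = (np.abs(A @ x - b) + eps) ** (p - 2)   # fidelity weights
        wz = (np.abs(L @ x) + eps) ** (q - 2)       # regularizer weights
        H = p * A.T @ (wr[:, None] * A) + lam * q * L.T @ (wz[:, None] * L)
        x = np.linalg.solve(H, p * A.T @ (wr * b))  # minimize the majorizer
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [1.0, -2.0, 1.5]
b = A @ x_true + 0.05 * rng.standard_normal(40)
print(np.round(lp_lq_mm(A, b, np.eye(20), lam=1.0), 2))
```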
A Majorization-Minimization Algorithm for the Karcher Mean of Positive Definite Matrices
An algorithm for computing the Karcher mean of n positive definite matrices is proposed, based on the majorization-minimization (MM) principle. The proposed MM algorithm is parameter-free, requires no step-size selection, and has a theoretical guarantee of asymptotic linear convergence.
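For context, the Karcher mean of positive definite matrices A_1, ..., A_n minimizes F(X) = Σ_i ||log(X^{-1/2} A_i X^{-1/2})||_F^2. The sketch below runs the classical fixed-point iteration (Riemannian gradient descent with unit step) on random SPD matrices; it only makes the objective concrete and is not the MM algorithm proposed in the cited paper.

```python
import numpy as np
from scipy.linalg import expm, logm

# The Karcher (geometric) mean of SPD matrices A_1, ..., A_n minimizes
#     F(X) = sum_i || logm(X^{-1/2} A_i X^{-1/2}) ||_F^2.
# Below is the classical fixed-point iteration (Riemannian gradient descent
# with unit step), included only to make the objective concrete; it is NOT
# the MM algorithm of the cited paper.

def spd_power(X, p):
    """X**p for a symmetric positive definite matrix, via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * w**p) @ V.T

def karcher_mean(mats, iters=50):
    X = sum(mats) / len(mats)                 # arithmetic mean as a start
    for _ in range(iters):
        Xh, Xih = spd_power(X, 0.5), spd_power(X, -0.5)
        T = sum(logm(Xih @ A @ Xih) for A in mats) / len(mats)  # Riemannian gradient
        X = Xh @ expm(T) @ Xh
        X = (X + X.T) / 2.0                   # re-symmetrize for numerical safety
    return X

rng = np.random.default_rng(1)
mats = []
for _ in range(4):
    B = rng.standard_normal((3, 3))
    mats.append(B @ B.T + 3.0 * np.eye(3))    # random well-conditioned SPD matrices
print(np.round(karcher_mean(mats), 3))
```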
Majorization minimization by coordinate descent for concave penalized generalized linear models
Recent studies have demonstrated the theoretical attractiveness of a class of concave penalties in variable selection, including the smoothly clipped absolute deviation and minimax concave penalties. Computing the concave penalized solutions in high-dimensional models, however, is a difficult task. We propose a majorization minimization by coordinate descent (MMCD) algorithm for computing ...
Generalized Linear Model Regression under Distance-to-set Penalties
Estimation in generalized linear models (GLM) is complicated by the presence of constraints. One can handle constraints by maximizing a penalized log-likelihood. Penalties such as the lasso are effective in high dimensions, but often lead to unwanted shrinkage. This paper instead explores penalizing the squared distance to constraint sets. Distance penalties are more flexible than algebraic and...
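A toy version of the idea: with the distance majorization dist(β, C)^2 ≤ ||β − P_C(β_t)||^2 (equality at β_t), each MM step reduces to an unconstrained ridge-like solve. The sketch below assumes a squared-error loss and a box constraint purely for illustration; it is not the GLM machinery or the solvers of the cited paper.

```python
import numpy as np

# Distance majorization:  dist(beta, C)**2 <= ||beta - P_C(beta_t)||**2,
# with equality at beta = beta_t, so each MM step is an unconstrained
# ridge-like problem.  Squared-error loss and a box constraint are used
# purely for illustration; this is not the GLM machinery of the cited paper.

def project_box(beta, lo=0.0, hi=1.0):
    return np.clip(beta, lo, hi)              # Euclidean projection onto C

def distance_penalized_ls(A, b, rho=10.0, iters=100):
    d = A.shape[1]
    beta = np.zeros(d)
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(iters):
        anchor = project_box(beta)
        # MM step: minimize 0.5||A beta - b||^2 + (rho/2)||beta - anchor||^2
        beta = np.linalg.solve(AtA + rho * np.eye(d), Atb + rho * anchor)
    return beta

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 5))
b = A @ np.array([0.2, 0.9, -0.3, 1.4, 0.5]) + 0.05 * rng.standard_normal(50)
print(np.round(distance_penalized_ls(A, b), 3))   # pulled toward the box [0, 1]^5
```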
Crossing Minimization within Graph Embeddings
We propose a novel optimization-based approach to embedding heterogeneous high-dimensional data characterized by a graph. The goal is to create a two-dimensional visualization of the graph structure such that edge-crossings are minimized while preserving proximity relations between nodes. This paper provides a fundamentally new approach for addressing the crossing minimization criteria that exp...
Journal: CoRR
Volume: abs/1506.07613
Issue: -
Pages: -
Publication date: 2015