Learning in high dimensions: modular mixture models

Author

  • Hagai Attias
Abstract

We present a new approach to learning probabilistic models for high dimensional data. This approach divides the data dimensions into low dimensional subspaces, and learns a separate mixture model for each subspace. The models combine in a principled manner to form a flexible modular network that produces a total density estimate. We derive and demonstrate an iterative learning algorithm that uses only local information.
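The abstract's recipe (split the dimensions into low dimensional subspaces, fit a mixture model per subspace, combine into a total density estimate) can be sketched as follows. This is a deliberately simplified illustration, not the paper's method: it treats each subspace as a single dimension, fits a 1-D Gaussian mixture by EM, and combines the subspaces as a plain product of densities, i.e. it assumes the subspaces are independent, whereas the paper combines them in a more principled manner.

```python
import math
import random

def gauss_logpdf(x, mu, var):
    """Log density of a 1-D Gaussian."""
    return -0.5 * (math.log(2.0 * math.pi * var) + (x - mu) ** 2 / var)

def fit_gmm_1d(xs, k=2, iters=50, seed=0):
    """EM for a k-component 1-D Gaussian mixture (one low-dimensional subspace)."""
    rng = random.Random(seed)
    mus = rng.sample(xs, k)            # initialize means at random data points
    variances = [1.0] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in xs:
            logp = [math.log(weights[j]) + gauss_logpdf(x, mus[j], variances[j])
                    for j in range(k)]
            m = max(logp)
            unnorm = [math.exp(lp - m) for lp in logp]
            z = sum(unnorm)
            resp.append([u / z for u in unnorm])
        # M-step: re-estimate weights, means, variances from responsibilities.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            weights[j] = nj / len(xs)
            mus[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            variances[j] = max(
                sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, xs)) / nj, 1e-6)
    return weights, mus, variances

def subspace_logdensity(x, model):
    """Log density of one subspace's mixture, via log-sum-exp over components."""
    weights, mus, variances = model
    logp = [math.log(w) + gauss_logpdf(x, mu, v)
            for w, mu, v in zip(weights, mus, variances)]
    m = max(logp)
    return m + math.log(sum(math.exp(lp - m) for lp in logp))

def total_logdensity(point, models):
    """Combine the per-subspace mixtures; here simply as a product of densities."""
    return sum(subspace_logdensity(x, m) for x, m in zip(point, models))
```

For example, fitting one model per coordinate of two-dimensional data clustered around (±3, ±3) yields a total density that is high at the cluster centers and low far away from them.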


Similar resources

BYY harmony learning, structural RPCL, and topological self-organizing on mixture models

The Bayesian Ying-Yang (BYY) harmony learning acts as a general statistical learning framework, featuring not only new regularization techniques for parameter learning but also a new mechanism that implements model selection either automatically during parameter learning or via a new class of model selection criteria applied after parameter learning. In this paper, further advances on BYY harmon...


Scalable and Incremental Learning of Gaussian Mixture Models

This work presents a fast and scalable algorithm for incremental learning of Gaussian mixture models. By performing rank-one updates on its precision matrices and determinants, its asymptotic time complexity is O(NKD) for N data points, K Gaussian components, and D dimensions. The resulting algorithm can be applied to high dimensional tasks, and this is confirmed by applying it to the clas...
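The rank-one bookkeeping this abstract describes rests on two standard identities: the Sherman-Morrison formula, which updates an inverse (precision) matrix after a rank-one change in O(D^2) rather than O(D^3), and the matrix determinant lemma, which updates the determinant at the same cost. A generic sketch of both, using plain nested lists for matrices; the function names and representation are illustrative, not the paper's implementation:

```python
def sherman_morrison(P, u, w):
    """Given P = C^{-1}, return (C + w * u u^T)^{-1} in O(D^2).

    Sherman-Morrison: (C + w u u^T)^{-1} = P - w (P u)(P u)^T / (1 + w u^T P u),
    using the symmetry of P so that u^T P = (P u)^T.
    """
    d = len(u)
    Pu = [sum(P[i][j] * u[j] for j in range(d)) for i in range(d)]
    denom = 1.0 + w * sum(ui * pi for ui, pi in zip(u, Pu))
    return [[P[i][j] - w * Pu[i] * Pu[j] / denom for j in range(d)]
            for i in range(d)]

def det_update(det_C, P, u, w):
    """Matrix determinant lemma: det(C + w u u^T) = det(C) * (1 + w u^T C^{-1} u)."""
    d = len(u)
    Pu = [sum(P[i][j] * u[j] for j in range(d)) for i in range(d)]
    return det_C * (1.0 + w * sum(ui * pi for ui, pi in zip(u, Pu)))
```

For instance, starting from the identity covariance (P = I, det = 1) and adding u u^T with u = (1, 0) gives the inverse of diag(2, 1) and determinant 2, without ever re-inverting the matrix.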


Model of Organization Learning in Islamic Azad University

This study aims to present a learning-organization model for Islamic Azad University. It is applied in purpose and quantitative in implementation. In the first step of the research, after analyzing the information using inductive content analysis, 15 components were identified and categorized into 5 dimensions of learning levels, systematic thinking, shared vis...


A scalable parallel algorithm for training a hierarchical mixture of neural experts

Efficient parallel learning algorithms are proposed for training a powerful modular neural network, the hierarchical mixture of experts (HME). Parallelizations are based on the concept of modular parallelism, i.e. parallel execution of network modules. From modeling the speedup as a function of the number of processors and the number of training examples, several improvements are derived, such ...


Generalized associative mixture of experts

Modular learning, inspired by divide and conquer, learns a large number of simple localized concepts (classifiers or function approximators) rather than a single complex global concept. As a result, modular learning systems are efficient in learning and effective in generalization. In this work, a general model for modular learning systems is proposed whereby specialization and localization is induce...
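The gated, softly partitioned prediction that mixture-of-experts architectures share can be sketched generically: a softmax gate weights each expert's output by how responsible that expert is for the current input. The linear gate and function names below are illustrative assumptions, not this paper's exact formulation:

```python
import math

def softmax(zs):
    """Numerically stable softmax over a list of scores."""
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_predict(x, experts, gate_weights):
    """Soft combination of expert outputs: y(x) = sum_k g_k(x) * f_k(x),
    where the gate g is a softmax over linear scores of the input x."""
    scores = [sum(w * xi for w, xi in zip(wk, x)) for wk in gate_weights]
    gate = softmax(scores)
    return sum(g * f(x) for g, f in zip(gate, experts))
```

With two experts and opposing gate weights, the gate routes positive inputs almost entirely to the first expert and negative inputs to the second, so each expert specializes in one region of the input space.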




Journal title:

Volume   Issue

Pages  -

Publication date: 2001