Differential learning and random walk model
Author
Abstract
This paper presents a learning algorithm for differential decorrelation, whose goal is to find a linear transform that minimizes the concurrent change of associated output nodes. First, the algorithm is derived by minimizing an objective function that measures differential correlation. We then show that the differential decorrelation learning algorithm can also be derived in the framework of maximum likelihood estimation of a linear generative model, by assuming a random walk model for the latent variables. The algorithm derivation and a local stability analysis are given, together with a simple numerical example.
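The idea in the abstract can be illustrated with a minimal numerical sketch. This is not the paper's actual algorithm: the objective J = ½‖offdiag(E[ẏẏᵀ])‖²_F, the two-channel random-walk sources, the mixing matrix, the step size, and the iteration count are all illustrative assumptions. A linear transform y = Wx is learned by gradient descent so that the temporal differences of the outputs become decorrelated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two latent sources following a random walk (the generative assumption above).
T = 5000
s = np.cumsum(rng.standard_normal((T, 2)), axis=0)

# Hypothetical mixing matrix producing correlated observations x = A s.
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
x = s @ A.T

dx = np.diff(x, axis=0)      # temporal differences of the observations
W = np.eye(2)                # linear transform to be learned, y = W x
eta = 0.01                   # illustrative step size


def offdiag_cov(d):
    """Off-diagonal part of the covariance of the (differenced) signals."""
    C = d.T @ d / len(d)
    return C - np.diag(np.diag(C))


# Gradient descent on J = 0.5 * ||offdiag(E[dy dy^T])||_F^2, where dy = W dx.
# With C = W Sigma W^T and Sigma = E[dx dx^T], the gradient w.r.t. W is
# 2 * offdiag(C) @ W @ Sigma.
Sigma = dx.T @ dx / len(dx)
for _ in range(200):
    G = offdiag_cov(dx @ W.T)
    W -= eta * 2.0 * G @ W @ Sigma

# Off-diagonal differential correlation of the outputs, driven toward zero.
print(np.round(offdiag_cov(dx @ W.T), 4))
```

Note that the objective only penalizes the off-diagonal differential covariance, so the scale of each output row of W is unconstrained; the paper's local stability analysis would address such degeneracies, which this sketch does not.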
Similar Resources
A Fuzzy Random Walk Technique to Forecasting Volatility of Iran Stock Exchange Index
The study of volatility has received attention from academics and decision makers during the last two decades. First, since volatility is a risk criterion, it has been used by many decision makers and participants in capital markets. Over the years it has grown in importance because of the effect of volatility on the stability of economies and capital markets for stocks, bonds, and foreign exchange mark...
New Results for Random Walk Learning
In a very strong positive result for passive learning algorithms, Bshouty et al. showed that DNF expressions are efficiently learnable in the uniform random walk model. It is natural to ask whether the more expressive class of thresholds of parities (TOP) can also be learned efficiently in this model, since both DNF and TOP are efficiently uniform-learnable from queries. However, the time bound...
Differential ICA
As an alternative to conventional Hebb-type unsupervised learning, differential learning has been studied in the domain of Hebb's rule [1] and decorrelation [2]. In this paper we present an ICA algorithm that employs differential learning, hence named differential ICA. We derive the differential ICA algorithm in the framework of maximum likelihood estimation and a random walk model. Algorithm der...
Testing Weak-Form Efficient Capital Market Case Study: TSE and DJUS Indices
The present study investigated weak-form informational efficiency in the Tehran Stock Exchange (TSE), an emerging market, and the Dow Jones United States index (DJUS), a developed market, based on the random walk model. In each market, the random walk model was examined using daily and monthly returns of a set of indices. The results of the parametric and non-parametric tests indic...
Robust Decentralized Differentially Private Stochastic Gradient Descent
Stochastic gradient descent (SGD) is one of the most widely applied machine learning algorithms in unreliable, large-scale decentralized environments. In this type of environment, data privacy is a fundamental concern. The most popular way to investigate this topic is within the framework of differential privacy. However, many important implementation details and the performance of differentially priv...