III-14. Time-warped PCA: simultaneous alignment and dimensionality reduction of neural data

Authors

  • Ben Poole
  • Alexander Williams
  • Niru Maheswaranathan
  • Byron Yu
  • Stephen Ryu
  • Stephen A. Baccus
  • Krishna Shenoy
  • Surya Ganguli
Abstract

firing patterns, e.g. by producing stochastic drift in grid attractor networks. Recently, Hardcastle et al. (Neuron, 2015) proposed that border cells may provide one mechanism for correcting such drift. We construct a model in which experience-dependent Hebbian plasticity during exploration allows border cells to self-organize their responses, while also learning connectivity to grid cells which maintain their activity through an attractor network. We show that border cells in this learned network effectively correct for grid drift despite stochasticity of border cell firing. This error correction is robust with respect to environmental shape, including squares and circles. Furthermore, it survives insertion of barriers within an enclosure (consistent with Solstad et al., 2008) even though a given border cell can fire ambiguously at multiple boundaries with the same allocentric orientation (Solstad et al., 2008; Lever et al., 2009). In our mechanism, the learned border-grid connectivity pattern compensates for such ambiguities. Upon deformation of an environment, e.g., by shrinking or changing shape, the error correction initially fails and grid drift resumes. However, in our model the border-grid connectivity adapts to boundaries of the deformed environment, restoring error correction after a characteristic timescale. Our results demonstrate a class of self-organized mechanisms that achieve robust path integration. These mechanisms predict that: (a) disrupting synaptic plasticity between grid cells and border cells will cause grid patterns to drift in a random walk, and (b) deforming an environment will initially lead to grid drift, which is subsequently stabilized.
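The error-correction logic can be caricatured in a toy simulation. This is a hedged sketch, not the authors' model: the border correction is hard-wired rather than learned through Hebbian plasticity, and the grid attractor network is reduced to a scalar position estimate; the function name and parameters are illustrative only.

```python
import numpy as np

def mean_drift_error(correct, n_steps=5000, noise=0.01, seed=0):
    """Mean position-estimate error for noisy 1D path integration.

    A simulated animal bounces between the walls of a [0, 1] box.
    Its internal estimate integrates the velocity signal plus
    Gaussian noise, so it drifts as a random walk. When `correct`
    is True, wall contact triggers a border signal that pulls the
    estimate toward the wall (standing in for the learned
    border-to-grid correction; here hard-wired, not learned).
    """
    rng = np.random.default_rng(seed)
    true_pos, est, v = 0.5, 0.5, 0.002
    errors = []
    for _ in range(n_steps):
        true_pos += v
        if not 0.0 < true_pos < 1.0:            # wall contact
            true_pos = float(np.clip(true_pos, 0.0, 1.0))
            v = -v                              # bounce off wall
            if correct:
                est += 0.8 * (true_pos - est)   # border correction
        est += v + rng.normal(0.0, noise)       # noisy integration
        errors.append(abs(true_pos - est))
    return float(np.mean(errors))
```

Comparing runs with `correct=True` and `correct=False` mirrors prediction (a): removing the border correction leaves the estimate drifting as an unbounded random walk, while intermittent boundary contact keeps the error bounded.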


Similar articles

2D Dimensionality Reduction Methods without Loss

In this paper, several two-dimensional extensions of principal component analysis (PCA) and linear discriminant analysis (LDA) have been applied in a lossless dimensionality reduction framework for face recognition. In this framework, the benefits of dimensionality reduction were used to improve the performance of the predictive model, which was a support vector machine (...
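As an illustration of the two-dimensional PCA family this abstract refers to, here is a minimal sketch of one standard variant (2DPCA, which operates on images as matrices rather than flattened vectors); it is not necessarily the paper's exact lossless framework, and the SVM classification stage is omitted:

```python
import numpy as np

def two_d_pca(images, k):
    """2DPCA: project h x w images onto top-k column directions.

    Builds the image scatter matrix G = E[(A - mean)^T (A - mean)]
    (a w x w matrix) and right-multiplies each image by its top-k
    eigenvectors, yielding h x k feature matrices.
    """
    A = np.asarray(images, dtype=float)          # shape (n, h, w)
    D = A - A.mean(axis=0)
    G = np.einsum('nij,nik->jk', D, D) / len(A)  # (w, w) scatter
    vals, vecs = np.linalg.eigh(G)               # ascending order
    V = vecs[:, ::-1][:, :k]                     # top-k eigenvectors
    return A @ V                                 # (n, h, k) features
```

With `k` equal to the image width the projection is orthonormal, so no information is lost, which is the sense in which such a reduction can be "lossless".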


Weakly-Paired Maximum Covariance Analysis

We study the problem of multimodal dimensionality reduction assuming that data samples can be missing at training time, and not all data modalities may be present at application time. Maximum covariance analysis, as a generalization of PCA, has many desirable properties, but its application to practical problems is limited by its need for perfectly paired data. We overcome...
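For reference, standard maximum covariance analysis on fully paired data reduces to an SVD of the cross-covariance matrix; the weakly-paired extension the abstract describes is not reproduced here, and the function name is illustrative:

```python
import numpy as np

def max_covariance_directions(X, Y):
    """Leading maximum-covariance directions for paired X, Y.

    Returns unit vectors (wx, wy) maximizing cov(X @ wx, Y @ wy):
    the top left and right singular vectors of the cross-covariance
    matrix Xc.T @ Yc / (n - 1).
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    C = Xc.T @ Yc / (len(X) - 1)
    U, s, Vt = np.linalg.svd(C)
    return U[:, 0], Vt[0]
```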


Dimensionality Reduction of Protein Mass Spectrometry Data Using Random Projection

Protein mass spectrometry (MS) pattern recognition has recently emerged as a new method for cancer diagnosis. Unfortunately, classification performance may degrade owing to the enormously high dimensionality of the data. This paper investigates the use of Random Projection in protein MS data dimensionality reduction. The effectiveness of Random Projection (RP) is analyzed and compared against P...
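Random projection itself is simple to sketch: multiply the data by a Gaussian random matrix, which approximately preserves pairwise distances by the Johnson-Lindenstrauss lemma. This is a generic illustration, not the paper's specific pipeline:

```python
import numpy as np

def random_project(X, k, seed=0):
    """Project rows of X down to k dimensions.

    Uses a Gaussian random matrix with entries N(0, 1/k), so
    squared pairwise distances are preserved in expectation
    (Johnson-Lindenstrauss).
    """
    rng = np.random.default_rng(seed)
    R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(X.shape[1], k))
    return X @ R
```

Unlike PCA, the projection is data-independent, which is what makes it attractive for enormously high-dimensional inputs such as MS spectra: no covariance matrix ever has to be formed.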


A Bayesian supervised dual-dimensionality reduction model for simultaneous decoding of LFP and spike train signals.

Neuroscientists are increasingly collecting multimodal data during experiments and observational studies. Different data modalities, such as EEG, fMRI, LFP, and spike trains, offer different views of the complex systems contributing to neural phenomena. Here, we focus on joint modeling of LFP and spike train data, and present a novel Bayesian method for neural decoding to infer behavioral and exp...


Principal Components Analysis Competitive Learning

We present a new neural model that extends classical competitive learning by performing a principal components analysis (PCA) at each neuron. This model represents an improvement over known local PCA methods, because it does not need the entire data set to be presented to the network at each computing step. This allows fast execution while retaining the dimensionality-reduction prop...
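The sample-by-sample flavor of such neural PCA models can be illustrated with Oja's rule, a classical online update that extracts the first principal component one sample at a time; this is a generic sketch of the idea, not the competitive multi-neuron model the abstract proposes:

```python
import numpy as np

def oja_first_component(X, lr=0.01, epochs=20, seed=0):
    """Estimate the first principal component with Oja's rule.

    X should be centered. Each sample updates the weight vector
    individually, so the full dataset never has to be processed
    at once; the -y*w term keeps the weights normalized.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            y = w @ x
            w += lr * y * (x - y * w)   # Oja update
    return w / np.linalg.norm(w)
```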




Journal:

Volume   Issue

Pages   -

Publication year: 2017