Complex inference in neural circuits with probabilistic population codes and topic models

J. Beck, A. Pouget, K. A. Heller - Advances in Neural Information Processing Systems, 2012 - proceedings.neurips.cc
Abstract
Recent experiments have demonstrated that humans and animals typically reason probabilistically about their environment. This ability requires a neural code that represents probability distributions and neural circuits that are capable of implementing the operations of probabilistic inference. The proposed probabilistic population coding (PPC) framework provides a statistically efficient neural representation of probability distributions that is both broadly consistent with physiological measurements and capable of implementing some of the basic operations of probabilistic inference in a biologically plausible way. However, these experiments and the corresponding neural models have largely focused on simple (tractable) probabilistic computations such as cue combination, coordinate transformations, and decision making. As a result, it remains unclear how to generalize this framework to more complex probabilistic computations. Here we address this shortcoming by showing that a very general approximate inference algorithm known as Variational Bayesian Expectation Maximization can be implemented within the linear PPC framework. We apply this approach to a generic problem faced by any given layer of cortex, namely the identification of latent causes of complex mixtures of spikes. We identify a formal equivalence between this spike pattern demixing problem and topic models used for document classification, in particular Latent Dirichlet Allocation (LDA). We then construct a neural network implementation of variational inference and learning for LDA that utilizes a linear PPC. This network relies critically on two non-linear operations: divisive normalization and super-linear facilitation, both of which are ubiquitously observed in neural circuits. We also demonstrate how online learning can be achieved using a variation of Hebb's rule and describe an extension of this work that allows us to deal with time-varying and correlated latent causes.
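For concreteness, the computation such a network approximates is the standard mean-field variational E-step for LDA (Blei, Ng, and Jordan, 2003): per-word topic responsibilities are a softmax of expected log topic proportions plus log topic-word probabilities, and the Dirichlet parameters over topic proportions accumulate those responsibilities. The NumPy sketch below illustrates these updates outside any neural circuit; the function name `lda_e_step` and its signature are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.special import digamma

def lda_e_step(counts, log_beta, alpha, n_iters=50):
    """Mean-field variational E-step for a single LDA document (sketch).

    counts   : (V,) word counts for the document
    log_beta : (K, V) log topic-word probabilities
    alpha    : scalar symmetric Dirichlet prior on topic proportions
    Returns gamma (K,), the variational Dirichlet parameters, and
    phi (V, K), the per-word topic responsibilities.
    """
    K, V = log_beta.shape
    gamma = np.full(K, alpha + counts.sum() / K)      # standard initialization
    for _ in range(n_iters):
        # E[log theta_k] under q(theta) = Dirichlet(gamma)
        e_log_theta = digamma(gamma) - digamma(gamma.sum())
        # phi_{wk} proportional to exp(E[log theta_k] + log beta_{kw})
        log_phi = e_log_theta[None, :] + log_beta.T   # (V, K)
        log_phi -= log_phi.max(axis=1, keepdims=True) # numerical stability
        phi = np.exp(log_phi)
        phi /= phi.sum(axis=1, keepdims=True)         # divisive normalization
        # gamma_k = alpha + sum_w n_w * phi_{wk}
        gamma = alpha + counts @ phi
    return gamma, phi

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K, V = 3, 20
    beta = rng.dirichlet(np.ones(V), size=K)          # random toy topics
    counts = rng.poisson(1.0, size=V).astype(float)   # toy document
    gamma, phi = lda_e_step(counts, np.log(beta), alpha=0.1)
    print("posterior topic weights:", gamma / gamma.sum())
```

Note that the per-word renormalization of `phi` is itself a divisive normalization, and the exponentiation is a super-linear nonlinearity; this is plausibly the correspondence the abstract draws between the LDA updates and operations ubiquitously observed in neural circuits.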