
Latham: What can information theory tell us about neural codes from large populations of neurons?

NIH

Information theory provides a framework for quantifying the relative effectiveness of different neural codes. Numerous investigators have used information-theoretic techniques to compare, for instance, rate codes with codes that make explicit use of spike arrival times. Almost all of this analysis has been applied to small numbers of neurons (fewer than about 5), for which both the physiological and computational methods are fairly well developed. Unfortunately, these methods are not readily applicable to large populations of neurons. Our goal is to develop techniques that can be used to examine neural codes in such populations. To approach this problem, we consider a simple case in which every neuron in a population is equally correlated with every other neuron (i.e., any pair, triplet, etc. of neurons has the same correlation as any other pair, triplet, etc.), and the same holds for the stimulus space. While this is obviously an oversimplification of real systems, it can give us insight into highly correlated multi-neuron distributions. Moreover, since correlations tend to decrease information, this analysis may provide a lower bound on the mutual information.
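
As a concrete illustration of the kind of calculation this symmetry permits, the sketch below computes the mutual information of a linear-Gaussian population whose noise covariance is exchangeable (the same variance for every neuron and the same correlation c for every pair). The Gaussian model, the additive common signal, and all variable names are illustrative assumptions; the abstract does not specify the underlying population model.

    # Minimal sketch (assumed model, not the abstract's): mutual information
    # I(S;R) for a Gaussian population r_i = s + n_i with an exchangeable
    # noise covariance -- Var(n_i) = sig_n2, Corr(n_i, n_j) = c for i != j.
    import numpy as np

    def gaussian_mi(n_neurons, sig_s2=1.0, sig_n2=1.0, c=0.2):
        """Mutual information in nats between a scalar Gaussian stimulus s
        (variance sig_s2) and the population response r."""
        J = np.ones((n_neurons, n_neurons))              # all-ones matrix
        C_noise = sig_n2 * ((1 - c) * np.eye(n_neurons) + c * J)
        C_resp = C_noise + sig_s2 * J                    # shared signal adds a rank-1 term
        # For Gaussians, I(S;R) = (1/2) [log det C_resp - log det C_noise].
        return 0.5 * (np.linalg.slogdet(C_resp)[1]
                      - np.linalg.slogdet(C_noise)[1])

    for n in (1, 10, 100, 1000):
        print(n, gaussian_mi(n))

Note that in this toy model, where a single shared stimulus drives every neuron, the information saturates as the population grows; linear growth in the number of neurons (point 1 below) requires a richer, correlated stimulus space of the kind the abstract considers.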

Given the above assumption about correlations, we are able to show that

  1. For a large population of neurons, the mutual information grows linearly with the number of neurons.
  2. In the low signal-to-noise regime, the information depends only on two-point correlations in the stimulus space (see the sketch following this list).
  3. At high signal-to-noise ratio, it may be possible to use perturbation methods to estimate the mutual information.
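
Result 2 has a simple Gaussian analogue that makes the role of two-point correlations explicit. For a Gaussian channel r = s + n with stimulus covariance C_s and noise covariance C_n, the mutual information is I = (1/2) log det(I + C_n^{-1} C_s); expanding the log-determinant for small C_s gives I ~ (1/2) Tr(C_n^{-1} C_s), which involves only second moments. The sketch below checks this numerically under the same illustrative Gaussian assumptions as before; the abstract's actual model and expansion are not given.

    # Low-SNR check (illustrative Gaussian model): the exact mutual
    # information approaches its leading-order term 0.5 * Tr(C_n^{-1} C_s),
    # which depends only on two-point statistics.
    import numpy as np

    rng = np.random.default_rng(0)
    n, c = 20, 0.2
    C_n = (1 - c) * np.eye(n) + c * np.ones((n, n))   # exchangeable noise covariance
    A = rng.standard_normal((n, n))
    C_s_unit = A @ A.T / n                            # an arbitrary stimulus covariance

    for eps in (1e-1, 1e-2, 1e-3):                    # shrink the signal: low SNR
        C_s = eps * C_s_unit
        M = np.linalg.solve(C_n, C_s)                 # C_n^{-1} C_s
        exact = 0.5 * np.linalg.slogdet(np.eye(n) + M)[1]
        approx = 0.5 * np.trace(M)
        print(eps, exact, approx)

As the signal variance shrinks, the exact and approximate values agree to leading order, which is the sense in which only two-point correlations matter at low signal-to-noise ratio.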


