
Sanger: Probability Interpretation of Population Codes


Recent results from neurophysiological investigations have shown that populations of neurons are able to encode sensory and motor information as patterns of activity over a large number of cells. Under certain conditions, information can be extracted from such a cell population by forming a "population code," defined as a linear combination of the cell firing rates weighted by the "preferred" stimulus of each cell (Georgopoulos et al., 1988). However, it can be shown that the population code is not unique: any population of cells can be taken to represent many different possible input variables, depending on how the component cells are labelled (Sanger, 1994; Mussa-Ivaldi, 1988). This leads naturally to the question of whether there is a "best" variable represented by a given cell population.
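As an illustration, the population-vector read-out described above can be sketched in a few lines. The cosine tuning model, the cell count, and all parameters below are assumptions chosen for the example, not taken from the abstract; the decoder itself is the standard rate-weighted sum of preferred-direction vectors.

```python
import numpy as np

# Hypothetical population of cells with cosine tuning to a 2-D direction
# (an assumed model; parameters are illustrative).
n_cells = 64
preferred = np.linspace(0.0, 2 * np.pi, n_cells, endpoint=False)

def firing_rates(stimulus_angle, baseline=10.0, gain=8.0):
    """Cosine tuning: each cell's rate peaks when the stimulus
    matches its preferred direction."""
    return baseline + gain * np.cos(stimulus_angle - preferred)

def population_vector(rates):
    """Georgopoulos-style decoder: sum unit vectors pointing at each
    cell's preferred direction, weighted by that cell's firing rate,
    and read off the angle of the resulting vector."""
    x = np.sum(rates * np.cos(preferred))
    y = np.sum(rates * np.sin(preferred))
    return np.arctan2(y, x)

true_angle = 1.0
decoded = population_vector(firing_rates(true_angle))
```

With evenly spaced preferred directions and noiseless cosine tuning, the read-out recovers the stimulus angle exactly; relabelling the cells (permuting `preferred`) would make the same rate pattern decode to a different variable, which is the non-uniqueness the abstract refers to.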

By identifying a cell's receptive field profile with the posterior distribution of the input data given that the cell has fired, it is possible to interpret the average firing rates of the cells in a population as providing samples from the distribution of the represented data. It then becomes possible to use maximum-entropy arguments to assign a "best" labelling to the cells, one that maximizes the mutual information between the sensory data and the represented variable. I define this "best" labelling to be the optimally represented variable, and I show that the solution leads to even utilization of cellular resources across the population. In addition, these statistical methods lead to a set of supervised and unsupervised learning algorithms that can operate on cell populations in a manner similar to that proposed by Anderson (1995).
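The probability interpretation in the first sentence can be sketched concretely: treat each cell's normalized receptive field as a posterior over the stimulus given a spike from that cell, then combine them with rate weights to recover a distribution over the represented variable. The Gaussian receptive fields, grid, and stimulus below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Discretized 1-D stimulus axis and hypothetical Gaussian receptive fields.
x = np.linspace(-5.0, 5.0, 501)
dx = x[1] - x[0]
centers = np.linspace(-4.0, 4.0, 21)

def rf_posterior(center, width=0.8):
    """Receptive field profile normalized to a density over x,
    read as p(x | cell fired)."""
    p = np.exp(-0.5 * ((x - center) / width) ** 2)
    return p / (p.sum() * dx)

posteriors = np.array([rf_posterior(c) for c in centers])

# Assumed mean firing rates in response to a stimulus at x = 1.0.
rates = np.exp(-0.5 * ((centers - 1.0) / 0.8) ** 2)

# Rate-weighted mixture of the per-cell posteriors: an estimate of the
# distribution of the represented variable.
p_hat = (rates[:, None] * posteriors).sum(axis=0)
p_hat /= p_hat.sum() * dx

map_estimate = x[np.argmax(p_hat)]           # point read-out from p_hat
entropy = -np.sum(p_hat * np.log(p_hat) * dx)  # entropy of the estimate
```

The entropy of `p_hat` is the kind of quantity a maximum-entropy labelling argument would manipulate: relabelling the cells changes which variable the mixture represents, and the "best" labelling is the one maximizing mutual information between stimulus and representation.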

Tony Zador
Tue Oct 22 16:34:57 PDT 1996