
Michael R. DeWeese (1) and Markus Meister (2)

(1) Sloan Center for Theoretical Neurobiology, The Salk Institute; (2) Molecular and Cellular Biology, Harvard University

How much do we learn from one observation?

Information theory provides a powerful framework for analyzing how neurons represent sensory stimuli or other behavioral variables. A recurring question concerns the amount of information conveyed by a specific neuronal response. Here we show that a commonly used definition for this quantity has a serious flaw: the information obtained from successive observations of neural activity fails to combine additively. Additivity is highly desirable, both on theoretical grounds and for the practical purpose of analyzing population codes. We propose an alternative definition for the information per observation and prove that it uniquely satisfies additivity. We demonstrate the qualitative differences between the old and new measures with visually evoked responses from a motion-sensitive neuron in primate cortex. Counterintuitively, we find that the cell is least informative about the stimulus that evokes the biggest response; the old measure misses this. Our analysis allows a reinterpretation of several published results: in each case we show that either the neurons being studied are not operating at their full information capacity, or the ensemble of stimuli is not well matched to the natural environment.
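The abstract does not spell out the two definitions, so the following Python sketch is only an illustration of the kind of contrast at issue. It assumes the "old" per-observation measure is the KL divergence between posterior and prior over stimuli (always non-negative), and the "new" one is the reduction in stimulus entropy, H(S) - H(S|r) (possibly negative for a single response, but additive and averaging to the mutual information). The toy joint distribution and function names are invented for this example, not taken from the paper.

```python
import numpy as np

# Toy joint distribution p(s, r): 3 stimuli (rows) x 4 responses (columns).
# The numbers are illustrative only.
p_sr = np.array([
    [0.10, 0.05, 0.10, 0.05],
    [0.05, 0.20, 0.05, 0.05],
    [0.02, 0.03, 0.05, 0.25],
])
p_sr /= p_sr.sum()

p_s = p_sr.sum(axis=1)        # prior over stimuli, p(s)
p_r = p_sr.sum(axis=0)        # marginal over responses, p(r)
p_s_given_r = p_sr / p_r      # each column is the posterior p(s | r)

def surprise(r):
    """KL divergence D( p(s|r) || p(s) ): one candidate 'information per
    observation'. Non-negative for every r, but not additive across
    successive observations."""
    post = p_s_given_r[:, r]
    return np.sum(post * np.log2(post / p_s))

def specific_information(r):
    """Entropy reduction H(S) - H(S|r): an alternative candidate that is
    additive, though it can be negative for an individual response."""
    post = p_s_given_r[:, r]
    h_prior = -np.sum(p_s * np.log2(p_s))
    h_post = -np.sum(post * np.log2(post))
    return h_prior - h_post

for r in range(p_sr.shape[1]):
    print(f"r={r}: surprise = {surprise(r):.3f} bits, "
          f"specific information = {specific_information(r):.3f} bits")

# Sanity check: both measures average (over p(r)) to the same mutual
# information I(S; R), even though they differ response by response.
mi_old = sum(p_r[r] * surprise(r) for r in range(len(p_r)))
mi_new = sum(p_r[r] * specific_information(r) for r in range(len(p_r)))
print(f"I(S;R) from surprise: {mi_old:.3f} bits, from specific info: {mi_new:.3f} bits")
```

Running the sketch shows the two measures ranking individual responses differently while agreeing on the average, which is the kind of discrepancy the abstract describes for the motion-sensitive neuron.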


