The present paper is the first to interpret information rates in single cortical neurons in terms of the underlying biophysical sources of the ``signal'' and ``noise''. Here ``signal'' is the set of firing times over the ensemble of presynaptic neurons, while ``noise'' is synaptic variability that leads to variability in the firing times of the postsynaptic neuron.
The present study was centrally motivated by the hypothesis that the nervous system is under selective evolutionary pressure to preserve as much information as possible during processing. In the limit this is trivially true: A retina that transmits no information whatsoever about the visual input is no better than no retina at all! Less trivially, computational power in some models increases as the precision of the underlying components increases [Zador and Pearlmutter, 1996]. If such principles apply to cortical computation, then the cortex may have evolved strategies to compensate for synaptic unreliability, given other constraints.
The most obvious strategy would be simply to increase the synaptic release probability. Indeed, there are synapses (used e.g. in the fly retina [de Ruyter Van Steveninck and Laughlin, 1996]) where the number of release sites per terminal is large enough to guarantee a high fidelity connection under normal conditions. But such multi-release synapses are large, and the cortex may be under an additional constraint to minimize size.
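The reliability gain from multiple release sites can be sketched with a simple binomial calculation (an illustration of the redundancy argument, not a model from this paper): if each of $n$ independent sites releases with probability $p$, the probability that at least one vesicle is released is $1-(1-p)^n$, which approaches unity rapidly as $n$ grows.

```python
# Reliability of a compound synapse with n independent release sites,
# each releasing a vesicle with probability p (illustrative values).
def connection_reliability(p, n):
    """Probability that at least one of n sites releases: 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p) ** n

# A single unreliable site (p = 0.25) vs. multi-site terminals.
for n in (1, 5, 20):
    print(n, round(connection_reliability(0.25, n), 3))
```

With $p = 0.25$, five sites already push reliability above 0.75, and twenty sites above 0.99, at the cost of a correspondingly larger terminal.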
It is reasonable to wonder why the more direct approach--setting the release probability to unity--does not appear to be common. It is well known that the release probability changes in a history-dependent manner during short-term plasticity (e.g.\ paired-pulse facilitation and depression, posttetanic potentiation, etc.; see [Magleby, 1987, Zucker, 1989, Fisher et al., 1997, Dobrunz and Stevens, 1997, Markram and Tsodyks, 1996, Tsodyks and Markram, 1997, Abbott et al., 1997, Varela et al., 1997, Zador and Dobrunz, 1997]). We speculate that a dynamic release probability is essential to cortical computation. A dynamic release probability could function as a form of gain control [Varela et al., 1997, Tsodyks and Markram, 1997, Abbott et al., 1997]. More generally, it could be used to permit efficient computation on time-varying signals [Zador and Maass, 1997]. Thus we propose that the (teleological) reason that the release probability does not simply approach unity may be that cortical computation requires that it retain a large dynamic range.
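How a dynamic release probability can act as gain control may be seen in a minimal depression sketch (in the spirit of the short-term plasticity models cited above, but not any of their exact formulations; all parameter values here are illustrative): each presynaptic spike depletes the release probability by a factor $(1-u)$, and it recovers toward its resting value $p_{\max}$ with time constant $\tau_{rec}$. At steady state the rate of successful transmissions saturates as the presynaptic rate grows, compressing the input's dynamic range.

```python
import math

# Minimal synaptic-depression sketch (illustrative parameters):
# after each presynaptic spike, release probability p is scaled by
# (1 - u); between spikes it recovers toward p_max with time
# constant tau_rec:  dp/dt = (p_max - p) / tau_rec.
def steady_state_release(rate_hz, p_max=0.5, u=0.5, tau_rec=0.5):
    """Steady-state release probability just before each spike,
    for regular presynaptic firing at rate_hz."""
    if rate_hz <= 0:
        return p_max
    decay = math.exp(-1.0 / (rate_hz * tau_rec))  # recovery over one ISI
    # Fixed point of p -> p_max - (p_max - (1 - u) * p) * decay
    return p_max * (1.0 - decay) / (1.0 - (1.0 - u) * decay)

# Transmission rate (successful releases per second) vs. input rate:
for rate in (1, 10, 100):
    print(rate, round(rate * steady_state_release(rate), 2))
```

The transmission rate grows sublinearly and saturates near $p_{\max}/(u\,\tau_{rec})$ at high input rates, which is the sense in which a depleting release probability implements gain control.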
The cortex appears to adopt the ``redundant connection'' approach, albeit on a smaller scale. Fig. 4B shows that even a modest increase in the connection redundancy from 1 to 5 can double the information rate, from 1 to 2 bits/spike. While a direct comparison is difficult, it is interesting to note that information rates in both anesthetized [Bair et al., 1997] and alert [Buracas et al., 1996] primate visual cortex are in the same range.
In our formulation, the fraction of the signal entropy transmitted by the spike train is small, even when the signal is not corrupted by noise. This follows immediately when we consider that in order to drive the model neuron to fire at, for example, 40 Hz, impulses must arrive at 2,400 Hz, which is equivalent to 60 input neurons each firing at 40 Hz, with each input axon presumably carrying comparable (and, by assumption, independent) information. This captures what may be an essential feature of the cortex: each pyramidal neuron must in some sense ``summarize'' with a single spike train the spike trains arriving from other neurons. It is this ``summary'' that represents the ``computation'' the neuron performs. Understanding the fidelity with which this summary can be transmitted is a necessary step toward understanding the computation itself.
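The arithmetic behind this bound can be made explicit (a back-of-the-envelope sketch using the numbers from the text): a 40 Hz output requires a 2,400 Hz aggregate input, i.e.\ 60 inputs at 40 Hz each, so if every input axon carries comparable independent entropy, the single output spike train can transmit at most about 1/60 of the ensemble signal entropy, even with perfectly reliable synapses.

```python
# Back-of-the-envelope bound from the text: the fraction of ensemble
# signal entropy a single output spike train can transmit, assuming
# each input axon carries comparable, independent information.
output_rate_hz = 40        # desired postsynaptic firing rate
aggregate_input_hz = 2400  # total presynaptic impulse rate required
per_input_hz = 40          # firing rate of each presynaptic neuron

n_inputs = aggregate_input_hz // per_input_hz  # number of input neurons
max_fraction = 1 / n_inputs                    # upper bound on transmitted fraction

print(n_inputs, max_fraction)
```

The bound is about 1.7%, independent of synaptic reliability; noise at the synapses can only reduce the transmitted fraction further.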
This work was supported by The Sloan Center for Theoretical Neurobiology at the Salk Institute, and a grant to CFS from the HHMI.