
Information rate depends on firing rate


Experiments like those shown in Fig. 1 suggest that synaptic noise is an important source of output variability. Such experiments can be used to estimate information rates in cortical neurons using techniques developed elsewhere [de Ruyter van Steveninck et al., 1997; Buracas et al., 1996]. In an experimental setting, however, information estimates can be distorted by nonstationarity, finite data sizes, variability between neurons, and a number of other factors. While it is possible to correct for such factors (subject to certain reasonable assumptions), here we consider instead a model neuron in which all assumptions are explicit; this permits us to focus specifically on the role of synaptic variability in governing transmitted information.

In what follows, we consider a model in which the spike generating mechanism is completely deterministic, known, and stationary. Variability in the output spike train is therefore due solely to variability among the stochastic inputs. In this section we begin with the limiting case in which the only source of variability among the inputs is the quantal variability of the synapses, i.e. the fact that the postsynaptic response varies from trial to trial even when only a single functional contact is successfully activated. Thus in this section we assume not only that (1) the spike generating mechanism is completely deterministic, but also that (2) synapses release transmitter reliably whenever an action potential invades the presynaptic terminal (release probability p = 1). Here as elsewhere, the exact sequence of action potentials arriving at each of the presynaptic terminals is the ``signal'', and any variability in the response across repeated trials on which precisely the same sequence is presented represents the ``noise''.
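
To make the setup concrete, the sketch below (Python) implements a deterministic leaky integrate-and-fire neuron driven by a fixed sequence of presynaptic spike times, with quantal amplitude variability as the only noise source. The leak time constant, threshold, and quantal coefficient of variation are illustrative placeholders, not the parameters of Eq. 1.

    import numpy as np

    def lif_response(input_times, dt=0.001, T=1.0, tau=0.02,
                     q_mean=1.0, q_cv=0.3, theta=15.0, rng=None):
        # Deterministic leaky integrate-and-fire neuron driven by a fixed
        # sequence of presynaptic spike times; the only stochastic element
        # is the quantal amplitude of each activated contact (p = 1).
        # All parameter values here are illustrative placeholders.
        rng = np.random.default_rng() if rng is None else rng
        n = int(T / dt)
        idx = (np.asarray(input_times) / dt).astype(int)
        drive = np.zeros(n)
        # One quantal amplitude per input spike; on a repeated trial these
        # draws differ, but the input spike times do not.
        np.add.at(drive, idx, rng.normal(q_mean, q_cv * q_mean, size=len(idx)))
        v, out = 0.0, np.zeros(n, dtype=int)
        for i in range(n):
            v = v * np.exp(-dt / tau) + drive[i]   # leak, then synaptic input
            if v >= theta:                         # deterministic threshold
                out[i], v = 1, 0.0                 # spike and reset
        return out

    # The "signal" is the fixed input sequence; trial-to-trial differences
    # in the output reflect only quantal noise.
    inputs = np.sort(np.random.default_rng(0).uniform(0.0, 1.0, 500))
    trial1 = lif_response(inputs, rng=np.random.default_rng(1))
    trial2 = lif_response(inputs, rng=np.random.default_rng(2))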

The information per spike is defined (Eq. 8) as the difference between the total and conditional entropies per spike. Fig. 2A shows how these quantities depend on the firing rate for the integrate-and-fire spike generation model given by Eq. 1. The dashed curve represents the total entropy, which quantifies the total output variability of the spike train. The dotted line represents the conditional entropy, which quantifies the variability that remains when the signal (i.e. the precise firing times of each of the inputs) is held constant. The solid line is the mutual information between the input and the output, the difference between these two quantities. If there were no quantal variability, the conditional entropy would be zero, and all the entropy would be information. Fig. 2A shows that even when the only source of synaptic variability is quantal, only about 3/4 of the spike entropy is information (about 6 bits/spike information vs. 8 bits/spike total entropy at 4 Hz). As seen in the next section, additional sources of synaptic variability reduce this fraction further.
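
The decomposition in Eq. 8 can be estimated from repeated trials, in the spirit of the direct methods cited above. Below is a minimal sketch assuming the spike trains have been discretized into binary bins and grouped into words of L bins; the plug-in entropy estimator and the word length are illustrative simplifications, and real estimates require finite-data corrections.

    import numpy as np
    from collections import Counter

    def entropy_bits(samples):
        # Plug-in (naive) entropy estimate of a discrete distribution,
        # in bits; finite-size corrections are omitted for clarity.
        counts = np.array(list(Counter(samples).values()), dtype=float)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()

    def info_per_spike(trials, L=8):
        # trials: (n_trials, n_bins) binary array, with the same input
        # signal presented on every trial.
        n_trials, n_bins = trials.shape
        starts = range(0, n_bins - L + 1, L)
        word = lambda t, s: tuple(trials[t, s:s + L])
        # Total entropy: word distribution pooled over all times and trials.
        H_total = entropy_bits([word(t, s) for t in range(n_trials)
                                for s in starts])
        # Conditional (noise) entropy: word distribution across trials at a
        # fixed time, averaged over times.
        H_cond = np.mean([entropy_bits([word(t, s) for t in range(n_trials)])
                          for s in starts])
        spikes_per_word = trials.mean() * L    # mean spike count per word
        return (H_total - H_cond) / spikes_per_word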

The information and entropies per spike decrease monotonically with firing rate. These quantities diverge logarithmically as the firing rate goes to zero; for this reason, they were calculated only for firing rates down to about 4 Hz. The behavior of the total entropy per spike at low firing rates can be understood in terms of the results for the limiting case of a Poisson model outlined in Section 2.6.
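
The logarithmic divergence can be made explicit in the Poisson limit. The following is a reconstruction of the standard argument, consistent with Section 2.6 but not quoted from it:

    % Binary bins of width \Delta t contain a spike with probability
    % p = \lambda\,\Delta t \ll 1, so the entropy per bin is
    %   -p\log_2 p - (1-p)\log_2(1-p) \approx p\,\log_2(e/p).
    % Dividing by the spike probability per bin gives the entropy per spike:
    H_{\mathrm{spike}} \;\approx\; \log_2\!\frac{e}{\lambda\,\Delta t}
    \;\longrightarrow\; \infty \quad\text{as } \lambda \to 0 .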

In contrast to the entropy and information per spike, the entropy and information per second increase with increasing firing rate. The reason is that the entropy and information per spike decrease only logarithmically with firing rate, so the per-second quantities, which scale roughly as λ log(1/(λ Δt)), increase with the rate λ (see Section 2.6). Fig. 2B illustrates the entropy and information rates (units: bits/second) corresponding to the curves shown in Fig. 2A. Because of our assumption that time is discretized into bins of length Δt, each containing at most one spike, the information declines back to zero at very high firing rates (not shown).
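
The decline to zero at high rates follows directly from the binary-bin assumption. A short numerical sketch (the bin width below is an arbitrary choice, not the paper's):

    import numpy as np

    dt = 0.002                                     # bin width in s (illustrative)
    rates = np.linspace(1.0, 1.0 / dt - 1.0, 200)  # firing rates in Hz
    p = rates * dt                                 # spike probability per bin

    # Entropy rate (bits/s) of a binary process with at most one spike
    # per bin, i.e. the maximum-entropy bound at a given firing rate.
    H_rate = -(p * np.log2(p) + (1 - p) * np.log2(1 - p)) / dt

    # H_rate rises almost linearly at low rates (p << 1), peaks at
    # p = 0.5, and returns to zero as p -> 1, where every bin contains
    # a spike and the train carries no variability at all.
    print(H_rate[0], H_rate.max(), H_rate[-1])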

The information rate is a nearly linear function of the firing rate (Fig. 2B). This is precisely the behavior expected of the maximum entropy Poisson process (Eq. 12). Although the output of the integrate-and-fire model is not a Poisson process, its dependence on firing rate is qualitatively similar: the linear increase in firing rate more than compensates for the logarithmic decrease in the entropy per spike.
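
Differentiating the Poisson entropy rate (in the form sketched above; the constants in Eq. 12 may differ) shows why the curve is nearly linear when λΔt ≪ 1:

    % Slope of the per-second entropy \lambda \log_2(e/(\lambda\Delta t)):
    \frac{d}{d\lambda}\Bigl[\lambda\,\log_2\frac{e}{\lambda\,\Delta t}\Bigr]
      \;=\; \log_2\frac{1}{\lambda\,\Delta t} ,
    % which varies only logarithmically with \lambda: over the plotted
    % range the slope is nearly constant, hence a near-linear rate.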


