
Reconstruction method

One approach to this dilemma [Bialek et al., 1991, Bialek et al., 1993] is to compute a strict lower bound on the mutual information using the reconstruction method. The idea is to ``decode'' the output and use it to ``reconstruct'' the input that gave rise to it. The error between the reconstructed and actual inputs is then a measure of the fidelity of transmission, and with a few testable assumptions it can be related to the information. Formally, this method is based on an expression mathematically equivalent to Eq. (5) involving the conditional entropy H[s | {t_i}] of the signal given the spike train,

I({t_i}; s) = H[s] - H[s | {t_i}].   (6)
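The entropy-difference identity in Eq. (6) can be checked numerically on a toy discrete joint distribution (the distribution below is made up purely for illustration; the paper's signal is a continuous time series):

```python
# Sketch: verify I(s; t) = H[s] - H[s|t] on a hypothetical 2x2 joint distribution.
import numpy as np

# Made-up joint distribution p(s, t): rows index the signal value,
# columns index the spike-train outcome.
p = np.array([[0.3, 0.1],
              [0.2, 0.4]])

ps = p.sum(axis=1)   # marginal over the signal s
pt = p.sum(axis=0)   # marginal over the spike train t

def H(dist):
    """Shannon entropy in bits, ignoring zero-probability outcomes."""
    dist = dist[dist > 0]
    return -np.sum(dist * np.log2(dist))

# Conditional entropy H[s|t] = sum_t p(t) H[s | t]
H_s_given_t = sum(pt[j] * H(p[:, j] / pt[j]) for j in range(len(pt)))

# Mutual information computed two equivalent ways
I_diff = H(ps) - H_s_given_t
I_joint = sum(p[i, j] * np.log2(p[i, j] / (ps[i] * pt[j]))
              for i in range(2) for j in range(2))
print(I_diff, I_joint)  # the two expressions agree
```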
In the present context, the quantity reconstructed is the summed input s(t). The entropy H[s] is just the entropy of the time series s(t), and can be evaluated directly from the Poisson synthesis equation (Eq. 3). Intuitively, Eq. (6) says that the information gained about the signal by observing the spike train is just the initial uncertainty about the synaptic drive (in the absence of knowledge of the spike train) minus the uncertainty that remains about the signal once the spike train is known. The reconstruction method estimates the input from the output, and then bounds the entropy of the reconstruction errors from above by assuming they are Gaussian, the maximum-entropy distribution for a given variance. This method, which provides a lower bound on the mutual information, has been used with much success in a variety of experimental preparations [de Ruyter van Steveninck and Bialek, 1988, Bialek et al., 1991, Rieke et al., 1997, de Ruyter van Steveninck and Laughlin, 1996].
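The Gaussian error bound can be sketched numerically. Because the Gaussian maximizes entropy at fixed variance, the conditional entropy of the signal is at most that of a Gaussian with the reconstruction-error variance, giving the per-sample lower bound I >= (1/2) log2[Var(s)/Var(error)]. The simulation below is a simplified, white-signal version (signal variances, noise level, and the linear decoder are all illustrative assumptions, not the paper's setup):

```python
# Sketch: lower-bound mutual information from a linear reconstruction,
# assuming a toy additive-noise channel (all parameters hypothetical).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
s = rng.normal(0.0, 1.0, n)       # toy "synaptic drive" signal
noise = rng.normal(0.0, 0.5, n)   # toy channel noise
r = s + noise                     # observed response

# Optimal linear decoder s_hat = a * r (least-squares coefficient)
a = np.dot(s, r) / np.dot(r, r)
err = s - a * r

# Gaussian upper bound on the error entropy yields a lower bound
# on the information, in bits per sample.
I_lower = 0.5 * np.log2(np.var(s) / np.var(err))
print(I_lower)
```

For this Gaussian toy channel the bound is essentially tight, approaching the exact channel information (1/2) log2(1 + SNR); for non-Gaussian errors it is a strict lower bound, which is why the method is conservative.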

Tony Zador
Fri Nov 28 10:17:14 PST 1997