The assumption that successive ISIs are independent (*i.e.* that the
spike train is a renewal process) leads to an *exact* expression
(rather than the upper bound provided by the reconstruction method)
for the mutual information, subject only to error in the estimation of
the ISI distribution. Here we review the well-known result that a
Poisson process (the special case where the ISI distribution is
exponential) leads to the maximum entropy spike train, and give the
simple closed-form expression for the entropy in this case.

The upper bound on the possible information transmitted in this model
is straightforward to calculate [MacKay and McCulloch, 1952]. The output
is a binary string--we have disallowed the possibility of multiple
spikes per bin. If the conditional entropy is zero (*i.e.* if
there is no noise whatsoever), then all the entropy is
information, and the upper bound on the entropy is equal to the upper
bound on the information.

The probability of observing a spike in a bin of length $\Delta t$
depends on the firing rate *R* as $p = R\,\Delta t$, and the
probability of *not* observing a spike is $1 - R\,\Delta t$. If spikes
are independent--that is, if the probability of observing a spike in
one bin does not depend on whether there was a spike in any
neighboring bin, so that the spike train is a Poisson process--then
the entropy/bin is $H = -p \log_2 p - (1-p)\log_2(1-p)$. At low firing
rates, $R\,\Delta t \ll 1$, and $\log_2(1 - R\,\Delta t) \approx
-R\,\Delta t \log_2 e$, so the entropy per bin is approximately $H
\approx R\,\Delta t \log_2\!\left(e/(R\,\Delta t)\right)$. The entropy
rate (entropy per time) is then the entropy per bin divided by the
time per bin $\Delta t$, or

$$ \frac{H}{\Delta t} \approx R \log_2\!\left(\frac{e}{R\,\Delta t}\right) \ \mathrm{bits/s}. $$

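As a numerical check on the derivation above, the exact binomial entropy per bin can be compared with the low-rate approximation; this is a minimal sketch, and the function names and the example values (40 spikes/s, 1 ms bins) are illustrative, not from the original text.

```python
import math

def entropy_per_bin(rate_hz, dt_s):
    """Exact binary entropy of one bin for a Poisson spike train.

    p = R * dt is the probability of a spike in a bin of width dt
    (assumes R * dt < 1, so multiple spikes per bin are negligible).
    """
    p = rate_hz * dt_s
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def entropy_per_bin_approx(rate_hz, dt_s):
    """Low-rate approximation: H ~ R * dt * log2(e / (R * dt))."""
    p = rate_hz * dt_s
    return p * math.log2(math.e / p)

# Illustrative example: R = 40 spikes/s, dt = 1 ms, so p = 0.04.
R, dt = 40.0, 0.001
print(entropy_per_bin(R, dt))         # exact entropy per bin, in bits
print(entropy_per_bin_approx(R, dt))  # low-rate approximation
print(entropy_per_bin(R, dt) / dt)    # entropy rate, bits per second
```

At this rate the approximation is already within about half a percent of the exact value; the gap widens as $R\,\Delta t$ grows toward 1.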
This upper bound on the information encoded in a discretized spike
train is achieved if (1) there is no noise; (2) spikes are
independent; and (3) the spike rate is low compared to the inverse bin
size, $R\,\Delta t \ll 1$. It shows that the information *rate*
increases almost linearly with the firing rate, but the information
*per spike* decreases
logarithmically.
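
The opposing trends--information rate growing almost linearly with *R* while information per spike falls logarithmically--can be seen by tabulating the bound at a few rates. A minimal sketch (function names and the chosen rates are illustrative):

```python
import math

def bits_per_second(rate_hz, dt_s):
    """Upper-bound information rate, R * log2(e / (R * dt))."""
    return rate_hz * math.log2(math.e / (rate_hz * dt_s))

def bits_per_spike(rate_hz, dt_s):
    """Information per spike, log2(e / (R * dt)): falls logarithmically in R."""
    return math.log2(math.e / (rate_hz * dt_s))

dt = 0.001  # 1 ms bins
for R in (10.0, 20.0, 40.0, 80.0):
    print(f"R = {R:5.1f} Hz: "
          f"{bits_per_second(R, dt):7.2f} bits/s, "
          f"{bits_per_spike(R, dt):5.2f} bits/spike")
```

Doubling the rate subtracts exactly one bit per spike (since $\log_2$ of half the argument drops by 1), while the bits-per-second bound still increases.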

Fri Nov 28 10:17:14 PST 1997