Information theory provides a framework for quantifying the relative effectiveness of different neural codes. Numerous investigators have used information-theoretic techniques to compare, for instance, rate codes with codes that explicitly make use of spike arrival times. Almost all such analyses have been applied to small numbers of neurons (fewer than about five), for which both physiological and computational methods are fairly well developed. Unfortunately, these methods are not readily applicable to large populations of neurons. Our goal is to develop techniques that can be used to examine neural codes in such populations. To approach this problem, we consider a simple case in which every neuron in a population is equally correlated with every other neuron (i.e., any pair, triplet, etc. of neurons has the same correlation as any other pair, triplet, etc.), and the same holds for the stimulus space. While this is obviously an oversimplification of real systems, it can give us insight into highly correlated multi-neuron distributions. Moreover, since correlations tend to decrease information, this analysis may provide a lower bound on mutual information.
Given the above assumption about correlations, we are able to show that