Using natural sounds to calculate spectral-temporal receptive fields of complex auditory neurons
Keck Center for Integrative Neuroscience and Departments of Physiology and Psychiatry, University of California at San Francisco, and Department of Psychology, University of California at Berkeley
Auditory experience and feedback are crucial to the process of song learning and song production, as is true for human speech. Consistent with the important role of hearing, the forebrain of songbirds contains numerous specialized auditory areas and auditory neurons. Some of the most complex sensory neurons known are found in the song nucleus HVc: in adult songbirds these cells respond more strongly to the bird's own song (BOS) than to other complex but similar stimuli such as conspecific songs or the BOS played in reverse or with the component syllables out of order. Surprisingly little is known, however, about the hierarchy of forebrain auditory processing in areas afferent to HVc, which must ultimately contribute to the extremely selective and non-linear properties of HVc neurons. Moreover, the responses of many forebrain auditory neurons to simple sounds do not predict their responses to complex, more ethologically relevant sounds such as birdsong.
To address these issues we developed a method for calculating spectral-temporal receptive fields (STRFs) using ensembles of natural sounds, and have begun using it to analyze the auditory forebrain of male zebra finches. STRFs, which are linear descriptions of the time-varying stimulus-response functions of neurons, have been useful in characterizing visual and auditory neurons, but until now, for mathematical reasons, have usually been obtained only with simple stimulus ensembles. Such stimuli often fail to activate high-level sensory neurons effectively, because these neurons may be optimized to analyze natural sounds and images. We showed that it is possible to overcome the simple-stimulus limitation, and used this approach to calculate the STRFs of avian auditory forebrain neurons directly from an ensemble of birdsongs. For simpler neurons, the song-derived STRFs agree well with the classic STRFs we derived using a simple ensemble of random tone pips, validating the overall method. For more complex auditory neurons, the STRFs calculated using natural sounds were often strikingly different from classic, tone-derived STRFs. When we compared the two models by testing their predictions against the actual neural responses, we found that the song-derived STRFs provided a more complete description of neuronal response properties. These results suggest that receptive fields constructed directly from natural stimuli may be crucial in understanding the response properties of high-level, selective visual and auditory neurons, and in dissecting hierarchies of sensory processing in the brain.
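To make the method concrete: the key mathematical difficulty with natural-sound ensembles is that their spectrograms are strongly correlated across frequency and time, so the STRF cannot be read off from the stimulus-response cross-correlation alone; the estimate must be normalized by the stimulus correlations. The sketch below illustrates one standard way to do this, a regularized linear fit on time-lagged spectrogram frames. It is a simplified illustration under assumed data shapes, not the authors' actual estimation code; the function name, the ridge regularizer, and the synthetic setup are all hypothetical.

```python
import numpy as np

def estimate_strf(spec, resp, n_lags, ridge=1e-3):
    """Estimate a spectral-temporal receptive field (STRF) from a
    stimulus spectrogram and a neural response.

    spec   : (n_freq, n_time) stimulus spectrogram
    resp   : (n_time,) neural response (e.g., time-varying firing rate)
    n_lags : number of time bins of stimulus history in the STRF
    ridge  : regularization strength; compensates for the strong
             correlations present in natural-sound ensembles

    Returns an (n_freq, n_lags) linear filter: the model predicts the
    response at time t from the spectrogram over the preceding n_lags bins.
    """
    n_freq, n_time = spec.shape
    # Design matrix: each row stacks the spectrogram frames covering
    # the n_lags bins up to and including time t.
    X = np.zeros((n_time - n_lags + 1, n_freq * n_lags))
    for t in range(n_lags - 1, n_time):
        X[t - n_lags + 1] = spec[:, t - n_lags + 1 : t + 1].ravel()
    y = resp[n_lags - 1 :]
    # Regularized normal equations: dividing out the stimulus
    # autocorrelation (X.T @ X) is exactly the correction that makes
    # STRF estimation from correlated natural stimuli possible.
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    w = np.linalg.solve(XtX, X.T @ y)
    return w.reshape(n_freq, n_lags)
```

Predictions for model comparison then follow directly: convolving a held-out song's spectrogram with the estimated STRF yields a predicted response trace, which can be correlated with the recorded response to ask which STRF (tone-derived or song-derived) describes the neuron better.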