Abstract
The neural mechanisms underlying the comprehension of meaningful sounds are not yet fully understood. While previous research has shown that the auditory cortex can classify auditory stimuli into distinct semantic categories, the specific contributions of the primary (A1) and secondary (A2) auditory cortex to this process remain unclear. We used songbirds as a model species and analyzed their neural responses as they listened to their entire vocal repertoire (∼10 types of vocalizations). We first demonstrate that the distances between call types in the neural representation spaces of A1 and A2 are correlated with their respective distances in acoustic feature space. We then show that while neural activity in both A1 and A2 is equally informative of the acoustic category of the vocalizations, A2 is significantly more informative of their semantic category. Additionally, we show that the semantic categories are more separated in A2. These findings suggest that as the incoming signal moves downstream within the auditory cortex, its acoustic information is preserved, whereas its semantic information is enhanced.