How the brain builds high-level representations of visual stimuli
An important function of visual cortex is to extract invariant categorical information about a stimulus, such as whether the stimulus depicts a face or a particular object. While categorical information is known to exist in ventral temporal cortex, exactly how the brain constructs this high-level representation is poorly understood. In this study, we target face-selective regions in ventral temporal cortex (e.g., the fusiform face area) and design an fMRI protocol that assesses how these regions represent the basic stimulus dimension of visual space. Measuring responses to faces varying in position and size, we develop a population receptive field (pRF) model that quantitatively accounts for the full range of responses and reveals a functional hierarchy across face-selective regions. We then manipulate each subject's attentional state and discover that attention to the stimulus increases the gain, eccentricity, and size of pRFs in face-selective regions. Finally, using a model-based decoding analysis, we show that these changes in pRF properties reduce uncertainty in the representation of spatial information. These results elucidate how bottom-up and top-down factors shape neural responses in ventral temporal cortex, and lay the groundwork for understanding additional response properties such as category selectivity.
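To make the pRF concept concrete, here is a minimal sketch of a standard pRF forward model: the predicted response of a voxel is the overlap between the stimulus aperture and a 2D isotropic Gaussian receptive field, optionally passed through a compressive power-law output nonlinearity. The parameterization below (grid units, default exponent, function name) is illustrative and not the paper's exact implementation.

```python
import numpy as np

def prf_response(stim, x0, y0, sigma, gain=1.0, n=0.5):
    """Predicted response of one pRF to a binary stimulus aperture.

    stim          : 2D array over the visual field; 1 where the stimulus is present.
    x0, y0, sigma : pRF center and size, in grid units (illustrative choice).
    gain          : response gain.
    n             : static output nonlinearity; n < 1 gives compressive summation.
    """
    h, w = stim.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # 2D isotropic Gaussian receptive field, normalized to unit volume
    gauss = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))
    gauss /= gauss.sum()
    # Overlap of stimulus and receptive field, then gain and nonlinearity
    return gain * (stim * gauss).sum() ** n

# A face-sized patch at the pRF center drives a larger response than the
# same patch in the far periphery.
stim = np.zeros((64, 64))
stim[24:40, 24:40] = 1.0
r_center = prf_response(stim, x0=32, y0=32, sigma=5)
r_far = prf_response(stim, x0=4, y0=4, sigma=5)
```

Under this formulation, the attentional effects reported in the abstract correspond to fitted increases in `gain`, in pRF eccentricity (distance of `(x0, y0)` from fixation), and in `sigma`.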