How is information about facial identity represented in the brain?
People are extremely proficient at recognizing faces that are familiar to them, but are much worse at identifying unfamiliar faces. This finding has been integrated into cognitive models of face processing, which propose that familiar and unfamiliar faces are represented differently in the human visual system. These models hold that the initial processing of all faces involves the computation of a view-dependent representation. The information from this early processing stage is then compared with image-invariant representations that are specific to familiar faces. Although this mechanism has been widely used to explain the difference in the perception of familiar and unfamiliar faces, there is little evidence to show how it might be implemented in the brain. The aim of this talk will be to ask whether neuroimaging can reveal the specific processes underlying familiar face perception, clarifying which brain regions are involved and how they are involved.