Mediating the mapping between language and the visual world
The goal of much psycholinguistic research is to understand the processes by which linguistic input is mapped onto a hearer's mental representation of his or her world. I shall review a number of studies in which we monitored eye movements around a visual scene as hearers listened to descriptions of what may happen next in the scenario depicted by the scene. Taken together, the studies show that aspects of the language make contact with the visual world at the theoretically earliest opportunity. At first glance, it appears that the processor is able to use information about what has been heard so far, in conjunction with the visual context and real-world knowledge, to anticipate what will be referred to next in the linguistic input. However, I shall also present data which cast doubt on this; I shall suggest instead that the anticipatory processes we have observed thus far reflect an interaction between unfolding event descriptions in the language and experiential knowledge of how the objects in the scene may participate in those events. In so doing, I shall also show that in these studies language is mapped not onto the visual world but onto a mental world (eye movements can be directed by the language to where objects had been located, or will be located, rather than to where they are located), and that language-mediated eye movements are, so far as it is possible to tell, 'automatic'. I shall conclude that the interpretation of a sentence situated in a visual world may have as much to do with non-linguistic, primarily visually driven processes as with linguistic processes. This is to be expected on the view that, ontogenetically, such processes most likely precede the linguistic processes that we typically assume 'drive' the mapping between language and the world.