Representing space in discourse processing
When understanding a narrative, we must keep track of several, often changing, dimensions to form a coherent representation of the events being described. In doing so, we construct a mental model that goes beyond the surface structure of the language, drawing on situational and experiential knowledge to build a detailed representation of what the narrative is about; one that need not differ from actually perceiving the event (Glenberg, Meyer, & Lindem, 1987). Recently, the event horizon model (Radvansky, 2012) has established a set of criteria outlining how the structure of mental models influences the accessibility of information in memory. I will discuss the implications of this model for language processing, describing a series of eye-tracking experiments that explore how the spatial structure of our mental models affects the comprehension of spoken narratives. These experiments ask three questions: What metric of space is represented during online language comprehension? How does the way we update a mental model influence the representations we access? And can the structure of a mental model reduce interference when accessing otherwise competing sources of information?