'Early parallelism in spoken language processing? Neurophysiological experiments using MEG and EEG.'
A major debate in psycholinguistics concerns the order of access to different types of linguistic information, with alternative views advocating sequential versus parallel processing of language. While behavioural experiments have provided evidence for both views, early neurophysiological experiments appeared to give more direct support for serial access to different levels of information. In this talk, however, I will present a body of recent EEG and MEG studies that strongly suggest that the processing of phonological, lexical, semantic and syntactic information commences in parallel, or near-simultaneously. Moreover, some of these processes take place substantially earlier than previously believed, within the first 200 msec after the information becomes available in the auditory input. These earliest stages of language processing appear to occur automatically and do not require focussed attention on incoming speech. We will look at the spatio-temporal patterns of this early neural activation, investigate how they may reflect language-specific memory traces, their formation and interaction, ask whether conduction delays on the order of milliseconds may play a significant role in linguistic processing, and discuss whether these experimental strategies can solve (psycho)linguistic problems or have translational potential. The early automatic processes proposed here do not falsify the well-known neural dynamics in the later time range but provide a more complete picture of cerebral language processing.