Spatiotemporal characterization of prediction error processing during reinforcement learning using simultaneous EEG/fMRI
Accurate reward representations associated with potential choices can be acquired through reinforcement learning mechanisms that use the prediction error (PE) – the difference between expected and actual rewards – as a learning signal to update reward expectations. To date, most studies have used fMRI and EEG independently to identify either the brain regions or the activation latencies associated with PE signals. In this talk I will first give an overview of our work on the temporal characterization of PE processing using stand-alone EEG, before discussing our recent endeavors in coupling single-trial EEG with simultaneously acquired fMRI to infer the full spatiotemporal dynamics of the relevant brain networks. Overall, our current findings suggest that the temporal brain dynamics of PE processing can be inferred reliably from single-trial EEG acquired inside an MR scanner. Crucially, to provide a complete spatiotemporal characterization of the underlying networks, single-trial EEG estimates associated with PE valence and magnitude can be used to inform the analysis of simultaneously acquired fMRI data. I will argue that EEG-informed fMRI has the potential to expose latent brain states – which could otherwise remain undetected with conventional model-based fMRI – by exploiting trial-by-trial variability in electrophysiologically rather than behaviorally derived measures.
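The PE-driven update described above can be illustrated with a minimal delta-rule sketch. This is only a generic Rescorla–Wagner-style example, not the specific model used in the work presented; the function names, the learning rate, and the 0.8 reward probability are all illustrative assumptions.

```python
import random

def delta_rule_update(value, reward, alpha=0.1):
    """One delta-rule step: PE = actual reward - expected reward,
    scaled by a learning rate alpha to update the expectation."""
    pe = reward - value          # prediction error: sign = valence, |pe| = magnitude
    return value + alpha * pe, pe

# Simulate learning the value of a choice rewarded with probability 0.8
random.seed(0)
value = 0.0
for trial in range(200):
    reward = 1.0 if random.random() < 0.8 else 0.0
    value, pe = delta_rule_update(value, reward)

# After many trials, `value` hovers near the true reward probability (~0.8),
# and trial-by-trial `pe` values carry the valence/magnitude information
# that model-based analyses regress against EEG and fMRI signals.
```

In model-based neuroimaging analyses, it is the per-trial `pe` sequence from such a fitted model (rather than the converged `value`) that serves as the regressor of interest.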