Resolving human object recognition in space and time by combining MEG, fMRI and deep convolutional neural networks
A comprehensive picture of object processing in the human brain requires combining both spatial and temporal information about brain activity. I will present recent work (Cichy et al., 2014, Nat Neurosci) towards this goal, combining human magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI). We measured human brain responses with MEG and fMRI to 92 object images (Kiani et al., 2007; Kriegeskorte et al., 2008). Multivariate pattern classification applied to the MEG data revealed the time course of object processing: whereas individual images were discriminated by visual representations early, category membership at the ordinate and superordinate levels emerged relatively late. Using representational similarity analysis, we combined human fMRI and MEG to show content-specific correspondence between early MEG responses and primary visual cortex (V1), and between later MEG responses and inferior temporal (IT) cortex. Going beyond the ROI approach, we used a searchlight analysis to create a movie showing how visual activity spreads across the brain in space and time. The talk will close with a report on unpublished work investigating the relationship between state-of-the-art computer models of visual categorization (deep convolutional neural networks) and brain activity during object recognition.
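The MEG-fMRI fusion described above can be sketched in a few lines: build a representational dissimilarity matrix (RDM) from the fMRI patterns of a region, build one MEG RDM per time point, and correlate them to obtain a time course of representational similarity for that region. The sketch below uses synthetic stand-in data and illustrative dimensions (sensor and voxel counts are assumptions, not values from the study); it shows the general RSA logic, not the authors' exact pipeline.

```python
# Hedged sketch of MEG-fMRI fusion via representational similarity
# analysis (RSA). All data are synthetic stand-ins; array sizes and
# variable names are illustrative assumptions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_images, n_sensors, n_times, n_voxels = 92, 306, 120, 500

# Synthetic data: MEG sensor patterns per image and time point,
# and fMRI voxel patterns per image for one ROI (e.g. V1 or IT).
meg = rng.standard_normal((n_images, n_sensors, n_times))
fmri_roi = rng.standard_normal((n_images, n_voxels))

def rdm(patterns):
    """Representational dissimilarity matrix as a condensed vector
    of pairwise correlation distances (1 - Pearson r) between the
    response patterns of all image pairs."""
    return pdist(patterns, metric="correlation")

fmri_rdm = rdm(fmri_roi)

# Fusion: correlate the ROI's fMRI RDM with the MEG RDM at each
# time point, yielding a time course of MEG-fMRI representational
# similarity for that region.
fusion = np.array(
    [spearmanr(rdm(meg[:, :, t]), fmri_rdm)[0] for t in range(n_times)]
)
print(fusion.shape)  # one similarity value per MEG time point
```

Peaks of `fusion` early in time for a V1 RDM, and later for an IT RDM, would correspond to the content-specific correspondences reported in the abstract; the searchlight variant repeats the same correlation with an RDM computed in a small sphere around every voxel.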