Balancing Visual Prediction and Visual Stability
The visual world is dynamic, but object identities do not randomly change from moment to moment: objects often change location smoothly, but they rarely pop into or out of existence. This presents two major challenges for the visual system. On the one hand, because visual processing is sluggish, there is a need to predict changing object locations. On the other hand, because visual input is noisy and discontinuous, there is a need to represent object identities as continuous and stable. In three related lines of research in my lab, we have investigated how the visual system balances these competing goals of prediction and stability. First, we have used fMRI to isolate a mechanism that gates and filters information about distractor objects, allowing selective representation of attended objects. Second, using psychophysics, TMS, and fMRI, we have found that the visual system assigns predictive locations to dynamic objects, thus anticipating smoothly changing visual input. In a third line of research, we have found evidence for a mechanism that links the perception of an object’s identity and properties from moment to moment, thus promoting perceptual continuity. These results show that an object's present appearance is captured by what was perceived over the last several seconds. The spatiotemporal tuning of this serial dependence reveals the continuity field (CF), within which perceptual judgments are pulled toward previous percepts, making different objects appear the same. Together, our results reveal how the visual system delicately balances the need to optimize sensitivity to image changes (prediction) with the need to represent the temporal continuity of objects: the likelihood that objects perceived at this moment tend to exist in subsequent moments.
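The spatiotemporally tuned pull of past percepts on present appearance can be illustrated with a minimal sketch. This is not the lab's actual model: the Gaussian spatial tuning, exponential temporal decay, and all parameter values (`sigma_space`, `tau`) are assumptions chosen only to convey the idea that recent, nearby percepts bias the current judgment.

```python
import math

def serial_dependence(current, history, sigma_space=10.0, tau=6.0):
    """Illustrative continuity-field sketch (assumed form, not the published model).

    current: the current stimulus value (e.g., an orientation in degrees).
    history: list of (value, distance_deg, seconds_ago) tuples for past percepts.
    Past percepts are weighted by assumed Gaussian spatial tuning and
    exponential temporal decay, then averaged with the current input,
    pulling the judgment toward what was recently seen nearby.
    """
    num, den = current, 1.0
    for value, dist, dt in history:
        w = math.exp(-dist**2 / (2 * sigma_space**2)) * math.exp(-dt / tau)
        num += w * value
        den += w
    return num / den

# A percept seen a few seconds ago, nearby in the visual field,
# pulls the current judgment part of the way toward its value:
print(serial_dependence(0.0, [(10.0, 2.0, 3.0)]))
```

With no recent history the judgment is veridical; as past percepts grow more distant in space or time, their weight (and the bias) falls off, which is the spatiotemporal tuning the passage describes.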