Exploring the Fidelity of Visual Working and Long-term Memory
Although visual working memory (VWM) and visual long-term memory (VLTM) are thought to be distinct systems (Broadbent, 1957; Jonides et al., 2008), they also clearly interact with and depend on one another in a variety of ways. Indeed, working memory maintenance appears to be a critical step for long-term encoding (Ranganath, Cohen, & Brozinsky, 2005). An important set of questions concerning how they interact pertains to the nature of the representations they employ: their contents, formats, and resolutions. Recent research (Brady, Konkle, Gill, Oliva, & Alvarez, 2013) has investigated the resolutions of these two systems in terms of color. A limitation of this research, however, is that VLTM and VWM representations could possess a similar resolution for any given feature, such as color, while differing along others, and more importantly, in terms of how they encode objects as a whole. I therefore sought to investigate the resolutions of these systems in more holistic terms, by adding image noise to stimuli rather than probing precision along a single dimension.
Participants first completed a VWM task. On each trial they saw four possible image locations, after which two images of real-world objects appeared in two randomly chosen locations. At test, participants faced a two-alternative forced-choice (2AFC) judgment between a randomly selected object from the encoding display and a new object, with the task of identifying the old object. After completing the VWM task, participants were given a surprise VLTM test: on each trial, the previously untested object from one of the encoding displays was paired with a new object, and participants reported which one was “old” (that is, which one had appeared at some point in the encoding phase). In this way, each of the two objects from every encoding display was tested for recognition, one in VWM and one in VLTM, and they were tested in exactly the same way. Performance was considerably and significantly worse in the VLTM recognition test. In a second experiment, we injected noise into the test stimuli by randomly scrambling a percentage of the pixels in each image (25%-75%). VWM performance was unaffected by noise, whereas VLTM performance decreased linearly as noise increased.
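The source does not specify how the pixel scrambling was implemented; the following is a minimal sketch of one plausible implementation, in which a chosen proportion of pixel positions is selected at random and their values permuted among themselves, leaving all other pixels intact. The function name and its parameters are illustrative assumptions, not the authors' actual code.

```python
import numpy as np

def scramble_pixels(image, proportion, rng=None):
    """Return a copy of `image` (H x W or H x W x C array) with
    `proportion` of its pixel positions randomly shuffled among
    themselves; the remaining pixels are left untouched.

    NOTE: illustrative sketch only -- the original study's exact
    scrambling procedure is not described in the source text.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    n_pix = h * w
    n_scramble = int(round(proportion * n_pix))
    # Flatten spatial dimensions so each row is one pixel's value(s).
    flat = image.reshape(n_pix, -1).copy()
    # Choose which pixel positions to scramble, then permute their values.
    idx = rng.choice(n_pix, size=n_scramble, replace=False)
    flat[idx] = flat[rng.permutation(idx)]
    return flat.reshape(image.shape)
```

Because the scrambled pixels merely trade places, the image's overall luminance and color histogram are preserved; only spatial structure is degraded, which makes the manipulation a whole-object (configural) noise source rather than a featural one.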
These results are inconsistent with the hypothesis that an integrated visual memory system traffics in representations with a shared constraint on resolution (Brady et al., 2013). More generally, they suggest that VLTM and VWM may traffic in different kinds of representations in terms of both content and format. Our noise manipulation demonstrates an operational difference in whole-object resolution, but the causes of poor effective resolution remain an open question. VLTM and VWM may utilize different representations because they are designed to solve different kinds of problems. VLTM in particular must contend with greater variability in object appearance: the classic problem of object recognition. Poor resolution in this system may thus be a consequence of representational formats that support tolerant, invariant recognition despite variable viewing conditions (for further discussion see Schurgin & Flombaum, 2015).