Boundaries Extend and Contract in Scene Memory Depending on Image Properties

Wilma A Bainbridge, Chris I Baker

Abstract

Boundary extension, a memory distortion in which observers consistently recall a scene with visual information beyond its boundaries, is widely accepted across the psychological sciences as a phenomenon revealing fundamental insight into memory representations [1-3], robust across paradigms [1, 4] and age groups [5-7]. This phenomenon has been taken to suggest that the mental representation of a scene consists of an intermingling of sensory information and a schema that extrapolates the views of a presented scene [8], and it has been used to provide evidence for the role of the neocortex [9] and hippocampus [10, 11] in the schematization of scenes during memory. However, the study of boundary extension has typically focused on object-oriented images that are not representative of our visuospatial world. Here, using a broad set of 1,000 images tested on 2,000 participants in a rapid recognition task, we discover "boundary contraction" as an equally robust phenomenon. Further, image composition largely drives whether extension or contraction is observed: although object-oriented images cause more boundary extension, scene-oriented images cause more boundary contraction. Finally, these effects also occur during drawing tasks, including a task with minimal memory load, in which participants copy an image while viewing it. Collectively, these results show that boundary extension is not a universal phenomenon and put into question the assumption that scene memory automatically combines visual information with additional context derived from internal schema. Instead, our memory for a scene may be largely driven by its visual composition, with a tendency to extend or contract the boundaries equally likely.

Trial registration: ClinicalTrials.gov NCT00001360.

Keywords: boundary extension; boundary transformations; drawings; memory; scenes.

Conflict of interest statement

Declaration of Interests The authors declare no competing interests.

Published by Elsevier Ltd.

Figures

Figure 1. An Example of Boundary Extension and the Experimental Paradigms.
(A) Three separate participants drew the same image of a house from memory (from [17]). All three participants extended the boundaries around the house, drawing empty space above it and to the right and left, even though the house in the original image is truncated by the boundaries of the image. (B) Depictions of the methods of the experiments described in the current study (see Methods). Data from three experiments from [17] are discussed: 1) the Memory Drawing Experiment, in which participants drew scene images from memory, 2) the Copying Drawing Experiment, in which participants copied images while viewing them, and 3) the Boundary Transformation Judgment Experiment, where separate online workers judged the level of boundary transformation for the drawings. In the current study, three experiments were conducted using the RSVP Recognition paradigm: 1) an experiment with the 60-Image Set, 2) an experiment with the 1000-Image Set, with 2 response options, and 3) a second experiment with the 1000-Image Set, with 3 response options. See Figure S1 for performance comparison across the Memory Drawing Experiment and RSVP Recognition paradigm.
Figure 2. Example Images and Boundary Transformation Distribution.
(A) Examples from the 1,000 images and their labels used in the current study, originating from the object-oriented GOI Database and the scene-oriented SUN Database. The object-oriented images tend to focus on the labeled object, but sometimes contain other objects or perspectives. In contrast, the scene-oriented images tend to be broad perspectives of a scene, but sometimes contain central objects. (B) Histograms of average boundary transformation rating for each image, averaged across two experiments (the 2-option and 3-option RSVP experiments). While object-oriented images, similar to those in previous boundary extension experiments, show a high rate of boundary extension (93.2%), scene-oriented images show nearly equal rates of boundary extension and contraction (49.0% vs. 49.8%, respectively). Histograms separated by experiment are shown in Figure S2. (C) Results of consistency analyses on the boundary transformation scores across all images. The left shows a scatterplot of boundary transformation scores across the two 1000-Image Set experiments (2-option and 3-option), with the GOI images in green and the SUN images in purple. The boundary transformation scores for the two experiments are significantly correlated. The right shows results of a split-half consistency analysis on boundary transformation scores across all images and both experiments. The blue line indicates average boundary transformation scores determined by a random half of the participants across 1,000 iterations, while the orange line indicates the average boundary scores from the other half of participants, sorted in the same order. The grey line indicates the other half of participants sorted randomly. Group 1 and Group 2 are highly similar and significantly correlated, demonstrating that participants are highly consistent in the boundary transformation ratings they make for a given image. Split-half consistency analyses separated by experiment are shown in Figure S2.
Figure 3. The Influence of Image Composition on Boundary Transformation.
(A) The images from the 1,000-Image Set showing the highest boundary extension and contraction, split by database. Across databases, the images causing the most extension contain few objects at a very close subjective distance from the observer, while those causing the most contraction tend to be wide views of scenes. (B) The location of objects within the images that show boundary extension and contraction. Each pixel is colored by the proportion of images with an object at that pixel. Extending images tend to have a centrally located object, while contracting images tend to have objects spread along the lower visual field. (C) Scatterplots showing the relationship between average boundary transformation rating and the number of objects in the image, subjective ratings of distance to the main object (1 = close to 5 = far), average object area per image (total number of pixels), and average object distance from the image center (pixels). Images that elicit more boundary extension have fewer, larger, centrally located, subjectively close objects, while images that elicit more boundary contraction have more numerous, smaller, dispersed, distant objects. These results show that the direction of boundary transformation is highly related to image composition, and that more traditional scenes cause more boundary contraction.
Figure 4. Boundary Transformation Scores Replicate Across Memory and Image Copying Paradigms.
(Left) A scatterplot comparing boundary transformation scores in the RSVP recognition task and a copying drawing task where participants copied an image while viewing it. Boundary transformation scores correlate significantly between tasks, with many images showing similar boundary transformations in both tasks. (Right) Example drawings exhibiting boundary extension and contraction (circled in the scatterplot) by participants instructed to copy a photograph while viewing it.

Source: PubMed
