Drawings of real-world scenes during free recall reveal detailed object and spatial information in memory

Wilma A Bainbridge, Elizabeth H Hall, Chris I Baker

Abstract

Understanding the content of memory is essential to teasing apart its underlying mechanisms. While recognition tests have commonly been used to probe memory, it is difficult to establish what specific content is driving performance. Here, we instead focus on free recall of real-world scenes, and quantify the content of memory using a drawing task. Participants studied 30 scenes and, after a distractor task, drew as many images in as much detail as possible from memory. The resulting memory-based drawings were scored by thousands of online observers, revealing numerous objects, few memory intrusions, and precise spatial information. Further, we find that visual saliency and meaning maps can explain aspects of memory performance and observe no relationship between recall and recognition for individual images. Our findings show that not only is it possible to quantify the content of memory during free recall, but those memories contain detailed representations of our visual experiences.

Trial registration: ClinicalTrials.gov NCT00001360.

Conflict of interest statement

The authors declare no competing interests.

Figures

Fig. 1
Example drawings and drawing matching performance. a Example drawings made in the Category Drawing, Delayed Recall, Immediate Recall, and Image Drawing conditions for four exemplars from the 30 image categories (see Supplementary Figure 1 for examples from all 30 categories). Both Delayed Recall and Immediate Recall participants produced complex drawings, including multiple objects, the spatial relationships among objects, and the spatial layout of the scene. The Category Drawings show what information is present in the canonical representation of each category. Given the accurate object and spatial information in their drawings, Delayed Recall and Immediate Recall participants are clearly using information from memory beyond just an image’s category name. b The average proportion of correct AMT worker matches for each drawing type (Category Drawing, Delayed Recall, Immediate Recall, Image Drawing). Each dot represents one of the 60 images used in the experiment, and lines connect the same image across the different drawing conditions. Horizontal lines above the graph show all significant pairwise Wilcoxon rank-sum test comparisons at a Bonferroni-corrected level of p < 0.0083. The Category Drawings indicate the average proportion of matches with the two exemplars used in the study, even though there is no “correct” answer for this condition. All scene images in the manuscript are from the SUN Database, a publicly available database of scene images for research.
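The pairwise comparisons described in the caption can be sketched as follows: a Wilcoxon rank-sum test for every pair of drawing conditions, with the threshold Bonferroni-corrected for the number of pairs (0.05/6 ≈ 0.0083, matching the caption). The per-image match proportions below are random stand-ins, not the study's data.

```python
# Pairwise Wilcoxon rank-sum tests with Bonferroni correction.
# The match proportions are illustrative random values, not real data.
from itertools import combinations
from scipy.stats import ranksums
import numpy as np

rng = np.random.default_rng(0)
conditions = {
    "Category Drawing": rng.uniform(0.2, 0.6, 60),   # hypothetical values
    "Delayed Recall":   rng.uniform(0.5, 0.9, 60),
    "Immediate Recall": rng.uniform(0.6, 0.95, 60),
    "Image Drawing":    rng.uniform(0.7, 1.0, 60),
}

pairs = list(combinations(conditions, 2))
alpha = 0.05 / len(pairs)  # 6 pairs -> ~0.0083, as in the caption
for a, b in pairs:
    stat, p = ranksums(conditions[a], conditions[b])
    print(f"{a} vs {b}: p = {p:.4g}, significant = {p < alpha}")
```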
Fig. 2
Comparison of objects drawn across conditions. a Average proportion of objects drawn for each drawing condition (Category Drawing, Delayed Recall, Immediate Recall, Image Drawing). Dots indicate the average proportion for each of the 60 images used in the experiment, with lines connecting the same image across conditions. Horizontal lines above the graph indicate significant pairwise Wilcoxon rank-sum test comparisons that pass a Bonferroni-corrected significance level of p < 0.0083. b Example heatmaps of which objects were remembered. The “Delayed Recall Map” shows the drawing frequency of each object in the Delayed Recall drawings. Bright red indicates objects remembered by all participants who drew the image, and white indicates objects that were not remembered by anyone (white also indicates the background). The heatmaps on the right indicate the difference between the Delayed Recall heatmap (red) and the corresponding heatmaps for Category Drawing, Immediate Recall, and Image Drawing (blue), where white is a neutral color (background and objects that were drawn with equal frequency in both conditions). There were generally more objects in Image Drawings and Immediate Recall than in the Delayed Recall drawings (e.g., more blue in the “Delayed Recall vs Image Drawing” map), but there were also several objects participants remembered equally well (e.g., the flowers in the living room, the table in the kitchen), or even drew more frequently from memory than when perceiving the image (e.g., the hole in the golf scene, the chef in the middle of the kitchen table). Image Drawings and Immediate Recall also show extremely similar heatmaps, showing that the objects recalled immediately after encoding are much like those drawn at perception.
The “Delayed Recall vs Category Drawing” heatmaps show that Delayed Recall drawings contained several items beyond what would exist in a canonical image from that scene category (e.g., circular rugs in a living room, a table in a kitchen), but there are also some objects that would canonically be drawn but that participants did not successfully recall (e.g., the television in the living room with the fireplace, cupboards in a kitchen)
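A minimal sketch of how such object-frequency heatmaps and difference maps could be built, assuming a labeled object map per image (the labels and recall fractions below are made up): each pixel of an object takes the proportion of participants who drew that object, and a difference map subtracts one condition's map from another's.

```python
# Object-frequency "memory map" and condition-difference map, with toy labels.
import numpy as np

label_map = np.zeros((4, 6), dtype=int)   # 0 = background
label_map[1:3, 1:3] = 1                   # hypothetical object 1 (e.g., a table)
label_map[0:2, 4:6] = 2                   # hypothetical object 2 (e.g., a lamp)

def memory_map(label_map, drawn_fraction):
    """Map each object's drawing frequency onto its pixels (background = 0)."""
    out = np.zeros(label_map.shape, dtype=float)
    for obj_id, frac in drawn_fraction.items():
        out[label_map == obj_id] = frac
    return out

delayed  = memory_map(label_map, {1: 0.8, 2: 0.2})  # toy recall fractions
imaged   = memory_map(label_map, {1: 0.9, 2: 0.9})
diff_map = delayed - imaged  # >0: drawn more from memory than from the image
```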
Fig. 3
Comparison of additional objects drawn beyond those in the images across conditions. a Average number of objects drawn for each drawing type that did not exist in the images (Category Drawing, Delayed Recall, Immediate Recall, Image Drawing). Participants drew few additional objects when recalling images, regardless of delay, and when drawing from the image. For Category Drawing, they often drew objects that did not exist in the images for that label, as expected since drawings were made from the category name alone. Dots represent each of the 60 images used in the experiment, with lines connecting the same image across conditions. Horizontal lines above the graph indicate significant pairwise Wilcoxon rank-sum test comparisons that pass a Bonferroni-corrected threshold of p < 0.0083. b Examples of additionally drawn objects from Delayed Recall, Immediate Recall, and the Image Drawings. Additional objects are circled and labeled below each drawing in orange. Participants drew additional objects in the Delayed Recall drawings, for example, adding a cactus to a desert scene, drawing a window not captured in the image, or adding a dining table to a kitchen. However, participants also drew additional objects when recalling the image immediately after seeing it, adding people to a mountain scene, or replacing a chef sculpture with a vase of flowers in a kitchen scene. Even more surprisingly, participants drew additional, non-existent objects when drawing from the image itself, also adding a cactus to a desert scene or people and cars to scenes that did not have any
Fig. 4
Comparison of object location and size across conditions. a Top—the mean X and Y distance between object centroids of the different drawing conditions (Delayed Recall, Immediate Recall, Image Drawing) and object centroids in the original image. Object centroids were determined from ellipses placed around each drawn object by AMT workers. The y-axes represent the distance as the proportion of the x-direction (or y-direction) pixel distance between centroids and the image pixel width (or height). Bottom—the mean ellipse width and height differences between objects in each drawing condition and the objects in the original images. Y-axis values represent width and height differences as a proportion of image width and height, respectively. Each dot represents one of the 60 images used in the experiments, and lines connect the same image across conditions. Significance in Wilcoxon rank-sum tests between conditions is indicated with horizontal lines at the top (Bonferroni-corrected p < 0.0167). b Example maps of the average ellipse encompassing the most commonly drawn objects in four of the images. Solid ellipses indicate the average object location in the Delayed Recall drawings, whereas dashed ellipses indicate the average object location in the Image Drawings. Participants in both conditions drew objects in the correct locations and at the right sizes; e.g., in the bathroom, putting the mirror in the upper left, the cabinet in the upper middle, the shower on the right, and the sink on the bottom left. This shows that participants drawing from memory had spatially accurate memory representations, drawing objects in the correct places and at the correct sizes for objects they had seen in images 11 min earlier
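The normalization described in the caption can be sketched with simple arithmetic: centroid and size errors are expressed as proportions of image width and height. The ellipse coordinates below are illustrative, and the square 700-px image size is an assumption (the captions reference a 700-px image height).

```python
# Centroid-distance and ellipse-size errors, normalized by image dimensions.
# All object coordinates here are hypothetical, not from the study.
import numpy as np

img_w, img_h = 700, 700  # assumed square stimuli, 700 px tall

# (cx, cy, width, height) of an object's bounding ellipse
obj_in_image   = np.array([350.0, 420.0, 120.0, 80.0])
obj_in_drawing = np.array([330.0, 400.0, 100.0, 90.0])

dx = abs(obj_in_drawing[0] - obj_in_image[0]) / img_w  # X centroid error
dy = abs(obj_in_drawing[1] - obj_in_image[1]) / img_h  # Y centroid error
dw = abs(obj_in_drawing[2] - obj_in_image[2]) / img_w  # width error
dh = abs(obj_in_drawing[3] - obj_in_image[3]) / img_h  # height error
```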
Fig. 5
Comparing image-based metrics with object memory. a A comparison of which objects participants drew during Delayed Recall and the objects predicted by a graph-based visual saliency (GBVS) map and Meaning Maps. The Object Memory Map shows the proportion of participants who drew each object during Delayed Recall; red indicates objects drawn by all participants who recalled a given image, whereas white indicates objects drawn by no one, as well as background regions. The Saliency Map shows the saliency scores calculated for each pixel based on the GBVS algorithm, while the Meaning Map shows the average smoothed meaning scores attributed to circular patches taken from each image. Their corresponding Object Maps show the average saliency and meaning scores across all pixels within a given object region, scaled to a range of 0 to 1. All results reported replicate when Object Maps are instead generated using peak saliency and meaning within an object. b The average maps across all experimental images, averaging (in order) the Object Saliency Maps, Object Meaning Maps, Object Delayed Recall Memory Maps, and Object Immediate Recall Memory Maps. Maps were normalized by the number of objects across images at each pixel to take into account the natural spatial distribution of objects. The average vertical value shows the mean pixel value at each y-coordinate (from top pixel 0 to bottom pixel 700) for the four average maps, scaled to a range of 0 to 1. While the Saliency and Meaning Maps show a central bias, the Delayed and Immediate Recall Maps show a tendency towards the lower part of the image
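Turning a pixelwise saliency (or meaning) map into an "Object Map" as described above can be sketched as: average the map within each labeled object region, then rescale the object scores to [0, 1]. The saliency values and object labels below are made up for illustration.

```python
# Object Map from a pixelwise map: per-object mean, rescaled to [0, 1].
import numpy as np

saliency  = np.random.default_rng(1).random((5, 5))  # stand-in pixel map
label_map = np.zeros((5, 5), dtype=int)              # 0 = background
label_map[0:2, 0:2] = 1                              # hypothetical object 1
label_map[3:5, 2:5] = 2                              # hypothetical object 2

# Mean saliency within each object region (ignoring background)
scores = {obj: saliency[label_map == obj].mean()
          for obj in np.unique(label_map) if obj != 0}

# Rescale object scores to a 0-1 range
lo, hi = min(scores.values()), max(scores.values())
scaled = {obj: (s - lo) / (hi - lo) if hi > lo else 0.0
          for obj, s in scores.items()}

# Paint each object's scaled score back onto its pixels
object_map = np.zeros_like(saliency)
for obj, s in scaled.items():
    object_map[label_map == obj] = s
```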
Fig. 6
Comparing recognition and recall performance for images. a A scatterplot of the 60 images used in the experiment arranged by tied-ranked recall rate (proportion of Delayed Recall participants who successfully drew each image, using a tied rank where tied scores take on the average ranking) versus tied-ranked recognition rate (proportion of participants who recognized the image in the recognition task). Recall and recognition rates showed no significant Spearman correlation (ρ = −0.08, p = 0.541), indicating there are different tendencies in the images one recalls versus those one recognizes. b Example images at the opposite ends of recall and recognition, determined by being beyond 1 SD above or below mean recognition and recall performance. The ranking number shows the tied rank, ranging from highest performance (1) to lowest (60). The points corresponding to these images are colored in panel a
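The recall-vs-recognition comparison above amounts to a Spearman rank correlation (which handles ties via average ranks) between per-image recall and recognition rates. The rates below are random stand-ins, not the study's data.

```python
# Spearman correlation between per-image recall and recognition rates.
# The rates are illustrative random values, not real data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
recall_rate      = rng.random(60)  # proportion of participants who drew each image
recognition_rate = rng.random(60)  # proportion who recognized each image

rho, p = spearmanr(recall_rate, recognition_rate)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```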

References

    1. Landman R, Spekreijse H, Lamme VAF. Large capacity storage of integrated objects before change blindness. Vision Res. 2003;43:149–164. doi: 10.1016/S0042-6989(02)00402-9.
    2. Hollingworth A. Constructing visual representations of natural scenes: the roles of short- and long-term visual memory. J. Exp. Psychol. Hum. Percept. Perform. 2004;30:519–537. doi: 10.1037/0096-1523.30.3.519.
    3. Brady TF, Konkle T, Alvarez GA, Oliva A. Visual long-term memory has a massive storage capacity for object details. Proc. Natl. Acad. Sci. USA. 2008;105:14325–14329. doi: 10.1073/pnas.0803390105.
    4. Cunningham CA, Yassa MA, Egeth HE. Massive memory revisited: limitations on storage capacity for object details in visual long-term memory. Learn. Mem. 2015;22:563–566. doi: 10.1101/lm.039404.115.
    5. Simons DJ, Rensink RA. Change blindness: past, present, and future. Trends Cogn. Sci. 2005;9:16–20. doi: 10.1016/j.tics.2004.11.006.
    6. Standing L, Conezio J, Haber RN. Perception and memory for pictures: single-trial learning of 2500 visual stimuli. Psychon. Sci. 1970;19:73–74. doi: 10.3758/BF03337426.
    7. Ducharme E, Fraisse P. Genetic study of the memorization of words and images. Can. J. Psychol. 1965;19:253–261. doi: 10.1037/h0082907.
    8. Deese J. On the prediction of occurrence of particular verbal intrusions in immediate recall. J. Exp. Psychol. 1959;58:17–22. doi: 10.1037/h0046671.
    9. McBride DM, Dosher BA. A comparison of conscious and automatic memory processes for picture and word stimuli: a process dissociation analysis. Conscious. Cogn. 2002;11:423–460. doi: 10.1016/S1053-8100(02)00007-7.
    10. Intraub, H. & Richardson, M. Wide-angle memories of close-up scenes. J. Exp. Psychol. Learn. Mem. Cogn. 15, 179–187 (1989).
    11. Erdelyi MH, Becker J. Hypermnesia for pictures: incremental memory for pictures but not words in multiple recall trials. Cogn. Psychol. 1974;6:159–171. doi: 10.1016/0010-0285(74)90008-5.
    12. Madigan S. Representational storage in picture memory. Bull. Psychon. Soc. 1974;4:567–568. doi: 10.3758/BF03334293.
    13. Marks DF. Visual imagery differences in the recall of pictures. Br. J. Psychol. 1973;64:17–24. doi: 10.1111/j.2044-8295.1973.tb01322.x.
    14. Shiffrin RM. Visual free recall. Science. 1973;180:980–982. doi: 10.1126/science.180.4089.980.
    15. Tabachnick B, Brotsky SJ. Free recall and complexity of pictorial stimuli. Mem. Cogn. 1976;4:466–470. doi: 10.3758/BF03213205.
    16. Murdock BB. The serial position effect of free recall. J. Exp. Psychol. 1962;64:482–488. doi: 10.1037/h0045106.
    17. Bartlett FC. Remembering: A Study in Experimental and Social Psychology. Cambridge, UK: Cambridge University Press; 1932.
    18. Freeman NH, Janikoun R. Intellectual realism in children’s drawings of a familiar object with distinctive features. Child Dev. 1972;43:1116–1121. doi: 10.2307/1127668.
    19. Axia G, Bremner JG, Deluca P, Andreasen G. Children drawing Europe: the effects of nationality, age, and teaching. Br. J. Dev. Psychol. 1998;16:423–437. doi: 10.1111/j.2044-835X.1998.tb00762.x.
    20. Kosslyn SM, Heldmeyer KH, Locklear EP. Children’s drawings as data about internal representations. J. Exp. Child Psychol. 1977;23:191–211. doi: 10.1016/0022-0965(77)90099-6.
    21. Light P, McEwen F. Drawings as messages: the effect of a communication game upon production of view-specific drawings. Br. J. Dev. Psychol. 1987;5:53–59. doi: 10.1111/j.2044-835X.1987.tb01041.x.
    22. Cohen DJ, Bennett S. Why can’t most people draw what they see? J. Exp. Psychol. Hum. Percept. Perform. 1997;23:609–612. doi: 10.1037/0096-1523.23.3.609.
    23. Perdreau F, Cavanagh P. Drawing experts have better visual memory while drawing. J. Vis. 2015;15:5. doi: 10.1167/15.5.5.
    24. Chamberlain R, Wagemans J. The genesis of errors in drawings. Neurosci. Biobehav. Rev. 2016;65:195–207. doi: 10.1016/j.neubiorev.2016.04.002.
    25. Fan, J. E., Yamins, D. L. K. & Turk-Browne, N. B. Common object representations for visual production and recognition. Cogn. Sci. 42, 2670–2698 (2018).
    26. Eitz M, Hays J, Alexa M. How do humans sketch objects? ACM Trans. Graph. 2012;31:44–51.
    27. Rey A. L’examen psychologique dans les cas d’encéphalopathie traumatique. Arch. Psychol. 1941;28:286–340.
    28. Osterrieth PA. Le test de copie d’une figure complexe. Arch. Psychol. 1944;30:206–356.
    29. Corkin S. What’s new with the amnesic patient H.M.? Nat. Rev. Neurosci. 2002;3:153–160. doi: 10.1038/nrn726.
    30. Agrell B, Dehlin O. The clock-drawing test. Age Ageing. 1998;27:399–403. doi: 10.1093/ageing/27.3.399.
    31. Draschkow D, Wolfe JM, Võ MLH. Seek and you shall remember: scene semantics interact with visual search to build better memories. J. Vis. 2014;14:10. doi: 10.1167/14.8.10.
    32. Intraub H, Gottesman CV, Willey EV, Zuk IJ. Boundary extension for briefly glimpsed photographs: do common perceptual processes result in unexpected memory distortions? J. Mem. Lang. 1996;35:118–134. doi: 10.1006/jmla.1996.0007.
    33. Jacoby LL. A process dissociation framework: separating automatic from intentional uses of memory. J. Mem. Lang. 1991;30:513–541. doi: 10.1016/0749-596X(91)90025-F.
    34. Holdstock JS, et al. Under what conditions is recognition spared relative to recall after selective hippocampal damage in humans? Hippocampus. 2002;12:341–351. doi: 10.1002/hipo.10011.
    35. Staresina BP, Davachi L. Differential encoding mechanisms for subsequent associative recognition and free recall. J. Neurosci. 2006;26:9162–9172. doi: 10.1523/JNEUROSCI.2877-06.2006.
    36. Barbeau EJ, Pariente J, Felician O, Puel M. Visual recognition memory: a double anato-functional dissociation. Hippocampus. 2011;21:929–934.
    37. Isola, P., Xiao, J., Torralba, A. & Oliva, A. What makes an image memorable? IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. 10.1109/CVPR.2011.5995721 (2011).
    38. Bylinskii Z, Isola P, Bainbridge C, Torralba A, Oliva A. Intrinsic and extrinsic effects on image memorability. Vision Res. 2015;116:165–178. doi: 10.1016/j.visres.2015.03.005.
    39. Russell, B., Torralba, A., Murphy, K. & Freeman, W. T. LabelMe: a database and web-based tool for image annotation. Int. J. Comput. Vis. 77, 157–173 (2008).
    40. Harel, J., Koch, C. & Perona, P. Graph-based visual saliency. Adv. Neural Info. Process. Syst. 19, 545–552 (2007).
    41. Henderson JM, Hayes TR. Meaning-based guidance of attention in scenes as revealed by meaning maps. Nat. Hum. Behav. 2017;1:743. doi: 10.1038/s41562-017-0208-0.
    42. Henderson JM, Hayes TR. Meaning guides attention in real-world scene images: evidence from eye movements and meaning maps. J. Vis. 2018;18:1–18. doi: 10.1167/18.6.10.
    43. Bainbridge WA, Isola P, Oliva A. The intrinsic memorability of face images. J. Exp. Psychol. Gen. 2013;142:1323–1334. doi: 10.1037/a0033872.
    44. Gallistel CR. The importance of proving the null. Psychol. Rev. 2009;116:439–453. doi: 10.1037/a0015251.
    45. Rouder JN, Speckman PL, Sun D, Morey RD, Iverson G. Bayesian t tests for accepting and rejecting the null hypothesis. Psychon. Bull. Rev. 2009;16:225–237. doi: 10.3758/PBR.16.2.225.
    46. Levin DT, Simons DJ. Failure to detect changes to attended objects in motion pictures. Psychon. Bull. Rev. 1997;4:501–506. doi: 10.3758/BF03214339.
    47. Loftus EF. Planting misinformation in the human mind: a 30-year investigation of the malleability of memory. Learn. Mem. 2005;12:361–366. doi: 10.1101/lm.94705.
    48. Greene MR. Statistics of high-level scene context. Front. Psychol. 2013;4:777. doi: 10.3389/fpsyg.2013.00777.
    49. Xiao, J., Hays, J., Ehinger, K., Oliva, A. & Torralba, A. SUN Database: large-scale scene recognition from abbey to zoo. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. 10.1109/CVPR.2010.5539970 (2010).
    50. Bainbridge WA, Oliva A. A toolbox and sample object perception data for equalization of natural images. Data Brief. 2015;5:846–851. doi: 10.1016/j.dib.2015.10.030.
    51. Barriuso, A. & Torralba, A. Notes on image annotation. Preprint at (2012).
    52. MATLAB. Natick, Massachusetts, USA: The MathWorks, Inc.; 2016.
    53. Brainard DH. The psychophysics toolbox. Spat. Vis. 1997;10:433–436. doi: 10.1163/156856897X00357.
    54. Kleiner, M., Brainard, D. & Pelli, D. What’s new in Psychtoolbox-3? Proc. ECVP 36, 1–10 (2007).
    55. Torralbo, A. et al. Good exemplars of natural scene categories elicit clearer patterns than bad exemplars but not greater BOLD activity. PLoS ONE 8, e58594 (2013).

Source: PubMed
