Distinct Representational Structure and Localization for Visual Encoding and Recall during Visual Imagery

Wilma A Bainbridge, Elizabeth H Hall, Chris I Baker

Abstract

During memory recall and visual imagery, reinstatement is thought to occur as an echoing of the neural patterns during encoding. However, the precise information contained in these recall traces remains relatively unknown: previous work has primarily investigated either broad distinctions or specific images, rarely bridging these levels of information. Using ultra-high-field (7T) functional magnetic resonance imaging with an item-based visual recall task, we conducted an in-depth comparison of encoding and recall along a spectrum of granularity, from coarse (scenes, objects) to mid (e.g., natural, manmade scenes) to fine (e.g., living room, cupcake) levels. In the scanner, participants viewed a trial-unique item and, after a distractor task, visually imagined the initial item. During encoding, we observed decodable information at all levels of granularity in category-selective visual cortex. In contrast, information during recall was primarily at the coarse level, with fine-level information in some areas and no evidence of mid-level information. A closer look revealed segregation between the voxels showing the strongest effects during encoding and those showing the strongest effects during recall, and peaks of encoding-recall similarity extended anterior to category-selective cortex. Collectively, these results suggest that visual recall is not merely a reactivation of encoding patterns: despite some overlap, it displays a representational structure and localization distinct from encoding.

Trial registration: ClinicalTrials.gov NCT00001360.

Keywords: 7T fMRI; encoding–recall similarity; objects; representational similarity analyses; scenes.

Published by Oxford University Press 2020.

Figures

Figure 1
Experimental stimuli and task. (a) Nested structure of stimuli and example images. A total of 192 trial-unique images were encoded and recalled by participants, arranged in a nested structure based on the “coarse” level (object/scene), “mid” level (e.g., open/closed or natural/manmade scene), and “fine” level (e.g., mountains/lake) of stimulus organization. Each fine-level category contained eight different exemplar images (e.g., eight different lake photographs). (b) The timing of each trial. Participants studied an image for 6 s, performed a distractor task requiring detection of an intact image amongst scrambled images for 4 s, and then, after a randomized jitter of 1–4 s, recalled the original image through visual imagery for 6 s. Finally, they indicated the vividness of their memory with a button press.
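To make the trial structure concrete, here is a minimal Python sketch that generates a schedule matching the caption. The event names and the 2 s rating duration are our assumptions for illustration, not values given in the paper.

import random

def build_trial(rng):
    """Return (event, duration_s) pairs for one trial."""
    jitter = rng.uniform(1.0, 4.0)  # randomized 1-4 s jitter from the caption
    return [
        ("encode_image", 6.0),       # study a trial-unique image
        ("distractor", 4.0),         # detect an intact image amongst scrambled ones
        ("jitter_fixation", jitter),
        ("recall_imagery", 6.0),     # visually imagine the studied image
        ("vividness_rating", 2.0),   # assumed duration; not specified in the caption
    ]

rng = random.Random(0)
schedule = [build_trial(rng) for _ in range(192)]  # 192 trial-unique images
print(schedule[0])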
Figure 2
Main regions of interest (ROIs). The current study focused on a set of visual- and memory-related ROIs. Visual regions consisted of early visual cortex (EVC), object-selective regions of the lateral occipital (LO) and the posterior fusiform (pFs), and scene-selective regions of the parahippocampal place area (PPA), medial place area (MPA), and occipital place area (OPA). Visual regions were individually localized using functional localizers in each participant; shown here are probabilistic ROIs of voxels shared by at least 12% of participants. Memory-related regions consisted of the hippocampus divided into anterior (head and body) and posterior (tail) subregions, as well as the perirhinal cortex (PRC, not shown) and parahippocampal cortex (PHC, not shown). These ROIs were segmented automatically using anatomical landmarks.
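As a rough illustration of how a probabilistic ROI map like those shown here could be built, the Python sketch below stacks binary participant masks and keeps voxels present in at least a given fraction of participants. Only the 12% threshold comes from the caption; the array shapes and names are illustrative.

import numpy as np

def probabilistic_roi(masks, min_fraction=0.12):
    """masks: (n_participants, x, y, z) binary arrays. Returns a binary map
    of voxels shared by at least min_fraction of participants."""
    share = masks.mean(axis=0)   # per-voxel fraction of participants
    return share >= min_fraction

rng = np.random.default_rng(0)
masks = rng.random((22, 16, 16, 16)) > 0.9  # 22 simulated participant masks
roi = probabilistic_roi(masks)
print(roi.sum(), "voxels shared by at least 12% of participants")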
Figure 3
Calculating information discriminability from representational similarity matrices. (Left) Depictions of the cells of the representational similarity matrices (RSMs) used to calculate discrimination indices for key regions of interest (ROIs). The RSMs represent pairwise Pearson’s correlations of stimulus groupings calculated from ROI voxel t-values, compared across separate run split halves (odd vs. even runs). These depictions show which matrix cells enter the discriminability calculation for each property: green cells indicate within-condition comparisons, which are contrasted with gray cells indicating across-condition comparisons. For all discriminability calculations except fine-level discrimination of individual categories (object and scene individuation), the diagonal was not included. All operations were conducted on the lower triangle of the matrix, although both sides of the diagonal are shown here for clarity. (Right) Examples of encoding and recall RSMs from the data in the current study, specifically the rank-transformed average RSM for the parahippocampal place area (PPA), lateral occipital (LO), and the hippocampus head and body. Blue cells are more similar, whereas red cells are more dissimilar.
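The following Python sketch shows one way the split-half RSM and discrimination index described above could be computed. It is our reconstruction from the caption, not the authors' code: correlations are Pearson's r between odd- and even-run condition patterns, and the index is the mean within-condition minus the mean across-condition correlation over the lower triangle, with the diagonal included only for individuation.

import numpy as np

def zscore_rows(m):
    """Z-score each condition pattern across voxels."""
    return (m - m.mean(axis=1, keepdims=True)) / m.std(axis=1, keepdims=True)

def split_half_rsm(odd, even):
    """Pairwise Pearson correlations between odd- and even-run patterns.
    odd, even: (n_conditions, n_voxels) arrays of voxel t-values."""
    return zscore_rows(odd) @ zscore_rows(even).T / odd.shape[1]

def discrimination_index(rsm, labels, include_diagonal=False):
    """Mean within-condition minus mean across-condition correlation over
    the lower triangle; the diagonal enters only for fine-level individuation."""
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    k = 0 if include_diagonal else -1
    tri = np.tril(np.ones(rsm.shape, dtype=bool), k=k)
    return rsm[tri & same].mean() - rsm[tri & ~same].mean()

rng = np.random.default_rng(1)
odd, even = rng.standard_normal((2, 8, 100))  # 8 conditions x 100 voxels (toy data)
rsm = split_half_rsm(odd, even)
print(discrimination_index(rsm, labels=[0, 0, 0, 0, 1, 1, 1, 1]))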
Figure 4
Information discriminability in scene- and object-selective regions. Discriminability for visual regions of interest (ROIs) for each stimulus property was calculated from the representational similarity matrices (as in Fig. 3). Bar graphs show the mean discrimination index for different comparisons across ROIs, split by coarse stimulus class, at three levels of discrimination: (1) the coarse level (objects vs. scenes), (2) the mid level (objects: big/small, tools/nontools; scenes: open/closed, natural/manmade), and (3) the fine level (individuation of specific object and scene categories). The y-axis represents the average discrimination index (D), which ranges from −1 to 1. Significance (*) indicates results from a one-tailed t-test versus zero at an FDR-corrected level of q < 0.05 (applied to all 21 comparisons within each ROI). Values that do not pass FDR correction can still be seen in Supplementary material SM6. Pink bars indicate discriminability during encoding trials, blue bars indicate discriminability during recall trials, and hatched purple bars indicate cross-discriminability (i.e., there is a shared representation between encoding and recall). Error bars indicate standard error of the mean.
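The group-level test described in this caption (a one-tailed one-sample t-test of discrimination indices against zero, FDR-corrected across 21 comparisons) can be sketched as follows. The data are simulated, and we assume the standard Benjamini-Hochberg procedure, which the caption does not name explicitly.

import numpy as np
from scipy import stats

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure; True where H0 is rejected."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    thresholds = q * np.arange(1, p.size + 1) / p.size
    passed = p[order] <= thresholds
    n_reject = passed.nonzero()[0].max() + 1 if passed.any() else 0
    reject = np.zeros(p.size, dtype=bool)
    reject[order[:n_reject]] = True
    return reject

rng = np.random.default_rng(2)
d = rng.normal(0.02, 0.05, size=(21, 22))  # 21 comparisons x 22 participants
t, p_two = stats.ttest_1samp(d, 0.0, axis=1)
p_one = np.where(t > 0, p_two / 2.0, 1.0 - p_two / 2.0)  # one-tailed: D > 0
print(fdr_bh(p_one, q=0.05))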
Figure 5
Information discriminability in the hippocampus and medial temporal lobe. Discriminability for hippocampal ROIs, perirhinal cortex (PRC), and parahippocampal cortex (PHC) for each stimulus property was calculated from the RSMs. Bar graphs are displayed in the same manner as Figure 4 and indicate mean discrimination index for comparisons of different levels of stimulus information (coarse, mid, and fine levels for objects and scenes). Pink bars indicate discriminability during encoding trials, blue bars indicate discriminability during recall trials, and hatched purple bars indicate cross-discriminability (i.e., there is a shared representation between encoding and recall). Error bars indicate standard error of the mean. Asterisks (*) indicate significance at an FDR-corrected level of q < 0.05.
Figure 6
Comparing encoding and recall discriminability within the ROIs. (Top) Example ROIs from a single participant, where each point represents a voxel-centered spherical searchlight in that ROI and is plotted by the object/scene discrimination index during encoding (x-axis) versus the object/scene discrimination index during recall (y-axis). The 10% of searchlights showing the strongest recall discriminability are colored in blue, whereas the 10% of searchlights showing the strongest encoding discriminability are colored in red. Searchlights that overlap between the two (those that demonstrate both encoding and recall discrimination) are colored in purple. The patterns in this participant mirror the patterns found across participants: PPA shows low (in this case no) overlap, whereas pFs shows higher overlap. (Bottom) Histograms for these ROIs showing the participant distribution of the percentage of overlap between the top 10% of encoding-discriminating and top 10% of recall-discriminating voxels. The arrow represents the participant’s data plotted above, whereas the dashed red line shows the median overlap percentage across participants.
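The overlap measure in the histograms can be expressed compactly. The sketch below is our construction of the described analysis, not the authors' code: take each searchlight's encoding and recall discrimination indices, mark the top 10% of each, and report their intersection as a percentage of the top-10% set size.

import numpy as np

def top_decile_overlap(encode_d, recall_d):
    """Percentage overlap between the top-10% encoding and recall searchlights."""
    n_top = max(1, int(0.10 * encode_d.size))
    top_enc = set(np.argsort(encode_d)[-n_top:])
    top_rec = set(np.argsort(recall_d)[-n_top:])
    return 100.0 * len(top_enc & top_rec) / n_top

rng = np.random.default_rng(3)
enc = rng.standard_normal(500)              # 500 searchlights in a toy ROI
rec = 0.2 * enc + rng.standard_normal(500)  # weakly related recall indices
print(f"{top_decile_overlap(enc, rec):.1f}% overlap")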
Figure 7
Whole-brain activation of objects and scenes during encoding and recall. Univariate whole-brain t-statistic maps of the contrast of objects (red/yellow) versus scenes (blue/cyan) in encoding (left) and recall (right). Contrasts show group surface-aligned data (N = 22), presented on the SUMA 141-subject standard surface brain (Saad and Reynolds 2012). Outlined ROIs are defined by voxels shared by at least 25% of participants from their individual ROI definitions (using independent functional localizers), with the exception of the pFs and OPA, which were defined by 13% overlap (there were no voxels shared by 25% of participants). The encoding maps are thresholded at FDR-corrected q < 0.05. For the recall maps, no voxels passed FDR correction, so the contrast presented is thresholded at P < 0.01 for visualization purposes. Smaller surface maps show unthresholded results.
Figure 8
Whole-brain discrimination analyses for encoding, recall, and cross-discrimination of information. Whole-brain searchlight analyses here show discrimination of objects versus scenes during encoding (top left), recall (top right), and cross-discrimination (bottom). Brighter yellow indicates higher discrimination indices. Outlined ROIs are defined using independent stimuli in an independent localizer run. All maps are thresholded at P < 0.005 uncorrected, and unthresholded maps are also shown. The cross-discrimination searchlight shows regions that have a shared representation between encoding and recall.
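For readers unfamiliar with the searchlight technique named here, the simplified Python sketch below shows the general idea: score a small spherical neighborhood around every voxel. The score function is a placeholder; in the actual analysis it would be a discrimination index like the one sketched under Figure 3 (for cross-discrimination, correlating encoding patterns with recall patterns across halves). The radius, shapes, and names are illustrative.

import numpy as np
from itertools import product

def sphere_offsets(radius):
    """Integer voxel offsets within a given radius."""
    r = range(-radius, radius + 1)
    return [o for o in product(r, r, r) if sum(c * c for c in o) <= radius ** 2]

def searchlight_map(data, score_fn, radius=2):
    """data: (n_conditions, x, y, z). Applies score_fn to each sphere's
    (n_conditions, n_voxels) pattern matrix and returns a volume of scores."""
    _, X, Y, Z = data.shape
    offs = sphere_offsets(radius)
    out = np.full((X, Y, Z), np.nan)
    for x, y, z in product(range(X), range(Y), range(Z)):
        vox = [(x + dx, y + dy, z + dz) for dx, dy, dz in offs
               if 0 <= x + dx < X and 0 <= y + dy < Y and 0 <= z + dz < Z]
        patterns = np.stack([data[:, i, j, k] for i, j, k in vox], axis=1)
        out[x, y, z] = score_fn(patterns)
    return out

rng = np.random.default_rng(4)
data = rng.standard_normal((8, 10, 10, 10))      # 8 conditions in a toy volume
dmap = searchlight_map(data, lambda p: p.std())  # placeholder score function
print(dmap.shape)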

