Attention modulates spatial priority maps in the human occipital, parietal and frontal cortices

Thomas C Sprague, John T Serences

Abstract

Computational theories propose that attention modulates the topographical landscape of spatial 'priority' maps in regions of the visual cortex so that the location of an important object is associated with higher activation levels. Although studies of single-unit recordings have demonstrated attention-related increases in the gain of neural responses and changes in the size of spatial receptive fields, the net effect of these modulations on the topography of region-level priority maps has not been investigated. Here we used functional magnetic resonance imaging and a multivariate encoding model to reconstruct spatial representations of attended and ignored stimuli using activation patterns across entire visual areas. These reconstructed spatial representations reveal the influence of attention on the amplitude and size of stimulus representations within putative priority maps across the visual hierarchy. Our results suggest that attention increases the amplitude of stimulus representations in these spatial maps, particularly in higher visual areas, but does not substantively change their size.

Figures

Figure 1. The effects of spatial attention on region-level priority maps
Spatial attention might act via one of several mechanisms to change the spatial representation of a stimulus within a putative priority map. (a) The hypothetical spatial representation carried across an entire region in response to an unattended circular stimulus. (b) Under one hypothetical scenario, attention might enhance the spatial representation of the same stimulus by amplifying the gain of the spatial representation (i.e. multiplying the representation by a constant greater than 1). (c) Alternatively, attention might act via a combination of multiple mechanisms such as increasing the gain, decreasing the size, and increasing the baseline activity of the entire region (i.e. adding a constant to the response across all areas of the priority map). (d) Cross-sections of panels a–c. Note that this is not meant as an exhaustive description of different attentional modulations. (e) These different types of attentional modulation can give rise to identical responses when the mean BOLD response is measured across the entire expanse of a priority map. Note that simple Cartesian representations, such as those shown in a–c, may be visualized in early visual areas where retinotopy is well-defined at the spatial resolution of the BOLD response. However, later areas might still encode precise spatial representations of a stimulus even when clear retinotopic organization is not evident, so using alternative methods for reconstructing stimulus representations, such as the approach described in Figure 3, is necessary to evaluate the fidelity of information encoded in putative attentional priority maps.
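The ambiguity illustrated in panel (e) can be made concrete with a toy numerical sketch. The snippet below is purely illustrative and not drawn from the authors' code: a circular 2D Gaussian stands in for a region-level spatial representation, and the amplitude, size, and baseline parameters are hypothetical values chosen only to show that qualitatively different modulations can share the same spatially averaged signal.

```python
import numpy as np

# Toy 2D "priority map" grid spanning +/- 5 deg of visual field (hypothetical units).
x, y = np.meshgrid(np.linspace(-5, 5, 201), np.linspace(-5, 5, 201))

def spatial_rep(amplitude=1.0, size=1.5, baseline=0.0, center=(2.0, 0.0)):
    """Amplitude-scaled circular Gaussian plus a spatially uniform baseline."""
    cx, cy = center
    return baseline + amplitude * np.exp(-((x - cx) ** 2 + (y - cy) ** 2)
                                         / (2 * size ** 2))

unattended = spatial_rep()                      # panel (a): unattended stimulus
gain_only = spatial_rep(amplitude=2.0)          # panel (b): pure multiplicative gain

# Panel (c): combined modulation -- higher gain, smaller size, plus a baseline
# shift chosen here so that its spatial mean matches the gain-only map.
narrow = spatial_rep(amplitude=1.6, size=1.0)
combined = narrow + (gain_only.mean() - narrow.mean())

# Panel (e): averaging over the whole map, as a coarse region-level BOLD
# measure would, cannot distinguish these very different modulation profiles.
print(np.isclose(gain_only.mean(), combined.mean()))  # True
```

This is why the map-level reconstruction approach of Figure 3, rather than a region-averaged response, is needed to separate gain, size, and baseline effects.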
Figure 2. Task design & behavioral results
(a) Each trial consisted of a 500 ms target stimulus (T1), a 3000 ms flickering checkerboard (6 Hz, full contrast, 2.34° diameter), and a 500 ms probe stimulus (T2). T1 & T2 were at the same location on 50% of trials, and slightly offset on the remaining 50% of trials. During the stimulus presentation period, the stimulus dimmed briefly on 50% of trials and the fixation point dimmed on 50% of trials (each independently randomly chosen). Participants maintained fixation throughout the experiment, and eye position measured during scanning did not vary as a function of either task demands or stimulus position (see Supplementary Fig. 1). (b) On each trial, a single checkerboard stimulus appeared at one of 36 overlapping spatial locations with a slight spatial offset between runs (see Online Methods). Each spatial location was sampled once per run. This 6 × 6 grid of stimulus locations probes 6 unique eccentricities, as indicated by the color code of the dots (not present in actual stimulus display). (c) On alternating blocks of trials, participants either detected a dimming of the fixation point (attend fixation), detected a dimming of the checkerboard stimulus (attend stimulus), or they indicated if the spatial position of T1 and T2 matched (spatial working memory). Importantly, all tasks used a physically identical stimulus display – only the task demands varied. Each participant completed between 4 and 6 scanning runs of each of the 3 tasks. (d) For the attend fixation task, performance was better when the stimulus was presented at peripheral locations. In contrast, performance declined with increasing stimulus eccentricity in the attend stimulus and spatial working memory conditions. All error bars reflect ±1 S.E.M.
Figure 3. Encoding model used to reconstruct spatial representations of visual stimuli
Spatial representations of stimuli in each of the 36 possible positions were estimated separately for each ROI. (a) Training the encoding model: a set of linear spatial filters forms the basis set, or “information channels”, that we use to estimate the spatial selectivity of the BOLD responses in each voxel (see Online Methods: Encoding model, Supplementary Figs. 2 & 3). The shape of these filters determines how each information channel should respond on each trial given the position of the stimulus that was presented (thus forming a set of regressors, or predicted channel responses). Then, we constructed a design matrix by concatenating the regressors generated for each trial. This design matrix, in combination with the measured BOLD signal amplitude on each trial, was then used to estimate a weight for each channel in each voxel using a standard general linear model (GLM). (b) Estimating channel responses: given the known spatial selectivity (or weight) profile of each voxel as computed in step a, we then used the pattern of responses across all voxels on each trial in the ‘test’ set to estimate the magnitude of the response in each of the 36 information channels on that trial. This estimate of the channel responses is thus constrained by the multivariate pattern of responses across all voxels on each trial in the test set, and results in a mapping from voxel space (hundreds of dimensions) onto a lower-dimensional channel space (36 dimensions, for mathematical details see Online Methods). Finally, we produced a smooth reconstructed spatial representation on every trial by summing the response of all 36 filters after weighting them by the respective channel responses on each trial. An example of a spatial representation computed from a single trial using data from V1 when the stimulus was presented at the location depicted in (a) is shown in the lower right panel.
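Both steps described in the caption are linear estimation problems, so their logic can be conveyed in a compact numpy sketch. This is not the authors' analysis code: the Gaussian channel shape, array names, and dimensions below are stand-ins (the actual basis functions, preprocessing, and cross-validation scheme are specified in the Online Methods).

```python
import numpy as np

def make_basis(centers, grid, fwhm):
    """Evaluate one smooth spatial filter (information channel) per channel center
    over a grid of visual-field pixel coordinates.  A Gaussian blob is used here
    as a stand-in for the filter shape.  Returns an n_pixels x n_channels array."""
    d = np.linalg.norm(grid[:, None, :] - centers[None, :, :], axis=-1)
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    return np.exp(-d ** 2 / (2 * sigma ** 2))

def train_iem(bold_train, chan_resp_train):
    """Step (a): estimate a weight for every channel in every voxel with an
    ordinary least-squares GLM.  bold_train: n_trials x n_voxels;
    chan_resp_train: n_trials x n_channels (predicted channel responses)."""
    W, *_ = np.linalg.lstsq(chan_resp_train, bold_train, rcond=None)
    return W                                    # n_channels x n_voxels

def invert_iem(bold_test, W):
    """Step (b): map the multivoxel pattern on each held-out trial back into
    channel space by inverting the estimated weights."""
    C_test, *_ = np.linalg.lstsq(W.T, bold_test.T, rcond=None)
    return C_test.T                             # n_trials x n_channels

def reconstruct(chan_resp, basis):
    """Weight each spatial filter by its estimated channel response and sum,
    yielding a smooth reconstruction of the visual field on every trial."""
    return chan_resp @ basis.T                  # n_trials x n_pixels
```

In the experiment the basis comprised 36 channels tiling the stimulated portion of the visual field; the GLM was fit on training runs and the inversion applied to independent test runs, so each reconstruction reflects held-out data.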
Figure 4. Task demands modulate spatial representations
(a) Reconstructed spatial representations of each of 36 flickering checkerboard stimuli presented in a 6 × 6 grid. All 36 stimulus locations are shown, with each location’s representation averaged across participants (n = 8) using data from bilateral V1 during attend stimulus runs. One participant was not included in this analysis (AG3, see Supplementary Fig. 4). Each small image represents the reconstructed spatial representation of the entire visual field, and the position of the image in the panel corresponds to the location of the presented stimulus. (b) A subset of representations (corresponding to the upper left quadrant of the visual field, dashed box in a) for each ROI and each task condition. Results are similar for other quadrants (not shown, although see Fig. 5 for aggregate quantification of all reconstructions). All reconstructions in a and b are shown on the same color scale.
Figure 5. Fit parameters to reconstructed spatial representations, averaged across like eccentricities
For each participant, we fit a smooth 2D surface (see Online Methods: Curve fitting) to the average reconstructed stimulus representation in all 36 locations, separately for each task condition and ROI. We allowed the amplitude, baseline, size, and center ({x,y} coordinate) of the fit basis function to vary freely during fitting. Fit parameters were then averaged within each participant across like eccentricities, and then averaged across participants. The size of the best fitting surface varied systematically with stimulus eccentricity and ROI, but did not vary as a function of task condition. In contrast, the amplitude of the best fitting surface increased with attention in hV4, hMT+ and sPCS (with a marginal effect in IPS, see text). *, †, × indicate a main effect of task condition, a main effect of eccentricity, and an interaction between task and eccentricity, respectively, at the p < 0.05 level, corrected for multiple comparisons (see Online Methods: Statistical Procedures). Grey symbols indicate trends at the p < 0.025 level, uncorrected for multiple comparisons. Error bars reflect within-participant S.E.M.
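A hedged sketch of this surface-fitting step is given below, assuming a circular Gaussian-plus-baseline surface and scipy's curve_fit; the actual basis function, coregistration, and fitting constraints are those described in the Online Methods, and all names here are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def surface(coords, amplitude, size, baseline, x0, y0):
    """Circular Gaussian with free amplitude, size, baseline, and center,
    used here as a stand-in for the fitted basis function."""
    x, y = coords
    return baseline + amplitude * np.exp(-((x - x0) ** 2 + (y - y0) ** 2)
                                         / (2 * size ** 2))

def fit_reconstruction(recon, xgrid, ygrid):
    """Fit the surface to one averaged reconstructed representation (2D array)
    and return (amplitude, size, baseline, x0, y0) of the best-fitting surface."""
    coords = np.vstack([xgrid.ravel(), ygrid.ravel()])
    # Initialize near the peak of the reconstruction to help the optimizer.
    peak = np.unravel_index(np.argmax(recon), recon.shape)
    p0 = [recon.max() - recon.min(), 1.0, recon.min(), xgrid[peak], ygrid[peak]]
    popt, _ = curve_fit(surface, coords, recon.ravel(), p0=p0)
    amplitude, size, baseline, x0, y0 = popt
    return amplitude, size, baseline, x0, y0
```

Fitting each condition and ROI separately in this way yields the amplitude, size, and baseline parameters that are then averaged across like eccentricities and compared across tasks.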
Figure 6. Results are consistent when task difficulty is matched
(a) Four participants were re-scanned while carefully matching task difficulty across all three experimental conditions. As in Figure 2d, performance is better on the attend fixation task when the checkerboard is presented in the periphery, and performance on the attend stimulus and spatial working memory tasks is better when the stimulus is presented near the fovea. (b) A subset of illustrative reconstructed stimulus representations from V1, hV4, hMT+, IPS 0/1, averaged across like eccentricities (correct trials only, number of averaged trials indicated by inset). See Supplementary Figure 7 for details on IPS subregion identification.
Figure 7. Fit parameters to spatial representations after controlling for task difficulty
As in Figure 5, a surface was fit to the averaged, coregistered spatial representations for each participant. However, in this case task difficulty was carefully matched between conditions, and representations were based solely on trials in which the participant made a correct behavioral response (Fig. 6b). Results are similar to those reported in Figure 5: attention increases the fit amplitude of spatial representations in hV4 but does not decrease their size. In hMT+, attention also acted in a non-localized manner to increase the baseline parameter. Statistics as in Figure 5. Error bars reflect within-participant S.E.M.

