The representational dynamics of task and object processing in humans

Martin N Hebart, Brett B Bankson, Assaf Harel, Chris I Baker, Radoslaw M Cichy

Abstract

Despite the importance of an observer's goals in determining how a visual object is categorized, surprisingly little is known about how humans process the task context in which objects occur and how it may interact with the processing of objects. Using magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI) and multivariate techniques, we studied the spatial and temporal dynamics of task and object processing. Our results reveal a sequence of separate but overlapping task-related processes spread across frontoparietal and occipitotemporal cortex. Task exhibited late effects on object processing by selectively enhancing task-relevant object features, with limited impact on the overall pattern of object representations. Combining MEG and fMRI data, we reveal a parallel rise in task-related signals throughout the cerebral cortex, with an increasing dominance of task over object representations from early to higher visual areas. Collectively, our results reveal the complex dynamics underlying task and object representations throughout human cortex.

Trial registration: ClinicalTrials.gov NCT00001360.

Keywords: MEG; MEG-fMRI fusion; fMRI; human; multivariate analysis; neuroscience; object processing; task context.

Conflict of interest statement

MH, BB, AH, CB, RC: No competing interests declared.

Figures

Figure 1. Experimental paradigm.
On each trial (procedure depicted in Panel C), participants were presented with a stimulus from one of eight different object classes (Panel B) embedded in one of four task contexts (Panel A, top) indicated at the beginning of each trial. Participants carried out a task that targeted either low-level features of the object (perceptual tasks) or its high-level, semantic content (conceptual tasks). After a short delay, a response-mapping screen presented the possible response alternatives (Panel A, bottom) in random order to the left or right of fixation, decoupling motor responses from the correct response.
Figure 2. Schematic for multivariate analyses of MEG data.
All multivariate analyses were carried out in a time-resolved manner on principal components (PCs) based on MEG sensor patterns (see Materials and methods for transformation of sensor patterns to PCs). (A) Time-resolved multivariate decoding was conducted using pairwise SVM classification at each time point, classifying all pairs of tasks or categories, and averaging classification accuracies within a given decoding analysis (e.g. decoding of task or category). (B) For model-based MEG-fMRI fusion, 32 × 32 representational dissimilarity matrices were constructed using Pearson's r for all combinations of task and category.
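To make the decoding pipeline concrete, here is a minimal sketch in Python, assuming MEG data already reduced to principal components and shaped (trials × components × time points); the function and variable names are our own illustration, not the authors' code.

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def timeresolved_pairwise_decoding(X, y, cv=5):
    """Mean pairwise SVM classification accuracy at every time point.

    X : array (n_trials, n_components, n_times) of MEG principal components
    y : array (n_trials,) of condition labels (tasks or object categories)
    """
    n_times = X.shape[2]
    labels = np.unique(y)
    acc = np.zeros(n_times)
    for t in range(n_times):
        pair_accs = []
        for a, b in combinations(labels, 2):  # all pairs of conditions
            mask = np.isin(y, [a, b])
            scores = cross_val_score(SVC(kernel='linear'),
                                     X[mask, :, t], y[mask], cv=cv)
            pair_accs.append(scores.mean())
        acc[t] = np.mean(pair_accs)  # average over condition pairs
    return acc
```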
Figure 3. Time-resolved MEG decoding of task and objects across the trial.
After onset of the task cue (Task Cue Period), task-related accuracy increased rapidly, followed by a decay toward chance, with significant above-chance decoding again ~200 ms prior to object onset. After onset of the object stimulus (Object Stimulus Period), object-related accuracy increased rapidly, decaying back to chance with the onset of the response-mapping screen. This was paralleled by a gradual increase in task-related accuracy, starting 242 ms and peaking 638 ms after object onset and remaining high until onset of the response-mapping screen. Error bars reflect SEM across participants for each time point separately. Significance is indicated by colored lines above the accuracy plots (non-parametric cluster correction at p<0.05). Results after onset of the response-mapping screen were not included in the statistical evaluation (see Materials and methods and Results), but are shown for completeness.
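For the significance testing described above, a non-parametric cluster-corrected sign permutation test can be sketched as follows; the cluster-forming threshold and cluster-mass statistic used here are illustrative assumptions, not necessarily the authors' exact implementation.

```python
import numpy as np
from scipy import ndimage
from scipy.stats import t as tdist

def cluster_sign_permutation(acc, chance=0.5, n_perm=1000,
                             cluster_alpha=0.05, alpha=0.05, seed=0):
    """acc : (n_subjects, n_times) decoding accuracies.
    Returns a boolean mask of time points in significant clusters."""
    rng = np.random.default_rng(seed)
    d = acc - chance
    n_sub, n_times = d.shape
    thresh = tdist.ppf(1 - cluster_alpha, n_sub - 1)  # cluster-forming threshold

    def clusters(data):
        # one-sample t statistic against zero at every time point
        t = data.mean(0) / (data.std(0, ddof=1) / np.sqrt(n_sub))
        labels, n = ndimage.label(t > thresh)  # contiguous supra-threshold runs
        return labels, [t[labels == i].sum() for i in range(1, n + 1)]

    labels, obs = clusters(d)
    # null distribution of the maximum cluster mass under random sign flips
    null_max = np.empty(n_perm)
    for p in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=(n_sub, 1))
        _, masses = clusters(d * flips)
        null_max[p] = max(masses, default=0.0)
    sig = np.zeros(n_times, dtype=bool)
    for i, mass in enumerate(obs, start=1):
        if (null_max >= mass).mean() < alpha:  # cluster-level p value
            sig |= labels == i
    return sig
```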
Figure 4. Results of temporal generalization analysis of task.
(A) Temporal cross-classification matrix. The y-axis reflects the classifier training time relative to task cue onset, the x-axis the classifier generalization time, and the color codes the cross-classification accuracy for each combination of training and generalization time. The outline reflects significant clusters (p<0.05, cluster-corrected sign permutation test). Results after the onset of the response-mapping screen are not included in the statistical evaluation but are shown for completeness (see Results). (B) Panels schematically indicating three patterns in the temporal generalization results. First, there was a block structure (Within-Period Cross-Decoding) separately spanning the Task Cue Period and the Object Stimulus Period, indicating largely different representations during the different periods of the task (left panel). At the same time, there were two separate patterns of temporal generalization in the off-diagonals (Between-Period I and Between-Period II Cross-Decoding, illustrated in the middle and right panels, respectively), indicating a shared representational format between these time periods.
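The temporal generalization analysis itself reduces to training a classifier at one time point and testing it at all others (King and Dehaene, 2014). A minimal sketch, again with assumed data shapes and names:

```python
import numpy as np
from sklearn.svm import SVC

def temporal_generalization(X_train, y_train, X_test, y_test):
    """X_* : (n_trials, n_components, n_times).
    Returns an (n_times, n_times) matrix of cross-classification accuracy,
    rows indexing training time and columns indexing generalization time."""
    n_times = X_train.shape[2]
    acc = np.zeros((n_times, n_times))
    for t_train in range(n_times):
        clf = SVC(kernel='linear').fit(X_train[:, :, t_train], y_train)
        for t_test in range(n_times):
            acc[t_train, t_test] = clf.score(X_test[:, :, t_test], y_test)
    return acc
```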
Figure 4—figure supplement 1. Results of temporal generalization analysis of task, separated by task type.
Each map reflects the average of all pairwise classifications of a given task with all other tasks (e.g. color vs. tilt, color vs. content, color vs. size). The y-axis reflects the classifier training time relative to task cue onset, the x-axis the classifier generalization time, and the color codes the cross-classification accuracy for each combination of training and generalization time. The outline reflects significant clusters (p<0.05, cluster-corrected sign permutation test).

Figure 4—figure supplement 2. Results of temporal generalization analysis of objects.
The y-axis reflects the classifier training time relative to task cue onset, the x-axis the classifier generalization time, and the color codes the cross-classification accuracy for each combination of training and generalization time. The outline reflects significant clusters (p<0.05, cluster-corrected sign permutation test).

Figure 5. Comparison of object decoding for different task types (p<0.05, cluster-corrected sign permutation test).
Error bars reflect the standard error of the difference of the means. (A) Object decoding separated by perceptual and conceptual task types. Initially, object decoding was the same for conceptual and perceptual tasks; decoding then temporarily remained at a higher level for conceptual than for perceptual tasks between 542 and 833 ms post-stimulus onset. (B) Object decoding within and across task types. A classifier was trained on data of different objects from one task type and tested either on object-related data from the same task type (within tasks) or on object-related data from the other task type (between tasks). There was no difference between within-task and between-task decoding.
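The within- versus between-task comparison in Panel B can be sketched as follows; the split into separate data sets for the two task types and all names here are illustrative assumptions.

```python
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def within_between_decoding(X_a, y_a, X_b, y_b, t, seed=0):
    """Object decoding within task type A vs. generalizing from A to B,
    at time point t. X_* : (n_trials, n_components, n_times)."""
    Xa_tr, Xa_te, ya_tr, ya_te = train_test_split(
        X_a[:, :, t], y_a, test_size=0.2, random_state=seed, stratify=y_a)
    clf = SVC(kernel='linear').fit(Xa_tr, ya_tr)
    within = clf.score(Xa_te, ya_te)        # same task type, held-out trials
    between = clf.score(X_b[:, :, t], y_b)  # other task type
    return within, between
```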

Figure 6. Model-based MEG-fMRI fusion procedure and results.
(A) Model-based MEG-fMRI fusion in the current formulation reflects the shared variance (commonality) between three dissimilarity matrices: (1) an fMRI RDM generated from voxel patterns of a given ROI, (2) a model RDM reflecting the expected dissimilarity structure for a variable of interest (e.g. task) excluding the influence of another variable of interest (e.g. object), and (3) an MEG RDM from MEG data at a given time point. This analysis was conducted for each MEG time point independently, yielding a time course of commonality coefficients for each ROI. (B–F) Time courses of shared variance and commonality coefficients for five regions of interest (ROIs) derived from model-based MEG-fMRI fusion (p<0.05, cluster-corrected randomization test, corrected for multiple comparisons across ROIs): PPC (Panel B), lPFC (Panel C), EVC (Panel D), LO (Panel E) and pFS (Panel F). Blue plots reflect the variance attributed uniquely to task, while red plots reflect the variance attributed uniquely to object. Grey-shaded areas reflect the total amount of variance shared between the MEG and fMRI RDMs, which additionally represents the upper boundary of the variance that can be explained by the task or object models. Y-axes are on a quadratic scale for better comparability to previous MEG-RSA and MEG-fMRI fusion results reporting correlations (Cichy et al., 2014) and to highlight small but significant commonality coefficients.
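The commonality computation can be made concrete with a short sketch. We assume the standard three-predictor commonality formulation (Seibold and McPhee, 1979): the variance in the MEG RDM shared with the fMRI RDM and the task model but not the object model is C(fMRI, task) = R²(MEG; fMRI, object) + R²(MEG; task, object) − R²(MEG; fMRI, task, object) − R²(MEG; object). A minimal implementation on vectorized, rank-transformed RDMs (an illustrative sketch, not the authors' code):

```python
import numpy as np
from scipy.stats import rankdata

def r_squared(y, *predictors):
    """Squared multiple correlation of y with a set of predictor vectors."""
    X = np.column_stack([np.ones_like(y)] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

def commonality_task(meg, fmri, task_model, object_model):
    """Variance in the MEG RDM shared with the fMRI RDM and the task model
    but not the object model; inputs are vectorized RDMs."""
    # rank-transform dissimilarities, as for Spearman-based RDM comparison
    meg, fmri, task_model, object_model = (
        rankdata(v) for v in (meg, fmri, task_model, object_model))
    return (r_squared(meg, fmri, object_model)
            + r_squared(meg, task_model, object_model)
            - r_squared(meg, fmri, task_model, object_model)
            - r_squared(meg, object_model))
```

Applied to the MEG RDM at each time point in turn, this yields the commonality time courses plotted in Panels B–F.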

Figure 6—figure supplement 1. fMRI representational dissimilarity matrices (RDMs) for the five regions of interest: posterior parietal cortex (PPC), lateral prefrontal cortex (lPFC), early visual cortex (EVC), object-selective lateral occipital cortex (LO), and posterior fusiform sulcus (pFS).
Since RDMs are compared to MEG data using Spearman's r, rank-transformed dissimilarities are plotted.
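For reference, comparing a single fMRI RDM against the MEG RDM at every time point with Spearman's r might look like this (a sketch; array shapes are assumptions based on the 32 × 32 RDMs described above):

```python
import numpy as np
from scipy.stats import spearmanr

def rdm_timecourse_similarity(meg_rdms, fmri_rdm):
    """meg_rdms : (n_times, 32, 32); fmri_rdm : (32, 32).
    Returns the Spearman correlation between the off-diagonal entries of
    the fMRI RDM and each time point's MEG RDM."""
    iu = np.triu_indices(fmri_rdm.shape[0], k=1)  # upper triangle, no diagonal
    return np.array([spearmanr(m[iu], fmri_rdm[iu])[0] for m in meg_rdms])
```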


References

    1. Bankson BB, Hebart MN, Groen IIA, Baker CI. The temporal evolution of conceptual object representations revealed through models of behavior, semantics and deep neural networks. bioRxiv. 2017. doi: 10.1101/223990.
    2. Belongie S, Malik J, Puzicha J. Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2002;24:509–522. doi: 10.1109/34.993558.
    3. Bode S, Haynes JD. Decoding sequential stages of task preparation in the human brain. NeuroImage. 2009;45:606–613. doi: 10.1016/j.neuroimage.2008.11.031.
    4. Bracci S, Daniels N, Op de Beeck H. Task context overrules object- and category-related representational content in the human parietal cortex. Cerebral Cortex. 2017;27:310–321. doi: 10.1093/cercor/bhw419.
    5. Bracci S, Op de Beeck H. Dissociations and associations between shape and category representations in the two visual pathways. Journal of Neuroscience. 2016;36:432–444. doi: 10.1523/JNEUROSCI.2314-15.2016.
    6. Bugatus L, Weiner KS, Grill-Spector K. Task alters category representations in prefrontal but not high-level visual cortex. NeuroImage. 2017;155:437–449. doi: 10.1016/j.neuroimage.2017.03.062.
    7. Carlson T, Tovar DA, Alink A, Kriegeskorte N. Representational dynamics of object vision: the first 1000 ms. Journal of Vision. 2013;13:1–19. doi: 10.1167/13.10.1.
    8. Chang C-C, Lin C-J. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology. 2011;2:27.
    9. Cichy RM, Chen Y, Haynes JD. Encoding the identity and location of objects in human LOC. NeuroImage. 2011;54:2297–2307. doi: 10.1016/j.neuroimage.2010.09.044.
    10. Cichy RM, Pantazis D, Oliva A. Resolving human object recognition in space and time. Nature Neuroscience. 2014;17:455–462. doi: 10.1038/nn.3635.
    11. Cichy RM, Pantazis D, Oliva A. Similarity-based fusion of MEG and fMRI reveals spatio-temporal dynamics in human cortex during visual object recognition. Cerebral Cortex. 2016;26:3563–3579. doi: 10.1093/cercor/bhw135.
    12. Clarke A, Devereux BJ, Randall B, Tyler LK. Predicting the time course of individual objects with MEG. Cerebral Cortex. 2015;25:3602–3612. doi: 10.1093/cercor/bhu203.
    13. DiCarlo JJ, Zoccolan D, Rust NC. How does the brain solve visual object recognition? Neuron. 2012;73:415–434. doi: 10.1016/j.neuron.2012.01.010.
    14. Duncan J. The multiple-demand (MD) system of the primate brain: mental programs for intelligent behaviour. Trends in Cognitive Sciences. 2010;14:172–179. doi: 10.1016/j.tics.2010.01.004.
    15. Emadi N, Esteky H. Behavioral demand modulates object category representation in the inferior temporal cortex. Journal of Neurophysiology. 2014;112:2628–2637. doi: 10.1152/jn.00761.2013.
    16. Erez Y, Duncan J. Discrimination of visual categories based on behavioral relevance in widespread regions of frontoparietal cortex. Journal of Neuroscience. 2015;35:12383–12393. doi: 10.1523/JNEUROSCI.1134-15.2015.
    17. Freedman DJ, Assad JA. Neuronal mechanisms of visual categorization: An abstract view on decision making. Annual Review of Neuroscience. 2016;39:129–147. doi: 10.1146/annurev-neuro-071714-033919.
    18. Freedman DJ, Riesenhuber M, Poggio T, Miller EK. A comparison of primate prefrontal and inferior temporal cortices during visual categorization. Journal of Neuroscience. 2003;23:5235–5246.
    19. Greene MR, Baldassano C, Esteva A, Beck DM, Fei-Fei L. Visual scenes are categorized by function. Journal of Experimental Psychology: General. 2016;145:82–94. doi: 10.1037/xge0000129.
    20. Grill-Spector K, Kushnir T, Edelman S, Avidan G, Itzchak Y, Malach R. Differential processing of objects under various viewing conditions in the human lateral occipital complex. Neuron. 1999;24:187–203. doi: 10.1016/S0896-6273(00)80832-6.
    21. Groen II, Ghebreab S, Lamme VA, Scholte HS. The time course of natural scene perception with reduced attention. Journal of Neurophysiology. 2016;115:931–946. doi: 10.1152/jn.00896.2015.
    22. Grootswagers T, Wardle SG, Carlson TA. Decoding dynamic brain patterns from evoked responses: A tutorial on multivariate pattern analysis applied to time series neuroimaging data. Journal of Cognitive Neuroscience. 2017;29:677–697. doi: 10.1162/jocn_a_01068.
    23. Harel A, Kravitz DJ, Baker CI. Task context impacts visual object processing differentially across the cortex. PNAS. 2014;111:E962–E971. doi: 10.1073/pnas.1312567111.
    24. Hebart MN, Baker CI. Deconstructing multivariate decoding for the study of brain function. NeuroImage. 2017. doi: 10.1016/j.neuroimage.2017.08.005.
    25. Hebart MN, Donner TH, Haynes JD. Human visual and parietal cortex encode visual choices independent of motor plans. NeuroImage. 2012;63:1393–1403. doi: 10.1016/j.neuroimage.2012.08.027.
    26. Hebart MN, Görgen K, Haynes JD. The Decoding Toolbox (TDT): a versatile software package for multivariate analyses of functional imaging data. Frontiers in Neuroinformatics. 2014;8:88. doi: 10.3389/fninf.2014.00088.
    27. Isik L, Meyers EM, Leibo JZ, Poggio T. The dynamics of invariant object recognition in the human visual system. Journal of Neurophysiology. 2014;111:91–102. doi: 10.1152/jn.00394.2013.
    28. Jehee JF, Brady DK, Tong F. Attention improves encoding of task-relevant features in the human visual cortex. Journal of Neuroscience. 2011;31:8210–8219. doi: 10.1523/JNEUROSCI.6153-09.2011.
    29. Jeong SK, Xu Y. Behaviorally relevant abstract object identity representation in the human parietal cortex. The Journal of Neuroscience. 2016;36:1607–1619. doi: 10.1523/JNEUROSCI.1016-15.2016.
    30. Kaiser D, Azzalini DC, Peelen MV. Shape-independent object category responses revealed by MEG and fMRI decoding. Journal of Neurophysiology. 2016;115:2246–2250. doi: 10.1152/jn.01074.2015.
    31. King JR, Dehaene S. Characterizing the dynamics of mental representations: the temporal generalization method. Trends in Cognitive Sciences. 2014;18:203–210. doi: 10.1016/j.tics.2014.01.002.
    32. Kok P, Brouwer GJ, van Gerven MA, de Lange FP. Prior expectations bias sensory representations in visual cortex. Journal of Neuroscience. 2013;33:16275–16284. doi: 10.1523/JNEUROSCI.0742-13.2013.
    33. Kok P, Jehee JF, de Lange FP. Less is more: expectation sharpens representations in the primary visual cortex. Neuron. 2012;75:265–270. doi: 10.1016/j.neuron.2012.04.034.
    34. Konen CS, Kastner S. Two hierarchically organized neural systems for object information in human visual cortex. Nature Neuroscience. 2008;11:224–231. doi: 10.1038/nn2036.
    35. Kravitz DJ, Kriegeskorte N, Baker CI. High-level visual object representations are constrained by position. Cerebral Cortex. 2010;20:2916–2925. doi: 10.1093/cercor/bhq042.
    36. Kriegeskorte N, Mur M, Bandettini P. Representational similarity analysis - connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience. 2008;2:4. doi: 10.3389/neuro.06.004.2008.
    37. Lowe MX, Gallivan JP, Ferber S, Cant JS. Feature diagnosticity and task context shape activity in human scene-selective cortex. NeuroImage. 2016;125:681–692. doi: 10.1016/j.neuroimage.2015.10.089.
    38. McKee JL, Riesenhuber M, Miller EK, Freedman DJ. Task dependence of visual and category representations in prefrontal and inferior temporal cortices. Journal of Neuroscience. 2014;34:16065–16075. doi: 10.1523/JNEUROSCI.1660-14.2014.
    39. Meyers EM, Freedman DJ, Kreiman G, Miller EK, Poggio T. Dynamic population coding of category information in inferior temporal and prefrontal cortex. Journal of Neurophysiology. 2008;100:1407–1419. doi: 10.1152/jn.90248.2008.
    40. Nastase SA, Connolly AC, Oosterhof NN, Halchenko YO, Guntupalli JS, Visconti di Oleggio Castello M, Gors J, Gobbini MI, Haxby JV. Attention selectively reshapes the geometry of distributed semantic representation. Cerebral Cortex. 2017;27:4277–4291. doi: 10.1093/cercor/bhx138.
    41. Nichols TE, Holmes AP. Nonparametric permutation tests for functional neuroimaging: a primer with examples. Human Brain Mapping. 2002;15:1–25. doi: 10.1002/hbm.1058.
    42. Pedhazur EJ. Multiple Regression in Behavioral Research: Explanation and Prediction. Orlando, FL: Harcourt Brace; 1997.
    43. Peelen MV, Fei-Fei L, Kastner S. Neural mechanisms of rapid natural scene categorization in human visual cortex. Nature. 2009;460:94–97. doi: 10.1038/nature08103.
    44. Peelen MV, Kastner S. A neural basis for real-world visual search in human occipitotemporal cortex. PNAS. 2011;108:12125–12130. doi: 10.1073/pnas.1101042108.
    45. Peters B, Bledowski C, Rieder M, Kaiser J. Recurrence of task set-related MEG signal patterns during auditory working memory. Brain Research. 2016;1640:232–242. doi: 10.1016/j.brainres.2015.12.006.
    46. Proklova D, Kaiser D, Peelen M. MEG sensor patterns reflect perceptual but not categorical similarity of animate and inanimate objects. bioRxiv. 2017. doi: 10.1101/238584.
    47. Proklova D, Kaiser D, Peelen MV. Disentangling representations of object shape and object category in human visual cortex: The animate-inanimate distinction. Journal of Cognitive Neuroscience. 2016;28:680–692. doi: 10.1162/jocn_a_00924.
    48. Riesenhuber M, Poggio T. Neural mechanisms of object recognition. Current Opinion in Neurobiology. 2002;12:162–168. doi: 10.1016/S0959-4388(02)00304-5.
    49. Ritchie JB, Bracci S, Op de Beeck H. Avoiding illusory effects in representational similarity analysis: What (not) to do with the diagonal. NeuroImage. 2017;148:197–200. doi: 10.1016/j.neuroimage.2016.12.079.
    50. Ritchie JB, Tovar DA, Carlson TA. Emerging object representations in the visual system predict reaction times for categorization. PLOS Computational Biology. 2015;11:e1004316. doi: 10.1371/journal.pcbi.1004316.
    51. Seibold DR, McPhee RD. Commonality analysis: A method for decomposing explained variance in multiple regression analyses. Human Communication Research. 1979;5:355–365. doi: 10.1111/j.1468-2958.1979.tb00649.x.
    52. Serre T, Oliva A, Poggio T. A feedforward architecture accounts for rapid categorization. PNAS. 2007;104:6424–6429. doi: 10.1073/pnas.0700622104.
    53. Siegel M, Buschman TJ, Miller EK. Cortical information flow during flexible sensorimotor decisions. Science. 2015;348:1352–1355. doi: 10.1126/science.aab0551.
    54. Sigala N, Kusunoki M, Nimmo-Smith I, Gaffan D, Duncan J. Hierarchical coding for sequential task events in the monkey prefrontal cortex. PNAS. 2008;105:11969–11974. doi: 10.1073/pnas.0802569105.
    55. Stoet G, Snyder LH. Single neurons in posterior parietal cortex of monkeys encode cognitive set. Neuron. 2004;42:1003–1012. doi: 10.1016/j.neuron.2004.06.003.
    56. Stokes MG, Kusunoki M, Sigala N, Nili H, Gaffan D, Duncan J. Dynamic coding for cognitive control in prefrontal cortex. Neuron. 2013;78:364–375. doi: 10.1016/j.neuron.2013.01.039.
    57. Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM. Brainstorm: a user-friendly application for MEG/EEG analysis. Computational Intelligence and Neuroscience. 2011;2011:1–13. doi: 10.1155/2011/879716.
    58. van de Nieuwenhuijzen ME, Backus AR, Bahramisharif A, Doeller CF, Jensen O, van Gerven MA. MEG-based decoding of the spatiotemporal dynamics of visual category perception. NeuroImage. 2013;83:1063–1073. doi: 10.1016/j.neuroimage.2013.07.075.
    59. VanRullen R, Thorpe SJ. The time course of visual processing: from early perception to decision-making. Journal of Cognitive Neuroscience. 2001;13:454–461. doi: 10.1162/08989290152001880.
    60. Vaziri-Pashkam M, Xu Y. Goal-directed visual processing differentially impacts human ventral and dorsal visual representations. The Journal of Neuroscience. 2017;37:8767–8782. doi: 10.1523/JNEUROSCI.3392-16.2017.
    61. Wallis JD, Anderson KC, Miller EK. Single neurons in prefrontal cortex encode abstract rules. Nature. 2001;411:953–956. doi: 10.1038/35082081.
    62. Waskom ML, Kumaran D, Gordon AM, Rissman J, Wagner AD. Frontoparietal representations of task context support the flexible control of goal-directed cognition. Journal of Neuroscience. 2014;34:10743–10755. doi: 10.1523/JNEUROSCI.5282-13.2014.
    63. Woolgar A, Thompson R, Bor D, Duncan J. Multi-voxel coding of stimuli, rules, and responses in human frontoparietal cortex. NeuroImage. 2011;56:744–752. doi: 10.1016/j.neuroimage.2010.04.035.
    64. Çukur T, Nishimoto S, Huth AG, Gallant JL. Attention during natural vision warps semantic representation across the human brain. Nature Neuroscience. 2013;16:763–770. doi: 10.1038/nn.3381.

Source: PubMed
