Process reveals structure: How a network is traversed mediates expectations about its architecture

Elisabeth A Karuza, Ari E Kahn, Sharon L Thompson-Schill, Danielle S Bassett

Abstract

Network science has emerged as a powerful tool through which we can study the higher-order architectural properties of the world around us. How human learners exploit this information remains an essential question. Here, we focus on the temporal constraints that govern such a process. Participants viewed a continuous sequence of images generated by three distinct walks on a modular network. Walks varied along two critical dimensions: their predictability and the density with which they sampled from communities of images. Learners exposed to walks that richly sampled from each community exhibited a sharp increase in processing time upon entry into a new community. This effect was eliminated in a highly regular walk that sampled exhaustively from images in short, successive cycles (i.e., that increasingly minimized uncertainty about the nature of upcoming stimuli). These results demonstrate that temporal organization plays an essential role in learners' sensitivity to the network architecture underlying sensory input.

Conflict of interest statement

The authors declare that they have no competing interests.

Figures

Figure 1
Representation of the graph and walk structure underlying visual sequences. The graph consisted of three distinct communities of interconnected nodes (shown in yellow, teal, and purple). Each node in the graph corresponded to a unique fractal image, and edges between nodes corresponded to their possible co-occurrence in a sequence. Sequences were generated by “walking” along the edges of the graph randomly, or according to successive Eulerian and Hamiltonian paths. In the color-coded walk samples shown above, we illustrate that sequences generated by Random walks and Eulerian paths tended to stay within a given community (relative to Hamiltonian paths, which only sparsely sampled from each community).
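The walk-generation procedure described in this caption can be sketched in code. This is a minimal illustration, not the authors' stimulus code: the community size (five nodes), the fully connected within-community topology, the single boundary edge between communities, and the 50-step walk length are all assumptions made for demonstration.

```python
import random

def build_modular_graph(n_communities=3, size=5):
    """Toy modular graph: fully connected communities joined in a ring
    by one boundary edge each (illustrative topology only)."""
    adj = {}
    for c in range(n_communities):
        nodes = range(c * size, (c + 1) * size)
        for u in nodes:
            adj[u] = {v for v in nodes if v != u}
    for c in range(n_communities):
        u = c * size + size - 1                   # last node of community c
        v = ((c + 1) % n_communities) * size      # first node of the next one
        adj[u].add(v)
        adj[v].add(u)
    return adj

def random_walk(adj, start, steps, seed=0):
    """Generate a stimulus sequence by repeatedly moving to a
    uniformly chosen neighbor of the current node."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(steps):
        path.append(rng.choice(sorted(adj[path[-1]])))
    return path

graph = build_modular_graph()
walk = random_walk(graph, start=0, steps=50)
```

Because within-community edges greatly outnumber boundary edges in a graph like this, a random walk tends to dwell inside one community before crossing into the next, which is the dense within-community sampling the caption contrasts with sparse Hamiltonian traversal.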
Figure 2
Boxplots of reaction time increases across experimental conditions (N = 59). Cross-community surprisal effects were calculated by subtracting, for each participant, mean RTs for pre-transition nodes from mean RTs for transition nodes. A value greater than 0 indicates an increase in RT upon entry into a new community during the exposure phase. Note that strong evidence for surprisal is observed only for walk types involving repeated exposure to common connections within the same community (Eulerian and Random). No surprisal effect was observed for participants in the Hamiltonian condition.
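The subtraction described in this caption can be expressed as a short computation; the reaction times below are invented toy values for illustration, not data from the study.

```python
from statistics import mean

def surprisal_effect(rts, communities):
    """Mean RT on community-transition trials minus mean RT on the
    immediately preceding (pre-transition) trials; a positive value
    indicates slowing upon entry into a new community."""
    transition, pre = [], []
    for i in range(1, len(rts)):
        if communities[i] != communities[i - 1]:
            transition.append(rts[i])
            pre.append(rts[i - 1])
    return mean(transition) - mean(pre)

# Invented example: RTs spike on the two trials that enter a new community.
rts  = [400, 410, 405, 480, 415, 500, 420]
comm = [0,   0,   0,   1,   1,   2,   2]
effect = surprisal_effect(rts, comm)   # → 80
```

In the study this per-participant difference score was the unit of analysis, so a distribution of such values centered above zero (as for the Eulerian and Random conditions) is what the boxplots summarize.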


Source: PubMed
