The Tolman-Eichenbaum Machine: Unifying Space and Relational Memory through Generalization in the Hippocampal Formation

James C R Whittington, Timothy H Muller, Shirley Mark, Guifen Chen, Caswell Barry, Neil Burgess, Timothy E J Behrens

Abstract

The hippocampal-entorhinal system is important for spatial and relational memory tasks. We formally link these domains, provide a mechanistic understanding of the hippocampal role in generalization, and offer unifying principles underlying many entorhinal and hippocampal cell types. We propose medial entorhinal cells form a basis describing structural knowledge, and hippocampal cells link this basis with sensory representations. Adopting these principles, we introduce the Tolman-Eichenbaum machine (TEM). After learning, TEM entorhinal cells display diverse properties resembling apparently bespoke spatial responses, such as grid, band, border, and object-vector cells. TEM hippocampal cells include place and landmark cells that remap between environments. Crucially, TEM also aligns with empirically recorded representations in complex non-spatial tasks. TEM also predicts that hippocampal remapping is not random, as previously believed; rather, structural knowledge is preserved across environments. We confirm this structural transfer during remapping in simultaneously recorded place and grid cells.

Keywords: entorhinal cortex; generalization; grid cells; hippocampus; neural networks; non-spatial reasoning; place cells; representation learning.

Conflict of interest statement

Declaration of Interests The authors declare no competing interests.

Copyright © 2020 The Authors. Published by Elsevier Inc. All rights reserved.

Figures

Graphical abstract
Figure 1
Spatial and Relational Inferences Cast as Structural Generalization (A–C) Structured relationships exist in many situations and can often be formalized on a connected graph, e.g., (A) social hierarchies, (B) transitive inference, and (C) spatial reasoning. Often the same relationships generalize across different sets of sensory objects (e.g., left/right in A). This transferable structure allows quick inference, e.g., seeing only the blue relationships allows you to infer the green ones. (D) Our task is predicting the next sensory observation in sequences derived from probabilistic transitions on a graph. Each node has an arbitrary sensory experience, e.g., a banana. An agent transitions on the graph observing only the immediate sensory stimuli and associated action taken, e.g., having seen motorbike → book → table → chair, it should predict the motorbike next if it understands the rules of the graph. (E) If you know the underlying structure of social hierarchies, observing a new node (in red) via a single relationship, e.g., Emily is Bob’s daughter, allows immediate inference about the new node’s (Emily’s) relationship to all other nodes (shown in black/gray). (F) Similarly for spatial graphs observing a new node on the left (solid red line) also tells us whether it is above or below (dashed red lines) other surrounding nodes. (G) Our agent performs this next step prediction task in many worlds sharing the same underlying structure (e.g., 6- or 4-connected graphs), but differing in size and arrangement of sensory stimuli. The aim is to learn the common structure in order to generalize and perform quick inferences. (H) Knowing the structure allows full graph understanding after only visiting all nodes, not all edges. Here, only 18 steps (red line) are required to infer all 42 links. (I) An agent that knows structure (node agent) will reach peak predictive performance after it has visited all nodes, quicker than one that has to see all transitions (edge agent). 
Icons from https://www.flaticon.com. See also Figure S1.
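The node-agent versus edge-agent contrast in (H) and (I) can be illustrated with a toy random walk: on a 4-connected grid, an agent that knows the structure only needs to visit every node before it can predict everywhere, whereas an agent without structural knowledge must traverse every edge. The sketch below is illustrative, not the paper's implementation; grid size, seed, and function names are made up:

```python
import random

def steps_to_cover(n=4, seed=0, mode="nodes"):
    """Random walk on an n x n 4-connected grid; count steps until every
    node (structure-knowing agent) or every undirected edge (edge agent)
    has been visited at least once."""
    rng = random.Random(seed)
    pos = (0, 0)
    seen_nodes = {pos}
    seen_edges = set()
    target_nodes = n * n
    target_edges = 2 * n * (n - 1)      # undirected edges of an n x n grid graph
    steps = 0
    while True:
        x, y = pos
        nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < n and 0 <= y + dy < n]
        nxt = rng.choice(nbrs)
        seen_nodes.add(nxt)
        seen_edges.add(frozenset((pos, nxt)))
        pos = nxt
        steps += 1
        if mode == "nodes" and len(seen_nodes) == target_nodes:
            return steps
        if mode == "edges" and len(seen_edges) == target_edges:
            return steps

# With the same seed the walk is identical, so node coverage always
# completes no later than edge coverage.
node_steps = steps_to_cover(mode="nodes")
edge_steps = steps_to_cover(mode="edges")
```

Covering all edges necessarily touches all nodes, so on any single walk the node agent reaches full predictive coverage first; averaged over seeds the gap grows with graph size.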
Figure S1
Task Schematics for TEM, Related to Figures 1 and 3 (A) Learning to predict the next sensory observation in environments that share the same structure but differ in their sensory observations. TEM only sees the sensory observations and the associated actions taken; it is not told about the underlying structure, which must be learned. (B) Transitive inference graph. When a new node (red) is seen to be one higher, all other (dotted) relations can be inferred, i.e., 3 higher. (C) Example graph for a social hierarchy. (D) Example graph for 2D structure. (E) A complex task embedded in a spatial world. This is a schematic representation of the state space for the task in Sun et al. (2020). Each lap is of length 4 as the sensory objects (A, B, C, D) repeat every 4 nodes. There are 3 laps in total, which defines the true state space, as a reward, r, is given every 3 laps.
Figure 2
The Tolman-Eichenbaum Machine (A) Factorization and conjunction as a principle for generalization. Separating structural codes (the transition rules of the graph) from sensory codes allows generalization over environments sharing the same structure. The conjunctive code represents the current environment in the context of this learned structure. (B and C) The two key elements of TEM: (B) representations for path integration (g) on arbitrary graphs and (C) relational memories (p) that bind abstract locations to sensory observations. (B) TEM must learn structural codes (g) that (1) represent each state differently so that different memories can be stored and retrieved and (2) have the same code on returning to a state (from any direction) so the appropriate memory can be retrieved. (C) Relational memories conjunctively combine the factorized structural (in blue, representing location C) and sensory (in red, representing the person) codes; these memories thus know what was where. The memories are stored in Hebbian weights (M) between the neurons of p. (D) Depiction of TEM at two time points, with each time point described at a different level of detail. Red shows predictions; green shows inference. Time point t shows the network implementation, and t+1 describes each computation in words. Circles depict neurons (blue is g, red is x, blue/red is p); shaded boxes depict computation steps; arrows show learnable weights; looped arrows describe the recurrent attractor. Black lines between neurons in the attractor describe Hebbian weights M. Yellow arrows show errors that are minimized during training. Overall, TEM transitions through latent variables g and stores and retrieves memories p using Hebbian weights M. We note that this is a didactic schematic; for completeness and a faithful interpretation of the Bayesian underpinnings, please see STAR Methods and Figures S2, S3, and S4.
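The "what was where" idea in (A)-(C) can be caricatured with a Hebbian outer-product store: factorized one-hot structural and sensory codes are bound into a weight matrix M, and cueing with a structural code alone retrieves the bound observation. This is a deliberately simplified sketch, not TEM's actual attractor network; the code sizes, variable names, and the heteroassociative readout are all illustrative assumptions:

```python
import numpy as np

n_g, n_x = 5, 4                    # structural / sensory code sizes (toy)

# One-hot toy codes: g indexes an abstract location, x a sensory identity.
G = np.eye(n_g)
X = np.eye(n_x)
where_what = [(0, 2), (1, 0), (2, 3), (3, 1)]   # (location, observation) pairs

# Hebbian storage: M accumulates outer products of sensory and structural
# codes, a stand-in for TEM's conjunctive memories p and weights M.
M = np.zeros((n_x, n_g))
for loc, obs in where_what:
    M += np.outer(X[obs], G[loc])

# Retrieval: cue with the structural code alone and read out what was there.
recalled = [int(np.argmax(M @ G[loc])) for loc, _ in where_what]
```

Because the structural codes generalize across environments, relearning a world only requires rebinding M, which is the factorization-plus-conjunction logic of the figure.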
Figure S2
Full TEM Model, Related to Figure 2 (A) Generative model. (B) Inference model. Circled/boxed variables are stochastic/deterministic. Dashed arrows/boxes are optional as explained in the text. (C) Schematic to show the model flow in the neural network. Depiction of TEM at three time points, with each time point described at a different level of detail. Green/red show inference and generative networks. Time point t−1 shows the overall Bayesian logic, t shows the network implementation, and t+1 describes each computation in words. Circles depict neurons (blue is g, red is x, blue/red is p); shaded boxes depict computation steps; arrows show learnable weights (green and red are weights in the inference and generative networks); looped arrows describe the recurrent attractor. Black lines between neurons in the attractor describe Hebbian weights M. Wa are learnable, action-dependent transition weights. Wrepeat and Wtile are fixed weights that make the dimensions of the structural (blue) and sensory (red) inputs, respectively, to the attractor the same. Yellow arrows show training errors. We note we do not show the temporal filtering of sensory data x in this schematic.
Figure S3
Computations in TEM Generative Model, Related to Figure 2 This shows each computation, as described in Generative architecture, making clear the fixed Wtile and Wrepeat matrices perform appropriate dimension changes, though we note that the matrices may not be the sole computation in each step. Attractor dynamics are described in Retrieval using an attractor network. Red/Blue boxes describe two different ‘streams’.
Figure S4
Computations in TEM Inference Model, Related to Figure 2 This shows each computation, as described in Inference architecture, making clear the fixed Wtile and Wrepeat matrices perform appropriate dimension changes ensuring the entorhinal and sensory input to hippocampus have the same dimension. We note that the matrices may not be the sole computation in each step. Red/Blue boxes describe two different ‘streams’. In the bottom left, we show that multiplying together the representation after the Wtile and Wrepeat operations is equivalent to an outer product.
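The stated equivalence (multiplying the representations after the Wtile and Wrepeat operations equals an outer product) is easy to verify numerically, assuming Wrepeat acts like np.repeat on the structural code and Wtile like np.tile on the sensory code. This is a reading of the schematic, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(1)
g = rng.normal(size=3)          # structural (entorhinal) code
x = rng.normal(size=4)          # sensory code

# Wrepeat stretches g and Wtile tiles x so both live in the same
# n_g * n_x dimensional space before the attractor.
g_rep = np.repeat(g, x.size)    # [g0, g0, g0, g0, g1, ...]
x_til = np.tile(x, g.size)      # [x0, x1, x2, x3, x0, ...]

# Their elementwise product is the flattened outer product of g and x.
conjunctive = g_rep * x_til
assert np.allclose(conjunctive, np.outer(g, x).ravel())
```

Element i*n_x + j of the product is g_i * x_j, i.e., exactly the (i, j) entry of the outer product in row-major order.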
Figure 3
TEM Learns and Generalizes Abstract Relational Knowledge (A–C) Learning to learn: when TEM has only seen a few environments (blue/green) it takes many visits to each node to remember it. This is because it (1) does not yet understand the structure of the graph and (2) has not learned how to use memories. After visiting more environments and learning the common structure (cyan/yellow), TEM correctly predicts a node on the second visit regardless of the edge taken—TEM now understands both the rules of the graph (path integration) and how to store and retrieve memories. (A) Transitive inference, (B) social hierarchies, and (C) 2D graphs. (D–F) On 2D graphs. (D) TEM is able to predict sensory observations when returning to a node for the first time via a new direction—this is only possible with learned structural knowledge. (E) TEM can store long-term memories. (F) TEM’s performance tracks nodes visited, not edges. These results all demonstrate that TEM has learned and generalized abstract structural knowledge. See also Figure S1.
Figure 4
TEM Structural Neurons g Learn to Be Grid Cells that Generalize and TEM Conjunctive Memory Neurons p Learn to Be Place Cells that Remap We use 2D graphs with the number of nodes sampled from {61, 91, 127} or {64, 81, 100, 121} for hexagonal or square environments, respectively. A cell’s rate map is obtained by allowing the agent to explore the environment and then calculating its average firing rate at each point (graph node) in the environment. (A and B) TEM learned structural representations for random walks on 2D graphs. (A) Hexagonal worlds. Left to right: environments 1, 2, autocorrelation, real data (Krupic et al., 2012; Stensola et al., 2012); top to bottom: different cells. TEM learns grid-like cells, of different frequencies (top versus middle), and of different phases (middle versus bottom). (B) Square worlds. Two TEM learned structural cells—left/right; rate map/autocorrelation. (C) Raw unsmoothed rate maps. Left/right: bottom two cells from (A), both cells from (B). (D) TEM also learns band-like cells. Importantly, all TEM structural representations (A)–(D) generalize across environments. (E) Learned memory representations resemble place cells (left/right: environments 1/2; top 2 simulated, bottom 2 real cells) and have different field sizes. These cells remap between environments, i.e., do not generalize. (F) Grid scores of TEM grid-like cells correlate across environments. (G and H) To examine whether relationships between cells are preserved between environments, we correlated the spatial correlation coefficients of pairs of grid or place fields from each environment, using data from TEM or Barry et al. (2012) and Chen et al. (2018). (G) The spatial correlation coefficients of pairs of TEM structural cells and real data grid cells correlate strongly. (H) TEM hippocampal and real data place cells preserved their relationship to a lesser extent.
This suggests that TEM structural cells, along with real grid cells, encode generalizable relationships to a greater extent than TEM hippocampal and real place cells. See also Figure S5.
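The rate-map construction described in the caption (average firing rate at each visited graph node) amounts to occupancy-normalized spike counting. A minimal sketch under that reading; the trajectory and spike counts here are fabricated purely for illustration:

```python
import numpy as np

# Toy rate map on a 2 x 2 grid of nodes:
# rate at a node = total spikes emitted there / number of visits.
positions = [(0, 0), (0, 1), (1, 1), (0, 1), (1, 0)]   # visited nodes per step
spikes    = [2, 0, 4, 2, 0]                            # spike count per step

occ = np.zeros((2, 2))      # occupancy (visit counts)
spk = np.zeros((2, 2))      # accumulated spikes
for (r, c), s in zip(positions, spikes):
    occ[r, c] += 1
    spk[r, c] += s

# Unvisited nodes get NaN rather than a spurious zero rate.
rate_map = np.where(occ > 0, spk / np.maximum(occ, 1), np.nan)
```

Node (0, 1) is visited twice with 0 and 2 spikes, so its average rate is 1.0; this per-node averaging is what the grid and place rate maps in the figure show.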
Figure S5
Further TEM Cell Representations, Related to Figures 4 and 7 (A and B) Raw/smoothed structural cells, g, learned by TEM during diffusive behavior. (C and D) Raw/smoothed TEM-learned entorhinal cells, g, when trained on a square graph environment. (E) Hippocampal cells, p, learned by TEM during diffusive behavior. (F) Random sample of TEM hippocampal cells when trained on the 4-lap task of Sun et al. (2020). (G) Random sample of TEM entorhinal cells when trained on the 4-lap task of Sun et al. (2020).
Figure 5
TEM Learned Representations Reflect Transition Statistics When the agent’s transition statistics mimic different behaviors, TEM learns new representations (left to right: different cells; top to bottom: environments 1, 2, real data). (A) When biased to move toward objects (white dots) TEM learns structural cells with a vector relationship to the objects—object vector cells (Høydal et al., 2019). These cells generalize to all objects. (B) TEM hippocampal cells reflect this behavioral transition change with similar cells, though they do not generalize to all objects—landmark cells (Deshmukh and Knierim, 2013). (C) When biased toward boundaries, TEM learns border cell-like representations (Solstad et al., 2008).
Figure 6
Structural Knowledge Is Preserved over Apparently Random Hippocampal Remapping (A) TEM predicts place cells remap to locations consistent with a grid code, i.e., a place cell co-active with a grid cell will be more likely to remap to locations where that grid cell is also active. (B and C) Data from open-field remapping experiments with simultaneously recorded place and grid cells (Barry et al., 2012; Chen et al., 2018). We compute the grid cell firing rate at the location of the place cell peak for every grid cell-place cell pair in each of the two environments and then correlate this measure across environments (left). We compare this correlation coefficient to those computed equivalently but with randomly permuted place cell peaks (right). This is done for two independent datasets (B) (Barry et al., 2012) and (C) (Chen et al., 2018). The true observed correlation coefficient lies off the null distribution (p
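The analysis in (B) and (C) is a permutation test: correlate the grid-rate-at-place-peak measure across environments, then compare the observed coefficient against a null built by shuffling the place cell peaks. A sketch on synthetic data; the correlated toy measures simply stand in for a preserved grid-place relationship and are not the recorded data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pairs = 40   # toy number of grid cell-place cell pairs

# Toy gridAtPlace measures in two environments, correlated by construction
# to mimic structural knowledge surviving remapping.
env1 = rng.normal(size=n_pairs)
env2 = env1 + 0.3 * rng.normal(size=n_pairs)

observed = np.corrcoef(env1, env2)[0, 1]

# Null distribution: randomly permute the place cell peaks, breaking the
# pairing between environments, and recompute the correlation.
null = np.array([np.corrcoef(env1, rng.permutation(env2))[0, 1]
                 for _ in range(1000)])
p_value = np.mean(null >= observed)
```

When structure transfers, the observed coefficient sits far into the upper tail of the permutation null, which is the effect the figure reports.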

Figure S6
Fitting Ideal Grid Maps and Analysis of Real Data Showing Grid Cells Realign and Place Cells Remap, Related to Figure 6 (A–C) Ideal grid. We fit an idealized grid rate map using the formula from Stemmler et al. (2015) to the original grid cell rate maps to remove any possible confounds and to ensure that we obtain accurate grid cell peaks. (A) An example original grid cell rate map. (B) An idealized rate map fit to that in (A). (C) Accurate finding of grid cell peaks (white crosses) on the idealized grid rate map, which also allows peaks that extend outside the box to be used (red crosses). (D and E) Grid realignment and place cell remapping across environments in dataset 1. (D) Histograms showing the distributions of spatial correlations for place and grid cells both within and across environments. (E) Bar plots showing the mean (± SEM) of these distributions. (F and G) Grid realignment and place cell remapping across environments in dataset 2. (F) and (G) are the same analyses as (D) and (E) but with dataset 2. They demonstrate distributions of spatial correlations near 0 for dataset 2. (G) has its axis locked to that of (E) for visualization.

Figure 7
TEM Represents Non-spatial Reinforcement Learning Tasks and Predicts Non-spatial Remapping (A) In Sun et al. (2020), rodents perform laps of a track, only “rewarded” every 4 laps. Different hippocampal cell types are found: spatial place-like cells (top), those that preferentially fire on a given lap (middle), and those that count laps (bottom). (B) TEM learns similar representations when only “rewarded” every 4 laps. (C) TEM medial entorhinal cells learn both spatially periodic cells (top) and cells that represent the non-spatial task structure of “every 4 laps” (bottom). The latter cells are not yet experimentally observed but are predicted by TEM. (D and E) TEM offers a mechanistic understanding of remapping in both spatial and non-spatial tasks. (D) Top/middle/bottom: schematic of entorhinal/hippocampal/sensory cells. Left/right: environment 1/2. Only two laps are shown for clarity. TEM says spatial hippocampal cells are active when they receive input from both sensory (LEC; the cell codes for A in this example) and MEC input. Place cells, thus, can only remap to other peaks (or within a broad MEC cell field) provided they also receive sensory input there. (E) TEM says, however, cells will retain their lap specificity despite spatially remapping (i.e., a lap 2 cell stays a lap 2 cell); since sensory observations repeat each lap, lap specificity is driven by MEC input. (F and G) Analysis to show TEM (F) and real (Sun et al., 2020) (G) hippocampal cells retain lap specificity after remapping. Left: the distribution of lap-specificity correlations is significantly higher than shuffles. Top right: the distribution of spatial correlations (spatial) after remapping is compared to the distribution of lap-specific correlations (ESR). Bottom right: for cells of high lap-specific correlation after remapping (defined by the blue box in the left panel). (H) TEM prescribes which hippocampal cells will be active in each environment.
The proportion of active cells that are ESR cells in environment 1, 2, or both (brown, purple, gray) implies approximate independence of the cells recruited. (I) Data from Sun et al. (2020) showing the same effect. Icons are from https://www.flaticon.com. See also Figure S5.
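The ESR logic in (E)-(G) can be illustrated with a toy cell that is lap 2-specific: after a spatial remap, its lap profile correlates perfectly across environments while its spatial profile does not. The fabricated rate maps below (rows index laps, columns index track positions) are purely for illustration:

```python
import numpy as np

# Toy lap-by-position rate maps for one hippocampal cell in two environments.
env1 = np.zeros((4, 8)); env1[2, 1] = 1.0   # fires on lap 2, position 1
env2 = np.zeros((4, 8)); env2[2, 6] = 1.0   # spatially remapped, same lap

lap_profile1 = env1.sum(axis=1)   # activity summed over positions, per lap
lap_profile2 = env2.sum(axis=1)
spatial1 = env1.sum(axis=0)       # activity summed over laps, per position
spatial2 = env2.sum(axis=0)

# Lap specificity survives remapping; spatial tuning does not.
lap_corr = np.corrcoef(lap_profile1, lap_profile2)[0, 1]
spatial_corr = np.corrcoef(spatial1, spatial2)[0, 1]
```

This separation (high lap-profile correlation, low spatial correlation across environments) is the signature compared against shuffles in panels (F) and (G).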

Figure S7
Schematic of Analysis Showing Preserved Grid-Place Relationships after Remapping, with Corresponding Results, Related to Figure 6 (A) Schematic explaining the gridAtPlace analysis, specifically how the scatterplot is generated. Note that in this figure original grid cell rate maps are shown, rather than the ideal grid cell rate maps (Figures S6A–S6C) that were used to generate the main text figures. (B and C) The grid cell correlation structure is preserved across environments in dataset 1. (B) Dataset 1. Scatterplot shows the correlation across environments of the spatial correlations of grid cell-grid cell pairs (i.e., the correlation of the upper triangle of two grid cell by grid cell correlation matrices: one from environment 1 and one from environment 2). The histogram shows this correlation coefficient was significant relative to a null distribution of correlation coefficients obtained by permuting grid cell-grid cell pairs. (C) Same as (B) for place cells. (D and E) Replication of preserved grid cell correlation structure across environments in dataset 2. (D) and (E) are in the same format as (B) and (C). (F and G) Preserved relationship between place and grid cells across environments in dataset 1. The scatterplots show the correlation of a given measure across trials, where each point is a place cell-grid cell pair. The histogram plots show where this correlation (gray line) lies relative to the null distribution of correlation coefficients. The p value is the proportion of the null distribution that is greater than the unshuffled correlation. (F) gridAtPlace (top) and minDist (bottom) measures are strongly significantly correlated over two trials within the same environment, as expected given that the same place and grid code should be present. (G) These measures are also significantly correlated across the two different environments, providing evidence that place and grid cells retain their relationship across environments.
(H) Replication of the preserved relationship between place and grid cells across environments in dataset 2. The gridAtPlace measure is significantly correlated at p < 0.05 across real and virtual worlds, and the minDist measure trends toward significance, replicating the preserved relationship between grid and place cells across environments.

References

    1. Abadi M., Barham P., Chen J., Chen Z., Davis A., Dean J., Devin M., Ghemawat S., Irving G., Isard M. TensorFlow: A system for large-scale machine learning. Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) 2016;16:265–283.
    2. Anderson M.I., Jeffery K.J. Heterogeneous modulation of place cell firing by changes in context. J. Neurosci. 2003;23:8827–8835.
    3. Ba J., Hinton G., Mnih V., Leibo J.Z., Ionescu C. Using Fast Weights to Attend to the Recent Past. Adv. Neural Inf. Process. Syst. 2016;29:4331–4339.
    4. Banino A., Barry C., Uria B., Blundell C., Lillicrap T., Mirowski P., Pritzel A., Chadwick M.J., Degris T., Modayil J. Vector-based navigation using grid-like representations in artificial agents. Nature. 2018;557:429–433.
    5. Baram A.B., Muller T.H., Nili H., Garvert M., Behrens T.E.J. Entorhinal and ventromedial prefrontal cortices abstract and generalise the structure of reinforcement learning problems. bioRxiv. 2019 doi: 10.1101/827253.
    6. Barry C., Ginzberg L.L., O’Keefe J., Burgess N. Grid cell firing patterns signal environmental novelty by expansion. Proc. Natl. Acad. Sci. USA. 2012;109:17687–17692.
    7. Bishop C.M. Springer; 2006. Pattern Recognition and Machine Learning.
    8. Bliss T., Collingridge G. A synaptic model of memory: long-term potentiation in the hippocampus. Nature. 1993;361:31–39.
    9. Boccara C.N., Nardin M., Stella F., O’Neill J., Csicsvari J. The entorhinal cognitive map is attracted to goals. Science. 2019;363:1443–1447.
    10. Bostock E., Muller R.U., Kubie J.L. Experience-dependent modifications of hippocampal place cell firing. Hippocampus. 1991;1:193–205.
    11. Brandon M.P., Bogaard A.R., Andrews C.M., Hasselmo M.E. Head direction cells in the postsubiculum do not show replay of prior waking sequences during sleep. Hippocampus. 2012;22:604–618.
    12. Bright I.M., Meister M.L.R., Cruzado N.A., Tiganj Z., Buffalo E.A., Howard M.W. A temporal record of the past with a spectrum of time constants in the monkey entorhinal cortex. Proc. Natl. Acad. Sci. USA. 2020;117:20274–20283.
    13. Bunsey M., Eichenbaum H. Conservation of hippocampal memory function in rats and humans. Nature. 1996;379:255–257.
    14. Burak Y., Fiete I.R. Accurate path integration in continuous attractor network models of grid cells. PLoS Comput. Biol. 2009;5:e1000291.
    15. Butler W.N., Hardcastle K., Giocomo L.M. Remembered reward locations restructure entorhinal spatial maps. Science. 2019;363:1447–1452.
    16. Buzsáki G., Tingley D. Space and Time: The Hippocampus as a Sequence Generator. Trends Cogn. Sci. 2018;22:853–869.
    17. Chen G., King J.A., Lu Y., Cacucci F., Burgess N. Spatial cell firing during virtual navigation of open arenas by head-restrained mice. eLife. 2018;7:7.
    18. Chen G., Lu Y., King J.A., Cacucci F., Burgess N. Differential influences of environment and self-motion on place and grid cell firing. Nat. Commun. 2019;10:630.
    19. Cohen N.J., Eichenbaum H. MIT Press; 1993. Memory, Amnesia, and the Hippocampal System.
    20. Cueva C.J., Wei X.-X. Emergence of grid-like representations by training recurrent neural networks to perform spatial localization. arXiv. 2018 1803.07770.
    21. Dayan P., Abbott L.F. MIT Press; 2001. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems.
    22. Dayan P., Hinton G.E., Neal R.M., Zemel R.S. The Helmholtz machine. Neural Comput. 1995;7:889–904.
    23. Derdikman D., Whitlock J.R., Tsao A., Fyhn M., Hafting T., Moser M.B., Moser E.I. Fragmentation of grid cell maps in a multicompartment environment. Nat. Neurosci. 2009;12:1325–1332.
    24. Deshmukh S.S., Knierim J.J. Representation of non-spatial and spatial information in the lateral entorhinal cortex. Front. Behav. Neurosci. 2011;5:69.
    25. Deshmukh S.S., Knierim J.J. Influence of local objects on hippocampal representations: Landmark vectors and memory. Hippocampus. 2013;23:253–267.
    26. Dordek Y., Soudry D., Meir R., Derdikman D. Extracting grid cell characteristics from place cell inputs using non-negative principal component analysis. eLife. 2016;5:e10094.
    27. Dusek J.A., Eichenbaum H. The hippocampus and memory for orderly stimulus relations. Proc. Natl. Acad. Sci. USA. 1997;94:7109–7114.
    28. Eichenbaum H., Cohen N.J. Can we reconcile the declarative memory and spatial navigation views on hippocampal function? Neuron. 2014;83:764–770.
    29. Eichenbaum H., Dudchenko P., Wood E., Shapiro M., Tanila H. The hippocampus, memory, and place cells: is it spatial memory or a memory space? Neuron. 1999;23:209–226.
    30. Evans T., Burgess N. Coordinated hippocampal-entorhinal replay as structural inference. Adv. Neural Inf. Process. Syst. 2019;32:1731–1743.
    31. Foster D.J., Wilson M.A. Reverse replay of behavioural sequences in hippocampal place cells during the awake state. Nature. 2006;440:680–683.
    32. Frank L.M., Brown E.N., Wilson M. Trajectory encoding in the hippocampus and entorhinal cortex. Neuron. 2000;27:169–178.
    33. Fuhs M.C., Touretzky D.S. A spin glass model of path integration in rat medial entorhinal cortex. J. Neurosci. 2006;26:4266–4276.
    34. Fyhn M., Hafting T., Treves A., Moser M.-B., Moser E.I. Hippocampal remapping and grid realignment in entorhinal cortex. Nature. 2007;446:190–194.
    35. Gemici M., Hung C.-C., Santoro A., Wayne G., Mohamed S., Rezende D.J., Amos D., Lillicrap T. Generative Temporal Models with Memory. arXiv. 2017 1702.04649.
    36. Gershman S.J., Niv Y. Learning latent structure: carving nature at its joints. Curr. Opin. Neurobiol. 2010;20:251–256.
    37. Guanella A., Verschure P.F.M.J. Artificial Neural Networks – ICANN 2006. Springer; 2006. A Model of Grid Cells Based on a Path Integration Mechanism; pp. 740–749.
    38. Gupta K., Beer N.J., Keller L.A., Hasselmo M.E. Medial entorhinal grid cells and head direction cells rotate with a T-maze more often during less recently experienced rotations. Cereb. Cortex. 2014;24:1630–1644.
    39. Gustafson N.J., Daw N.D. Grid cells, place cells, and geodesic generalization for spatial reinforcement learning. PLoS Comput. Biol. 2011;7:e1002235.
    40. Guzowski J.F., McNaughton B.L., Barnes C.A., Worley P.F. Environment-specific expression of the immediate-early gene Arc in hippocampal neuronal ensembles. Nat. Neurosci. 1999;2:1120–1124.
    41. Hafting T., Fyhn M., Molden S., Moser M.-B., Moser E.I. Microstructure of a spatial map in the entorhinal cortex. Nature. 2005;436:801–806.
    42. Hasselmo M.E. A model of prefrontal cortical mechanisms for goal-directed behavior. J. Cogn. Neurosci. 2005;17:1115–1129.
    43. Higgins I., Matthey L., Pal A., Burgess C., Glorot X., Botvinick M., Mohamed S., Lerchner A. 2017. βVAE: Learning basic visual concepts with a constrained variational framework. International Conference on Learning Representations.
    44. Hinton G.E., Dayan P., Frey B.J., Neal R.M. The “wake-sleep” algorithm for unsupervised neural networks. Science. 1995;268:1158–1161.
    45. Høydal Ø.A., Skytøen E.R., Andersson S.O., Moser M.-B., Moser E.I. Object-vector coding in the medial entorhinal cortex. Nature. 2019;568:400–404.
    46. Jung M.W., Wiener S.I., McNaughton B.L. Comparison of spatial firing characteristics of units in dorsal and ventral hippocampus of the rat. J. Neurosci. 1994;14:7347–7356.
    47. Kaefer K., Nardin M., Blahna K., Csicsvari J. Replay of Behavioral Sequences in the Medial Prefrontal Cortex during Rule Switching. Neuron. 2020;106:154–165.
    48. Kemp C., Tenenbaum J.B. The discovery of structural form. Proc. Natl. Acad. Sci. USA. 2008;105:10687–10692.
    49. Kingma D.P., Ba J.L. Adam: A Method for Stochastic Optimization. arXiv. 2014 1412.6980.
    50. Kingma D.P., Welling M. Auto-Encoding Variational Bayes. arXiv. 2013 1312.6114.
    51. Kjelstrup K.B., Solstad T., Brun V.H., Hafting T., Leutgeb S., Witter M.P., Moser E.I., Moser M.-B. Finite scale of spatial representation in the hippocampus. Science. 2008;321:140–143.
    1. Komorowski R.W., Manns J.R., Eichenbaum H. Robust conjunctive item-place coding by hippocampal neurons parallels learning what happens where. J. Neurosci. 2009;29:9918–9929.
    1. Krupic J., Burgess N., O’Keefe J. Neural representations of location composed of spatially periodic bands. Science. 2012;337:853–857.
    1. Kumaran D., Melo H.L., Duzel E. The emergence and representation of knowledge about social and nonsocial hierarchies. Neuron. 2012;76:653–666.
    1. Lever C., Wills T., Cacucci F., Burgess N., O’Keefe J. Long-term plasticity in hippocampal place-cell representation of environmental geometry. Nature. 2002;416:90–94.
    1. Lewis P.A., Durrant S.J. Overlapping memory replay during sleep builds cognitive schemata. Trends Cogn. Sci. 2011;15:343–351.
    1. Liu Y., Dolan R.J., Kurth-Nelson Z., Behrens T.E.J. Human Replay Spontaneously Reorganizes Experience. Cell. 2019;178:640–652.
    1. MacKay D.J.C. Volume 1. Cambridge University Press; 2003. Information Theory, Inference and Learning Algorithms.
    1. Manns J.R., Eichenbaum H. Evolution of declarative memory. Hippocampus. 2006;16:795–808.
    1. McClelland J.L., McNaughton B.L., O’Reilly R.C. Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychol. Rev. 1995;102:419–457.
    1. Mittelstaedt M.L., Mittelstaedt H. Homing by path integration in a mammal. Naturwissenschaften. 1980;67:566–567.
    1. Momennejad I. Learning Structures: Predictive Representations, Replay, and Generalization. Curr. Opin. Behav. Sci. 2020;32:155–166.
    1. Morrissey M.D., Insel N., Takehara-Nishiuchi K. Generalizable knowledge outweighs incidental details in prefrontal ensemble code over time. eLife. 2017;6:1–20.
    1. Muller R.U., Kubie J.L. The effects of changes in the environment on the spatial firing of hippocampal complex-spike cells. J. Neurosci. 1987;7:1951–1968.
    1. Nakazawa K., Quirk M.C., Chitwood R.A., Watanabe M., Yeckel M.F., Sun L.D., Kato A., Carr C.A., Johnston D., Wilson M.A., Tonegawa S. Requirement for hippocampal CA3 NMDA receptors in associative memory recall. Science. 2002;297:211–218.
    1. Neunuebel J.P., Yoganarasimha D., Rao G., Knierim J.J. Conflicts between local and global spatial frameworks dissociate neural representations of the lateral and medial entorhinal cortex. J. Neurosci. 2013;33:9246–9258.
    1. O’Keefe J., Dostrovsky J. The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Res. 1971;34:171–175.
    1. O’Keefe J., Nadel L. Oxford University Press; 1978. The Hippocampus as a Cognitive Map.
    1. Purcell E.M. Life at low Reynolds number. Am. J. Phys. 1977;45:3–11.
    1. Rezende D.J., Mohamed S., Wierstra D. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. arXiv. 2014 1401.40820.
    1. Rich P.D., Liaw H.-P., Lee A.K. Place cells. Large environments reveal the statistical structure governing hippocampal representations. Science. 2014;345:814–817.
    1. Schendan H.E., Searl M.M., Melrose R.J., Stern C.E. An FMRI study of the role of the medial temporal lobe in implicit and explicit sequence learning. Neuron. 2003;37:1013–1025.
    1. Scoville W.B., Milner B. Loss of recent memory after bilateral hippocampal lesions. J. Neurol. Neurosurg. Psychiatry. 1957;20:11–21.
    1. Shankar K.H., Howard M.W. A scale-invariant internal representation of time. Neural Comput. 2012;24:134–193.
    1. Solstad T., Boccara C.N., Kropff E., Moser M.-B., Moser E.I. Representation of geometric borders in the entorhinal cortex. Science. 2008;322:1865–1868.
    1. Sorscher B., Mel G.C., Ganguli S., Ocko S.A. A unified theory for the origin of grid cells through the lens of pattern formation. Adv. Neural Inf. Process. Syst. 2019;32:10003–10013.
    1. Stachenfeld K.L.K.L., Botvinick M.M., Gershman S.J. The hippocampus as a predictive map. Nat. Neurosci. 2017;20:1643–1653.
    1. Stella F., Baracskay P., O’Neill J., Csicsvari J. Hippocampal Reactivation of Random Trajectories Resembling Brownian Diffusion. Neuron. 2019;102:450–461.
    1. Stemmler M., Mathis A., Herz A.V.M. Connecting multiple spatial scales to decode the population activity of grid cells. Sci. Adv. 2015;1 e1500816–e1500816.
    1. Stensola H., Stensola T., Solstad T., Frøland K., Moser M.B., Moser E.I. The entorhinal grid map is discretized. Nature. 2012;492:72–78.
    1. Sun C., Yang W., Martin J., Tonegawa S. Hippocampal neurons represent events as transferable units of experience. Nat. Neurosci. 2020;23:651–663.
    1. Tahvildari B., Fransén E., Alonso A.A., Hasselmo M.E. Switching between “On” and “Off” states of persistent activity in lateral entorhinal layer III neurons. Hippocampus. 2007;17:257–263.
    1. Taube J.S., Muller R.U., Ranck J.B., Jr. Head-direction cells recorded from the postsubiculum in freely moving rats. I. Description and quantitative analysis. J. Neurosci. 1990;10:420–435.
    1. Tolman E.C. Cognitive maps in rats and men. Psychol. Rev. 1948;55:189–208.
    1. Tsao A., Sugar J., Lu L., Wang C., Knierim J.J., Moser M.-B., Moser E.I. Integrating time from experience in the lateral entorhinal cortex. Nature. 2018;561:57–62.
    1. Vertes E., Sahani M. A neurally plausible model learns successor representations in partially observable environments. Adv. Neural Inf. Process. Syst. 2019;32:13714–13724.
    1. Whittington J.C.R., Muller T.H., Mark S., Barry C., Behrens T.E.J. Generalisation of structural knowledge in the hippocampal-entorhinal system. Adv. Neural Inf. Process. Syst. 2018;31:8493–8504.
    1. Wills T.J., Lever C., Cacucci F., Burgess N., O’Keefe J. Attractor dynamics in the hippocampal representation of the local environment. Science. 2005;308:873–876.
    1. Wood E.R., Dudchenko P.A., Eichenbaum H. The global record of memory in hippocampal neuronal activity. Nature. 1999;397:613–616.
    1. Wood E.R., Dudchenko P.A., Robitsek R.J., Eichenbaum H. Hippocampal neurons encode information about different types of memory episodes occurring in the same location. Neuron. 2000;27:623–633.
    1. Yoon K., Buice M.A., Barry C., Hayman R., Burgess N., Fiete I.R. Specific evidence of low-dimensional continuous attractor dynamics in grid cells. Nat. Neurosci. 2013;16:1077–1084.
    1. Zhang K. Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory. J. Neurosci. 1996;16:2112–2126.

Source: PubMed