An information integration theory of consciousness

Giulio Tononi

Abstract

Background: Consciousness poses two main problems. The first is understanding the conditions that determine to what extent a system has conscious experience. For instance, why is our consciousness generated by certain parts of our brain, such as the thalamocortical system, and not by other parts, such as the cerebellum? And why are we conscious during wakefulness and much less so during dreamless sleep? The second problem is understanding the conditions that determine what kind of consciousness a system has. For example, why do specific parts of the brain contribute specific qualities to our conscious experience, such as vision and audition?

Presentation of the hypothesis: This paper presents a theory about what consciousness is and how it can be measured. According to the theory, consciousness corresponds to the capacity of a system to integrate information. This claim is motivated by two key phenomenological properties of consciousness: differentiation - the availability of a very large number of conscious experiences; and integration - the unity of each such experience. The theory states that the quantity of consciousness available to a system can be measured as the Phi value of a complex of elements. Phi is the amount of causally effective information that can be integrated across the informational weakest link of a subset of elements. A complex is a subset of elements with Phi>0 that is not part of a subset of higher Phi. The theory also claims that the quality of consciousness is determined by the informational relationships among the elements of a complex, which are specified by the values of effective information among them. Finally, each particular conscious experience is specified by the value, at any given time, of the variables mediating informational interactions among the elements of a complex.

Testing the hypothesis: The information integration theory accounts, in a principled manner, for several neurobiological observations concerning consciousness. As shown here, these include the association of consciousness with certain neural systems rather than with others; the fact that neural processes underlying consciousness can influence or be influenced by neural processes that remain unconscious; the reduction of consciousness during dreamless sleep and generalized seizures; and the time requirements on neural interactions that support consciousness.

Implications of the hypothesis: The theory entails that consciousness is a fundamental quantity, that it is graded, that it is present in infants and animals, and that it should be possible to build conscious artifacts.

Figures

Figure 1
Effective information, minimum information bipartition, and complexes. a. Effective information. Shown is a single subset S of 4 elements ({1,2,3,4}, blue circle), forming part of a larger system X (black ellipse). This subset is bisected into A and B by a bipartition ({1,3}/{2,4}, indicated by the dotted grey line). Arrows indicate causally effective connections linking A to B and B to A across the bipartition (other connections may link both A and B to the rest of the system X). To measure EI(A→B), maximum entropy Hmax is injected into the outgoing connections from A (corresponding to independent noise sources). The entropy of the states of B that is due to the input from A is then measured. Note that A can affect B directly through connections linking the two subsets, as well as indirectly via X. Applying maximum entropy to B allows one to measure EI(B→A). The effective information for this bipartition is EI(A⇄B) = EI(A→B) + EI(B→A). b. Minimum information bipartition. For subset S = {1,2,3,4}, the horizontal bipartition {1,3}/{2,4} yields a positive value of EI. However, the bipartition {1,2}/{3,4} yields EI = 0 and is a minimum information bipartition (MIB) for this subset. The other bipartitions of subset S = {1,2,3,4} are {1,4}/{2,3}, {1}/{2,3,4}, {2}/{1,3,4}, {3}/{1,2,4}, {4}/{1,2,3}, all with EI>0. c. Analysis of complexes. By considering all subsets of system X one can identify its complexes and rank them by their respective values of Φ, i.e. the value of EI for their minimum information bipartition. Assuming that other elements in X are disconnected, it is easy to see that Φ>0 for subsets {3,4} and {1,2}, but Φ = 0 for subsets {1,3}, {1,4}, {2,3}, {2,4}, {1,2,3}, {1,2,4}, {1,3,4}, {2,3,4}, and {1,2,3,4}. Subsets {3,4} and {1,2} are not part of a larger subset having higher Φ, and therefore they constitute complexes. This is indicated schematically by having them encircled by a grey oval (darker grey indicates higher Φ). Methodological note. In order to identify complexes and their Φ(S) for systems with many different connection patterns, each system X was implemented as a stationary multidimensional Gaussian process, so that values for effective information could be obtained analytically (details in [8]). Briefly, we implemented numerous model systems X composed of n neural elements with connections CONij specified by a connection matrix CON(X) (no self-connections). In order to compare different architectures, CON(X) was normalized so that the absolute value of the sum of the afferent synaptic weights per element corresponded to a constant value w < 1 (here w = 0.5). If the system's dynamics corresponds to a multivariate Gaussian random process, its covariance matrix COV(X) can be derived analytically. As in previous work, we consider the vector X of random variables that represents the activity of the elements of X, subject to independent Gaussian noise R of magnitude c. When the elements settle under stationary conditions, X = X·CON(X) + cR. By defining Q = (1 − CON(X))^−1 and averaging over the states produced by successive values of R, we obtain the covariance matrix COV(X) = <X^t·X> = <Q^t·R^t·R·Q> = Q^t·Q, where the superscript t denotes the transpose. Under Gaussian assumptions, all deviations from independence among the two complementary parts A and B of a subset S of X are expressed by the covariances among the respective elements.
Given these covariances, values for the individual entropies H(A) and H(B), as well as for the joint entropy of the subset H(S) = H(AB), can be obtained as, for example, H(A) = (1/2) ln[(2πe)^n |COV(A)|], where n is the number of elements in A and |·| denotes the determinant. The mutual information between A and B is then given by MI(A;B) = H(A) + H(B) − H(AB). Note that MI(A;B) is symmetric and positive. To obtain the effective information between A and B within model systems, independent noise sources in A are enforced by setting to zero the strength of the connections within A and of those afferent to A. The covariance matrix for A is then equal to the identity matrix (given independent Gaussian noise), and any statistical dependence between A and B must be due to the causal effects of A on B, mediated by the efferent connections of A. Moreover, all possible outputs from A that could affect B are evaluated. Under these conditions, EI(A→B) = MI(A^Hmax;B). The independent Gaussian noise R applied to A is multiplied by cp, the perturbation coefficient, while the independent Gaussian noise applied to the rest of the system is multiplied by ci, the intrinsic noise coefficient. Here cp = 1 and ci = 0.00001, in order to emphasize the role of the connectivity and minimize that of noise. To identify complexes and obtain their capacity for information integration, we consider every subset S of X composed of k elements, with k = 2,..., n. For each subset S, we consider all bipartitions and calculate EI(A⇄B) for each of them. We find the minimum information bipartition MIB(S), the bipartition for which the normalized effective information reaches a minimum, and the corresponding value of Φ(S). We then find the complexes of X as those subsets S with Φ>0 that are not included within a subset having higher Φ, and rank them based on their Φ(S) value. The complex with the maximum value of Φ(S) is the main complex. MATLAB functions used for calculating effective information and complexes are at .
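The procedure just described lends itself to a compact computational sketch. The Python code below is an illustrative reimplementation, not the authors' MATLAB functions: it assumes that CON[i, j] denotes the connection from element i to element j (so that the afferent weights of an element form a column), and it normalizes EI by the number of elements in the smaller part of a bipartition as a simplified stand-in for normalization by the smaller part's maximum entropy. The exhaustive subset search is practical only for small systems.

```python
# Illustrative sketch of the Gaussian procedure described above (not the
# authors' MATLAB functions). Assumptions: CON[i, j] is the connection from
# element i to element j (afferents of j form column j); EI is normalized by
# the size of the smaller part, a stand-in for min{Hmax(A), Hmax(B)}.
import itertools
import numpy as np

def normalize_afferents(con, w=0.5):
    """Scale each column so the absolute afferent weights per element sum to w."""
    con = con.copy().astype(float)
    np.fill_diagonal(con, 0.0)                      # no self-connections
    sums = np.abs(con).sum(axis=0)
    sums[sums == 0] = 1.0
    return con * (w / sums)

def gaussian_entropy(cov):
    """H = (1/2) ln[(2*pi*e)^k |COV|] for a k-dimensional Gaussian."""
    k = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (k * np.log(2 * np.pi * np.e) + logdet)

def stationary_cov(con, noise):
    """COV = Q^t diag(noise)^2 Q with Q = (I - CON)^-1, from X = X*CON + noise*R."""
    q = np.linalg.inv(np.eye(con.shape[0]) - con)
    return q.T @ np.diag(noise ** 2) @ q

def mutual_information(cov, a, b):
    """MI(A;B) = H(A) + H(B) - H(AB)."""
    ab = list(a) + list(b)
    return (gaussian_entropy(cov[np.ix_(a, a)])
            + gaussian_entropy(cov[np.ix_(b, b)])
            - gaussian_entropy(cov[np.ix_(ab, ab)]))

def ei_directed(con, a, b, cp=1.0, ci=1e-5):
    """EI(A->B): cut connections within and afferent to A, drive A with
    unit-variance noise (cp) and the rest with weak noise (ci), take MI(A;B)."""
    a, b = list(a), list(b)
    con_p = con.copy()
    con_p[:, a] = 0.0                               # remove all inputs to A
    noise = np.full(con.shape[0], ci)
    noise[a] = cp
    return mutual_information(stationary_cov(con_p, noise), a, b)

def ei_bipartition(con, a, b):
    """EI(A<->B) = EI(A->B) + EI(B->A)."""
    return ei_directed(con, a, b) + ei_directed(con, b, a)

def phi(con, subset):
    """Phi(S): EI across the bipartition that minimizes normalized EI (the MIB)."""
    elems = list(subset)
    best_norm, best_ei = np.inf, 0.0
    for r in range(1, len(elems) // 2 + 1):
        for a in itertools.combinations(elems, r):
            b = [e for e in elems if e not in a]
            ei = ei_bipartition(con, list(a), b)
            norm = ei / min(len(a), len(b))         # simplified normalization
            if norm < best_norm:
                best_norm, best_ei = norm, ei
    return best_ei

def find_complexes(con, tol=1e-6):
    """Subsets with Phi > 0 not contained in a subset of higher Phi, ranked by Phi."""
    n = con.shape[0]
    subsets = [s for k in range(2, n + 1)
               for s in itertools.combinations(range(n), k)]
    phis = {s: phi(con, s) for s in subsets}
    ranked = sorted((s for s, p in phis.items()
                     if p > tol and not any(set(s) < set(t) and phis[t] > p
                                            for t in subsets)),
                    key=lambda s: -phis[s])
    return ranked, phis
```

With these helpers, find_complexes(normalize_afferents(CON)) returns the complexes of a small system ranked by Φ, the first entry being the main complex.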
Figure 2
Effective information matrix and activity states for two complexes having the same value of Φ. a. Causal interactions diagram and analysis of complexes. Shown are two systems, one with a "divergent" architecture (left) and one with a "chain" architecture (right). The analysis of complexes shows that both contain a complex of four elements having a Φ value of 10. b. Effective information matrix. Shown is the effective information matrix for the two complexes above. For each complex, all bipartitions are indicated by listing one part (subset A) on the upper row and the complementary part (subset B) on the lower row. In between are the values of effective information from A to B and from B to A for each bipartition, color-coded as black (zero), red (intermediate value) and yellow (high value). Note that the effective information matrix is different for the two complexes, even though Φ is the same. The effective information matrix defines the set of informational relationships, or "qualia space", for each complex. Note that the effective information matrix refers exclusively to the informational relationships within the main complex (relationships with elements outside the main complex, represented here by empty circles, do not contribute to qualia space). c. State diagram. Shown are five representative states for the two complexes. Each is represented by the activity state of the four elements of each complex arranged in a column (blue: active elements; black: inactive ones). The five states can be thought of, for instance, as evolving in time due to the intrinsic dynamics of the system or to inputs from the environment. Although the states are identical for the two complexes, their meaning is different because of the difference in the effective information matrix. The last four columns represent four special states, those corresponding to the activation of one element at a time. Such states, if achievable, would correspond most closely to the specific "quale" contributed by that particular element in that particular complex.
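For a given complex, the effective information matrix of panel (b) is simply a table of EI values in both directions for every bipartition. A minimal sketch, assuming the hypothetical ei_directed() helper from the sketch following the Figure 1 note:

```python
# Sketch of the effective information matrix: one entry per bipartition of a
# complex, holding EI in both directions. Assumes ei_directed() from the
# sketch following the Figure 1 note.
import itertools

def ei_matrix(con, complex_elems):
    """Return a list of (A, B, EI(A->B), EI(B->A)) over all bipartitions."""
    elems = list(complex_elems)
    rows = []
    for r in range(1, len(elems) // 2 + 1):
        for a in itertools.combinations(elems, r):
            b = [e for e in elems if e not in a]
            rows.append((a, tuple(b),
                         ei_directed(con, list(a), b),
                         ei_directed(con, b, list(a))))
    return rows
```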
Figure 3
Information integration for a thalamocortical-like architecture. a. Optimization of information integration for a system that is both functionally specialized and functionally integrated. Shown is the causal interaction diagram for a network whose connection matrix was obtained by optimization for Φ (Φ = 74 bits). Note the heterogeneous arrangement of the incoming and outgoing connections: each element is connected to a different subset of elements, with different weights. Further analysis indicates that this network jointly maximizes functional specialization and functional integration among its 8 elements, thereby resembling the anatomical organization of the thalamocortical system [8]. b. Reduction of information integration through loss of specialization. The same amount of connectivity, distributed homogeneously to eliminate functional specialization, yields a complex with a much lower value of Φ (Φ = 20 bits). c. Reduction of information integration through loss of integration. The same amount of connectivity, distributed so as to form four independent modules and thereby eliminate functional integration, yields four separate complexes, each with a much lower value of Φ (Φ = 20 bits).
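The caption states only that the connection matrix in panel (a) was obtained by optimizing for Φ; the optimization procedure actually used is described in [8]. As an illustration of how such an optimization could be set up, the generic random hill-climbing sketch below reuses the hypothetical phi() and normalize_afferents() helpers from the sketch following Figure 1.

```python
# Generic random hill-climbing over normalized connection matrices to increase
# Phi of the whole 8-element system. Illustrative only: this is not the
# optimization used in [8] to obtain the network shown in panel (a).
# Assumes phi() and normalize_afferents() from the sketch following Figure 1.
import numpy as np

def optimize_phi(n=8, steps=500, sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    con = normalize_afferents(rng.random((n, n)))   # random start, afferent sum w = 0.5
    best = phi(con, tuple(range(n)))
    for _ in range(steps):
        trial = normalize_afferents(con + sigma * rng.standard_normal((n, n)))
        trial_phi = phi(trial, tuple(range(n)))
        if trial_phi > best:                        # keep perturbations that raise Phi
            con, best = trial, trial_phi
    return con, best
```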
Figure 4
Information integration and complexes for other neural-like architectures. a. Schematic of a cerebellum-like organization. Shown are three modules of eight elements each, with many feedforward and lateral connections within each module but minimal connections among them. The analysis of complexes reveals three separate complexes with low values of Φ (Φ = 20 bits). There is also a large complex encompassing all the elements, but its Φ value is extremely low (Φ = 5 bits). b. Schematic of the organization of a reticular activating system. Shown is a single subcortical "reticular" element providing common input to the eight elements of a thalamocortical-like main complex (both specialized and integrated, Φ = 61 bits). Despite the diffuse projections from the reticular element onto the main complex, the complex comprising all 9 elements has a much lower value of Φ (Φ = 10 bits). c. Schematic of the organization of afferent pathways. Shown are three short chains that stand for afferent pathways. Each chain connects to a port-in of a thalamocortical-like main complex (both specialized and integrated) having a high value of Φ (Φ = 61 bits). Note that the afferent pathways and the elements of the main complex together constitute a large complex, but its Φ value is low (Φ = 10 bits). Thus, elements in afferent pathways can affect the main complex without belonging to it. d. Schematic of the organization of efferent pathways. Shown are three short chains that stand for efferent pathways. Each chain receives a connection from a port-out of the thalamocortical-like main complex. Also in this case, the efferent pathways and the elements of the main complex together constitute a large complex, but its Φ value is low (Φ = 10 bits). e. Schematic of the organization of cortico-subcortico-cortical loops. Shown are three short chains that stand for cortico-subcortico-cortical loops, which are connected to the main complex at both ports-in and ports-out. Again, the subcortical loops and the elements of the main complex together constitute a large complex, but its Φ value is low (Φ = 10 bits). Thus, elements in loops connected to the main complex can affect it without belonging to it. Note, however, that the addition of these three loops slightly increased the Φ value of the main complex (from Φ = 61 to Φ = 63 bits) by providing additional pathways for interactions among its elements.
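The architectures in this figure can be approximated by toy connection matrices and analyzed with the same machinery. The sketch below builds a scaled-down cerebellum-like network (three modules of three elements rather than eight, to keep the exhaustive subset search tractable) and assumes the hypothetical normalize_afferents() and find_complexes() helpers from the Figure 1 sketch; the weights and coupling values are illustrative.

```python
# Toy cerebellum-like architecture of panel (a): dense connections within each
# module, minimal coupling between modules. Scaled down to 3 modules of 3
# elements; weights are illustrative. Assumes normalize_afferents() and
# find_complexes() from the sketch following Figure 1.
import numpy as np

def modular_net(n_modules=3, module_size=3, between=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n = n_modules * module_size
    con = np.full((n, n), between)                  # weak inter-module coupling
    for m in range(n_modules):
        idx = slice(m * module_size, (m + 1) * module_size)
        con[idx, idx] = rng.random((module_size, module_size))  # dense within module
    return normalize_afferents(con)

ranked, phis = find_complexes(modular_net())
# Qualitatively expected outcome, as in the caption: one relatively high-Phi
# complex per module, plus a complex spanning all elements with a much lower Phi.
```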
Figure 5
Information integration and complexes after anatomical and functional disconnections. a. Schematic of a split-brain-like anatomical disconnection. Top. Shown is a large main complex obtained by connecting two thalamocortical-like subsets through "callosum-like" reciprocal connections. There is also a single element that projects to all other elements, representing "subcortical" common input. Note that the Φ value for the main complex (16 elements) is high (Φ = 72 bits). There is also a larger complex including the "subcortical" element, but its Φ value is low (Φ = 10 bits). Bottom. If the "callosum-like" connections are cut, one obtains two 8-element complexes, corresponding to the two "hemispheres", whose Φ value is reduced but still high (Φ = 61 bits). The two "hemispheres" still share some information due to common input from the "subcortical" element, with which they form a large complex of low Φ. b. Schematic of a functional disconnection. Top. Shown is a large main complex obtained by linking a "supramodal" module of four elements (cornerstone) through reciprocal connections with a "visual" module (to its right) and an "auditory" module (below). Note that there are no direct connections between the "visual" and "auditory" modules. The 12 elements together form a main complex with Φ = 61 bits. Bottom. If the "auditory" module is functionally disconnected from the "supramodal" one by inactivating its four elements (indicated in blue), the main complex shrinks to include just the "supramodal" and "visual" modules. In this case, the Φ value is only minimally reduced (Φ = 57 bits).
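Both manipulations in this figure reduce to simple operations on the connection matrix: anatomical disconnection zeroes the blocks linking the two "hemispheres", while functional disconnection silences a set of elements. The scaled-down sketch below illustrates the anatomical case, again assuming the hypothetical helpers from the Figure 1 sketch; the toy network (4 + 4 elements, reciprocal "callosal" links, no "subcortical" element) is illustrative.

```python
# Toy version of the split-brain manipulation in panel (a): two "hemispheres"
# joined by reciprocal homotopic "callosal" links, which are then cut before
# re-running the complex analysis. Assumes normalize_afferents() and
# find_complexes() from the sketch following Figure 1; sizes and weights are
# illustrative (4 + 4 elements rather than 8 + 8).
import numpy as np

def two_hemispheres(seed=0):
    rng = np.random.default_rng(seed)
    con = np.zeros((8, 8))
    con[:4, :4] = rng.random((4, 4))                # left "hemisphere"
    con[4:, 4:] = rng.random((4, 4))                # right "hemisphere"
    for i in range(4):                              # reciprocal "callosal" links
        con[i, i + 4] = con[i + 4, i] = 1.0
    return normalize_afferents(con)

def cut_callosum(con):
    """Anatomical disconnection: zero every inter-hemisphere connection."""
    split = con.copy()
    split[:4, 4:] = 0.0
    split[4:, :4] = 0.0
    return split

intact = two_hemispheres()
ranked_intact, _ = find_complexes(intact)           # expect a complex spanning both halves
ranked_split, _ = find_complexes(cut_callosum(intact))
# After the cut, no subset spanning both halves can have Phi > 0, so the
# analysis should return two separate 4-element complexes, one per "hemisphere".
```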

References

    1. Tononi G, Edelman GM. Consciousness and complexity. Science. 1998;282:1846–1851. doi: 10.1126/science.282.5395.1846.
    2. Tononi G. Information measures for conscious experience. Arch Ital Biol. 2001;139:367–371.
    3. Tononi G. Consciousness and the brain: Theoretical aspects. In: Adelman G, Smith B, editors. Encyclopedia of Neuroscience. 3. Elsevier; 2004.
    4. Shannon CE, Weaver W. The mathematical theory of communication. Urbana: University of Illinois Press; 1963.
    5. Sperry R. Consciousness, personal identity and the divided brain. Neuropsychologia. 1984;22:661–673. doi: 10.1016/0028-3932(84)90093-9.
    6. Bachmann T. Microgenetic approach to the conscious mind. Amsterdam; Philadelphia: John Benjamins Pub. Co; 2000.
    7. Poppel E, Artin T. Mindworks: Time and conscious experience. Boston, MA, US: Harcourt Brace Jovanovich, Inc; 1988.
    8. Tononi G, Sporns O. Measuring information integration. BMC Neurosci. 2003;4:31. doi: 10.1186/1471-2202-4-31.
    9. Edelman GM, Tononi G. A universe of consciousness: how matter becomes imagination. 1. New York, NY: Basic Books; 2000.
    10. Nagel T. What is the mind-body problem? Ciba Foundation Symposium. 1993;174:1–7. discussion 7–13.
    11. Buonomano DV, Merzenich MM. Cortical plasticity: from synapses to maps. Annu Rev Neurosci. 1998;21:149–186. doi: 10.1146/annurev.neuro.21.1.149.
    12. Zeki S. A vision of the brain. Oxford; Boston: Blackwell Scientific Publications; 1993.
    13. Tononi G. Galileo e il fotodiodo. Bari: Laterza; 2003.
    14. Tononi G, Sporns O, Edelman GM. A complexity measure for selective matching of signals by the brain. Proc Natl Acad Sci U S A. 1996;93:3422–3427. doi: 10.1073/pnas.93.8.3422.
    15. Plum F. Coma and related global disturbances of the human conscious state. In: Peters A, Jones EG, editors. Normal and Altered States of Function. Vol. 9. New York: Plenum Press; 1991. pp. 359–425.
    16. Crick F, Koch C. Are we aware of neural activity in primary visual cortex? Nature. 1995;375:121–123. doi: 10.1038/375121a0.
    17. Crick F, Koch C. Consciousness and neuroscience. Cereb Cortex. 1998;8:97–107. doi: 10.1093/cercor/8.2.97.
    18. Dehaene S, Naccache L. Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework. Cognition. 2001;79:1–37. doi: 10.1016/S0010-0277(00)00123-2.
    19. Zeman A. Consciousness. Brain. 2001;124:1263–1289. doi: 10.1093/brain/124.7.1263.
    20. Rees G, Kreiman G, Koch C. Neural correlates of consciousness in humans. Nat Rev Neurosci. 2002;3:261–270. doi: 10.1038/nrn783.
    21. Crick F, Koch C. A framework for consciousness. Nat Neurosci. 2003;6:119–126. doi: 10.1038/nn0203-119.
    22. Laureys S, Antoine S, Boly M, Elincx S, Faymonville ME, Berre J, Sadzot B, Ferring M, De Tiege X, van Bogaert P, Hansen I, Damas P, Mavroudakis N, Lambermont B, Del Fiore G, Aerts J, Degueldre C, Phillips C, Franck G, Vincent JL, Lamy M, Luxen A, Moonen G, Goldman S, Maquet P. Brain function in the vegetative state. Acta Neurol Belg. 2002;102:177–185.
    23. Schiff ND, Ribary U, Moreno DR, Beattie B, Kronberg E, Blasberg R, Giacino J, McCagg C, Fins JJ, Llinas R, Plum F. Residual cerebral activity and behavioural fragments can remain in the persistently vegetative brain. Brain. 2002;125:1210–1234. doi: 10.1093/brain/awf131.
    24. Adams JH, Graham DI, Jennett B. The neuropathology of the vegetative state after an acute brain insult. Brain. 2000;123:1327–1338. doi: 10.1093/brain/123.7.1327.
    25. Kolb B, Whishaw IQ. Fundamentals of human neuropsychology. 4. New York, NY: W.H. Freeman; 1996.
    26. Srinivasan R, Russell DP, Edelman GM, Tononi G. Increased synchronization of neuromagnetic responses during conscious perception. J Neurosci. 1999;19:5435–5448.
    27. McIntosh AR, Rajah MN, Lobaugh NJ. Interactions of prefrontal cortex in relation to awareness in sensory learning. Science. 1999;284:1531–1533. doi: 10.1126/science.284.5419.1531.
    28. Vuilleumier P, Sagiv N, Hazeltine E, Poldrack RA, Swick D, Rafal RD, Gabrieli JD. Neural fate of seen and unseen faces in visuospatial neglect: a combined event-related functional MRI and event-related potential study. Proc Natl Acad Sci U S A. 2001;98:3495–3500. doi: 10.1073/pnas.051436898.
    29. Cosmelli D, David O, Lachaux JP, Martinerie J, Garnero L, Renault B, Varela F. Waves of consciousness: ongoing cortical patterns during binocular rivalry. Neuroimage. 2004;23:128–140. doi: 10.1016/j.neuroimage.2004.05.008.
    30. Passingham RE, Stephan KE, Kotter R. The anatomical basis of functional localization in the cortex. Nat Rev Neurosci. 2002;3:606–616.
    31. Engel AK, Fries P, Singer W. Dynamic predictions: oscillations and synchrony in top-down processing. Nat Rev Neurosci. 2001;2:704–716. doi: 10.1038/35094565.
    32. Singer W. Consciousness and the binding problem. Ann N Y Acad Sci. 2001;929:123–146.
    33. Bressler SL, Coppola R, Nakamura R. Episodic multiregional cortical coherence at multiple frequencies during visual task performance. Nature. 1993;366:153–156. doi: 10.1038/366153a0.
    34. Friston KJ. Brain function, nonlinear coupling, and neuronal transients. Neuroscientist. 2001;7:406–418.
    35. Stam CJ, Breakspear M, van Cappellen van Walsum AM, van Dijk BW. Nonlinear synchronization in EEG and whole-head MEG recordings of healthy subjects. Hum Brain Mapp. 2003;19:63–78. doi: 10.1002/hbm.10106.
    36. Cohen YE, Andersen RA. A common reference frame for movement plans in the posterior parietal cortex. Nat Rev Neurosci. 2002;3:553–562. doi: 10.1038/nrn873.
    37. Ekstrom AD, Kahana MJ, Caplan JB, Fields TA, Isham EA, Newman EL, Fried I. Cellular networks underlying human spatial navigation. Nature. 2003;425:184–188. doi: 10.1038/nature01964.
    38. Tononi G, Sporns O, Edelman GM. Reentry and the problem of integrating multiple cortical areas: simulation of dynamic integration in the visual system. Cereb Cortex. 1992;2:310–335.
    39. Pouget A, Deneve S, Duhamel JR. A computational perspective on the neural basis of multisensory spatial representations. Nat Rev Neurosci. 2002;3:741–747. doi: 10.1038/nrn914.
    40. Salinas E. Fast remapping of sensory stimuli onto motor actions on the basis of contextual modulation. J Neurosci. 2004;24:1113–1118. doi: 10.1523/JNEUROSCI.4569-03.2004.
    41. Cohen D, Yarom Y. Patches of synchronized activity in the cerebellar cortex evoked by mossy-fiber stimulation: questioning the role of parallel fibers. Proc Natl Acad Sci U S A. 1998;95:15032–15036. doi: 10.1073/pnas.95.25.15032.
    42. Bower JM. The organization of cerebellar cortical circuitry revisited: implications for function. Ann N Y Acad Sci. 2002;978:135–155.
    43. Moruzzi G, Magoun HW. Brain stem reticular formation and activation of the EEG. Electroencephalogr Clin Neurophysiol. 1949;1:455–473.
    44. Steriade M, McCarley RW. Brainstem control of wakefulness and sleep. New York: Plenum Press; 1990.
    45. Alexander GE, Crutcher MD, DeLong MR. Basal ganglia-thalamocortical circuits: parallel substrates for motor, oculomotor, "prefrontal" and "limbic" functions. Prog Brain Res. 1990;85:119–146.
    46. Middleton FA, Strick PL. Basal ganglia and cerebellar loops: motor and cognitive circuits. Brain Res Brain Res Rev. 2000;31:236–250. doi: 10.1016/S0165-0173(99)00040-5.
    47. Baars BJ. A cognitive theory of consciousness. New York, NY, US: Cambridge University Press; 1988.
    48. Raichle ME. The neural correlates of consciousness: an analysis of cognitive skill learning. Philos Trans R Soc Lond B Biol Sci. 1998;353:1889–1901. doi: 10.1098/rstb.1998.0341.
    49. Logothetis NK, Leopold DA, Sheinberg DL. What is rivalling during binocular rivalry? Nature. 1996;380:621–624. doi: 10.1038/380621a0.
    50. Ascoli GA. Progress and perspectives in computational neuroanatomy. Anat Rec. 1999;257:195–207. doi: 10.1002/(SICI)1097-0185(19991215)257:6<195::AID-AR5>3.0.CO;2-H.
    51. Sporns O, Tononi G, Edelman GM. Theoretical neuroanatomy and the connectivity of the cerebral cortex. Behav Brain Res. 2002;135:69–74. doi: 10.1016/S0166-4328(02)00157-2.
    52. Dehaene S, Sergent C, Changeux JP. A neuronal network model linking subjective reports and objective physiological data during conscious perception. Proc Natl Acad Sci U S A. 2003;100:8520–8525. doi: 10.1073/pnas.1332574100.
    53. Lumer ED. A neural model of binocular integration and rivalry based on the coordination of action-potential timing in primary visual cortex. Cereb Cortex. 1998;8:553–561. doi: 10.1093/cercor/8.6.553.
    54. Hobson JA, Pace-Schott EF, Stickgold R. Dreaming and the brain: toward a cognitive neuroscience of conscious states. Behav Brain Sci. 2000;23:793–842. doi: 10.1017/S0140525X00003976. discussion 904–1121.
    55. Steriade M. Synchronized activities of coupled oscillators in the cerebral cortex and thalamus at different levels of vigilance. Cereb Cortex. 1997;7:583–604. doi: 10.1093/cercor/7.6.583.
    56. Libet B. Brain stimulation in the study of neuronal functions for conscious sensory experiences. Hum Neurobiol. 1982;1:235–242.
    57. Lamme VA, Roelfsema PR. The distinct modes of vision offered by feedforward and recurrent processing. Trends Neurosci. 2000;23:571–579. doi: 10.1016/S0166-2236(00)01657-X.
    58. Lumer ED, Edelman GM, Tononi G. Neural dynamics in a model of the thalamocortical system. 1. Layers, loops and the emergence of fast synchronous rhythms. Cereb Cortex. 1997;7:207–227. doi: 10.1093/cercor/7.3.207.
    59. Lumer ED, Edelman GM, Tononi G. Neural dynamics in a model of the thalamocortical system. 2. The role of neural synchrony tested through perturbations of spike timing. Cereb Cortex. 1997;7:228–236. doi: 10.1093/cercor/7.3.228.
    60. Edelman GM. The remembered present: A biological theory of consciousness. New York, NY, US: Basic Books; 1989.
    61. Damasio AR. The feeling of what happens: body and emotion in the making of consciousness. 1. New York: Harcourt Brace; 1999.
    62. Metzinger T. Being no one: the self-model theory of subjectivity. Cambridge, Mass: MIT Press; 2003.
    63. Shalizi CR, Crutchfield JP. Computational mechanics: Pattern and prediction, structure and simplicity. J Stat Phys. 2001;104:817–879. doi: 10.1023/A:1010388907793.
    64. Cohen MR, Newsome WT. What electrical microstimulation has revealed about the neural basis of cognition. Curr Opin Neurobiol. 2004;14:169–177. doi: 10.1016/j.conb.2004.03.016.
