Integrated Information Theory, or IIT for short, is a brave attempt by Giulio Tononi and colleagues to capture scientifically the essence of what it is to be a rational, conscious mind. Its focus on information is surely correct, but at least in its current form the theory suffers from serious shortcomings.
When we look for the "seat of consciousness" in our minds, there is a growing acceptance that it cannot be found simply by examining the brain in minute detail. The wiring of the brain's thinking centres does not encapsulate what we are thinking about any more than the wiring of a computer chip encapsulates the program running on it. To understand thoughts and experiences we must turn not to the wiring but to the information flowing along those circuits. If we stop the information flowing in our thinking centres, the wiring remains but consciousness vanishes. Reawaken consciousness and the buzz of information passing to and fro resumes. It is clearly the information in these regions of our brains which makes us consciously aware of ourselves and our experiences. Indeed this information completely characterises any given experience: change the information in the slightest degree, say with a little electrode, and you change the experience. Brain surgeons use such techniques increasingly often to help diagnose brain functions during an operation. In this way, we come to understand the human mind as a vast store of information, constantly updating itself.
But not every vast and busy database is conscious. Studies of mental states suggest that consciousness involves not only huge amounts of data but the ability to join the right bits together, say to join the sight of my fingers with the feel of the keyboard under them, the deliberate moving of my arms and the ideas I want to share, all brought together in the act of typing. Integrated Information Theory suggests that this bringing together, or integration, of information is the key to consciousness.
For Tononi, a neurologist and sleep researcher, this is key because it offers an explanation of how a conscious mind can fall asleep: the brain simply stops integrating information on anything like the waking scale.
Yet he makes one astonishing blunder. He suggests that a sufficiently interconnected array of information will be conscious even when the array is static and no information is being processed. This is arrant nonsense. Consider dumping that array to a printer. Are we seriously to believe that the resulting mile-high pile of paper is conscious? Even if we do subscribe to some form of panpsychism, that applies regardless of how sophisticated the Universe's level of information integration may be; it has nothing whatever to do with Tononi's theory.
It is not enough to simply examine different causally-related states and describe that relation as something called "time". Rather, we must understand consciousness to be a flow of information, a dynamic complexity in real time, like a film reel unwinding and projected onto a screen as a moving picture. The film analogy is particularly apt, as the human brain employs exactly this kind of stop-frame animation to capture our visual sense data and integrate it into a conscious experience of smooth movement. For Tononi to describe the differences between frames, and expect to have thus described the experience of motion which the brain extracts from these differences, is wholly inadequate.
IIT rather pretentiously defines the properties of conscious experience as a set of axioms. Such axioms of consciousness place IIT squarely in the arena of philosophy, not only of the mind but also of formal logic. But Tononi is a doctor of medicine, not of philosophy. It can come as no surprise to anybody with genuine philosophical knowledge that his axiomatic edifice does not stand up to scrutiny.
His first axiom is merely Descartes' cogito ergo sum in new clothes; our conscious experience is the foundational reality which affirms our personal existence. The Buddha would have had a few things to say about that anyway, not least that all of conscious experience is by its very nature illusion.
His remaining axioms model the information that consciousness attaches itself to. To my eye they appear a muddled collection of basic imperatives for any formal system, together with some cod definitions of acceptable substructures within a complex experience. They may make sense to a psychiatrist, but they make very little sense to a philosopher (and believe you me, philosophers have entertained some cray-zee notions in their time!).
It is important to understand that the mathematical fun and games which bulk out the theory are thus built on sand. They might or might not prove to be a consistent edifice. But even if they do, any relation to human (or any other) consciousness is far from guaranteed.
The next step in the theory is to provide a mathematical description of integrated information, so that the level of integration of any data set can be calculated. The higher the level of integration, the more conscious the information becomes. Physicist Max Tegmark encapsulated the principle beautifully when he remarked that:
"consciousness is what information feels like when it reaches a certain level of complexity".
I find it hard to disagree with that.
So, how is the level of integration calculated? Several formulae have been bandied about, but the theory is at far too early a stage for these to be more than poster children for a useful definition. Some critics have dismissed IIT because these embryonic ideas are, well, so embryonic. For example, if you were to take some massive database and compress it into one file, you might find that, according to your preferred formula, the act of compression adds hugely to the integration of the output file. Yet nobody (except perhaps Tononi) would claim that a compressed archive of Google Earth is conscious.
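To see how easily a crude formula can be gamed, here is a toy sketch of my own. It is emphatically not any formula actually proposed for IIT; it simply uses zlib's compression ratio as a stand-in "integration" score, rewarding data whose parts are strongly inter-related:

```python
import zlib

def toy_integration_score(data: bytes) -> float:
    """Toy stand-in for an 'integration' measure: the fraction of the
    original size removed by compression. High redundancy between parts
    of the data yields a high score. NOT a real IIT formula."""
    compressed = zlib.compress(data, 9)
    return 1.0 - len(compressed) / len(data)

# A huge, highly repetitive "database": 100,000 copies of one record.
database = b"lat=51.5;lon=-0.1;" * 100_000

score = toy_integration_score(database)
print(f"score = {score:.3f}")  # very close to 1.0: maximally "integrated"
```

The archive scores almost perfectly, yet the number measures nothing but redundancy removed. No sane person would infer an inner life from it, which is exactly the trap a naive integration formula falls into.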
In truth, all these early models have shown is that we do not yet understand the brain enough to create a useful definition of conscious-level integration. Indeed, the failure to incorporate dynamic information flow and transformation into the model is disastrously naive. The qualitative and time-related characteristics of the integration are at least as important as the sheer complexity and extent. For example one might expect complex dynamic cross-relationships, far more layered than mere data compression. Along with a high level of dynamic flow or change, key structures almost certainly need to be present: internal representations of time passing while things happen, of one's self as a conscious entity, and so forth.
But to dismiss the theory on account of its crude first steps is unfair. It would be more useful to propose ways of taking the theory forward.
In the theory of mind, there is an issue known to philosophers simply as "the hard problem". In a conscious mind, every nuance of information is accompanied by a subjective experience. For example, every time my brain signals it has seen something red, I experience a visual quality of redness. We say that the particular brain signal and the particular experience are "correlates" of one another. The trouble is that no matter how exactly and minutely anyone may describe that pattern of visual information, nowhere does the subjective quality of redness appear in that description. Nor does asking me help you very much. I will just say, "Yes, it was red", which you could have predicted from the brain signal anyway. But what I cannot communicate to you is what that quality of redness felt like. You too probably feel an experience of redness when you see something red, but equally you cannot explain its quality to me. On the other hand, perhaps you are colourblind and experience something else. Worse still, whether you are colourblind or not, there is no way of telling whether your experience and mine are anything like each other's. I have no idea of how you, personally, experience redness and, apparently, no way of ever knowing. Explaining these gaps, between the physical neural phenomenon and the internal "quale" of each different person's experience, is the hard problem.
Some people hotly deny that there is any problem. They see a complete, logical identity between the brain signals and the inner experience: there is no gap because they are just the one thing. And indeed, any good logician will tell you that if two things are, by any applicable yardstick, identical in character, then they must be the same thing. The argument then goes that, since inner experience is inaccessible to objective science, the only possible scientific yardstick is the measured brain signal. This signal can then be correlated anecdotally to the quale, but that is just how the brain signal manifests in the mind; it is not something additional or of a wholly different kind. This is in essence a flat denial that there is any kind of distinction between a pattern of signals in the brain and the feeling of redness that correlates with it. One might note that it is hard to argue with a flat denial; one can only disagree.
However, IIT introduces a hugely significant subtlety into the picture, by treating the immediate correlate of the quale as information. No two brains are wired exactly alike, no two signals directly comparable synapse-by-synapse. The best we can do is to establish what information the signal is carrying. Thus, we have a triptych of correlates - neural activity, information content and subjective experience. The exact wiring patterns differ between individual brains, and change over time even within an individual brain. Yet the information (of a certain shade of red) remains unchanging. Maintaining a logical identity across all three is impossible. And even if you don't buy IIT, the information content of the brain signal is undeniable. That flat denial really is missing a critical fact: we cannot tell whether the quale remains as constant as the information content.
Thus, IIT helps to illustrate the nature of the hard problem and to highlight its intractability. However the theory is quite unable to resolve the problem. Its first axiom states that consciousness exists; it is a done deal. All the others do is model the information it attaches itself to. In doing this, it in effect acknowledges the hard problem but deliberately sidesteps any attempt to grapple with it.
For example, suppose that we eventually produce some marvellous and ingenious equation for integrated awareness, which captures a threshold of consciousness that doctors can reliably apply to patients, simply by plugging their brain scanner into a computer. The computer reports a conscious sensation of redness, the doctor shows the patient a red card and the patient says, "Yes, it is red". What is this quality of redness that has attached itself to the mental information and the patient has experienced? IIT leaves us not the slightest bit wiser than we have ever been.
Besides its gross mathematical immaturity and its failure to account for dynamical complexity, IIT lacks philosophical rigour and explicitly fails to address the hard problem.
The first three problems can potentially be put right. Giving its axioms a professional philosophical makeover would be a good start. Turning its mathematical expression into a dynamic flow (would it be naive to suggest a derivative with respect to time along the lines of Φ = dp/dt where p is the complicated bit?) might help open the path to a more realistic mathematical model. But it had best avoid the hard problem and make no pretence of meeting it.
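To spell out that parenthetical suggestion a little (and it remains only a suggestion, with p standing for whatever static integration measure eventually survives scrutiny), the idea is that consciousness attaches to the rate of change of integration rather than to its static level:

```latex
\Phi(t) = \frac{dp}{dt}, \qquad \Phi = 0 \text{ whenever } p \text{ is constant}
```

One pleasing consequence: for a static array p never changes, so Φ vanishes and the mile-high printout scores zero consciousness, exactly as it should, whereas a purely static measure could score it arbitrarily high.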
All that the theory really is at the moment is the idea that consciousness is a property of information rather than of physical objects, and it requires a high degree of organised complexity. It may one day help us to quantify consciousness, which might be a help to neurologists, animal psychologists and artificial intelligence researchers, but it can never explain it. At best it might shed light on the relationships between such things as intelligence, conscious states, sentience and self-awareness. Ultimately, it can never be more than a theory of mental information. It should stop pretending to be anything else.
Updated 1 Jun 2021