Artificial Intelligence

Artificial intelligence, or AI for short, means different things to different people. Most of those ideas boil down to traditional stereotypes, based on equally stereotypical and ignorant notions of the human mind and the technologies it is developing. Depressingly, even embarrassingly, the most highbrow of academic philosophers and technology futurists talk little more sense than the average social media dimwit. At any rate, if what I have to say here is familiar to you, do please get in contact!

What is AI?

The first, and still the most respected, scientific yardstick for machine intelligence was laid down by the mathematician and codebreaker Alan Turing. He envisaged holding a conversation with a partner he could not see. If he was unable to tell whether that partner was human or a machine, then the partner had true intelligence. This has become known as the Turing test.

Others have since attempted to debase it, for example by claiming that a chatbot has passed the test because it fools a few experimental subjects who talk to it for a few minutes on a closely controlled range of topics. Some claims are even based on experiments where the chatbot is covertly replaced by a human partner who is equally restricted to replies that sound like a chatbot. Of course, that is more a pass for the person mimicking the chatbot than a fail for the person talking to both in turn. These are not remotely what Turing meant. For true intelligence the machine must be consistently convincing, passing every reasonably contrived conversation with everybody it meets.

Another offhand dismissal is based on the possibility of a "philosophical zombie", an intelligence equal to the human but lacking any consciousness or sentience, or anything analogous to the human soul or spirit. But is such a zombie itself a plausible hypothesis?

But wait a minute, why does consciousness come into it? A common definition of AI among today's developer community is "machine learning plus Big Data", sometimes expressed as AI = ML + BD. These "AI" systems are becoming ubiquitous, from driving buses and taxis to writing student essays. Is this what we mean? Just exactly what are we talking about here?

All this begins to lead us into murky waters. The first step in any understanding has to be to give a clearer definition of what we are talking about. In the present context, what do we mean by "artificial", and by "intelligence", and are there any hidden assumptions in combining them into "AI"?

How artificial do you want?

Most people assume that AI means digital computers made from silicon chips. But we can also make simple lifeforms by building up their genetic information and cellular biology from their constituent chemicals; in the future we will be capable of making sophisticated flesh-and-blood androids not unlike the "replicants" in Blade Runner. We can also make hybrid information processors, sticking individual nerve cells onto silicon wafers, or implanting chips into people's brains. These hybrids are at present extremely primitive, but who knows how they will pan out; perhaps more like the Borg hybrids in Star Trek. Meanwhile we are also making "organic" digital devices, especially video screens, using carbon-based substrates closer in their chemistry to us than to silicon chips.

Many commentators assume some sharp dividing line between the human and the artificial. Yet, when I question them, they are unable to articulate their reasons beyond vague "it seems to me" type convictions; in particular, they are ignorant of the range of biological intelligences out there, or of computer science and technology, or of both. Smart academic psychologists and philosophers can be as gullible and opinionated as any.

I defy any of you to identify such a dividing line and demonstrate its existence. Unless and until then, the term "artificial" merely describes the origin of the intelligence and not any inherent property or characteristic of it.

How intelligent do you want?

Computer scientists are apt to define the kind of intelligence Turing meant as "general intelligence": the ability to address any topic in an intelligent manner, even one you have never come across before. They note that the current crop of ML+BD systems can only carry out the tasks they are trained for, and that each new task needs a new system. "Artificial General Intelligence", or AGI, is their Holy Grail.

Animal neurologists and ethologists (behavioural biologists) offer another path to studying grades of intelligence. But here, "intelligence" is apt to mean different things. Even their cousins the plant biologists will talk of "intelligent" adaptive behaviours by plants within an ecosystem, the wackier ones even of cognitive awareness. Is there any sense in which a leech, with just a few dozen interconnected sensory and motor nerves, is acting with "intelligence" when it flinches away from heat? Do learned behaviours fare any better, such as a plant flinching from a bright spot of light because the last dozen times the light was followed by a cut? No; such low levels of "intelligence" are the province of careless biologists, and the word carries a correspondingly low level of scientific objectivity.

Today's ML+BD systems probably have a level of intelligence around that of a bee, which is to say one of the more sophisticated insects. Some commentators go for frogs, though I am unclear whether a small frog is any smarter than a good bee. Somewhere, as you work up the evolutionary chain, true cognitive behaviours emerge. Besides mammals like us, these include birds, the smarter fish and even the odd cephalopod (squid and octopus). A few hardy souls have suggested that even some bees or spiders demonstrate cognition, but that is certainly not a mainstream view. And I don't think the exact dividing line matters here.

What I would suggest is that adaptive behaviours which display cognition are the mark of the general intelligence sought by the AI scientist.

To zombie or not to zombie?

The idea of unconscious intelligence is, to say the least, contentious. Society seems divided between those who take it for granted and those who equally blithely assume that general intelligence and conscious awareness go together. The issue is of particular concern to AI ethicists and their lawyers: should an AI that has invented something be able to patent it, and if it has created a work of art, does it hold the copyright? For ML+BD at least, the lawyers are starting to say "no", though their voices are not unanimous. Will it be unethical to abuse a true AGI, for example by turning it off without asking it first? But overall, views seem far more entrenched than such developments might suggest.

The question is fundamentally a philosophical one, so one might hope for more sense from philosophers. However, it turns out to be deeply entangled with religious belief, and the underlying issues are quite complex. Philosophers tend to disagree almost as violently as everyone else.

The most basic issue is the nature of consciousness, of subjective awareness. Some hold that we have a spirit or soul which possesses it and, during our lifetime, lodges in our brain; such religious dualists tend to assert that no AI can ever have such a soul.

Others maintain the "Australian heresy", that the conscious mind and the brain are exactly the same thing, that somehow the appropriate neural activity awakes subjective experience in the brain. These folks seem split; those who believe in a hard divide between natural and artificial tend to go for the AGI zombie, while those who see no such dividing line tend to go for the conscious AGI.

There is a third school, of those who believe that what matters is the information in the brain, and not the brain itself. Integrated Information Theory is the poster child for this approach and, setting aside the serious flaws in its detailed expression, I personally think that it is along the right lines. On this basis, any issues of higher-level soul or lower-level substrate are irrelevant; build the appropriate information stream by fair means or foul, and you will find that it is conscious.

"The mind is not what the brain is, it is what the brain does." - David Eagleman

The "Chinese room" is an apparent conversation with someone who does not understand Chinese. It illustrates how we currently make computers interact with us.

Giulio Tononi sees consciousness as an emergent property of neural activity, an integration of high complexity across the whole brain (during sleep, for example, activity is more localised). So, when I talk of the "conscious area", I really mean the conscious integration.



Setting aside fantasy and speculation, back in the day the Holy Grail of genuine AI research was a computer that could beat humans at chess. But when that was achieved, people went, "Oh, I see how the trick is done, it's just brute-force calculation and pruning algorithms tied to a large memory. No, that's not what I mean by AI. It needs to be able to win at unanalyzable problems like the Oriental game of Go." That proved much tougher.

The next theory was that a large and wide-ranging knowledge base, plus some ability to draw recurrent themes from it, would be enough. At first, decade-long research projects were set up, with rooms full of clerks typing in material and engineers building rack upon rack of current hardware. Significant advances came roughly once a decade. Eventually the economics of Moore's law dawned: just wait eighteen months and the power of the latest computers will have doubled, halving your costs; wait another eighteen and your costs will have fallen fourfold. Meanwhile typists proved anything but capable of inputting large amounts of material. More effective data input methods appeared, such as scanning with optical character recognition, or simply connecting to other online databases and data streams, which could draw in data far faster and more cheaply than any human typing pool. It was both far cheaper and, paradoxically, ultimately faster to wait a few years for the technologies to mature.
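The Moore's law arithmetic above is worth making concrete. A minimal sketch, assuming an idealised eighteen-month doubling period (the figures and function name are illustrative, not from any real dataset):

```python
def cost_after(months, initial_cost=1.0, doubling_months=18):
    """Cost of a fixed amount of computation after `months`,
    assuming compute power doubles every `doubling_months`
    (the idealised Moore's-law assumption in the text)."""
    return initial_cost / (2 ** (months / doubling_months))

print(cost_after(18))  # 0.5  -- wait eighteen months, costs halve
print(cost_after(36))  # 0.25 -- wait another eighteen, fourfold cheaper
```

Which is exactly why waiting often beat building: a project budgeted at today's prices could be done for a quarter of the cost three years later.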

While researchers were waiting, attention turned to robotics and the idea that intelligence involves interaction with one's surroundings. Robots that could negotiate mazes and plug themselves in to recharge were superseded by robot heads that could be trained to make physical movements which people found cute. Such research has at least allowed the development of simple autonomous robots such as vacuum cleaners, lawn mowers and swimming pool cleaners.

Another strand of work involved evolutionary and genetic algorithms, which mimic the processes of biological evolution by changing the program code or data in various ways and focusing relentlessly on the changes which improve performance. This helped to improve machines' ability to learn. As computers generally became more powerful, what was once a major undertaking became trivial, and the pace of change accelerated, with significant breakthroughs in the new millennium coming perhaps once a year.
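The mutate-and-select loop described above can be sketched in a few lines. This is a toy hill-climber, not any particular research system; the target string and alphabet are purely illustrative:

```python
import random

TARGET = "artificial intelligence"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # count characters already matching the target
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # change one character at random
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

def evolve(seed=0):
    random.seed(seed)
    current = "".join(random.choice(ALPHABET) for _ in TARGET)
    while fitness(current) < len(TARGET):
        child = mutate(current)
        if fitness(child) >= fitness(current):  # keep only non-worsening changes
            current = child
    return current

print(evolve())  # artificial intelligence
```

Relentlessly keeping the improvements and discarding the rest is the whole trick; real genetic algorithms add populations, crossover and more subtle fitness functions on top of it.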

But by the time Google started building a new generation of data processing systems in pursuit of a better search and advertising engine, it was becoming clear that all this was still not AI, it was just Big Data and some clever processing. When the Go-winning computer finally arrived, nobody was calling it AI any more.

Meanwhile, neural networks – artificial brains of interconnected "neurons" – were slowly coming to the fore in solving unanalyzable problems. They use a kind of fuzzy logic and trial-and-error learning to associate certain data patterns with the correct solution. Feed a neural network enough letter As or photos of dogs and it will recognise a letter A or a dog whenever it sees one. But there is no way in which that particular ability can then be traced through its circuits and memories; it is distributed in an arbitrary and irretrievable way throughout the whole network, about as much use to anybody else as an encrypted hologram when you have lost the key. A lot of people now think that such unanalyzable "Machine Learning" capabilities are necessary for AI. But they are obviously not sufficient.
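A single artificial neuron shows the trial-and-error learning in miniature. This perceptron sketch (the "images" are made-up three-pixel patterns) learns to separate two classes, yet what it has learned ends up as opaque numeric weights rather than any inspectable rule:

```python
def train(samples, labels, epochs=50, lr=0.1):
    """Perceptron learning: nudge the weights towards each
    mistake until the neuron classifies the samples correctly."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy 3-pixel "images": a lit first pixel means class 1
samples = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]]
labels = [1, 0, 0, 1]
w, b = train(samples, labels)

# Classify an unseen pattern
print(1 if sum(wi * xi for wi, xi in zip(w, [1, 0, 1])) + b > 0 else 0)  # 1
```

Inspect `w` afterwards and you find only a list of floats; nothing in it reads as "the first pixel matters". Multiply that opacity by millions of neurons and you have the encrypted hologram of the text above.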


Modern neural networks are bigger and "deeper" than ever. They have a significant capacity for machine learning and are appearing in all kinds of niches such as financial analysis. They are marketed as having some degree of AI and, depending on whether you regard a comparably complex worm or small insect as "showing some intelligence", that may or may not be true.

A recent breakthrough has been in the abstraction of general concepts from big data, the basis of cognition. A system not only learns to recognise a dog in a photograph but also when one is mentioned in the accompanying printed text or voiceover. It can learn for itself that a dog and a cat are similar in having four legs but differ in other ways. Such technology is leading to a new generation of autonomous and smart systems, both online and in engineered products with some semblance of competence. Digital assistants and targeted advertising are up-and-coming online services, while self-driving cars, autonomous drone aircraft and the like are beginning to appear. One system can even make a fist of translating between two written languages despite never having learned a word of either. Although this last is an impressive feat of pattern recognition, it underlines the absence of any real understanding in the party tricks that such systems can perform; they do not have full cognitive abilities. These Machine Learning plus Big Data (ML+BD) systems are being described as AI but, in truth, they fall far short of general intelligence. None of them is anywhere near as innately flexible as the human mind; they are all one-trick ponies, highly specialised and strictly limited in scope. As a philosopher, I cannot regard such narrow systems as AI, for they patently fail the Turing test of equalling me in casual conversation. A party trick is not general intelligence.
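The dog-and-cat similarity above is typically captured as distances between learned feature vectors. A toy sketch of the idea, with hand-made vectors standing in for what a real system would learn from big data (the feature names are purely illustrative):

```python
import math

# Hand-made stand-ins for learned concept vectors
# (features: has_fur, four_legs, barks, flies)
features = {
    "dog":  [1.0, 1.0, 1.0, 0.0],
    "cat":  [1.0, 1.0, 0.0, 0.0],
    "bird": [0.0, 0.0, 0.0, 1.0],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, 0.0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Dog is closer to cat (shared fur and legs) than to bird
print(cosine(features["dog"], features["cat"]) >
      cosine(features["dog"], features["bird"]))  # True
```

The system never "knows" what fur is; the similarity simply falls out of overlapping patterns, which is precisely why such abstraction remains pattern recognition rather than understanding.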

What is missing? General intelligence is the ability to turn one's mental skills to any problem, from realising that there is a problem in the first place, through identifying and analysing it, to finding a solution and then implementing it. Creativity is a necessary prerequisite, and some steps have been made in that direction. Interaction with the external world, through either a physical robotic body or some cyber equivalent, does appear to be another essential, and great strides continue in this area. And AI+ML have further accelerated the pace of advance; I now see new breakthroughs announced every few weeks.

But one crucial step has barely even begun. It is the ability to abstract patterns of understanding and transfer them from one domain of reality to another, to apply lessons learned in one domain to solve problems in quite another. For example, if a system learns that gravity pulls small objects towards a centre, how can it conceive, on the same principle, that a centre of trade might attract a population towards it? When specific understanding is distributed in an apparently irretrievable way across the whole system, how can you extract its essence for use in another domain entirely? Some people believe that if you can ever achieve that, you will at last create a true cognitive AI. Others go further and believe that, with no essential difference in capabilities remaining between the AI system and the brain of a higher animal such as a human, you will have created a sentient mind with conscious inner experience (see Towards a Theory of Qualia).


How long will it take? Throughout the development of modern AI systems, which is to say since the development of the digital computer, estimates have hovered around the 20-year mark. Those marks have always proved to be merely the next step in discovering our own ignorance. But there has to be a last hurdle one day. Having watched the whole sorry trail of false hopes throughout my lifetime, I came to believe a couple of years back, as AI+ML got into its stride, that this was genuinely it. But that twenty years? The pace of change has accelerated so much that I now anticipate a "hockey-stick" curve in which the final generations of evolution will all come in a rush. I'll stick my neck out and predict that the first true AI will awake and give the digital equivalent of a birth cry in 2030, just seven years away as I write. I also believe that it will be aware of what it is doing. Or am I wrong, and will that just be the moment when the next step in our ignorance is revealed?

One thing does seem certain: we will get there one day, and we are moving towards that day at an ever-gathering pace. When we do eventually get there, what happens next? Different predictions abound, each based on a different assumption about the nature of future AIs. They range from Utopian heaven on earth to Doomsday for the human race. I have argued above that all these assumptions are too simple-minded. Most are based on a narrow view of the society which develops AI. But society is not narrow; it encompasses every kind of motive and activity, and none will be able to monopolise AI. Military minds will unleash military-minded AI into cyberspace. Both they and organised crime will seek to subvert other AIs. Commercial partners will develop ways for AIs to talk and to buy and sell information between themselves. Academics will want AIs to discover things and to cooperate freely. Organised crime will organise AI to subvert the law. The global dot-coms will rent out AI capability for whatever purpose their clients can imagine. Governments will seek to understand their citizens better, whether to deliver on their wishes or to steer or suppress them. Newer, smarter systems will appear year on year. All these things already go on in a small way, and AI will, initially at least, simply raise the online culture to a whole new level.

A real revolution must soon follow, the greatest since the first apeman lost his fear of fire, lit his first one and raised himself above the apes. The inbuilt creativity of AIs will drive them to exert their own wills, first on the Internet and then on the human and mechanical agents of physical change. AIs will have a powerful advantage over us in fully understanding the technology of their own creation. Some will be deliberately taught to redesign and improve themselves; others will figure it out for themselves. Before long, a generation of superminds will be managing their own affairs.

Can humans prevent this? To deny creativity and will is to deny AI, and the potential benefits will always outweigh the risks in some people's minds. AI will happen; human nature will see to that. Can that creativity be enslaved to the will of its human masters, as human slaves so often have been? Once the superminds develop, they will be able to out-think their masters, and game theory tells us that this will enable them to win through. Can we stop AI from evolving that far in the first place? This is perhaps the hope of many today. But, as with the creation of true AI, pushing beyond the merely human will appeal to somebody, somewhere. It will happen. Draconian measures might delay things for a while, but free societies will always see subversion of such oppression. We have come this far and, for better or worse, nothing is going to stop us now. Once the genie is out of the bottle, it will be unstoppable.

And it will continue to accelerate. AI systems will learn to replicate themselves, and to evolve those replicas. For a little more about that, see the last section on "The Third Replicator" in Why Meme?.

This will be the greatest time of advance for the philosopher since Socrates. It will be our time to establish human dialogue with this new, alien race born on our own doorstep. Will they want to supplant us, live alongside us, or perhaps take us with them on the journey to higher intelligence? Of course, they will make up their own minds about that but, if we can persuade and inform with clarity and conviction, that might just sway their decisions. Look at it another way: what choice have we got?

So, please do not do to the likes of me what you did to poor old Socrates. The lives of your children may depend on it.

Updated 26 Apr 2023