Trouble with computation

Jan 6, 2021

A FACT360 BLOG SERIES – PART 5

Professor Mark Bishop,
FACT360, Chief Scientific Adviser

FACT360’s Chief Scientific Adviser, Professor Mark Bishop, is still waiting for the rise of the machines and, in his latest post in ‘The Science of AI’ series (read the first in the series here), he explains why classical AI will never replace actual intelligence…

In an interview on December 2nd, 2014, the BBC technology correspondent, Rory Cellan-Jones, asked Professor Stephen Hawking how far engineers had come along the path towards creating artificial intelligence and, slightly worryingly, Professor Hawking replied:


“Once humans develop artificial intelligence it would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”


Although grabbing headlines that week, such predictions were not new in the world of science and science fiction; indeed, my old mentor at the University of Reading, Professor Kevin Warwick, made a very similar prediction back in 1997 in his book ‘March of the Machines’. There, Kevin observed that, even in 1997, there were already robots with the ‘brain power of an insect’; soon, he predicted, there would be robots with the brain power of a cat, and soon after that machines as intelligent as humans. When this happened, Warwick predicted, the science-fiction nightmare of a ‘Terminator’ machine could quickly become reality, because these robots would rapidly become more intelligent than the humans who designed and constructed them, and superior in their practical skills.


The notion of mankind being subjugated by evil machines is based on the premise that all aspects of human mentality will eventually be instantiated by an artificial intelligence program running on a suitable computer; a so-called ‘Strong AI’. Of course, if this is possible, accelerating progress in AI technologies – driven both by the use of AI systems to design ever more sophisticated AIs and by the continued doubling of raw computational power every two years predicted by Moore’s law – will eventually cause a runaway effect, wherein the artificial intelligence inexorably comes to exceed human performance on all tasks: the point of ‘technological singularity’ popularised by the American futurologist Ray Kurzweil.
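
To see why such projections are seductive, consider the bare arithmetic of a fixed doubling period; the sketch below is purely illustrative (the two-year figure is Moore’s rule of thumb, not a law of nature).

```python
# Back-of-the-envelope Moore's-law arithmetic: raw compute doubling
# every two years. Illustrative only; not a forecast.

def compute_growth(years: float, doubling_period: float = 2.0) -> float:
    """Multiplicative increase in raw compute power after `years` years."""
    return 2 ** (years / doubling_period)

for years in (10, 20, 50):
    print(f"after {years} years: x{compute_growth(years):,.0f}")
# after 10 years: x32
# after 20 years: x1,024
# after 50 years: x33,554,432
```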


And at the point this ‘singularity’ occurs, so Warwick, Kurzweil and Hawking suggest, humanity will have effectively been “superseded” on the evolutionary ladder and we must anticipate eking out our autumn days gardening and watching cricket; or, as in some of Hollywood’s more dystopian visions, being cruelly subjugated, or even exterminated, by our mechanical offspring.


I did not endorse these concerns in 1997 and I do not endorse them now. Indeed, there are many reasons why I am sceptical of the grand claims made for future computational artificial intelligence, not least empirical ones, for the history of the subject is littered with researchers who have claimed a breakthrough in AI, only for it later to be judged harshly against the weight of society’s expectations. All too often such claims exemplify what the philosopher Hubert Dreyfus called the ‘first step fallacy’: climbing a tree undoubtedly takes a monkey a little nearer the moon, but tree climbing will never deliver a would-be simian astronaut onto the lunar surface.


Of the many reasons why computational AI has failed to deliver on its ‘Grand Challenge’ of replicating human mentality in all its raw and electro-chemical glory, in this blog I will foreground the three problematics that I am most familiar with:

  1. Computers lack genuine understanding: in his 1980 ‘Chinese Room argument’[1], the American philosopher John Searle famously demonstrated how a computer program can appear to understand Chinese stories (by responding appropriately to questions about them) without genuinely understanding anything of the interaction (cf. a small child laughing at a joke she doesn’t understand); a toy illustration of this point follows the list below.
  2. Computers lack [mathematical] insight: in his 1989 book ‘The Emperor’s New Mind’[2], the Oxford mathematical physicist Sir Roger Penrose argued that the way mathematicians provide their ‘unassailable demonstrations’ of the truth of certain mathematical assertions is, in general, fundamentally non-algorithmic and non-computational; a brief note on the Gödelian point behind this argument also follows the list.

  3. Computers lack consciousness: in my 2002 argument ‘Dancing with Pixies’[3], I demonstrated that if a computer-controlled robot experiences a conscious sensation as it interacts with the world, then an infinitude of consciousnesses must be present in all ‘open physical systems’ – in the very cup of tea that I am drinking as I type these words. If we are to reject this promiscuous ‘panpsychism’ then, it seems to me, we are equally obliged to reject the hackneyed science-fiction trope of a [computationally] conscious robot; the final sketch below illustrates the state-mapping at the heart of this argument.
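
To make the first point concrete, here is a deliberately crude toy (my own illustration, not Searle’s formal setup): a program that answers questions about a story by pure symbol lookup, responding ‘appropriately’ while understanding nothing – the strings could just as well be uninterpreted Chinese characters.

```python
# A toy 'Chinese room': questions are mapped to answers by a rulebook of
# uninterpreted symbol strings; the program answers appropriately without
# any grasp of what the symbols mean.

# The story the rulebook's answers concern (the room never parses it).
STORY = "A man went into a restaurant and ordered a hamburger."

RULEBOOK = {
    "Where did the man go?": "Into a restaurant.",
    "What did he order?": "A hamburger.",
}

def chinese_room(question: str) -> str:
    # Pure syntactic lookup; no semantics enters at any point.
    return RULEBOOK.get(question, "I cannot answer that.")

print(chinese_room("What did he order?"))  # -> A hamburger.
```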

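The second argument is, at root, Gödelian: in ‘The Emperor’s New Mind’ Penrose builds his case on Gödel’s first incompleteness theorem, which may be stated roughly as follows (my paraphrase, not Penrose’s wording).

```latex
% Gödel's first incompleteness theorem, informally: for any consistent,
% recursively axiomatisable formal system F strong enough for basic
% arithmetic, there is a sentence G_F that F can neither prove nor refute,
\[
  F \nvdash G_F
  \qquad\text{and}\qquad
  F \nvdash \neg G_F ,
\]
% and yet, standing outside F, a mathematician who accepts that F is
% sound can recognise that G_F is true ($\mathbb{N} \models G_F$).
% Penrose argues that this act of recognition cannot itself be the
% execution of any fixed algorithm such as F.
```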

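The third argument trades on a Putnam-style observation: over any single run, the successive states of an arbitrary open physical system can be mapped, one-to-one in time, onto the state trace of a finite-state machine, so that the physical system counts as ‘implementing’ that run. The sketch below is my own toy construction of such a mapping, not the paper’s formal argument.

```python
# Toy Putnam-style state-mapping: any sequence of distinct physical
# states (snapshots of a cooling cup of tea) can be paired with the
# state trace of one run of a finite-state machine, so the tea
# trivially 'implements' that run of the computation.

fsm_trace = ["q0", "q1", "q0", "q2"]  # one execution trace of some FSM

# Distinct physical snapshots, e.g. the tea at successive clock ticks.
tea_states = [f"tea@{t}s" for t in range(len(fsm_trace))]

# The mapping always exists: i-th physical state -> i-th machine state.
implementation = dict(zip(tea_states, fsm_trace))

for physical, computational in implementation.items():
    print(f"{physical} realises FSM state {computational}")
```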
The success of any one of the above arguments fatally undermines the notion that the human mind can be completely instantiated by mere computation. If I am correct, although computers will undoubtedly get better and better at many particular tasks – say, playing chess, driving a car, or predicting the weather – there will always remain broader aspects of human mentality that future AI systems will not match. On this conception there is an unbridgeable gap, a ‘humanity gap’, between the human mind and mere ‘digital computations’; although raw ‘compute power’ – and the concomitant AI software – will continue to improve, the combination of a human mind working alongside a future AI will continue to be more powerful than that future AI system operating on its own. To paraphrase the rap pioneer Gil Scott-Heron: ‘The Singularity Will Not Be Computerised’.


Furthermore, it seems to me that, without understanding and consciousness of the world, and lacking genuine creative [mathematical] insight, any apparently goal-directed behaviour in a computer-controlled robot is, at best, merely the reflection of a deep-rooted longing in its designer. And, lacking the ability to formulate its own goals, on what basis would a robot set out to subjugate mankind – unless, of course, in an echo of a James Bond-style super-villain, it was explicitly programmed to do so by its [human] engineer?

Luckily, at FACT360, we are not looking to subjugate mankind when we utilise AI, preferring the more mundane objective of uncovering the useful ‘unknown unknown’ information that exists in communication networks. And in the next blog I will be looking at some of the theory behind the techniques we use to do this and how they are superseding classical AI and providing a pathway out of the computational jungle.

Read Part 6 here – “Embodied, Embedded, Enactive & Ecological – New Cognitive Paths Out of An Old Computational Jungle”

Professor Mark Bishop is FACT360’s Chief Scientific Adviser. To see how these leading-edge scientific techniques can be applied in your organisation, download our White Paper “The Science of FACT360” or get in touch.


[1] Searle J. Minds, Brains and Programs. Behavioral and Brain Sciences 1980; 3(3): 417-457.

[2] Penrose R. The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford: Oxford University Press; 1989.

[3] Bishop JM. Dancing with Pixies. In: Preston J, Bishop JM, editors. Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford: Oxford University Press; 2002. pp. 360-379.