Making Sense of the World: from ‘Computations’ to ‘Transactions’

Sep 29, 2020

A FACT360 Blog Series – Part 1

Professor Mark Bishop,
FACT360, Chief Scientific Adviser
[email protected]

Join FACT360’s Chief Scientific Adviser, Professor Mark Bishop, over the next few weeks as he examines why computers sometimes perform poorly at understanding everyday aspects of the world, and the techniques FACT360 employs to overcome these shortcomings…

What is the creative force that directs my actions as I type these words? Where lies the body-magic that brings forth this perception of my world? Wherein ‘sits’ this mind? If science can fully respond to such questions then, one day perhaps, it will be possible to fully simulate the human ability to act mindfully, with intelligence, via computation. And if that is possible, doors will open to widespread automation of even the most challenging processes of the workplace, significantly reducing costs but also presaging unimaginable changes in long-term patterns of employment.

…why do computers sometimes perform poorly on activities that involve understanding everyday aspects of the world?

In a series of blogs unfolding over the coming weeks, I will guide readers on a spiral of two journeys, with the end goal of shedding light on why computers sometimes perform poorly on activities that involve engagement with, and understanding of, everyday aspects of the world. The first journey will begin with a series of ‘shallow dives’ into computation, human thought and language, before probing the outer limits of what can be achieved via ‘mere’ computation; the second will revisit these territories through a series of ‘deeper dives’.

computational ‘understanding’ of human social intercourse will continue to throw up deep problems

At the end of each journey, readers will have a clearer view of why computational ‘understanding’ of human social intercourse will continue to throw up deep problems. This core insight motivated the team at FACT360 to explore ‘Transactional Analytics’: a unique approach to AI and Natural Language Processing with roots in the work of trail-blazing mathematicians and scientists at Bletchley Park in WW2, one that emphasises the actions people articulate and the style in which they communicate over the hazy, shifting semantics of everyday human discourse.
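To make the contrast concrete, here is a minimal sketch in Python (standard library only) of the difference in perspective. The message records, field names and stylistic features below are purely hypothetical illustrations, not FACT360’s actual data model or feature set; the point is simply that who communicates with whom, when, and in what style can be computed reliably, while the ‘meaning’ of the text is left untouched.

```python
from collections import Counter
from datetime import datetime

# Hypothetical message records: sender, recipient, timestamp, text.
# (Illustrative only -- not FACT360's data model or feature set.)
messages = [
    {"from": "alice", "to": "bob",   "at": "2020-09-01T09:05",
     "text": "Please review the draft contract today."},
    {"from": "bob",   "to": "alice", "at": "2020-09-01T23:47",
     "text": "ok will do"},
    {"from": "alice", "to": "carol", "at": "2020-09-02T09:10",
     "text": "Could you confirm the figures by noon?"},
]

def transactional_features(msg):
    """Features about the act of communicating, not what the words 'mean'."""
    sent_at = datetime.fromisoformat(msg["at"])
    return {
        "pair": (msg["from"], msg["to"]),           # who talks to whom
        "out_of_hours": sent_at.hour >= 18 or sent_at.hour < 7,
        "word_count": len(msg["text"].split()),     # terseness as a style marker
        "is_request": msg["text"].rstrip().endswith("?"),
    }

# Aggregate the communication network: message counts per (sender, recipient) pair.
network = Counter(f["pair"] for f in map(transactional_features, messages))
print(network)
```

Note that nothing above attempts to interpret the sentences themselves: every feature is a fact about the transaction, which is exactly why such signals remain robust where semantic interpretation is fragile.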

I am already exploring many of these topics in my role as an AI expert on the EU Committee helping to shape European policy on ‘industrial specialisation’ and the ‘digital transformation of work’. And the transformation is already well under way.

47% of US employment is at risk of automation

One of the most cited studies into likely future workplace changes brought about by AI, the 2013 report by Frey and Osborne (of the Oxford Martin School, University of Oxford), predicted that some 47% of total US employment [across the 702 occupational categories examined] is at high risk of automation within the next decade or two: it ranked ‘Recreational Therapists’ and ‘First-Line Supervisors of Mechanics, Installers and Repairers’ as the occupations least likely to be automated, and ‘Title Examiners, Abstractors and Searchers’ and ‘Telemarketers’ as the most likely.

Driving this seemingly inexorable progress in AI is the dominant cognitive paradigm of much of the twentieth century, which seated the mind in the brain: if computers can model the brain then, so the theory goes, it ought to be possible for computers to act like minds; a ‘mechanical’ intelligentsia.

In the latter part of the twentieth century this insight – that intelligence is grounded in the brain – fuelled an explosion of interest in computationally realised ‘neural networks’: accurate, high-fidelity simulations of the brain (cf. ‘computational neuroscience’) and simpler approximations used to control intelligent machines (‘connectionism’). Indeed, these ‘Artificial Neural Networks’ have now reached ‘Grandmaster’ and even ‘super-human’ performance across a variety of games: from those involving perfect information, such as Go, to those involving imperfect information, such as ‘StarCraft’.
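For readers unfamiliar with the term, the sketch below shows in a few lines of illustrative Python the kind of simple approximation connectionism starts from: an ‘artificial neuron’ that weights its inputs, sums them, and passes the result through a squashing function. Real game-playing systems stack millions of such units and learn the weights from data; the weights here are made-up values purely for illustration.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs passed through a sigmoid."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # squash to the range (0, 1)

def tiny_network(x1, x2):
    """A two-layer network with hand-picked, purely illustrative weights."""
    h1 = neuron([x1, x2], weights=[0.5, -0.4], bias=0.1)    # hidden unit 1
    h2 = neuron([x1, x2], weights=[-0.3, 0.8], bias=-0.2)   # hidden unit 2
    return neuron([h1, h2], weights=[1.2, -0.7], bias=0.0)  # output unit

print(tiny_network(1.0, 0.0))  # a single number between 0 and 1
```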

…when AI gets things wrong the incident often rapidly becomes international news

Concomitantly, technological developments from AI labs have ushered in new applications throughout the world of business, where an ‘AI’ brand-tag is fast becoming ubiquitous. A corollary of such widespread commercial deployment is that when AI gets things wrong – an autonomous vehicle crashes; a chatbot exhibits ‘racist’ behaviour; automated credit-scoring processes ‘discriminate’ on gender, and so on – there are often significant financial, legal and brand consequences, and the incident rapidly becomes international news.

But if AI is getting ever better at tasks associated with extreme expressions of human intellect, why does it continue to make such egregious errors when engaging with more mundane human behaviours, such as understanding language or driving a car: everyday behaviours in which people typically demonstrate thorough competence?

In the next post I will embark on the first leg of the journey, examining why this is the case by looking in more detail at ‘Computation’. Follow me and follow FACT360 to make sure you don’t miss any of the series.

Read Part 2 here – What is Computation? (And why this matters)

Professor Mark Bishop is FACT360’s Chief Scientific Adviser. To see how these leading-edge scientific techniques can be applied in your organisation, download our White Paper “The Science of FACT360” or get in touch: [email protected].