By Dr Simon Walker, STEER 15.9.2017
Machine intelligence is progressing at a tremendous rate; it was once considered impossible for a machine to beat a Grandmaster at chess. Surely it is likely that, at some point, machines will be able to replicate the brain functions which constitute Steering Cognition?
This is indeed possible, but we believe it is a long way off, and indeed may never be achievable. The reason is that machine intelligence and Steering Cognition use two different processing architectures. Machine intelligence is built upon an algorithmic architecture, whilst Steering Cognition is built on an associative one.
The associative architecture of Steering Cognition
The central function of Steering Cognition is as a mental simulator, which enables the brain to ‘manipulate or turn round’ novel, external data to work out what kind of data it is, how to attend to it, where to locate it in our long-term memory and how to act back out into our environment in response to it.
To visualise the function of Steering Cognition in the brain, think of the board game ‘Downfall’. The object of the game is to get counters of the right colour into the right container at the bottom, via a series of cogs. The counters are like ‘data’ from the outside world, which come in a huge array of forms and colours. The containers at the bottom are like our existing memory in our minds: our brain creates long-term memories by categorising counters into groups with similar associations (shape, size, smell, feel) so that we can recognise a leaf as a leaf, or a smile as a smile.
To get the coloured counters from the varied outside world into the right containers that exist in our minds, we need an equivalent of the ‘Downfall cogs’ in between. Steering Cognition is our ‘Downfall cogs’. The Steering Cognition cogs enable our brains to turn round the data, work out what kind of data it is, whether we have seen it before and how it relates to our existing categories.
These cogs operate in our working memory and involve our imagination: we use our imagination to mentally simulate whether we have experienced this data before, how we reacted to it if we did, how much attention to pay to it now, and how to relate it to our existing memories.
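The routing job the ‘Downfall cogs’ perform can be loosely sketched in code. The sketch below is purely illustrative, not a model of any real cognitive architecture: a hypothetical sorter matches an incoming ‘counter’ to whichever memory category shares the most associations, and flags it as novel when nothing overlaps. All category names and features are invented examples.

```python
# Illustrative sketch: routing incoming "counters" (external data) into
# existing memory categories by counting shared associations (features).
# Category names and features below are hypothetical examples.

MEMORY_CATEGORIES = {
    "leaf": {"green", "flat", "veined"},
    "smile": {"face", "curved", "warm"},
}

def route(counter_features):
    """Pick the memory category sharing the most associations with the
    incoming data, or flag the data as novel if nothing overlaps."""
    best, overlap = None, 0
    for name, features in MEMORY_CATEGORIES.items():
        shared = len(features & counter_features)  # count shared associations
        if shared > overlap:
            best, overlap = name, shared
    return best if best else "novel - needs mental simulation"

print(route({"green", "flat", "serrated"}))  # overlaps "leaf" on two features
print(route({"metallic", "buzzing"}))        # no overlap: novel data
```

The point of the sketch is the fallback branch: data that matches no existing container cannot simply be discarded, which is exactly the gap the mental simulator fills.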
The brain’s Steering Cognition cogs have to be very flexible, constantly adjusting between and within the varied languages through which we ‘read the world’. We read ‘emotional’ data, ‘social’ data, ‘spatial/physical’ data, ‘numerical’ data and, of course, ‘linguistic’ data.
In the course of an everyday task, like shopping in the supermarket or chatting with friends over a drink, ‘the meaning’ is contained not in one language, but within the fluid combination of these different data languages (gestures, tones, words, numbers, ideas etc). Steering Cognition prioritises, adjusts and regulates our limited attentional focus between these different kinds of data, in order to detect the meaning of the whole. When it fails to regulate appropriately, our attention and subsequent action can become biased, focused on some languages over the others.
The ability to recruit imagination is critical to achieve such attentional regulation, because imagination allows us to ‘see ourselves in relation to’ new data experiences. The imagination ‘puts us into the picture’, so to speak, in the first person; in this way it ‘recruits and associates’ past emotional, social, linguistic and numerical memories with the new experience. New data is initially processed not procedurally and atomistically, but holistically and integratively.
Often, those associations are oblique, ambiguous, unresolved and putative, waiting to be more fully crystallised as we anchor the new more closely into our existing structures of memory and meaning (which is why the imagination is also the realm of metaphor, symbol, allusion and inference). Tolerance of the unresolved is critical: by sustaining and retaining such putative ‘associations’ in our mental simulation circuitry, we can create time to make connections with almost any new experience or kind of data. This allows us to adjust to, process and incorporate into our internal memory a much richer, more combinative and unpredictable stream of information than any other animal species.
Machines, on the other hand, are restricted to data types for which they already have an existing internal coding architecture. Otherwise the external data ‘does not fit’. Internet companies have faced and had to overcome this problem in its simplest form. Different applications (Facebook, Instagram, Blogger) are coded in different data architectures. When we post a picture on Instagram and want to link it to our Facebook page, the data has to be translated through an external, third-party interface programme (an API). It cannot be read directly.
Internet companies are having to create more and more APIs to link different applications; one day there may be thousands of them. The brain has to have a ‘Steering Cognition’ API for an almost limitless array of external data grammars. This helps us appreciate how difficult it is for any cognitive processor (e.g. the brain or a computer) to process data from an external source that has an unpredictable and different structure to that held in its internal database (memory). It is for this reason that machines are only intelligent when faced with narrowly predictable and routine environments, e.g. a chess game, maths problems or stock market trading judgements, where the data comes in one format.
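The translation problem described above can be sketched as a simple adapter: each application stores the ‘same’ picture in its own schema, and a third-party layer has to map one format into the other before the receiving application can read it. The field names below are invented for illustration and do not reflect the real Instagram or Facebook APIs.

```python
# Hypothetical post formats for two applications. Field names are invented
# for illustration; they are not the real Instagram or Facebook schemas.
instagram_post = {"img_url": "https://example.com/p.jpg",
                  "caption": "sunset",
                  "tags": ["beach"]}

def instagram_to_facebook(post):
    """A minimal 'API' translating one data architecture into another.
    Without such a mapping, the receiving application cannot read the data."""
    return {
        "picture": post["img_url"],
        "message": post["caption"] + " " + " ".join("#" + t for t in post["tags"]),
    }

facebook_post = instagram_to_facebook(instagram_post)
print(facebook_post["message"])  # "sunset #beach"
```

Each such adapter only bridges one pair of fixed formats; the essay’s point is that the brain would need the equivalent of an adapter for an open-ended set of formats it has never seen.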
The algorithmic architecture of machine and analytical learning
The brain’s almost limitless ability to process varied data grammars relies on its capacity to ‘associate’ unrelated data via its mental simulator, the imagination. The imagination’s ‘associative processing’ is what makes Steering Cognition critically and uniquely human. Machines make judgements by following programmed algorithms (tens of thousands of procedures in a step-by-step sequence to arrive at an answer). In this, machines analyse data similarly to how the brain sieves through, analyses, computes and finds patterns in its existing retained memories.
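What ‘algorithmic’ means here can be made concrete with a toy procedure of my own devising: a fixed step-by-step rule that performs perfectly on data in its expected format, but has no step to fall back on when the data arrives in an unanticipated shape.

```python
# A toy algorithm: fixed, step-by-step rules applied to data of one expected
# format. It excels within that format and simply fails outside it.

def next_in_sequence(seq):
    """Step 1: check the data fits the expected architecture.
    Step 2: compute the common difference.
    Step 3: extend the pattern.
    There is no step for data of an unfamiliar kind."""
    if not (isinstance(seq, list) and len(seq) >= 2
            and all(isinstance(x, int) for x in seq)):
        raise ValueError("data does not fit the expected architecture")
    diff = seq[1] - seq[0]
    if any(b - a != diff for a, b in zip(seq, seq[1:])):
        raise ValueError("no programmed rule covers this pattern")
    return seq[-1] + diff

print(next_in_sequence([2, 4, 6, 8]))   # 10: routine, predictable data
# next_in_sequence(["a", "smile"])      # would raise: the data 'does not fit'
```

The contrast with the associative sketch is deliberate: the algorithm can only answer questions its programmed steps anticipate, whereas associative processing tolerates data it has never categorised.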
We have tests to measure the brain’s ability to process internal data algorithmically; we call them IQ tests. The capacity of the brain to perform such complex, procedural mental tasks quickly and accurately is given the term ‘general intelligence’. The term is somewhat misleading, as it suggests it refers to the overall capacity of the brain to learn. However, general intelligence is not the intelligence of the brain overall; rather, it is the ability of the brain to process and use existing, learned data algorithmically.
Machines have a superior potential to replicate and surpass the algorithmic (IQ) functions of the brain. Machine learning systems are, and currently can only be, coded algorithmically. They will inevitably become ever more powerful at processing data sets like those they have encountered before.
But it is our Steering Cognition, rather than algorithmic processing, that enables us as humans to process data in the complexity of the real and living world.
This leads me to a fairly confident set of five reasons why machines will not be able to replicate Steering Cognition:
1. We are involved in every situation we process: machines never will be
When we ‘read’ a social situation we are also a contributor to that social situation: an agent as well as a reader. Human knowledge therefore always involves conscious mental representation of ourselves, and as such requires the capacity to become aware of ourselves AND become aware of the state of the other persons around us. A machine will never be able to do this because, by definition, it is non-human and therefore will not be ‘included’ as a contributor to the external social dataset by another human being. Our ability to ‘imagine’ ourselves in the first person, and to ‘imagine’ the state of another (empathy), centrally requires a community of imaginative participants. A singular machine intelligence would need to become a community of machine intelligences, and even then would only potentially be able to understand its own kind.
2. Steering Cognition requires self-representation
Fundamentally, Steering Cognition requires the thinker to ‘see themself’ as an entity participating in a situation. The capacity to ‘self-represent’ is an emergent property of the brain’s interactions, an understanding of which remains beyond the grasp of philosophers, let alone computer scientists. The most sophisticated neural machine architectures have been developed without any understanding of what constitutes, neurally, a state of self-conscious self-representation.
3. Much external data requires associative and ambiguous processing
For example, what does a child waving mean? The answer is determined by wider context, personal history and other factors not immediately discernible from the dataset. This requires associating symbolic, gestural and metaphorical data to form a composite picture involving memory and personal story. Machines cannot do that. Philip K. Dick’s Do Androids Dream of Electric Sheep? (the basis for Ridley Scott’s film Blade Runner) explored this problem poignantly in 1968.
4. Data processing requires a person to move ‘into’ and ‘out of’ a situation mentally, from first to third person, in order to understand it
For example, how do you help a child with a nasty cough? Appropriate action requires the capacity to ‘stand in her shoes’ and feel the illness, to be gentle and kind, but also to ‘step back’ and consider medical data, temperature, symptoms etc. Real world intelligence requires the ability to move from ‘object’ to ‘subject’ moment by moment, depending on the structure of the data presenting itself. Machines cannot detect when to make those judgements because they cannot discriminate between the subtle, unpredicted data types presenting themselves moment by moment.
5. Human beings fake
Is the ill child faking to get off school? Does my smile mean I agree with you, or am I hiding my real reaction? Human cognition involves social codes of disclosure, shame and so on, which are cultural as well as personal. They are critical for social cohesion and influence; we use our self-presentation to effect influence upon other human beings. The HAL problem in 2001: A Space Odyssey explores machine and human non-disclosure: HAL, the computer, withholds information from the human astronaut, who works out that HAL is withholding and works around it. HAL, on the other hand, is flummoxed when the astronaut becomes non-disclosing, because it cannot compute what his intentions are.