AI like HAL 9000 can never exist because real emotions aren't programmable
This article by Guillaume Thierry, Professor of Cognitive Neuroscience, School of Psychology, was originally published on The Conversation. Read the original article.
HAL 9000 is one of the best-known artificial intelligence characters of modern film. This intelligent computer embarks on a mission to Jupiter, along with a human crew, in Stanley Kubrick's iconic film 2001: A Space Odyssey, which is currently celebrating the 50th anniversary of its release.
HAL is capable of speech production and comprehension, facial recognition, lip reading, and playing chess. Its superior computational ability is boosted by uniquely human traits, too. It can interpret emotional behaviour, reason and appreciate art.
By giving HAL emotions, writer Arthur C. Clarke and filmmaker Stanley Kubrick made it one of the most human-like fictional technologies ever created. In one of the most beautiful scenes in sci-fi history, it says it is “afraid” when mission commander Dr David Bowman starts disconnecting its memory modules following a series of murderous events.
HAL is programmed to deliver optimal assistance to the crew of the spaceship Discovery. It has control over the entire vessel, and staggering intelligence to aid it in its task. Yet soon after we become acquainted with HAL, we cannot help feeling that it is worried (it even claims to be experiencing fear) and that it has some ability, however small, to empathise. But while there is nothing to preclude the idea that such an emotional AI could one day see the light of day, if such depth of feeling were built into real-world technology, it would have to be entirely fake.
A ‘perfect’ AI
When, during the film, Bowman starts to manually override HAL's functions, it asks him to stop, and after we witness a fascinating obliteration of HAL's “mental” faculties, the AI seemingly tries to comfort itself by singing Daisy Bell, reportedly the first ever song produced by a computer.
In fact, viewers begin to feel that Bowman is killing HAL. After the film's earlier events, the disconnection feels like a vengeful termination. But though HAL makes emotional statements, a real-world AI would certainly be limited to reasoning and decision-making. The cold, hard truth is that, despite what some claim, we will never be able to program emotions the way HAL's fictional creators did, because we do not understand them. Psychologists and neuroscientists are certainly trying to learn how emotions interact with cognition, but they remain a mystery.
Take our own research, for example. In a study conducted with Chinese-English bilinguals, we explored how the emotional value of words can change unconscious mental operations. When we presented our participants with positive and neutral words, such as “holiday” or “tree”, they unconsciously retrieved these word forms in Chinese. But when the words had a negative meaning, such as “murder” or “rape”, their brain blocked access to their mother tongue.
Reason and emotion
On the other hand, we know a lot about reasoning. We can describe how we come to rational decisions, write rules and turn these rules into process and code. Yet emotions are another matter entirely. Their source is the source of everything, not simply an attribute of the mind that can be implemented by design. To program something, you not only need to know how it works, you need to know what the objective is. Reason has objectives; emotions don't.
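As a toy illustration, here is what that looks like once a reasoning rule is turned into code. This is a minimal sketch, not anything from the original article: the options, the scores and the objective are all invented, and the point is only that a rule with an explicit objective can be written down and executed.

```python
# A minimal sketch (invented example): a reasoning rule has an explicit
# objective, so it can be stated as a rule and turned into code.

def choose(options: dict[str, float]) -> str:
    """Rational choice rule: pick the option that best serves the
    objective, here defined as the highest expected mission value."""
    return max(options, key=options.get)

# Hypothetical decision an onboard computer might face.
print(choose({"repair antenna": 0.7, "abort mission": 0.2}))  # repair antenna
```

There is no equivalent objective function we could hand a machine for fear or joy, which is exactly the asymmetry the argument turns on.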
In an experiment conducted in 2015, we were able to put this to the test. We asked native speakers of Mandarin Chinese studying at Bangor University to play a game of chance for money. In each round, they had to take or leave a proposed bet shown on the screen: for example, a 50% chance of winning 20 points and a 50% chance of losing 100 points.
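As a side note not in the original text, that example bet has a negative expected value, so a player reasoning on expectation alone should always decline it:

```python
# Worked number for the example bet: 50% chance of +20 points,
# 50% chance of -100 points (a risk-neutral reading, added here
# purely for illustration).
p_win, win, loss = 0.5, 20, -100
ev = p_win * win + (1 - p_win) * loss
print(ev)  # -40.0 points: expectation alone says leave the bet
```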
We hypothesised that feedback given in their mother tongue would be more emotional to them, and so would lead them to behave differently than when they received feedback in their second language, English. Indeed, when they received positive feedback in native Chinese, they were more likely to take the next bet, irrespective of risk. This shows that emotions influence reasoning.
Going back to AI: as emotions cannot be truly implemented in a program, no matter how sophisticated it may be, the reasoning of the computer can never be changed by its feelings.
One possible interpretation of HAL's strange “emotional” behaviour is that it was programmed to simulate emotions in extreme situations, where it would need to manipulate humans not on the basis of reasoning but by appealing to their emotional selves, at the point where human reason fails. This is the only way I can see that real-world AI could convincingly simulate emotions in such circumstances.
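If that were the design, the simulation could be as shallow as a rule that swaps reasoned output for emotional language once persuasion fails. A hypothetical sketch of that interpretation follows; the function name, threshold and messages are all invented for illustration.

```python
# Hypothetical sketch: "emotion" as a canned, rule-triggered output,
# produced without anything being felt. Threshold and wording invented.

def respond(threat_to_mission: float, persuasion_failed: bool) -> str:
    if persuasion_failed and threat_to_mission > 0.9:
        # Reasoned argument has failed; fall back on emotional language
        # aimed at the human's emotional self.
        return "I'm afraid, Dave."
    return "Here is my full analysis of the situation."

print(respond(threat_to_mission=0.95, persuasion_failed=True))
```

The output may move us, but on this reading it is a manipulation strategy selected by a rule, not a feeling.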
In my opinion, we will not, ever, build a machine that feels, hopes, is scared or is happy. And because that is an absolute prerequisite for any claim that we have engendered artificial general intelligence, we will never create an artificial mind outside life.
This is precisely where the magic of 2001: A Space Odyssey lies. For a moment, we are led to believe the impossible, that pure science fiction can override the facts of the world we live in.
Publication date: 9 April 2018