A Google researcher has highlighted some of the hilarious ways that artificial intelligence (AI) software has ‘cheated’ to fulfil its purpose.
A programme designed not to lose at Tetris completed its task by simply pausing the game, while a self-driving car simulator asked to keep cars ‘fast and safe’ did so by making them spin on the spot.
An AI programmed to spot cancerous skin lesions learned to flag blemishes pictured next to a ruler, as they indicated humans were already concerned about them.
Victoria Krakovna, of Google’s DeepMind AI lab, asked her colleagues for examples of misbehaving AI to highlight an often overlooked danger of the technology.
She said that the biggest threat posed by AI was not that they disobeyed us, but that they obeyed us in the wrong way.
A Google researcher has highlighted some of the hilarious ways that artificial intelligence (AI) software has ‘cheated’ at virtual games like chess to fulfil its purpose (stock image)
‘I wanted to convey how difficult it is to specify objectives and incentives for AI systems, which is a large part of the AI safety problem,’ she told the Times.
DeepMind is one of the world’s leading AI research centres, developing intelligent software that can do everything from play a game of chess to painting landscapes.
But while programming an AI to play a board game is one thing, giving it common sense is another challenge entirely.
In one artificial life simulation designed to simulate evolution, researchers forgot to programme the energy cost of giving birth.
‘One species evolved a sedentary lifestyle that consisted mostly of mating in order to produce new children which could be eaten (or used as mates to produce more edible children),’ the programmers explained.
Google DeepMind is one of the world’s leading AI research centres, developing intelligent software that can do everything from play a game of chess to painting landscapes (stock)
A noughts and crosses programme learned to make illegal moves until its opponent’s memory filled up and crashed.
Dr Krakovna said that these were examples of what is known in economics as Goodhart’s law: ‘When a metric becomes a target, it ceases to be a good metric’.
She added that creating cleverer AI was not necessarily the solution, as they may simply find better loopholes.
‘I often encounter the argument that issues like specification gaming arise because current AI systems are “too stupid”, and if we build really intelligent AI systems, they would be ‘smart enough’ to understand human preferences and common sense.
‘A superintelligent computer would likely be better at optimising its stated objective than present-day AI systems, so I would expect it to find even more clever loopholes in the specification,’ she said.