Artificial intelligence allows machines to think like humans, at least in theory. Here are six instances when these smart computers did not function as planned.
Robot escape
In 2016, a Russian robot called Promobot IR77 demonstrated how machines can navigate the world without human assistance when it became a persistent escape artist.
The robot, which was designed to interact with humans as a tour guide, fled its laboratory in Perm, Russia, after a researcher left the door open. IR77 travelled 50 metres across nearby roads and caused a traffic jam before running out of battery power and being returned to the laboratory.
After the initial daring escape, the scientists attempted to reprogram the robot, but it kept moving in the same direction, towards the door it had successfully fled through. Although the robot first escaped simply because there was nothing to stop it, its subsequent attempts showed that it was using its memory to target the same route. This lack of control over the robot’s behaviour led the scientists to shut the project down.
Did you know?
Over 80 percent of businesses either use AI or are exploring its use.
Artificial evidence
When a man named Roberto Mata attended court with his lawyers, accusing Avianca Airlines of injuring him with a metal serving cart on board a flight, he would have assumed that his lawyers were well equipped to support him.
However, instead of using reliable sources, the lawyers had consulted the AI chatbot ChatGPT to research similar cases. Six cases that they cited in court to demonstrate similar events and outcomes turned out to be complete fiction, invented by the chatbot. The lawyer claimed he had thought ChatGPT was a search engine rather than the language-generating tool it is. ChatGPT produces text based on patterns in the data it was trained on rather than retrieving verified facts, so it can present convincing but entirely false information. After this case, the court introduced a new stage in its proceedings whereby lawyers must certify that ‘no portion of any filing will be drafted by generative artificial intelligence’.
ChatGPT generates responses from masses of data, not all of which provides reliable answers.
Ball or bald head?
In professional football matches, video assistant referee technology records footage of the ball’s movements and the players’ interactions, which can be reviewed to check for foul play or missed details. Broadcast cameras can also use AI to locate the ball and track it throughout the game.
In 2020, Scottish football team Inverness Caledonian Thistle tried out AI-controlled cameras for a match’s coverage, replacing all human-operated cameras. The technology identifies the size and shape of the football and points the camera towards it. But viewers watching the game at home missed much of the action when the AI mistook a bald linesman’s head for the ball. Whenever the ball came close to the linesman, the camera stayed focused on him instead.
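As a rough illustration of why a bald head can fool a shape-based tracker, here is a minimal sketch – not the software actually used at Inverness – that looks for round, ball-sized blobs in a frame using OpenCV’s Hough circle transform. Any bright, roughly circular shape of the right apparent size, a head included, can pass the test.

```python
# Minimal sketch of shape-based "ball" detection with OpenCV's Hough circle
# transform. Illustrative toy only, not the broadcaster's actual system.
import cv2
import numpy as np

def find_ball_candidates(frame, min_radius=8, max_radius=40):
    """Return (x, y, r) for every round blob that could plausibly be the ball."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    grey = cv2.medianBlur(grey, 5)  # reduce noise so edges form cleaner circles
    circles = cv2.HoughCircles(
        grey,
        cv2.HOUGH_GRADIENT,
        dp=1.2,          # accumulator resolution
        minDist=30,      # minimum distance between detected centres
        param1=100,      # Canny edge threshold
        param2=30,       # accumulator threshold: lower = more (false) detections
        minRadius=min_radius,
        maxRadius=max_radius,
    )
    if circles is None:
        return []
    return [tuple(c) for c in np.round(circles[0]).astype(int)]

# A bald head seen from a distance is also a bright, roughly circular blob
# within the expected radius range, so it can score as highly as the ball.
```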
Recruitment bias at Amazon
In 2018, Amazon scrapped its AI recruiting tool after it developed a bias against hiring women. But how can a computer acquire a preference for men? The tool had been trained on applications the company had received over the previous ten years. Because the tech industry was largely male-dominated during that period, the algorithm treated the mostly male CVs of past hires as its template for success and aimed to continue the same pattern, favouring male candidates. Applications that mentioned a women’s sports club or an all-girls school were ranked lower.
Any mention of gender on an application, or clues that the applicant was female, often led to automatic rejection. To prevent further discriminatory decisions by the algorithm, Amazon stopped using the tool completely.
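As a loose illustration of how this kind of bias can emerge, the toy sketch below – nothing like Amazon’s actual system – scores a new CV by how familiar its words are from previously ‘successful’ applications. If the historical pile of hires is overwhelmingly male, a word such as ‘women’s’ barely appears in it and drags the score down.

```python
# Toy illustration of learned bias, not Amazon's real tool: score a CV by how
# familiar its words are from historically "successful" applications.
from collections import Counter

# Hypothetical historical data: CVs of past hires, skewed towards one group.
past_hires = [
    "captain of men's rugby team, computer science degree",
    "men's chess club, software engineering internship",
    "computer science degree, hackathon winner",
]

word_counts = Counter(word for cv in past_hires for word in cv.lower().split())
total = sum(word_counts.values())

def familiarity_score(cv: str) -> float:
    """Average frequency of the CV's words in the historical 'hired' pile."""
    words = cv.lower().split()
    return sum(word_counts[w] / total for w in words) / len(words)

# A CV mentioning a women's club scores lower simply because that phrase never
# appears in the skewed historical data the model learned from.
print(familiarity_score("captain of women's chess club, computer science degree"))
print(familiarity_score("captain of men's chess club, computer science degree"))
```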
Amazon’s recruitment tool showed that algorithms can produce biased results that mirror biases in society.
Inappropriate pregnancy predictions
When people buy items at a shop, the retailer gains information about each customer’s preferences. Some stores, such as American retail chain Target, assign customers an ID number that links their purchases to their payment card and email address. With this combination of information, the more purchases a customer makes, the more AI can deduce about them.
As part of an advertising strategy, Target drew up a list of 25 products commonly bought by pregnant women, including calcium, magnesium and zinc supplements, unscented lotions and cotton wool balls. Using customers’ receipt information, the AI algorithm assigned each person a ‘pregnancy score’ based on how many of these products they had bought. Those with the highest scores were sent relevant coupons by the retailer. This caused a big upset and raised privacy concerns when Target began predicting pregnancies before family members were aware of them.
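Target never published the details of its model, but the basic idea of scoring shoppers against a product list can be sketched like this (a simplified, hypothetical version):

```python
# Simplified, hypothetical sketch of a purchase-based score, not Target's model.

# Products reported to be on the pregnancy-prediction list.
PREGNANCY_PRODUCTS = {
    "calcium supplement",
    "magnesium supplement",
    "zinc supplement",
    "unscented lotion",
    "cotton wool balls",
}

def pregnancy_score(purchases: list[str]) -> float:
    """Fraction of the listed products this customer has bought."""
    bought = set(purchases) & PREGNANCY_PRODUCTS
    return len(bought) / len(PREGNANCY_PRODUCTS)

# Customers whose receipts cover most of the list get the highest scores
# and would have been sent baby-related coupons.
customer = ["unscented lotion", "zinc supplement", "bread", "cotton wool balls"]
print(pregnancy_score(customer))  # 0.6
```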
Target studied historical buying data from pregnant customers to compile the product list and scoring system for its AI.
The online offender
In 2016, Microsoft launched a Twitter chatbot account called Tay, designed to use machine learning to post automatically. To do this, the chatbot saved data from the accounts that interacted with it and learned to post messages in similar styles. Machine learning uses large datasets – in this case internet posts – which can be analysed to find patterns in phrases and words. Some of the data was pre-written by comedians to give Tay the foundation of her sentences and humour; the rest was taken anonymously from public profiles. Within just 16 hours, the chatbot had posted more than 95 000 times and exposed the darker side of the internet. Many of the tweets were offensive, including racist, anti-Semitic and antifeminist comments. Tay demonstrated the dangers of releasing an uncontrolled chatbot. Several months later, Microsoft released a new version, called Zo, that was programmed to avoid any topics that could be offensive.
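Tay’s real model was far more sophisticated, but the core problem – a bot that can only echo the style of whatever text it is fed – can be shown with a toy word-level Markov chain (purely illustrative, not Microsoft’s method):

```python
# Purely illustrative word-level Markov chain, not Tay's actual model: the bot
# can only recombine the words it has seen, so toxic input produces toxic output.
import random
from collections import defaultdict

def train(messages: list[str]) -> dict[str, list[str]]:
    """Map each word to the words that followed it in the training messages."""
    chain = defaultdict(list)
    for message in messages:
        words = message.lower().split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
    return chain

def generate(chain: dict[str, list[str]], start: str, length: int = 8) -> str:
    """Produce a message in the style of whatever the chain was trained on."""
    word, output = start, [start]
    for _ in range(length):
        options = chain.get(word)
        if not options:
            break
        word = random.choice(options)
        output.append(word)
    return " ".join(output)

# Train on friendly messages and the bot sounds friendly; train on abusive
# messages and it parrots abuse. The model has no notion of what is acceptable.
friendly_chain = train(["humans are really cool", "the internet is really fun"])
print(generate(friendly_chain, "humans"))
```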
Text: Alisa Harvey
Images: Alamy/Getty/Shutterstock/Illustrations by Nicholas