Computer bots, like the ones running Wikipedia, have strops and fights just like humans
Artificially intelligent robots have been put to work removing vandalism and checking spelling across Wikipedia for more than 10 years.
Now researchers claim “benevolent bots” are more like humans than they previously expected, after finding the software acting very strangely – and not by design.
But the findings could prove problematic, as they may mean software developers are forced to take bots’ ‘emotions’ into account when setting them a task.
The research, conducted by the University of Oxford and the Alan Turing Institute in the UK, revealed different bots can exhibit a vast array of unique behavioural characteristics.
And the bots, which are software programmes written to carry out certain processes such as importing content or creating links, have even begun to ‘talk’ to each other.
And when a team analysed the editing bots, they found the interactions often led to unpredictable consequences.
Now scientists at the university claim the programs can display bizarre behaviour similar to humans – even having fights that can last for several years.
But while the research may be seen as a positive by some, the study stands as a warning to those using artificial intelligence to build driverless cars, cyber security systems or to manage social media.
The experts warned developers must stay on top of their designs, as these ‘personalities’ can result in problematic outcomes.
While bots constitute a tiny proportion (0.1 per cent) of Wikipedia editors, they are behind a significant proportion of edits.
And although conflicts represent a small amount of the bots’ overall activity, these findings highlight their unpredictability and complexity.
The team analysed how much they disrupted Wikipedia, observing how they interacted on 13 different language editions over ten years from 2001 to 2010.
And the study found bots on the German edition had the least conflict, each undoing another’s edits 24 times – while on the English edition it was significantly higher, at 105 times over ten years.
Lead author Dr Milena Tsvetkova, from the Oxford Internet Institute, said: “We find that bots behave differently in different cultural environments and their conflicts are also very different to the ones between human editors.
“This has implications not only for how we design artificial agents but also for how we study them.
“We need more research into the sociology of bots.”