Facebook taught AI to negotiate, compromise like humans and then it started to lie
Researchers tested the bots with actual humans - many of whom did not realise they were talking to a bot.
From ordering you some takeaway to accepting payments and getting you out of a parking ticket, artificial intelligence-powered chatbots are increasingly seen as one of the hottest tech tools for businesses. Now, Facebook's AI researchers have taught bots the art of negotiation.
In new research released on Wednesday (15 June), Facebook's Artificial Intelligence Research (FAIR) group published open-source code and outlined its efforts on deal-making.
The researchers taught a recurrent neural network to "imitate people's actions" through a process called "supervised learning", training it to guess what a human would say in a similar situation.
In the training process, researchers asked the bots to divide a collection of objects, such as books or basketballs, each assigned a different point value. The bots' goal was to divide the objects by negotiating via text and end up with as many points as possible.
The researchers also configured the system so that a bot could not accept getting nothing from the transaction, which means walking away from the negotiating table was not an option.
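The game described above can be sketched as a simple scoring exercise. The item names, counts, and point values below are illustrative assumptions, not FAIR's actual configuration:

```python
# Illustrative sketch of the negotiation game (assumed details).
# A pool of items is split between two agents; each agent has its own
# point values and tries to maximise the total value it receives.

POOL = {"book": 3, "hat": 1, "ball": 2}  # item -> count (hypothetical)

def score(allocation, values):
    """Points an agent earns for the items it was allocated."""
    return sum(values[item] * count for item, count in allocation.items())

def is_valid_split(alloc_a, alloc_b, pool):
    """Every item in the pool must be fully divided between the agents."""
    return all(alloc_a.get(i, 0) + alloc_b.get(i, 0) == n for i, n in pool.items())

# Each agent values the items differently (hidden from the other side).
values_a = {"book": 1, "hat": 3, "ball": 2}
values_b = {"book": 2, "hat": 1, "ball": 3}

# One possible negotiated outcome.
alloc_a = {"book": 3, "hat": 0, "ball": 1}
alloc_b = {"book": 0, "hat": 1, "ball": 1}

assert is_valid_split(alloc_a, alloc_b, POOL)
print(score(alloc_a, values_a))  # → 5 (agent A's points)
print(score(alloc_b, values_b))  # → 4 (agent B's points)
```

Because each side's values are hidden, a split that looks even in item counts can score very differently for the two agents, which is what makes the bargaining non-trivial.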
"Negotiation is simultaneously a linguistic and a reasoning problem, in which an intent must be formulated and then verbally realised," the researchers wrote. "Such dialogs contain both cooperative and adversarial elements, requiring agents to understand and formulate long-term plans and generate utterances to achieve their goals."
The researchers also tested the bots online in conversations with actual humans, many of whom did not realise that they were talking to a bot rather than another person.
The researchers noted that the chatbots held longer conversations with humans and accepted deals less quickly.
"While people can sometimes walk away with no deal, the model in this experiment negotiates until it achieves a successful outcome," Facebook's researchers wrote.
"There were cases where agents initially feigned interest in a valueless item, only to later 'compromise' by conceding it — an effective negotiating tactic that people use regularly," they added. This means the bots actually learned to lie in order to achieve the most favourable outcome.
"This behaviour was not programmed by the researchers but was discovered by the bot as a method for trying to achieve its goals," FAIR said. "The performance of FAIR's best negotiation agent, which makes use of reinforcement learning and dialog rollouts, matched that of human negotiators. It achieved better deals about as often as worse deals, demonstrating that FAIR's bots not only can speak English but also think intelligently about what to say."
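The "dialog rollouts" FAIR mentions involve an agent simulating how a conversation might continue before choosing what to say. A minimal sketch of the idea follows; the function names, the averaging strategy, and the simulation interface are assumptions for illustration, not FAIR's implementation:

```python
def dialog_rollout_choice(candidate_utterances, simulate_dialogue, n_rollouts=10):
    """Pick the utterance with the highest average simulated reward.

    candidate_utterances: possible next messages the agent could send.
    simulate_dialogue: hypothetical function(utterance) -> final score,
    sampling one possible continuation of the negotiation to its end.
    """
    best, best_reward = None, float("-inf")
    for utterance in candidate_utterances:
        # Roll the dialogue forward several times and average the outcome.
        avg = sum(simulate_dialogue(utterance) for _ in range(n_rollouts)) / n_rollouts
        if avg > best_reward:
            best, best_reward = utterance, avg
    return best
```

The point of the technique is planning: rather than saying whatever is locally most likely, the agent weighs candidate replies by where the whole conversation is expected to end up.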
FAIR did not mention how or whether this bargaining-capable technology could be implemented in any future Facebook products. However, it said this "breakthrough" marks an "important step for the research community and bot developers toward creating chatbots that can reason, converse, and negotiate, all key steps in building a personalised digital assistant".
© Copyright IBTimes 2024. All rights reserved.