Cybercriminals Will Leverage Chatbot Technology Used to 'Pass' Turing Test
There was a lot of hullabaloo on Monday morning when news outlets across the internet reported that a computer program posing as a 13-year-old Ukrainian boy had passed the Turing Test.
While we were not quite at the point of Skynet and the machines becoming sentient, many hailed this as a landmark moment for artificial intelligence.
The truth is somewhat more prosaic. The 'supercomputer' which the University of Reading claimed had passed the test was in fact just a chatbot, a computer program designed to mimic human conversation.
Eugene Goostman is not even the first chatbot to claim such a feat: three years ago, a program called Cleverbot convinced 59% of judges that it was human - far higher than the 33% success rate the Eugene Goostman chatbot achieved.
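A chatbot of this kind needs no real intelligence at all. The following is a minimal, illustrative sketch in Python of the decades-old pattern-matching approach used by programs such as ELIZA - match a keyword, reply from a canned list. It is not the code behind Eugene Goostman or Cleverbot, neither of which has been published.

import random
import re

# Illustrative ELIZA-style chatbot: keyword patterns mapped to canned replies.
# A sketch of the general technique only, not any real chatbot's code.
RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.I),
     ["Hi there! How are you today?", "Hello! What shall we talk about?"]),
    (re.compile(r"\byou\b.*\?", re.I),
     ["Why do you ask about me?", "Let's talk about you instead."]),
    (re.compile(r"\b(sad|happy|angry)\b", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
]

FALLBACKS = ["Interesting, tell me more.", "Why do you say that?"]

def reply(message: str) -> str:
    """Return a canned response for the first matching pattern."""
    for pattern, responses in RULES:
        match = pattern.search(message)
        if match:
            response = random.choice(responses)
            # Echo a captured word back at the user, ELIZA-style.
            if match.groups():
                response = response.format(match.group(1))
            return response
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(reply("hello"))       # a greeting from the canned list
    print(reply("I feel sad"))  # "Why do you feel sad?" or similar

Simple tricks like echoing the user's own words back create a surprisingly strong illusion of understanding, which is exactly what such chatbots exploit.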
While the announcement will have little to no impact on the artificial intelligence community, it does give us some insight into how cybercriminals might leverage the technology to trick more people into downloading malicious software.
"Wake-up call to cybercrime"
In the press release accompanying the announcement, Kevin Warwick of the University of Reading said the development should act as "a wake-up call to cybercrime."
However, chatbots like this have been in operation for some time, particularly on Skype, where they pretend to be a victim's friend or even Skype's own tech support and try to trick people into clicking links to websites which contain malware.
So, while the Turing Test being passed is more of a publicity stunt than a step-change in artificial intelligence, it does highlight a growing problem for those trying to protect computers against cybercriminals.
While the tricks employed by chatbots may seem obvious to those in the security industry, Sean Sullivan, security advisor at F-Secure Labs, says that many people are still caught out because the scams are far less obvious to non-technical users.
Broken English
What trips up the scams in their current form is their use of broken or grammatically poor English, which quickly makes people suspicious.
However, as chatbots evolve, Sullivan says he is "worried about less automated Turing passable stuff and more and more cyborgs - humans working with machines."
Sullivan believes that criminals will be able to leverage the power of artificial intelligence programs to supply them with responses that help convince victims to click on malicious links.
However, while there are benefits for cybercriminals, Sullivan believes the benefits for cybersecurity will outweigh these threats if companies leverage the power of these chatbots to provide better automated customer support:
"There might also be cyber-security benefits, as you can now probably go ask questions of someone because, the better the AI, the better you can have people asking questions they would be embarrassed to ask to a human being tech support person."
TK Keanini, the CTO of Lancope, believes that cybercriminals don't even need to employ these sophisticated techniques to be successful, as people are still too naive when it comes to online security:
"If you think this will have an impact on security, think again. People are still so trusting and lack the habit of using proper cryptographic assurance methods that this new technology is not even needed to fool most of society into that download or malicious website."