Report uncovers alarming vulnerability of UK websites to bot attacks
The report reveals that 66 per cent of UK websites are vulnerable to basic bot assaults, posing serious risks to British businesses
In an alarming revelation, a recent report has shed light on the vulnerability of UK websites to bot attacks.
The findings by DataDome, a major provider of AI-powered online fraud and bot protection, highlight that a significant number of the country's digital platforms are ill-equipped to defend against even basic bot attacks, posing serious risks to businesses and their customers. As automated threats grow more prevalent, this lack of protection raises the prospect of financial losses, reputational damage and compromised data security across a range of sectors.
According to the report, two-thirds (66 per cent) of UK websites are vulnerable to basic bot attacks, illustrating how exposed British businesses are to a variety of risks. The report warns that the internet is under siege from bad bots, which now make up more than 30 per cent of all internet traffic and which fraudsters use to target online businesses with fraud and other attacks. Bots disrupt digital business operations, jeopardising data security and the customer experience, with consequences that include financial losses and reputational damage.
DataDome evaluated over 2,400 of the largest UK-based websites across a range of industries, from banking and tickets to e-commerce and gambling, to learn more about how British businesses safeguard themselves from these dangerous bots. The findings offer insight into the current status of bot protection across industries and business sizes, as well as differences in the performance of various bot detection systems and the usefulness of classic CAPTCHAs as a defence measure.
The report found an alarming lack of adequate protection against basic bot attacks: 66 per cent of the UK websites evaluated are vulnerable to them. Only 7.9 per cent of bot requests were effectively stopped, and just 22.8 per cent of the bots were detected and blocked. Of the attacks run with the nine distinct bot types tested, 69.4 per cent were successful.
The report discovered that websites for e-commerce and classified ads are particularly vulnerable. Over 70 per cent of these websites failed all nine bot tests, according to the report. The best-defended sites are gambling sites, with 29 per cent barring all BotTester bots.
The report revealed that CAPTCHA, a commonly used bot defence tool, was largely ineffective. Fewer than four per cent of the 515 websites protected only by a CAPTCHA tool detected and blocked all bots, and on 75 per cent of those websites the CAPTCHA tools failed to block even one bot.
According to the report, fake Chrome bots were the most "successful" bots from an attacker's perspective: 90 per cent of DataDome's fake Chrome bots went unnoticed. Simple curl command bots went undetected 87 per cent of the time, and 75 per cent of the fake Googlebots also went unnoticed.
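The "fake Chrome" results illustrate a general weakness: any defence that trusts the self-reported User-Agent header is trivially fooled, because a bot can claim to be any browser it likes. The following minimal sketch shows the idea; the marker list and the `naive_is_bot` heuristic are illustrative assumptions for this article, not DataDome's actual detection logic.

```python
# Illustrative sketch only: why naive User-Agent checks fail against
# bots that impersonate real browsers. The marker list and heuristic
# are assumptions made up for this example.

KNOWN_BOT_MARKERS = ("curl", "python-requests", "wget", "spider")

def naive_is_bot(user_agent: str) -> bool:
    """Flag a request as a bot if its User-Agent contains a known marker."""
    ua = user_agent.lower()
    return any(marker in ua for marker in KNOWN_BOT_MARKERS)

# A default curl request identifies itself honestly and is caught...
print(naive_is_bot("curl/8.4.0"))  # True

# ...but the same bot sending a Chrome-like User-Agent string
# sails through, which is how "fake Chrome" bots evade detection.
spoofed_chrome = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/125.0.0.0 Safari/537.36")
print(naive_is_bot(spoofed_chrome))  # False
```

This is why modern bot-protection products lean on signals that are harder to forge, such as behavioural patterns and browser fingerprints, rather than headers the client controls.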
"Bots are becoming more sophisticated by the day," said Antoine Vastel, head of research at DataDome, "and UK businesses are clearly unprepared for the financial and reputational damage these silent assassins can cause."
From ticket scalping to inventory hoarding to account fraud, Vastel noted that rogue bots pose significant threats to both consumers and businesses. Companies that fail to deal with rogue bots effectively risk severe reputational harm as well as exposing their customers to unnecessary risk. The head of research therefore urged businesses to act immediately to defend themselves against this growing threat.
In a related context, Google has reportedly warned its employees against sharing confidential information with chatbots, including its own Bard. At the same time, the search giant continues to promote its AI bot around the world. Four people familiar with the matter shared this piece of information with Reuters.
Notably, this precautionary measure is part of Google's long-standing policy on safeguarding information. Widely popular, human-sounding chatbots such as Bard and ChatGPT rely on generative artificial intelligence to answer users' prompts. Researchers have found that such AI bots can reproduce data absorbed during training, a flaw that clearly creates a leak risk.
© Copyright IBTimes 2024. All rights reserved.