AI could increase the risk of fraud in Britain's financial services
The FCA has sounded the warning bell for financial services as AI-enabled fraud takes centre stage. The regulator's chief executive offered some crucial insights into how it is preparing to tackle AI fraud.
As artificial intelligence becomes an integral part of everyday life and business, financial services are becoming increasingly vulnerable to AI-enabled fraud, says the head of the Financial Conduct Authority (FCA).
In a recent speech, FCA Chief Executive Nikhil Rathi sounded the warning bell for the financial services sector in the wake of AI-enabled frauds such as a deepfake video of personal finance campaigner Martin Lewis appearing to endorse an investment scheme.
While he welcomed the UK government's ambition to make the UK the global hub of AI regulation, he also cautioned the gatekeepers of financial data against AI fraud. Now that the government has opened the nation's AI facilities to firms looking to test the latest innovations, the FCA's responsibility for protecting financial data and preventing financial fraud has grown, said Rathi.
According to the Annual Fraud Report released by UK Finance, Britain lost more than £1.2 billion to fraud in 2022, 80 per cent of which happened online.
Tackling AI Fraud
Acknowledging that artificial intelligence has the potential to disrupt financial services like never before, the FCA outlined how it would act against AI fraud:
- The FCA believes financial services firms have the capacity to innovate and strengthen market integrity, so rather than regulating them heavily, it will support them with new rules and guidance from time to time.
- The FCA will directly regulate only Critical Third Parties that underpin financial services and affect the confidence and stability of the market.
He further highlighted how intraday volatility has increased since the 2008 financial crisis. The FCA has found that intraday volatility has nearly doubled, which suggests that traders are using automated strategies in short-term trading across markets and asset classes.
Rathi identified key areas of AI fraud, including far more sophisticated identity fraud and more resilient cyber attacks and scams. To counter these threats, investment in cyber resilience and fraud prevention needs to accelerate at the same pace, said Rathi. The FCA will provide full support in developing innovative and proportionate means of protection.
The FCA will also assist firms with their AI models so that they can build trust in their systems by explaining how those models work to protect customers, especially when things go wrong.
In his speech, Rathi highlighted how the FCA plans to tackle new challenges in financial regulation in an outcomes- and principles-based manner. As part of its outcomes-based approach, the FCA has mandated that financial firms design their products in a way that protects the consumer. This forms part of the FCA's Consumer Duty guidance, which also requires the entire supply chain to uphold this standard - from sales to distribution to after-sales.
Through the Senior Managers &amp; Certification Regime (SMCR), the FCA has placed the onus on firms' senior managers, who are liable for the activities of the firm. This system gives the FCA a clear framework for addressing AI-related concerns.
Rathi raised the prospect of a bespoke SMCR-type regime for the managers who oversee AI systems. This will be a critical issue in the future of AI, as these individuals form a crucial part of decision-making within companies and the safety of the market therefore relies on them, he added.
Need to formalise AI regulation
Although the Prime Minister hopes to make Britain a hub of AI regulation, a framework is yet to be developed. The UK has not yet formalised AI regulation and is taking a non-statutory approach, in stark contrast to the EU's legislative route, said Rathi.
At present, the European Union is just a step away from passing its path-breaking AI Act, which seeks to control how AI is created and deployed.
Recently, the UK's Equality and Human Rights Commission called for better AI regulation in order to protect people and uphold human rights.
Firms need more AI talent
Derek Mackenzie, CEO of Investigo, supported the FCA's measures to tackle AI fraud but expressed concern over the lack of talent within financial firms to follow these guidelines. According to Mackenzie, many firms will be on the back foot on AI capabilities in the absence of adequate in-house regulatory expertise.
Every aspect of business in this area - from compliance to coding to operations - is crying out for tech talent to implement AI, yet firms remain understaffed, said Mackenzie.
Through these measures on AI, the FCA is strengthening its efforts to regulate the big tech sector, where financial products such as Google Pay and Apple Pay are growing day by day. This also extends to cryptocurrency-related financial services.
© Copyright IBTimes 2024. All rights reserved.