AI Laws Inevitable But 'Not Right For Today', Says UK Government
The UK government has set out its position after EU legislators confirmed this week that their 'AI Act' will be passed into law by next year.
AI laws are inevitable but are "not right for today", the UK government says.
Every country in the world will eventually need to adopt new legislation to address "the challenges posed by AI technologies", but it is not the right approach to implement new laws "today", according to a new AI policy paper produced by the government.
The document adds that legislating for AI would only make sense once understanding of the risks it poses "has matured".
The European Union reached agreement in December on the AI Act, the world's first comprehensive AI law, which will regulate systems based on the level of risk they pose.
Negotiations on the final legal text began in June, but a fierce debate over how to regulate general-purpose AI such as ChatGPT and Google's Bard chatbot threatened to derail the talks at the last minute.
The newly agreed draft of the EU's upcoming AI Act will require OpenAI, the company behind ChatGPT, and other companies to divulge key details about the process of building their products.
However, the UK has signalled that it will pursue a more flexible approach to AI regulation, at least in the short term, marking a major point of divergence from the approach of EU legislators.
The government's views were set out in its response to the consultation on the AI white paper proposals it put forward last year.
Public policy expert Mark Ferguson of Pinsent Masons said: "The ambition of the UK government is for its AI regulation to be agile and able to adapt quickly to emerging issues while avoiding placing undue burden on business innovation. The UK government's response notes that the speed at which the technology is developing means that the risks and most appropriate mitigations are still not fully understood."
"Therefore, the government will not legislate or implement 'quick fixes' that could soon become outdated."
"The approach of the Labour Party to AI regulation is something that businesses should also be tracking with a view to sharing their views with the party that, with recent polling in mind, looks most likely to form the next UK government," he added.
Currently in the UK, a range of legislation and regulations applies to AI – such as data protection, consumer protection, product safety and equality law, and financial services and medical devices regulation – but there is no overarching framework that governs its use.
The world's first AI Safety Summit was hosted by UK Prime Minister Rishi Sunak in November last year, with stricter regulation among the topics discussed.
In the build-up to the summit, Sunak announced the establishment of a 'world first' UK AI safety institute.
The summit concluded with the signing of the Bletchley Declaration, in which countries including the UK, United States and China agreed on the "need for international action to understand and collectively manage potential risks through a new joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community".
However, the agreement is not legally binding.
The government's immediate approach retains sector-based regulation, but it wants UK regulators to carry out their functions, as they relate to AI, with due regard to five cross-sector principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
The government's response paper also provided insight into how the UK's approach to AI regulation may evolve over time.
The government said it would only legislate to address AI risks if it "determined that existing mitigations were no longer adequate and we had identified interventions that would mitigate risks in a targeted way"; if it was "not sufficiently confident that voluntary measures would be implemented effectively by all relevant parties and if we assessed that risks could not be effectively mitigated using existing legal powers"; and if it was "confident that we could mandate measures in a way that would significantly mitigate risk without unduly dampening innovation and competition".
A recent report from the House of Lords Communications and Digital Committee suggests that the UK government needs to broaden its perspective on AI safety to avoid falling behind in the rapidly evolving AI landscape.
The report, which followed extensive evidence-gathering from stakeholders including big tech companies, academia, venture capitalists, the media and government, highlighted the need for the government to focus on the more immediate security and societal risks posed by large language models (LLMs).
The committee's chairman, Baroness Stowell, said the rapid development of large language models is comparable to the introduction of the internet, stressing the importance of the government adopting a balanced approach.
© Copyright IBTimes 2024. All rights reserved.