Britain's AI task force aims to tackle cybersecurity threats that are national and global safety risks
Seven leading AI experts have joined the Frontier AI Taskforce with the aim of fostering global coordination and research on AI safety.
Britain's AI task force got a major boost as leading figures from computer science and national security joined the team to address AI safety issues.
The experts have joined the taskforce in their individual capacity rather than as representatives of their respective organisations, and they will recuse themselves from any procurement deals involving their companies during this time.
On September 7, the Department for Science, Innovation and Technology announced the creation of a "Frontier AI Taskforce" bringing together stalwarts from the IT industry and national security.
The AI task force aims to research AI safety and identify potential uses of AI in the public sector. Through this work, the UK government wants to strengthen the country's capabilities across various sectors.
Previously known as the Foundation Model Taskforce, the renamed AI task force will focus on frontier AI, specifically the cybersecurity issues that pose global security and public safety risks.
The UK government acknowledged that frontier AI has huge potential to foster the country's economic growth and deliver widespread public benefits, but that it also poses certain safety risks. Frontier AI systems, such as cutting-edge large-scale machine learning models that handle vast amounts of data, will be scrutinised by the task force.
Within 11 weeks of launch, the AI task force has recruited seven heavyweight experts. Turing Award laureate Yoshua Bengio and GCHQ Director Anne Keast-Butler, stalwarts in deep learning and national security respectively, have joined the taskforce's External Advisory Board.
All the board members will oversee the development of new approaches to tackle AI cybersecurity risks with evidence-based solutions from their respective fields of expertise.
Professor Yarin Gal, a machine learning and artificial intelligence expert at Oxford University, has been appointed as the first Taskforce Research Director. Joining him in the research programme is Professor David Krueger, an AI and deep learning expert at Cambridge University. Both will research the potential cybersecurity risks of frontier AI.
The research programme will recruit technical experts from the AI sector to evaluate AI cybersecurity risks. Meanwhile, major artificial intelligence companies such as OpenAI, DeepMind and Anthropic have pledged deep access to their AI models and tools.
The UK government has assigned two other missions to the Frontier AI Taskforce: identifying AI uses for the public sector and strengthening the UK's capabilities.
In April, Prime Minister Rishi Sunak launched the AI taskforce with £100 million in funding for the development of safe and reliable frontier AI models.
New appointments to the AI task force show confidence in Britain
UK Secretary of State for Science, Innovation and Technology Michelle Donelan called the new appointments a "vote of confidence" in the UK's role in AI safety as it taps into the best minds from across the world.
Donelan underlined the transformative nature of artificial intelligence, as seen in breakthroughs in healthcare and the fight against climate change. The Department for Science, Innovation and Technology will ensure that the UK leads the way in frontier AI, Donelan added.
The Chair of the Frontier AI Taskforce, Ian Hogarth, welcomed the progress made in hiring some of the best AI researchers in just 11 weeks.
Hogarth stressed the diversity of the group, which brings together AI experts from industry, academia and government, saying this will ensure that the UK delivers cutting-edge AI safety standards.
Speaking about the matter, Turing Award laureate Yoshua Bengio said there is massive investment in improving AI capabilities but not enough investment in AI safety to protect the public.
Bengio said the goal of the AI taskforce is to ensure AI benefits everyone, and he lauded the UK government for taking the lead in advancing global coordination on AI.
Other key members of the External Advisory Board include Matt Clifford, the Prime Minister's Representative for the AI Safety Summit, who has been appointed Vice-Chair, and Deputy National Security Adviser Matt Collins.
Academy of Medical Royal Colleges Chair Dame Helen Stokes-Lampard, Chief Scientific Adviser for National Security Alex Van Someren and Alignment Research Centre Chief Paul Christiano are notable members of the board.
The AI task force is forging long-term partnerships with US-based organisations Trail of Bits and ARC Evals to understand the implications of frontier AI for national security and its potential cybersecurity risks. It has also signed agreements with the Center for AI Safety and the Collective Intelligence Project for this AI risk assessment.
With these new appointments, the AI task force is well placed to perform its critical role at the global AI Safety Summit hosted by Britain on November 1 and 2.