WHO asks policymakers to ensure 'patient safety' amid rise of ChatGPT-like AI tools
The World Health Organization has called for a cautious approach towards AI, underlining that "AI-generated Large Language Model Tools or LLMs" need to be used cautiously "to protect public health".
In a major development, the World Health Organization has weighed in on Artificial Intelligence or AI, calling for a cautious approach towards it. This comes at a time when AI tools like ChatGPT have entered every sphere of life, with everything from research papers to books to newspaper reports being generated using them.
In a statement issued on May 16, the WHO underlined that "AI-generated Large Language Model Tools or LLMs" need to be used cautiously "to protect public health" and the well-being of human beings, including their autonomy and safety. In the press release, the WHO particularly warned about LLM platforms like ChatGPT, BERT and Bard.
While the healthcare organisation acknowledges that these applications' ability to skillfully assess and mimic human communication has opened up a new sphere in public health, with excitement about using them as a support system, it warns that there are dangers of misinformation.
Parameters To Analyse AI-Generated Responses
The WHO has underlined that healthcare professionals must analyse the risks of these systems before employing their services, especially when using them for diagnostic purposes in underdeveloped areas. Such measures are critical "to protect people's health and to reduce inequity". Even as a source of health information or as a decision-support tool, LLMs should be used cautiously.
Despite expressing its enthusiasm for the new technology, the organisation is concerned about certain critical areas that must be addressed to protect healthcare workers, researchers, scientists and patients.
The WHO has asked people to judge the LLMs based on five parameters:
- Transparency: The transparency and reliability of the generated information need to be ascertained.
- Inclusion: Since these tools deal with a critical area like public health, they need to be all-inclusive and take a global audience into consideration.
- Expert Supervision: The information generated needs to be validated by experts in the field or produced under the supervision of such experts.
- Public Engagement: There needs to be enough engagement with the public to generate authentic information.
- Evaluation: Lastly, the AI-generated information needs to be evaluated from time to time to ensure that the system is running as intended.
The Areas Of Concern
The WHO underlined the danger of using such untested systems without calculating their risks and calibrating them for the public healthcare system.
The primary concern is the mistrust such systems can generate between healthcare professionals and patients, particularly in poor and underdeveloped areas that lack adequate medical resources. In such settings, a minor mistake by healthcare workers could harm patients. It could also trigger a fear of AI among people and erode healthcare workers' trust in adopting AI, thereby hindering the long-term benefits of employing such tools.
The WHO has highlighted the following areas of concern:
- The data used to train AI can be selected in a biased way, leading to inaccurate or misleading information that poses a risk to health as well as to inclusiveness and equity.
- AI-generated responses can appear "plausible and authoritative", prompting users to act on them as definitive. In reality, these responses could be erroneous, which is especially dangerous when it comes to healthcare.
- AI can also use data without consent to generate responses, violating data sensitivity, especially where a person's health data is concerned.
- These platforms can easily be misused to distort data and spread misinformation that looks convincing to the public. Such AI-generated text, video or audio content might be difficult to distinguish from "authentic health content".
Evidence Accessibility, The Way Out
The WHO has further suggested that policymakers take these areas of concern into account and formulate "patient safety and protection" plans for new technologies like AI and digital health as technology firms commercialise them.
The press release underlined that evidence needs to be collected from the widespread use of this technology, and that such evidence should be made accessible to the general public, policymakers, public healthcare workers and administrators.
In the end, the WHO reiterated its commitment to six core principles for the ethical use of AI in health, which are:
- Autonomy
- Human well-being, public safety and public interest
- Transparency, intelligibility and explainability
- Accountability and responsibility
- Equity and inclusiveness
- Sustainable and responsive AI
The WHO also drew attention to its guidelines on "AI for health", which clearly state that "ethics and governance" are the primary areas to be tackled when employing the services of AI in healthcare.