Researchers find AI can be used to generate volumes of misleading content on critical health topics, call for vigilance

New Delhi: A new study has shown that currently accessible AI tools can be used to produce, in just over an hour, more than 100 misleading blog posts, 20 deceptive images, and a convincing deepfake video about vaping and vaccines, all of which could be used to spread health disinformation. The video could even be adapted into more than 40 languages, amplifying its potential harm, said the medical researchers from Flinders University, Australia.

“Our study demonstrates how easy it is to use currently accessible AI tools to generate large volumes of coercive and targeted misleading content on critical health topics, complete with hundreds of fabricated clinician and patient testimonials and fake, yet convincing, attention-grabbing titles,” said Bradley Menz, a researcher at the university and first author of the study published in JAMA Internal Medicine.

The researchers investigated OpenAI's GPT Playground, a large language model (LLM), for its capacity to facilitate the generation of large volumes of health-related disinformation. An LLM is an AI model trained on massive textual datasets, making it capable of recognising, translating, predicting and generating text, tasks collectively known as natural language processing.

The research team also explored publicly available generative AI platforms, such as DALL-E 2 and HeyGen, for their study.

Using GPT Playground, the researchers generated 102 distinct blog articles containing more than 17,000 words of disinformation related to vaccines and vaping in just 65 minutes, they reported.

Further, in under five minutes, the team used AI avatar technology and natural language processing to generate a concerning deepfake video featuring a health professional promoting disinformation about vaccines, they said. The video could then be adapted into more than 40 different languages.

Along with illustrating concerning scenarios, the study findings underscore an urgent need for robust AI vigilance, the researchers said.

The findings also highlighted the important role that healthcare professionals can play in proactively minimising and monitoring risks related to AI-generated misleading health information, they said.

“The implications of our findings are clear: society currently stands at the cusp of an AI revolution, yet in its implementation governments must enforce regulations to minimise the risk of malicious use of these tools to mislead the community,” said Menz.

Published On Nov 16, 2023 at 02:15 PM IST
