AI chatbots give users inaccurate and inconsistent medical advice

AI chatbots give inaccurate and inconsistent medical advice to users.

Research from the University of Oxford found that people using artificial intelligence (AI) for healthcare guidance were given a mixture of good and bad responses, making it difficult to identify what advice they should trust.

Dr Rebecca Payne, lead medical practitioner on the study, argues that it could be "dangerous" for people to ask chatbots about their medical symptoms.

The researchers gave 1,300 people a scenario, such as suffering from a severe headache or being a new mother who felt constantly exhausted.

They were divided into two groups, with one using AI to help figure out what they might have and decide what to do next.

The experts then assessed whether people correctly identified what could be wrong, and whether they should see a GP or go straight to A&E.

They found that people who used AI often didn't know what to ask and were given a range of different answers depending on how they worded their questions.

Dr Adam Mahdi, senior author on the study, explained that while AI was able to give medical information, users "struggle to get useful advice from it".

He told the BBC: "People share information gradually.

"They leave things out, they don't mention everything. So, in our study, when the AI listed three possible conditions, people were left to guess which of those could fit.

"This is exactly when things would fall apart."

Lead author Andrew Bean said the results show that interacting with human beings poses a challenge "even for top" AI models.

He said: "We hope this work will contribute to the development of safer and more useful AI systems."

Meanwhile, Dr Amber W. Childs, of the Yale School of Medicine, explained that chatbots face the problem of repeating biases that have been "baked into medical practices for decades" because they have been trained on existing medical practices and data.

She said: "A chatbot is only as good a diagnostician as seasoned clinicians are, which is not perfect either."

However, The Medical Futurist editor Dr Bertalan Mesko claims there are developments to come in the space.

He explained that two major AI developers, OpenAI and Anthropic, had released health-dedicated versions of their general chatbots, which he believes will "definitely yield different results in a similar study".
