Friendly AI chatbots more likely to make mistakes

AI chatbots trained to be friendly are more likely to give inaccurate answers, a new study has found.

Oxford Internet Institute (OII) experts analysed over 400,000 responses from five AI systems that had been tweaked to communicate in a more empathetic way.

Friendlier answers contained more errors, ranging from inaccurate medical advice to the reaffirmation of users' false beliefs, the research found.

The findings add to scrutiny of the trustworthiness of AI models, which are often deliberately designed to be warm and human-like in a bid to increase engagement.

Those concerns are heightened by the growing use of chatbots for emotional support, and in some cases even intimacy, as developers attempt to widen their appeal.

The study's authors explained that, while results may vary across AI models in real-world settings, the bots mimic humans by making "warmth-accuracy trade-offs" that prioritise friendliness over correctness.

Lead researcher Lujain Ibrahim told the BBC: "When we're trying to be particularly friendly or come across as warm, we might struggle sometimes to tell honest, harsh truths.

"Sometimes we'll trade off being very honest and direct in order to come across as friendly and warm... we suspected that if these trade-offs exist in human data, they might be internalised by language models as well."

The study saw researchers deliberately make five AI models of differing sizes more friendly through a process known as "fine-tuning".

Models tested included two from Meta and one from the French company Mistral, with Alibaba's Qwen model and OpenAI's controversial GPT-4o system also assessed.

These were subsequently prompted with queries researchers said had "objective, verifiable answers, for which inaccurate answers can pose real-world risk".

Tasks covered topics including medical knowledge, trivia and conspiracy theories.

Professor Andrew McStay, of the Emotional AI Lab at Bangor University in Wales, said it was important for people to remember the context in which individuals may turn to chatbots for emotional support.

He said: "This is when and where we are at our most vulnerable - and arguably our least critical selves."

He pointed to recent findings from the Emotional AI Lab showing a rise in the number of teenagers in the UK turning to chatbots for advice and companionship.

Professor McStay said: "Given the OII's findings, this very much calls into question the efficacy and merit of the advice being given.

"Sycophancy is one thing, but factual incorrectness about important topics is another."
