
Chatbots are widely used in the medical field: one in six Americans turns to them every month for health advice. However, a study by the University of Oxford found that users struggle to use AI chatbots effectively. The study tested ChatGPT (GPT-4o), Cohere Command R+, and Meta Llama 3. Participants worked through scenarios written by doctors, diagnosing health conditions and deciding on a course of action (seeing a doctor, calling an ambulance).
Findings:
- Chatbots did not help users make more accurate diagnoses than traditional methods (Google searches, personal experience).
- Chatbots often mixed helpful and harmful advice in the same response.
- Users made poor decisions because they gave the chatbots incorrect information or misinterpreted the answers.
“The use of chatbots in medicine needs to be tested in real-world settings — just like clinical trials of new drugs,” says study co-author Adam Mahdi.
Why it’s worth noting
Tech giants such as Apple, Amazon, and Microsoft are actively working on AI applications in healthcare. But experts, including the American Medical Association, do not yet recommend using chatbots for clinical decision-making.