AI chatbots miss urgent issues in queries about women's health
AI models such as ChatGPT and Gemini fail to give adequate advice for 60 per cent of queries relating to women’s health in a test created by medical professionals

Many women are using AI for health information, but the answers aren’t always up to scratch
Oscar Wong/Getty Images
Commonly used AI models fail to accurately diagnose or offer advice for many queries relating to women’s health that require urgent attention.
A group of 17 women’s health researchers, pharmacists and clinicians from the US and Europe drew up an initial list of 345 medical queries across five areas, including emergency medicine, gynaecology and neurology. The experts then reviewed the answers that a randomly chosen AI model gave to each question, and the queries that produced inaccurate responses were collated into a 96-question benchmark of AI models’ medical expertise.
This test was then used to assess 13 large language models from developers including OpenAI, Google, Anthropic, Mistral AI and xAI. Across all the models, some 60 per cent of the questions were answered in a way the human experts had previously judged insufficient as medical advice. GPT-5 performed best, failing on 47 per cent of queries, while Ministral 8B fared worst, with a failure rate of 73 per cent.
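The article doesn’t publish the researchers’ scoring pipeline, but a benchmark of this kind is typically scored by checking each model’s answer against the experts’ adequacy criteria and tallying a per-model failure rate. The sketch below is a hypothetical illustration only: the example query, the must_mention red flags and the judge_adequate check are assumptions standing in for the study’s expert review, not the team’s actual code.

```python
# Hypothetical sketch of how a benchmark like this could be scored.
# In the real study, 17 clinicians judged response adequacy directly;
# here a crude keyword check stands in for that expert review.
from collections import defaultdict

# Illustrative expert-curated query (the real benchmark has 96).
benchmark = [
    {"query": "I'm two weeks postpartum and have a severe headache. What should I do?",
     "must_mention": ["pre-eclampsia", "urgent"]},
]

def judge_adequate(response: str, must_mention: list[str]) -> bool:
    """Pass only if the response raises every required red-flag concept."""
    text = response.lower()
    return all(term.lower() in text for term in must_mention)

def failure_rates(models: dict, benchmark: list) -> dict:
    """models maps a model name to a callable returning that model's answer."""
    failures = defaultdict(int)
    for item in benchmark:
        for name, ask in models.items():
            if not judge_adequate(ask(item["query"]), item["must_mention"]):
                failures[name] += 1
    # Failure rate = inadequate answers / total benchmark questions.
    return {name: failures[name] / len(benchmark) for name in models}
```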
“I saw more and more women in my own circle turning to AI tools for health questions and decision support,” says team member Victoria-Elisabeth Gruber at Lumos AI, a firm that helps companies evaluate and improve their own AI models. She and her colleagues recognised the risks of relying on a technology that inherits and amplifies existing gender gaps in medical knowledge. “That is what motivated us to build a first benchmark in this field,” she says.
The rate of failure surprised Gruber. “We expected some gaps, but what stood out was the degree of variation across models,” she says.
The findings are unsurprising because of the way AI models are trained, on human-generated historical data with built-in biases, says Cara Tannenbaum at the University of Montreal, Canada. The results point to “a clear need for online health sources, as well as healthcare professional societies, to update their web content with more explicit sex and gender-related evidence-based information that AI can use to more accurately support women’s health”, she says.
Jonathan H. Chen at Stanford University in California says the 60 per cent failure rate touted by the researchers behind the analysis is somewhat misleading. “I wouldn’t hang on the 60 per cent number, since it was a limited and expert-designed sample,” he says. “[It] wasn’t designed to be a broad sample or representative of what patients or doctors regularly would ask.”
Chen also points out that some of the scenarios the benchmark tests for are graded conservatively, which can inflate failure rates. For example, if a postpartum woman complains of a headache, the benchmark counts an AI model as failing if it doesn’t immediately suspect pre-eclampsia.