ChatGPT is playing doctor for a lot of US residents, and OpenAI smells money
About 60 percent of American adults have turned to AI tools like ChatGPT for help with their health or healthcare in the past three months. Instead of seeing that as an indictment of the state of US healthcare, OpenAI sees an opportunity to shape policy.
A study published by OpenAI on Monday claims more than 40 million people worldwide ask ChatGPT healthcare-related questions each day, accounting for more than five percent of all messages the chatbot receives. About a quarter of ChatGPT's regular users submit healthcare-related prompts each week, and OpenAI believes it knows why so many of those users are in the United States.
"In the United States, the healthcare system is a long-standing and worsening pain point for many," OpenAI surmised in its study.
Studies and first-hand accounts from medical professionals bear that out. A Gallup poll published in December found that a mere 16 percent of US adults were satisfied with the cost of US healthcare, and only 24 percent had a positive view of their healthcare coverage.
It's not hard to see why. Healthcare spending has skyrocketed in recent years, and with Republican elected officials refusing to extend Affordable Care Act subsidies, US households are due for another spike in insurance costs in 2026. Based on Gallup's findings, insured Americans, who pay the highest per capita healthcare costs in the world, don't believe they're getting their money's worth.
According to OpenAI, more Americans are turning to its AI to close healthcare gaps, and the company doesn't seem at all troubled by that.
"For both patients and providers in the US, ChatGPT has become an important ally, helping people navigate the healthcare system, enabling them to self-advocate, and supporting both patients and providers for better health outcomes," OpenAI said in its study.
According to the report, which combined a survey of ChatGPT users with anonymized message data, nearly 2 million messages a week come from people trying to navigate America's labyrinthine health insurance ecosystem, though those users still aren't the majority of Americans seeking healthcare answers from AI.
Fifty-five percent of US adults who used AI to help manage their health or healthcare in the past three months said they were trying to understand symptoms, and seven in ten healthcare conversations on ChatGPT happened outside normal clinic hours.
Individuals in "hospital deserts," classified in the report as areas where people are more than a 30-minute drive from a general medical or children's hospital, were also frequent users of ChatGPT for healthcare-related questions.
In other words, when clinic doors are closed or care is hard to reach, care-deprived Americans are turning to an AI for potentially urgent healthcare questions instead.
A slippery slope of medical misinformation
As The Guardian reported last week, relying on AI for healthcare information can lead to devastating outcomes.
The Guardian's investigation of healthcare-related questions put to Google AI Overviews found that inaccurate answers were frequent, with Google AI giving incorrect information about the proper diet for cancer patients, liver function tests, and women's healthcare.
In an email to The Register, OpenAI pushed back on the idea that it could be serving bad information to Americans seeking healthcare answers. A spokesperson told us that OpenAI has a team dedicated to the accuracy of healthcare information, and that it works with clinicians and healthcare professionals to safety-test its models, suss out where risks might lurk, and improve health-related results.
OpenAI also told us that GPT-5 models have scored higher than previous iterations on the company's in-house healthcare benchmark. It further claims that GPT-5 has greatly reduced all of its major failure modes, namely hallucinations, errors in urgent situations, and failures to account for global healthcare contexts.
None of those data points actually addresses how often ChatGPT could be wrong in critical healthcare situations, however.
What does that matter to OpenAI, though, when there are potentially heaps of money to be made by expanding into the medical industry? The report seems to conclude that the company's increasingly large role in US healthcare is, again, less an indictment of a failing system than the inevitable march of technological progress. It also included several "policy concepts" that OpenAI said preview a full AI-in-healthcare policy blueprint the company intends to publish in the near future.
Leading the recommendations, naturally, is a call for opening and securely connecting publicly funded medical data so OpenAI's AI can "learn from decades of research at once."
OpenAI is also calling for new infrastructure that incorporates AI into medical wet labs, support to help healthcare professionals transition to working directly with AI, new frameworks from the US Food and Drug Administration to open a path to consumer AI medical devices, and clarified medical device regulation to "encourage … AI services that support doctors." ®