ChatGPT Health wants your sensitive medical records so it can play doctor
Could a bot take the place of your doctor? According to OpenAI, which launched ChatGPT Health this week, an LLM should be available to answer your questions and even examine your health records. But it should stop short of diagnosis or treatment.
"Designed in close collaboration with physicians, ChatGPT Health helps people take a more active role in understanding and managing their health and wellness – while supporting, not replacing, care from clinicians," the company said, noting that every week more than 230 million people globally prompt ChatGPT for health- and wellness-related questions.
ChatGPT Health arrives in the wake of a study published by OpenAI earlier this month titled "AI as a Healthcare Ally." It casts AI as the panacea for a US healthcare system that three in five Americans say is broken.
The service is currently invitation-only and there's a waitlist for those undeterred by at least nine pending lawsuits against OpenAI alleging mental health harms from conversations with ChatGPT.
ChatGPT users in the European Economic Area, Switzerland, and the United Kingdom are currently ineligible, and medical record integrations, along with some apps, are US-only.
ChatGPT Health appears in the web interface as a menu entry labeled "Health" on the left-hand sidebar. It lets users upload medical records and Apple Health data, suggests questions to ask healthcare providers based on imported lab results, and offers nutrition and exercise recommendations.
A ChatGPT user might ask, OpenAI suggests, "Can you summarize my latest bloodwork before my appointment?"
The AI model is expected to emit a more relevant set of tokens than it otherwise might, thanks to the availability of personal medical data – bloodwork data in this instance.
"You can upload photos and files and use search, deep research, voice mode and dictation," OpenAI explains. "When relevant, ChatGPT can automatically reference your connected information to provide more relevant and personalized responses."
OpenAI insists that it can adequately protect the sensitive health information of ChatGPT users by compartmentalizing Health "memories" – prior conversations with the AI model. The AI biz says "Conversations and files across ChatGPT are encrypted by default at rest and in transit as part of our core security architecture," and adds that Health includes "purpose-built encryption and isolation" to protect health conversations.
"Conversations in Health are not used to train our foundation models," the company insists.
The Register asked OpenAI whether the training exemption applies to customer health data uploaded to or shared with ChatGPT Health and whether company partners might have access to conversations or data.
A spokesperson responded that, by default, ChatGPT Health data is not used for training, and that third-party apps can only access health data when a user has chosen to connect them; the data is made available to ChatGPT to ground responses in the user's context. With regard to partners, we're told only the minimum amount of information is shared and that partners are bound by confidentiality and security obligations. Employees, we're told, have restricted access to product data flows, limited to legitimate safety and security purposes.
OpenAI currently has no plans to offer ads in ChatGPT Health, a company spokesperson explained, but the biz, known for its extravagant datacenter spending, is looking at how it might integrate advertising into ChatGPT generally.
As for the encryption, OpenAI can undo it if necessary, because the company, not the customer, holds the private encryption keys. A federal judge recently upheld an order requiring OpenAI to turn over a 20-million–conversation sample of anonymized ChatGPT logs to news organizations including The New York Times as part of a consolidated copyright case. So it's plausible that ChatGPT Health conversations could be sought in future legal proceedings or demanded by government officials.
While academics acknowledge that AI models can provide helpful medical decision-making support, they also point to "recurrent ethical concerns connected to fairness, bias, non-maleficence, transparency, and privacy."
For example, a 2024 case study, "Delayed diagnosis of a transient ischemic attack caused by ChatGPT," describes "a case where an erroneous ChatGPT diagnosis, relied upon by the patient to evaluate symptoms, led to a significant treatment delay and a potentially life-threatening situation."
The study, from The Central European Journal of Medicine, describes how a man went to an emergency room, concerned about double vision following treatment for atrial fibrillation. He did so on the third onset of symptoms rather than the second – as advised by his physician – because "he hoped ChatGPT would provide a less severe explanation [than stroke] to save him a trip to the ER." Also, he found the physician's explanation of his situation "partly incomprehensible" and preferred the "valuable, precise and understandable risk assessment" provided by ChatGPT.
The diagnosis ultimately was transient ischemic attack, which produces stroke-like symptoms but is generally less severe.
The study implies that ChatGPT's tendency to be sycophantic, common among commercial AI models, makes its answers more appealing.
"Although not specifically designed for medical advice, ChatGPT answered all questions to the patient's satisfaction, unlike the physician, which may be attributable to satisfaction bias, as the patient was relieved by ChatGPT's appeasing answers and did not seek further clarification," the paper says.
The research concludes by suggesting that AI models will be more valuable in supporting overburdened healthcare professionals than patients. This may help explain why ChatGPT Health "is not intended for diagnosis or treatment." ®