ChatGPT Health: how AI can support better care without replacing clinical judgement

Written by Harpreet Sood, MD MPH
Estimated read time: 15 minutes

Mr L, a 57-year-old man, contacted me for urgent advice. He had been away over the weekend with a couple of friends, and they had all developed similar symptoms, including vomiting. He typed these symptoms into ChatGPT and was told he could possibly have pancreatitis, acute appendicitis, or severe gastritis, but that a complete bowel obstruction or an acute bleed could not be ruled out, even if these were less likely. Distressed and alarmed, he needed to know what to do.

Scenarios like these are becoming increasingly common. Millions of people now ask AI tools to explain symptoms, interpret test results, or make sense of health data, often without context or professional guidance. Left unaddressed and without appropriate safeguards, these interactions create understandable anxiety and real risk.

The formal launch of ChatGPT Health is therefore a significant step forward. It brings this reality into clearer focus and creates an opportunity to add structure, responsibility, and clinical framing to these behaviours. Used thoughtfully, it can support better understanding, better decision making, and better outcomes.

Why this launch reflects a broader shift in care

Healthcare no longer sets the pace for when people engage with health information. Access now happens continuously, shaped by search engines, online communities, and, increasingly, AI tools, often before a clinician is involved. The effects of this shift are clearly visible. Patients often arrive with AI-generated interpretations of blood results, imaging reports, or medication information. Some of these explanations are directionally helpful. Many, however, lack the clinical context required for accurate interpretation, leading to partial, and at times misleading, conclusions.

What concerns me most is not curiosity, but certainty without grounding. A single number interpreted without context. A symptom read without history. A wearable trend viewed without understanding normal variation. That gap between information and interpretation is where confusion, false reassurance, and unnecessary alarm take root.

ChatGPT Health represents an attempt to acknowledge this reality and bring some order to it. Not by pretending AI is a doctor, but by making its role clearer, its limits more explicit, and its use more responsible when embedded within clinical thinking.

AI is already part of how people engage with their health

Across the NHS, and increasingly in private and international care settings, it is now routine for patients to use AI tools to explore symptoms ahead of appointments, to make sense of results afterwards, or to reflect on decisions when timely clinical access is limited. I see this across different contexts: parents seeking clarity overnight when a child is unwell, executives reviewing fatigue or sleep data while travelling, and family caregivers navigating complex, multi-generational health needs.

In clinical practice, the effects of this are mixed. Some patients use AI to structure their thinking and arrive with clearer questions, which can make consultations more focused and productive. Others present with significant anxiety, having drawn firm conclusions from generic outputs that fail to account for personal history, risk factors, or recent investigations.

The implication is straightforward. Ignoring this pattern does not reduce risk, and advising people not to use AI is neither realistic nor effective. A safer and more constructive approach is to influence how these tools are designed, framed, and integrated into care, so that they support understanding and informed discussion, rather than inadvertently undermining them.

What ChatGPT Health changes in practical terms

The most consequential change introduced by ChatGPT Health relates to how health information is framed and constrained. General-purpose AI tools respond to whatever is asked of them, often without visibility of what relevant information is missing. Clinical interpretation, by contrast, depends heavily on context. Medications, past history, timelines, risk factors, and individual goals all shape meaning. By prompting users to provide richer context, and by being clearer about scope and limitations, ChatGPT Health begins to address some of the risks associated with de-contextualised responses.

Boundary setting is equally important. Support with information must be clearly distinguished from the provision of medical advice, and that distinction needs to be explicit, consistent, and reinforced through design choices. Clear communication of uncertainty, visible safety prompts, and guidance on when to seek professional input should be treated as core features rather than optional safeguards.

From experience developing and evaluating national digital health tools within the NHS, one principle has remained consistent: technology tends to become safer and more trustworthy when as much attention is given to defining its limits as to expanding its capabilities.

Where AI genuinely adds value for patients

AI adds the most value when it improves understanding and preparation, rather than attempting to replace judgement.

In practice, that often shows up in three areas. First, health literacy. Medical information is dense, technical, and often poorly explained under time pressure. AI can translate test results, reports, and guidelines into plain language, giving people the space to understand what they are looking at before they discuss it with a clinician.

Second, preparation for consultations. Patients who use AI to clarify their priorities or frame questions tend to have more productive appointments. The conversation starts at a higher level, with less time spent decoding and more time spent deciding.

Third, continuity between appointments. People live their health outside clinic rooms. AI can support understanding by helping explain what a result may indicate, placing it in broader context, and drawing attention to points that may be worth discussing at a subsequent review, without attempting to influence or substitute for clinical decision-making.

A common example I see involves wearable data. Someone notices a gradual rise in resting heart rate or a decline in sleep efficiency. AI can help them understand possible contributors and decide whether it is worth discussing, rather than jumping straight to worst-case conclusions or ignoring it entirely.

Why shared decision making still sits at the centre

Better health decisions happen when insight and judgement work together.

AI is well suited to handling volume. It can analyse longitudinal trends, integrate wearable data, and surface patterns across large datasets. Clinicians, by contrast, are responsible for interpretation, prioritisation, and accountability. They weigh probabilities, manage uncertainty, and help patients decide what matters most to them.

Shared decision making depends on three things: clear explanation of options, honest discussion of uncertainty, and alignment with a patient’s values and goals. No algorithm can do that alone.

In my own practice, the most effective consultations are those where patients arrive informed but open, curious rather than convinced. AI can support that state when it is used as a tool for understanding, not authority. That is also why clinician-led platforms matter. At Skai Health, for example, AI-enabled tools are used to translate complex biomarker and wearable data into insights that clinicians then review and contextualise with patients. Intelligence supports the conversation. It does not close it.

Addressing safety, misinformation, and over-reliance

Public concern about safety is reasonable. It is also addressable.

The risks are well known. Over-confidence in outputs. False reassurance. Unnecessary alarm. Use of AI outside regulated care pathways. We saw similar patterns during earlier waves of digital health innovation.

Experience from national digital health programmes has been consistent in this regard. Technologies introduced without clear governance frameworks tend to introduce new forms of risk, while those developed within defined clinical, regulatory, and operational boundaries are more likely to improve safety.

In practice, effective governance includes transparency about limitations, explicit communication of uncertainty, and clearly defined pathways for escalation to professional care. It also requires a proportionate response to innovation. Restricting or delaying adoption does not, in itself, reduce risk if people continue to use unregulated tools in parallel. Designing systems that support safer, more informed use has proven to be the more reliable approach.

Understanding what AI cannot do

Knowing the limits of AI is as important as understanding its strengths.

AI does not diagnose independently. It does not fully understand nuance without context. It does not hold legal, ethical, or professional accountability. Those responsibilities remain firmly with clinicians and health systems.

Clear boundaries protect patients from over-trust, clinicians from inappropriate liability, and systems from fragmented care. In my experience, confidence in digital tools grows when their limits are explicit rather than implied.

Responsible adoption is a system design challenge

The real determinant of impact is not the model itself, but how it is integrated.

For AI to contribute safely and effectively, it needs to operate within established health ecosystems, supported by clinical leadership, defined escalation pathways, and ongoing mechanisms for audit and evaluation. Where digital transformation has been successful in the NHS, it has tended to be less about novelty and more about integration, with tools embedded into everyday workflows, accompanied by appropriate training, and aligned with clear clinical responsibility.

The same considerations apply here. AI is most effective when it strengthens the work of care teams and fits within existing models of accountability, rather than functioning as a separate or ungoverned layer alongside them.

Looking ahead to a more proactive model of care

Used well, AI can help shift healthcare away from reactive episodes and toward continuous understanding.

When combined with wearables, remote monitoring, and longitudinal records, it becomes easier to spot early signals, personalise care, and intervene earlier. The long-term benefit is not faster answers, but better conversations over time.

That vision aligns closely with how I believe healthcare should evolve. Prevention over reaction. Insight over guesswork. Technology in service of human judgement, not in competition with it.

ChatGPT Health is neither a threat nor a cure. It is a tool whose value depends entirely on how we choose to use it.

We should resist both hype and panic. Patients should feel supported in using AI to engage more thoughtfully with their health, not to bypass care. Clinicians and health leaders should actively shape adoption, rather than standing on the sidelines.

If we get this right, AI can support safer information, better preparation, and more meaningful shared decisions. That is not a revolution. It is a sensible evolution, grounded in experience and guided by clinical leadership.

If you would like to understand how clinician-led digital health tools can support prevention, personalisation, and long-term health outcomes in practice, you can explore how we approach this at Skai Health.
