
ChatGPT Health: turning the page on healthcare, or writing a new set of challenges?

Julia Trett, Medical Copywriter, Global Marketing, and Sarah Wood, Senior Medical Copywriter, Global Marketing | January 28, 2026

With the announcement of ChatGPT Health on 7 January 2026, we may be standing at the threshold of a new era in healthcare. For the first time, a generative artificial intelligence (AI) platform used by hundreds of millions of people each week is being formally shaped around health and wellbeing. From understanding whether side effects are normal, to deciphering test results, driving health literacy, and directing patients to life-saving treatment, the promise is significant, but so are the questions it raises for patients, healthcare providers, and health systems worldwide.

 

OpenAI reports that over 230 million people globally already ask healthcare-related questions on ChatGPT every week, making the launch of a dedicated health experience feel less like a surprise and more like an inevitability. ChatGPT Health is positioned as a way for people to better understand elements of their care, including test results, appointments, diet and exercise, and medical insurance options. Additionally, in the United States, patients will be able to upload their private medical records and health app data to gain more insight into their health.

Crucially, OpenAI has emphasized that ChatGPT Health is not intended to replace medical care. Instead, it aims to support understanding, preparation, and engagement. Whether it fulfills that ambition or becomes a new source of friction will depend on how thoughtfully it is adopted.

From “Dr Google” to AI health partner

For patients, ChatGPT Health feels like the natural successor to “Dr Google.” Preparing for appointments, understanding results, and making sense of complex medical language have long been pain points – and they sit at the heart of ChatGPT Health’s promise.1

AI has the potential to reduce barriers such as poor health literacy and fragmented access to health data, helping patients feel more informed and more in control of their care. However, we cannot ignore that AI has previously been implicated in misinformation in the healthcare space. ChatGPT has been found to provide unsafe advice for infections and to fail to ask clarifying questions. More recently, investigations into AI-generated health summaries by Google AI Overviews have revealed potentially dangerous inaccuracies.

ChatGPT Health’s core value, therefore, may not lie in giving answers, but in supporting better questions, better conversations, and better engagement with clinicians.

Healthcare’s new literacy challenge

The National Academy of Medicine suggests that developing skills in AI health literacy (i.e. health literacy specific to AI-generated responses) will be paramount to meaningfully shift power toward patients by enabling them to explore treatment options, cross-check recommendations, and consider multiple perspectives before making decisions. Without AI health literacy skills, patients may misinterpret information, delay seeking care, or become anxious about benign findings.

There is an art to accurate prompting – without the correct context, AI could misdirect patients to incorrect guidelines, leading them down the wrong path when navigating their disease. This presents a new dynamic in healthcare. For years the healthcare sector has worked to improve patients’ health literacy, transforming complex science into simple, accessible language, running disease education campaigns, and providing innovative support programs. However, in this new era, prompt literacy is taking center stage. For biopharmaceutical companies, this creates an additional layer of support to consider.

A more informed patient or a more complex consultation?

For healthcare professionals, ChatGPT has the potential to be both an enabler and a source of new complexity. On the positive side, it helps patients arrive at appointments better prepared, which can support more focused consultations and shared decision-making.

On the downside, it may create extra work for time-strapped and overworked clinicians who must fact-check outputs, myth-bust misconceptions, and reassure anxious patients and caregivers. The confident tone of AI responses can also challenge professional authority when recommendations conflict with clinical judgement.

Only a day after OpenAI announced the rollout of ChatGPT Health, it also announced ChatGPT for Healthcare, designed to streamline workflows, summarize evidence, and support clinical reasoning within healthcare organizations.

The perceived benefit of ChatGPT for Healthcare is clear – it can reduce healthcare professionals’ time spent on documentation, diagnostics, retrieving treatment pathways, and administrative tasks, allowing clinicians to focus on direct care.

At the same time, such AI introduces governance, privacy, and bias risks, as both the patient- and clinician-facing AI rely on sensitive health data. There is also the question of safety; although OpenAI promises ‘answers grounded in relevant medical sources,’ we must be wary of AI’s predisposition to ‘hallucinations’ and unsubstantiated content generation, which could lead to misdiagnosis and misdirection to inappropriate medical advice. As a result, the use of AI by healthcare practitioners must always be underpinned by human experience and expertise.

Systemic efficiency gains and new risks

At a system level, ChatGPT presents significant opportunities. Enterprise AI tools such as ChatGPT for Healthcare can streamline operations, reduce administrative costs, and support population health initiatives.

Yet there are also potential risks. AI can increase health inequalities and contribute to overburdening healthcare systems – patients may request further tests, seek second opinions, or see a doctor for ultimately harmless ailments.

The challenge for healthcare systems will be to leverage AI’s efficiency gains while mitigating risks. Strong oversight, integration into established clinical workflows, and training for staff on the safe use of both patient-facing and enterprise AI will be essential to ensure ChatGPT enhances care rather than introducing new inefficiencies or disparities. 

Keeping people at its heart

AI will increasingly shape how healthcare is delivered, but people remain central to the system. The value of tools like ChatGPT Health and ChatGPT for Healthcare lies not in replacing human judgement, but in supporting patients, clinicians, and decision-makers to make better, more informed choices.

For patients, AI can improve understanding and engagement. For clinicians and health systems, it can enhance efficiency and insight. But AI has limitations. It is people – patients asking the right questions, clinicians applying experience and expertise, and organizations providing oversight – who ensure those tools are used safely and effectively.

 

Is the need for ChatGPT Health derived from a need for better patient support?

There’s a gap between what science can achieve and what the patient experience allows. In this void, patients are left behind and the need for alternative sources of health information emerges. In our latest article on patient support programs, Alex Hope and Rachel Kelly explore why ‘patient-centered care’ is not enough. They argue that rethinking patient support as the human scaffolding around treatment is integral to stretching the value of consultations, overcoming misinformation, and guiding patients at the moments they need reassurance the most.


 

