thought leadership
Medical communications has a new job: make evidence AI-visible
Elena Garonna, Business Director, Medical Communications; Cristina Teles, Director, Scientific Strategy; and Liam Campbell, Scientific Automation and Digital Lead | April 17, 2026
AI is changing how healthcare stakeholders discover and interpret scientific information. In this environment, scientific credibility alone is no longer sufficient. Evidence must also be structured to be retrievable, interpretable, and accurately represented in AI-generated outputs, because science only matters when it reaches those who need it most, when they need it most. At Avalere Health, we work with organisations to embed this thinking into their scientific communications strategy from the ground up, so medical breakthroughs have the best chance of benefiting every patient possible.
How AI is reshaping health information journeys
Healthcare stakeholders increasingly use AI tools to find, compare, and summarise scientific information. These tools are quickly becoming one of the first places people go with a question. The opportunity is faster discovery and more personalised access to complex science.
This shift is already visible at the patient level. In April 2026, the BBC reported on a woman who, after years of misdiagnosis, used ChatGPT to explore her symptoms. This led to a new line of investigation and ultimately to a rare disease diagnosis, confirmed through genetic testing after discussion with her provider. This does not prove chatbots are reliable diagnosticians, but it does show how AI is already shaping where some health information journeys begin.
That makes the risks harder to ignore. Recent research suggests the problem is not just whether chatbots can generate plausible answers, but whether people can use those answers to make better decisions. In a controlled study, participants using general-purpose large language models (LLMs) were no better than a control group at identifying relevant conditions or choosing the right course of action.
The implication is immediate. It is no longer only the manuscript, poster, or press release that matters, but the retrievable claim. This is the part of the evidence base that an AI system can access, interpret, and accurately rephrase when a stakeholder asks a question.
New risks in AI-mediated discovery
The visibility of information now depends on more than traditional dissemination and search engine optimisation. Publisher and infrastructure choices increasingly influence what content AI systems can reach. In practice, this can mean that AI systems retrieve only titles, abstracts, or fragmented summaries rather than the full context of an article. When partial information forms the basis of a synthesised response, the risk of distortion and incomplete representation increases.
Publishers are starting to respond to the copyright and licensing barriers that limit what AI can reliably access. For example, OpenEvidence uses licensed full-text and multimedia partnerships with journals like JAMA and NEJM to ground answers with citations. However, these approaches only shift visibility at scale when access and adoption become widespread.
Access is only part of the problem. AI can amplify weak science that looks plausible, mistaking repetition for legitimacy. A recent Nature investigation showed how fabricated claims about a fake disease spread across accessible sources and were then echoed back by AI systems as if they were real.
Complex scientific writing creates a further challenge. Highly technical language, implicit assumptions, and heavy reliance on figures or supplementary materials can all hinder accurate interpretation by LLMs. When important insights are embedded in ways that are clear to subject matter experts but not to AI systems, the resulting outputs are more likely to be incomplete, oversimplified, or inaccurate.
Together, these factors shape not only what evidence is visible, but which evidence is misrepresented. Even well-established practices like open access, plain language summaries, or visual abstracts don’t guarantee that content will be fully visible or accurately conveyed in AI-generated responses. The question is no longer just how research is published, but whether it can be accessed and accurately interpreted by AI.
From clicks to claims: the impact of structure and representation
These dynamics extend beyond healthcare into the broader digital ecosystem. In April 2026, the BBC reported that HubSpot had lost about 140 million website visits in a year as AI tools and AI overviews changed discovery patterns, reducing the need for users to click through to source sites. This reflects a broader shift: content is increasingly consumed within AI-generated responses rather than on publisher-owned platforms, reducing direct engagement and limiting control over how information is presented.
In response, many sectors are adapting both how they create and how they distribute content, shifting from a click-driven model to one focused on visibility, citation, and influence within AI-generated outputs. This includes structuring content to support direct quotation (e.g., explicit claims, consistent terminology, concise summaries), but also prioritising content that directly answers user questions and can be understood across different levels of expertise.
As highlighted by Dr. Garth Graham, director and global head of healthcare and public health at Google and YouTube, this requires moving beyond brand-led messaging to address the questions audiences are actually asking, in formats that remain accessible and interpretable.
For scientific journals, it's a similar story, but the implications are even more significant. Content may increasingly be used without being directly accessed. That shifts the focus from readership alone to representation. Making content interpretable to AI is now a baseline requirement. But the bigger strategic advantage comes from being deliberate about where research is published and how it performs in AI-generated outputs.
That means teams should start asking new questions alongside the usual ones about journal quality and reach. Can the full text actually be retrieved by AI systems? Are citations preserved in generated outputs? Do the key claims survive summarisation intact? If not, even strong science may lose impact in practice.
What this shift means for medical communication strategy
AI discoverability is not a one-time fix. It requires an end-to-end approach that considers where content is published, how it is structured, and how representation is monitored over time.
That is because AI systems do not stand still. What they retrieve, prioritise, and repeat can change as models evolve, publisher permissions change, new sources enter the ecosystem, and scientific narratives gain traction. A piece of evidence that is visible today may be less visible tomorrow. A claim that is represented accurately in one context may be distorted in another. For healthcare organisations, this means moving beyond passive dissemination to actively shaping how scientific content is structured, accessed, and interpreted in AI-mediated environments.
At Avalere Health, our Medical team supports organisations’ ability to create strategic scientific communications that resonate with audiences and are responsive to a world in which AI increasingly mediates discovery. As these systems become more central to how information is found and understood, protecting the visibility, attribution, and integrity of evidence will become an increasingly important part of medical communication strategy.
Adapt your medical communications strategy to AI-mediated discovery
Get in touch to learn how your team can not only adapt, but thrive, in this fast-changing environment.
Thank you to Rhiannon Meaden and Amanda Tollen for your contributions to this article.
