Google has removed its AI-generated health summaries from search results after an investigation found they provided inaccurate medical information, particularly about liver function tests. The findings about the tool, known as “AI Overviews,” raised concerns about its reliability and the potential risks it poses to users.

The AI Overviews feature, which offers quick information snapshots at the top of search results, was found to contain significant inaccuracies. These were particularly alarming in cases where users sought clarification on liver function tests: medical experts pointed out that the AI gave users incorrect normal ranges for these tests, which could have serious implications for patients.

In one notable instance, when users searched for the normal range of liver blood tests, Google’s AI displayed a series of numbers without adequate context. The summaries omitted crucial variables such as the patient’s nationality, sex, ethnicity, and age, all factors that significantly influence test results. Experts warned that this oversight could lead patients to misinterpret abnormal results as normal and skip healthcare follow-ups that are critical for managing serious conditions.

Following the investigation, Google removed AI Overviews for the queries “what is the normal range for liver blood tests” and “what is the normal range for liver function tests.” A company spokesperson said that while Google does not comment on specific content removals, it is committed to improving the quality of AI-generated information and takes action when necessary.

Concerns About Misinformation Persist

Sue Farrington, chair of the Patient Information Forum, an organization dedicated to promoting evidence-based health information, welcomed the removal of the misleading summaries. Yet she emphasized that significant concerns about the AI feature remain, describing the removal as a positive step while insisting that further measures are needed to restore trust in Google’s health-related search results.

She noted that millions of adults around the world struggle to find reliable health information. Given that challenge, she said, it is vital for Google to direct users to well-researched health resources from trusted organizations, and users must have access to accurate data to make informed decisions about their health.

The investigation also identified additional AI Overviews that remain available on the platform, including summaries related to cancer and mental health, which experts have criticized as inaccurate and potentially dangerous. Despite these concerns, Google said the remaining summaries link to reputable sources and that an internal team of clinicians reviewed the materials, concluding that much of the information was accurate and supported by high-quality websites.

As the search giant navigates these challenges, the ongoing scrutiny of AI-generated health information highlights the urgent need for technology companies to prioritize accuracy and clarity in their health-related content. The implications of misinformation in healthcare can be profound, affecting patient outcomes and public trust in digital health resources.