Every day, millions of people Google their symptoms and take whatever pops up as gospel. Totally normal human behavior, and a horribly bad idea.
A Guardian investigation earlier this year caught Google’s AI Overviews, those convenient little AI answer boxes perched at the top of your search results, handing out crappy health advice. Advice that doctors described as anywhere from “misleading” to “could actually kill someone.” And it happened multiple times, across multiple health topics. Confident-sounding advice that was dangerously wrong.
Google’s AI Got What Wrong?
The examples are genuinely alarming. Pancreatic cancer patients asking about diet were told to avoid high-fat foods. Sounds reasonable, right? Except that’s the exact opposite of what doctors recommend. Patients with pancreatic cancer are already at serious risk of malnutrition and actually need more calories and fat, not fewer. Pancreatic Cancer UK called it “really dangerous” and warned that patients who follow that advice may not be strong enough to survive chemotherapy or surgery. On 4/19 I tried this search myself, and sure enough, Google’s AI is still giving this bad advice. Why, Google?!
The AI also fumbled liver function tests, spitting out long lists of numbers without any context for age, sex, or ethnicity, all factors that significantly change what a “normal” result actually means. Someone with serious liver disease could read those results and wrongly conclude they’re fine. Women searching about cancer screenings got information experts called “completely wrong,” potentially leading people to dismiss real symptoms. The mental health guidance got flagged too, with the charity Mind calling it “incorrect, harmful” advice that could push vulnerable people away from getting help.
Why This is a Problem
Google controls roughly 90% of the global search market. So when its AI confidently gets something wrong, it’s not a small issue; it’s potentially the first thing hundreds of millions of scared, confused people see when they’re trying to make a real medical decision. You’ve probably heard comments like “don’t trust Dr. Google,” or seen the memes about how googling your symptoms will immediately tell you you’re dying. Well, multiply that by 10x or more, thanks to the magic and power of AI advice.
A 2025 Annenberg Public Policy Center survey found that about 80% of Americans are likely to search online for health answers. Of those who had encountered AI-generated summaries, 63% found them at least somewhat reliable. That alone should make us all shudder with fear, concern, or maybe a little of both.
For an even scarier data point: a February 2026 Canadian Medical Association survey found that people who followed AI health advice were five times more likely to experience harm than those who did not. So, don’t do it, people. Talk to your doctor, please.
Google’s Fix Is… Not Great
After The Guardian published its findings, Google pulled AI Overviews for the specific search phrases flagged in the report. Problem solved, right? Not quite. Reporters found that slightly rephrasing those same searches, using different wording or medical abbreviations, still triggered the bad summaries. It is basically whack-a-mole with a system that generates answers on the fly rather than pulling from anything vetted.
Google’s response was that the examples were “incomplete screenshots” and that summaries link to reputable sources. The Canadian Medical Association responded by calling for urgent government regulation to protect patients from AI-generated misinformation. Go Canada! And anyone else trying to put a stop to this.
What You Should Actually Do
Use AI health summaries the same way you use WebMD at 2am: as a starting point that might induce some panic, not a medical conclusion. For anything involving your actual test results, a real diagnosis, medications, or dietary restrictions tied to a serious condition, call a doctor, someone who actually went to medical school to help people.
If you see an AI health summary that looks wrong, hit the feedback button. And please make a doctor’s appointment instead.
Sources: The Guardian, Annenberg Public Policy Center (April 2025 survey), Canadian Medical Association, Pancreatic Cancer UK, Mind charity, Euronews.
Want to read more AI stories?
- Real OpenClaw Use Cases That Actually Matter

- When Algorithms Fail Us: 4 Times AI Thought It Knew Better But Didn’t

- The Big Tech Opt-Out: A Guide to Running AI Privately on Your Computer
