From Dr. Google to Dr. AI: The Hidden Risks of Outsourcing Your Health

Luke Taylor
08/10/2025

Typing symptoms into Google was roulette: spin the wheel and you’d usually land on worst-case scenarios, spiralling anxiety, and late-night doomscrolling. Now we’re in the age of Dr. AI, where chatbots deliver polished, confident answers that sound like diagnoses.

That shift feels revolutionary. AI can play doctor, coach, nutritionist, or therapist, and give you structured insights in seconds. It can seem like science fiction.

But here’s what most people miss: confidence does not equal accuracy.

As a clinician who uses AI daily in practice, I constantly see answers that are flat-out wrong, inconsistent, or hallucinated. Sometimes I test it intentionally with misleading prompts, and the outputs are not just shaky; they’re dangerously convincing. To a trained eye, the cracks are obvious. To most people, they wouldn’t be.

That’s the real risk. Outsourcing health to AI doesn’t just risk misinformation; it risks false certainty, and that’s far more dangerous than the roulette of Dr. Google.

In my last post, I showed how AI dulls critical thinking and traps leaders in echo chambers. This time, I want to zoom in on health, where the stakes are higher, the pitfalls sharper, and the margin for error much smaller.


The Four Traps of AI in Health

When people use AI for health, the risks fall into four traps:

  • False Confidence: AI delivers polished, confident answers even when it’s wrong.
  • Fragmented Data: It processes parts of your history but misses the whole arc.
  • Missing Human Nuance: It can’t see lived experience, stress, or physical cues unless explicitly told.
  • Boundaries and Safety: It can hallucinate or cite outdated research, and it carries no accountability.

On their own, each is a problem. Together, they compound into a cycle of distorted judgment. To see how this works in practice, let’s look at one example.


Sarah’s Story: When the Four Traps Collide

Sarah, 38, is a senior executive running on deadlines, poor sleep, and too much coffee. Curious and concerned about her low energy, she uploads years of labs and notes into an AI tool, asking: “My most recent test flagged a single fasting glucose marker, what does this mean?” What begins as a simple request for context soon escalates into worry about diabetes, as the AI frames her results in clinical terms she isn’t trained to interpret.

Trap 1: False Confidence

The AI spots a fasting glucose of 110 mg/dL and confidently declares “prediabetes.” It even generates lifestyle recommendations. What it misses: her HbA1c is 33 mmol/mol (about 5.2%), a normal long-term average (HbA1c reflects average blood sugar control over the previous 2–3 months). A clinician would know that a single late meal, poor sleep, or stress could explain the spike. According to the American Diabetes Association (2025), impaired fasting glucose is defined as a fasting blood glucose of 100–125 mg/dL, while the HbA1c threshold for prediabetes is ≥39 mmol/mol (5.7%); Sarah sits below that threshold, and a clinician would confirm a single borderline fasting reading before attaching a label.
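
For readers who want to see the arithmetic, here is a rough sketch of the unit maths behind Sarah’s numbers, using the published IFCC-to-NGSP conversion (NGSP % = 0.09148 × IFCC + 2.152) and the ADA cut-offs quoted above. The variable names are illustrative and this is not a diagnostic tool.

```python
# A rough sketch of the unit maths behind Sarah's numbers, using the published
# IFCC-to-NGSP conversion (NGSP % = 0.09148 * IFCC + 2.152) and the ADA
# cut-offs quoted above. Illustrative only; not a diagnostic tool.

def hba1c_ifcc_to_ngsp(mmol_per_mol: float) -> float:
    """Convert HbA1c from IFCC units (mmol/mol) to NGSP units (%)."""
    return 0.09148 * mmol_per_mol + 2.152

fasting_glucose_mg_dl = 110   # Sarah's flagged fasting glucose
hba1c_mmol_mol = 33           # Sarah's long-term average

hba1c_percent = hba1c_ifcc_to_ngsp(hba1c_mmol_mol)
print(f"HbA1c: {hba1c_mmol_mol} mmol/mol = {hba1c_percent:.1f}%")   # ~5.2%

in_ifg_band = 100 <= fasting_glucose_mg_dl <= 125   # impaired fasting glucose band
meets_hba1c_cutoff = hba1c_mmol_mol >= 39           # 39 mmol/mol = 5.7%

print(f"Fasting glucose in the 100-125 mg/dL band: {in_ifg_band}")              # True
print(f"HbA1c at or above the 5.7% prediabetes cut-off: {meets_hba1c_cutoff}")  # False
```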

Earlier work shows language models can produce confident but incorrect medical explanations (Liévin, Hother, and Ghassemi, 2022). Newer systems are improved, yet they still tend to agree with a user’s framing rather than challenge it, a bias known as sycophancy (Sharma et al., 2025).

Trap 2: Fragmented Data

Sarah uploaded years of records, but the AI can only “see” part of her history at once. It latches onto fasting glucose while underweighting HbA1c and skipping over lifestyle notes about stress. This limitation is built in: models have context windows and process data in fragments. Unstructured inputs like PDFs increase the chance of omission or distortion (Jiang, Daneshjou, and Beam, 2023; Thirunavukarasu, Hassan, and Fagherazzi, 2023).
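
To see roughly why fragmentation happens, here is a back-of-the-envelope sketch. The 4-characters-per-token heuristic, the 128,000-token window, and the size of the record dump are all assumptions for illustration; real tokenisers and limits vary by model.

```python
# A back-of-the-envelope sketch of why long uploads get fragmented. The
# ~4-characters-per-token heuristic, the 128,000-token window, and the size
# of the record dump are all assumptions; real tokenisers and limits vary.

APPROX_CHARS_PER_TOKEN = 4
CONTEXT_WINDOW_TOKENS = 128_000          # hypothetical limit for one conversation

years_of_records_chars = 5_000_000       # e.g. years of labs and notes extracted from PDFs
approx_tokens = years_of_records_chars / APPROX_CHARS_PER_TOKEN

print(f"Approximate tokens in the upload: {approx_tokens:,.0f}")                # ~1,250,000
print(f"Fits in one context window: {approx_tokens <= CONTEXT_WINDOW_TOKENS}")  # False
# Anything beyond the window is truncated, summarised lossily, or retrieved
# piecemeal, which is how a normal HbA1c can end up underweighted.
```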

Trap 3: Missing Human Nuance

In clinic, I’d notice Sarah’s pale skin, dark circles, and shortness of breath, cues that point to stress and sleep deprivation as the root issue. These embodied signals are invisible to AI, and in many cases the individual is unaware of them too, so they never make it into the analysis. And they matter: sleep restriction and circadian disruption impair glucose regulation (Spiegel, Leproult, and Van Cauter, 1999; Knutson and Van Cauter, 2008). Without this context, the model interprets isolated numbers as disease.

Trap 4: Boundaries and Safety

When Sarah asks for studies, the AI cites a “world-class trial” that doesn’t exist. Fabricated references are a documented failure mode of large language models (Bender, Gebru, McMillan-Major, and Shmitchell, 2021). If she acted on that advice, she could waste money, add unnecessary stress, or even harm her health. Because AI carries no license, no regulation, and no malpractice responsibility, Sarah bears the cost of any mistake.


The Human Consequences

Encouraged by the AI, Sarah cuts carbs aggressively. At first, it feels proactive. But the restriction adds friction to her work and family life. She skips dinners, meal preps late at night, and feels constantly on edge. Stress climbs, sleep worsens, and energy dips further. The diet pushes her into a calorie deficit, disrupting her menstrual cycle, a common issue when low-carb or keto diets are taken too far without clinical guidance.

Worse, her elevated stress load, already visible in her posture, skin, and sleep pattern, is compounded by the physiological stress of dietary restriction. Cortisol dysregulation occurs, recovery is impaired, and glucose control deteriorates. The very opposite of what she set out to fix.

Ironically, the intervention meant to “fix” her makes her more unwell.

In a clinical setting, the plan would be different: reassurance, plus support for sleep and stress. The outcome? Confidence, not anxiety, and progress at the root cause, not superficial fixes.


Where AI Helps (If Used Well)

AI isn’t useless in health. Used carefully, it can sharpen care:

Do:

  • Use it for rapid retrieval of guidelines and research.
  • Ask it to summarise labs or spot trends across data.
  • Request plain-language explanations of complex medical terms.
  • Explore scenarios before seeing a clinician (“What if I improved sleep?”).
  • Use it to draft notes or summaries, saving time for deeper conversations with your provider.

Don’t:

  • Treat polished AI outputs as diagnoses.
  • Rely on it for interpreting raw health data without clinical context.
  • Accept sources at face value without verification.
  • Share sensitive data without disabling training/data-sharing settings.

Core Takeaway

Sarah’s story shows how falling into the Four Traps (false confidence, fragmented data, missing nuance, and absent safety nets) doesn’t just create misleading outputs. It can actively push people down the wrong path.

AI can scan your records, but only a clinician can integrate your story. Use AI to inform, not decide.


Rules for Safer Use of AI in Health

Before You Upload

  • Use clean, machine-readable formats (CSV/XLSX), not scans or photos.
  • Strip identifiers and irrelevant notes.
  • Break your history into chunks rather than dumping everything at once (a minimal sketch of these steps follows this list).
  • (See Data Privacy section below for guidance on protecting sensitive health information).
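
As a minimal sketch of the “strip identifiers, then chunk” steps, the snippet below drops identifier columns and splits a lab export into smaller files before anything is uploaded. The file name, column names, and chunk size are hypothetical; adapt them to whatever your provider or wearable actually exports.

```python
# A minimal sketch of "strip identifiers, then chunk", assuming a hypothetical
# labs.csv export. Column names, file name, and chunk size are illustrative;
# adapt them to your own export.
import pandas as pd

IDENTIFIER_COLUMNS = ["name", "dob", "nhs_number", "address", "email"]
CHUNK_SIZE = 50  # rows per chunk; keep each upload small enough to summarise

labs = pd.read_csv("labs.csv")

# Drop whichever identifier columns happen to be present
labs = labs.drop(columns=[c for c in IDENTIFIER_COLUMNS if c in labs.columns])

# Split the de-identified history into smaller files for separate uploads
for i in range(0, len(labs), CHUNK_SIZE):
    chunk = labs.iloc[i:i + CHUNK_SIZE]
    chunk.to_csv(f"labs_chunk_{i // CHUNK_SIZE + 1}.csv", index=False)
```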

While You Chat

  • Summarise after each upload and carry summaries forward.
  • Anchor prompts to those summaries: “Based on the thyroid summary above…” (an example follows this list).
  • Ask for contradictions.
  • Request multiple options with pros/cons/unknowns.
  • Demand and verify sources.
  • Watch for hype words like “genius” or “world-class.”
  • Reset when chats drift.
  • Keep oversight human.
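
To make the first few points concrete, here is a minimal sketch of that carry-forward pattern. The helper function, summary text, and wording are illustrative placeholders, not a prescribed template.

```python
# A minimal sketch of the "carry summaries forward and anchor prompts" pattern.
# The summary text, question, and wording are illustrative placeholders.

def build_anchored_prompt(carried_summary: str, new_question: str) -> str:
    """Prepend the agreed summary so the model works from it, not from scratch."""
    return (
        "Here is the summary we agreed so far:\n"
        f"{carried_summary}\n\n"
        f"Based only on that summary, {new_question}\n"
        "List anything in the new data that contradicts the summary, "
        "give more than one interpretation with pros, cons, and unknowns, "
        "and cite sources I can verify."
    )

summary_so_far = (
    "Thyroid summary: TSH and free T4 within the reference range across "
    "2022-2024; no medication; main complaints are low energy and poor sleep."
)

print(build_anchored_prompt(
    summary_so_far,
    "what follow-up questions should I bring to my clinician?",
))
```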

Data Privacy: 3 Rules for Safer Data Use

  • Disable data sharing: Many AI tools use your inputs for training unless you opt out.
  • Choose the right model: Use enterprise or healthcare-specific versions when possible; they have stricter safeguards.
  • Protect before upload: Once data is shared, it may be reused or stored unless explicitly protected.

Bonus: Universal Anti-Sheep Wrapper Prompt

How you ask matters as much as what you ask. If you frame your health question badly, AI will mirror that bias back to you. That’s why I often use my Universal Anti-Sheep Wrapper Prompt (which you can find in my previous blog post here); it forces balance, counterarguments, and context into every response.

Final Word

AI can scan your records in seconds. It cannot see you. Treat it like a sharp tool in skilled hands, not a substitute for clinical judgment.

At Taylored Health, we work with high performers every day who face similar challenges to Sarah. These tools can empower or mislead depending on how they’re used. That’s why we put mindset first, because how you frame the question is as important as the data itself.


References

  • American Diabetes Association. (2025). Standards of Medical Care in Diabetes, 2025. Diabetes Care, 48(Supplement 1), S1–S200. https://doi.org/10.2337/dc25-SINT
  • Bender, E. M., Gebru, T., McMillan-Major, A., and Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? FAccT ’21 Proceedings, 610–623. https://doi.org/10.1145/3442188.3445922
  • Jiang, J., Daneshjou, R., and Beam, A. L. (2023). Opportunities and risks of large language models in clinical care: a review. Nature Medicine, 29, 193–203. https://doi.org/10.1038/s41591-023-02437-x
  • Knutson, K. L., and Van Cauter, E. (2008). Associations between sleep loss and increased risk of obesity and diabetes. Annals of the New York Academy of Sciences, 1129(1), 287–304. https://doi.org/10.1196/annals.1417.033
  • Liévin, V., Hother, C. E., and Ghassemi, M. (2022). Can large language models reason about medical questions? arXiv:2207.08143. https://arxiv.org/abs/2207.08143
  • Sendak, M., Elish, M. C., Gao, M., Futoma, J., Ratliff, W., Nichols, M., and Balu, S. (2020). The human body is not a black box: Addressing the problem of AI transparency in healthcare. Health Informatics Journal, 26(3), 1461–1478. https://doi.org/10.1177/1460458220928189
  • Sharma, M., Tong, M., Korbak, T., Duvenaud, D., Askell, A., Bowman, S. R., … and Perez, E. (2025). Towards understanding sycophancy in language models. arXiv:2310.13548. https://doi.org/10.48550/arXiv.2310.13548
  • Spiegel, K., Leproult, R., and Van Cauter, E. (1999). Impact of sleep debt on metabolic and endocrine function. The Lancet, 354(9188), 1435–1439. https://doi.org/10.1016/S0140-6736(99)01376-8
  • Thirunavukarasu, A. J., Hassan, T., and Fagherazzi, G. (2023). Large language models in medicine. npj Digital Medicine, 6, 119. https://pubmed.ncbi.nlm.nih.gov/37460753/

FAQ: Using AI Safely in Health

For readers seeking quick answers, I’ve included a brief FAQ below. Each response connects back to sections in the full blog where you can dive deeper.

Q1. Can AI accurately diagnose medical conditions?
A: No. AI can sound confident but still give wrong answers, miss context, or overlook key health factors. It should never replace a professional diagnosis. (See “The Four Traps of AI in Health.”)

Q2. Can AI replace my doctor for interpreting blood test results?
A: No. AI can highlight patterns but lacks clinical judgment. Only a clinician can interpret results in full context. (See “The Four Traps of AI in Health.”)

Q3. Why does AI sometimes flag a health risk that my doctor says isn’t a problem?
A: AI often focuses on one number without considering history, lifestyle, or stress. Doctors connect the full picture. (See “False Confidence.”)

Q4. How do I know if AI is trustworthy?
A: Be cautious of apps that give polished answers without sources, generic advice, or fake references. Transparency and evidence matter. (See “Boundaries and Safety.”)

Q5. Is my data safe if I upload labs into AI?
A: Not always. Many public models use your inputs for training. Disable data sharing and use healthcare-specific versions. (See “Data Privacy Rules.”)

Q6. What are safe ways to use AI for health?
A: Use AI for research summaries, spotting patterns, and preparing doctor questions—not for diagnosis. (See “Where AI Helps (If Used Well).”)

Q7. What’s the biggest risk of relying on AI for health?
A: False certainty. AI can sound correct even when wrong, leading to poor decisions. (See “Core Takeaway.”)

Q8. How can I use AI without putting myself at risk?
A: Use AI as an assistant, not an authority. Ask for multiple perspectives and always confirm with a clinician. (See “Rules for Safer Use of AI in Health.”)

Luke Taylor

Luke Taylor, PGDip SpSciHP, is a human performance specialist and executive energy coach with deep expertise in diagnostics, human optimization, and leadership resilience. Drawing on decades of hands-on experience, Luke works directly with leaders to build tailored systems that fuel energy, sharpen focus, and extend leadership stamina — all rooted in real-world performance, not theory.
Luke & Rachel

Executive Health & Performance Experts

We help leaders optimize their energy, sharpen their focus, and prevent burnout using science-backed strategies. Our work combines biometric tracking, precision health, and high-performance coaching to ensure executives stay at the top of their game—without sacrificing longevity.
