AI Myth Busters: Separating Fact from Fiction in Health and Benefits

By: Benjamin Nguyen, MD, Director of Product Management, Transcarent

If you’re an HR leader, you’re under pressure to do more with less, control costs, support wellbeing, manage risk, and adapt to rapid change ... a tall order for anyone. So, when AI enters the benefits conversation, it’s natural to feel both curious and cautious, and have questions like:

  • Is this safe?

  • Will this actually help employees?

  • And how do I separate meaningful innovation from hype?

In healthcare and benefits, trust matters. Employees rely on HR leaders to make smart, responsible decisions and partner with organizations that prioritize people. It's essential to distinguish fact from fiction and cut through the noise surrounding AI myths with clarity and confidence.

AI isn’t here to replace human support. It’s here to reduce administrative burdens, make health and other benefits easier to understand, help employees make informed decisions about their care, and deliver a better care experience that gets people where they need to be.

Start with the basics

Before addressing three common AI myths related to data privacy, capability, and value, let’s clarify what “AI” means. Traditional AI models are trained for specific purposes, such as recommending health content based on past claims. These systems are “narrow” because if you want them to do something new, like predict outcomes based on claims and medication history, you must gather new data and retrain the model. They are also “narrow” in that they do one job: in this case, predicting recommendations.
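
To make “narrow” concrete, here is a minimal sketch of a traditional model trained for one job. This is illustrative only, not Transcarent’s implementation; the features, labels, and data are invented.

```python
# A minimal sketch of a "narrow" traditional AI model (illustrative only).
# It learns one job -- recommending a content category from past claims --
# and can do nothing else without new data and a fresh round of training.
from sklearn.linear_model import LogisticRegression

# Invented example data: each row encodes features from a member's past claims.
past_claims_features = [[2, 0, 1], [0, 3, 0], [1, 1, 4]]
content_categories = ["back-care", "diabetes", "cardio"]  # labels to predict

model = LogisticRegression()
model.fit(past_claims_features, content_categories)  # trained for this one task

# The model can make only this one kind of prediction:
print(model.predict([[1, 0, 2]]))

# To predict outcomes from claims plus medication history instead, you would
# need to gather that new data and train a different model from scratch.
```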

Generative AI, specifically Large Language Models (LLMs) like GPT, differs from traditional systems. Trained to understand and generate human language, LLMs predict the next best response to what you say to them. They process natural written language, enabling employees to ask questions as if they were speaking to a colleague or the benefits team, and LLMs to read, interpret, and respond conversationally.
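
In practice, “asking as if speaking to a colleague” is literally how the interface works. A hedged sketch using the OpenAI Python SDK (the model name and question are illustrative, and any LLM provider works similarly):

```python
# Illustrative only: an employee's plain-language question goes straight to
# an LLM, which reads it and responds conversationally.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "user",
         "content": "Is physical therapy covered before I meet my deductible?"},
    ],
)
print(response.choices[0].message.content)  # a conversational answer
```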

Since language is how we communicate about everyday tasks and LLMs can adapt and perform these tasks through language manipulation, AI leaders, like Jensen Huang (CEO of Nvidia), have started to say, “Human language is the new programming language.” LLMs enable more precise guidance, better information, and personalized experiences. However, in healthcare and benefits, these advantages matter only if AI is developed and used responsibly, prioritizing safety, privacy, and transparency.

With that foundation in place, let’s take a closer look at three common AI myths—and what’s true.

Myth #1: “You must give up your users’ data to train AI models.”

Data privacy is often the first concern HR leaders raise, and rightly so. You’re responsible for protecting sensitive employee information and ensuring compliance. The reality: responsible AI does not require handing over employee data to train models.

LLMs are pretrained on massive amounts of text before they ever interact with enterprise data. Users send messages (as with ChatGPT), and the LLM responds. It isn’t trained on those messages; it uses inference, applying its pretrained “knowledge,” to understand and respond. This is what allows LLMs to perform tasks without specific training on user data.

Best practice is to store user data securely, separate from the LLM. When interacting with a user, only the data necessary for that interaction is exposed to the model for inference, enabling the LLM to respond and personalize. Afterward, the data is purged from the LLM’s context, ensuring it’s never used to train the model. It’s like giving the AI a cheat sheet, then taking it away to prevent memorization. Avoid training LLMs on user data unless absolutely necessary; most useful tasks can be accomplished with privacy controls that meet HIPAA and HITRUST standards.
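
Here is a minimal sketch of that “cheat sheet” pattern, assuming the OpenAI Python SDK; the helper function, store, and member data are hypothetical. Sensitive data stays in a secure system of record, only the fields needed for the current question are placed into the prompt at inference time, and nothing is retained or used for training.

```python
# Hypothetical sketch of the "cheat sheet" pattern: user data lives in a
# secure store; only the minimum needed for this question is exposed to the
# LLM at inference time, and it is never used to train the model.
from openai import OpenAI

client = OpenAI()

def fetch_minimal_context(user_id: str) -> str:
    """Hypothetical helper: pull only the fields needed for this question
    from a secure, HIPAA-compliant store (not from the LLM provider)."""
    return "Plan: PPO-2000. Deductible met: $750 of $2,000."

def answer_benefits_question(user_id: str, question: str) -> str:
    context = fetch_minimal_context(user_id)  # the temporary "cheat sheet"
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system",
             "content": f"Answer using only this member context: {context}"},
            {"role": "user", "content": question},
        ],
    )
    # The context existed only inside this single request; under typical
    # enterprise API terms, requests like this are not used for training.
    return response.choices[0].message.content

print(answer_benefits_question("member-123", "How close am I to my deductible?"))
```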

When evaluating AI partners, it’s important to ask clear, direct questions, such as:

  1. Do you train your models on our data? If so, for what purpose?

  2. Is any personal or protected health information used for training?

  3. Are you HITRUST-certified?

  4. How is data stored, and how long is it retained?

  5. Do you have a reliable AI governance program? For example, do you have an AI risk management policy, and is it based on a specific framework?

Myth #2: “AI can’t handle complex tasks.”

Many believe AI is only useful for simple, repetitive work, but its capabilities are advancing rapidly. The complexity of tasks AI agents can manage doubles roughly every seven months.¹ While most people interact with LLMs as chatbots, this is just the surface. The true power lies in integrating LLMs with traditional software engineering to create agents that perform complex, often behind-the-scenes tasks within business workflows.
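
To put that doubling rate in perspective, a quick back-of-the-envelope calculation, assuming the METR trend simply continues (which is not guaranteed):

```python
# If the complexity of tasks agents can manage doubles every 7 months,
# how much does it grow over one, two, and three years?
DOUBLING_MONTHS = 7

for years in (1, 2, 3):
    growth = 2 ** (years * 12 / DOUBLING_MONTHS)
    print(f"{years} year(s): ~{growth:.1f}x today's task complexity")
# Prints roughly 3.3x, 10.8x, and 35.3x respectively.
```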

Today’s LLM agents can reason, synthesize data, maintain context, adapt, and interact with APIs, often over extended periods. This stems from better models as well as improved systems, safeguards, and workflows. The question isn’t whether AI can handle complex tasks, but how it’s deployed, what guidelines govern it, and how humans stay involved. Powerful AI agents, especially in healthcare, need validation and strong guardrails.
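
To illustrate what a guardrail can look like in practice, here is a hypothetical sketch of the escalation logic described above; the types, threshold, and messages are all invented for illustration:

```python
# Hypothetical guardrail: the agent acts autonomously only on validated,
# low-risk answers and escalates everything else to a human.
from dataclasses import dataclass

@dataclass
class AgentResult:
    answer: str
    confidence: float   # 0.0-1.0, from the agent's own evaluation step
    is_clinical: bool   # did the question touch on medical advice?

CONFIDENCE_FLOOR = 0.85  # illustrative threshold, set through validation testing

def respond_or_escalate(result: AgentResult) -> str:
    # Clinical topics and low-confidence answers always go to a human.
    if result.is_clinical or result.confidence < CONFIDENCE_FLOOR:
        return "Connecting you with a care team member who can help."
    return result.answer

print(respond_or_escalate(AgentResult("Your plan covers 20 PT visits.", 0.92, False)))
print(respond_or_escalate(AgentResult("Take 400 mg of ibuprofen.", 0.97, True)))
```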

When assessing an AI solution, consider asking:

  1. What clinical or administrative tasks has your AI been validated to perform?

  2. How do you evaluate its performance on those tasks?

  3. How do you reduce errors or hallucinations?

  4. When does the system escalate to a human?

Myth #3: “95% of AI pilots fail.”

You may have heard the statistic that most AI pilots fail² and taken it as proof that AI doesn’t deliver real value. But this is misleading.

First, many pilots are expected to fail: they exist to test boundaries and generate learning quickly, so counting failed pilots as evidence against AI is a form of selection bias. Because pilots are experimental projects, failure is common by design. Second, most failures come from generic AI tools that weren’t designed for enterprise needs, like using a motorcycle when a truck is required; off-the-shelf chatbots won’t suffice, and AI must be tailored to enterprise workflows. Third, failures occur when organizations lack the resources to customize, integrate, and govern AI systems effectively, which is unsurprising given that building useful AI requires specialized talent.

The most successful AI deployments are domain-specific, responsibly designed, and deeply integrated into existing workflows. They’re built with safety, compliance, and real-world use in mind.

When evaluating AI solutions, it’s worth asking:

  1. How does this AI integrate with our existing systems?

  2. How is it tailored to our specific needs?

  3. What safety, compliance, and performance guardrails are in place?

Truth: The right AI partner makes all the difference

AI doesn’t have to feel risky or complicated. With the right partner, it helps employees get answers faster, reduces confusion, and strengthens support without compromising trust or compliance.

Responsible AI in health and benefits is about helping people make better decisions in moments that matter. And for HR leaders, choosing the right partner is crucial.

To learn more, explore Transcarent’s study, AI That Delivers: HR Leaders on Health, Trust, and Better Benefits, and discover how leading organizations are using AI to support their people—confidently and responsibly.

Sources

Transcarent is committed to providing accurate, evidence-based information to help you make informed health decisions.

1. “Measuring AI Ability to Complete Long Tasks,” Model Evaluation & Threat Research (METR), 2025

2. “MIT Finds 95% Of GenAI Pilots Fail Because Companies Avoid Friction,” Jason Snyder, August 26, 2025
