Scaling Healthcare AI Safely: Why Workforce AI Literacy Matters

Morgan Jeffries
December 29, 2025

As health systems scale AI, workforce AI literacy has become a material enterprise risk. Without a clear understanding of AI failure modes, bias, and safe use, even well-governed programs can expose organizations to patient harm and compliance issues. In a HealthAI Collective lightning talk, Dr. Morgan Jeffries explains how Geisinger is building AI safety and stewardship training to enable responsible scale.

Key Takeaways

  • AI education is essential for safety. Most healthcare workers lack a structured understanding of AI risks, making literacy programs as vital as technical guardrails.
  • Governance without awareness fails. Checklists and policies don’t work when program owners treat them as “paperwork”; cultural change is needed alongside them.
  • Guardrails aren’t enough. As with driving, users must demonstrate baseline competency before “operating” AI in clinical settings.
  • Bias and fairness must be taught. Concepts like dialect bias and automated discrimination can directly impact health equity.
  • Training must be role-specific. Geisinger’s dual-track program, with AI Safety 101 for all employees and AI Stewardship for program owners, aligns education with responsibility.
  • Responsible AI still delivers measurable ROI. For example, Geisinger’s ambient documentation tools save clinicians 45 to 60 minutes per day.

Why AI Education Is the New Safety Imperative in Healthcare

Healthcare organizations have embraced AI, but workforce readiness has not kept pace. Dr. Morgan Jeffries, a neurohospitalist and Medical Director for AI at Geisinger, argues that patient safety and enterprise risk management now depend on AI literacy.

Most clinicians, administrators, and even program owners do not know what they do not know. They lack the vocabulary to recognize risks such as dialect bias or automated discrimination. Without this foundation, AI oversight is perceived as administrative burden rather than patient protection.

“There’s a widely held sentiment that human in the loop is enough. But AI can fail randomly, and that assumption does not always hold.”

The challenge, Dr. Jeffries explains, is not purely technical. It is cultural. Health systems must build a shared understanding that AI safety is inseparable from clinical quality and operational accountability.

How Geisinger Built a Governance Framework for AI Risk

To operationalize AI safety, Geisinger established a multi-tier governance framework requiring every AI program to:

  1. Assess potential harm to patients, employees, and learners.
  2. Conduct equity assessments to detect bias or unintended discrimination.
  3. Define monitoring and escalation plans in case AI outputs cause harm (a schematic intake record covering these requirements is sketched below).
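
To make these requirements concrete, here is a minimal sketch of what an AI program intake record covering all three might look like. The field names and structure are illustrative assumptions for this article, not Geisinger’s actual process.

```python
from dataclasses import dataclass

# Hypothetical intake record mirroring the three governance requirements above.
# All names and fields are illustrative assumptions, not Geisinger's real form.

@dataclass
class HarmAssessment:
    affected_groups: list[str]    # e.g., ["patients", "employees", "learners"]
    failure_scenarios: list[str]  # concrete ways the AI's output could cause harm
    severity: str                 # e.g., "low", "moderate", or "high"

@dataclass
class EquityAssessment:
    subgroups_evaluated: list[str]  # populations checked for performance gaps
    disparities_found: list[str]    # any bias or unintended discrimination observed

@dataclass
class MonitoringPlan:
    metrics: list[str]       # what gets tracked once the program is live
    review_cadence: str      # e.g., "monthly"
    escalation_contact: str  # who is alerted if outputs cause harm

@dataclass
class AIProgramIntake:
    program_name: str
    owner: str
    harm: HarmAssessment
    equity: EquityAssessment
    monitoring: MonitoringPlan

    def is_complete(self) -> bool:
        """Reject empty or 'N/A'-style answers rather than accepting them."""
        answers = (self.harm.failure_scenarios
                   + self.equity.subgroups_evaluated
                   + self.monitoring.metrics)
        return bool(answers) and all(a.strip().upper() != "N/A" for a in answers)
```

A completeness check like is_complete would surface the “N/A” answers described next, flagging programs whose owners cannot yet articulate their risks.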

However, early versions of this process revealed a gap. Many program owners could not explain their systems or identify possible harms. Responses like “N/A” to risk questions showed that teams lacked the knowledge to even recognize risk.

The solution: replace forms with dialogue.

Instead of static questionnaires, Geisinger now uses one-on-one discovery sessions to guide teams through structured discussions about harm scenarios, monitoring strategies, and escalation workflows.

Why Guardrails Alone Can’t Ensure Safety

Despite calls for “guardrails,” Dr. Jeffries points out that technical limits have blind spots. For example, when clinicians use AI assistants like ChatGPT or Gemini, Geisinger cannot directly monitor chat logs for privacy reasons, which makes it impossible to automatically flag harmful outputs.

Moreover, “harm” depends heavily on context:

  • A neurologist can spot an inaccurate answer.
  • An administrative assistant asking about a headache might not.

This variability reinforces the need for user education, not just system controls.

"Even in the real world, guardrails don't stop you from crashing, so drivers still need training. We need that same basic competency for AI users."

How Geisinger Operationalized AI Literacy at Scale

Geisinger’s approach is structured around two mandatory courses paired with optional advanced learning resources for deeper exploration.

  • AI Safety 101 (all employees): Overview of AI types, common errors, human-AI interaction, bias, sycophancy, data handling, and confirmation bias awareness.
  • AI Stewardship (program owners and leaders): Deep dives on bias and fairness, failure case studies, regulatory insights, and guidance on collaborating with AI safety teams on monitoring and escalation.

Beyond these, the organization plans curated learning paths to keep users current on new risks and best practices. The courses will be delivered through Geisinger’s Learning Management System and integrated into onboarding for relevant roles.
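
As an illustration of how role-specific requirements could be wired into a learning management system, here is a minimal sketch of role-based course assignment under the dual-track model. The role names and course mapping are assumptions, not Geisinger’s actual LMS configuration.

```python
# Hypothetical role-to-course mapping for the dual-track model described above.
# Role names and course titles are illustrative, not Geisinger's actual setup.

REQUIRED_COURSES: dict[str, set[str]] = {
    "employee":      {"AI Safety 101"},
    "program_owner": {"AI Safety 101", "AI Stewardship"},
    "leader":        {"AI Safety 101", "AI Stewardship"},
}

def required_for(role: str) -> set[str]:
    # Everyone takes the baseline safety course; stewardship roles add to it.
    return REQUIRED_COURSES.get(role, REQUIRED_COURSES["employee"])

def onboarding_complete(role: str, completed: set[str]) -> bool:
    # True once the employee has finished every course their role requires.
    return required_for(role) <= completed

# Example: a new program owner who has only taken the baseline course.
print(onboarding_complete("program_owner", {"AI Safety 101"}))  # False
```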


Balancing AI Governance and Innovation

During the Q&A, technologists asked whether all these guardrails and checks slow innovation. Dr. Jeffries' answer was that it would not, if done right.

He cited Geisinger’s ambient documentation system, which automatically generates clinician notes from patient conversations. Despite early oversight concerns, it now saves users about two minutes per note, or roughly 45 to 60 minutes per day across a typical load of 23 to 30 notes, while also reducing burnout.

Still, he cautions against a “light touch” governance mindset. Without structured oversight, small errors can escalate into patient harm or reputational crises.

Responsible governance, he argues, amplifies innovation by creating confidence in AI tools rather than fear or resistance.

From Risk to Readiness: How Health Systems Can Replicate This Approach

Health systems seeking to replicate Geisinger’s model can start with three steps:

  1. Map current AI touchpoints – where and how staff interact with AI tools.
  2. Define a minimum competency standard – a “driver’s license” for AI users (see the sketch after this list).
  3. Integrate safety education into governance – every approval or rollout should include training.
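
One way to read step 2 is as a competency gate: access to an AI tool is granted only while a user’s training credential is current, much like a driver’s license that expires. A minimal sketch follows; the credential structure and one-year renewal window are assumptions for illustration.

```python
from datetime import date, timedelta

# Hypothetical "driver's license" gate for AI tools: access requires a current
# training credential. The renewal window is an assumption for illustration.

RENEWAL_WINDOW = timedelta(days=365)  # assume annual recertification

def license_valid(certified_on: date | None, today: date) -> bool:
    # A user may "operate" the tool only with an unexpired credential.
    return certified_on is not None and today - certified_on <= RENEWAL_WINDOW

def can_use_ai_tool(certifications: dict[str, date], tool: str, today: date) -> bool:
    return license_valid(certifications.get(tool), today)

# Example: certified 400 days ago, so access is denied pending a refresher.
certs = {"ambient_documentation": date.today() - timedelta(days=400)}
print(can_use_ai_tool(certs, "ambient_documentation", date.today()))  # False
```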

This shift transforms AI safety from compliance to culture, empowering clinicians and administrators to become responsible stewards of technology.

About the Speaker

Dr. Morgan Jeffries is a neurohospitalist and the Medical Director for AI at Geisinger, where his work primarily involves implementing AI at scale across the health system. His recent focus has been on education (mostly by choice), governance/safety (largely by necessity), and now on the connection between them. Outside of his professional responsibilities, he’s deeply interested in the nature of human and machine cognition and the differences between them.

Watch the Full Talk

Workforce Education and AI Safety