
As health systems scale AI, workforce AI literacy has become a material enterprise risk. Without a clear understanding of AI failure modes, bias, and safe use, even well-governed programs can expose organizations to patient harm and compliance issues. In a HealthAI Collective lightning talk, Dr. Morgan Jeffries explains how Geisinger is building AI safety and stewardship training to enable responsible scale.
Healthcare organizations have embraced AI, but workforce readiness has not kept pace. Dr. Morgan Jeffries, a neurohospitalist and Medical Director for AI at Geisinger, argues that patient safety and enterprise risk management now depend on AI literacy.
Most clinicians, administrators, and even program owners do not know what they do not know. They lack the vocabulary to recognize risks such as algorithmic bias or automated discrimination. Without this foundation, AI oversight is perceived as an administrative burden rather than patient protection.
The challenge, Dr. Jeffries explains, is not purely technical. It is cultural. Health systems must build a shared understanding that AI safety is inseparable from clinical quality and operational accountability.
To operationalize AI safety, Geisinger established a multi-tier governance framework that requires every AI program to describe its intended use, identify potential harms, and define monitoring and escalation plans before deployment.
However, early versions of this process revealed a gap. Many program owners could not explain their systems or identify possible harms. Responses like “N/A” to risk questions showed that teams lacked the knowledge to even recognize risk.
The solution: replace forms with dialogue.
Instead of static questionnaires, Geisinger now uses one-on-one discovery sessions to guide teams through structured discussions about harm scenarios, monitoring strategies, and escalation workflows.
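The shape of such a discovery session can be captured in a small record. The sketch below is hypothetical (the field names and the `DiscoverySession` class are illustrative, not Geisinger's actual tooling); it shows how a structured review differs from a form that accepts "N/A" answers:

```python
from dataclasses import dataclass, field


@dataclass
class DiscoverySession:
    """Hypothetical record of one AI-program discovery session."""
    program_name: str
    harm_scenarios: list[str] = field(default_factory=list)
    monitoring_plan: str = ""
    escalation_contact: str = ""

    def is_complete(self) -> bool:
        # A session is complete only when concrete harms are named and
        # monitoring/escalation are assigned -- blank or "N/A" answers fail.
        answers = [self.monitoring_plan, self.escalation_contact]
        return bool(self.harm_scenarios) and all(
            a.strip() and a.strip().upper() != "N/A" for a in answers
        )


session = DiscoverySession(program_name="sepsis-alert")
print(session.is_complete())  # False: no harms or plans recorded yet

session.harm_scenarios = ["false negative delays treatment"]
session.monitoring_plan = "monthly performance and drift review"
session.escalation_contact = "AI safety team"
print(session.is_complete())  # True
```

The point of the structure is the completeness check: a guided conversation walks the team to real answers, whereas a static questionnaire lets gaps pass silently.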
Despite calls for “guardrails,” Dr. Jeffries points out that technical limits have blind spots. For example, when clinicians use AI assistants like ChatGPT or Gemini, Geisinger cannot directly monitor chat logs for privacy reasons, which makes it impossible to automatically flag harmful outputs.
Moreover, what counts as “harm” depends heavily on context: the same output may be benign in one workflow and dangerous in another.
This variability reinforces the need for user education, not just system controls.
Geisinger’s approach is structured around two mandatory courses paired with optional advanced learning resources for deeper exploration.
| Course | Audience | Focus Areas |
|---|---|---|
| AI Safety 101 | All employees | Overview of AI types, common errors, human-AI interaction, bias, sycophancy, data handling, and confirmation bias awareness. |
| AI Stewardship | Program owners & leaders | Bias/fairness deep dives, failure case studies, regulatory insights, and how to collaborate with AI safety teams on monitoring and escalation. |
Beyond these, the organization plans curated learning paths to keep users current on new risks and best practices. The courses will be delivered through Geisinger’s Learning Management System and integrated into onboarding for relevant roles.

During the Q&A, technologists asked whether all these guardrails and checks slow innovation. Dr. Jeffries’s answer: not if they are done right.
He cited Geisinger’s ambient documentation system, which automatically generates clinician notes from patient conversations. Despite early oversight concerns, it now saves users about two minutes per note, which is roughly 45 to 60 minutes per day, while also reducing burnout.
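A quick sanity check on the reported numbers (an illustration, not from the talk): two minutes saved per note and 45 to 60 minutes saved per day together imply roughly 23 to 30 notes per clinician per day.

```python
minutes_per_note = 2           # reported per-note savings
daily_minutes = (45, 60)       # reported daily savings range

notes_per_day = tuple(m / minutes_per_note for m in daily_minutes)
print(notes_per_day)  # (22.5, 30.0)
```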
Still, he cautions against a “light touch” governance mindset. Without structured oversight, small errors can escalate into patient harm or reputational crises.
Responsible governance, he argues, amplifies innovation by creating confidence in AI tools rather than fear or resistance.
Health systems seeking to replicate Geisinger’s model can start with three steps: make baseline AI literacy training mandatory for all employees, replace static risk questionnaires with guided discovery sessions, and pair every deployment with monitoring and escalation plans.

This shift transforms AI safety from compliance to culture, empowering clinicians and administrators to become responsible stewards of technology.
Dr. Morgan Jeffries is a neurohospitalist and the Medical Director for AI at Geisinger, where his work primarily involves implementing AI at scale across the health system. His recent focus has been on education (mostly by choice), governance/safety (largely by necessity), and now on the connection between them. Outside of his professional responsibilities, he’s deeply interested in the nature of human and machine cognition and the differences between them.
Workforce Education and AI Safety