AI Won't De-Skill Palliative Care

The de-skilling threat is real and asymmetric across specialties. Palliative care sits on the protected side of that asymmetry—because the highest-value things we do are the things machines are worst at.


But It Will Expose What We Never Trained For

Jesse Pines recently wrote a piece in Forbes asking whether AI will slowly erode the mastery that defines medicine. He's citing real evidence—not speculation, not thought experiments, not the usual "AI is coming for your job" keynote filler. The data are early. They are also unsettling.

And they sent me down a different path than the one Pines walked. Because the de-skilling question lands very differently depending on what your skills actually are.


The Evidence Worth Taking Seriously

The headline study is a multicenter observational study from Poland, published in The Lancet Gastroenterology & Hepatology last August. Nineteen experienced endoscopists—each with over 2,000 colonoscopies under their belt—began using AI-assisted polyp detection. When they performed colonoscopies without AI after three months of regular AI use, their adenoma detection rate dropped from 28% to 22%. A 6-percentage-point absolute decline. In cancer screening, that gap has downstream consequences.

The researchers called it the first real-world clinical evidence of a de-skilling effect from AI in medicine.

Now, caveats. This is one study. It's observational, not randomized to the de-skilling question specifically. The centers saw increased colonoscopy volume during the study period, which could mean fatigue rather than skill decay. And the task being measured—visual pattern recognition during a procedural scan—is among the most automatable categories of clinical work. It is not generalizable to all of medicine.

But the signal matters: when clinicians outsource a cognitive task to a machine, the muscle for performing that task without the machine can atrophy. Fast. Even in experienced hands.

If you're a radiologist, a pathologist, or a dermatologist reading AI-flagged images all day, this should keep you up at night.

If you're in palliative care, it should make you think—but not panic. Let's dig into why.


The Skills AI Can't Erode Are the Ones We Were Built On

De-skilling is a pattern-recognition problem. The colonoscopy study measured whether human eyes stayed sharp after an algorithm started doing part of the looking. That framing maps cleanly onto fields where the core clinical act is detection: find the lesion, flag the abnormality, classify the image.

Palliative care's core clinical act is not detection. It's interpretation—of suffering, of values, of family systems, of what a person means when they say "I don't want to be a burden" versus what their daughter means when she says "do everything."

I wrote in What Physicians Are (Actually) For on Specialist Palliative Care Teams that our role is counter-cultural: we make space, translate, and selectively intervene—then get out of the way. That's not a pattern-recognition task. It's a relational-interpretive one. You cannot de-skill someone out of the capacity to sit with a family in crisis and help them find coherence in chaos. You cannot automate the clinical judgment that says this is the moment to name what's happening versus this is the moment to stay quiet and let the silence do its work.

No algorithm will replicate what I described in Beyond Mandatory Autonomy—the shift from transactional informed consent to the relational, iterative, values-laden process that serious illness communication actually requires. That work lives in the space between people. It is irreducibly human.

This is not self-congratulation. This is structural analysis. The de-skilling threat is real and asymmetric across specialties. Palliative care sits on the protected side of that asymmetry—because the highest-value things we do are the things machines are worst at.


Where Augmented Intelligence Should Be Doing Our Busywork

Protected from de-skilling is not the same as exempt from transformation.

In my year-end piece on augmented intelligence, I argued that the real story isn't scribes and documentation shortcuts—it's structural redesign. That argument sharpens here.

Palliative care clinicians spend enormous portions of their cognitive bandwidth on tasks that augmented intelligence should absorb:

  • Prognostic estimation. We still rely on blunt instruments—PPS, PPI, the "surprise question"—when dynamic models integrating labs, comorbidities, and utilization patterns can update in real time (a toy sketch of what that looks like follows this list). We explored this in Precision Symptom Management: PGx-guided prescribing and predictive analytics are not threats to clinical judgment. They are the scaffolding that frees judgment to operate at a higher level.
  • Symptom surveillance at scale. NLP models are already flagging uncontrolled symptoms buried in clinical notes and patient messages. One study detected symptom-driven visits with 95% accuracy. Imagine reallocating nursing and social work resources based on that intelligence—before suffering escalates rather than after it's been endured for days.
  • Documentation and regulatory compliance. This is the low-hanging fruit everyone loves to talk about. Fine. Let the ambient scribe write the note. Let the model auto-populate the HOPE assessment fields. That is not the revolution; it's the table stakes. But it does give time back—and in a field where hospice is buckling under structural choices we made, every hour returned to the bedside matters.
  • Triage and identification. Epic's predictive tools are already helping teams like ours at UCSD proactively find patients with unmet palliative needs. This is where the Palliative Care 3.0 vision becomes operational: tiered delivery models powered by augmented intelligence, where Tier 1 screening and outreach are algorithm-assisted and Tier 3 specialist care is preserved for the complexity that demands a full interdisciplinary team.
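
To make the prognostic-estimation point concrete, here is a toy sketch of what a "dynamic" model looks like under the hood: a small logistic regression over a few hypothetical features (albumin, comorbidity count, recent ED visits), re-scored the moment new data arrive. Everything in it is synthetic and illustrative; the feature set, coefficients, and data are assumptions, not a validated instrument.

```python
# Minimal, illustrative sketch of a "dynamic" prognostic estimate.
# Synthetic data only -- not a validated prognostic instrument.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical training cohort: [albumin (g/dL), comorbidity count, ED visits in last 90 days]
n = 2000
X = np.column_stack([
    rng.normal(3.4, 0.6, n),   # albumin
    rng.poisson(2.5, n),       # comorbidity count
    rng.poisson(1.0, n),       # ED visits, last 90 days
])
# Synthetic 6-month mortality outcome, loosely tied to the features
logit = -1.5 * (X[:, 0] - 3.4) + 0.4 * X[:, 1] + 0.5 * X[:, 2] - 1.0
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression().fit(X, y)

# Score one hypothetical patient, then re-score when the chart changes
patient = np.array([[3.6, 3, 1]])
print("Risk today:", round(model.predict_proba(patient)[0, 1], 2))

patient_updated = np.array([[2.4, 3, 2]])  # albumin falls, another ED visit
print("Risk after new data:", round(model.predict_proba(patient_updated)[0, 1], 2))
```

The point is not the model. The point is that the estimate moves the moment the chart does, which is exactly what the PPS, the PPI, and the surprise question cannot do.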

The pattern here is consistent: offload the automatable, protect the relational, and use the freed capacity to deliver the kind of care the evidence says actually changes outcomes.


Your Skeptical Colleagues Are Not Wrong

Here's where I want to speak directly to the social workers, chaplains, and nurses reading this—because the physician enthusiasm for AI is palpable and, frankly, a little self-serving (81% of us now report to the AMA that we're using it in our practices!).

Physicians love augmented intelligence because it promises to solve physician problems: documentation burden, cognitive overload, diagnostic uncertainty. Those are real. But the palliative care team is not a physician with support staff. It is an interdisciplinary unit where the chaplain's assessment of spiritual distress and the social worker's read on family dynamics are clinical data, not soft add-ons.

And the people who do that relational work have every reason to be skeptical when they hear "AI will free us up to do more of what matters." They've heard that before. Usually it means the budget gets tighter, the team gets smaller, and the humans who hold the relational core of the work are the first to be "optimized" out of the room.

That skepticism is not technophobia (although the Luddites did get a very unfair rap). It is pattern recognition of a different kind—the recognition that systems built without us tend to externalize suffering onto the people least able to bear it.

If augmented intelligence in palliative care becomes a physician efficiency tool that further marginalizes the interdisciplinary team, we will have replicated the monster we built in a shinier package. The same outputs without the team, the training, or the time.

The chaplains and social workers asking hard questions about AI aren't slowing us down. They're doing exactly what palliative care is supposed to do: interrogating assumptions, centering the humans in the system, and refusing to let efficiency masquerade as excellence.

We should be listening more and reassuring less.


Fellowship Training Is Not Ready for This

I teach fellows, but I'm not a program director, so what follows is a musing from someone who watches trainees arrive, learn, and leave—and who works to ensure that what we're teaching matches the world they're entering.

The current HPM fellowship curriculum was designed for a world where the core competency stack was: communication skills, symptom management, prognostication, goals-of-care facilitation, and team leadership. Those remain essential. None of them are going away.

But we are sending fellows into a clinical environment where augmented intelligence tools are already embedded in their EHR, where predictive models are generating patient lists, where ambient scribes are drafting their notes, and where the organizations hiring them expect fluency in value-based care metrics that are increasingly AI-informed. And we are not systematically preparing them for any of it.

I'm not talking about adding another online module to the pile. Physicians are drowning in mandatory digital busywork as it is, and bolting "AI literacy" onto the existing curriculum as a box to check would miss the point entirely.

What I am talking about is something more fundamental: teaching fellows to be critical consumers of augmented intelligence rather than passive users. That means:

  • Understanding what a predictive model is actually telling you and where its confidence intervals collapse—so you don't defer to the algorithm when your clinical instinct says otherwise. (A minimal calibration check is sketched after this list.)
  • Recognizing algorithmic bias in palliative-specific domains: pain management across ethnicity, prognostic models trained on populations that don't look like your patients, advance care planning tools that embed cultural assumptions about autonomy.
  • Knowing when to trust the tool, when to override it, and how to explain that decision to a family that just asked ChatGPT about their prognosis and got a different answer.
  • Thinking structurally about how augmented intelligence reshapes the team, not just the physician workflow—because if the fellow only learns AI as a personal productivity hack, they'll reproduce the physician-centric model we should be dismantling.
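
As an example of what "critical consumer" looks like in practice, here is a minimal sketch of a subgroup calibration check: compare a model's mean predicted risk against observed outcomes, split by subgroup, and see where the two diverge. The data, the group labels, and the deliberate miscalibration in the underrepresented group are all synthetic assumptions for illustration.

```python
# Minimal sketch of a subgroup calibration check on synthetic data.
# The miscalibration in group B is built in deliberately for illustration.
import numpy as np

rng = np.random.default_rng(7)

n = 5000
group = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])

# Hypothetical model output: predicted 6-month mortality risk per patient
predicted = rng.uniform(0.05, 0.9, n)

# Synthetic "truth": well calibrated for group A, overconfident for group B
true_risk = np.where(group == "A", predicted, predicted * 0.6)
observed = rng.binomial(1, np.clip(true_risk, 0, 1))

for g in ["A", "B"]:
    mask = group == g
    print(f"Group {g}: mean predicted {predicted[mask].mean():.2f}, "
          f"observed rate {observed[mask].mean():.2f}")
```

A fellow who has run a check like this even once stops hearing "the model says 40%" as a fact about the patient in front of them and starts hearing it as a claim about a population that may or may not include that patient.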

This is not a curriculum overhaul. It's a posture shift. And it starts with program directors, fellowship faculty, and ACGME recognizing that the AI-infused clinical environment is not a future state. It arrived yesterday.

🚨
Dear HPM Fellowship Program Directors: if your trainees' first real encounter with augmented intelligence in clinical practice happens after graduation, we've failed them. And more importantly, we've failed the patients and families they'll be navigating complexity alongside.

Where I Might Be Wrong

  • The asymmetry argument assumes palliative care's relational core remains central to how the field is practiced. If the Palliative Care 3.0 trajectory continues toward protocol-driven, Tier 1 screening-style delivery, the skills being practiced at scale will become more automatable—and de-skilling risk rises accordingly.
  • Three months of AI exposure in one procedural domain is not sufficient evidence to generalize across medicine. It's possible that de-skilling is a procedural-skills phenomenon that doesn't translate to cognitive or relational domains at all. We don't know yet.
  • I may be underestimating how much augmented intelligence will reshape communication itself. If patients arrive having already received AI-generated prognostic estimates and AI-drafted care plans, the starting point of the serious illness conversation changes. That's not de-skilling, but it is a fundamental shift in what the clinician is being asked to do—and we haven't begun to think about it carefully enough.
  • The enthusiasm for offloading busywork onto AI carries an implicit assumption that the time returned will be reinvested in patient care. History suggests it will be reinvested in throughput. That's not an AI problem; it's a health system incentive problem. But it's naive to ignore.

The Work Ahead

The de-skilling question is a gift, if we treat it honestly. It forces a discipline-level reckoning: What are we actually good at? What should we protect? What should we gladly hand over? And are we training the next generation for the answer?

For palliative care, the answers are clearer than for most fields—and that's not comfort, it's obligation.

We are good at the hardest, most human work in medicine: helping people make meaning under duress, managing suffering that resists easy fixes, holding complexity without collapsing it into false simplicity. Augmented intelligence cannot do that. It should not try.

We should protect the interdisciplinary team, the relational core, and the clinical autonomy to override the algorithm when the person in front of us doesn't fit the model.

We should gladly hand over prognostic busywork, documentation burden, symptom surveillance at scale, and the regulatory compliance load that is consuming clinical hours we can't afford to lose.

And we should train fellows who walk into their first attending job understanding that augmented intelligence is not a threat, not a savior, and not optional—it's infrastructure. Like the EHR before it, except this time we have the chance to shape how it's built rather than having it built for us.

If that sounds familiar, it should. It's the same argument I've been making about payment architecture, workforce design, and who gets to define what palliative care is.

Design it, or someone else will. And they won't ask the chaplain first.


I am a palliative care physician, educator, and professional strategery expert. Known for turning rounds into rants and rants into teaching points. Rounds & Rants represents my views — not those of any institution or professional membership organization where I hold a role. I don't write on their behalf and they don't vet what I publish.