Augmented Intelligence in Palliative Care—Redesigning Care Delivery

Dr. Whyte's right—and in palliative care, the stakes are even higher.


Dr. John Whyte, CEO of the American Medical Association, rightly challenges the narrow view of augmented intelligence as a documentation tool. The real story isn’t about digitizing old workflows—it’s about structural redesign. For palliative care, this shift is critical: continuous data, predictive analytics, and proactive interventions will redefine how we deliver serious illness care.

From Dr. Whyte’s year-end LinkedIn post, “AI in Healthcare Isn’t About What Everyone Thinks It Is”:

“As 2025 winds down, I’m struck by how narrow the conversation around AI in healthcare has become. We keep circling the same points: documentation relief, scheduling efficiency, smoother workflows. These are helpful, yes. But let’s not confuse efficiency gains with paradigm shifts. These tools solve pain points. They do not redefine healthcare.

1. AI’s real power isn’t in the hospital setting. It will come from the continuous, context-rich, patient-generated data that never enters the hospital at all. When care happens at home—when conditions are monitored passively and interventions are nudged in real time—AI becomes less of a convenience and more of a clinical force multiplier. Hospitals won’t be the center of gravity. Homes and clinics will.

2. The scribe narrative is a distraction. Documentation has become the mascot of AI in healthcare because it’s easy to visualize and easy to celebrate. But are better notes what we are striving for? If AI is only good for writing the note, we’ve missed the point. The real capability lies in synthesizing signals, detecting patterns long before symptoms surface, and shaping decisions—not summarizing visits. Scribes help clinicians do the work. AI will eventually help clinicians rethink the work.

3. Experience and wisdom are about to be redefined. We’re entering an era where every clinician will walk into the room with not just their training but an AI tool that has absorbed millions of trajectories, outcomes, and cases. So what becomes of “clinical intuition”? AI should not be viewed as a threat. It’s a rebalancing. Physicians will bring judgment, empathy, nuance, and the ability to navigate uncertainty. AI will bring breadth, precision, and relentless recall. It will augment our decision-making, not replace it. The blend, not the competition, will define excellence.

4. The critical question for 2026: What is the role of the physician? Not “will AI replace us?” That’s too simplistic. The more nuanced question is: What uniquely human functions will clinicians double down on, and what will we gladly offload? Medicine has always been a human profession supported by tools. In 2026, we’ll need to actively decide which parts remain human and which parts are simply better performed by systems built for pattern recognition at scale. It’s a tough discussion, yet it’s one we fundamentally must have. We’re stepping into the most important transition in modern medicine. Not because AI will replace clinicians, but because it will reveal the role of the physician in this evolving ecosystem.”

Whyte’s Core Insights

  • Efficiency gains are not paradigm shifts.
  • The real power lies in continuous, context-rich data outside the hospital.
  • Augmented intelligence should help clinicians rethink the work, not just document it.
  • The future isn’t about replacement—it’s about redefining what remains uniquely human.

Meanwhile, Over at 8VC

The team at 8VC agrees that U.S. healthcare is at an inflection point: rising costs, physician burnout, and patient dissatisfaction all demand systemic change.

  • AI’s transformative role is not incremental efficiency—it’s structural redesign.
  • Future healthcare will integrate:
    • Continuous, real-time data streams from home and community settings.
    • Predictive models for early intervention.
    • Decision-support systems that complement clinician judgment.

The physician’s role then evolves toward empathy, ethics, and complex decision-making, while systems handle pattern recognition and scale.

This analysis deserves amplification in the context of serious illness care.


What This Means for Specialist Palliative Care

1. From Episodic to Continuous Care | Palliative care still operates in bursts—consults, family meetings, symptom check-ins. Augmented intelligence enables a structural shift: real-time monitoring and predictive analytics integrated into care delivery. Passive symptom tracking and dynamic risk models can anticipate crises before they occur. This isn’t theoretical; early studies show machine learning can flag patients for palliative input weeks earlier than traditional workflows. This is the foundation for value-based care—reducing avoidable hospitalizations and improving outcomes.
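As a deliberately simplified illustration of what passive tracking feeding a dynamic risk model could look like, here is a Python sketch. The signal names, weights, and alert threshold are all invented for illustration; this is not a validated clinical instrument:

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class PatientMonitor:
    """Rolling risk score from passively collected, patient-generated signals.
    Field names and weights below are illustrative only."""
    patient_id: str
    window: deque = field(default_factory=lambda: deque(maxlen=7))  # last 7 days

    def record_day(self, pain_score: float, steps: int, weight_change_kg: float) -> float:
        # Toy composite: worse pain, low mobility, and weight loss raise risk.
        daily_risk = (
            (pain_score / 10) * 0.5
            + (1 - min(steps / 5000, 1)) * 0.3
            + max(-weight_change_kg, 0) * 0.2
        )
        self.window.append(daily_risk)
        # Smooth over the window so one bad day doesn't trigger an alert alone.
        return sum(self.window) / len(self.window)

ALERT_THRESHOLD = 0.6  # hypothetical cutoff for proactive palliative outreach

monitor = PatientMonitor("pt-001")
risk = 0.0
for day in [(8, 900, -0.4), (9, 600, -0.5), (9, 400, -0.3)]:
    risk = monitor.record_day(*day)
if risk > ALERT_THRESHOLD:
    print(f"{monitor.patient_id}: rolling risk {risk:.2f}, flag for palliative consult")
```

The point is structural, not statistical: the alert fires between visits, before a crisis forces an ED presentation, which is exactly the episodic-to-continuous shift described above.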

2. Prognostication That Learns | Our current prognostic tools are blunt instruments, static relics of a reactive system. Augmented intelligence offers adaptive models that continuously update as conditions evolve, integrating labs, comorbidities, and utilization patterns into a living forecast. Kaiser Permanente’s recent work on mortality prediction models illustrates how responsible design can support, not supplant, clinical judgment. Here at UCSD, we use Epic's own predictive analytic tools to proactively identify patients with needs our specialist teams can address. This isn’t just better prediction; it’s a redesign of planning and resource allocation. Families gain clarity and more realistic timelines, with fewer “surprise” ICU admissions in the last week of life; clinicians gain foresight; and systems align with value-based care incentives.
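A minimal sketch of what a “living forecast” means in practice: the same risk function re-scored whenever new labs or utilization data arrive. The logistic form is standard, but the feature names and coefficients below are made up for illustration; a real model would be trained and validated on institutional data:

```python
import math

# Illustrative coefficients only; a deployed model would be trained and
# validated locally (e.g., via an EHR vendor's predictive analytics tooling).
COEFFS = {"albumin_g_dl": -0.9, "ed_visits_90d": 0.6, "charlson_index": 0.3}
INTERCEPT = 1.2

def mortality_risk(features: dict) -> float:
    """Logistic risk score that can be recomputed on every new data point,
    turning a one-time prognosis into a continuously updated forecast."""
    z = INTERCEPT + sum(COEFFS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

# The same patient, re-scored as their condition evolves:
baseline = mortality_risk({"albumin_g_dl": 3.8, "ed_visits_90d": 0, "charlson_index": 2})
updated = mortality_risk({"albumin_g_dl": 2.9, "ed_visits_90d": 2, "charlson_index": 2})
print(f"baseline {baseline:.2f} -> updated {updated:.2f}")
```

The contrast with a static tool is the second call: falling albumin and new ED visits move the forecast without anyone re-administering an index, which is what lets planning and resource allocation keep pace with the patient.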

3. Symptom Management at Scale | Language models and NLP are being tested to flag uncontrolled symptoms buried in clinical notes or patient messages. One study using a small language model detected symptom-driven visits with 95% accuracy, revealing a hidden burden of unmanaged symptoms. Imagine reallocating resources based on that intelligence—before suffering escalates.
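The study’s model isn’t reproduced here, but the triage idea can be sketched as a keyword screen over patient portal messages. The lexicon and messages below are hypothetical, and a deployed system would use a trained language model rather than regular expressions; this only shows the shape of the workflow:

```python
import re

# Toy symptom lexicon; a real system would use a trained classifier.
SYMPTOM_PATTERNS = {
    "pain": re.compile(r"\b(pain|aching|hurts?)\b", re.I),
    "dyspnea": re.compile(r"\b(short(ness)? of breath|breathless)\b", re.I),
    "nausea": re.compile(r"\b(nausea(ted)?|vomit(ing)?)\b", re.I),
}

def flag_symptoms(message: str) -> list[str]:
    """Return symptom categories mentioned in a free-text patient message."""
    return [name for name, pat in SYMPTOM_PATTERNS.items() if pat.search(message)]

# Hypothetical portal inbox: route symptom-laden messages to the care team.
inbox = [
    "Mom has been vomiting since Tuesday and the pain is worse at night.",
    "Just confirming Thursday's appointment time.",
]
for msg in inbox:
    hits = flag_symptoms(msg)
    if hits:
        print(f"escalate {hits}: {msg[:40]}...")
```

Even this crude screen makes the reallocation point concrete: uncontrolled symptoms surface from unstructured text and can be routed to a nurse or prescriber before the next scheduled visit.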

4. Ethical Guardrails Matter | Augmented intelligence in palliative care raises hard questions: autonomy, bias, privacy, and the risk of dehumanization. Reviews emphasize the need for transparency, cultural sensitivity, and clinician oversight. If we ignore these, we erode trust—the very currency of palliative care.


AND There Are Hidden Costs of Scaling AI

Energy and emissions from large models rival hundreds of households daily.

  • AI in healthcare significantly increases energy consumption and carbon emissions. LLMs used for clinical tasks can consume energy equivalent to charging a smartphone 11 times per query, and global emissions from AI systems rival those of hundreds of U.S. households daily. Healthcare already accounts for 4–5% of global emissions; scaling AI without sustainability measures compounds this footprint.
  • Algorithmic bias in palliative care is well-documented. Studies show LLMs perpetuate disparities in pain management, access to care, and advance care planning, especially across ethnicity and age axes. Without deliberate bias mitigation and inclusive design, AI risks amplifying existing inequities in end-of-life care. Trust-building and fluency in evaluating bias and governance frameworks are essential for safe deployment, particularly in emotionally complex spaces like palliative care.
  • Access equity is non-negotiable. Will foundation models democratize AI or deepen disparities? For palliative care, we must design augmented intelligence tools that work in lower-resourced settings and actively guard against models that favor well-funded systems.
💡
Future-ready palliative care means demanding sustainable infrastructure and inclusive datasets.
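One of the equity audits called for above can be sketched in a few lines: comparing a model’s referral-flag rate across demographic groups, a simple demographic-parity check that governance frameworks typically require before deployment. The sample records and group labels are hypothetical:

```python
from collections import defaultdict

def flag_rate_by_group(records: list[dict]) -> dict[str, float]:
    """Share of patients flagged for palliative referral, per demographic group.
    One of several fairness checks; parity here does not prove absence of bias."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        flagged[r["group"]] += r["flagged"]
    return {g: flagged[g] / totals[g] for g in totals}

# Hypothetical audit sample (group labels anonymized).
sample = [
    {"group": "A", "flagged": 1}, {"group": "A", "flagged": 1},
    {"group": "A", "flagged": 0}, {"group": "B", "flagged": 1},
    {"group": "B", "flagged": 0}, {"group": "B", "flagged": 0},
]
rates = flag_rate_by_group(sample)
gap = max(rates.values()) - min(rates.values())
print(rates, f"disparity gap: {gap:.2f}")  # large gaps warrant investigation
```

Fluency in running and interpreting checks like this, not building the models themselves, is the governance skill palliative teams will need.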

The Human Work Ahead

Whyte asks: What uniquely human functions will clinicians double down on? In palliative care, the answer is clear:

  • Compassion and strong objectivity amid complexity – standing firmly with a strong back and soft front.
  • Meaning-making in the face of mortality.
  • Exquisitely personalized symptom management.
  • Navigating values and trade-offs.
  • Holding space for suffering when no algorithm can.
  • Preserving dignity and trust in tech-infused environments.

Augmented intelligence will handle pattern recognition at scale. We will handle presence, empathy, moral reasoning...and accountability. We must advocate for sustainability in AI practices and equity audits. Excellence will come from redesign that honors both human dignity and planetary health. Excellence will come from the blend.


Bottom Line

Augmented intelligence isn’t about scribes. It’s about shifting from reactive to anticipatory care, from episodic to continuous support. For palliative care, this is not a tech story—it’s a story about dignity, timing, and trust. And that’s before we even begin to imagine what AI could bring to hospice, particularly in easing documentation and regulatory burden.

AND scaling these systems demands energy, water, and rare resources, raising urgent questions about healthcare’s carbon footprint, while algorithmic bias threatens to deepen inequities in end-of-life care. If we ignore these tradeoffs, we risk progress that comes at the expense of the planet and the most vulnerable patients.

Credit to Dr. Whyte for reframing the conversation. In 2026, let’s make sure specialist palliative care is not a late adopter. The patients we serve deserve better.