Join Marco Ciappelli and Sean Martin as they convene healthcare leaders Dr. Robert Pearl, Rob Havasy (HIMSS), John Sapp (Texas Mutual Insurance), Jim StClair (Altarum), and Robert Booker (HITRUST) for an urgent exploration of AI in healthcare. Who truly benefits from diagnostic algorithms and automated care? Who bears the financial and ethical costs? In our Hybrid Analog Digital Society, this panel confronts the critical questions about governance, liability, and the future of human-centered medicine.
AI in Healthcare: Who Benefits, Who Pays, and Who's at Risk in Our Hybrid Analog Digital Society
🎙️ EXPERT PANEL Hosted By Marco Ciappelli & Sean Martin
I had one of those conversations recently that reminded me why we do what we do at ITSPmagazine. Not the kind of polite, surface-level exchange you get at most industry events, but a real grappling with the contradictions and complexities that define our Hybrid Analog Digital Society.
This wasn't just another panel discussion about AI in healthcare. This was a philosophical interrogation of who benefits, who pays, and who's at risk when we hand over diagnostic decisions, treatment protocols, and even the sacred physician-patient relationship to algorithms.
The panel brought together some of the most thoughtful voices in healthcare technology: Dr. Robert Pearl, former CEO of the Permanente Medical Group and author of "ChatGPT, MD"; Rob Havasy from HIMSS; John Sapp from Texas Mutual Insurance; Jim StClair from Altarum; and Robert Booker from HITRUST. What emerged wasn't a simple narrative of technological progress or dystopian warning, but something far more nuanced—a recognition that we're navigating uncharted territory where the stakes couldn't be higher.
Dr. Pearl opened with a stark reality: 400,000 people die annually from misdiagnoses in America. Another half million die because we fail to adequately control chronic diseases like hypertension and diabetes. These aren't abstract statistics—they're lives lost to human error, system failures, and the limitations of our current healthcare model. His argument was compelling: AI isn't replacing human judgment; it's filling gaps that human cognition simply cannot bridge alone.
But here's where the conversation became truly fascinating. Rob Havasy described a phenomenon I've noticed across every technology adoption curve we've covered—the disconnect between leadership enthusiasm and frontline reality. Healthcare executives believe AI is revolutionizing their operations, while nurses and physicians on the floor are quietly subscribing to ChatGPT on their own because the "official" tools aren't ready yet. It's a microcosm of how innovation actually happens: messy, unauthorized, and driven by necessity rather than policy.
The ethical dimensions run deeper than most people realize. When my co-host Sean Martin and I asked about liability, the panel's answer was refreshingly honest: we don't know. The courts will eventually decide who's responsible when an AI diagnostic tool leads to harm. Is it the developer? The hospital? The physician who relied on the recommendation? Right now, everyone wants control over AI deployment but minimal liability for its failures. Sound familiar? It's the classic American pattern of innovation outpacing regulation.
John Sapp introduced a phrase that crystallized the challenge: "enable the secure adoption and responsible use of AI." Not prevent. Not rush recklessly forward. But enable—with guardrails, governance, and a clear-eyed assessment of both benefits and risks. He emphasized that AI governance isn't fundamentally different from other technology risk management; it's just another category requiring visibility, validation, and informed decision-making.
Yet Robert Booker raised a question that haunts me: what do we really mean when we talk about AI in healthcare? Are we discussing tools that empower physicians to provide better care? Or are we talking about operational efficiency mechanisms designed to reduce costs, potentially at the expense of the human relationship that defines good medicine?
This is where our Hybrid Analog Digital Society reveals its fundamental tensions. We want the personalization that AI promises—real-time analysis of wearable health data, pharmacogenetic insights tailored to individual patients, early detection of deteriorating conditions before they become crises. But we're also profoundly uncomfortable with the idea of an algorithm replacing the human judgment, intuition, and empathy that we associate with healing.
Jim StClair made a provocative observation: AI forces us to confront the uncomfortable truth about how much of medical practice is actually procedure, protocol, and process rather than art. How many ER diagnoses follow predictable decision trees? How many prescriptions are essentially formulaic responses to common presentations? Perhaps AI isn't threatening the humanity of medicine—it's revealing how much of medicine has always been mechanical, freeing clinicians to focus on the parts that genuinely require human connection.
The panel consensus, if there was one, centered on governance. Not as bureaucratic obstruction, but as the framework that allows us to experiment responsibly, learn from failures without catastrophic consequences, and build trust in systems that will inevitably become more prevalent.
What struck me most wasn't the disagreements—though there were plenty—but the shared recognition that we're asking the wrong question. It's not "AI or no AI?" but "What kind of AI, governed how, serving whose interests, with what transparency, and measured against what baseline?"
Because here's the uncomfortable truth Dr. Pearl articulated: we're comparing AI to an idealized vision of human medical practice that doesn't actually exist. The baseline isn't perfection—it's 400,000 annual misdiagnoses, burned-out clinicians spending hours on documentation instead of patient care, and profound healthcare inequities based on geography and economics.
The question isn't whether AI will transform healthcare. It already is. The question is whether we'll shape that transformation consciously, ethically, and with genuine concern for who benefits and who bears the risks.
Listen to the full conversation and subscribe to stay connected with these critical discussions about technology and society.
In this inaugural ITSPmagazine Thought Leadership Webinar, Marco Ciappelli and Sean Martin convene a powerhouse panel of healthcare and technology leaders to examine one of the most pressing questions of our time: as artificial intelligence transforms healthcare delivery, who truly benefits, who bears the financial burden, and who's left vulnerable to new risks? The discussion moves beyond surface-level optimism to confront the governance gaps, ethical dilemmas, and cultural resistance that define AI's integration into medical practice.
Marco Ciappelli: "I feel like a lot of people do not use it properly because they're afraid that this data that they share with the agents is going to be obviously feeding the algorithm, feeding the knowledge base, also ended up in the hands of marketers or people that are gonna use it in the wrong way."
"I don't want it to be, in an ideal world, an economic decision. And that's tough in our society. So I love the round from you guys about the ethics and how we could avoid a wild west where people are just gonna go and mine gold and come back rich."
Dr. Robert Pearl: "We have 400,000 people who die annually from misdiagnoses. We have probably another half million who die from the fact that we control hypertension, diabetes, heart failure, nowhere near as well as we should be able to do so. And this technology is sitting on the edge of being able to provide that."
"I think the challenge we have is we're holding a technology to a higher standard than we hold humans. I think we have to ask not, is it perfect? Not, is there risk? But is the risk more or less?"
"Remember, this technology's doubling in power every year. It's gonna get twice as good in a year. It's gonna get 30 times better in five years. And I just wanna make sure that we're looking into the future, not trying to manage what exists today."
"If I could offer a controversial view, I would separate this into two kinds of risk. There's the technological risk, the ones we've been talking about. And that's not gonna be any different in generative AI than other forms of AI... But I wanna focus a little bit on medical risk."
"We have to be objective and not allow human bias to cloud our thinking when it comes to this technology. We have to acknowledge the mistakes that people make, the opportunities of the technology, and we have to make certain, based upon good scientific studies, that the technology is superior to humans."
Rob Havasy (HIMSS): "Where healthcare leaders and c-level executives are saying agents are there to start revolutionizing the process and we have a whole governance process to deploy them. And the people on the floor are saying, oh my God, these things are terrible. And yes, I used AI, but I actually have a ChatGPT subscription I don't tell my boss about because the tools they're handing me just aren't ready yet."
"We're beginning to close that information asymmetry that's always been behind a lot of the doctor-patient relationship, right? Because doctors go to school for a very long time and most patients don't go for those same subjects for that amount of time, and AI can help them close the knowledge gap."
"The position of many physicians is that they want full control and input in how AI is developed, but none of the liability if it goes wrong. The point of the developers is that they can't accept liability for someone misusing their products."
"When I hear nurses, doctors say, I'm worried that AI is going to take my job, what they really mean is, I'm afraid my boss is gonna replace me with AI. And that's not AI's fault. That's not an intrinsic characteristic of AI. That's how organizations choose to use it."
John Sapp (Texas Mutual Insurance): "My job is to enable the secure adoption and responsible use of AI in the business... How do I enable the secure adoption and responsible use of AI technology?"
"We can't let fear of what may go wrong prevent us from advancing forward... We live in a litigious society, you know, and America's probably the worst of them all. But we can't let that be the thing that causes us to go, oh my God, wait, wait, we gotta stop and wait."
"There's a difference between anonymization and de-identification. They are not one and the same... People think safety, they think physical safety. No, it's not just physical safety."
"Governance is about visibility so that an informed risk-based decision can be made. And so it is, how do I frame that up in a way that an informed risk-based decision can be made, based on the risk appetite of the organization."
Robert Booker (HITRUST): "I don't know that I want an agent to be the relationship with me and my physician. I want that to be a relationship with my physician. I'm happy if they have an agent to help them."
"Consumers give their information to providers that we all could name here every day. And you really question the promiscuous nature of their habits... Because I get free email, I get free mapping software, I get, you know, whatever."
"Organizations that are being responsible, saying, you know, I'm going to take the initiative to manage my risk and to have transparency around my use of these tools and to try to do good with them, but also do good in a way that I can explain what I'm doing."
Jim StClair (Altarum): "We can narrow down a personalized decision that you needed intervention. And that intervention may not apply to 15 other people, but it applies to you under these circumstances, right? Because of the algorithmic tuning to be able to tell you that."
"AI opens the door for realizing so many things are process or procedure or a billing code or a prescription... How many diagnoses in the ER and how many cases could go through just in following a process flow around a trauma or a particular presentation of complaints and symptoms that, you know, AI could step in."
"Organizations that are gonna be reckless are always gonna be reckless... But they tend to flame out, I guess, once in a while."