Watch the video below, then think about your responses to the discussion questions that follow. Make notes for our discussion in class.

https://youtu.be/FqsvgFTQv8w?si=CxmXjVVE4Z6-asJt

Part A: The Doctor's New Partner

Introduction

Artificial intelligence is no longer a futuristic concept in healthcare; it's a foundational change in the medical toolkit. While early AI helped with tasks like transcribing notes, the new wave of generative AI is supercharging the core work of medicine. One of the most promising areas is diagnostics, where AI is helping clinicians interpret complex tests faster and more accurately than ever before.

A prime example is Yale's ECGGPT, a tool that takes an image of an electrocardiogram (ECG) and generates a full diagnostic report, a task that traditionally requires a trained cardiologist. The AI can detect subtle signals in the ECG data that a human reader might miss, identifying conditions like heart failure from the electrical recording alone. However, the goal of this technology is not to replace the doctor, but to serve as a powerful assistant. As the researchers note, the hope is that the tool can provide "very accurate reads that are available all over that eventually clinicians can confirm and use in their care". This creates a new paradigm of human-AI collaboration, but it also raises critical questions: How do we measure the success of such a tool, and how do we ensure it is safe, fair, and effective?

Discussion Questions

  1. Who are the key stakeholders for a tool like ECGGPT (e.g., patients, cardiologists, rural doctors, hospitals, insurance companies, regulators)? How might their definitions of a "successful" implementation differ?
  2. Using the frameworks from this chapter, would you classify the development of a diagnostic tool like ECGGPT as an act of Preservation, Optimization, or Experimentation? Justify your answer by defining the project's core objective (is it known or emerging?) and its approach (is it established or innovative?).
  3. Chapter 6 argues that we must move beyond simple metrics to evaluate AI. For a tool like ECGGPT, why is "overall accuracy" an insufficient measure of success? What other performance indicators (e.g., speed, clinician confidence) and result indicators (e.g., patient outcomes, cost savings) would be necessary to prove its value?
  4. The video transcript emphasizes that "there is still a real doctor involved in the diagnosis". How does the intended use of ECGGPT exemplify a Human-in-the-Loop (HITL) evaluation strategy? In this partnership, what is the primary role of the AI, and what is the irreplaceable role of the human clinician?
  5. The researchers in the video stress the need for "stage gates and reviews" to ensure AI tools are safe before they reach patients. How could the "AI Shepherding Framework" mechanisms from Chapter 6 be applied here?

Part B: The Last Clinic in America

Introduction

In the not-so-distant future, the familiar corner clinic has gone the way of the video store—a relic of a bygone era. Healthcare is now dominated by a handful of mega tech companies. The system is hyper-efficient: every individual's health is constantly monitored by data-streaming implants, with AI algorithms predicting illness months or years in advance. Treatments are perfectly personalized through gene-sequenced medicine, and complex procedures are handled by robotic surgeons with a 99.999% success rate. For terminal cases, the ultimate service is offered: a cryogenic brain-freeze and consciousness upload, allowing family members to interact with a digital version of their loved ones forever, effectively promising an end to death itself. There are no more lines, no more misdiagnoses, and no more uncertainty. But what was lost when the last clinic in America closed its doors?

Discussion Questions

  1. In the world of "The Last Clinic," healthcare has been transformed into a fully automated system managed by a few for-profit tech companies. Who holds the power in this new ecosystem? What happens to the social contract between doctor and patient when the "doctor" is a proprietary algorithm?