
In a new paper published this week in the journal JCO Oncology Practice, bioethics researchers from the Dana-Farber Cancer Institute have urged medical societies, government leaders, clinicians, and researchers to collaborate to ensure that patient-facing AI technologies preserve patient autonomy and respect human dignity.

AI is increasingly integrated into cancer care, offering patients a range of tools and services, from scheduling appointments and monitoring their health to providing information about their condition and treatment. The Dana-Farber team notes that a key pitfall of AI in cancer care is the possibility that it could depersonalize care and erode the relationship between patients and their caregivers. The researchers say that while many previous reports have examined the effects AI may have on clinicians and researchers, their analysis is the first to address potential concerns regarding patients' own use of technology with embedded AI.

“To date, there has been little formal consideration of the impact of patient interactions with AI programs that haven’t been vetted by clinicians or regulatory organizations,” says the paper’s lead author, Amar Kelkar, MD, a stem cell transplantation physician at Dana-Farber Cancer Institute. “We wanted to explore the ethical challenges of patient-facing AI in cancer, with a particular concern for its potential implications for human dignity.”

The authors noted that, to date, patients' direct interaction with AI has been limited, though that is expected to change as clinicians and researchers more regularly turn to the technology to help diagnose cancer, choose medications, and predict outcomes.

The authors identify three areas where patients may engage with AI today or in the near future: telehealth visits, as patient care moves to remote settings; remote monitoring of patient health, which AI may enhance by analyzing patient-reported information or data collected from wearables; and health coaching, where expected improvements will make applications more human-like but could eliminate the person-to-person contact that defines traditional healthcare delivery.

The ethical challenges related to patient interactions with AI-enabled systems are clear, the study's authors note, and include potential risks to patient privacy as well as more impersonal care. To address these concerns, the authors propose several guiding principles for the development and adoption of patient-facing AI, including:

  • Human dignity: AI cannot replicate the empathy, compassion, and cultural understanding provided by human caregivers, making it essential to prevent overdependence on AI.
  • Patient autonomy: Patients should understand the limits of AI-generated recommendations and be able to differentiate between physician advice and algorithmic suggestions.
  • Equity and justice: AI models should be trained on diverse data to ensure they reflect the racial, ethnic, and socioeconomic diversity of the population.
  • Regulatory oversight: There should be clear regulations in place to govern the use of AI in healthcare, protecting patient rights and privacy.
  • Collaboration: Various stakeholders in oncology must work together to ensure AI technology enhances patient autonomy and dignity, rather than undermining them.

The researchers noted there is “massive potential” for patient-facing AI to positively impact patient care, but that there are few safeguards in place. They hope their paper can “help lay the groundwork for conscientious and patient-focused integration of these technologies.”
