This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Perspective on patient and non-academic partner engagement for the responsible integration of large language models in health chatbots
Citations: 1 · Authors: 13 · Year: 2025
Abstract
Uses of large language models (LLMs) in health chatbots are expanding into high-stakes clinical contexts, heightening the need for tools that are evidence-based, accountable, accurate, and patient-centred. This conceptual, practice-informed Perspective reflects on engaging patients and non-academic partners for the responsible integration of LLMs, grounded in the co-construction of MARVIN (for people living with HIV) and in an emerging collaboration with MIT Critical Data. Organised by the Software Development Life Cycle, we describe: conception/needs assessment with patient partners to identify use cases, acceptable trade-offs, and privacy expectations; development that prioritises grounding via vetted sources, structured human feedback, and data-validation committees including patient partners; testing and evaluation using patient-reported outcome measures (PROMs) and patient-reported experience measures (PREMs) chosen in collaboration with patients to capture usability, acceptability, trust, and perceived safety, alongside task performance and harmful-output monitoring; and implementation via diverse governance boards, knowledge-mobilisation materials to set expectations, and risk-management pathways for potentially unsafe outputs. Based on our experience with MARVIN, we recommend early and continuous engagement of patients and non-academic partners, fair compensation, shared decision-making power, transparent decision logging, and inclusive, adaptable governance that can evolve with changing models and standards. These lessons highlight how patient partnership can directly shape chatbot design and oversight, helping teams align LLM-enabled tools with patient-centred goals while building accountable, safe, and equitable systems.

Health chatbots powered by large language models (LLMs) can make medical information more accessible, but most are developed without meaningful input from the people who will use them. This risks unsafe answers, hidden bias, and tools that mainly work for privileged groups. Our team built a chatbot called MARVIN to support people living with HIV, and we are now adapting it for cancer care and children's health. Patients, caregivers, and community partners shaped what MARVIN should do, chose which sources it should trust, and tested early versions. Their feedback led to concrete improvements including clearer language, more relevant features, and safeguards against misinformation. We are also partnering with MIT Critical Data, which brings patients, members of the public, clinicians, engineers, and policymakers together at events to find and fix bias in medical AI. We have learned that technical fixes alone are not enough: trust, fairness, and accountability require active involvement of diverse users at every stage. Based on these lessons, we recommend: (1) including patients and non-academic partners from the start so their insights can shape core design decisions; (2) compensating them fairly so participation is sustainable; (3) giving them real decision-making power so their input is not tokenistic; and (4) being transparent about the limits of AI so expectations are realistic. In our experience, responsible health AI depends on the lived expertise of the people it serves.
Related Works
Amazon's Mechanical Turk
2011 · 10,024 citations
The Transtheoretical Model of Health Behavior Change
1997 · 7,665 citations
COVID-19 and mental health: A review of the existing literature
2020 · 3,703 citations
Cognitive Therapy and the Emotional Disorders
1977 · 2,931 citations
Mental health problems and social media exposure during COVID-19 outbreak
2020 · 2,786 citations
Institutions
- McGill University Health Centre (CA)
- McGill University (CA)
- Polytechnique Montréal (CA)
- Montreal Children's Hospital (CA)
- Mila - Quebec Artificial Intelligence Institute (CA)
- Université de Montréal (CA)
- Canadian Patient Safety Institute (CA)
- Massachusetts Institute of Technology (US)
- National Patient Safety Foundation (US)
- Beth Israel Deaconess Medical Center (US)
- Harvard–MIT Division of Health Sciences and Technology (US)