This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Participatory-informed preference optimization (PiPrO): A reinforcement learning simulation study
Citations: 0
Authors: 4
Year: 2026
Abstract
Artificial intelligence (AI) has transformative potential in public health, but its impact is limited by models that implicitly prioritize a single stakeholder perspective and do not make explicit, tunable trade-offs between community and clinician endorsement. To address this gap, we introduce Participatory-informed Preference Optimization (PiPrO), a large language model embedding-based calibration framework that generates a single clinical outcome prediction while explicitly accounting for differences between community and physician interpretations of the same scenario. PiPrO takes as input two embeddings derived from a large language model, representing a community-facing context and a physician-facing context. It then applies a shared lightweight feedforward predictor to produce per-stakeholder scores, which are mixed using a single global mixing weight (alpha). Alpha controls how strongly the final prediction reflects the community versus physician responses and is learned using a policy-gradient update driven by an abundant but noisy community text signal and a sparse, biased physician text signal. PiPrO reliably learned stable alpha values and a consistent reward signal. Alpha shifts systematically toward physician weighting as community feedback becomes noisier and toward community weighting as physician feedback becomes more biased. Our results suggest PiPrO's potential to produce more transparent and context-sensitive AI-driven healthcare recommendations. Future research should validate this approach using real-world community inputs to ensure generalizability and practical impact.
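The mechanism described in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the authors' implementation: the embedding dimensions, feedback targets, noise/bias magnitudes, and the Gaussian-policy parameterization of alpha's logit are all assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical 8-dim stand-ins for the two LLM-derived context embeddings.
dim = 8
W, b = rng.normal(size=dim), 0.0
community_emb = rng.normal(size=dim)
physician_emb = rng.normal(size=dim)

# Shared lightweight feedforward predictor applied to both embeddings.
s_comm = sigmoid(community_emb @ W + b)
s_phys = sigmoid(physician_emb @ W + b)

# Gaussian policy over alpha's logit; a REINFORCE-style update learns its mean.
mu, sigma, lr = 0.0, 0.5, 0.05
baseline = 0.0  # running-mean reward baseline to reduce gradient variance

for step in range(2000):
    eps = rng.normal(scale=sigma)
    alpha = sigmoid(mu + eps)  # sampled global mixing weight in (0, 1)
    prediction = alpha * s_comm + (1.0 - alpha) * s_phys

    # Stand-in feedback signals: noisy community target, biased physician target.
    comm_target = 0.7 + rng.normal(scale=0.3)  # abundant but noisy
    phys_target = 0.7 - 0.2                    # sparse but biased
    reward = -((prediction - comm_target) ** 2 + (prediction - phys_target) ** 2)

    # Score-function (policy-gradient) update on the policy mean.
    mu += lr * (reward - baseline) * eps / sigma**2
    baseline += 0.05 * (reward - baseline)

final_alpha = float(sigmoid(mu))
print(round(final_alpha, 3))
```

Under this toy setup, increasing the community noise scale should push the learned alpha toward the physician score, and increasing the physician bias should push it toward the community score, mirroring the qualitative behavior reported in the abstract.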
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,336 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,207 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,607 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,476 cit.