This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Implementing generative artificial intelligence in precision oncology: safety, governance, and significance
Citations: 0
Authors: 27
Year: 2026
Abstract
The paramount challenge in precision oncology lies in further improving quality of life and response rates for individual patients. Efforts toward these goals are steadily expanding the scope of clinical implementation, despite ongoing challenges such as standardization, cost-effectiveness, and data harmonization. Building on this maturing foundation, generative AI, which has evolved dramatically in recent years, is particularly valuable at this stage as an auxiliary technology linking literature, guidelines, trial protocols, and patient data to advance efficiency and adoption. Specifically, through mutation interpretation, trial eligibility matching, and tumor board support, it is expected to advance standardization, improve cost-effectiveness, accelerate data harmonization, and further accelerate human-centered decision-making. Accordingly, this review surveys the development history of generative AI and its current healthcare applications, organizing its implementation potential for precision oncology along three axes: (1) generative AI-based interpretation of genetic mutations and estimation of their pathological significance; (2) generative AI-driven verification of clinical trial eligibility; and (3) multimodal foundation models for imaging and pathology that compute "tumor phenotypes" from real-world data, contributing to report drafting and molecular surrogate estimation. To realize this potential, we propose a strategy centered on retrieval-augmented generation (RAG) and human-in-the-loop (HITL) workflows, encompassing data preparation based on OMOP, mCODE, and FHIR; multicenter prospective evaluation; auditable logs and governance aligned with Good Manufacturing Practice (GMP) and the EU AI Act; and a synthetic data strategy including differential privacy.
Ultimately, this approach validates value through real-world outcomes and charts a path toward "learning oncology," accelerating patient-centered decision-making and clinical trial development.
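The strategy summarized above centers on retrieval-augmented generation (RAG): grounding a model's answer in retrieved guideline and trial text rather than its parametric memory. A minimal sketch of that pattern follows; the corpus snippets, document IDs, and bag-of-words retriever are illustrative placeholders (a real deployment would use a clinical knowledge base and a learned embedding model), none of which come from the paper.

```python
# Minimal RAG sketch: retrieve the most relevant snippets for a query,
# then assemble a prompt that instructs the model to answer only from them.
# CORPUS contents and IDs are invented for illustration.
from collections import Counter
import math

CORPUS = {
    "guideline_brca": "BRCA1 pathogenic variants indicate PARP inhibitor eligibility",
    "trial_eligibility": "trial requires EGFR exon 19 deletion and no prior TKI therapy",
    "guideline_tmb": "high tumor mutational burden supports immune checkpoint therapy",
}

def _vec(text: str) -> Counter:
    """Bag-of-words vector (stand-in for a proper embedding model)."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the IDs of the k corpus snippets most similar to the query."""
    q = _vec(query)
    ranked = sorted(CORPUS, key=lambda i: _cosine(q, _vec(CORPUS[i])), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt; a human reviewer (HITL) vets the answer."""
    context = "\n".join(f"[{i}] {CORPUS[i]}" for i in retrieve(query))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\n"
            f"Answer using only the context above, citing snippet IDs.")
```

In the HITL workflow the paper advocates, the assembled prompt and the cited snippet IDs would be logged for audit, and the model's draft would be reviewed by a clinician before reaching the tumor board.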
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations
Authors
- Ryuji Hamamoto
- Takafumi Koyama
- Satoshi Takahashi
- Tomohiro Yasuda
- Kazuma Kobayashi
- Yu Akagi
- Nobuji Kouno
- Kazuki Sudo
- Makoto Hirata
- Kuniko Sunami
- Takashi Kubo
- H. Katayama
- Atsuo Takashima
- Tomonori Taniguchi
- Hiromi Matsumoto
- Ryota Shibaki
- Ken Asada
- Masaaki Komatsu
- Syuzo Kaneko
- Masayoshi Yamada
- Hidehito Horinouchi
- Katsuya Tanaka
- Yasushi Goto
- Koji Kato
- Yutaka Saito
- Kenichi Nakamura
- Noboru Yamamoto
Institutions
- RIKEN Center for Advanced Intelligence Project (JP)
- National Cancer Research Institute (GB)
- Tokyo National Hospital (JP)
- The University of Tokyo (JP)
- Kyoto University (JP)
- National Cancer Center (US)
- National Cancer Center Hospital East (JP)
- Japan Clinical Cancer Research Organization (JP)
- National Cancer Centre Japan (JP)