This is an overview page with metadata for this scientific article. The full article is available from the publisher.
An intriguing vision for transatlantic collaborative health data use and artificial intelligence development
20
Citations
1
Author
2024
Year
Abstract
Our traditional approach to diagnosis, prognosis, and treatment can no longer process and transform the enormous volume of information into therapeutic success, innovative discovery, and health economic performance. Precision health, i.e., the right treatment for the right person at the right time in the right place, is enabled through a learning health system in which medicine and multidisciplinary science, economic viability, diverse cultures, and empowered patients' preferences are digitally integrated and conceptually aligned for the continuous improvement and maintenance of health, wellbeing, and equity. Artificial intelligence (AI) has been successfully evaluated for risk stratification, accurate diagnosis, and treatment allocation, and for preventing health disparities. There is one caveat, though: dependable AI models need to be trained on population-representative, large, and deep data sets by multidisciplinary and multinational teams to avoid developer, statistical, and social bias. Such applications and models can neither be created nor validated with data at the country level, let alone the institutional level; they require a new dimension of collaboration and a cultural change that establishes trust in a precompetitive space. The Data for Health (#DFH23) conference in Berlin and the follow-up workshop at Harvard University in Boston hosted a representative group of stakeholders from society, academia, industry, and government. With the momentum #DFH23 created, the European Health Data Space (EHDS) as a solid and safe foundation for consented collaborative health data use, and the G7 Hiroshima AI Process in place, we call on citizens and their governments to fully support the digital transformation of medicine, research, and innovation, including AI.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations