This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
医学人工智能的算法黑箱问题:伦理挑战与化解进路 (The Algorithmic Black-Box Problem in Medical Artificial Intelligence: Ethical Challenges and Resolution Pathways)
Citations: 1
Authors: 2
Year: 2023
Abstract
Artificial intelligence (AI) is gradually becoming an important force driving the development of biomedicine. However, as AI increasingly relies on complex and opaque machine learning algorithms, a critical ethical issue known as the “algorithmic black-box” problem has emerged. Despite the development of several explainability tools, they have not been widely adopted in the medical field because they cannot provide satisfactory explanations in clinical practice. Furthermore, different stakeholders, including algorithm experts, medical professionals, patients, and the general public, have varying requirements for explainability and transparency. As a result, a series of ethical issues has arisen at the internal, internal-external interaction, and external levels, spanning the data, algorithmic, and social dimensions. To address the ethical challenges posed by the increasing complexity and opacity of medical artificial intelligence, constructing medical artificial moral agents has been proposed as a viable solution. Three approaches to implementing ethical frameworks in this domain have been identified: the top-down approach, the bottom-up approach, and the hybrid approach. The top-down approach prioritizes moral design based on specific ethical principles. However, it struggles to respond appropriately to complex ethical situations because of the lack of consensus among ethics experts, contradictions between ethical principles and practical goals, and the abstract nature of moral principles. The bottom-up approach, by contrast, requires medical artificial intelligence to develop a set of operating methods that align with human moral intuition through case-based reinforcement learning scenarios. Nonetheless, this approach is effective only for retrospective regulation, and converging moral reasoning to a stable pattern remains challenging.
In light of the current state of artificial intelligence development, it is imperative to adopt a “hybrid approach” that integrates the top-down and bottom-up approaches throughout the process of developing medical artificial intelligence. This involves establishing a flexible ethical framework via the top-down approach that takes contextual factors into account to enhance algorithm transparency, and leveraging the strengths of medical artificial intelligence to develop diverse models of moral reasoning through a bottom-up approach that incorporates multiple sources of contextual information. While some scholars may argue that the hybrid approach is redundant, given the contemporary demand for moral pluralism and contextualism, this path toward reflective equilibrium can better address moral disagreements in the real world, ensuring that the ethical behavior of medical artificial intelligence aligns with the value judgments of relevant stakeholders. From an internal perspective, the hybrid approach involves algorithm engineers developing explainability tools that are independent of the underlying machine learning models, assessing ethical risks, or constructing algorithmic models with self-explanatory capabilities to enhance the explainability of the algorithm. From the external and internal-external interaction perspectives, it entails stakeholders actively participating in the algorithm design process, incorporating diverse viewpoints through an open and participatory research and development model to enhance the interpretability of medical artificial intelligence, and establishing a “humanistic” ethical framework for medical artificial intelligence. While the hybrid approach cannot yet entirely eliminate the algorithmic black-box phenomenon, developing an “algorithmic gray-box” with local transparency represents a feasible goal for designing current medical artificial moral agents.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,549 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,443 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,941 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,792 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations