OpenAlex · Updated hourly · Last updated: 09.05.2026, 06:40

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

How to make sense of the ethical issues raised by artificial intelligence in medicine

2023 · 6 citations · Internal Medicine Journal · Open Access
Open full text at the publisher

Citations: 6 · Authors: 2 · Year: 2023

Abstract

Recent developments in artificial intelligence (AI), including large language models such as ChatGPT, have generated intense interest, as well as urgent expressions of concern at all levels of society.[1] Alarm has been expressed about possible implications for the way we conduct our lives, our personal relationships and how we might make decisions. Questions have been asked about whether machines will seize control of human affairs, compromise freedom and eliminate the domain of ethical decision-making.[2] There has been speculation about the possibility of dangerous, unpredictable consequences for the species and the environment.[3] The concerns have become so intense that several high-profile business people, scientists, engineers and others have called for a moratorium on the development of AI to allow a public debate to take place about its future direction and regulation.[4]

Medicine is among the many areas in which questions about the implications of AI have been raised.[5] What effect, for example, will ChatGPT have on clinical care? Will it improve or undermine the quality and efficiency of decision-making? Will it increase or erode individual agency? Will it enhance or obstruct communication between doctors and patients? Will it support a fairer allocation of resources, or will it merely exacerbate existing inequities? Ultimately, will it generate better or worse outcomes? None of these questions can be answered with confidence at present. In part, this is because our experience with AI in the clinic remains relatively limited.[6] To be sure, decision-support algorithms have long been used to assist with diagnosis in specialised contexts, robots have been used in surgery for many years, and search engines and databases have long provided access to information where required.[7] However, the powerful recent developments have so far largely failed to penetrate the clinic,[8] and their possible implications remain to be fully elaborated.

Our ability to respond is also limited by the sheer complexity of the questions. The small industry that has sprung up around the ethical uncertainties provoked by AI provides little relief.[9] The mere listing of familiar issues, such as ‘autonomy’, ‘consent’, ‘justice’ and ‘privacy’, and the routine application of standardised ethical principles are incapable of providing compelling answers. The call for a moratorium by billionaire businessmen does little to increase confidence, instead merely raising further questions about who is to undertake the supposed ‘public debate’ and according to whose values or what standards the ultimate decisions will be made.

To respond effectively to the challenges of AI, we need to undertake a more fundamental reflection on what we most value about medicine and what we consider to be its main points of vulnerability. It will also be important to clarify which aspects of medicine ChatGPT and other AI systems may come to supplement or replace. It is possible that this process of questioning will itself yield useful insights, not only about AI but also about the nature of clinical practice and the values, goals and purposes we believe we should be serving. The reflections that are required are inherently ethical. This will come as no surprise to clinicians, whose everyday practice is inescapably saturated with discussions about values.
It is a basic fact of clinical life that decisions are never purely ‘technical’ in nature but must always take account of the broader contexts of the lives of patients and their families, including their values, cultural norms, religious beliefs, loyalties, deep emotional attachments and fears, hopes and aspirations. Such decisions are arrived at through dialogue with physicians, who inhabit complex lifeworlds of their own, impregnated with culture, experience and values.[10] All clinical encounters are premised on large-scale, value-laden assumptions about relationships, health and illness, equity and justice. They are always inscribed within frameworks that set out professional obligations and duties and limit or regulate certain forms of conduct. They take for granted irreducible bonds of trust and responsibility underlying every human interaction and process of communication. In addition, all forms of medical practice involve bodies – bodies in pain, bodies with desire, bodies in need, bodies facing death, bodies appealing to other bodies. The activity of ‘patient-centring’,[11] the promotion of which has become a modern cliché, is not merely a computational or cerebral activity: it is the result of an active sharing of value-saturated experiences in real time by two or more embodied individuals. In the complex and dynamic context of the clinic, all of these issues are routinely negotiated, often with vividness and intensity. In every case, the validity of the conclusions that are drawn, and of the decisions that are made, is tested in relation not to ends and consequences but to the integrity of the processes by which they were generated – that is, to the quality of the dialogue, the depth of the reflection, the openness to compromise and the degree of trust and respect established among the parties.

It is in this intense cauldron of existential and ethical extremes that we need to discern what AI has to offer. There can be no doubt that the technologies it makes available can usefully expedite access to knowledge and research data, help avoid errors of fact, clarify risk factors for adverse events and identify possible drug interactions.[12] The technologies may even be able to remind us of the need to consider the broader ethical contexts described above. But whether, in any domain that includes ethics, AI can provide more than rudimentary guidance remains highly doubtful. Beyond that point, the inherent nature of ethical discourse as a process of direct and open dialogue between human beings would seem to present a formidable obstacle. The question of AI's contribution to the ethical dimension of the clinical encounter will therefore require careful clarification of which of its components can be meaningfully formalised, calculated and delegated to automated technologies, and which simply cannot. We will need to ask whether the structures of mind embedded in AI thinking processes themselves incorporate or enforce biases about values or value systems, and whether they can engage in critical scrutiny of difficult issues such as sexuality, racial difference, politics and religion. Once this clarification has been achieved, potential participants in clinical relationships will have to decide together to what extent they are prepared to place unconditional trust in particular mechanised, disembodied thinking processes.
While the agonising over AI is likely to be protracted and challenging, it is also possible that this very process will yield unexpected insights into the deep structures of the clinic and its foundational values. If AI can expose and illuminate hidden assumptions, it may enrich and deepen our ethical experience, even if AI itself remains strictly excluded from that territory.

Topics

Artificial Intelligence in Healthcare and Education · Autopsy Techniques and Outcomes · Ethics in Clinical Research