This is an overview page with metadata for this scientific article. The full article is available from the publisher.
ChatGPT in the Indian healthcare scenario: Look before you leap
Citations: 1
Authors: 2
Year: 2023
Abstract
The transformational power of artificial intelligence (AI)-based technologies in healthcare over the past few years is evident, although their actual use in healthcare delivery remains limited.[1,2] The most widely used type of data by AI-based technologies in healthcare has been radiological images,[3] followed by clinical images, to identify pathological changes.[4] However, there has also been increasing use of natural language processing (NLP) approaches to mine electronic health records for valuable clinical insights.[5] ChatGPT is a large language model (LLM)-based chatbot developed by OpenAI, which can interact in a conversational way. It has been trained on large amounts of data available on the Internet and refined through iterative human feedback, using reinforcement learning. It has already begun to find wide application in different fields, with technology professionals leading the way.[6] Studies have shown that the acceptance of digital technology by healthcare professionals is usually slow, particularly in developing countries.[7] Therefore, understanding the attitudes of healthcare professionals toward new, potentially transformative digital technologies such as ChatGPT is important. Hence, the article by Parikh et al.[8] comparing perceptions of ChatGPT between healthcare professionals and professionals from other backgrounds is timely. The article highlights that although fewer healthcare professionals had actually used ChatGPT themselves, they were, in general, more favorably inclined and optimistic about the impact it would have on their profession than other professionals were. These findings, though interesting, must be interpreted with caution due to a range of methodological issues with the study.[9] These include the absence of a clear sampling frame, a small sample size, and a lack of information on the survey response rate and on the background of the participants who responded to the survey.
These issues make it difficult to conclude whether the findings of the study truly represent the perceptions of healthcare professionals. Furthermore, the reasons for the differences in perception between healthcare professionals and other professionals were not adequately explored or discussed. They could include differences in socio-demographic variables (such as age, gender, and educational background) or technology-related factors (such as exposure and familiarity with digital technology). Hence, attributing these differences to the profession alone would be an erroneous conclusion. In fact, it is possible that a lack of experience with ChatGPT may actually have contributed to the favorable outlook among healthcare professionals, who may not yet have become aware of the pitfalls of such technology.[10] The survey should also have gathered information about the contexts in which healthcare professionals have already used ChatGPT and the areas of healthcare where it may find utility in the future. This could have been achieved by specifically exploring its impact on different aspects of healthcare, including clinical care, teaching, research, and administration. The ethical considerations associated with the use of ChatGPT in healthcare warrant further exploration. These aspects could have been addressed and investigated in the survey.[11] As with other disruptive technologies, opinions regarding ChatGPT among the medical fraternity remain polarized. However, a PubMed search for the keyword "ChatGPT" in titles and abstracts returned 380 articles on May 10, 2023, a mere six months after its launch. This points to the fact that ChatGPT has already made an impact on healthcare. It is the responsibility of healthcare professionals to rigorously study and evaluate this impact, so that appropriate uses for it in healthcare can be identified.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,349 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,219 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,631 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,480 citations