This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Identification of ChatGPT‐Generated Abstracts Within Shoulder and Elbow Surgery Poses a Challenge for Reviewers
16
Citations
11
Authors
2024
Year
Abstract
Purpose: To evaluate the extent to which experienced reviewers can accurately discern between artificial intelligence (AI)–generated and original research abstracts published in the field of shoulder and elbow surgery and compare this with the performance of an AI detection tool.

Methods: Twenty-five shoulder- and elbow-related articles published in high-impact journals in 2023 were randomly selected. ChatGPT was prompted with only the abstract title to create an AI-generated version of each abstract. The resulting 50 abstracts were randomly distributed to and evaluated by 8 blinded peer reviewers with at least 5 years of experience. Reviewers were tasked with distinguishing between original and AI-generated text. A Likert scale assessed reviewer confidence for each interpretation, and the primary reason guiding assessment of generated text was collected. AI output detector (0%–100%) and plagiarism (0%–100%) scores were evaluated using GPTZero.

Results: Reviewers correctly identified 62% of AI-generated abstracts and misclassified 38% of original abstracts as being AI generated. GPTZero reported a significantly higher probability of AI output among generated abstracts (median, 56%; interquartile range [IQR], 51%–77%) compared with original abstracts (median, 10%; IQR, 4%–37%; P < .01). Generated abstracts scored significantly lower on the plagiarism detector (median, 7%; IQR, 5%–14%) relative to original abstracts (median, 82%; IQR, 72%–92%; P < .01). Correct identification of AI-generated abstracts was predominantly attributed to the presence of unrealistic data/values. The primary reason for misidentifying original abstracts as AI generated was writing style.

Conclusions: Experienced reviewers faced difficulties in distinguishing between human- and AI-generated research content within shoulder and elbow surgery. The presence of unrealistic data facilitated correct identification of AI abstracts, whereas misidentification of original abstracts was often ascribed to writing style.

Clinical Relevance: With rapidly increasing AI advancements, it is paramount that ethical standards of scientific reporting are upheld. It is therefore helpful to understand the ability of reviewers to identify AI-generated content.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,693 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,598 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,124 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,871 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Authors
Institutions
- Johnson University (US)
- Monmouth Medical Center (US)
- University of Utah (US)
- Oregon Research Institute (US)
- Cleveland Shoulder Institute (US)
- Peachtree Orthopaedic Clinic (US)
- Rush University Medical Center (US)
- Duke University (US)
- Duke University Hospital (US)
- Boca Raton Regional Hospital (US)
- Rothman Institute (US)
- Mayo Clinic (US)
- Mayo Clinic in Arizona (US)
- Mayo Clinic in Florida (US)
- University of California Davis Medical Center (US)
- University of California, Davis (US)