This is an overview page with metadata for this scientific work. The full article is available from the publisher.
ChatGPT’s Potential for Quantitative Content Analysis: Categorizing Actors in Public Debates
Citations: 0 · Authors: 4 · Year: 2024
Abstract
We assess ChatGPT's ability to identify and categorize actors in news media articles into different societal groups. We conducted three experiments to evaluate different models and prompting strategies. In experiment 1, testing gpt-3.5-turbo, we found that using the original codebooks created for manual content analysis is insufficient. However, combining named entity recognition with an optimized prompt (NERC pipeline) yielded an acceptable macro-averaged F1-score of .79. Experiment 2 compared gpt-3.5-turbo, gpt-4o, and gpt-4-turbo: the latter achieved the highest macro-averaged F1-score of .82 using the NERC pipeline. Challenges remained in classifying nuanced actor categories. Experiment 3 demonstrated high retest reliability for different gpt-4o releases.
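The abstract reports macro-averaged F1-scores (.79 and .82) as the main evaluation metric. As a reminder of what that metric measures, here is a minimal sketch of macro-averaged F1 in plain Python; the actor labels used below are hypothetical illustrations, not the paper's actual category scheme:

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: the unweighted mean of per-class F1 scores,
    so small actor categories count as much as large ones."""
    labels = sorted(set(y_true) | set(y_pred))
    per_class_f1 = []
    for label in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        per_class_f1.append(f1)
    return sum(per_class_f1) / len(per_class_f1)

# Hypothetical gold labels vs. model predictions for four actors:
gold = ["politician", "citizen", "politician", "expert"]
pred = ["politician", "citizen", "expert", "expert"]
score = macro_f1(gold, pred)  # ≈ 0.78
```

Because every class contributes equally, a model that does well on frequent actor types but poorly on rare, nuanced categories (the difficulty noted in the abstract) is penalized more than under a micro-averaged score.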
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,479 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,364 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,814 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,543 citations