This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Acting Humanly: Identification and Analysis of Logical Reasoning Biases Exhibited by ChatGPT versus Undergraduate Students
Citations: 0 · Authors: 4 · Year: 2024
Abstract
Definitions of Artificial Intelligence (AI) include characterizing algorithms as those that think humanly, think rationally, act humanly, and act rationally. On the one hand, Logic, as a formal framework, allows for the creation of algorithms capable of thinking rationally by expressing real-world situations in a language that enables valid and rigorous reasoning. On the other hand, Large Language Models, such as ChatGPT, represent algorithms that act humanly, especially in tasks involving understanding and generating natural language text. However, these models can exhibit logical reasoning biases, i.e., tendencies that impair the ability to reason logically. This article aims to identify and analyze the logical reasoning biases exhibited by ChatGPT in comparison to those exhibited by Information Technology undergraduate students who are beginners in the Logic course.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,452 citations
Generative Adversarial Nets
2023 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,258 citations
"Why Should I Trust You?"
2016 · 14,307 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,136 citations