This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
What Should a Robot Do? Comparing Human and Large Language Model Recommendations for Robot Deception
Citations: 3
Authors: 8
Year: 2024
Abstract
This study compares the ethical judgments of humans and Large Language Models (LLMs) regarding robotic deception across a range of scenarios. We surveyed human participants and queried LLMs with the same ethical dilemmas, set in both high-risk and low-risk contexts. The findings reveal alignment between humans and LLMs in high-risk scenarios, where both prioritize safety, but notable divergences in low-risk situations, highlighting the challenge for AI development of accurately capturing human social nuances and moral expectations.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,721 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,884 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,510 citations
Fairness through awareness
2012 · 3,302 citations
AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations
2018 · 3,200 citations