This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Public Perceptions of Algorithmic Bias and Fairness in Cloud-Based Decision Systems
Citations: 0
Authors: 3
Year: 2025
Abstract
Cloud-based machine learning systems are increasingly used in sectors such as healthcare, finance, and public services, where they influence decisions with significant social consequences. While these technologies offer scalability and efficiency, they also raise serious concerns regarding security, privacy, and compliance. One of the central issues is algorithmic bias, which can emerge from data, design choices, or system interactions, and which is often amplified when systems are deployed at scale through cloud infrastructures. This study examines the relationship between algorithmic bias, social equity, and cloud-based innovation. Drawing on a survey of public perceptions, we find strong recognition of the risks posed by biased systems, including diminished trust, harm to vulnerable populations, and erosion of fairness. Participants overwhelmingly supported regulatory oversight, developer accountability, and greater transparency in algorithmic decision-making. Building on these findings, this paper proposes measures to integrate fairness auditing, representative datasets, and bias mitigation techniques into cloud security and compliance frameworks. We argue that addressing bias is not only an ethical responsibility but also an essential requirement for safeguarding public trust and meeting evolving legal and regulatory standards.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,612 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,876 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,431 citations
Fairness through awareness
2012 · 3,292 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,184 citations