This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Close to Refuge: Integrating AI and Human Insights for Intervention and Prevention: A Conversation With Seema Iyer
Citations: 1
Authors: 3
Year: 2023
Abstract
Harvard Data Science Review's Founding Editor-in-Chief, Xiao-Li Meng, and Media Feature Editor, Liberty Vittert, engage in a conversation with Dr. Seema Iyer, the Senior Director of the Hive at USA for UNHCR, the UN Refugee Agency. The Hive is the innovation lab responsible for bringing data science, machine learning, and new technologies into the organization's operations to address the needs of refugees around the world.

The conversation between Dr. Iyer and HDSR revolves around the global refugee crisis and the pivotal role of data science in addressing it. Dr. Iyer delves into the types of data gathered to understand the needs of refugees, the challenges in utilizing these data, and the potential role of AI in facilitating new approaches. She provides specific examples of the use of AI for pro bono legal work, for speedier processing of refugee statuses, and for communication that raises awareness about the refugee crisis. Dr. Iyer reflects on the inaugural #Innovate4Refugees convening hosted by The Hive in September, which created a space to share insights about the complexities of data collection in the refugee space, emphasizing the need for a broad view of what constitutes data and the importance of creative approaches to making sense of the information gathered. The discussion also addresses the challenges of misinformation and disinformation in the digital age, with a focus on the amplification of misinformation through social media and the efforts to create safer platforms for refugees. The interview concludes with reflections on the role of AI in communication, education, and legal matters related to refugees, pointing toward the potential of generative AI to transform how information is disseminated and understood.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,336 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,207 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,607 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,476 citations