This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Proteção de dados pessoais e Chatgpt
Citations: 0
Authors: 3
Year: 2023
Abstract
The article deals with the use of personal data by ChatGPT. It aims to ascertain whether the right to data protection is respected when the language model is used. It argues that the platform, still in a testing phase, may exhibit several biases indicating that it was launched to the public hastily, without accounting for the community's immaturity and unpreparedness to deal with such an innovative tool. The research employed the hypothetical-deductive method, drawing on books, scientific articles, the Brazilian legal system, and the virtual assistant ChatGPT itself. The results indicate that ChatGPT does not clearly demonstrate compliance with data protection legislation, so the right to personal data protection remains vulnerable as long as the software is fully operational and open to the public.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,553 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,444 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,943 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,792 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations