This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Data Security and Privacy in GPT Models: Techniques and Challenges
0
Citations
2
Authors
2026
Year
Abstract
The rapid advancement of Generative Pre-trained Transformer (GPT) models has led to their widespread adoption across applied domains such as healthcare, finance, education, and enterprise software engineering. However, the large-scale data requirements and generative capabilities of these models introduce significant challenges related to data security, privacy preservation, and regulatory compliance. This paper presents a systematic literature review conducted in accordance with the PRISMA 2020 guidelines, analyzing 60 peer-reviewed empirical studies published between 2020 and 2025 in Q1 and Q2 journals indexed in the Web of Science Core Collection. The review examines the evolution of GPT architectures and evaluates state-of-the-art security and privacy techniques, including encryption, differential privacy, federated learning, data anonymization, model distillation, and secure deployment mechanisms. Key challenges identified include unintended memorization of sensitive data, adversarial prompt-based attacks, and performance degradation resulting from privacy-preserving constraints, with reported accuracy reductions ranging from 5% to 20% depending on the applied technique. Additionally, the analysis highlights increased computational overhead, in some cases exceeding 30–40% training or inference cost when advanced cryptographic methods are employed. Regulatory and ethical implications are assessed in relation to frameworks such as GDPR, CCPA, HIPAA, and the proposed EU Artificial Intelligence Act. The findings emphasize the need for privacy-by-design approaches and scalable governance strategies to support secure and trustworthy deployment of GPT models in applied real-world environments.
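The abstract lists differential privacy among the surveyed privacy-preserving techniques and notes the accuracy cost it can impose. As a minimal, illustrative sketch only (the function names and parameters are assumptions, not taken from the paper), the classic Laplace mechanism releases a numeric statistic with epsilon-differential privacy by adding calibrated noise:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via the inverse-CDF method."""
    u = random.random() - 0.5  # uniform on (-0.5, 0.5)
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def dp_count(true_count: float, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a counting query with epsilon-differential privacy.

    The noise scale is sensitivity / epsilon: smaller epsilon means
    stronger privacy but noisier (less accurate) output, mirroring the
    privacy-utility trade-off the review reports.
    """
    return true_count + laplace_noise(sensitivity / epsilon)
```

A smaller `epsilon` widens the noise distribution, which is one concrete source of the utility degradation the review attributes to privacy-preserving constraints.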
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,336 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,207 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,607 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,476 citations