OpenAlex · Updated hourly · Last updated: 02.04.2026, 11:39

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

On protecting the data privacy of Large Language Models (LLMs) and LLM agents: A literature review

2025 · 36 citations · High-Confidence Computing · Open Access
Open full text at the publisher

36 citations · 7 authors · Year: 2025

Abstract

Large Language Models (LLMs) are complex artificial intelligence systems that can understand, generate, and translate human languages. By analyzing large amounts of textual data, these models learn language patterns to perform tasks such as writing, conversation, and summarization. Agents built on LLMs (LLM agents) further extend these capabilities, allowing them to process user interactions and perform complex operations in diverse task environments. However, while processing and generating massive amounts of data, LLMs and LLM agents risk leaking sensitive information, potentially threatening data privacy. This paper aims to present the data privacy issues associated with LLMs and LLM agents to facilitate a comprehensive understanding. Specifically, we conduct an in-depth survey of privacy threats, encompassing passive privacy leakage and active privacy attacks. Subsequently, we introduce the privacy protection mechanisms employed by LLMs and LLM agents and provide a detailed analysis of their effectiveness. Finally, we explore the privacy protection challenges for LLMs and LLM agents and outline potential directions for future developments in this domain.


Topics

Privacy-Preserving Technologies in Data · Artificial Intelligence in Healthcare and Education · Topic Modeling