OpenAlex · Updated hourly · Last updated: 2026-03-31, 09:13

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Securing local LLMs for academic research: a human-system integration analysis and evolution of TAUCHI-GPT

2025 · 0 citations · Human-Intelligent Systems Integration · Open Access
Open full text at the publisher

Citations: 0 · Authors: 6 · Year: 2025

Abstract

The application of Large Language Models (LLMs) in academic research faces unique challenges of privacy and workflow integration. This paper introduces TAUCHI-GPT, a novel, open-source AI assistant whose evolution informs our analysis. We detail its two versions: a cloud-based V1 using GPT-4 and reflection cycles, and a local, privacy-preserving V2 with a RAG architecture. Based on empirical findings from two user studies, we present a critical Human-System Integration (HSI) analysis of the security vulnerabilities and alignment challenges inherent in local LLM deployments. We examine how recent development trends, such as model distillation and reward-model learning, and the complexities of internal model mechanisms exacerbate risks like prompt injection, RAG data failures, and unfaithful explanations that undermine user trust. Drawing on HCI principles and mechanistic interpretability insights, we propose and discuss a multi-layered mitigation strategy. This work contributes to HSI and AI research by presenting an evaluated system, a rigorous analysis of local deployment risks from a sociotechnical perspective, and actionable, stakeholder-specific guidelines for the secure and responsible use of LLMs in academia.

Related works