This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Towards Self-Evolving Agents: A Dual-Process Framework for Continual Context Refinement
Citations: 0 · Authors: 5 · Year: 2026
Abstract
Large Language Models (LLMs) have become essential for interactive AI systems, yet they remain fundamentally static after deployment: they cannot update their parameters from interaction feedback and often repeat the same mistakes across long interaction streams. We propose Dual-Process Agent (DPA), a framework for continual context refinement that enables learning without modifying a frozen model backbone. Inspired by dual-process theory from cognitive science, DPA decomposes each interaction episode into two complementary processes: a fast System 1 that retrieves compact, relevant context from an explicit long-term memory and generates responses, and a slow System 2 that reflects on outcomes and writes curated updates back into memory. To prevent memory degradation over extended interactions, DPA maintains bulletized memory entries with utility statistics and employs a conservative curator gate that filters generic, redundant, or conflicting insertions. Experiments on six diverse benchmarks demonstrate that DPA consistently outperforms vanilla prompting and competitive baselines on both GPT-5.1 and Llama-3.1-8B backbones, achieving the best overall performance across multiple reasoning and knowledge-intensive tasks.
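The loop described in the abstract can be sketched in a few lines. The class and method names below (`DualProcessAgent`, `system1_retrieve`, `system2_reflect`) and the word-overlap relevance heuristic are illustrative assumptions, not the paper's actual API; they only show the shape of the fast retrieve-and-respond path, the slow reflect-and-write path, and a conservative curator gate over bulletized memory entries with utility statistics.

```python
from dataclasses import dataclass


@dataclass
class MemoryEntry:
    """One bulletized memory entry with simple utility statistics."""
    text: str
    uses: int = 0  # how often System 1 retrieved this entry


def _overlap(a: str, b: str) -> float:
    # Toy lexical similarity; a real system would use embeddings.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / (len(wa | wb) or 1)


class DualProcessAgent:
    """Illustrative sketch of the DPA loop (names are assumptions)."""

    def __init__(self, top_k: int = 3, redundancy_threshold: float = 0.6):
        self.memory: list[MemoryEntry] = []
        self.top_k = top_k
        self.redundancy_threshold = redundancy_threshold

    def system1_retrieve(self, task: str) -> list[str]:
        # Fast process: rank memory by relevance, return compact top-k context.
        ranked = sorted(self.memory, key=lambda m: _overlap(m.text, task),
                        reverse=True)
        picked = ranked[: self.top_k]
        for m in picked:
            m.uses += 1  # update utility statistics
        return [m.text for m in picked]

    def system2_reflect(self, insight: str, success: bool) -> bool:
        # Slow process: propose a memory write, guarded by the curator gate.
        if not success or len(insight.split()) < 4:
            return False  # reject failed episodes and generic one-liners
        for m in self.memory:
            if _overlap(m.text, insight) > self.redundancy_threshold:
                return False  # reject near-duplicate (redundant) insertions
        self.memory.append(MemoryEntry(insight))
        return True
```

The curator gate here is deliberately conservative: an insight is written back only when the episode succeeded, the note carries enough content, and no existing bullet is near-redundant, mirroring the abstract's goal of preventing memory degradation over long interaction streams.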
Similar Works
MizAR 60 for Mizar 50
2023 · 75,248 cit.
ImageNet: A large-scale hierarchical image database
2009 · 61,031 cit.
Microsoft COCO: Common Objects in Context
2014 · 41,562 cit.
Fully convolutional networks for semantic segmentation
2015 · 36,585 cit.
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,800 cit.