SimpleMem: Efficient Lifelong Memory for LLM Agents
Abstract
To support reliable long-term interaction in complex environments, LLM agents require memory systems that efficiently manage historical experiences. Existing approaches either retain full interaction histories via passive context extension, leading to substantial redundancy, or rely on iterative reasoning to filter noise, incurring high token costs. To address this challenge, we introduce SimpleMem, an efficient memory framework based on semantic lossless compression. We propose a three-stage pipeline designed to maximize information density and token utilization: (1) Semantic Structured Compression, which applies entropy-aware filtering to distill unstructured interactions into compact, multi-view indexed memory units; (2) Recursive Memory Consolidation, an asynchronous process that integrates related units into higher-level abstract representations to reduce redundancy; and (3) Adaptive Query-Aware Retrieval, which dynamically adjusts retrieval scope based on query complexity to construct precise context efficiently. Experiments on benchmark datasets show that our method consistently outperforms baseline approaches in accuracy, retrieval efficiency, and inference cost, achieving an average F1 improvement of 26.4% while reducing inference-time token consumption by up to 30-fold, demonstrating a superior balance between performance and efficiency. Code is available at https://github.com/aiming-lab/SimpleMem.
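To make the three stages concrete, below is a minimal, illustrative Python sketch of the pipeline described in the abstract. It is not the authors' implementation: the class and function names (`SimpleMemSketch`, `MemoryUnit`, `write`, `consolidate`, `retrieve`), the lexical-diversity stand-in for entropy-aware filtering, and the query-length proxy for query complexity are all hypothetical placeholders; the actual code is in the linked repository.

```python
# Illustrative sketch only. All names and heuristics here are hypothetical
# stand-ins for the mechanisms described in the abstract, not SimpleMem's code.
from dataclasses import dataclass


@dataclass
class MemoryUnit:
    """A compact memory unit indexed under multiple views (hypothetical schema)."""
    summary: str            # distilled content of an interaction span
    keywords: list[str]     # lexical view for keyword matching
    embedding: list[float]  # semantic view for vector search
    level: int = 0          # 0 = raw unit, >0 = consolidated abstraction


class SimpleMemSketch:
    def __init__(self, llm, embed, entropy_threshold: float = 0.5):
        self.llm = llm                  # any callable: prompt -> text
        self.embed = embed              # any callable: text -> vector
        self.entropy_threshold = entropy_threshold
        self.units: list[MemoryUnit] = []

    # (1) Semantic structured compression: filter low-information turns,
    # then distill the rest into a multi-view indexed memory unit.
    def write(self, interaction: str) -> None:
        if self._information_score(interaction) < self.entropy_threshold:
            return  # drop low-information content instead of storing it
        summary = self.llm(f"Summarize the salient facts:\n{interaction}")
        keywords = self.llm(f"List key entities, comma-separated:\n{summary}")
        self.units.append(MemoryUnit(summary,
                                     [k.strip() for k in keywords.split(",")],
                                     self.embed(summary)))

    # (2) Recursive consolidation: merge related units into a higher-level
    # abstract unit (run asynchronously in the paper's framework).
    def consolidate(self, cluster: list[MemoryUnit]) -> MemoryUnit:
        merged = self.llm("Merge into one abstract memory:\n"
                          + "\n".join(u.summary for u in cluster))
        unit = MemoryUnit(merged,
                          sorted({k for u in cluster for k in u.keywords}),
                          self.embed(merged),
                          level=max(u.level for u in cluster) + 1)
        for u in cluster:
            self.units.remove(u)
        self.units.append(unit)
        return unit

    # (3) Adaptive query-aware retrieval: widen the retrieval scope only for
    # complex queries, keeping the constructed context small for simple ones.
    def retrieve(self, query: str) -> str:
        k = 2 if len(query.split()) < 12 else 8  # crude complexity proxy
        q = self.embed(query)
        ranked = sorted(self.units, key=lambda u: -self._cosine(q, u.embedding))
        return "\n".join(u.summary for u in ranked[:k])

    def _information_score(self, text: str) -> float:
        # Placeholder for entropy-aware filtering: lexical diversity of the turn.
        words = text.lower().split()
        return len(set(words)) / max(len(words), 1)

    @staticmethod
    def _cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb or 1.0)
```

The sketch only fixes the interfaces: a write path that compresses before storing, an offline consolidation step that collapses redundant units, and a retrieval step whose budget depends on the query. Refer to the repository for the actual entropy-aware filter, indexing scheme, and retrieval policy.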
Community
We introduce SimpleMem, an efficient memory framework based on semantic lossless compression. We propose a three-stage pipeline designed to maximize information density and token utilization.
📄 Paper: https://arxiv.org/abs/2601.02553
🔗 Code: https://github.com/aiming-lab/SimpleMem
📦 Website: https://aiming-lab.github.io/SimpleMem-Page/
Automated message from Librarian Bot: the following similar papers were recommended by the Semantic Scholar API.
- EverMemOS: A Self-Organizing Memory Operating System for Structured Long-Horizon Reasoning (2026)
- ENGRAM: Effective, Lightweight Memory Orchestration for Conversational Agents (2025)
- A Simple Yet Strong Baseline for Long-Term Conversational Memory of LLM Agents (2025)
- O-Mem: Omni Memory System for Personalized, Long Horizon, Self-Evolving Agents (2025)
- MemR$^3$: Memory Retrieval via Reflective Reasoning for LLM Agents (2025)
- Hindsight is 20/20: Building Agent Memory that Retains, Recalls, and Reflects (2025)
- Rhea: Role-aware Heuristic Episodic Attention for Conversational LLMs (2025)