Terminal-Bench: Benchmarking Agents on Hard, Realistic Tasks in Command Line Interfaces
Abstract
Terminal-Bench 2.0 is a challenging benchmark of 89 terminal-based tasks for evaluating AI agents' capabilities in real-world scenarios.
AI agents may soon become capable of autonomously completing valuable, long-horizon tasks in diverse domains. Current benchmarks either do not measure real-world tasks or are not sufficiently difficult to meaningfully measure frontier models. To this end, we present Terminal-Bench 2.0: a carefully curated hard benchmark composed of 89 tasks in computer terminal environments inspired by problems from real workflows. Each task features a unique environment, human-written solution, and comprehensive tests for verification. We show that frontier models and agents score less than 65% on the benchmark and conduct an error analysis to identify areas for model and agent improvement. We publish the dataset and evaluation harness to assist developers and researchers in future work at https://www.tbench.ai/.