OpenNovelty: An LLM-powered Agentic System for Verifiable Scholarly Novelty Assessment
Abstract
An LLM-powered agentic system for transparent, evidence-based novelty assessment in peer review that retrieves and analyzes prior work through semantic search and hierarchical taxonomy construction.
Evaluating novelty is critical yet challenging in peer review, as reviewers must assess submissions against a vast, rapidly evolving literature. This report presents OpenNovelty, an LLM-powered agentic system for transparent, evidence-based novelty analysis. The system operates through four phases: (1) extracting the core task and contribution claims to generate retrieval queries; (2) retrieving relevant prior work with the extracted queries via a semantic search engine; (3) constructing a hierarchical taxonomy of work related to the core task and performing contribution-level, full-text comparisons for each claimed contribution; and (4) synthesizing all analyses into a structured novelty report with explicit citations and evidence snippets. Unlike naive LLM-based approaches, OpenNovelty grounds all assessments in retrieved real papers, ensuring verifiable judgments. We deploy the system on 500+ ICLR 2026 submissions, with all reports publicly available on our website, and preliminary analysis suggests it can identify relevant prior work, including closely related papers that authors may overlook. OpenNovelty aims to empower the research community with a scalable tool that promotes fair, consistent, and evidence-backed peer review.
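The four-phase pipeline described above can be pictured as a simple orchestration loop. The sketch below is illustrative only and is not drawn from the OpenNovelty codebase: the function names (`extract_claims`, `semantic_search`, `build_taxonomy`, `compare_contribution`) and data structures are hypothetical stand-ins for the LLM calls and the semantic search engine the abstract refers to.

```python
"""Illustrative sketch of a four-phase novelty-analysis pipeline.

All names here are hypothetical placeholders; a real system would wrap
LLM calls and a semantic search engine behind each phase.
"""
from dataclasses import dataclass, field


@dataclass
class Claim:
    text: str                                          # one contribution claim from the submission
    queries: list[str] = field(default_factory=list)   # retrieval queries derived from the claim


@dataclass
class NoveltyReport:
    core_task: str
    taxonomy: dict          # hierarchical grouping of core-task-related work
    comparisons: list[dict]  # per-claim comparisons against retrieved papers


def extract_claims(paper_text: str) -> tuple[str, list[Claim]]:
    """Phase 1 (stub): an LLM extracts the core task and contribution claims."""
    core_task = "placeholder core task"
    return core_task, [Claim(text="placeholder claim", queries=["placeholder query"])]


def semantic_search(query: str, limit: int = 20) -> list[dict]:
    """Phase 2 (stub): query a semantic search engine for relevant prior work."""
    return [{"title": "placeholder prior paper", "full_text": "..."}]


def build_taxonomy(papers: list[dict]) -> dict:
    """Phase 3a (stub): an LLM organizes retrieved work into a hierarchy."""
    return {"root": [p["title"] for p in papers]}


def compare_contribution(claim: Claim, papers: list[dict]) -> dict:
    """Phase 3b (stub): full-text comparison of one claim against each paper."""
    return {"claim": claim.text, "evidence": [p["title"] for p in papers]}


def run_pipeline(paper_text: str) -> NoveltyReport:
    """Phase 4: synthesize all analyses into a structured, citable report."""
    core_task, claims = extract_claims(paper_text)
    retrieved = [p for c in claims for q in c.queries for p in semantic_search(q)]
    taxonomy = build_taxonomy(retrieved)
    comparisons = [compare_contribution(c, retrieved) for c in claims]
    return NoveltyReport(core_task=core_task, taxonomy=taxonomy, comparisons=comparisons)


if __name__ == "__main__":
    print(run_pipeline("full text of a submission"))
```

In the actual system, each stub would be an LLM- or API-backed step, and the final report would carry explicit citations and evidence snippets rather than paper titles alone.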
Community
Papers similar to this one, recommended by the Semantic Scholar API:
- ARISE: Agentic Rubric-Guided Iterative Survey Engine for Automated Scholarly Paper Generation (2025)
- SemanticCite: Citation Verification with AI-Powered Full-Text Analysis and Evidence-Based Reasoning (2025)
- WisPaper: Your AI Scholar Search Engine (2025)
- AI-Augmented Bibliometric Framework: A Paradigm Shift with Agentic AI for Dynamic, Snippet-Based Research Analysis (2025)
- OmniScientist: Toward a Co-evolving Ecosystem of Human and AI Scientists (2025)
- AstroReview: An LLM-driven Multi-Agent Framework for Telescope Proposal Peer Review and Refinement (2025)
- Resolving Evidence Sparsity: Agentic Context Engineering for Long-Document Understanding (2025)