Hybrid Reinforcement: When Reward Is Sparse, It's Better to Be Dense • arXiv:2510.07242 • Published Oct 2025 • 30 upvotes
Clean First, Align Later: Benchmarking Preference Data Cleaning for Reliable LLM Alignment • arXiv:2509.23564 • Published Sep 28, 2025 • 7 upvotes
LUMINA: Detecting Hallucinations in RAG System with Context-Knowledge Signals • arXiv:2509.21875 • Published Sep 26, 2025 • 9 upvotes
Understanding Language Prior of LVLMs by Contrasting Chain-of-Embedding • arXiv:2509.23050 • Published Sep 27, 2025 • 13 upvotes