arxiv:2510.22733

E^2Rank: Your Text Embedding can Also be an Effective and Efficient Listwise Reranker

Published on Oct 26
· Submitted by Qi Liu on Oct 28

Abstract

A unified framework extends a single text embedding model to perform both retrieval and listwise reranking, achieving state-of-the-art results with low latency.

AI-generated summary

Text embedding models serve as a fundamental component in real-world search applications. By mapping queries and documents into a shared embedding space, they deliver competitive retrieval performance with high efficiency. However, their ranking fidelity remains limited compared to dedicated rerankers, especially recent LLM-based listwise rerankers, which capture fine-grained query-document and document-document interactions. In this paper, we propose E^2Rank, a simple yet effective unified framework (short for Efficient Embedding-based Ranking, and also readable as Embedding-to-Rank), which extends a single text embedding model to perform both high-quality retrieval and listwise reranking through continued training under a listwise ranking objective, thereby achieving strong effectiveness with remarkable efficiency. Cosine similarity between query and document embeddings serves as the unified ranking function: for reranking, a listwise ranking prompt constructed from the original query and its candidate documents acts as an enhanced query enriched with signals from the top-K documents, akin to pseudo-relevance feedback (PRF) in traditional retrieval models. This design preserves the efficiency and representational quality of the base embedding model while significantly improving its reranking performance. Empirically, E^2Rank achieves state-of-the-art results on the BEIR reranking benchmark and demonstrates competitive performance on the reasoning-intensive BRIGHT benchmark, with very low reranking latency. We also show that the ranking training process improves embedding performance on the MTEB benchmark. Our findings indicate that a single embedding model can effectively unify retrieval and reranking, offering both computational efficiency and competitive ranking accuracy.
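
For readers who want the shape of the method, here is a minimal sketch of the two stages described above. The `embed` function and the prompt template are placeholders, not the paper's actual model interface or prompt format; the point is that retrieval and reranking share one encoder and one cosine-similarity scoring function.

```python
# Minimal sketch of the E^2Rank idea from the abstract (not the authors' code).
# `embed` stands in for whatever encoder API the released checkpoint exposes;
# the prompt template below is illustrative only.
import numpy as np

def embed(texts):
    """Hypothetical encoder: returns one L2-normalized vector per input text."""
    raise NotImplementedError("plug in the actual embedding model here")

def retrieve(query, corpus, k=10):
    # Stage 1: standard embedding retrieval with cosine similarity.
    q = embed([query])[0]
    d = embed(corpus)              # shape: (num_docs, dim)
    scores = d @ q                 # cosine similarity (vectors are normalized)
    top = np.argsort(-scores)[:k]
    return [corpus[i] for i in top]

def listwise_rerank(query, candidates):
    # Stage 2: build a listwise prompt from the query and its candidates,
    # embed it as an "enhanced query" (PRF-like), and rescore the same
    # candidates with the same cosine-similarity ranking function.
    prompt = f"Query: {query}\n" + "\n".join(
        f"Document {i + 1}: {doc}" for i, doc in enumerate(candidates)
    )
    enhanced_q = embed([prompt])[0]
    d = embed(candidates)
    scores = d @ enhanced_q
    order = np.argsort(-scores)
    return [candidates[i] for i in order]
```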

Community

Paper submitter

📊 Highlights:

  • SOTA on BEIR reranking benchmark
  • Competitive results on BRIGHT reasoning-intensive ranking
  • Faster than RankGPT-style listwise rerankers
  • Embedding quality remains strong on MTEB

One unified model. One scoring function. Retrieval and ranking, together at last.

🔗 Project Website: https://alibaba-nlp.github.io/E2Rank

