Add link to technical report
README.md CHANGED
@@ -5854,11 +5854,7 @@ license: mit
 
 ## E5-mistral-7b-instruct
 
-
-
-Some highlights for preview:
-* This model is only fine-tuned for less than 1000 steps, no contrastive pre-training is used.
-* For a large part of training data, the query, positive documents and negative documents are all generated by LLMs.
+[Improving Text Embeddings with Large Language Models](https://arxiv.org/pdf/2401.00368.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024
 
 This model has 32 layers and the embedding size is 4096.
 
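The unchanged context line notes that the model has 32 layers and an embedding size of 4096: a single 4096-dimensional vector per input is obtained by pooling the decoder's final hidden states at each sequence's last real token, as in the model card's usage snippet. Below is a minimal, self-contained sketch of that last-token pooling on dummy arrays (shapes, names, and the right-padding assumption are illustrative; the real pipeline feeds tokenizer output through the 7B model):

```python
import numpy as np

def last_token_pool(hidden_states: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Select the hidden state of each sequence's last non-padding token.

    hidden_states: (batch, seq_len, dim) final-layer activations
    attention_mask: (batch, seq_len), 1 for real tokens, 0 for (right-side) padding
    """
    # Index of the last real token in each row.
    last_idx = attention_mask.sum(axis=1) - 1
    return hidden_states[np.arange(hidden_states.shape[0]), last_idx]

# Toy batch: 2 sequences of length 4, embedding size 4096 as stated in the README.
rng = np.random.default_rng(0)
h = rng.standard_normal((2, 4, 4096))
mask = np.array([[1, 1, 1, 0],   # 3 real tokens -> pool position 2
                 [1, 1, 1, 1]])  # 4 real tokens -> pool position 3

emb = last_token_pool(h, mask)
print(emb.shape)  # (2, 4096): one 4096-dim embedding per input
```

Each row of `emb` would then typically be L2-normalized before computing cosine similarities between queries and documents.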