Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
## Citation

If you find our paper or models helpful, please consider citing them as follows:

```bibtex
@article{wang2023improving,
  title={Improving Text Embeddings with Large Language Models},
  author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
  journal={arXiv preprint arXiv:2401.00368},
  year={2023}
}

@article{wang2022text,
  title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
  author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
  journal={arXiv preprint arXiv:2212.03533},
  year={2022}
}
```

## Limitations

Using this model for inputs longer than 4096 tokens is not recommended.
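One way to respect this limit is to clip inputs before encoding. The sketch below is illustrative only: the `truncate_ids` helper is not part of any library, and the commented tokenizer call assumes the standard Hugging Face `transformers` truncation options rather than anything specific to this model.

```python
# Guard inputs against the 4096-token limit before encoding.
MAX_TOKENS = 4096

def truncate_ids(token_ids, max_tokens=MAX_TOKENS):
    """Keep at most max_tokens token ids; longer inputs are not recommended."""
    return token_ids[:max_tokens]

# With Hugging Face transformers, the same effect is usually obtained by
# passing truncation options directly to the tokenizer, e.g.:
#   batch = tokenizer(texts, max_length=4096, truncation=True,
#                     padding=True, return_tensors="pt")

if __name__ == "__main__":
    ids = list(range(5000))       # stand-in for a too-long token-id sequence
    print(len(truncate_ids(ids)))  # clipped to 4096
```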