Corrected a typo in the 'TatonkaHF/bge-m3_en_ru' model name in the Initialization section. (#2)
- Corrected a typo in the 'TatonkaHF/bge-m3_en_ru' model name in the Initialization section. (77d003a43dea3e92beab2cac4bffa4c74adc8d57)
Co-authored-by: Sergey <shlepakov-s-arlight@users.noreply.huggingface.co>
README.md CHANGED
@@ -108,7 +108,7 @@ Also, you can use native [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding)
 We follow the [`USER-base`](https://huggingface.co/deepvk/USER-base) model training algorithm, with several changes as we use different backbone.
 
 
-**Initialization:** [`TatonkaHF/bge-
+**Initialization:** [`TatonkaHF/bge-m3_en_ru`](https://huggingface.co/TatonkaHF/bge-m3_en_ru) – shrinked version of [`baai/bge-m3`](https://huggingface.co/BAAI/bge-m3) to support only Russian and English tokens.
 
 
 **Fine-tuning:** Supervised fine-tuning two different models based on data symmetry and then merging via [`LM-Cocktail`](https://arxiv.org/abs/2311.13534):
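As a quick sanity check on the corrected name, the sketch below loads the initialization checkpoint. It assumes the checkpoint is loadable with `sentence-transformers`, which is typical for bge-m3-style encoders but is not stated in this commit; the README's surrounding text also points to the native FlagEmbedding library as an alternative.

```python
# Minimal sketch: verify the corrected model name resolves and encodes text.
# Assumption: the checkpoint loads via sentence-transformers (common for
# bge-m3-style encoders); this is not confirmed by the commit itself.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("TatonkaHF/bge-m3_en_ru")

# One English and one Russian sentence to exercise the bilingual vocabulary.
embeddings = model.encode(["Hello, world!", "Привет, мир!"])
print(embeddings.shape)  # (2, embedding_dim)
```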
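The fine-tuning line in the same hunk mentions merging two supervised models via LM-Cocktail. Below is a hedged sketch of such a merge using the `mix_models` helper from the LM_Cocktail package; the model paths and weights are hypothetical placeholders rather than the ones used for this model, and the exact arguments should be checked against the LM-Cocktail documentation.

```python
# Hedged sketch of merging two fine-tuned encoders with LM-Cocktail.
# Assumption: the LM_Cocktail package's mix_models helper is used;
# the paths and weights below are hypothetical placeholders.
from LM_Cocktail import mix_models

merged = mix_models(
    model_names_or_paths=["./finetuned-symmetric", "./finetuned-asymmetric"],
    model_type="encoder",          # merging embedding (encoder) models
    weights=[0.5, 0.5],            # equal contribution from each checkpoint
    output_path="./merged-model",  # where the merged weights are written
)
```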