Update README.md
README.md
CHANGED
@@ -12,7 +12,7 @@ license: mit
 ## Munin 7b e5
 This model has 32 layers and the embedding size is 4096.

-This model utilizes the LoRA adapter layer introduced in the paper [Improving Text Embeddings with Large Language Models](https://arxiv.org/pdf/2401.00368.pdf) along with the [
+This model utilizes the LoRA adapter layer introduced in the paper [Improving Text Embeddings with Large Language Models](https://arxiv.org/pdf/2401.00368.pdf) along with the [Danish Munin model](https://huggingface.co/danish-foundation-models/munin-7b-alpha).


 ## Usage
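A minimal usage sketch for the README text above, assuming the LoRA adapter has been merged and published as a standard `transformers` checkpoint. The repo id `your-org/munin-7b-e5` is a placeholder, and the last-token pooling follows the recipe from the linked paper; this is not the model card's official snippet.

```python
# Hedged sketch: "your-org/munin-7b-e5" is a placeholder repo id, and we
# assume a merged checkpoint loadable with plain transformers (no peft step).
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "your-org/munin-7b-e5"  # placeholder: substitute the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    # Mistral-style tokenizers often ship without a pad token.
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModel.from_pretrained(model_id, torch_dtype=torch.float16)
model.eval()

def last_token_pool(last_hidden_states, attention_mask):
    # E5-Mistral-style pooling from the linked paper: embed each text as
    # the hidden state of its last non-padding token.
    left_padding = attention_mask[:, -1].sum() == attention_mask.shape[0]
    if left_padding:
        return last_hidden_states[:, -1]
    sequence_lengths = attention_mask.sum(dim=1) - 1
    batch_size = last_hidden_states.shape[0]
    return last_hidden_states[
        torch.arange(batch_size, device=last_hidden_states.device),
        sequence_lengths,
    ]

texts = ["Hvad er hovedstaden i Danmark?"]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    out = model(**batch)
embeddings = last_token_pool(out.last_hidden_state, batch["attention_mask"])
embeddings = F.normalize(embeddings, p=2, dim=1)  # one 4096-dim unit vector per text
```

Note that in the paper's setup, retrieval queries are prefixed with a short one-line task instruction before embedding; plain text, as above, is the simplest case.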