Update README.md
README.md CHANGED
@@ -121,7 +121,10 @@ model-index:
 
 An initial foray into the world of fine-tuning. The goal of this release was to amplify the quality of the original model's responses, in particular for vision use cases*
 
-<b>
+<b>Weighted (Importance Matrix) Quants available [here](https://huggingface.co/InferenceIllusionist/Excalibur-7b-DPO-iMat-GGUF)</b>
+
+<b>Static (Legacy) quants available [here](https://huggingface.co/InferenceIllusionist/Excalibur-7b-DPO-GGUF)</b>
+
 
 ## Notes & Methodology
 * [Excalibur-7b](https://huggingface.co/InferenceIllusionist/Excalibur-7b) fine-tuned with Direct Preference Optimization (DPO) using Intel/orca_dpo_pairs
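The methodology note above names Direct Preference Optimization. The author's actual training code is not shown here; as a rough, self-contained illustration only, the DPO objective for a single preference pair (with hypothetical function and argument names) can be sketched as:

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """Illustrative DPO loss for one (chosen, rejected) preference pair.

    Inputs are summed token log-probabilities of each response under the
    policy being trained and under the frozen reference model.
    loss = -log(sigmoid(beta * ((pi_c - ref_c) - (pi_r - ref_r))))
    """
    # Implicit "rewards": how much the policy has moved away from the
    # reference on each response, scaled by beta.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # Logistic loss on the reward margin: lower when the policy prefers
    # the chosen response more strongly than the rejected one.
    logits = chosen_reward - rejected_reward
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# With identical policy and reference log-probs the margin is zero,
# so the loss is ln(2):
print(dpo_loss(0.0, 0.0, 0.0, 0.0))  # ≈ 0.6931
```

In practice a dataset such as Intel/orca_dpo_pairs supplies the prompt/chosen/rejected triples, and a trainer minimizes this loss averaged over batches; the `beta` value here is a common default, not a detail taken from this model card.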