Model Stock: All we need is just a few fine-tuned models (arXiv:2403.19522)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Model Stock merge method, with huihui-ai/Llama-3.3-70B-Instruct-abliterated as the base.
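Model Stock interpolates between the average of the fine-tuned weights and the base weights, choosing the interpolation ratio per layer from the angle between the fine-tuned models' task vectors. The sketch below illustrates that per-layer rule in plain PyTorch; it assumes at least two fine-tuned checkpoints, the tensor and function names are illustrative, and it is not mergekit's actual implementation.

```python
# Minimal sketch of the Model Stock rule for one weight tensor (illustrative only).
import torch

def model_stock_layer(base_w: torch.Tensor, tuned_ws: list[torch.Tensor]) -> torch.Tensor:
    """Merge one layer's weights from N fine-tuned checkpoints and a base model."""
    n = len(tuned_ws)
    # Task vectors: how each fine-tuned model deviates from the base.
    deltas = torch.stack([(w - base_w).flatten() for w in tuned_ws])

    # Estimate cos(theta) as the average pairwise cosine similarity of the task vectors.
    sims = []
    for i in range(n):
        for j in range(i + 1, n):
            sims.append(torch.nn.functional.cosine_similarity(deltas[i], deltas[j], dim=0))
    cos_theta = torch.stack(sims).mean()

    # Interpolation ratio from the paper: t = N*cos(theta) / (1 + (N-1)*cos(theta)).
    t = n * cos_theta / (1 + (n - 1) * cos_theta)

    # Move from the base toward the average of the fine-tuned weights by t.
    avg_w = torch.stack(tuned_ws).mean(dim=0)
    return t * avg_w + (1 - t) * base_w
```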
The following models were included in the merge:
- D:\mergekit\LORAs\applied\SUN_NIN_check1_r256_v2
- D:\mergekit\LORAs\applied\SUN_NIN_check2_r256_v2
- schonsense/70B_8k_chunks_r256
- D:\mergekit\LORAs\applied\SUN_NIN_MOON_FIN_r256
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: "D:\\mergekit\\LORAs\\applied\\SUN_NIN_check1_r256_v2"
  - model: "D:\\mergekit\\LORAs\\applied\\SUN_NIN_check2_r256_v2"
  - model: schonsense/70B_8k_chunks_r256
  - model: "D:\\mergekit\\LORAs\\applied\\SUN_NIN_MOON_FIN_r256"
base_model: huihui-ai/Llama-3.3-70B-Instruct-abliterated
merge_method: model_stock
dtype: float32
out_dtype: bfloat16
chat_template: llama3
tokenizer:
  source: union
```
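To reproduce the merge, this configuration can be saved to a file and passed to mergekit, either through the `mergekit-yaml` CLI (e.g. `mergekit-yaml config.yml ./merged-model`) or programmatically. The sketch below is based on mergekit's documented Python interface; the file paths are illustrative and the exact option names may differ between mergekit versions.

```python
# Hedged sketch of running the merge via mergekit's Python API (version-dependent).
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "model_stock_config.yml"   # illustrative path to the YAML above
OUTPUT_PATH = "./merged-model"          # where the merged weights will be written

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # write the union tokenizer alongside the weights
        lazy_unpickle=True,              # stream tensors to reduce peak RAM
    ),
)
```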