---
base_model:
- Sakalti/Saka-14B
- RDson/WomboCombo-R1-Coder-14B-Preview
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) (spherical linear interpolation) merge method, with [Sakalti/Saka-14B](https://huggingface.co/Sakalti/Saka-14B) as the base model. In mergekit's SLERP, the interpolation factor `t` returns the base model at `t = 0` and the other model at `t = 1`, so values below 0.5 lean toward the base. A minimal sketch of the interpolation itself appears in the "How SLERP Blends the Weights" section below.

### Models Merged

The following models were included in the merge:
* [Sakalti/Saka-14B](https://huggingface.co/Sakalti/Saka-14B)
* [RDson/WomboCombo-R1-Coder-14B-Preview](https://huggingface.co/RDson/WomboCombo-R1-Coder-14B-Preview)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: Sakalti/Saka-14B
merge_method: slerp
dtype: bfloat16
parameters:
  t:
    - filter: self_attn
      value: 0.4 # leans toward Sakalti/Saka-14B (the base) for self-attention
    - filter: mlp
      value: 0.7 # leans toward RDson/WomboCombo-R1-Coder-14B-Preview for MLP layers
    - value: 0.5 # even blend for all remaining tensors
models:
  - model: Sakalti/Saka-14B
  - model: RDson/WomboCombo-R1-Coder-14B-Preview
tokenizer_source: Sakalti/Saka-14B
chat_template: chatml
name: reasoning_blend
```
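To reproduce the merge, save the configuration above as `config.yaml` and run mergekit's CLI, e.g. `mergekit-yaml config.yaml ./reasoning_blend` (the output directory name is arbitrary).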
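### How SLERP Blends the Weights

For intuition, SLERP interpolates each pair of corresponding weight tensors along the arc between them, using the angle between their normalized, flattened copies, rather than along the straight line that plain weighted averaging follows. The sketch below is a minimal, self-contained illustration of that formula, not mergekit's internal code; the function name, the epsilon, and the near-colinear fallback threshold are illustrative choices.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    t = 0 returns `a` (the base model's tensor); t = 1 returns `b`.
    Illustrative sketch only, not mergekit's actual implementation.
    """
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    # Angle between the two tensors, measured on their normalized copies.
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(a_unit @ b_unit, -1.0, 1.0)
    omega = torch.acos(dot)
    if omega.abs() < 1e-4:
        # Nearly colinear tensors: plain linear interpolation is numerically safer.
        blended = (1 - t) * a_flat + t * b_flat
    else:
        sin_omega = torch.sin(omega)
        blended = (torch.sin((1 - t) * omega) / sin_omega) * a_flat \
                + (torch.sin(t * omega) / sin_omega) * b_flat
    return blended.reshape(a.shape).to(a.dtype)
```

With the configuration above, `t = 0.4` is applied to every self-attention tensor, `t = 0.7` to every MLP tensor, and `t = 0.5` everywhere else.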
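### Usage

Since the card sets `pipeline_tag: text-generation` and a ChatML chat template, the model should load with the standard `transformers` APIs. A minimal sketch, assuming the merged weights are published under a repo id (the placeholder below is hypothetical; substitute the real one):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/reasoning_blend"  # hypothetical repo id; replace with the actual one

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype
    device_map="auto",
)

# The merge sets `chat_template: chatml`, so apply_chat_template formats ChatML turns.
messages = [{"role": "user", "content": "Write a Python function that checks whether a number is prime."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```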