# Model Card for SWAP_LLM_v2
SWAP_LLM_v2 is a suite of supervised fine-tuned models for multi-step reasoning with large language models (LLMs). The SWAP framework comprises two primary components: a generator, which proposes reasoning steps, and a discriminator, which evaluates them.
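As a rough illustration of how such a pair can interact, the sketch below greedily extends a reasoning trace, letting a discriminator choose among generator proposals at each step. `propose_steps`, `score_step`, and the `ANSWER:` stop convention are hypothetical placeholders, not SWAP's actual interface; see the paper and repository for the real method.

```python
# Hedged sketch of a generate-then-verify loop. propose_steps and
# score_step are hypothetical stand-ins for the generator and
# discriminator models, not SWAP's actual API.
from typing import Callable, List

def reason(
    question: str,
    propose_steps: Callable[[str, List[str]], List[str]],  # generator
    score_step: Callable[[str, List[str], str], float],    # discriminator
    max_depth: int = 8,
) -> List[str]:
    """Greedily extend a reasoning trace, keeping the candidate step
    the discriminator scores highest at each depth."""
    trace: List[str] = []
    for _ in range(max_depth):
        candidates = propose_steps(question, trace)
        if not candidates:
            break
        best = max(candidates, key=lambda s: score_step(question, trace, s))
        trace.append(best)
        if best.startswith("ANSWER:"):  # assumed stop convention
            break
    return trace
```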
## Model Details

### Generator

- **Base model:** meta-llama/Meta-Llama-3-8B-Instruct
- **LoRA configuration:**
  - `lora_alpha`: 32
  - `r`: 16
  - `target_modules`: `["up_proj", "down_proj", "gate_proj", "q_proj", "k_proj", "v_proj", "o_proj"]`
  - `bias`: `"none"`
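Since the configuration above describes a LoRA adapter over the Llama 3 base model, one plausible way to load the generator is with `transformers` and `peft`, as sketched below. This is a minimal sketch, not the official loading code: it assumes the adapter sits at the root of the `sxiong/SWAP_LLM_v2` repo (it may live in a subfolder) and that you have access to the gated Llama 3 base model.

```python
# Minimal loading sketch (assumptions noted above), not SWAP's official code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the SWAP LoRA adapter on top of the frozen base weights.
# Assumes the adapter is at the repo root; check the repo for the layout.
model = PeftModel.from_pretrained(base_model, "sxiong/SWAP_LLM_v2")

prompt = "Solve step by step: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```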
For additional information and implementation details, please refer to the SWAP GitHub repository.
## Citation

```bibtex
@inproceedings{xiong-etal-2025-deliberate,
    title = "Deliberate Reasoning in Language Models as Structure-Aware Planning with an Accurate World Model",
    author = "Xiong, Siheng and
      Payani, Ali and
      Yang, Yuan and
      Fekri, Faramarz",
    editor = "Che, Wanxiang and
      Nabende, Joyce and
      Shutova, Ekaterina and
      Pilehvar, Mohammad Taher",
    booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.acl-long.1540/",
    doi = "10.18653/v1/2025.acl-long.1540",
    pages = "31900--31931",
    ISBN = "979-8-89176-251-0"
}
```