Model Card for Tool-Light

This is the official checkpoint trained with the Tool-Light framework, based on Qwen2.5-7B-Instruct. For details, please refer to https://github.com/asilverlight/Tool-Light
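
A minimal loading sketch with Hugging Face transformers is shown below. The repository id matches this card, but the dtype, device placement, and generation settings are illustrative assumptions, not the authors' recommended configuration.

```python
# Minimal sketch: load the checkpoint with Hugging Face transformers.
# The repo id matches this card; sampling settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zhangboguodong/Tool-Light-Qwen2.5-7B-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "In what year was the Eiffel Tower completed?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```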

Paper title and link

The model was presented in the paper Toward Effective Tool-Integrated Reasoning via Self-Evolved Preference Learning, available at https://arxiv.org/pdf/2509.23285

Paper abstract

Tool-Integrated Reasoning (TIR) enables large language models (LLMs) to improve their internal reasoning ability by integrating external tools. However, models employing TIR often display suboptimal behaviors, such as insufficient or excessive tool usage and overthinking after tool calls. The challenge of incentivizing LLMs to perform TIR efficiently and accurately, while stabilizing the reasoning process, remains an open question. In this paper, we start by exploring the impact of tool calls on model reasoning from the perspective of information entropy. Our findings indicate that tool call results lead to a distinct change in the information entropy of subsequent reasoning, with the overall entropy of the reasoning chain varying based on the number of tool calls. Building on these insights, we propose Tool-Light, a framework designed to encourage LLMs to perform TIR efficiently and accurately. Our framework includes dataset construction and multi-stage fine-tuning. For dataset construction, we employ continuous self-evolved sampling using the fine-tuned model, integrating both vanilla sampling and entropy-guided sampling. In addition, we establish strict criteria for selecting positive-negative pairs during sampling. The training process involves a two-stage approach, comprising Supervised Fine-Tuning (SFT) and Self-Evolved Direct Preference Optimization (DPO). Experimental results on 10 datasets demonstrate the effectiveness of Tool-Light, significantly improving the model's efficiency in executing TIR tasks. The code is available at https://github.com/asilverlight/Tool-Light.
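
As a rough illustration of the entropy analysis described in the abstract, the sketch below computes per-token predictive entropy over a reasoning chain so that spans before and after a tool call can be compared. The function name, the `boundary` marker, and the before/after comparison are assumptions for illustration, not the paper's exact procedure.

```python
# Illustrative sketch (not the paper's exact procedure): measure the
# predictive entropy of each generated token so that spans of the
# reasoning chain before and after a tool-call result can be compared.
import torch
import torch.nn.functional as F

def token_entropies(model, input_ids):
    """Per-token entropy H = -sum(p * log p) of the model's next-token
    distribution at every position of input_ids (shape: [1, seq_len])."""
    with torch.no_grad():
        logits = model(input_ids).logits          # [1, seq_len, vocab]
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    return -(probs * log_probs).sum(dim=-1).squeeze(0)  # [seq_len]

# Hypothetical usage: `boundary` marks the position where the tool-call
# result is inserted into the sequence.
# entropies = token_entropies(model, input_ids)
# pre, post = entropies[:boundary].mean(), entropies[boundary:].mean()
```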

Model size: 8B params (F16, safetensors)

Model tree for zhangboguodong/Tool-Light-Qwen2.5-7B-it

Base model: Qwen/Qwen2.5-7B
Finetuned: this model
Quantizations: 2 models