Improve model card: Add pipeline tag, update license and expand descriptions (#1)
Commit: eca3fb3fd02b680e4f43b3657e1a66eb2a646f75
Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>
README.md CHANGED
---
base_model: QizhiPei/Qwen2.5-Math-7B-Instruct-RoPE-300k
library_name: transformers
license: apache-2.0
tags:
- llama-factory
- full
- generated_from_trainer
pipeline_tag: text-generation
model-index:
- name: DiffScale-7B
  results: []
---

Paper: [ScaleDiff: Scaling Difficult Problems for Advanced Mathematical Reasoning](https://arxiv.org/abs/2509.21070)

Code: https://github.com/QizhiPei/ScaleDiff

# DiffScale-7B

This model is a fine-tuned version of [QizhiPei/Qwen2.5-Math-7B-Instruct-RoPE-300k](https://huggingface.co/QizhiPei/Qwen2.5-Math-7B-Instruct-RoPE-300k) on the ScaleDiff-Math dataset.

## Model description

ScaleDiff-7B is a Large Reasoning Model (LRM) developed as part of the ScaleDiff pipeline, which is designed to scale up the creation of challenging mathematical problems. The model is fine-tuned on the ScaleDiff-Math dataset and targets advanced mathematical reasoning, addressing the scarcity of high-quality, difficult training data. The pipeline leverages an adaptive thinking model for problem identification and a specialized problem generator (DiffGen-8B) for large-scale problem synthesis.

## Intended uses & limitations

ScaleDiff-7B is intended for advanced mathematical reasoning tasks, in particular difficult, competition-style problem solving. It is aimed at researchers and practitioners who want to benchmark and further develop LRMs on challenging mathematical problems.

**Limitations**: As a language model, its performance is dependent on the quality and scope of its training data. While designed for difficult problems, it may exhibit limitations in highly novel or out-of-distribution mathematical contexts. Further research is needed to fully understand its generalization capabilities beyond the specific benchmarks used in its evaluation.
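
For reference, the sketch below shows one way to run the model with the Transformers library. The repository id, prompt, chat formatting, and generation settings are illustrative assumptions rather than details confirmed on this card.

```python
# Hypothetical inference sketch: the repo id, prompt, and generation settings
# are assumptions, not values confirmed by the model authors.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "QizhiPei/DiffScale-7B"  # assumption: replace with this repository's actual id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: pick a dtype your hardware supports
    device_map="auto",
)

problem = "If x + y = 10 and xy = 21, what is x^2 + y^2?"
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": problem}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Long chain-of-thought solutions can be lengthy, so allow a generous token budget.
outputs = model.generate(inputs, max_new_tokens=4096, temperature=0.6, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```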

## Training and evaluation data

ScaleDiff-7B was fine-tuned on the custom-created [ScaleDiff-Math dataset](https://huggingface.co/datasets/QizhiPei/ScaleDiff-Math). This dataset is generated through a three-step pipeline:

1. **Problem Selection**: Difficult problems are identified from the [AM-Distilled-Dataset](https://huggingface.co/datasets/a-m-team/AM-Qwen3-Distilled) using AdaptThink, an adaptive thinking model.
2. **Problem Generation**: A dedicated problem generator, DiffGen-8B, is trained on these selected difficult problems to produce new, challenging problems.
3. **Solution Distillation and Filtration**: Long chain-of-thought (CoT) solutions for the newly generated problems are distilled using Qwen3-8B as a teacher model and then filtered for quality and relevance.

The final ScaleDiff-Math dataset combines these new problem-solution pairs with an original dataset to provide a more effective training signal. Evaluation was conducted on a suite of difficult mathematical benchmarks including AIME'24, AIME'25, HMMT-Feb'25, BRUMO'25, and MATH500.
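
For a quick look at the training data, the sketch below loads the dataset with the `datasets` library; the split name and record schema are assumptions, so check the dataset card for the actual fields.

```python
# Minimal sketch for browsing ScaleDiff-Math; the split name and field layout
# are assumptions; consult the dataset card for the actual schema.
from datasets import load_dataset

ds = load_dataset("QizhiPei/ScaleDiff-Math", split="train")
print(ds)     # number of rows and column names
print(ds[0])  # one problem with its distilled long-CoT solution
```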

## Training procedure

### Framework versions

- Transformers 4.46.1
- Pytorch 2.4.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
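
Matching these versions may help with reproducibility. The small convenience check below (a sketch, assuming all four packages are importable locally) compares a local environment against the versions listed on this card.

```python
# Convenience check: compare locally installed versions against those listed above.
import datasets, tokenizers, torch, transformers

versions = {
    "transformers": ("4.46.1", transformers.__version__),
    "torch": ("2.4.0+cu121", torch.__version__),
    "datasets": ("3.1.0", datasets.__version__),
    "tokenizers": ("0.20.3", tokenizers.__version__),
}
for name, (card, local) in versions.items():
    status = "match" if local == card else "differs"
    print(f"{name}: card {card}, local {local} ({status})")
```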