---
license: mit
base_model:
- inclusionAI/Ling-flash-base-2.0
pipeline_tag: text-generation
library_name: transformers
---

🤗 Hugging Face | 🤖 ModelScope

## Introduction

Today, we are officially open-sourcing Ring-flash-2.0. This is a __high-performance thinking model, deeply optimized__ based on Ling-flash-2.0-base. Like Ling-flash-2.0, Ring-flash-2.0 has a total of 100B parameters, with only 6.1B activated per inference. Our independently developed __icepop algorithm__ has successfully addressed the challenge of training instability in reinforcement learning (RL) for MoE LLMs after cold-start Long-CoT SFT, enabling the model's complex reasoning capabilities to continuously improve throughout extended RL training cycles.

Ring-flash-2.0 demonstrates significant breakthroughs across multiple challenging benchmarks, including __math competitions__, __code generation__, and __logical reasoning__. Its performance not only surpasses that of SOTA dense models under 40B parameters but also rivals larger open-weight MoE models and closed-source high-performance thinking model APIs.

### Leading-Level Performance in Complex Reasoning

We selected __representative open-source thinking models__ and __closed-source APIs__ for comparison, including GPT-OSS-120B (medium), Qwen3-32B-Thinking, Seed-OSS-36B-Instruct, and Gemini-2.5-Flash. The benchmarking results demonstrate that Ring-flash-2.0 exhibits leading performance across multiple challenging general reasoning tasks, including:

- __Math competitions__ (AIME 25, Omni-MATH),
- __Code generation__ (LiveCodeBench, CodeForce-Elo),
- __Logical reasoning__ (ARC-Prize).

It also shows strong competitiveness in specialized domains such as:

- __Scientific and medical reasoning__ (GPQA-Diamond, HealthBench).

More surprisingly, although Ring-flash-2.0 is primarily designed for complex reasoning, it outperforms all other compared models in __creative writing__ (Creative Writing v3) and matches the creative capability of its "twin brother", the non-thinking model Ling-flash-2.0.

### Efficient Architecture, High-Speed Inference

Building on the highly efficient MoE architecture of the Ling 2.0 series, and through structural optimizations such as a __1/32 expert activation ratio__ and __MTP layers__, Ring-flash-2.0 activates only 6.1B (4.8B non-embedding) parameters while delivering performance comparable to a ~40B dense model. Thanks to its low-activation, high-sparsity design, Ring-flash-2.0 achieves a high generation speed of __200+ tokens/sec__ when deployed on just four H20 GPUs, significantly reducing inference costs for thinking models in high-concurrency scenarios.

## IcePop: Cooling Down Training-Inference Gaps in RL for MoE Models

During RL training of MoE models, the precision discrepancy between the training and inference engines is more pronounced than for dense models. This gap widens progressively as sequence length and training steps increase, particularly during long-sequence generation and extended training cycles. A more critical issue is that the original GRPO algorithm begins to break down within a limited number of training steps. Specifically, the probabilistic discrepancy for the same token between the training and inference phases gradually increases; once this relative difference exceeds 5%, training effectively fails, posing a significant challenge for long-horizon reinforcement learning with lengthy sequences.

To address this issue, we introduced a key solution: __distribution calibration via masked bidirectional truncation, which effectively narrows the gap between training and inference__.

- Bidirectional Truncation: We truncate not only tokens where the training probability is significantly higher than the inference probability, but also the reverse scenario where the training probability is much lower.
- Masking: Tokens with excessively large discrepancies are excluded from gradient computation.

(A brief illustrative code sketch of this idea appears after the multi-stage training description below.)

For a detailed introduction to the algorithm, please refer to our technical blog: https://ringtech.notion.site/icepop

## SFT + RLVR + RLHF Multi-Stage Training

To comprehensively enhance the capabilities of Ring-flash-2.0, we designed a two-stage RL pipeline. First, lightweight Long-CoT SFT equips the Ling-flash-2.0-base model with diverse thinking patterns. This is followed by RL training with Verifiable Rewards (RLVR) to continually stimulate the model's reasoning potential. Finally, an RLHF phase is incorporated to improve the model's general abilities.

During RL training, we compared directly combining RLVR and RLHF into joint training against the two-stage RL pipeline we ultimately adopted. Both approaches showed relatively similar effectiveness in our experiments. However, due to the differing difficulty levels of RLVR and RLHF tasks, with RLHF involving relatively shorter model rollouts, joint training resulted in more long-tail generations. From an engineering efficiency perspective, we ultimately adopted the two-stage RL approach.
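To make the masked bidirectional truncation described in the IcePop section above more concrete, here is a minimal PyTorch-style sketch. It is only an illustration of the idea as stated in this card, not the released IcePop implementation: the function names, tensor shapes, and threshold values are all hypothetical, and the exact formulation is given in the technical blog linked above.

```python
import torch

def masked_bidirectional_truncation(train_logp: torch.Tensor,
                                    rollout_logp: torch.Tensor,
                                    low: float = 0.95,
                                    high: float = 1.05) -> torch.Tensor:
    """Return a 0/1 mask over sampled tokens.

    train_logp / rollout_logp: per-token log-probabilities of the sampled
    tokens under the training engine and the inference engine.
    `low` / `high` are hypothetical bounds on the tolerated train/inference
    probability ratio, NOT the values used to train Ring-flash-2.0.
    """
    # Per-token probability ratio between the training and inference engines.
    ratio = torch.exp(train_logp - rollout_logp)

    # Bidirectional truncation: flag tokens whose training probability is
    # much higher OR much lower than the inference probability.
    keep = (ratio >= low) & (ratio <= high)
    return keep.float()


def masked_policy_loss(per_token_loss: torch.Tensor,
                       train_logp: torch.Tensor,
                       rollout_logp: torch.Tensor) -> torch.Tensor:
    # Masking: tokens with excessively large train/inference discrepancies
    # contribute no gradient at all.
    mask = masked_bidirectional_truncation(train_logp, rollout_logp)
    return (per_token_loss * mask).sum() / mask.sum().clamp_min(1.0)
```

In this reading, both the truncation and the masking act per token, so a handful of badly mismatched tokens in a long rollout no longer dominates the policy update.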

## Quickstart

### 🤗 Hugging Face Transformers

Here is a code snippet to show you how to use the chat model with `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ring-flash-2.0"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt", return_token_type_ids=False).to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8192
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

### 🤖 ModelScope

If you're in mainland China, we strongly recommend you use our model from 🤖 ModelScope.

## Deployment

### vLLM

vLLM supports offline batched inference or launching an OpenAI-compatible API service for online inference.

#### Environment Preparation

Since the Pull Request (PR) has not been submitted to the vLLM community at this stage, please prepare the environment by following the steps below:

```bash
git clone -b v0.10.0 https://github.com/vllm-project/vllm.git
cd vllm
wget https://raw.githubusercontent.com/inclusionAI/Ling-V2/refs/heads/main/inference/vllm/bailing_moe_v2.patch
git apply bailing_moe_v2.patch
pip install -e .
```

#### Offline Inference:

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

tokenizer = AutoTokenizer.from_pretrained("inclusionAI/Ring-flash-2.0")

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=16384)

llm = LLM(model="inclusionAI/Ring-flash-2.0", dtype='bfloat16')

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
outputs = llm.generate([text], sampling_params)
```

#### Online Inference:

```bash
vllm serve inclusionAI/Ring-flash-2.0 \
    --tensor-parallel-size 2 \
    --pipeline-parallel-size 1 \
    --use-v2-block-manager \
    --gpu-memory-utilization 0.90
```

A client-side example for querying this server is given after the YaRN notes below.

To handle long context in vLLM using YaRN, we need to follow these two steps:

1. Add a `rope_scaling` field to the model's `config.json` file, for example:

   ```json
   {
       ...,
       "rope_scaling": {
           "factor": 4.0,
           "original_max_position_embeddings": 32768,
           "type": "yarn"
       }
   }
   ```

2. Use an additional parameter `--max-model-len` to specify the desired maximum context length when starting the vLLM service.

For detailed guidance, please refer to the vLLM [`instructions`](https://docs.vllm.ai/en/latest/).
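Once the server launched in the Online Inference step above is running, it can be queried with any OpenAI-compatible client. Below is a minimal sketch using the official `openai` Python package; it assumes the server is reachable on vLLM's default port 8000, so adjust the base URL for your deployment:

```python
from openai import OpenAI

# vLLM exposes an OpenAI-compatible endpoint; no real API key is required.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="inclusionAI/Ring-flash-2.0",
    messages=[
        {"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
        {"role": "user", "content": "Give me a short introduction to large language models."},
    ],
    max_tokens=1024,
)
print(response.choices[0].message.content)
```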
### SGLang

#### Environment Preparation

We will submit our model to the official SGLang release later. For now, please prepare the environment with the following steps:

```shell
pip3 install sglang==0.5.2rc0 sgl-kernel==0.3.7.post1
```

You can use the docker image as well:

```shell
docker pull lmsysorg/sglang:v0.5.2rc0-cu126
```

Then apply the patch to your sglang installation:

```shell
# The `patch` command is required; run `yum install -y patch` if it is missing.
patch -d `python -c 'import sglang;import os; print(os.path.dirname(sglang.__file__))'` -p3 < inference/sglang/bailing_moe_v2.patch
```

#### Run Inference

SGLang now supports both BF16 and FP8 models; which one is used depends on the dtype of the model in ${MODEL_PATH}. Both share the same commands below.

- Start server:

```shell
python -m sglang.launch_server \
    --model-path $MODEL_PATH \
    --host 0.0.0.0 --port $PORT \
    --trust-remote-code \
    --attention-backend fa3
```

MTP is supported for the base model, but not yet for the chat model. You can add the parameter `--speculative-algorithm NEXTN` to the start command.

- Client:

```shell
curl -s http://localhost:${PORT}/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "auto", "messages": [{"role": "user", "content": "What is the capital of France?"}]}'
```

More usage can be found [here](https://docs.sglang.ai/basic_usage/send_request.html).

### Finetuning

We recommend you use [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory) to [finetune Ring](https://github.com/inclusionAI/Ling-V2/blob/main/docs/llamafactory_finetuning.md).

## License

This code repository is licensed under [the MIT License](https://github.com/inclusionAI/Ling-V2/blob/master/LICENCE).

## Tip

To facilitate academic research and downstream applications with customizable model naming, we did not conduct specific identity-recognition training.