update README
README.md CHANGED
```diff
@@ -1,6 +1,6 @@
 ---
 pipeline_tag: text-generation
-license: apache-2.0
+license: mit
 ---
 
 <div align="center">
@@ -50,7 +50,7 @@ license: apache-2.0
     <img alt="ModelScope" src="https://img.shields.io/badge/🤖️_ModelScope-MiniMax-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
   </a>
   <a href="https://github.com/MiniMax-AI/MiniMax-M2/blob/main/LICENSE" style="margin: 2px;">
-    <img alt="License" src="https://img.shields.io/badge/⚖️_License-
+    <img alt="License" src="https://img.shields.io/badge/⚖️_License-MIT-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
   </a>
   <a href="https://github.com/MiniMax-AI/MiniMax-AI.github.io/blob/main/images/wechat-qrcode.jpeg" target="_blank" style="margin: 2px;">
     <img alt="WeChat" src="https://img.shields.io/badge/💬_WeChat-MiniMax-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
@@ -61,7 +61,7 @@ license: apache-2.0
 
 Today, we release and open source MiniMax-M2, a **Mini** model built for **Max** coding & agentic workflows.
 
-**MiniMax-M2** redefines efficiency for agents. It's a compact, fast, and cost-effective model built for elite performance in coding and agentic tasks, all while maintaining powerful general intelligence. With just 10 billion activated parameters, MiniMax-M2 provides the sophisticated, end-to-end tool use performance expected from today's leading models, but in a streamlined form factor that makes deployment and scaling easier than ever.
+**MiniMax-M2** redefines efficiency for agents. It's a compact, fast, and cost-effective MoE model (230 billion total parameters with 10 billion active parameters) built for elite performance in coding and agentic tasks, all while maintaining powerful general intelligence. With just 10 billion activated parameters, MiniMax-M2 provides the sophisticated, end-to-end tool use performance expected from today's leading models, but in a streamlined form factor that makes deployment and scaling easier than ever.
 
 <p align="center">
   <img width="100%" src="figures/Bench.png">
@@ -174,6 +174,8 @@ We recommend using [SGLang](https://docs.sglang.ai/) to serve MiniMax-M2. SGLang
 
 We recommend using [vLLM](https://docs.vllm.ai/en/stable/) to serve MiniMax-M2. vLLM provides efficient day-0 support of MiniMax-M2 model, check https://docs.vllm.ai/projects/recipes/en/latest/MiniMax/MiniMax-M2.html for latest deployment guide. We also provide our [vLLM Deployment Guide](https://huggingface.co/MiniMaxAI/MiniMax-M2/blob/main/docs/vllm_deploy_guide.md).
 
+IMPORTANT: MiniMax-M2 is an interleaved thinking model. Therefore, when using it, it is important to retain the thinking content from the assistant's turns within the historical messages. In the model's output content, we use the `<think>...</think>` format to wrap the assistant's thinking content. When using the model, you must ensure that the historical content is passed back in its original format. Do not remove the `<think>...</think>` part, otherwise, the model's performance will be negatively affected.
+
 ### Inference Parameters
 We recommend using the following parameters for best performance: `temperature=1.0`, `top_p = 0.95`, `top_k = 40`.
 
```
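The README recommends serving MiniMax-M2 with vLLM. A minimal sketch of what that looks like with vLLM's offline `LLM` API is below; the `tensor_parallel_size` and `trust_remote_code` settings are assumptions that depend on your hardware and vLLM version, and the recipe and deployment guide linked in the diff remain the authoritative instructions.

```python
# Minimal sketch: offline inference with vLLM's LLM API (not an official recipe).
# Assumptions: a vLLM version with MiniMax-M2 support and enough GPUs for the
# chosen tensor parallelism; adjust both to your setup.
from vllm import LLM, SamplingParams

llm = LLM(
    model="MiniMaxAI/MiniMax-M2",
    tensor_parallel_size=8,   # assumption: shard the MoE weights across 8 GPUs
    trust_remote_code=True,   # assumption: may be unnecessary on newer vLLM versions
)

# Sampling parameters recommended in the README.
params = SamplingParams(temperature=1.0, top_p=0.95, top_k=40, max_tokens=1024)

messages = [{"role": "user", "content": "Write a Python function that merges two sorted lists."}]
outputs = llm.chat(messages, params)

# Per the README, the response text is expected to contain a <think>...</think> block.
print(outputs[0].outputs[0].text)
```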

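The new IMPORTANT paragraph is easy to get wrong in multi-turn use. Here is a sketch of the intended pattern against an OpenAI-compatible endpoint; the base URL, API key, and served model name are placeholders for a locally served MiniMax-M2 (for example via vLLM or SGLang), not values from the README.

```python
# Sketch: multi-turn chat that keeps <think>...</think> content in the history.
# base_url / api_key / model below are placeholders for a local OpenAI-compatible
# server hosting MiniMax-M2 (e.g. vLLM or SGLang); adjust to your deployment.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

messages = [{"role": "user", "content": "Plan the steps to add a CLI flag to this tool."}]

first = client.chat.completions.create(model="MiniMaxAI/MiniMax-M2", messages=messages)
assistant_reply = first.choices[0].message.content  # expected to contain <think>...</think>

# Pass the assistant turn back verbatim: do NOT strip the <think>...</think> part.
messages.append({"role": "assistant", "content": assistant_reply})
messages.append({"role": "user", "content": "Now implement step 1."})

second = client.chat.completions.create(model="MiniMaxAI/MiniMax-M2", messages=messages)
print(second.choices[0].message.content)
```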

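For the recommended inference parameters, `temperature` and `top_p` map directly onto an OpenAI-compatible request, while `top_k` is not part of the standard OpenAI schema. One way to pass it is via `extra_body`, assuming your server (such as vLLM's OpenAI-compatible server) accepts extra sampling fields; otherwise configure `top_k` on the server side.

```python
# Sketch: applying the README's recommended sampling parameters over an
# OpenAI-compatible API. top_k is forwarded via extra_body, on the assumption
# that the serving stack accepts extra sampling fields.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")  # placeholder endpoint

resp = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2",
    messages=[{"role": "user", "content": "Summarize what an MoE model is in two sentences."}],
    temperature=1.0,
    top_p=0.95,
    extra_body={"top_k": 40},  # non-standard field passed through to the engine
)
print(resp.choices[0].message.content)
```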