Text Generation
Transformers
Safetensors
minimax_m2
conversational
custom_code
fp8
jiaxin committed on
Commit
7496e98
·
1 Parent(s): 05f3755

update README and guides

README.md CHANGED
@@ -29,8 +29,8 @@ license: apache-2.0
29
  <a href="https://www.minimax.io" target="_blank" style="margin: 2px;">
30
  <img alt="Homepage" src="https://img.shields.io/badge/_Homepage-MiniMax-FF4040?style=flat-square&labelColor=2C3E50&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNDkwLjE2IDQxMS43Ij48ZGVmcz48c3R5bGU+LmNscy0xe2ZpbGw6I2ZmZjt9PC9zdHlsZT48L2RlZnM+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMjMzLjQ1LDQwLjgxYTE3LjU1LDE3LjU1LDAsMSwwLTM1LjEsMFYzMzEuNTZhNDAuODIsNDAuODIsMCwwLDEtODEuNjMsMFYxNDVhMTcuNTUsMTcuNTUsMCwxLDAtMzUuMDksMHY3OS4wNmE0MC44Miw0MC44MiwwLDAsMS04MS42MywwVjE5NS40MmExMS42MywxMS42MywwLDAsMSwyMy4yNiwwdjI4LjY2YTE3LjU1LDE3LjU1LDAsMCwwLDM1LjEsMFYxNDVBNDAuODIsNDAuODIsMCwwLDEsMTQwLDE0NVYzMzEuNTZhMTcuNTUsMTcuNTUsMCwwLDAsMzUuMSwwVjIxNy41aDBWNDAuODFhNDAuODEsNDAuODEsMCwxLDEsODEuNjIsMFYyODEuNTZhMTEuNjMsMTEuNjMsMCwxLDEtMjMuMjYsMFptMjE1LjksNjMuNEE0MC44Niw0MC44NiwwLDAsMCw0MDguNTMsMTQ1VjMwMC44NWExNy41NSwxNy41NSwwLDAsMS0zNS4wOSwwdi0yNjBhNDAuODIsNDAuODIsMCwwLDAtODEuNjMsMFYzNzAuODlhMTcuNTUsMTcuNTUsMCwwLDEtMzUuMSwwVjMzMGExMS42MywxMS42MywwLDEsMC0yMy4yNiwwdjQwLjg2YTQwLjgxLDQwLjgxLDAsMCwwLDgxLjYyLDBWNDAuODFhMTcuNTUsMTcuNTUsMCwwLDEsMzUuMSwwdjI2MGE0MC44Miw0MC44MiwwLDAsMCw4MS42MywwVjE0NWExNy41NSwxNy41NSwwLDEsMSwzNS4xLDBWMjgxLjU2YTExLjYzLDExLjYzLDAsMCwwLDIzLjI2LDBWMTQ1QTQwLjg1LDQwLjg1LDAsMCwwLDQ0OS4zNSwxMDQuMjFaIi8+PC9zdmc+&logoWidth=20" style="display: inline-block; vertical-align: middle;"/>
31
  </a>
32
- <a href="https://chat.minimax.io/" target="_blank" style="margin: 2px;">
33
- <img alt="Chat" src="https://img.shields.io/badge/_MiniMax_Chat-FF4040?style=flat-square&labelColor=2C3E50&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNDkwLjE2IDQxMS43Ij48ZGVmcz48c3R5bGU+LmNscy0xe2ZpbGw6I2ZmZjt9PC9zdHlsZT48L2RlZnM+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMjMzLjQ1LDQwLjgxYTE3LjU1LDE3LjU1LDAsMSwwLTM1LjEsMFYzMzEuNTZhNDAuODIsNDAuODIsMCwwLDEtODEuNjMsMFYxNDVhMTcuNTUsMTcuNTUsMCwxLDAtMzUuMDksMHY3OS4wNmE0MC44Miw0MC44MiwwLDAsMS04MS42MywwVjE5NS40MmExMS42MywxMS42MywwLDAsMSwyMy4yNiwwdjI4LjY2YTE3LjU1LDE3LjU1LDAsMCwwLDM1LjEsMFYxNDVBNDAuODIsNDAuODIsMCwwLDEsMTQwLDE0NVYzMzEuNTZhMTcuNTUsMTcuNTUsMCwwLDAsMzUuMSwwVjIxNy41aDBWNDAuODFhNDAuODEsNDAuODEsMCwxLDEsODEuNjIsMFYyODEuNTZhMTEuNjMsMTEuNjMsMCwxLDEtMjMuMjYsMFptMjE1LjksNjMuNEE0MC44Niw0MC44NiwwLDAsMCw0MDguNTMsMTQ1VjMwMC44NWExNy41NSwxNy41NSwwLDAsMS0zNS4wOSwwdi0yNjBhNDAuODIsNDAuODIsMCwwLDAtODEuNjMsMFYzNzAuODlhMTcuNTUsMTcuNTUsMCwwLDEtMzUuMSwwVjMzMGExMS42MywxMS42MywwLDEsMC0yMy4yNiwwdjQwLjg2YTQwLjgxLDQwLjgxLDAsMCwwLDgxLjYyLDBWNDAuODFhMTcuNTUsMTcuNTUsMCwwLDEsMzUuMSwwdjI2MGE0MC44Miw0MC44MiwwLDAsMCw4MS42MywwVjE0NWExNy41NSwxNy41NSwwLDEsMSwzNS4xLDBWMjgxLjU2YTExLjYzLDExLjYzLDAsMCwwLDIzLjI2LDBWMTQ1QTQwLjg1LDQwLjg1LDAsMCwwLDQ0OS4zNSwxMDQuMjFaIi8+PC9zdmc+&logoWidth=20" style="display: inline-block; vertical-align: middle;"/>
34
  </a>
35
  <a href="https://www.minimax.io/platform" style="margin: 2px;">
36
  <img alt="API" src="https://img.shields.io/badge/⚡_API-Platform-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
@@ -59,7 +59,8 @@ license: apache-2.0
59
 
60
  # Meet MiniMax-M2
61
 
62
- Today, we release (and open source?) MiniMax-M2, a **Mini** model built for **Max** coding & agentic workflows.
 
63
  **MiniMax-M2** redefines efficiency for agents. It's a compact, fast, and cost-effective model built for elite performance in coding and agentic tasks, all while maintaining powerful general intelligence. With just 10 billion activated parameters, MiniMax-M2 provides the sophisticated, end-to-end tool use performance expected from today's leading models, but in a streamlined form factor that makes deployment and scaling easier than ever.
64
 
65
  <p align="center">
@@ -77,7 +78,7 @@ Today, we release (and open source?) MiniMax-M2, a **Mini** model built for **Ma
77
 
78
  **Agent Performance**. MiniMax-M2 plans and executes complex, long-horizon toolchains across shell, browser, retrieval, and code runners. In BrowseComp-style evaluations, it consistently locates hard-to-surface sources, keeps evidence traceable, and gracefully recovers from flaky steps.
79
 
80
- **Efficient Design**. With 10 billion activated parameters (200 billion in total), MiniMax-M2 delivers lower latency, lower cost, and higher throughput for interactive agents and batched sampling—perfectly aligned with the shift toward highly deployable models that still shine on coding and agentic tasks.
81
 
82
  ---
83
 
@@ -104,7 +105,7 @@ These comprehensive evaluations test real-world end-to-end coding and agentic to
104
  >Notes: Data points marked with an asterisk (*) are taken directly from the model's official tech report or blog. All other metrics were obtained using the evaluation methods described below.
105
  >- SWE-bench Verified: We use the same scaffold as [R2E-Gym](https://arxiv.org/pdf/2504.07164) (Jain et al. 2025) on top of OpenHands to evaluate agents on SWE tasks. All scores are validated on our internal infrastructure with 128k context length, 100 max steps, and no test-time scaling. All git-related content is removed to ensure the agent sees only the code at the issue point.
106
  >- Multi-SWE-Bench & SWE-bench Multilingual: All scores are averaged across 8 runs using the [claude-code](https://github.com/anthropics/claude-code) CLI (300 max steps) as the evaluation scaffold.
107
- >- Terminal-Bench: All scores are evaluated with the official claude-code from the original [Terminal-Bench](https://www.tbench.ai/) repository(commit 94bf692), averaged over 8 runs to report the mean pass rate.
108
  >- ArtifactsBench: All scores are computed by averaging three runs with the official implementation of [ArtifactsBench](https://github.com/Tencent-Hunyuan/ArtifactsBenchmark), using the stable Gemini-2.5-Pro as the judge model.
109
  >- BrowseComp & BrowseComp-zh & GAIA (text only) & xbench-DeepSearch: All scores reported use the same agent framework as [WebExplorer](https://arxiv.org/pdf/2509.06501) (Liu et al. 2025), with minor tool-description adjustments. We use the 103-sample text-only GAIA validation subset following [WebExplorer](https://arxiv.org/pdf/2509.06501) (Liu et al. 2025).
110
  >- HLE (w/ tools): All reported scores are obtained using search tools and a Python tool. The search tools employ the same agent framework as [WebExplorer](https://arxiv.org/pdf/2509.06501) (Liu et al. 2025), and the Python tool runs in a Jupyter environment.
@@ -138,7 +139,7 @@ We align with **Artificial Analysis**, which aggregates challenging benchmarks u
138
 
139
  ## Why activation size matters
140
 
141
- Keeping activations at **10B** compresses the critical path of agentic workflow - plan → act → verify - so loops feel responsive and cheaper to run:
142
 
143
  - **Faster feedback cycles** in compile-run-test and browse-retrieve-cite chains.
144
 
@@ -156,8 +157,30 @@ We look forward to your feedback and to collaborating with developers and resear
156
 
157
  ## How to Use
158
 
159
- - **MiniMax Agent**: Our general agent product, built on MiniMax-M2, is now publicly available and free for a limited time: https://agent.minimax.io/
160
 
161
- - **MiniMax Open Platform**: https://www.minimax.io/
162
 
163
- - **MiniMax-M2 Model Weights**: The open-source model weights are available on Hugging Face: https://huggingface.co/MiniMaxAI/ . vLLM provides efficient day-0 support of MiniMax-M2 model, check https://docs.vllm.ai/projects/recipes/en/latest/ for latest deployment guide.
 
29
  <a href="https://www.minimax.io" target="_blank" style="margin: 2px;">
30
  <img alt="Homepage" src="https://img.shields.io/badge/_Homepage-MiniMax-FF4040?style=flat-square&labelColor=2C3E50&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNDkwLjE2IDQxMS43Ij48ZGVmcz48c3R5bGU+LmNscy0xe2ZpbGw6I2ZmZjt9PC9zdHlsZT48L2RlZnM+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMjMzLjQ1LDQwLjgxYTE3LjU1LDE3LjU1LDAsMSwwLTM1LjEsMFYzMzEuNTZhNDAuODIsNDAuODIsMCwwLDEtODEuNjMsMFYxNDVhMTcuNTUsMTcuNTUsMCwxLDAtMzUuMDksMHY3OS4wNmE0MC44Miw0MC44MiwwLDAsMS04MS42MywwVjE5NS40MmExMS42MywxMS42MywwLDAsMSwyMy4yNiwwdjI4LjY2YTE3LjU1LDE3LjU1LDAsMCwwLDM1LjEsMFYxNDVBNDAuODIsNDAuODIsMCwwLDEsMTQwLDE0NVYzMzEuNTZhMTcuNTUsMTcuNTUsMCwwLDAsMzUuMSwwVjIxNy41aDBWNDAuODFhNDAuODEsNDAuODEsMCwxLDEsODEuNjIsMFYyODEuNTZhMTEuNjMsMTEuNjMsMCwxLDEtMjMuMjYsMFptMjE1LjksNjMuNEE0MC44Niw0MC44NiwwLDAsMCw0MDguNTMsMTQ1VjMwMC44NWExNy41NSwxNy41NSwwLDAsMS0zNS4wOSwwdi0yNjBhNDAuODIsNDAuODIsMCwwLDAtODEuNjMsMFYzNzAuODlhMTcuNTUsMTcuNTUsMCwwLDEtMzUuMSwwVjMzMGExMS42MywxMS42MywwLDEsMC0yMy4yNiwwdjQwLjg2YTQwLjgxLDQwLjgxLDAsMCwwLDgxLjYyLDBWNDAuODFhMTcuNTUsMTcuNTUsMCwwLDEsMzUuMSwwdjI2MGE0MC44Miw0MC44MiwwLDAsMCw4MS42MywwVjE0NWExNy41NSwxNy41NSwwLDEsMSwzNS4xLDBWMjgxLjU2YTExLjYzLDExLjYzLDAsMCwwLDIzLjI2LDBWMTQ1QTQwLjg1LDQwLjg1LDAsMCwwLDQ0OS4zNSwxMDQuMjFaIi8+PC9zdmc+&logoWidth=20" style="display: inline-block; vertical-align: middle;"/>
31
  </a>
32
+ <a href="https://agent.minimax.io/" target="_blank" style="margin: 2px;">
33
+ <img alt="Agent" src="https://img.shields.io/badge/_MiniMax_Agent-FF4040?style=flat-square&labelColor=2C3E50&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNDkwLjE2IDQxMS43Ij48ZGVmcz48c3R5bGU+LmNscy0xe2ZpbGw6I2ZmZjt9PC9zdHlsZT48L2RlZnM+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMjMzLjQ1LDQwLjgxYTE3LjU1LDE3LjU1LDAsMSwwLTM1LjEsMFYzMzEuNTZhNDAuODIsNDAuODIsMCwwLDEtODEuNjMsMFYxNDVhMTcuNTUsMTcuNTUsMCwxLDAtMzUuMDksMHY3OS4wNmE0MC44Miw0MC44MiwwLDAsMS04MS42MywwVjE5NS40MmExMS42MywxMS42MywwLDAsMSwyMy4yNiwwdjI4LjY2YTE3LjU1LDE3LjU1LDAsMCwwLDM1LjEsMFYxNDVBNDAuODIsNDAuODIsMCwwLDEsMTQwLDE0NVYzMzEuNTZhMTcuNTUsMTcuNTUsMCwwLDAsMzUuMSwwVjIxNy41aDBWNDAuODFhNDAuODEsNDAuODEsMCwxLDEsODEuNjIsMFYyODEuNTZhMTEuNjMsMTEuNjMsMCwxLDEtMjMuMjYsMFptMjE1LjksNjMuNEE0MC44Niw0MC44NiwwLDAsMCw0MDguNTMsMTQ1VjMwMC44NWExNy41NSwxNy41NSwwLDAsMS0zNS4wOSwwdi0yNjBhNDAuODIsNDAuODIsMCwwLDAtODEuNjMsMFYzNzAuODlhMTcuNTUsMTcuNTUsMCwwLDEtMzUuMSwwVjMzMGExMS42MywxMS42MywwLDEsMC0yMy4yNiwwdjQwLjg2YTQwLjgxLDQwLjgxLDAsMCwwLDgxLjYyLDBWNDAuODFhMTcuNTUsMTcuNTUsMCwwLDEsMzUuMSwwdjI2MGE0MC44Miw0MC44MiwwLDAsMCw4MS42MywwVjE0NWExNy41NSwxNy41NSwwLDEsMSwzNS4xLDBWMjgxLjU2YTExLjYzLDExLjYzLDAsMCwwLDIzLjI2LDBWMTQ1QTQwLjg1LDQwLjg1LDAsMCwwLDQ0OS4zNSwxMDQuMjFaIi8+PC9zdmc+&logoWidth=20" style="display: inline-block; vertical-align: middle;"/>
34
  </a>
35
  <a href="https://www.minimax.io/platform" style="margin: 2px;">
36
  <img alt="API" src="https://img.shields.io/badge/⚡_API-Platform-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
 
59
 
60
  # Meet MiniMax-M2
61
 
62
+ Today, we release and open source MiniMax-M2, a **Mini** model built for **Max** coding & agentic workflows.
63
+
64
  **MiniMax-M2** redefines efficiency for agents. It's a compact, fast, and cost-effective model built for elite performance in coding and agentic tasks, all while maintaining powerful general intelligence. With just 10 billion activated parameters, MiniMax-M2 provides the sophisticated, end-to-end tool use performance expected from today's leading models, but in a streamlined form factor that makes deployment and scaling easier than ever.
65
 
66
  <p align="center">
 
78
 
79
  **Agent Performance**. MiniMax-M2 plans and executes complex, long-horizon toolchains across shell, browser, retrieval, and code runners. In BrowseComp-style evaluations, it consistently locates hard-to-surface sources, keeps evidence traceable, and gracefully recovers from flaky steps.
80
 
81
+ **Efficient Design**. With 10 billion activated parameters (230 billion in total), MiniMax-M2 delivers lower latency, lower cost, and higher throughput for interactive agents and batched sampling—perfectly aligned with the shift toward highly deployable models that still shine on coding and agentic tasks.
82
 
83
  ---
84
 
 
105
  >Notes: Data points marked with an asterisk (*) are taken directly from the model's official tech report or blog. All other metrics were obtained using the evaluation methods described below.
106
  >- SWE-bench Verified: We use the same scaffold as [R2E-Gym](https://arxiv.org/pdf/2504.07164) (Jain et al. 2025) on top of OpenHands to evaluate agents on SWE tasks. All scores are validated on our internal infrastructure with 128k context length, 100 max steps, and no test-time scaling. All git-related content is removed to ensure the agent sees only the code at the issue point.
107
  >- Multi-SWE-Bench & SWE-bench Multilingual: All scores are averaged across 8 runs using the [claude-code](https://github.com/anthropics/claude-code) CLI (300 max steps) as the evaluation scaffold.
108
+ >- Terminal-Bench: All scores are evaluated with the official claude-code from the original [Terminal-Bench](https://www.tbench.ai/) repository (commit `94bf692`), averaged over 8 runs to report the mean pass rate.
109
  >- ArtifactsBench: All scores are computed by averaging three runs with the official implementation of [ArtifactsBench](https://github.com/Tencent-Hunyuan/ArtifactsBenchmark), using the stable Gemini-2.5-Pro as the judge model.
110
  >- BrowseComp & BrowseComp-zh & GAIA (text only) & xbench-DeepSearch: All scores reported use the same agent framework as [WebExplorer](https://arxiv.org/pdf/2509.06501) (Liu et al. 2025), with minor tool-description adjustments. We use the 103-sample text-only GAIA validation subset following [WebExplorer](https://arxiv.org/pdf/2509.06501) (Liu et al. 2025).
111
  >- HLE (w/ tools): All reported scores are obtained using search tools and a Python tool. The search tools employ the same agent framework as [WebExplorer](https://arxiv.org/pdf/2509.06501) (Liu et al. 2025), and the Python tool runs in a Jupyter environment.
 
139
 
140
  ## Why activation size matters
141
 
142
+ By keeping activations around **10B**, the plan → act → verify loop at the core of agentic workflows stays streamlined, improving responsiveness and reducing compute overhead:
143
 
144
  - **Faster feedback cycles** in compile-run-test and browse-retrieve-cite chains.
145
 
 
157
 
158
  ## How to Use
159
 
160
+ - Our product **MiniMax Agent**, built on MiniMax-M2, is now **publicly available and free** for a limited time: https://agent.minimax.io/
161
+
162
+ - The MiniMax-M2 API is now live on the **MiniMax Open Platform** and is **free** for a limited time: https://platform.minimax.io/docs/guides/text-generation
163
+
164
+ - The MiniMax-M2 model weights are now **open-source**, allowing for local deployment and use: https://huggingface.co/MiniMaxAI/MiniMax-M2.
165
+
166
+ ## Local Deployment Guide
167
+
168
+ Download the model from the Hugging Face repository: https://huggingface.co/MiniMaxAI/MiniMax-M2
169
+
170
+ ### vLLM
171
+
172
+ We recommend using [vLLM](https://docs.vllm.ai/en/latest/) to serve MiniMax-M2. vLLM provides efficient day-0 support for the MiniMax-M2 model; check https://docs.vllm.ai/projects/recipes/en/latest/MiniMax/MiniMax-M2.html for the latest deployment recipe. We also provide our own [vLLM Deployment Guide](https://huggingface.co/MiniMaxAI/MiniMax-M2/blob/main/docs/vllm_deploy_guide.md).
173
+
174
+ ### SGLang
175
+ [SGLang](https://docs.sglang.ai/) is likewise recommended for serving MiniMax-M2. Please refer to our [SGLang Deployment Guide](https://huggingface.co/MiniMaxAI/MiniMax-M2/blob/main/docs/sglang_deploy_guide.md).
176
+
177
+ ### Inference Parameters
178
+ We recommend using the following parameters for best performance: `temperature=1.0`, `top_p=0.95`, `top_k=20`.
179
+
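+ As a minimal sketch of passing these to an OpenAI-compatible server started as in the deployment sections above (the endpoint URL is an assumption; `top_k` is not a native OpenAI SDK argument, so it is passed via `extra_body`):
+
+ ```python
+ from openai import OpenAI
+
+ # Assumed local OpenAI-compatible endpoint served by vLLM or SGLang.
+ client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
+
+ response = client.chat.completions.create(
+     model="MiniMaxAI/MiniMax-M2",
+     messages=[{"role": "user", "content": "Hello!"}],
+     temperature=1.0,
+     top_p=0.95,
+     extra_body={"top_k": 20},  # engine-specific sampling parameter
+ )
+ print(response.choices[0].message.content)
+ ```
+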
180
+ ## Tool Calling Guide
181
+
182
+ Please refer to our [Tool Calling Guide](https://huggingface.co/MiniMaxAI/MiniMax-M2/blob/main/docs/tool_calling_guide.md).
183
 
184
+ # Contact Us
185
 
186
+ Contact us at [model@minimax.io](mailto:model@minimax.io).
docs/sglang_deploy_guide.md ADDED
@@ -0,0 +1,98 @@
1
+ # MiniMax M2 Model SGLang Deployment Guide
2
+
3
+ We recommend using [SGLang](https://github.com/sgl-project/sglang) to deploy the [MiniMax-M2](https://huggingface.co/MiniMaxAI/MiniMax-M2) model. SGLang is a high-performance inference engine with excellent serving throughput, efficient and intelligent memory management, powerful batch request processing capabilities, and deeply optimized underlying performance. We recommend reviewing SGLang's official documentation to check hardware compatibility before deployment.
4
+
5
+ ## Applicable Models
6
+
7
+ This document applies to the following models. You only need to change the model name during deployment.
8
+
9
+ - [MiniMaxAI/MiniMax-M2](https://huggingface.co/MiniMaxAI/MiniMax-M2)
10
+
11
+ The deployment process is illustrated below using MiniMax-M2 as an example.
12
+
13
+ ## System Requirements
14
+
15
+ - OS: Linux
16
+
17
+ - Python: 3.9 - 3.12
18
+
19
+ - GPU:
20
+
21
+ - compute capability 7.0 or higher
22
+
23
+ - Memory requirements: 220 GB for weights, 240 GB per 1M context tokens
24
+
25
+ The following are recommended configurations; actual requirements should be adjusted based on your use case:
26
+
27
+ - 4x 96GB GPUs: supports context lengths of up to 400K tokens.
28
+
29
+ - 8x 144GB GPUs: supports context lengths of up to 3M tokens.
30
+
31
+ ## Deployment with Python
32
+
33
+ It is recommended to use a virtual environment (such as **venv**, **conda**, or **uv**) to avoid dependency conflicts.
34
+
35
+ We recommend installing SGLang in a fresh Python environment. Since MiniMax-M2 support has not yet shipped in an SGLang release, you need to build it from source:
36
+
37
+ ```bash
38
+ git clone https://github.com/sgl-project/sglang.git
39
+ cd sglang
40
+ uv pip install ./python --torch-backend=auto
41
+ ```
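+
+ As a quick sanity check (a sketch), you can confirm the source build is importable from the current environment:
+
+ ```python
+ import sglang
+
+ # Verify the from-source installation is visible; the version string
+ # reflects the checked-out commit.
+ print(sglang.__version__)
+ ```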
42
+
43
+ Run the following command to start the SGLang server. SGLang will automatically download and cache the MiniMax-M2 model from Hugging Face.
44
+
45
+ 8-GPU deployment command:
46
+
47
+ ```bash
48
+ python -m sglang.launch_server \
49
+ --model-path MiniMaxAI/MiniMax-M2 \
50
+ --tp-size 8 \
51
+ --ep-size 8 \
52
+ --tool-call-parser minimax-m2 \
53
+ --reasoning-parser minimax \
54
+ --trust-remote-code \
55
+ --port 8000 \
56
+ --mem-fraction-static 0.7
57
+ ```
58
+
59
+ ## Testing Deployment
60
+
61
+ After startup, you can test the SGLang OpenAI-compatible API with the following command:
62
+
63
+ ```bash
64
+ curl http://localhost:8000/v1/chat/completions \
65
+ -H "Content-Type: application/json" \
66
+ -d '{
67
+ "model": "MiniMaxAI/MiniMax-M2",
68
+ "messages": [
69
+ {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant."}]},
70
+ {"role": "user", "content": [{"type": "text", "text": "Who won the world series in 2020?"}]}
71
+ ]
72
+ }'
73
+ ```
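+
+ The same request can be issued from Python; a minimal sketch using the `requests` library, assuming the server above is listening on port 8000:
+
+ ```python
+ import requests
+
+ # Mirror of the curl example above, sent from Python.
+ resp = requests.post(
+     "http://localhost:8000/v1/chat/completions",
+     json={
+         "model": "MiniMaxAI/MiniMax-M2",
+         "messages": [
+             {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant."}]},
+             {"role": "user", "content": [{"type": "text", "text": "Who won the world series in 2020?"}]},
+         ],
+     },
+     timeout=600,
+ )
+ print(resp.json()["choices"][0]["message"]["content"])
+ ```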
74
+
75
+ ## Common Issues
76
+
77
+ ### Hugging Face Network Issues
78
+
79
+ If you encounter network issues, you can point downloads at a Hugging Face mirror before pulling the model.
80
+
81
+ ```bash
82
+ export HF_ENDPOINT=https://hf-mirror.com
83
+ ```
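+
+ You can also pre-fetch the weights into the local cache so the server starts warm; a sketch using `huggingface_hub` (the mirror endpoint must be set before the library is imported):
+
+ ```python
+ import os
+
+ # Optional: route downloads through a mirror (set before the import below).
+ os.environ.setdefault("HF_ENDPOINT", "https://hf-mirror.com")
+
+ from huggingface_hub import snapshot_download
+
+ # Pre-download the weights into the Hugging Face cache used by SGLang.
+ snapshot_download("MiniMaxAI/MiniMax-M2")
+ ```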
84
+
85
+ ### MiniMax-M2 model is not currently supported
86
+
87
+ This SGLang version is outdated. Please upgrade to the latest version.
88
+
89
+ ## Getting Support
90
+
91
+ If you encounter any issues while deploying the MiniMax model:
92
+
93
+ - Contact our technical support team through official channels such as email at [api@minimaxi.com](mailto:api@minimaxi.com)
94
+
95
+ - Submit an issue on our [GitHub](https://github.com/MiniMax-AI) repository
96
+
97
+ We continuously optimize the deployment experience for our models. Feedback is welcome!
98
+
docs/sglang_deploy_guide_cn.md ADDED
@@ -0,0 +1,95 @@
1
+ # MiniMax M2 模型 SGLang 部署指南
2
+
3
+ 我们推荐使用 [SGLang](https://github.com/sgl-project/sglang) 来部署 [MiniMax-M2](https://huggingface.co/MiniMaxAI/MiniMax-M2) 模型。SGLang 是一个高性能的推理引擎,其具有卓越的服务吞吐、高效智能的内存管理机制、强大的批量请求处理能力、深度优化的底层性能等特性。我们建议在部署之前查看 SGLang 的官方文档以检查硬件兼容性。
4
+
5
+ ## 本文档适用模型
6
+
7
+ 本文档适用以下模型,只需在部署时修改模型名称即可。
8
+
9
+ - [MiniMaxAI/MiniMax-M2](https://huggingface.co/MiniMaxAI/MiniMax-M2)
10
+
11
+ 以下以 MiniMax-M2 为例说明部署流程。
12
+
13
+ ## 环境要求
14
+
15
+ - OS:Linux
16
+
17
+ - Python:3.9 - 3.12
18
+
19
+ - GPU:
20
+
21
+ - compute capability 7.0 or higher
22
+
23
+ - 显存需求:权重需要 220 GB,每 1M 上下文 token 需要 240 GB
24
+
25
+ 以下为推荐配置,实际需求请根据业务场景调整:
26
+
27
+ - 96G x4 GPU:支持 40 万 token 的总上下文。
28
+
29
+ - 144G x8 GPU:支持长达 300 万 token 的总上下文。
30
+
31
+ ## 使用 Python 部署
32
+
33
+ 建议使用虚拟环境(如 **venv**、**conda**、**uv**)以避免依赖冲突。
34
+
35
+ 建议在全新的 Python 环境中安装 SGLang。由于 MiniMax-M2 支持尚未随正式版本发布,需要从源码手动编译:
36
+ ```bash
37
+ git clone https://github.com/sgl-project/sglang.git
38
+ cd sglang
39
+ uv pip install ./python --torch-backend=auto
40
+ ```
41
+
42
+ 运行如下命令启动 SGLang 服务器,SGLang 会自动从 Huggingface 下载并缓存 MiniMax-M2 模型。
43
+
44
+ 8 卡部署命令:
45
+
46
+ ```bash
47
+ python -m sglang.launch_server \
48
+ --model-path MiniMaxAI/MiniMax-M2 \
49
+ --tp-size 8 \
50
+ --ep-size 8 \
51
+ --tool-call-parser minimax-m2 \
52
+ --reasoning-parser minimax \
53
+ --trust-remote-code \
54
+ --port 8000 \
55
+ --mem-fraction-static 0.7
56
+ ```
57
+
58
+ ## 测试部署
59
+
60
+ 启动后,可以通过如下命令测试 SGLang OpenAI 兼容接口:
61
+
62
+ ```bash
63
+ curl http://localhost:8000/v1/chat/completions \
64
+ -H "Content-Type: application/json" \
65
+ -d '{
66
+ "model": "MiniMaxAI/MiniMax-M2",
67
+ "messages": [
68
+ {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant."}]},
69
+ {"role": "user", "content": [{"type": "text", "text": "Who won the world series in 2020?"}]}
70
+ ]
71
+ }'
72
+ ```
73
+
74
+ ## 常见问题
75
+
76
+ ### Huggingface 网络问题
77
+
78
+ 如果遇到网络问题,可以设置代理后再进行拉取。
79
+
80
+ ```bash
81
+ export HF_ENDPOINT=https://hf-mirror.com
82
+ ```
83
+
84
+ ### MiniMax-M2 model is not currently supported
85
+
86
+ 该 SGLang 版本过旧,请升级到最新版本。
87
+
88
+ ## 获取支持
89
+
90
+ 如果在部署 MiniMax 模型过程中遇到任何问题:
91
+
92
+ - 通过邮箱 [api@minimaxi.com](mailto:api@minimaxi.com) 等官方渠道联系我们的技术支持团队
93
+
94
+ - 在我们的 [GitHub](https://github.com/MiniMax-AI) 仓库提交 Issue
95
+ 我们会持续优化模型的部署体验,欢迎反馈!
docs/{function_call_guide.md → tool_calling_guide.md} RENAMED
@@ -1,12 +1,12 @@
1
- # MiniMax-M2 Function Call Guide
2
 
3
  ## Introduction
4
 
5
- The MiniMax-M2 model supports function calling capabilities, enabling the model to identify when external functions need to be called and output function call parameters in a structured format. This document provides detailed instructions on how to use the function calling features of MiniMax-M2.
6
 
7
  ## Basic Example
8
 
9
- The following Python script implements a weather query function call example based on the OpenAI SDK:
10
 
11
  ```python
12
  from openai import OpenAI
@@ -59,7 +59,7 @@ Result: Getting the weather for San Francisco, CA in celsius...
59
 
60
  ## Manually Parsing Model Output
61
 
62
- If you cannot use the built-in parser of inference engines that support MiniMax-M2, or need to use other inference frameworks (such as transformers, TGI, etc.), you can manually parse the model's raw output using the following method. This approach requires you to parse the XML tag format of the model output yourself.
63
 
64
  ### Example Using Transformers
65
 
@@ -125,14 +125,14 @@ raw_output = response.json()["choices"][0]["text"]
125
  print("Raw output:", raw_output)
126
 
127
  # Use the parsing function below to process the output
128
- function_calls = parse_tool_calls(raw_output, tools)
129
  ```
130
 
131
- ## 🛠️ Function Call Definition
132
 
133
- ### Function Structure
134
 
135
- Function calls need to define the `tools` field in the request body. Each function consists of the following parts:
136
 
137
  ```json
138
  {
@@ -171,7 +171,7 @@ Function calls need to define the `tools` field in the request body. Each functi
171
 
172
  ### Internal Processing Format
173
 
174
- When processing within the MiniMax-M2 model, function definitions are converted to a special format and concatenated to the input text. Here is a complete example:
175
 
176
  ```
177
  ]~!b[]~b]system
@@ -209,7 +209,7 @@ When were the latest announcements from OpenAI and Gemini?[e~[
209
  - `]~b]tool`: Tool result message start marker
210
  - `<tools>...</tools>`: Tool definition area, each tool is wrapped with `<tool>` tag, content is JSON Schema
211
  - `<minimax:tool_call>...</minimax:tool_call>`: Tool call area
212
- - `<think>`: Thinking process marker during generation (optional)
213
 
214
  ### Model Output Format
215
 
@@ -228,11 +228,11 @@ MiniMax-M2 uses structured XML tag format:
228
  </minimax:tool_call>
229
  ```
230
 
231
- Each function call uses the `<invoke name="function_name">` tag, and parameters use the `<parameter name="parameter_name">` tag wrapper.
232
 
233
- ## Manually Parsing Function Call Results
234
 
235
- ### Parsing Function Calls
236
 
237
  MiniMax-M2 uses structured XML tags, which require a different parsing approach. The core function is as follows:
238
 
@@ -427,9 +427,9 @@ for call in tool_calls:
427
  # Arguments: {'location': 'San Francisco', 'unit': 'celsius'}
428
  ```
429
 
430
- ### Executing Function Calls
431
 
432
- After parsing is complete, you can execute the corresponding function and construct the return result:
433
 
434
  ```python
435
  def execute_function_call(function_name: str, arguments: dict):
@@ -471,12 +471,13 @@ def execute_function_call(function_name: str, arguments: dict):
471
  return None
472
  ```
473
 
474
- ### Returning Function Execution Results to the Model
475
 
476
- After successfully parsing function calls, you should add the function execution results to the conversation history so that the model can access and utilize this information in subsequent interactions. Refer to chat_template.jinja for concatenation format.
477
 
478
  ## References
479
 
480
  - [MiniMax-M2 Model Repository](https://github.com/MiniMax-AI/MiniMax-M2)
481
  - [vLLM Project Homepage](https://github.com/vllm-project/vllm)
482
- - [OpenAI Python SDK](https://github.com/openai/openai-python)
 
 
1
+ # MiniMax-M2 Tool Calling Guide
2
 
3
  ## Introduction
4
 
5
+ The MiniMax-M2 model supports tool calling capabilities, enabling the model to identify when external tools need to be called and output tool call parameters in a structured format. This document provides detailed instructions on how to use the tool calling features of MiniMax-M2.
6
 
7
  ## Basic Example
8
 
9
+ The following Python script implements a weather query tool call example based on the OpenAI SDK:
10
 
11
  ```python
12
  from openai import OpenAI
 
59
 
60
  ## Manually Parsing Model Output
61
 
62
+ **We strongly recommend using vLLM or SGLang for parsing tool calls.** If you cannot use the built-in parsers of inference engines that support MiniMax-M2 (e.g., vLLM and SGLang), or need to use another inference framework (such as transformers or TGI), you can manually parse the model's raw output using the method below. This approach requires you to parse the XML tag format of the model output yourself.
63
 
64
  ### Example Using Transformers
65
 
 
125
  print("Raw output:", raw_output)
126
 
127
  # Use the parsing function below to process the output
128
+ tool_calls = parse_tool_calls(raw_output, tools)
129
  ```
130
 
131
+ ## 🛠️ Tool Call Definition
132
 
133
+ ### Tool Structure
134
 
135
+ Tool calls need to define the `tools` field in the request body. Each tool consists of the following parts:
136
 
137
  ```json
138
  {
 
171
 
172
  ### Internal Processing Format
173
 
174
+ When processing within the MiniMax-M2 model, tool definitions are converted to a special format and concatenated to the input text. Here is a complete example:
175
 
176
  ```
177
  ]~!b[]~b]system
 
209
  - `]~b]tool`: Tool result message start marker
210
  - `<tools>...</tools>`: Tool definition area, each tool is wrapped with `<tool>` tag, content is JSON Schema
211
  - `<minimax:tool_call>...</minimax:tool_call>`: Tool call area
212
+ - `<think>...</think>`: Thinking process marker during generation
213
 
214
  ### Model Output Format
215
 
 
228
  </minimax:tool_call>
229
  ```
230
 
231
+ Each tool call uses the `<invoke name="function_name">` tag, and parameters use the `<parameter name="parameter_name">` tag wrapper.
232
 
233
+ ## Manually Parsing Tool Call Results
234
 
235
+ ### Parsing Tool Calls
236
 
237
  MiniMax-M2 uses structured XML tags, which require a different parsing approach. The core function is as follows:
238
 
 
427
  # Arguments: {'location': 'San Francisco', 'unit': 'celsius'}
428
  ```
429
 
430
+ ### Executing Tool Calls
431
 
432
+ After parsing is complete, you can execute the corresponding tool and construct the return result:
433
 
434
  ```python
435
  def execute_function_call(function_name: str, arguments: dict):
 
471
  return None
472
  ```
473
 
474
+ ### Returning Tool Execution Results to the Model
475
 
476
+ After successfully parsing tool calls, you should add the tool execution results to the conversation history so that the model can access and utilize this information in subsequent interactions. Refer to [chat_template.jinja](https://huggingface.co/MiniMaxAI/MiniMax-M2/blob/main/chat_template.jinja) for the concatenation format.
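+
+ A hedged sketch of the full round trip, reusing `messages`, `raw_output`, `tools`, `parse_tool_calls`, and `execute_function_call` from the examples above:
+
+ ```python
+ # Append the assistant turn that contained the tool calls.
+ messages.append({"role": "assistant", "content": raw_output})
+
+ # Execute each parsed call and append its tool-result message.
+ for call in parse_tool_calls(raw_output, tools):
+     result = execute_function_call(call["name"], call["arguments"])
+     if result is not None:
+         messages.append(result)
+
+ # Re-apply the chat template with the updated history and send a new
+ # request, exactly as in the Transformers example above.
+ text = tokenizer.apply_chat_template(
+     messages, tokenize=False, add_generation_prompt=True, tools=tools
+ )
+ ```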
477
 
478
  ## References
479
 
480
  - [MiniMax-M2 Model Repository](https://github.com/MiniMax-AI/MiniMax-M2)
481
  - [vLLM Project Homepage](https://github.com/vllm-project/vllm)
482
+ - [SGLang Project Homepage](https://github.com/sgl-project/sglang)
483
+ - [OpenAI Python SDK](https://github.com/openai/openai-python)
docs/{function_call_guide_cn.md → tool_calling_guide_cn.md} RENAMED
@@ -1,12 +1,12 @@
1
- # MiniMax-M2 函数调用(Function Call)功能指南
2
 
3
  ## 简介
4
 
5
- MiniMax-M2 模型支持函数调用功能,使模型能够识别何时需要调用外部函数,并以结构化格式输出函数调用参数。本文档详细介绍了如何使用 MiniMax-M2 的函数调用功能。
6
 
7
  ## 基础示例
8
 
9
- 以下 Python 脚本基于 OpenAI SDK 实现了一个天气查询函数的调用示例:
10
 
11
  ```python
12
  from openai import OpenAI
@@ -59,11 +59,11 @@ Result: Getting the weather for San Francisco, CA in celsius...
59
 
60
  ## 手动解析模型输出
61
 
62
- 如果您无法使用已支持 MiniMax-M2 的推理引擎的内置解析器,或者需要使用其他推理框架(如 transformers、TGI 等),可以使用以下方法手动解析模型的原始输出。这种方法需要您自己解析模型输出的 XML 标签格式。
63
 
64
  ### 使用 Transformers 的示例
65
 
66
- 以下是使用 transformers 库的完整示例:
67
 
68
  ```python
69
  from transformers import AutoTokenizer
@@ -87,7 +87,7 @@ def get_default_tools():
87
  }
88
  ]
89
 
90
- # 加载模型和分词器
91
  tokenizer = AutoTokenizer.from_pretrained(model_id)
92
  prompt = "What's the weather like in Shanghai today?"
93
  messages = [
@@ -95,10 +95,10 @@ messages = [
95
  {"role": "user", "content": prompt},
96
  ]
97
 
98
- # 启用函数调用工具
99
  tools = get_default_tools()
100
 
101
- # 应用聊天模板,并加入工具定义
102
  text = tokenizer.apply_chat_template(
103
  messages,
104
  tokenize=False,
@@ -106,7 +106,7 @@ text = tokenizer.apply_chat_template(
106
  tools=tools
107
  )
108
 
109
- # 发送请求(这里使用任何推理服务)
110
  import requests
111
  payload = {
112
  "model": "MiniMaxAI/MiniMax-M2",
@@ -120,35 +120,35 @@ response = requests.post(
120
  stream=False,
121
  )
122
 
123
- # 模型输出需要手动解析
124
  raw_output = response.json()["choices"][0]["text"]
125
- print("原始输出:", raw_output)
126
 
127
- # 使用下面的解析函数处理输出
128
- function_calls = parse_tool_calls(raw_output, tools)
129
  ```
130
 
131
- ## 🛠️ 函数调用的定义
132
 
133
- ### 函数结构体
134
 
135
- 函数调用需要在请求体中定义 `tools` 字段,每个函数由以下部分组成:
136
 
137
  ```json
138
  {
139
  "tools": [
140
  {
141
  "name": "search_web",
142
- "description": "搜索函数。",
143
  "parameters": {
144
  "properties": {
145
  "query_list": {
146
- "description": "进行搜索的关键词,列表元素个数为1",
147
  "items": { "type": "string" },
148
  "type": "array"
149
  },
150
  "query_tag": {
151
- "description": "query的分类",
152
  "items": { "type": "string" },
153
  "type": "array"
154
  }
@@ -162,16 +162,16 @@ function_calls = parse_tool_calls(raw_output, tools)
162
  ```
163
 
164
  **字段说明:**
165
- - `name`: 函数名称
166
- - `description`: 函数功能描述
167
- - `parameters`: 函数参数定义
168
- - `properties`: 参数属性定义,key 是参数名,value 包含参数的详细描述
169
- - `required`: 必填参数列表
170
- - `type`: 参数类型(通常为 "object")
171
 
172
- ### 模型内部处理格式
173
 
174
- 在 MiniMax-M2 模型内部处理时,函数定义会被转换为特殊格式并拼接到输入文本中。以下是一个完整的示例:
175
 
176
  ```
177
  ]~!b[]~b]system
@@ -182,7 +182,7 @@ You may call one or more tools to assist with the user query.
182
  Here are the tools available in JSONSchema format:
183
 
184
  <tools>
185
- <tool>{"name": "search_web", "description": "搜索函数。", "parameters": {"type": "object", "properties": {"query_list": {"type": "array", "items": {"type": "string"}, "description": "进行搜索的关键词,列表元素个数为1"}, "query_tag": {"type": "array", "items": {"type": "string"}, "description": "query的分类"}}, "required": ["query_list", "query_tag"]}}</tool>
186
  </tools>
187
 
188
  When making tool calls, use XML format to invoke tools and pass parameters:
@@ -195,25 +195,25 @@ When making tool calls, use XML format to invoke tools and pass parameters:
195
  </invoke>
196
  [e~[
197
  ]~b]user
198
- OpenAI Gemini 的最近一次发布会都是什么时候?[e~[
199
  ]~b]ai
200
  <think>
201
  ```
202
 
203
  **格式说明:**
204
 
205
- - `]~!b[]~b]system`: System 消息开始标记
206
- - `[e~[`: 消息结束标记
207
- - `]~b]user`: User 消息开始标记
208
- - `]~b]ai`: Assistant 消息开始标记
209
- - `]~b]tool`: Tool 结果消息开始标记
210
- - `<tools>...</tools>`: 工具定义区域,每个工具用 `<tool>` 标签包裹,内容为 JSON Schema
211
- - `<minimax:tool_call>...</minimax:tool_call>`: 工具调用区域
212
- - `<think>`: 生成时的思考过程标记(可选)
213
 
214
  ### 模型输出格式
215
 
216
- MiniMax-M2使用结构化的 XML 标签格式:
217
 
218
  ```xml
219
  <minimax:tool_call>
@@ -228,13 +228,13 @@ MiniMax-M2使用结构化的 XML 标签格式:
228
  </minimax:tool_call>
229
  ```
230
 
231
- 每个函数调用使用 `<invoke name="函数名">` 标签,参数使用 `<parameter name="参数名">` 标签包裹。
232
 
233
- ## 手动解析函数调用结果
234
 
235
- ### 解析函数调用
236
 
237
- MiniMax-M2使用结构化的 XML 标签,需要不同的解析方式。核心函数如下:
238
 
239
  ```python
240
  import re
@@ -243,7 +243,7 @@ from typing import Any, Optional, List, Dict
243
 
244
 
245
  def extract_name(name_str: str) -> str:
246
- """从引号包裹的字符串中提取名称"""
247
  name_str = name_str.strip()
248
  if name_str.startswith('"') and name_str.endswith('"'):
249
  return name_str[1:-1]
@@ -253,7 +253,7 @@ def extract_name(name_str: str) -> str:
253
 
254
 
255
  def convert_param_value(value: str, param_type: str) -> Any:
256
- """根据参数类型转换参数值"""
257
  if value.lower() == "null":
258
  return None
259
 
@@ -280,7 +280,7 @@ def convert_param_value(value: str, param_type: str) -> Any:
280
  except json.JSONDecodeError:
281
  return value
282
  else:
283
- # 尝试 JSON 解析,失败则返回字符串
284
  try:
285
  return json.loads(value)
286
  except json.JSONDecodeError:
@@ -289,16 +289,16 @@ def convert_param_value(value: str, param_type: str) -> Any:
289
 
290
  def parse_tool_calls(model_output: str, tools: Optional[List[Dict]] = None) -> List[Dict]:
291
  """
292
- 从模型输出中提取所有工具调用
293
 
294
  Args:
295
- model_output: 模型的完整输出文本
296
- tools: 工具定义列表,用于获取参数类型信息,格式可以是:
297
  - [{"name": "...", "parameters": {...}}]
298
  - [{"type": "function", "function": {"name": "...", "parameters": {...}}}]
299
 
300
  Returns:
301
- 解析后的工具调用列表,每个元素包含 name arguments 字段
302
 
303
  Example:
304
  >>> tools = [{
@@ -321,30 +321,30 @@ def parse_tool_calls(model_output: str, tools: Optional[List[Dict]] = None) -> L
321
  >>> print(result)
322
  [{'name': 'get_weather', 'arguments': {'location': 'San Francisco', 'unit': 'celsius'}}]
323
  """
324
- # 快速检查是否包含工具调用标记
325
  if "<minimax:tool_call>" not in model_output:
326
  return []
327
 
328
  tool_calls = []
329
 
330
  try:
331
- # 匹配所有 <minimax:tool_call>
332
  tool_call_regex = re.compile(r"<minimax:tool_call>(.*?)</minimax:tool_call>", re.DOTALL)
333
  invoke_regex = re.compile(r"<invoke name=(.*?)</invoke>", re.DOTALL)
334
  parameter_regex = re.compile(r"<parameter name=(.*?)</parameter>", re.DOTALL)
335
 
336
- # 遍历所有 tool_call
337
  for tool_call_match in tool_call_regex.findall(model_output):
338
- # 遍历该块中的所有 invoke
339
  for invoke_match in invoke_regex.findall(tool_call_match):
340
- # 提取函数名
341
  name_match = re.search(r'^([^>]+)', invoke_match)
342
  if not name_match:
343
  continue
344
 
345
  function_name = extract_name(name_match.group(1))
346
 
347
- # 获取参数配置
348
  param_config = {}
349
  if tools:
350
  for tool in tools:
@@ -355,7 +355,7 @@ def parse_tool_calls(model_output: str, tools: Optional[List[Dict]] = None) -> L
355
  param_config = params["properties"]
356
  break
357
 
358
- # 提取参数
359
  param_dict = {}
360
  for match in parameter_regex.findall(invoke_match):
361
  param_match = re.search(r'^([^>]+)>(.*)', match, re.DOTALL)
@@ -363,13 +363,13 @@ def parse_tool_calls(model_output: str, tools: Optional[List[Dict]] = None) -> L
363
  param_name = extract_name(param_match.group(1))
364
  param_value = param_match.group(2).strip()
365
 
366
- # 去除首尾的换行符
367
  if param_value.startswith('\n'):
368
  param_value = param_value[1:]
369
  if param_value.endswith('\n'):
370
  param_value = param_value[:-1]
371
 
372
- # 获取参数类型并转换
373
  param_type = "string"
374
  if param_name in param_config:
375
  if isinstance(param_config[param_name], dict) and "type" in param_config[param_name]:
@@ -383,7 +383,7 @@ def parse_tool_calls(model_output: str, tools: Optional[List[Dict]] = None) -> L
383
  })
384
 
385
  except Exception as e:
386
- print(f"解析工具调用失败: {e}")
387
  return []
388
 
389
  return tool_calls
@@ -392,7 +392,7 @@ def parse_tool_calls(model_output: str, tools: Optional[List[Dict]] = None) -> L
392
  **使用示例:**
393
 
394
  ```python
395
- # 定义工具
396
  tools = [
397
  {
398
  "name": "get_weather",
@@ -407,8 +407,8 @@ tools = [
407
  }
408
  ]
409
 
410
- # 模型输出
411
- model_output = """我来帮你查询天气。
412
  <minimax:tool_call>
413
  <invoke name="get_weather">
414
  <parameter name="location">San Francisco</parameter>
@@ -416,28 +416,28 @@ model_output = """我来帮你查询天气。
416
  </invoke>
417
  </minimax:tool_call>"""
418
 
419
- # 解析工具调用
420
  tool_calls = parse_tool_calls(model_output, tools)
421
 
422
- # 输出结果
423
  for call in tool_calls:
424
- print(f"调用函数: {call['name']}")
425
- print(f"参数: {call['arguments']}")
426
- # 输出: 调用函数: get_weather
427
- # 参数: {'location': 'San Francisco', 'unit': 'celsius'}
428
  ```
429
 
430
- ### 执行函数调用
431
 
432
- 解析完成后,您可以执行对应的函数并构建返回结果:
433
 
434
  ```python
435
  def execute_function_call(function_name: str, arguments: dict):
436
- """执行函数调用并返回结果"""
437
  if function_name == "get_weather":
438
- location = arguments.get("location", "未知位置")
439
  unit = arguments.get("unit", "celsius")
440
- # 构建函数执行结果
441
  return {
442
  "role": "tool",
443
  "content": [
@@ -448,7 +448,7 @@ def execute_function_call(function_name: str, arguments: dict):
448
  "location": location,
449
  "temperature": "25",
450
  "unit": unit,
451
- "weather": "晴朗"
452
  }, ensure_ascii=False)
453
  }
454
  ]
@@ -456,14 +456,14 @@ def execute_function_call(function_name: str, arguments: dict):
456
  elif function_name == "search_web":
457
  query_list = arguments.get("query_list", [])
458
  query_tag = arguments.get("query_tag", [])
459
- # 模拟搜索结果
460
  return {
461
  "role": "tool",
462
  "content": [
463
  {
464
  "name": function_name,
465
  "type": "text",
466
- "text": f"搜索关键词: {query_list}, 分类: {query_tag}\n搜索结果: 相关信息已找到"
467
  }
468
  ]
469
  }
@@ -471,12 +471,13 @@ def execute_function_call(function_name: str, arguments: dict):
471
  return None
472
  ```
473
 
474
- ### 将函数执行结果返回给模型
475
 
476
- 成功解析函数调用后,您应将函数执行结果添加到对话历史中,以便模型在后续交互中能够访问和利用这些信息,拼接格式参考chat_template.jinja
477
 
478
- ## 参考资料
479
 
480
  - [MiniMax-M2 模型仓库](https://github.com/MiniMax-AI/MiniMax-M2)
481
  - [vLLM 项目主页](https://github.com/vllm-project/vllm)
 
482
  - [OpenAI Python SDK](https://github.com/openai/openai-python)
 
1
+ # MiniMax-M2 工具调用指南
2
 
3
  ## 简介
4
 
5
+ MiniMax-M2 模型支持工具调用功能,使模型能够识别何时需要调用外部工具,并以结构化格式输出工具调用参数。本文档提供了有关如何使用 MiniMax-M2 工具调用功能的详细说明。
6
 
7
  ## 基础示例
8
 
9
+ 以下 Python 脚本基于 OpenAI SDK 实现了一个天气查询工具调用示例:
10
 
11
  ```python
12
  from openai import OpenAI
 
59
 
60
  ## 手动解析模型输出
61
 
62
+ **我们强烈建议使用 vLLM 或 SGLang 来解析工具调用。** 如果您无法使用支持 MiniMax-M2 的推理引擎(如 vLLM 和 SGLang)的内置解析器,或需要使用其他推理框架(如 transformers、TGI 等),您可以使用以下方法手动解析模型的原始输出。这种方法需要您自己解析模型输出的 XML 标签格式。
63
 
64
  ### 使用 Transformers 的示例
65
 
66
+ 这是一个使用 transformers 库的完整示例:
67
 
68
  ```python
69
  from transformers import AutoTokenizer
 
87
  }
88
  ]
89
 
90
+ # Load model and tokenizer
91
  tokenizer = AutoTokenizer.from_pretrained(model_id)
92
  prompt = "What's the weather like in Shanghai today?"
93
  messages = [
 
95
  {"role": "user", "content": prompt},
96
  ]
97
 
98
+ # Enable function calling tools
99
  tools = get_default_tools()
100
 
101
+ # Apply chat template and include tool definitions
102
  text = tokenizer.apply_chat_template(
103
  messages,
104
  tokenize=False,
 
106
  tools=tools
107
  )
108
 
109
+ # Send request (using any inference service)
110
  import requests
111
  payload = {
112
  "model": "MiniMaxAI/MiniMax-M2",
 
120
  stream=False,
121
  )
122
 
123
+ # Model output needs manual parsing
124
  raw_output = response.json()["choices"][0]["text"]
125
+ print("Raw output:", raw_output)
126
 
127
+ # Use the parsing function below to process the output
128
+ tool_calls = parse_tool_calls(raw_output, tools)
129
  ```
130
 
131
+ ## 🛠️ 工具调用定义
132
 
133
+ ### 工具结构
134
 
135
+ 工具调用需要在请求体中定义 `tools` 字段。每个工具由以下部分组成:
136
 
137
  ```json
138
  {
139
  "tools": [
140
  {
141
  "name": "search_web",
142
+ "description": "Search function.",
143
  "parameters": {
144
  "properties": {
145
  "query_list": {
146
+ "description": "Keywords for search, list should contain 1 element.",
147
  "items": { "type": "string" },
148
  "type": "array"
149
  },
150
  "query_tag": {
151
+ "description": "Category of query",
152
  "items": { "type": "string" },
153
  "type": "array"
154
  }
 
162
  ```
163
 
164
  **字段说明:**
165
+ - `name`:函数名称
166
+ - `description`:函数描述
167
+ - `parameters`:函数参数定义
168
+ - `properties`:参数属性定义,其中键是参数名称,值包含详细的参数描述
169
+ - `required`:必需参数列表
170
+ - `type`:参数类型(通常为 "object")
171
 
172
+ ### 内部处理格式
173
 
174
+ 在 MiniMax-M2 模型内部处理时,工具定义会被转换为特殊格式并连接到输入文本中。以下是一个完整示例:
175
 
176
  ```
177
  ]~!b[]~b]system
 
182
  Here are the tools available in JSONSchema format:
183
 
184
  <tools>
185
+ <tool>{"name": "search_web", "description": "Search function.", "parameters": {"type": "object", "properties": {"query_list": {"type": "array", "items": {"type": "string"}, "description": "Keywords for search, list should contain 1 element."}, "query_tag": {"type": "array", "items": {"type": "string"}, "description": "Category of query"}}, "required": ["query_list", "query_tag"]}}</tool>
186
  </tools>
187
 
188
  When making tool calls, use XML format to invoke tools and pass parameters:
 
195
  </invoke>
196
  [e~[
197
  ]~b]user
198
+ When were the latest announcements from OpenAI and Gemini?[e~[
199
  ]~b]ai
200
  <think>
201
  ```
202
 
203
  **格式说明:**
204
 
205
+ - `]~!b[]~b]system`:系统消息开始标记
206
+ - `[e~[`:消息结束标记
207
+ - `]~b]user`:用户消息开始标记
208
+ - `]~b]ai`:助手消息开始标记
209
+ - `]~b]tool`:工具结果消息开始标记
210
+ - `<tools>...</tools>`:工具定义区域,每个工具都用 `<tool>` 标签包装,内容为 JSON Schema
211
+ - `<minimax:tool_call>...</minimax:tool_call>`:工具调用区域
212
+ - `<think>...</think>`:生成过程中的思考过程标记
213
 
214
  ### 模型输出格式
215
 
216
+ MiniMax-M2 使用结构化的 XML 标签格式:
217
 
218
  ```xml
219
  <minimax:tool_call>
 
228
  </minimax:tool_call>
229
  ```
230
 
231
+ 每个工具调用使用 `<invoke name="function_name">` 标签,参数使用 `<parameter name="parameter_name">` 标签包装。
232
 
233
+ ## 手动解析工具调用结果
234
 
235
+ ### 解析工具调用
236
 
237
+ MiniMax-M2 使用结构化的 XML 标签,这需要一种不同的解析方法。核心函数如下:
238
 
239
  ```python
240
  import re
 
243
 
244
 
245
  def extract_name(name_str: str) -> str:
246
+ """Extract name from quoted string"""
247
  name_str = name_str.strip()
248
  if name_str.startswith('"') and name_str.endswith('"'):
249
  return name_str[1:-1]
 
253
 
254
 
255
  def convert_param_value(value: str, param_type: str) -> Any:
256
+ """Convert parameter value based on parameter type"""
257
  if value.lower() == "null":
258
  return None
259
 
 
280
  except json.JSONDecodeError:
281
  return value
282
  else:
283
+ # Try JSON parsing, return string if failed
284
  try:
285
  return json.loads(value)
286
  except json.JSONDecodeError:
 
289
 
290
  def parse_tool_calls(model_output: str, tools: Optional[List[Dict]] = None) -> List[Dict]:
291
  """
292
+ Extract all tool calls from model output
293
 
294
  Args:
295
+ model_output: Complete output text from the model
296
+ tools: Tool definition list for getting parameter type information, format can be:
297
  - [{"name": "...", "parameters": {...}}]
298
  - [{"type": "function", "function": {"name": "...", "parameters": {...}}}]
299
 
300
  Returns:
301
+ Parsed tool call list, each element contains name and arguments fields
302
 
303
  Example:
304
  >>> tools = [{
 
321
  >>> print(result)
322
  [{'name': 'get_weather', 'arguments': {'location': 'San Francisco', 'unit': 'celsius'}}]
323
  """
324
+ # Quick check if tool call marker is present
325
  if "<minimax:tool_call>" not in model_output:
326
  return []
327
 
328
  tool_calls = []
329
 
330
  try:
331
+ # Match all <minimax:tool_call> blocks
332
  tool_call_regex = re.compile(r"<minimax:tool_call>(.*?)</minimax:tool_call>", re.DOTALL)
333
  invoke_regex = re.compile(r"<invoke name=(.*?)</invoke>", re.DOTALL)
334
  parameter_regex = re.compile(r"<parameter name=(.*?)</parameter>", re.DOTALL)
335
 
336
+ # Iterate through all tool_call blocks
337
  for tool_call_match in tool_call_regex.findall(model_output):
338
+ # Iterate through all invokes in this block
339
  for invoke_match in invoke_regex.findall(tool_call_match):
340
+ # Extract function name
341
  name_match = re.search(r'^([^>]+)', invoke_match)
342
  if not name_match:
343
  continue
344
 
345
  function_name = extract_name(name_match.group(1))
346
 
347
+ # Get parameter configuration
348
  param_config = {}
349
  if tools:
350
  for tool in tools:
 
355
  param_config = params["properties"]
356
  break
357
 
358
+ # Extract parameters
359
  param_dict = {}
360
  for match in parameter_regex.findall(invoke_match):
361
  param_match = re.search(r'^([^>]+)>(.*)', match, re.DOTALL)
 
363
  param_name = extract_name(param_match.group(1))
364
  param_value = param_match.group(2).strip()
365
 
366
+ # Remove leading and trailing newlines
367
  if param_value.startswith('\n'):
368
  param_value = param_value[1:]
369
  if param_value.endswith('\n'):
370
  param_value = param_value[:-1]
371
 
372
+ # Get parameter type and convert
373
  param_type = "string"
374
  if param_name in param_config:
375
  if isinstance(param_config[param_name], dict) and "type" in param_config[param_name]:
 
383
  })
384
 
385
  except Exception as e:
386
+ print(f"Failed to parse tool calls: {e}")
387
  return []
388
 
389
  return tool_calls
 
392
  **使用示例:**
393
 
394
  ```python
395
+ # Define tools
396
  tools = [
397
  {
398
  "name": "get_weather",
 
407
  }
408
  ]
409
 
410
+ # Model output
411
+ model_output = """Let me help you query the weather.
412
  <minimax:tool_call>
413
  <invoke name="get_weather">
414
  <parameter name="location">San Francisco</parameter>
 
416
  </invoke>
417
  </minimax:tool_call>"""
418
 
419
+ # Parse tool calls
420
  tool_calls = parse_tool_calls(model_output, tools)
421
 
422
+ # Output results
423
  for call in tool_calls:
424
+ print(f"Function called: {call['name']}")
425
+ print(f"Arguments: {call['arguments']}")
426
+ # Output: Function called: get_weather
427
+ # Arguments: {'location': 'San Francisco', 'unit': 'celsius'}
428
  ```
429
 
430
+ ### 执行工具调用
431
 
432
+ 完成解析后,您可以执行相应的工具并构造返回结果:
433
 
434
  ```python
435
  def execute_function_call(function_name: str, arguments: dict):
436
+ """Execute function call and return result"""
437
  if function_name == "get_weather":
438
+ location = arguments.get("location", "Unknown location")
439
  unit = arguments.get("unit", "celsius")
440
+ # Build function execution result
441
  return {
442
  "role": "tool",
443
  "content": [
 
448
  "location": location,
449
  "temperature": "25",
450
  "unit": unit,
451
+ "weather": "Sunny"
452
  }, ensure_ascii=False)
453
  }
454
  ]
 
456
  elif function_name == "search_web":
457
  query_list = arguments.get("query_list", [])
458
  query_tag = arguments.get("query_tag", [])
459
+ # Simulate search results
460
  return {
461
  "role": "tool",
462
  "content": [
463
  {
464
  "name": function_name,
465
  "type": "text",
466
+ "text": f"Search keywords: {query_list}, Category: {query_tag}\nSearch results: Relevant information found"
467
  }
468
  ]
469
  }
 
471
  return None
472
  ```
473
 
474
+ ### 将工具执行结果返回给模型
475
 
476
+ 在成功解析工具调用后,您应该将工具执行结果添加到对话历史中,以便模型在后续交互中可以访问和利用这些信息。请参考 [chat_template.jinja](https://huggingface.co/MiniMaxAI/MiniMax-M2/blob/main/chat_template.jinja) 了解连接格式。
477
 
478
+ ## 参考文献
479
 
480
  - [MiniMax-M2 模型仓库](https://github.com/MiniMax-AI/MiniMax-M2)
481
  - [vLLM 项目主页](https://github.com/vllm-project/vllm)
482
+ - [SGLang 项目主页](https://github.com/sgl-project/sglang)
483
  - [OpenAI Python SDK](https://github.com/openai/openai-python)
docs/vllm_deploy_guide.md CHANGED
@@ -2,6 +2,14 @@
2
 
3
  We recommend using [vLLM](https://docs.vllm.ai/en/stable/) to deploy the [MiniMax-M2](https://huggingface.co/MiniMaxAI/MiniMax-M2) model. vLLM is a high-performance inference engine with excellent serving throughput, efficient and intelligent memory management, powerful batch request processing capabilities, and deeply optimized underlying performance. We recommend reviewing vLLM's official documentation to check hardware compatibility before deployment.
4
 
5
  ## System Requirements
6
 
7
  - OS: Linux
@@ -12,38 +20,37 @@ We recommend using [vLLM](https://docs.vllm.ai/en/stable/) to deploy the [MiniMa
12
 
13
  - compute capability 7.0 or higher
14
 
15
- - Memory requirements: 220 GB for weights, 60 GB per 1M context tokens
16
 
17
  The following are recommended configurations; actual requirements should be adjusted based on your use case:
18
 
19
- - 4x 96GB GPUs: Supports context input of up to 400K tokens.
20
 
21
- - 8x 144GB GPUs: Supports context input of up to 3M tokens.
22
 
23
  ## Deployment with Python
24
 
25
- It is recommended to use a virtual environment (such as venv, conda, or uv) to avoid dependency conflicts. We recommend installing vLLM in a fresh Python environment:
26
 
27
  ```bash
28
- # Not yet released, please install nightly build
29
- uv pip install -U vllm \
30
- --torch-backend=auto \
31
- --extra-index-url https://wheels.vllm.ai/nightly
32
- # If released, install using uv
33
- uv pip install "vllm" --torch-backend=auto
34
  ```
35
 
36
  Run the following command to start the vLLM server. vLLM will automatically download and cache the MiniMax-M2 model from Hugging Face.
37
 
38
- 4-GPU deployment command:
39
 
40
  ```bash
41
- SAFETENSORS_FAST_GPU=1 VLLM_USE_V1=0 vllm serve \
42
- --model MiniMaxAI/MiniMax-M2 \
43
- --trust-remote-code \
44
- --enable-expert-parallel --tensor-parallel-size 4 \
45
  --enable-auto-tool-choice --tool-call-parser minimax_m2 \
46
- --reasoning-parser minimax_m2
 
47
  ```
48
 
49
  ## Testing Deployment
@@ -80,7 +87,7 @@ This vLLM version is outdated. Please upgrade to the latest version.
80
 
81
  If you encounter any issues while deploying the MiniMax model:
82
 
83
- - Contact our technical support team through official channels such as email at api@minimaxi.com
84
 
85
  - Submit an issue on our [GitHub](https://github.com/MiniMax-AI) repository
86
 
 
2
 
3
  We recommend using [vLLM](https://docs.vllm.ai/en/stable/) to deploy the [MiniMax-M2](https://huggingface.co/MiniMaxAI/MiniMax-M2) model. vLLM is a high-performance inference engine with excellent serving throughput, efficient and intelligent memory management, powerful batch request processing capabilities, and deeply optimized underlying performance. We recommend reviewing vLLM's official documentation to check hardware compatibility before deployment.
4
 
5
+ ## Applicable Models
6
+
7
+ This document applies to the following models. You only need to change the model name during deployment.
8
+
9
+ - [MiniMaxAI/MiniMax-M2](https://huggingface.co/MiniMaxAI/MiniMax-M2)
10
+
11
+ The deployment process is illustrated below using MiniMax-M2 as an example.
12
+
13
  ## System Requirements
14
 
15
  - OS: Linux
 
20
 
21
  - compute capability 7.0 or higher
22
 
23
+ - Memory requirements: 220 GB for weights, 240 GB per 1M context tokens
24
 
25
  The following are recommended configurations; actual requirements should be adjusted based on your use case:
26
 
27
+ - 4x 96GB GPUs: supports context lengths of up to 400K tokens.
28
 
29
+ - 8x 144GB GPUs: supports context lengths of up to 3M tokens.
30
 
31
  ## Deployment with Python
32
 
33
+ It is recommended to use a virtual environment (such as **venv**, **conda**, or **uv**) to avoid dependency conflicts.
34
+
35
+ We recommend installing vLLM in a fresh Python environment. Since it has not been released yet, you need to manually build it from the source code:
36
 
37
  ```bash
38
+ git clone https://github.com/vllm-project/vllm.git
39
+ cd vllm
40
+ uv pip install . --torch-backend=auto
 
  ```
42
 
43
  Run the following command to start the vLLM server. vLLM will automatically download and cache the MiniMax-M2 model from Hugging Face.
44
 
45
+ 8-GPU deployment command:
46
 
47
  ```bash
48
+ SAFETENSORS_FAST_GPU=1 vllm serve \
49
+ MiniMaxAI/MiniMax-M2 --trust-remote-code \
50
+ --enable-expert-parallel --tensor-parallel-size 8 \
 
51
  --enable-auto-tool-choice --tool-call-parser minimax_m2 \
52
+ --reasoning-parser minimax_m2_append_think \
53
+ --compilation-config "{\"cudagraph_mode\": \"PIECEWISE\"}"
54
  ```
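+
+ Once the server reports it is ready, a quick way to confirm the model is loaded is to list the served models; a minimal sketch assuming the default port 8000:
+
+ ```python
+ import requests
+
+ # Query the OpenAI-compatible /v1/models endpoint of the running server.
+ models = requests.get("http://localhost:8000/v1/models", timeout=30).json()
+ print([m["id"] for m in models["data"]])  # expect "MiniMaxAI/MiniMax-M2"
+ ```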
55
 
56
  ## Testing Deployment
 
87
 
88
  If you encounter any issues while deploying the MiniMax model:
89
 
90
+ - Contact our technical support team through official channels such as email at [api@minimaxi.com](mailto:api@minimaxi.com)
91
 
92
  - Submit an issue on our [GitHub](https://github.com/MiniMax-AI) repository
93
 
docs/vllm_deploy_guide_cn.md CHANGED
@@ -2,6 +2,14 @@
2
 
3
  我们推荐使用 [vLLM](https://docs.vllm.ai/en/stable/) 来部署 [MiniMax-M2](https://huggingface.co/MiniMaxAI/MiniMax-M2) 模型。vLLM 是一个高性能的推理引擎,其具有卓越的服务吞吐、高效智能的内存管理机制、强大的批量请求处理能力、深度优化的底层性能等特性。我们建议在部署之前查看 vLLM 的官方文档以检查硬件兼容性。
4
 
5
  ## 环境要求
6
 
7
  - OS:Linux
@@ -12,37 +20,36 @@
12
 
13
  - compute capability 7.0 or higher
14
 
15
- - 显存需求:权重需要 220 GB,每 1M 上下文 token 需要 60 GB
16
 
17
  以下为推荐配置,实际需求请根据业务场景调整:
18
 
19
- - 96G x4 GPU:支持 40 万 token 的上下文输入。
20
 
21
- - 144G x8 GPU:支持长达 300 万 token 的上下文输入。
22
 
23
  ## 使用 Python 部署
24
 
25
- 建议使用虚拟环境(如 venvcondauv)以避免依赖冲突。建议在全新的 Python 环境中安装 vLLM:
26
  ```bash
27
- # 尚未 release,请安装 nightly 构建
28
- uv pip install -U vllm \
29
- --torch-backend=auto \
30
- --extra-index-url https://wheels.vllm.ai/nightly
31
- # 如果 release,使用 uv 安装
32
- uv pip install "vllm" --torch-backend=auto
33
  ```
34
 
35
  运行如下命令启动 vLLM 服务器,vLLM 会自动从 Huggingface 下载并缓存 MiniMax-M2 模型。
36
 
37
- 4 卡部署命令:
38
 
39
  ```bash
40
- SAFETENSORS_FAST_GPU=1 VLLM_USE_V1=0 vllm serve \
41
- --model MiniMaxAI/MiniMax-M2 \
42
- --trust-remote-code \
43
- --enable-expert-parallel --tensor-parallel-size 4 \
44
  --enable-auto-tool-choice --tool-call-parser minimax_m2 \
45
- --reasoning-parser minimax_m2
 
46
  ```
47
 
48
  ## 测试部署
@@ -79,7 +86,7 @@ export HF_ENDPOINT=https://hf-mirror.com
79
 
80
  如果在部署 MiniMax 模型过程中遇到任何问题:
81
 
82
- - 通过邮箱 api@minimaxi.com 等官方渠道联系我们的技术支持团队
83
 
84
  - 在我们的 [GitHub](https://github.com/MiniMax-AI) 仓库提交 Issue
85
  我们会持续优化模型的部署体验,欢迎反馈!
 
2
 
3
  我们推荐使用 [vLLM](https://docs.vllm.ai/en/stable/) 来部署 [MiniMax-M2](https://huggingface.co/MiniMaxAI/MiniMax-M2) 模型。vLLM 是一个高性能的推理引擎,其具有卓越的服务吞吐、高效智能的内存管理机制、强大的批量请求处理能力、深度优化的底层性能等特性。我们建议在部署之前查看 vLLM 的官方文档以检查硬件兼容性。
4
 
5
+ ## 本文档适用模型
6
+
7
+ 本文档适用以下模型,只需在部署时修改模型名称即可。
8
+
9
+ - [MiniMaxAI/MiniMax-M2](https://huggingface.co/MiniMaxAI/MiniMax-M2)
10
+
11
+ 以下以 MiniMax-M2 为例说明部署流程。
12
+
13
  ## 环境要求
14
 
15
  - OS:Linux
 
20
 
21
  - compute capability 7.0 or higher
22
 
23
+ - 显存需求:权重需要 220 GB,每 1M 上下文 token 需要 240 GB
24
 
25
  以下为推荐配置,实际需求请根据业务场景调整:
26
 
27
+ - 96G x4 GPU:支持 40 万 token 的总上下文。
28
 
29
+ - 144G x8 GPU:支持长达 300 万 token 的总上下文。
30
 
31
  ## 使用 Python 部署
32
 
33
+ 建议使用虚拟环境(如 **venv**、**conda**、**uv**)以避免依赖冲突。
34
+
35
+ 建议在全新的 Python 环境中安装 vLLM。由于 MiniMax-M2 支持尚未随正式版本发布,需要从源码手动编译:
36
  ```bash
37
+ git clone https://github.com/vllm-project/vllm.git
38
+ cd vllm
39
+ uv pip install . --torch-backend=auto
40
  ```
41
 
42
  运行如下命令启动 vLLM 服务器,vLLM 会自动从 Huggingface 下载并缓存 MiniMax-M2 模型。
43
 
44
+ 8 卡部署命令:
45
 
46
  ```bash
47
+ SAFETENSORS_FAST_GPU=1 vllm serve \
48
+ MiniMaxAI/MiniMax-M2 --trust-remote-code \
49
+ --enable-expert-parallel --tensor-parallel-size 8 \
 
50
  --enable-auto-tool-choice --tool-call-parser minimax_m2 \
51
+ --reasoning-parser minimax_m2_append_think \
52
+ --compilation-config "{\"cudagraph_mode\": \"PIECEWISE\"}"
53
  ```
54
 
55
  ## 测试部署
 
86
 
87
  如果在部署 MiniMax 模型过程中遇到任何问题:
88
 
89
+ - 通过邮箱 [api@minimaxi.com](mailto:api@minimaxi.com) 等官方渠道联系我们的技术支持团队
90
 
91
  - 在我们的 [GitHub](https://github.com/MiniMax-AI) 仓库提交 Issue
92
  我们会持续优化模型的部署体验,欢迎反馈!