Update docs/vllm_deployment_guide.md
docs/vllm_deployment_guide.md (+12 -22)

## 🛠️ Deployment Options
### Option: Deploy Using Docker (Recommended)
To ensure consistency and stability of the deployment environment, we recommend using Docker for deployment.
⚠️ **Version Requirements**:
- MiniMax-M1 model requires vLLM version 0.9.2 or later for full support
- Special Note: Using vLLM versions below 0.9.2 may result in incompatibility or incorrect precision for the model:
  - For details, see: [Fix minimax model cache & lm_head precision #19592](https://github.com/vllm-project/vllm/pull/19592)

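A launch script can fail fast on the 0.9.2 floor before starting the server. A minimal sketch: the `version_ge` helper and the messages are our own additions (not part of vLLM or this guide); in practice the version string would come from `python -c 'import vllm; print(vllm.__version__)'`.

```shell
# version_ge A B: succeed if dot-separated version A >= B (relies on GNU `sort -V`)
version_ge() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

VLLM_VERSION="0.9.2"   # illustrative; in practice read it from the installed package
if version_ge "$VLLM_VERSION" "0.9.2"; then
    echo "vLLM $VLLM_VERSION: OK for MiniMax-M1"
else
    echo "vLLM $VLLM_VERSION is below 0.9.2; expect incompatibility or precision issues" >&2
fi
```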
1. Get the container image:

Currently, the official vLLM Docker image for v0.9.2 has not been released. As an example, we therefore demonstrate building vLLM from source, starting from the v0.8.3 image.

```bash
docker pull vllm/vllm-openai:v0.8.3
```
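The `docker run` invocation below references `$NAME`, `$DOCKER_RUN_CMD`, and `$IMAGE`, which the full guide defines earlier. For readers of this excerpt, one illustrative way to set them; every value here except the image tag is our own assumption, not the guide's:

```shell
IMAGE="vllm/vllm-openai:v0.8.3"         # the image pulled above
NAME="minimax-m1-vllm"                  # container name (our choice)
CODE_DIR="$HOME/code"                   # where the vLLM sources will be cloned
DOCKER_RUN_CMD="--gpus all --ipc=host"  # runtime flags (illustrative)
echo "container $NAME from $IMAGE"
```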

```bash
sudo docker run -it \
    --name $NAME \
    $DOCKER_RUN_CMD \
    $IMAGE /bin/bash
# install vLLM
cd $CODE_DIR
git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -e .
```
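The in-container install steps above can be wrapped in a small script so they are easy to preview before running. A sketch with a dry-run mode; the `install_vllm` function and `DRY_RUN` flag are our own additions for illustration:

```shell
install_vllm() {
    # $1 = parent directory for the checkout; with DRY_RUN=1, print commands instead of running them
    run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }
    run cd "$1"
    run git clone https://github.com/vllm-project/vllm.git
    run cd vllm
    run pip install -e .
}

DRY_RUN=1 install_vllm /tmp   # preview the commands without executing them
```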
💡 If you are using other environment configurations, please refer to the [vLLM Installation Guide](https://docs.vllm.ai/en/latest/getting_started/installation.html)