ariG23498 (HF Staff) committed
Commit 68c528c · verified · 1 Parent(s): ed072c7

Upload MiniMaxAI_MiniMax-M2_0.txt with huggingface_hub
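The commit message says the file was uploaded with `huggingface_hub`. For context, a minimal sketch of such an upload, assuming an authenticated environment; the `repo_id` and `repo_type` below are hypothetical placeholders, since neither is shown in this commit:

```
# Hypothetical reconstruction of the upload behind this commit.
# Only the file name and commit message are taken from the commit itself.
from huggingface_hub import HfApi

api = HfApi()  # assumes a valid token (e.g. HF_TOKEN) in the environment
api.upload_file(
    path_or_fileobj="MiniMaxAI_MiniMax-M2_0.txt",
    path_in_repo="MiniMaxAI_MiniMax-M2_0.txt",
    repo_id="your-org/your-repo",      # hypothetical: not shown in this commit
    repo_type="dataset",               # hypothetical: could also be "model" or "space"
    commit_message="Upload MiniMaxAI_MiniMax-M2_0.txt with huggingface_hub",
)
```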

Files changed (1): MiniMaxAI_MiniMax-M2_0.txt (+8 -12)
MiniMaxAI_MiniMax-M2_0.txt CHANGED
@@ -1,4 +1,3 @@
-
 ```CODE:
 # Use a pipeline as a high-level helper
 from transformers import pipeline
@@ -9,9 +8,10 @@ messages = [
 ]
 pipe(messages)
 ```
+
 ERROR:
 Traceback (most recent call last):
-  File "/tmp/MiniMaxAI_MiniMax-M2_0Iy0bNK.py", line 17, in <module>
+  File "/tmp/MiniMaxAI_MiniMax-M2_0AY8pTg.py", line 17, in <module>
     pipe = pipeline("text-generation", model="MiniMaxAI/MiniMax-M2")
   File "/tmp/.cache/uv/environments-v2/d3eea229ed2fb556/lib/python3.13/site-packages/transformers/pipelines/__init__.py", line 1027, in pipeline
     framework, model = infer_framework_load_model(
@@ -59,11 +59,9 @@ Traceback (most recent call last):
     ^^^^^^^^^^^^^^^^^^^^^^^^^^
     )
     ^
-  File "/tmp/.cache/uv/environments-v2/d3eea229ed2fb556/lib/python3.13/site-packages/transformers/quantizers/quantizer_finegrained_fp8.py", line 54, in validate_environment
-    raise ValueError(
-    ...<2 lines>...
-    )
-ValueError: FP8 quantized models is only supported on GPUs with compute capability >= 8.9 (e.g 4090/H100), actual = `8.6`
+  File "/tmp/.cache/uv/environments-v2/d3eea229ed2fb556/lib/python3.13/site-packages/transformers/quantizers/quantizer_finegrained_fp8.py", line 48, in validate_environment
+    raise RuntimeError("No GPU or XPU found. A GPU or XPU is needed for FP8 quantization.")
+RuntimeError: No GPU or XPU found. A GPU or XPU is needed for FP8 quantization.
 
 During handling of the above exception, another exception occurred:
 
@@ -96,11 +94,9 @@ Traceback (most recent call last):
     ^^^^^^^^^^^^^^^^^^^^^^^^^^
     )
     ^
-  File "/tmp/.cache/uv/environments-v2/d3eea229ed2fb556/lib/python3.13/site-packages/transformers/quantizers/quantizer_finegrained_fp8.py", line 54, in validate_environment
-    raise ValueError(
-    ...<2 lines>...
-    )
-ValueError: FP8 quantized models is only supported on GPUs with compute capability >= 8.9 (e.g 4090/H100), actual = `8.6`
+  File "/tmp/.cache/uv/environments-v2/d3eea229ed2fb556/lib/python3.13/site-packages/transformers/quantizers/quantizer_finegrained_fp8.py", line 48, in validate_environment
+    raise RuntimeError("No GPU or XPU found. A GPU or XPU is needed for FP8 quantization.")
+RuntimeError: No GPU or XPU found. A GPU or XPU is needed for FP8 quantization.
 
 
 
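Both runs above fail the same environment check in transformers (`validate_environment` in `quantizer_finegrained_fp8.py`): the earlier run found a GPU with compute capability 8.6, below the 8.9 minimum for FP8, and the later run found no GPU or XPU at all. Below is a minimal preflight sketch that mirrors those two checks before building the pipeline; it assumes PyTorch is installed, and `can_load_fp8` is a hypothetical helper name, not a transformers API:

```
# Mirrors the two failure modes in the log above (hypothetical helper).
import torch

def can_load_fp8() -> bool:
    # Failure mode in the new log: no CUDA device present at all.
    # (transformers also accepts an XPU here, per the error message.)
    if not torch.cuda.is_available():
        print("No GPU found. A GPU or XPU is needed for FP8 quantization.")
        return False
    # Failure mode in the old log: GPU present, but compute capability < 8.9.
    major, minor = torch.cuda.get_device_capability()
    if (major, minor) < (8, 9):
        print(f"FP8 needs compute capability >= 8.9 (e.g. 4090/H100), got {major}.{minor}")
        return False
    return True

if can_load_fp8():
    from transformers import pipeline
    pipe = pipeline("text-generation", model="MiniMaxAI/MiniMax-M2")
```

On hardware that fails either check, the error text itself points at the fix: run on an FP8-capable accelerator (compute capability 8.9 or newer, e.g. a 4090 or H100).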