ariG23498 (HF Staff) committed
Commit 3894df4 · verified · 1 Parent(s): 68ca2d8

Upload utter-project_EuroLLM-9B_1.txt with huggingface_hub

Files changed (1)
  1. utter-project_EuroLLM-9B_1.txt +1 -47
utter-project_EuroLLM-9B_1.txt CHANGED
@@ -1,47 +1 @@
- ```CODE:
- # Use a pipeline as a high-level helper
- from transformers import pipeline
-
- pipe = pipeline("text-generation", model="utter-project/EuroLLM-9B")
- ```
-
- ERROR:
- Traceback (most recent call last):
-   File "/tmp/utter-project_EuroLLM-9B_1xfZBeJ.py", line 24, in <module>
-     pipe = pipeline("text-generation", model="utter-project/EuroLLM-9B")
-   File "/tmp/.cache/uv/environments-v2/ab60391da2086f05/lib/python3.13/site-packages/transformers/pipelines/__init__.py", line 1078, in pipeline
-     raise e
-   File "/tmp/.cache/uv/environments-v2/ab60391da2086f05/lib/python3.13/site-packages/transformers/pipelines/__init__.py", line 1073, in pipeline
-     tokenizer = AutoTokenizer.from_pretrained(
-         tokenizer_identifier, use_fast=use_fast, _from_pipeline=task, **hub_kwargs, **tokenizer_kwargs
-     )
-   File "/tmp/.cache/uv/environments-v2/ab60391da2086f05/lib/python3.13/site-packages/transformers/models/auto/tokenization_auto.py", line 1140, in from_pretrained
-     return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
-            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-   File "/tmp/.cache/uv/environments-v2/ab60391da2086f05/lib/python3.13/site-packages/transformers/tokenization_utils_base.py", line 2097, in from_pretrained
-     return cls._from_pretrained(
-            ~~~~~~~~~~~~~~~~~~~~^
-         resolved_vocab_files,
-         ^^^^^^^^^^^^^^^^^^^^^
-     ...<9 lines>...
-         **kwargs,
-         ^^^^^^^^^
-     )
-     ^
-   File "/tmp/.cache/uv/environments-v2/ab60391da2086f05/lib/python3.13/site-packages/transformers/tokenization_utils_base.py", line 2343, in _from_pretrained
-     tokenizer = cls(*init_inputs, **init_kwargs)
-   File "/tmp/.cache/uv/environments-v2/ab60391da2086f05/lib/python3.13/site-packages/transformers/models/llama/tokenization_llama_fast.py", line 154, in __init__
-     super().__init__(
-     ~~~~~~~~~~~~~~~~^
-         vocab_file=vocab_file,
-         ^^^^^^^^^^^^^^^^^^^^^^
-     ...<10 lines>...
-         **kwargs,
-         ^^^^^^^^^
-     )
-     ^
-   File "/tmp/.cache/uv/environments-v2/ab60391da2086f05/lib/python3.13/site-packages/transformers/tokenization_utils_fast.py", line 108, in __init__
-     raise ValueError(
-     ...<2 lines>...
-     )
- ValueError: Cannot instantiate this tokenizer from a slow version. If it's based on sentencepiece, make sure you have sentencepiece installed.
 
+ Everything was good in utter-project_EuroLLM-9B_1.txt
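The `ValueError` in the removed traceback ("Cannot instantiate this tokenizer from a slow version") usually means the `sentencepiece` package is not installed: EuroLLM-9B uses a LLaMA-style tokenizer whose fast version is converted from a slow sentencepiece model at load time. A minimal pre-flight check, offered as an illustrative sketch and not part of this commit (the printed messages are ours):

```python
# Sketch: probe for the sentencepiece dependency before building the pipeline.
# Uses only the standard library; the pipeline call in the original snippet
# would be run afterwards once the dependency is present.
import importlib.util

# find_spec returns None when the package cannot be imported in this environment.
missing = importlib.util.find_spec("sentencepiece") is None

if missing:
    print("sentencepiece not found; install it first: pip install sentencepiece")
else:
    print("sentencepiece is available; the tokenizer conversion should succeed")
```

Installing `sentencepiece` (or using `transformers[sentencepiece]`) lets `AutoTokenizer` build the fast tokenizer from the slow sentencepiece model, which is what the traceback shows failing.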