ariG23498 HF Staff committed on
Commit 2bd7510 · verified · 1 Parent(s): 8c3c4a0

Upload Open-Bee_Bee-8B-RL_0.txt with huggingface_hub

Files changed (1)
  1. Open-Bee_Bee-8B-RL_0.txt +62 -0
Open-Bee_Bee-8B-RL_0.txt ADDED
@@ -0,0 +1,62 @@
```CODE:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Open-Bee/Bee-8B-RL", trust_remote_code=True)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```
ERROR:
Traceback (most recent call last):
  File "/tmp/Open-Bee_Bee-8B-RL_03OgbBH.py", line 17, in <module>
    pipe = pipeline("image-text-to-text", model="Open-Bee/Bee-8B-RL", trust_remote_code=True)
  File "/tmp/.cache/uv/environments-v2/70b915a8b88cab9c/lib/python3.13/site-packages/transformers/pipelines/__init__.py", line 1027, in pipeline
    framework, model = infer_framework_load_model(
                       ~~~~~~~~~~~~~~~~~~~~~~~~~~^
        adapter_path if adapter_path is not None else model,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<5 lines>...
        **model_kwargs,
        ^^^^^^^^^^^^^^^
    )
    ^
  File "/tmp/.cache/uv/environments-v2/70b915a8b88cab9c/lib/python3.13/site-packages/transformers/pipelines/base.py", line 333, in infer_framework_load_model
    raise ValueError(
        f"Could not load model {model} with any of the following classes: {class_tuple}. See the original errors:\n\n{error}\n"
    )
ValueError: Could not load model Open-Bee/Bee-8B-RL with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForImageTextToText'>,). See the original errors:

while loading with AutoModelForImageTextToText, an error is thrown:
Traceback (most recent call last):
  File "/tmp/.cache/uv/environments-v2/70b915a8b88cab9c/lib/python3.13/site-packages/transformers/pipelines/base.py", line 293, in infer_framework_load_model
    model = model_class.from_pretrained(model, **kwargs)
  File "/tmp/.cache/uv/environments-v2/70b915a8b88cab9c/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 607, in from_pretrained
    raise ValueError(
    ...<2 lines>...
    )
ValueError: Unrecognized configuration class <class 'transformers_modules.Open_hyphen_Bee.Bee_hyphen_8B_hyphen_RL.17774f4ad76f51e6be43fcc45543116f384e5278.configuration_bee.BeeConfig'> for this kind of AutoModel: AutoModelForImageTextToText.
Model type should be one of AriaConfig, AyaVisionConfig, BlipConfig, Blip2Config, ChameleonConfig, Cohere2VisionConfig, DeepseekVLConfig, DeepseekVLHybridConfig, Emu3Config, EvollaConfig, Florence2Config, FuyuConfig, Gemma3Config, Gemma3nConfig, GitConfig, Glm4vConfig, Glm4vMoeConfig, GotOcr2Config, IdeficsConfig, Idefics2Config, Idefics3Config, InstructBlipConfig, InternVLConfig, JanusConfig, Kosmos2Config, Kosmos2_5Config, Lfm2VlConfig, Llama4Config, LlavaConfig, LlavaNextConfig, LlavaNextVideoConfig, LlavaOnevisionConfig, Mistral3Config, MllamaConfig, Ovis2Config, PaliGemmaConfig, PerceptionLMConfig, Pix2StructConfig, PixtralVisionConfig, Qwen2_5_VLConfig, Qwen2VLConfig, Qwen3VLConfig, Qwen3VLMoeConfig, ShieldGemma2Config, SmolVLMConfig, UdopConfig, VipLlavaConfig, VisionEncoderDecoderConfig.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/.cache/uv/environments-v2/70b915a8b88cab9c/lib/python3.13/site-packages/transformers/pipelines/base.py", line 311, in infer_framework_load_model
    model = model_class.from_pretrained(model, **fp32_kwargs)
  File "/tmp/.cache/uv/environments-v2/70b915a8b88cab9c/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 607, in from_pretrained
    raise ValueError(
    ...<2 lines>...
    )
ValueError: Unrecognized configuration class <class 'transformers_modules.Open_hyphen_Bee.Bee_hyphen_8B_hyphen_RL.17774f4ad76f51e6be43fcc45543116f384e5278.configuration_bee.BeeConfig'> for this kind of AutoModel: AutoModelForImageTextToText.
Model type should be one of AriaConfig, AyaVisionConfig, BlipConfig, Blip2Config, ChameleonConfig, Cohere2VisionConfig, DeepseekVLConfig, DeepseekVLHybridConfig, Emu3Config, EvollaConfig, Florence2Config, FuyuConfig, Gemma3Config, Gemma3nConfig, GitConfig, Glm4vConfig, Glm4vMoeConfig, GotOcr2Config, IdeficsConfig, Idefics2Config, Idefics3Config, InstructBlipConfig, InternVLConfig, JanusConfig, Kosmos2Config, Kosmos2_5Config, Lfm2VlConfig, Llama4Config, LlavaConfig, LlavaNextConfig, LlavaNextVideoConfig, LlavaOnevisionConfig, Mistral3Config, MllamaConfig, Ovis2Config, PaliGemmaConfig, PerceptionLMConfig, Pix2StructConfig, PixtralVisionConfig, Qwen2_5_VLConfig, Qwen2VLConfig, Qwen3VLConfig, Qwen3VLMoeConfig, ShieldGemma2Config, SmolVLMConfig, UdopConfig, VipLlavaConfig, VisionEncoderDecoderConfig.
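The traceback above comes down to a registry lookup: the image-text-to-text pipeline maps known config classes to model classes, and the checkpoint's custom `BeeConfig` (loaded via `trust_remote_code`) is not in `AutoModelForImageTextToText`'s mapping, so `from_pretrained` raises. A minimal, illustrative sketch of that dispatch (plain Python stand-ins, not the actual transformers internals):

```python
# Illustrative sketch of the Auto-class dispatch that fails above.
# The class names and the mapping here are stand-ins, not transformers code.

class Qwen2VLConfig:  # a config type that IS in the mapping
    pass

class BeeConfig:      # a custom config type that is NOT
    pass

# An Auto class keeps a mapping from config class to model class name.
MODEL_FOR_IMAGE_TEXT_TO_TEXT = {Qwen2VLConfig: "Qwen2VLForConditionalGeneration"}

def resolve_model_class(config):
    """Return the model class name for a config, mimicking from_pretrained."""
    cls = MODEL_FOR_IMAGE_TEXT_TO_TEXT.get(type(config))
    if cls is None:
        raise ValueError(
            f"Unrecognized configuration class {type(config).__name__} "
            f"for this kind of AutoModel"
        )
    return cls

print(resolve_model_class(Qwen2VLConfig()))  # a registered config resolves

try:
    resolve_model_class(BeeConfig())
except ValueError as exc:
    print(f"failed: {exc}")                  # an unregistered one raises
```

In practice this means the pipeline's fixed task mapping and the repo's custom code disagree; loading through the Auto classes the repository itself declares in its `auto_map` (with `trust_remote_code=True`), or following the model card's own usage snippet, may sidestep the pipeline lookup.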