manaestras committed (verified) · Commit 117ae9b · Parent(s): dc45dc3

Update README.md

Files changed (1): README.md (+4 −3)
README.md CHANGED
@@ -1,6 +1,7 @@
 ---
 base_model:
 - tencent/Hunyuan-4B-Pretrain
+- tencent/Hunyuan-4B-Instruct
 library_name: transformers
 ---
 
@@ -86,9 +87,9 @@ Note: The following benchmarks are evaluated by TRT-LLM-backend on several **bas
 
 
 ### Use with transformers
-First, please install transformers. We will merge it into the main branch later.
+First, please install transformers.
 ```SHELL
-pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
+pip install "transformers>=4.56.0"
 ```
 Our model defaults to using slow-thinking reasoning, and there are two ways to disable CoT reasoning.
 1. Pass **"enable_thinking=False"** when calling apply_chat_template.
@@ -504,4 +505,4 @@ docker run --entrypoint="python3" --gpus all \
 
 ## Contact Us
 
-If you would like to leave a message for our R&D and product teams, Welcome to contact our open-source team . You can also contact us via email (hunyuan_opensource@tencent.com).
+If you would like to leave a message for our R&D and product teams, Welcome to contact our open-source team . You can also contact us via email (hunyuan_opensource@tencent.com).
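The README hunk above mentions that CoT reasoning can be disabled by passing `enable_thinking=False` to `apply_chat_template`. A minimal sketch of that option, assuming the `tencent/Hunyuan-4B-Instruct` checkpoint named in this diff and `transformers>=4.56.0` as pinned by the commit (the `build_prompt` helper is hypothetical, not part of the README):

```python
# Sketch of option 1 from the README: disable slow-thinking / CoT reasoning
# by passing enable_thinking=False to apply_chat_template.
def build_prompt(tokenizer, user_text, thinking=True):
    """Render a chat prompt; pass thinking=False to disable CoT reasoning."""
    messages = [{"role": "user", "content": user_text}]
    return tokenizer.apply_chat_template(
        messages,
        tokenize=False,              # return the rendered prompt string
        add_generation_prompt=True,  # append the assistant turn header
        enable_thinking=thinking,    # False => no slow-thinking block
    )

if __name__ == "__main__":
    # Requires `pip install "transformers>=4.56.0"`, as in this commit.
    from transformers import AutoTokenizer
    tok = AutoTokenizer.from_pretrained(
        "tencent/Hunyuan-4B-Instruct", trust_remote_code=True
    )
    print(build_prompt(tok, "Why is seawater salty?", thinking=False))
```

The flag is forwarded to the model's chat template, so the same helper also covers the default slow-thinking mode by calling it with `thinking=True`.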