Update README.md
README.md (changed)

@@ -75,8 +75,6 @@ generated_ids = [
 response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 ```
 
-For quantized models, we advise you to use the GPTQ, AWQ, and GGUF correspondents, namely `Qwen-beta-0_5B-Chat-GPTQ`, `Qwen-beta-0_5B-Chat-AWQ`, and `Qwen-beta-0_5B-Chat-GGUF`.
-
 
 ## Limitations
 * Reduced capabilities in agent;
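For readers seeing only this hunk: the context lines (`generated_ids = [` and the `batch_decode` call) belong to the README's standard transformers chat quickstart, which is not shown in full here. Below is a minimal sketch of that surrounding code, assuming the base checkpoint is named `Qwen-beta-0_5B-Chat` (inferred from the quantized-variant names above) and the usual chat-template workflow; it is an illustration, not the README's exact snippet.

```python
# Sketch only: model id "Qwen-beta-0_5B-Chat" is assumed, not confirmed by this diff.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen-beta-0_5B-Chat"  # assumed base checkpoint name
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=512)
# Strip the prompt tokens so only the newly generated continuation is decoded.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```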