SmerkyG committed
Commit 311a0cb · verified · 1 Parent(s): f853200

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -49,10 +49,10 @@ This is RWKV-7 model under flash-linear attention format.
  ## Uses
 
  <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
- Install `flash-linear-attention` and the latest version of `transformers` before using this model:
+ Install `flash-linear-attention` <= 0.1.2 and the latest version of `transformers` before using this model:
 
  ```bash
- pip install git+https://github.com/fla-org/flash-linear-attention
+ pip install --no-use-pep517 flash-linear-attention==0.1.2
  pip install 'transformers>=4.48.0'
  ```
 
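
Once both packages are installed as above, the model can be loaded through the standard `transformers` auto classes. The sketch below is a minimal illustration, not part of this commit: the repo id is a hypothetical placeholder (substitute this model's actual Hugging Face repo id), and it assumes the checkpoint's custom model classes require `trust_remote_code=True`.

```python
# Minimal usage sketch (illustrative only, not from this commit).
# Assumes flash-linear-attention==0.1.2 and transformers>=4.48.0 are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "fla-hub/rwkv7-example"  # hypothetical placeholder repo id

# trust_remote_code=True is assumed here so the flash-linear-attention
# model/tokenizer classes shipped with the checkpoint can be loaded.
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

# Quick smoke test: tokenize a prompt and generate a short continuation.
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```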