try my copy of the space

- README.md (+1 -1)
- src/App.svelte (+1 -1)
README.md

```diff
@@ -32,7 +32,7 @@ The LLM backed has a dynamic setup so it can switch between providers, for now w
 Options for LLM backend:
 1. tencent/Hunyuan-Large (has been free for a while!)
 2. yuntian-deng/ChatGPT (say it will have short term availability)
-3.
+3. Fraser/zephyr-7b (runs on zero-gpu)
 4. BlinkDL/RWKV-Gradio-2 (running on T4, never ran into rate limits)
 5. make my own Cohere space (20 generations per minute)
```
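The README mentions a dynamic setup that can switch between these provider Spaces. A minimal sketch of what such a registry could look like, assuming the Space IDs listed above; the `BACKENDS` map and `chooseBackend` helper are hypothetical names, not taken from the repo:

```javascript
// Hypothetical backend registry: maps a short provider key to the
// Hugging Face Space ID listed in the README's options.
const BACKENDS = {
  hunyuan: "tencent/Hunyuan-Large",
  chatgpt: "yuntian-deng/ChatGPT",
  zephyr: "Fraser/zephyr-7b",
  rwkv: "BlinkDL/RWKV-Gradio-2",
};

// Resolve a provider key to its Space ID, failing loudly on unknown keys.
function chooseBackend(name) {
  const space = BACKENDS[name];
  if (!space) throw new Error(`unknown backend: ${name}`);
  return space;
}

// The app would then connect much like the App.svelte change in this commit:
//   client = await gradioClient.Client.connect(chooseBackend("zephyr"), opts);
console.log(chooseBackend("zephyr")); // → Fraser/zephyr-7b
```

Keeping the Space IDs in one map means switching providers is a one-line change rather than an edit to each `Client.connect` call.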
src/App.svelte

```diff
@@ -107,7 +107,7 @@
 );

 rwkvClient = await gradioClient.Client.connect(
-  "
+  "Fraser/zephyr-7b",
   opts
 );
```