Spaces: Running on Zero
Commit · 1369f55
Parent(s): c743a43
Update app.py
app.py CHANGED
@@ -15,6 +15,8 @@ from diffusers import AutoencoderKL, DiffusionPipeline
 DESCRIPTION = """
 # OpenDalle 1.1
 
+**Demo by [mrfakename](https://mrfake.name/) - [Twitter](https://twitter.com/realmrfakename) - [GitHub](https://github.com/fakerybakery/) - [Hugging Face](https://huggingface.co/mrfakename)**
+
 This is a demo of <a href="https://huggingface.co/dataautogpt3/OpenDalleV1.1">OpenDalle V1.1</a> by @dataautogpt3.
 
 It's a merge of several different models and is supposed to provide excellent performance. Try it out!
@@ -23,6 +25,8 @@ It's a merge of several different models and is supposed to provide excellent pe
 
 **The code for this demo is based on [@hysts's SD-XL demo](https://huggingface.co/spaces/hysts/SD-XL) running on a A10G GPU.**
 
+**NOTE: There may be a restriction on generated images. Depending on your jurisdiction, this restriction may or may not apply. Please see [this](https://huggingface.co/dataautogpt3/OpenDalleV1.1/discussions/23). Please consult a lawyer before publishing images.**
+
 Also see [OpenDalle Original Demo](https://huggingface.co/spaces/mrfakename/OpenDalle-GPU-Demo/)
 """
 if not torch.cuda.is_available():
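The `if not torch.cuda.is_available():` line that closes both sides of the diff is a common Spaces pattern: the Markdown `DESCRIPTION` gets a warning appended when no GPU is present. A minimal sketch of that pattern, with illustrative names and text rather than the actual app.py contents:

```python
# Illustrative sketch only: mirrors the pattern where a Gradio Space
# appends a CPU warning to its Markdown description. The DESCRIPTION
# text and function name here are assumptions, not the real app.py.

DESCRIPTION = """
# OpenDalle 1.1

This is a demo of OpenDalle V1.1 by @dataautogpt3.
"""

def build_description(cuda_available: bool) -> str:
    """Return the demo description, adding a warning when CUDA is absent
    (standing in for `if not torch.cuda.is_available():` in the app)."""
    desc = DESCRIPTION
    if not cuda_available:
        desc += "\n<p>Running on CPU - this demo may be slow.</p>"
    return desc

print(build_description(cuda_available=False))
```

In the real app this string would typically be rendered with `gr.Markdown(DESCRIPTION)` at the top of the interface.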