Commit · fad47f9
1 Parent(s): 7cb69d5
Add diffusers as a way to run inference with the model
Former-commit-id: e1367298c7497de7b9a5f076f40433ce6857077e
README.md
CHANGED
@@ -87,6 +87,25 @@ and sample with
```
python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms
```
+
+Another way to download and sample Stable Diffusion is by using the [diffusers library](https://github.com/huggingface/diffusers/tree/main#new--stable-diffusion-is-now-fully-compatible-with-diffusers)
+```py
+# make sure you're logged in with `huggingface-cli login`
+from torch import autocast
+from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler
+
+pipe = StableDiffusionPipeline.from_pretrained(
+    "CompVis/stable-diffusion-v1-3-diffusers",
+    use_auth_token=True
+)
+
+prompt = "a photo of an astronaut riding a horse on mars"
+with autocast("cuda"):
+    image = pipe(prompt)["sample"][0]
+
+image.save("astronaut_rides_horse.png")
+```
+
By default, this uses a guidance scale of `--scale 7.5`, [Katherine Crowson's implementation](https://github.com/CompVis/latent-diffusion/pull/51) of the [PLMS](https://arxiv.org/abs/2202.09778) sampler,
and renders images of size 512x512 (which it was trained on) in 50 steps. All supported arguments are listed below (type `python scripts/txt2img.py --help`).
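Note that the snippet added above imports `LMSDiscreteScheduler` without ever using it, and it leaves the pipeline on its defaults (a guidance scale of 7.5 and 50 denoising steps at 512x512, matching the `txt2img.py` defaults the README describes). Below is a minimal sketch, not part of this commit, of how those pieces could be wired together, assuming the diffusers API of this era (component overrides passed to `from_pretrained`, and `guidance_scale`, `num_inference_steps`, `height`, `width` arguments on the pipeline call).

```py
# Sketch only: swap in the LMSDiscreteScheduler that the committed snippet
# imports but never uses, and restate the sampling defaults explicitly.
from torch import autocast
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler

# Scheduler hyperparameters commonly used with Stable Diffusion checkpoints.
scheduler = LMSDiscreteScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
)

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-3-diffusers",
    scheduler=scheduler,   # replaces the checkpoint's default scheduler
    use_auth_token=True,
)
pipe = pipe.to("cuda")     # move the weights to the GPU before sampling

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(
        prompt,
        guidance_scale=7.5,      # classifier-free guidance scale (default 7.5)
        num_inference_steps=50,  # denoising steps (default 50)
        height=512,
        width=512,
    )["sample"][0]

image.save("astronaut_rides_horse_lms.png")
```

Overriding the scheduler changes only the sampler; the text encoder, UNet, and VAE are loaded unchanged, and the explicit keyword arguments simply make the defaults visible so they are easy to adjust.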