Rishi Desai committed · 7f4e1be
Parent(s): a125be2
read me
README.md CHANGED
@@ -16,13 +16,13 @@ A tool for improving facial consistency and quality in AI-generated images. Dram

 1. Set up your Hugging Face token:
 - Create a token at [Hugging Face](https://huggingface.co/settings/tokens)
+- Log into Hugging Face and accept their terms of service to download Flux
 - Set the following environment variables:
 ```
 export HUGGINGFACE_TOKEN=your_token_here
 export HF_HOME=/path/to/your/huggingface_cache
 ```
-- Models will be downloaded to `$HF_HOME` and
-- Hugging Face requires login for downloading Flux
+- Models will be downloaded to `$HF_HOME` and symlinked to `./ComfyUI/models/`

 2. Create the virtual environment:
 ```
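A minimal sketch of the token setup described in the hunk above, assuming the `huggingface-cli` tool that ships with `huggingface_hub` is available; the repo's `install.py` may handle login differently:

```bash
# Placeholder values; substitute a real token and cache path
export HUGGINGFACE_TOKEN=your_token_here
export HF_HOME=/path/to/your/huggingface_cache

# Non-interactive login, then confirm the account is recognized;
# gated models such as Flux.1-dev also require accepting their license on the Hub
huggingface-cli login --token "$HUGGINGFACE_TOKEN"
huggingface-cli whoami
```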
@@ -36,9 +36,9 @@ A tool for improving facial consistency and quality in AI-generated images. Dram
 python install.py
 ```

-This will
-- Install ComfyUI, custom nodes, and required dependencies to your venv
-- Download all required models (Flux.1-dev, ControlNet, text encoders, PuLID, and more)
+This will
+- Install ComfyUI, custom nodes, and required dependencies to your venv
+- Download all required models (Flux.1-dev, ControlNet, text encoders, PuLID, and more)

 4. Run inference on one example:

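A quick sanity check after `install.py` finishes, assuming models land in `$HF_HOME` and are symlinked into `./ComfyUI/models/` as the README states (exact subdirectory names are not listed here):

```bash
# Rough size of the Hugging Face cache after the downloads
du -sh "$HF_HOME"

# Entries created by the installer in the ComfyUI model directory
# should show up as symlinks pointing back into $HF_HOME
ls -l ./ComfyUI/models/
```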
@@ -67,16 +67,13 @@ The FAL API key is used for face upscaling during preprocessing. You can get one

 A simple web interface for the face enhancement workflow.

-1. Run
+1. Run `python gradio_demo.py`

-```bash
-python gradio_demo.py
-```
 2. Go to http://localhost:7860. You may need to enable port forwarding.

 ### Notes
 - The script and demo run a ComfyUI server ephemerally
-- Gradio demo faster than the script
+- Gradio demo is faster than the script because models remain loaded in memory
 - All images are saved in ./ComfyUI/input/scratch/
 - Temporary files are created during processing and cleaned up afterward

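If the demo runs on a remote machine, one common way to satisfy the port-forwarding note above and reach http://localhost:7860 locally is an SSH tunnel; `user@remote-host` is a placeholder:

```bash
# Forward local port 7860 to the Gradio server on the remote machine,
# then open http://localhost:7860 in a local browser
ssh -L 7860:localhost:7860 user@remote-host
```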