	Sync from GitHub repo
This Space is synced from the GitHub repo: https://github.com/SWivid/F5-TTS. Please submit contributions to the Space there.
- README_REPO.md +3 -3

````diff
@@ -72,7 +72,7 @@ An initial guidance on Finetuning [#57](https://github.com/SWivid/F5-TTS/discuss
 
 Gradio UI finetuning with `finetune_gradio.py` see [#143](https://github.com/SWivid/F5-TTS/discussions/143).
 
-
+### Wandb Logging
 
 By default, the training script does NOT use logging (assuming you didn't manually log in using `wandb login`).
 
@@ -112,8 +112,8 @@ Currently support 30s for a single generation, which is the **TOTAL** length of
 
 Either you can specify everything in `inference-cli.toml` or override with flags. Leave `--ref_text ""` will have ASR model transcribe the reference audio automatically (use extra GPU memory). If encounter network error, consider use local ckpt, just set `ckpt_path` in `inference-cli.py`
 
-for change model use 
-for change vocab.txt use 
+for change model use `--ckpt_file` to specify the model you want to load,  
+for change vocab.txt use `--vocab_file` to provide your vocab.txt file.
 
 ```bash
 python inference-cli.py \
````
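For context, the second hunk documents two new CLI flags. A hypothetical sketch of how they would be combined in one invocation, using only the flags that appear in this diff (the checkpoint and vocab paths below are placeholders, not files from the commit):

```shell
# Hypothetical invocation sketching the flags added in this commit;
# ckpts/my_model/model_last.pt and data/my_dataset/vocab.txt are placeholder
# paths -- substitute your own checkpoint and vocab file.
python inference-cli.py \
  --ckpt_file ckpts/my_model/model_last.pt \
  --vocab_file data/my_dataset/vocab.txt \
  --ref_text ""  # empty ref_text: the ASR model transcribes the reference audio
```

Per the diff, omitting `--ckpt_file` and `--vocab_file` falls back to whatever `inference-cli.toml` (or `ckpt_path` in `inference-cli.py`) specifies.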