Fine-tuned MiMo Audio to accept text/emotion captions (e.g. "intense fury, rage, hate") as input by training a LoRA for 1k steps on LAION's voice acting dataset.
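For reference, the setup looked roughly like the sketch below. This is a minimal PEFT-style LoRA sketch, not the exact training code: the checkpoint name, target modules, and every hyperparameter except the 1k steps are illustrative assumptions.

```python
# Minimal LoRA sketch (illustrative; the actual MiMo Audio training
# code differs). Assumes a PEFT-compatible causal LM backbone.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Hypothetical checkpoint ID; substitute the real MiMo Audio weights.
base = AutoModelForCausalLM.from_pretrained("XiaomiMiMo/MiMo-Audio")

lora_cfg = LoraConfig(
    r=16,                                 # assumed rank
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)

# The run trained for ~1k steps; each example pairs an emotion caption
# (e.g. "intense fury, rage, hate") with the target speech tokens.
```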
I'm excited to introduce a new leaderboard UI + keyboard shortcuts on the TTS Arena!
The refreshed UI for the leaderboard is smoother and (hopefully) more intuitive. You can now rank models by a simpler win-rate percentage and exclude closed models.
In addition, the TTS Arena now supports keyboard shortcuts. This should make voting much more efficient as you can now vote without clicking anything!
In both the normal Arena and Battle Mode, press "r" to select a random text, Cmd/Ctrl + Enter to synthesize, and "a"/"b" to vote! View more details about keyboard shortcuts by pressing "?" (Shift + /) on the Arena.
Hi, is there a limit on the number of voices? I have 416 and it fails to load all of them. (A scroll menu limit?)
I'm not sure if there's a set limit for the dropdown, but with that many voices, it might make sense to skip the dropdown and instead have a textbox to specify the path to the reference speaker.
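Something like this minimal Gradio sketch is what I mean (illustrative only; the component names and the synthesize function are placeholders, not the demo's actual code):

```python
import gradio as gr

# Hypothetical synthesis function; stands in for the demo's real one.
def synthesize(text, speaker_wav_path):
    ...

with gr.Blocks() as demo:
    text = gr.Textbox(label="Text to synthesize")
    # Instead of gr.Dropdown(choices=[...416 voices...]), take a path:
    speaker = gr.Textbox(label="Path to reference speaker WAV")
    audio = gr.Audio(label="Output")
    gr.Button("Synthesize").click(synthesize, [text, speaker], audio)

demo.launch()
```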
I don't think that's supported by the model, but you could fine-tune it or clone a voice with emotions. (I am not the author of the model itself, just of the web demo)
Hi, you can upload a WAV file to the voices folder. Then, in the app.py file, add the filename of the voice (without the .wav extension) to the voicelist list. It should then show up in the Gradio demo.
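For example, if you upload voices/mynewvoice.wav, the edit in app.py would look roughly like this (the existing entries below are placeholders):

```python
# In app.py — append the new filename (without .wav) to the list.
voicelist = [
    "f-us-1",      # existing entries (placeholders here)
    "m-us-2",
    "mynewvoice",  # your uploaded voices/mynewvoice.wav
]
```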
I just released an unofficial demo for Moonshine ASR!
Moonshine is a fast, efficient, & accurate ASR model released by Useful Sensors. It's designed for on-device inference and licensed under the MIT license!
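If you'd rather run it locally than use the demo, a minimal sketch with the transformers ASR pipeline could look like this (assuming a transformers version with Moonshine support and the UsefulSensors/moonshine-tiny Hub checkpoint):

```python
from transformers import pipeline

# Assumes Moonshine support in your transformers version and that
# this Hub checkpoint ID is correct.
asr = pipeline(
    "automatic-speech-recognition",
    model="UsefulSensors/moonshine-tiny",
)

result = asr("sample.wav")  # any short speech clip
print(result["text"])
```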
Training itself would be pretty easy; the main issue would be data. AFAIK there's not much data out there for other TTS models. I synthetically generated the StyleTTS 2 dataset since that model is quite efficient to run, but other models would require much more compute.
Reacted to Jofthomas's post with 🔥 about 1 year ago:
It is an LLM-controlled roguelike in which the LLM gets a markdown representation of the map and should generate JSON with the objective to fulfill on the map, as well as the necessary objects and their placements.
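I haven't seen the exact schema, but the generated JSON presumably looks something like this hypothetical example (field names are guesses, not the project's actual format):

```python
import json

# Hypothetical example of the kind of JSON the LLM might return
# for a given markdown map; the schema here is invented.
llm_output = json.loads("""
{
  "objective": "Retrieve the amulet from the vault",
  "objects": [
    {"name": "amulet", "position": [12, 3]},
    {"name": "locked_door", "position": [10, 3]},
    {"name": "key", "position": [4, 8]}
  ]
}
""")
print(llm_output["objective"])
```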
Since new TTS (Text-to-Speech) systems are coming out what feels like every day and it's currently hard to compare them, my latest project focuses on doing just that.
I was inspired by the TTS-AGI/TTS-Arena (definitely check it out if you haven't), which compares recent TTS systems using crowdsourced A/B testing.
I wanted to see if we could do a similar evaluation with objective metrics, and it's now available here: ttsds/benchmark. Anyone can submit a new TTS model, and I hope this provides a way to see which areas models perform well or poorly in.