---
title: Sheikh LLM Studio
emoji: π
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
---
# Sheikh LLM Studio
Full-featured FastAPI application deployed on Hugging Face Spaces with multi-model chat, tooling, and model-creation workflows.
## Features
- Web UI with real-time chat experience and adjustable generation settings
- Multi-model support backed by Hugging Face gated models via `InferenceClient` (sketched below)
- Tool integration endpoints for search and code execution prototypes
- Model Studio workflow to queue fine-tuning jobs and monitor status
- WebSocket endpoint for streaming-style interactions
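A minimal sketch of how the chat and WebSocket endpoints could be wired to `InferenceClient`. The route paths, request fields, and default model here are illustrative assumptions, not the exact code in `app.py`:

```python
# Sketch only: paths, payload fields, and the default model are assumptions.
import os

from fastapi import FastAPI, WebSocket, WebSocketDisconnect
from fastapi.concurrency import run_in_threadpool
from huggingface_hub import InferenceClient
from pydantic import BaseModel

app = FastAPI()

DEFAULT_MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # hypothetical default


class ChatRequest(BaseModel):
    prompt: str
    model: str = DEFAULT_MODEL
    max_new_tokens: int = 256
    temperature: float = 0.7


@app.post("/chat")
def chat(req: ChatRequest):
    # Sync handler: FastAPI runs it in a threadpool, so the blocking call is fine.
    client = InferenceClient(model=req.model, token=os.environ["HF_TOKEN"])
    text = client.text_generation(
        req.prompt,
        max_new_tokens=req.max_new_tokens,
        temperature=req.temperature,
    )
    return {"response": text}


@app.websocket("/ws/chat")  # hypothetical path
async def ws_chat(websocket: WebSocket):
    await websocket.accept()
    client = InferenceClient(model=DEFAULT_MODEL, token=os.environ["HF_TOKEN"])
    try:
        while True:
            prompt = await websocket.receive_text()
            # One reply per message; true token streaming would send chunks instead.
            reply = await run_in_threadpool(
                client.text_generation, prompt, max_new_tokens=256
            )
            await websocket.send_text(reply)
    except WebSocketDisconnect:
        pass
```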
## Configuration
- Add an `HF_TOKEN` repository secret in your Space with access to the desired gated models.
- Optional: adjust available models in `app.py` under `Config.AVAILABLE_MODELS`.
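For orientation, a plausible shape for that config is shown below; `HF_TOKEN` and `AVAILABLE_MODELS` come from the notes above, while the model IDs and extra fields are illustrative assumptions:

```python
# Illustrative only: model IDs and the extra defaults are assumptions.
import os


class Config:
    # Repository secret set on the Space.
    HF_TOKEN: str = os.environ.get("HF_TOKEN", "")

    # Display name -> Hub model ID; edit this mapping to change the model list.
    AVAILABLE_MODELS: dict[str, str] = {
        "Llama 3.1 8B Instruct": "meta-llama/Llama-3.1-8B-Instruct",
        "Mistral 7B Instruct": "mistralai/Mistral-7B-Instruct-v0.3",
    }

    DEFAULT_MAX_NEW_TOKENS: int = 256
    DEFAULT_TEMPERATURE: float = 0.7
```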
## Development
```bash
git clone git@hf.co:spaces/RecentCoders/sheikh-llm
cd sheikh-llm
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
uvicorn app:app --reload --port 7860
```
## Deployment
```bash
./deploy.sh
```
After pushing, monitor the build logs on your Space and test the endpoints:
- https://recentcoders-sheikh-llm.hf.space/
- https://recentcoders-sheikh-llm.hf.space/chat
- https://recentcoders-sheikh-llm.hf.space/docs
- https://recentcoders-sheikh-llm.hf.space/health
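A quick way to confirm the listed endpoints respond after a deploy; this only checks HTTP status codes and assumes plain GETs are acceptable for each path:

```python
# Minimal smoke check for the deployed Space. Requires `pip install requests`.
import requests

BASE = "https://recentcoders-sheikh-llm.hf.space"

for path in ("/", "/chat", "/docs", "/health"):
    resp = requests.get(BASE + path, timeout=30)
    print(f"{path}: {resp.status_code}")
```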