Update docs/API_REFERENCE.md

docs/API_REFERENCE.md (+61 −34)
<!-- API_REFERENCE.md -->

# Shasha AI — API Reference

This document describes the public interfaces provided by each module.
---

## `hf_client.py`

### `get_inference_client(model_id: str, provider: str = "auto") → InferenceClient`

Creates and configures a Hugging Face Hub client for chat completions.

- **model_id**: HF model ID or external provider prefix (e.g. `"openai/gpt-4"`, `"gemini/pro"`, `"moonshotai/Kimi-K2-Instruct"`).
- **provider**: Override provider; one of `auto`, `groq`, `openai`, `gemini`, `fireworks`.
- **Returns**: an `InferenceClient` instance configured with the appropriate API key and billing target.
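The provider routing described above can be sketched as follows. This is a simplified illustration, not the actual `hf_client.py` implementation; `resolve_provider` and `KNOWN_PROVIDERS` are hypothetical names.

```python
# Hypothetical sketch of the provider routing described above;
# names and logic are illustrative, not the real hf_client.py code.
KNOWN_PROVIDERS = {"groq", "openai", "gemini", "fireworks"}

def resolve_provider(model_id: str, provider: str = "auto") -> str:
    """Explicit override wins; otherwise infer from the model_id prefix."""
    if provider != "auto":
        if provider not in KNOWN_PROVIDERS:
            raise ValueError(f"unknown provider: {provider}")
        return provider
    prefix = model_id.split("/", 1)[0]
    return prefix if prefix in KNOWN_PROVIDERS else "auto"
```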
---

## `models.py`

### `ModelInfo`

Dataclass representing model metadata.

- **name**: Human‑readable model name.
- **id**: Model identifier for API calls.
- **description**: Short description.
- **default_provider**: Preferred inference provider.

### `AVAILABLE_MODELS: List[ModelInfo]`

Registry of all supported models.

### `find_model(identifier: str) → Optional[ModelInfo]`

Looks up a model by name (case‑insensitive) or ID.
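A minimal sketch of the dataclass and lookup, using the field names documented above (the registry entry shown is an example; the real `models.py` contains its own list):

```python
from dataclasses import dataclass
from typing import List, Optional

# Field names follow the docs above; the example registry entry is illustrative.
@dataclass
class ModelInfo:
    name: str
    id: str
    description: str
    default_provider: str = "auto"

AVAILABLE_MODELS: List[ModelInfo] = [
    ModelInfo("Kimi K2 Instruct", "moonshotai/Kimi-K2-Instruct", "Example entry"),
]

def find_model(identifier: str) -> Optional[ModelInfo]:
    """Match by name (case-insensitive) or by exact ID."""
    for m in AVAILABLE_MODELS:
        if m.name.lower() == identifier.lower() or m.id == identifier:
            return m
    return None
```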
---

## `inference.py`

### `chat_completion(model_id: str, messages: List[Dict[str, str]], provider: str = None, max_tokens: int = 4096) → str`

Synchronously sends a chat completion request.

- **messages**: List of `{"role": "...", "content": "..."}` dicts.
- **provider**: Optional override; defaults to the model’s `default_provider`.
- **Returns**: Response content string.
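The expected `messages` shape follows the OpenAI chat format; the commented call below assumes the signature documented above:

```python
# OpenAI-style message list: each dict has exactly "role" and "content" keys.
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a hello-world in Python."},
]
# Hypothetical call (signature from the docs above):
# reply = chat_completion("moonshotai/Kimi-K2-Instruct", messages, max_tokens=256)
```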
### `stream_chat_completion(model_id: str, messages: List[Dict[str, str]], provider: str = None, max_tokens: int = 4096) → Generator[str, None, None]`

Streams a chat completion, yielding incremental content chunks.
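The typical consumption pattern looks like this; `fake_stream` below is a stand-in for `stream_chat_completion`, and its fixed-size chunking is purely illustrative:

```python
from typing import Generator

def fake_stream(text: str, size: int = 8) -> Generator[str, None, None]:
    """Stand-in for stream_chat_completion: yields content in small chunks."""
    for i in range(0, len(text), size):
        yield text[i : i + size]

# Accumulate chunks as they arrive, then join once at the end.
parts = []
for chunk in fake_stream("Hello from the model!"):
    parts.append(chunk)
result = "".join(parts)
```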
---

## `utils.py`

### `history_to_messages(history: History, system: str) → List[Dict]`

Converts internal history list to OpenAI‑style messages.
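A plausible implementation, assuming `History` is a list of `(prompt, response)` pairs (an assumption; the real type alias in `utils.py` may differ):

```python
from typing import Dict, List, Tuple

History = List[Tuple[str, str]]  # assumed shape: (prompt, response) pairs

def history_to_messages(history: History, system: str) -> List[Dict]:
    """Prepend the system prompt, then interleave user/assistant turns."""
    messages: List[Dict] = [{"role": "system", "content": system}]
    for user, assistant in history:
        messages.append({"role": "user", "content": user})
        messages.append({"role": "assistant", "content": assistant})
    return messages
```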
### `remove_code_block(text: str) → str`

Strips markdown code fences from AI output and returns raw code.
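A minimal sketch of the fence stripping (the real `utils.py` version may handle more edge cases, such as multiple blocks or unterminated fences):

```python
import re

def remove_code_block(text: str) -> str:
    """Return the contents of the first fenced block, else the text unchanged."""
    match = re.search(r"```[\w+-]*\n(.*?)```", text, re.DOTALL)
    return match.group(1).strip() if match else text.strip()
```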
### `parse_transformers_js_output(text: str) → Dict[str, str]`

Extracts `index.html`, `index.js`, and `style.css` from a multi‑file markdown output.

### `format_transformers_js_output(files: Dict[str, str]) → str`

Formats a dict of file contents into a single combined string with section headers.
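One plausible shape for the combined output; the `=== filename ===` header style is an assumption, not the exact format produced by `utils.py`:

```python
from typing import Dict

def format_transformers_js_output(files: Dict[str, str]) -> str:
    """Join file contents under '=== filename ===' section headers (style assumed)."""
    sections = []
    for filename, content in files.items():
        sections.append(f"=== {filename} ===\n{content.rstrip()}\n")
    return "\n".join(sections)
```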
*(Other utilities: multimodal image processing, search/replace, history rendering.)*

---

## `deploy.py`

### `send_to_sandbox(code: str) → str`

Wraps HTML code in a base64 data‑URI iframe for live preview.
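The data-URI wrapping can be sketched as below; the iframe attributes (`sandbox`, dimensions) are assumptions, not necessarily those used by `deploy.py`:

```python
import base64

def send_to_sandbox(code: str) -> str:
    """Encode HTML as a base64 data URI and wrap it in a sandboxed iframe."""
    encoded = base64.b64encode(code.encode("utf-8")).decode("ascii")
    data_uri = f"data:text/html;charset=utf-8;base64,{encoded}"
    # Attribute choices here are illustrative.
    return f'<iframe src="{data_uri}" sandbox="allow-scripts" width="100%" height="920px"></iframe>'
```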
### `load_project_from_url(url: str) → Tuple[str, str]`

Fetches `app.py` or `index.html` from a public HF Space URL, returning a status message and the code content.
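Resolving the raw-file URL for a Space might work roughly like this. The helper name and URL pattern are assumptions; the real function also performs the HTTP fetch and returns a status message alongside the code:

```python
def space_raw_url(url: str, filename: str) -> str:
    """Build a raw-file URL for a public HF Space (URL scheme assumed)."""
    # e.g. https://huggingface.co/spaces/owner/name -> "owner/name"
    space_id = url.rstrip("/").split("huggingface.co/spaces/", 1)[1]
    return f"https://huggingface.co/spaces/{space_id}/raw/main/{filename}"
```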
*(Also: HF Spaces deploy helpers `deploy_to_spaces()`, `deploy_to_spaces_static()`, and `deploy_to_user_space()`.)*

---
## `app.py`

### `generation_code(query, image, file, website_url, _setting, _history, _current_model, enable_search, language, provider) → Tuple[str, History, str, List[Dict]]`

Main generation handler bound to the “Generate” button.

- **Returns**:
  1. `code_str`: Generated (or edited) source code
  2. `new_history`: Updated prompt/response history
  3. `sandbox_html`: Live‑preview HTML iframe string
  4. `chat_msgs`: Chatbot‑style history for the UI

---

For more examples, see the Jupyter notebooks in `notebooks/` and the quick‑start guide in `QUICKSTART.md`.