---
language:
- en
- code
language_bcp47:
- en
- javascript
license: apache-2.0
tags:
- text-generation
- code
- javascript
- coding-assistant
- fine-tuning
- lora
- unsloth
- gpt-oss
- vllm
base_model: openai/gpt-oss-20b
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: gpt-oss-coder-v0.1-javascript
  results: []
---
# gpt-oss-coder-v0.1-javascript
A language-specialized coding model for JavaScript, fine-tuned from OpenAI’s open-weight gpt-oss base on a small, curated JavaScript dataset using Unsloth.
This release prioritizes practical code generation quality over benchmark scores and has been qualitatively validated on real prompts (e.g., completions, refactors, docstrings, tests).
- Status: Experimental preview (`v0.1-javascript`).
- Focus: JS coding tasks (function-level completion, small refactors, idiomatic patterns).
- Why small data? Faster iteration and lower cost while proving the value of specialization.
## Model Details
- Model type: Causal LM (decoder-only), JS-specialized fine-tune
- Base model: `openai/gpt-oss-20b` (open-weight, Apache-2.0)
- Fine-tuning: LoRA via Unsloth, minimal curated dataset (code snippets, tasks, transformations)
- License: Apache-2.0 (derivative weights released under Apache-2.0)
- Author / Maintainer: hokar3361
- Intended languages: JavaScript (ES6+); English prompts recommended
## Intended Use & Limitations
### Intended Use
- Code completion and synthesis for JavaScript
- Small refactors and idiomatic rewrites (illustrated below), test scaffolding, JSDoc/docstrings
- Snippet-level reasoning and bug fixes
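To make the task class concrete, here is a hand-written illustration (not a captured model output) of the kind of idiomatic rewrite these bullets describe: converting a callback-based function to `async`/`await`. The `fetchUser` API is hypothetical, defined here only so the snippet runs on its own.

```javascript
// Hypothetical callback-based API, defined only to keep the example runnable.
function fetchUser(id, callback) {
  setTimeout(() => callback(null, { id, name: `user-${id}` }), 10);
}

// Before: callback-style code a user might paste into a prompt.
function loadUserName(id, callback) {
  fetchUser(id, (err, user) => {
    if (err) return callback(err);
    callback(null, user.name);
  });
}

// After: the kind of idiomatic ES6+ rewrite this model targets.
// Wrap the callback API in a Promise, then use async/await.
function fetchUserPromise(id) {
  return new Promise((resolve, reject) => {
    fetchUser(id, (err, user) => (err ? reject(err) : resolve(user)));
  });
}

async function loadUserNameAsync(id) {
  const user = await fetchUserPromise(id);
  return user.name;
}

loadUserNameAsync(42).then(console.log); // logs "user-42"
```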
### Out of Scope / Limitations
- Not a substitute for static analysis, linters, or security review
- May hallucinate APIs or types; verify before production use
- Trained on small domain data, so expect gaps on rare frameworks or edge APIs
## Quickstart
```bash
vllm serve hokar3361/gpt-oss-coderjs-v0.1 \
  --async-scheduling \
  --max-model-len 4096 \
  --gpu-memory-utilization 0.90
```
For LoRA-only repos, add `--lora-modules` as described in the vLLM documentation. For merged weights, the command above is sufficient.
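Once serving, vLLM exposes an OpenAI-compatible HTTP API (on port 8000 by default). The sketch below assumes the default host/port from the command above and Node 18+ (for the built-in `fetch`); it sends a chat completion request to the served model:

```javascript
// Query the vLLM OpenAI-compatible chat endpoint; assumes the `vllm serve`
// command above is running on localhost:8000 (vLLM's default).
const response = await fetch("http://localhost:8000/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "hokar3361/gpt-oss-coderjs-v0.1", // served model name from the command above
    messages: [
      { role: "user", content: "Write a debounce(fn, ms) helper in modern JavaScript." },
    ],
    max_tokens: 512,
    temperature: 0.2,
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
```

Save this as an ES module (e.g., `query.mjs`) so top-level `await` is available, then run it with `node query.mjs`.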
## Acknowledgements

This work was made possible thanks to the open-weight release of gpt-oss by OpenAI, which provided a strong foundation under the Apache-2.0 license. Special thanks to the open-source community around Unsloth for enabling memory-efficient and rapid LoRA fine-tuning on limited hardware. We also thank the Hugging Face and vLLM ecosystems for lowering the barrier to experimentation.
## Disclaimer & Experimental Status

This model (`v0.1-javascript`) is highly experimental:
- Small data: Fine-tuned on a very small JavaScript-focused dataset, mainly to validate the workflow and feasibility of language specialization.
- Not production-ready: The model may generate incomplete, insecure, or non-idiomatic code; do not rely on it for production use without careful review.
- Benchmarks not representative: Due to issues in the current verification scripts, benchmark scores are not included. Assessment is based only on qualitative inspection of outputs, which show promising improvements but remain anecdotal.
- Early stage: This is only an initial exploration; future versions with larger, more diverse training corpora are expected to improve stability and coverage.
We share this release to contribute to the community and gather early feedback. Use responsibly, validate outputs, and treat this as a proof-of-concept.