Fine-Tuned BART Model for Translation

This repository contains a fine-tuned version of the BART model for translation tasks. The model is trained to translate human instructions into CLI (Command Line Interface) commands.

Model Details

  • Model: BART
  • Base Model: facebook/bart-large
  • Fine-Tuned on: Custom instruction-to-command translation dataset
  • Max Input Length: 128 tokens
  • Max Output Length: 50 tokens
  • Beam Search: 4 beams (see the sketch below)
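
A sketch of how the limits above map onto standard Transformers keyword arguments; the parameter names come from the transformers API, and only the values are taken from this model card:

```python
# Input-side limit from the card ("Max Input Length: 128 tokens").
tokenizer_kwargs = {"max_length": 128, "truncation": True}

# Output-side settings from the card ("Max Output Length", "Beam Search").
generation_kwargs = {"max_length": 50, "num_beams": 4}
```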

Usage

You can use this model to generate CLI commands from human instructions, either by loading it directly for inference or by integrating it into an application through the provided Gradio interface.

Inference

To perform inference, load the fine-tuned model and tokenizer with the Transformers library and call generate() on a tokenized instruction.
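
A minimal sketch, assuming the fine-tuned weights live in a local clone of this repository; the model_dir value and the example instruction are placeholders:

```python
from transformers import AutoTokenizer, BartForConditionalGeneration

# Placeholder: path to a local clone of this repository, or its Hub id.
model_dir = "."

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = BartForConditionalGeneration.from_pretrained(model_dir)
model.eval()

instruction = "list all files in the current directory, including hidden ones"

# Tokenize with the 128-token input limit noted above.
inputs = tokenizer(instruction, return_tensors="pt", max_length=128, truncation=True)

# Generate with 4-beam search and the 50-token output limit.
output_ids = model.generate(**inputs, max_length=50, num_beams=4)
command = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(command)  # the generated CLI command
```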

Gradio Interface

An interactive Gradio interface is provided for easy interaction with the model. Run it by executing gradio_app.py; the interface lets you enter a human instruction and returns the corresponding generated CLI command.
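
A sketch of what such an interface can look like; the actual contents of gradio_app.py may differ, and the function name and widget labels here are illustrative:

```python
import gradio as gr
from transformers import AutoTokenizer, BartForConditionalGeneration

model_dir = "."  # placeholder for a local clone of this repository
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = BartForConditionalGeneration.from_pretrained(model_dir)

def instruction_to_command(instruction: str) -> str:
    # Same settings as the inference example above.
    inputs = tokenizer(instruction, return_tensors="pt", max_length=128, truncation=True)
    output_ids = model.generate(**inputs, max_length=50, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

demo = gr.Interface(
    fn=instruction_to_command,
    inputs=gr.Textbox(label="Human instruction"),
    outputs=gr.Textbox(label="Generated CLI command"),
    title="Instruction-to-CLI Translator",
)

demo.launch()
```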

Model Files

  • config.json: Model configuration file
  • pytorch_model.bin: Model weights
  • tokenizer.json: Tokenizer file
  • vocab.txt: Vocabulary file

Acknowledgments

The base BART model and tokenizer are from the Hugging Face Transformers library (facebook/bart-large).
