Update README.md
README.md CHANGED
@@ -1,89 +1,22 @@
 ---
+license: mit
 library_name: sentence-transformers
 pipeline_tag: sentence-similarity
 tags:
 - sentence-transformers
 - feature-extraction
 - sentence-similarity
-
+- mteb
 ---
-
-
-
-This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
-
-<!--- Describe your model here -->
-
-## Usage (Sentence-Transformers)
-
-Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
-
-```
-pip install -U sentence-transformers
-```
-
-Then you can use the model like this:
-
-```python
-from sentence_transformers import SentenceTransformer
-sentences = ["This is an example sentence", "Each sentence is converted"]
-
-model = SentenceTransformer('{MODEL_NAME}')
-embeddings = model.encode(sentences)
-print(embeddings)
-```
-
-
-
-## Evaluation Results
-
-<!--- Describe how your model was evaluated -->
-
-For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
-
-
-## Training
-The model was trained with the parameters:
-
-**DataLoader**:
-
-`torch.utils.data.dataloader.DataLoader` of length 113 with parameters:
-```
-{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
-```
-
-**Loss**:
-
-`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
-
-Parameters of the fit()-Method:
-```
-{
-    "epochs": 1,
-    "evaluation_steps": 1000,
-    "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
-    "max_grad_norm": 1,
-    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
-    "optimizer_params": {
-        "lr": 2e-05
-    },
-    "scheduler": "WarmupLinear",
-    "steps_per_epoch": null,
-    "warmup_steps": 0,
-    "weight_decay": 0.01
-}
-```
-
-
-## Full Model Architecture
-```
-SentenceTransformer(
-  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
-  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
-  (2): Normalize()
-)
-```
-
-## Citing & Authors
-
-
+# Squirtle
+
+Squirtle is a distilled version of [bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5).
+
+## Intended purpose
+
+<span style="color:blue">This model is designed for use in semantic-autocomplete ([click here for a demo](https://mihaiii.github.io/semantic-autocomplete/)).</span>
+Make sure you also pass `pipelineParams={{ pooling: "cls", normalize: true }}`, since the component's default pooling is mean.
+
+## Usage
+
+Outside of [semantic-autocomplete](https://github.com/Mihaiii/semantic-autocomplete), you can use this model the same way as [bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5#usage).
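For the `pipelineParams` note added above, here is a minimal sketch of how those options might be passed when the component is used in React. Only `pipelineParams={{ pooling: "cls", normalize: true }}` comes from this card; the default-export name and the `model`, `options`, and `onChange` props are assumptions about the semantic-autocomplete API, and `"{MODEL_NAME}"` is a placeholder for this repo's id (as in the old card).

```tsx
// Hypothetical wiring sketch, not taken from this card or the component docs:
// only the pipelineParams value is documented above. The default-export name
// and the `model`, `options`, and `onChange` props are assumptions, and
// "{MODEL_NAME}" is a placeholder for this repo's id.
import SemanticAutocomplete from "semantic-autocomplete";

const options = [
  { value: "bulbasaur", label: "Bulbasaur" },
  { value: "charmander", label: "Charmander" },
  { value: "squirtle", label: "Squirtle" },
];

export default function SearchBox() {
  return (
    <SemanticAutocomplete
      // Assumed prop for picking the embedding model.
      model="{MODEL_NAME}"
      // From the card: this model expects CLS pooling and normalized
      // embeddings, while the component defaults to mean pooling.
      pipelineParams={{ pooling: "cls", normalize: true }}
      options={options}
      onChange={(selected: unknown) => console.log(selected)}
    />
  );
}
```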
         
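For the Usage section's pointer to the bge-base-en-v1.5 card: that card's snippets are Python, but since this model targets an in-browser component, a transformers.js sketch in the same spirit may be more relevant here. It assumes ONNX weights usable by transformers.js are available for this repo, again uses `"{MODEL_NAME}"` as a placeholder for the repo id, and reuses the CLS-pooling and normalization settings from above.

```ts
// Sketch under assumptions: "{MODEL_NAME}" is a placeholder for this repo's id,
// and ONNX weights usable by transformers.js are assumed to be available.
import { pipeline } from "@xenova/transformers";

// Build a feature-extraction pipeline and query it with the same settings the
// card prescribes for semantic-autocomplete: CLS pooling plus normalization.
const extractor = await pipeline("feature-extraction", "{MODEL_NAME}");

const sentences = ["This is an example sentence", "Each sentence is converted"];
const embeddings = await extractor(sentences, { pooling: "cls", normalize: true });

// The old card's architecture ends in a Normalize() module, so the vectors are
// unit-length either way and cosine similarity reduces to a dot product.
console.log(embeddings.dims); // [2, 384] per the 384-dim pooling layer in the old card
```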