---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - dense
  - generated_from_trainer
  - dataset_size:9020
  - loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
  - source_sentence: python multiprocessing show cpu count
    sentences:
      - |-
        def unique(seq):
            """Return the unique elements of a collection even if those elements are
               unhashable and unsortable, like dicts and sets"""
            cleaned = []
            for each in seq:
                if each not in cleaned:
                    cleaned.append(each)
            return cleaned
      - |-
        def is_in(self, point_x, point_y):
                """ Test if a point is within this polygonal region """

                point_array = array(((point_x, point_y),))
                vertices = array(self.points)
                winding = self.inside_rule == "winding"
                result = points_in_polygon(point_array, vertices, winding)
                return result[0]
      - |-
        def machine_info():
            """Retrieve core and memory information for the current machine.
            """
            import psutil
            BYTES_IN_GIG = 1073741824.0
            free_bytes = psutil.virtual_memory().total
            return [{"memory": float("%.1f" % (free_bytes / BYTES_IN_GIG)), "cores": multiprocessing.cpu_count(),
                     "name": socket.gethostname()}]
  - source_sentence: python subplot set the whole title
    sentences:
      - |-
        def set_title(self, title, **kwargs):
                """Sets the title on the underlying matplotlib AxesSubplot."""
                ax = self.get_axes()
                ax.set_title(title, **kwargs)
      - |-
        def moving_average(array, n=3):
            """
            Calculates the moving average of an array.

            Parameters
            ----------
            array : array
                The array to have the moving average taken of
            n : int
                The number of points of moving average to take
            
            Returns
            -------
            MovingAverageArray : array
                The n-point moving average of the input array
            """
            ret = _np.cumsum(array, dtype=float)
            ret[n:] = ret[n:] - ret[:-n]
            return ret[n - 1:] / n
      - |-
        def to_query_parameters(parameters):
            """Converts DB-API parameter values into query parameters.

            :type parameters: Mapping[str, Any] or Sequence[Any]
            :param parameters: A dictionary or sequence of query parameter values.

            :rtype: List[google.cloud.bigquery.query._AbstractQueryParameter]
            :returns: A list of query parameters.
            """
            if parameters is None:
                return []

            if isinstance(parameters, collections_abc.Mapping):
                return to_query_parameters_dict(parameters)

            return to_query_parameters_list(parameters)
  - source_sentence: python merge two set to dict
    sentences:
      - |-
        def make_regex(separator):
            """Utility function to create regexp for matching escaped separators
            in strings.

            """
            return re.compile(r'(?:' + re.escape(separator) + r')?((?:[^' +
                              re.escape(separator) + r'\\]|\\.)+)')
      - |-
        def csvtolist(inputstr):
            """ converts a csv string into a list """
            reader = csv.reader([inputstr], skipinitialspace=True)
            output = []
            for r in reader:
                output += r
            return output
      - |-
        def dict_merge(set1, set2):
            """Joins two dictionaries."""
            return dict(list(set1.items()) + list(set2.items()))
  - source_sentence: python string % substitution float
    sentences:
      - |-
        def _configure_logger():
            """Configure the logging module."""
            if not app.debug:
                _configure_logger_for_production(logging.getLogger())
            elif not app.testing:
                _configure_logger_for_debugging(logging.getLogger())
      - |-
        def __set__(self, instance, value):
                """ Set a related object for an instance. """

                self.map[id(instance)] = (weakref.ref(instance), value)
      - |-
        def format_float(value): # not used
            """Modified form of the 'g' format specifier.
            """
            string = "{:g}".format(value).replace("e+", "e")
            string = re.sub("e(-?)0*(\d+)", r"e\1\2", string)
            return string
  - source_sentence: bottom 5 rows in python
    sentences:
      - "def refresh(self, document):\n\t\t\"\"\" Load a new copy of a document from the database.  does not\n\t\t\treplace the old one \"\"\"\n\t\ttry:\n\t\t\told_cache_size = self.cache_size\n\t\t\tself.cache_size = 0\n\t\t\tobj = self.query(type(document)).filter_by(mongo_id=document.mongo_id).one()\n\t\tfinally:\n\t\t\tself.cache_size = old_cache_size\n\t\tself.cache_write(obj)\n\t\treturn obj"
      - |-
        def table_top_abs(self):
                """Returns the absolute position of table top"""
                table_height = np.array([0, 0, self.table_full_size[2]])
                return string_to_array(self.floor.get("pos")) + table_height
      - |-
        def get_dimension_array(array):
            """
            Get dimension of an array getting the number of rows and the max num of
            columns.
            """
            if all(isinstance(el, list) for el in array):
                result = [len(array), len(max([x for x in array], key=len,))]

            # elif array and isinstance(array, list):
            else:
                result = [len(array), 1]

            return result
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---

SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
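
Concretely, module (0) produces contextual token embeddings, module (1) mean-pools them into a single 384-dimensional sentence vector, and module (2) L2-normalizes that vector so dot product and cosine similarity coincide. A minimal PyTorch sketch of what the pooling and normalization stages compute; this is an illustration, not the library's exact implementation:

import torch
import torch.nn.functional as F

def pool_and_normalize(token_embeddings, attention_mask):
    # Pooling with pooling_mode_mean_tokens=True: average the token
    # embeddings, masking out padding positions.
    mask = attention_mask.unsqueeze(-1).float()    # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)  # (batch, 384)
    counts = mask.sum(dim=1).clamp(min=1e-9)       # avoid division by zero
    sentence_embeddings = summed / counts
    # Normalize(): unit-length vectors, so dot product equals cosine similarity.
    return F.normalize(sentence_embeddings, p=2, dim=1)

# Example: 2 sentences, 5 tokens each, 384-dim token embeddings
tokens = torch.randn(2, 5, 384)
mask = torch.ones(2, 5)
print(pool_and_normalize(tokens, mask).shape)  # torch.Size([2, 384])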

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Devy1/MiniLM-cosqa-512")
# Run inference
sentences = [
    'bottom 5 rows in python',
    'def table_top_abs(self):\n        """Returns the absolute position of table top"""\n        table_height = np.array([0, 0, self.table_full_size[2]])\n        return string_to_array(self.floor.get("pos")) + table_height',
    'def refresh(self, document):\n\t\t""" Load a new copy of a document from the database.  does not\n\t\t\treplace the old one """\n\t\ttry:\n\t\t\told_cache_size = self.cache_size\n\t\t\tself.cache_size = 0\n\t\t\tobj = self.query(type(document)).filter_by(mongo_id=document.mongo_id).one()\n\t\tfinally:\n\t\t\tself.cache_size = old_cache_size\n\t\tself.cache_write(obj)\n\t\treturn obj',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000,  0.4537, -0.0817],
#         [ 0.4537,  1.0000, -0.0463],
#         [-0.0817, -0.0463,  1.0000]])
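
Because the training data pairs natural-language queries with Python functions (CoSQA-style), a typical application is code search: embed a query and a corpus of snippets, then rank by cosine similarity. A minimal sketch; the query and snippets below are illustrative placeholders, not part of the model:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Devy1/MiniLM-cosqa-512")

query = "convert a csv string into a list"
snippets = [
    'def csvtolist(inputstr):\n    """converts a csv string into a list"""\n    ...',
    'def dict_merge(set1, set2):\n    """Joins two dictionaries."""\n    ...',
]

query_embedding = model.encode([query])
snippet_embeddings = model.encode(snippets)

# One row of cosine scores, one column per snippet
scores = model.similarity(query_embedding, snippet_embeddings)
best = scores.argmax().item()
print(snippets[best])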

Training Details

Training Dataset

Unnamed Dataset

  • Size: 9,020 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor: string, min 6 tokens, mean 9.67 tokens, max 21 tokens
    • positive: string, min 40 tokens, mean 86.17 tokens, max 256 tokens
  • Samples:
    • anchor: 1d array in char datatype in python
      positive:

        def _convert_to_array(array_like, dtype):
            """
            Convert Matrix attributes which are array-like or buffer to array.
            """
            if isinstance(array_like, bytes):
                return np.frombuffer(array_like, dtype=dtype)
            return np.asarray(array_like, dtype=dtype)

    • anchor: python condition non none
      positive:

        def _not(condition=None, **kwargs):
            """
            Return the opposite of input condition.

            :param condition: condition to process.

            :result: not condition.
            :rtype: bool
            """

            result = True

            if condition is not None:
                result = not run(condition, **kwargs)

            return result

    • anchor: accessing a column from a matrix in python
      positive:

        def get_column(self, X, column):
            """Return a column of the given matrix.

            Args:
                X: numpy.ndarray or pandas.DataFrame.
                column: int or str.

            Returns:
                np.ndarray: Selected column.
            """
            if isinstance(X, pd.DataFrame):
                return X[column].values

            return X[:, column]
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
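
For intuition: with this loss, each query in a batch treats its own snippet as the positive and every other snippet in the batch as a negative, which is why a large batch (512 here) helps. A minimal sketch of the computation under those parameters, not the library's exact code:

import torch
import torch.nn.functional as F

def mnrl(anchor_emb, positive_emb, scale=20.0):
    # Cosine similarity between every anchor and every in-batch positive.
    a = F.normalize(anchor_emb, dim=1)
    p = F.normalize(positive_emb, dim=1)
    scores = a @ p.T * scale  # (batch, batch), scale=20.0 as in this card
    # The matching positive for anchor i sits on the diagonal.
    labels = torch.arange(scores.size(0))
    return F.cross_entropy(scores, labels)

a = torch.randn(4, 384)
p = torch.randn(4, 384)
print(mnrl(a, p))  # scalar loss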
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 512
  • fp16: True
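
To approximate this run, a sketch of how these settings map onto the sentence-transformers trainer API. The single-pair dataset below is a stand-in copied from the widget examples; the card does not ship a loader for the actual 9,020-pair training set:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="MiniLM-cosqa-512",
    num_train_epochs=3,               # from the full list below
    per_device_train_batch_size=512,  # non-default
    fp16=True,                        # non-default
    learning_rate=5e-5,
)

# The training set must have "anchor" and "positive" columns, as in the samples above.
train_dataset = Dataset.from_dict({
    "anchor": ["python merge two set to dict"],
    "positive": ['def dict_merge(set1, set2):\n    """Joins two dictionaries."""\n'
                 '    return dict(list(set1.items()) + list(set2.items()))'],
})

trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
trainer.train()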

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 512
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss
0.0556 1 1.2259
0.1111 2 1.1589
0.1667 3 0.9588
0.2222 4 1.0265
0.2778 5 0.9783
0.3333 6 0.9464
0.3889 7 0.9527
0.4444 8 0.969
0.5 9 1.0237
0.5556 10 1.0134
0.6111 11 0.9297
0.6667 12 0.9877
0.7222 13 0.9531
0.7778 14 0.9156
0.8333 15 0.8613
0.8889 16 0.83
0.9444 17 0.8991
1.0 18 0.6764
1.0556 19 0.8545
1.1111 20 0.7454
1.1667 21 0.834
1.2222 22 0.7625
1.2778 23 0.7808
1.3333 24 0.817
1.3889 25 0.8032
1.4444 26 0.7854
1.5 27 0.7376
1.5556 28 0.8346
1.6111 29 0.8738
1.6667 30 0.7524
1.7222 31 0.72
1.7778 32 0.711
1.8333 33 0.7498
1.8889 34 0.7597
1.9444 35 0.7883
2.0 36 0.5038
2.0556 37 0.6932
2.1111 38 0.7273
2.1667 39 0.6723
2.2222 40 0.7059
2.2778 41 0.6159
2.3333 42 0.809
2.3889 43 0.6959
2.4444 44 0.7881
2.5 45 0.6861
2.5556 46 0.6545
2.6111 47 0.7235
2.6667 48 0.7031
2.7222 49 0.6679
2.7778 50 0.6835
2.8333 51 0.6773
2.8889 52 0.6972
2.9444 53 0.7043
3.0 54 0.4647

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 5.1.1
  • Transformers: 4.56.2
  • PyTorch: 2.8.0+cu128
  • Accelerate: 1.10.1
  • Datasets: 4.1.1
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}