Dataset Viewer

Each row in this dataset holds the full results JSON produced by one lm-evaluation-harness run of meta-llama/Llama-3.2-3B on GSM8K, flattened into the columns below.

| Column | Type |
|---|---|
| results | dict |
| group_subtasks | dict |
| configs | dict |
| versions | dict |
| n-shot | dict |
| higher_is_better | dict |
| n-samples | dict |
| config | dict |
| git_hash | string |
| date | float64 |
| pretty_env_info | string |
| transformers_version | string |
| upper_git_hash | null |
| tokenizer_pad_token | sequence (length 2) |
| tokenizer_eos_token | sequence (length 2) |
| tokenizer_bos_token | sequence (length 2) |
| eot_token_id | int64 |
| max_length | int64 |
| task_hashes | dict |
| model_source | string |
| model_name | string |
| model_name_sanitized | string |
| hf_log_model_name | null |
| system_instruction | null |
| system_instruction_sha | null |
| fewshot_as_multiturn | bool |
| chat_template | null |
| chat_template_sha | null |
| start_time | float64 |
| end_time | float64 |
| total_evaluation_time_seconds | string |
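The metric keys inside `results` combine a metric name and a filter name. A minimal sketch of pulling them apart, using an abridged copy of the first row below (no external libraries needed; with `datasets.load_dataset` each row would carry the same fields):

```python
# Abridged row, copied from the first record in this dataset.
row = {
    "results": {
        "gsm8k": {
            "alias": "gsm8k",
            "exact_match,strict-match": 0.0,
            "exact_match_stderr,strict-match": 0.0,
            "exact_match,flexible-extract": 0.12351130737321023,
            "exact_match_stderr,flexible-extract": 0.003806344450172962,
        }
    },
    "model_name": "meta-llama/Llama-3.2-3B",
}

# Metric keys are "<metric>,<filter>" pairs; split them apart.
scores = {}
for key, value in row["results"]["gsm8k"].items():
    if key == "alias":
        continue
    metric, _, filt = key.partition(",")
    scores[(metric, filt)] = value

print(scores[("exact_match", "flexible-extract")])  # 0.12351130737321023
```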
Row 1 (GSM8K train split, 7,473 questions)

results:
	{
  "gsm8k": {
    "alias": "gsm8k",
    "exact_match,strict-match": 0,
    "exact_match_stderr,strict-match": 0,
    "exact_match,flexible-extract": 0.12351130737321023,
    "exact_match_stderr,flexible-extract": 0.003806344450172962
  }
}

group_subtasks:
{
  "gsm8k": []
}

configs:
{
  "gsm8k": {
    "task": "gsm8k",
    "tag": [
      "math_word_problems"
    ],
    "dataset_path": "gsm8k",
    "dataset_name": "main",
    "training_split": "train",
    "test_split": "train",
    "fewshot_split": "train",
    "doc_to_text": "Question: {{question}}\nAnswer:",
    "doc_to_target": "{{answer}}",
    "description": "",
    "target_delimiter": " ",
    "fewshot_delimiter": "\n\n",
    "num_fewshot": 0,
    "metric_list": [
      {
        "metric": "exact_match",
        "aggregation": "mean",
        "higher_is_better": true,
        "ignore_case": true,
        "ignore_punctuation": false,
        "regexes_to_ignore": [
          ",",
          "\\$",
          "(?s).*#### ",
          "\\.$"
        ]
      }
    ],
    "output_type": "generate_until",
    "generation_kwargs": {
      "until": [
        "Question:",
        "</s>",
        "<|im_end|>"
      ],
      "do_sample": false,
      "temperature": 0,
      "top_p": 1
    },
    "repeats": 1,
    "filter_list": [
      {
        "name": "strict-match",
        "filter": [
          {
            "function": "regex",
            "regex_pattern": "#### (\\-?[0-9\\.\\,]+)",
            "group_select": null
          },
          {
            "function": "take_first",
            "regex_pattern": null,
            "group_select": null
          }
        ]
      },
      {
        "name": "flexible-extract",
        "filter": [
          {
            "function": "regex",
            "regex_pattern": "(-?[$0-9.,]{2,})|(-?[0-9]+)",
            "group_select": -1
          },
          {
            "function": "take_first",
            "regex_pattern": null,
            "group_select": null
          }
        ]
      }
    ],
    "should_decontaminate": false,
    "metadata": {
      "version": 3
    }
  }
}
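The `filter_list` in the config above is what turns a free-form completion into a gradable answer. A small illustration of both extraction strategies, using the regex patterns copied from the config (the sample completion is invented):

```python
import re

# Patterns copied from filter_list in the task config.
STRICT = r"#### (\-?[0-9\.\,]+)"        # strict-match
FLEX = r"(-?[$0-9.,]{2,})|(-?[0-9]+)"   # flexible-extract, group_select = -1

completion = "4 trays of 12 cookies is 4 * 12 = 48 cookies.\n#### 48"

# strict-match: only the canonical "#### <number>" format counts.
strict = re.search(STRICT, completion).group(1)

# flexible-extract: every number-like span matches; group_select = -1
# keeps the last one, then take_first keeps one filtered answer.
matches = [a or b for a, b in re.findall(FLEX, completion)]
flexible = matches[-1]

print(strict, flexible)  # both "48" here
```

This explains the score gap in `results`: a base model that never emits the `#### <number>` format scores 0 under strict-match even when flexible-extract finds the right number.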

versions:
{
  "gsm8k": 3
}

n-shot:
{
  "gsm8k": 0
}

higher_is_better:
{
  "gsm8k": {
    "exact_match": true
  }
}

n-samples:
{
  "gsm8k": {
    "original": 7473,
    "effective": 7473
  }
}
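The reported standard errors are consistent with the standard error of a mean over per-question 0/1 scores, computed from the sample variance (hence the n − 1). A sanity check against this row's flexible-extract score and effective sample count:

```python
import math

p = 0.12351130737321023  # exact_match, flexible-extract (this row)
n = 7473                 # effective n-samples for gsm8k

# Standard error of the mean of n Bernoulli scores, sample variance (ddof=1).
stderr = math.sqrt(p * (1 - p) / (n - 1))
print(stderr)  # ≈ 0.0038063, matching exact_match_stderr,flexible-extract
```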

config:
{
  "model": "hf",
  "model_args": "pretrained=meta-llama/Llama-3.2-3B,dtype=float16",
  "model_num_parameters": 3212749824,
  "model_dtype": "torch.float16",
  "model_revision": "main",
  "model_sha": "13afe5124825b4f3751f836b40dafda64c1ed062",
  "batch_size": "auto",
  "batch_sizes": [],
  "device": null,
  "use_cache": null,
  "limit": null,
  "bootstrap_iters": 100000,
  "gen_kwargs": null,
  "random_seed": 0,
  "numpy_seed": 1234,
  "torch_seed": 1234,
  "fewshot_seed": 1234
}

git_hash: 0b99443
date: 1734103215.14859

pretty_env_info:
	PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.11 (main, Apr 17 2023, 17:57:03) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-101-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      52 bits physical, 57 bits virtual
Byte Order:                         Little Endian
CPU(s):                             192
On-line CPU(s) list:                0-191
Vendor ID:                          AuthenticAMD
Model name:                         AMD EPYC 9654 96-Core Processor
CPU family:                         25
Model:                              17
Thread(s) per core:                 1
Core(s) per socket:                 96
Socket(s):                          2
Stepping:                           1
Frequency boost:                    enabled
CPU max MHz:                        3707.8120
CPU min MHz:                        1500.0000
BogoMIPS:                           4799.99
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization:                     AMD-V
L1d cache:                          6 MiB (192 instances)
L1i cache:                          6 MiB (192 instances)
L2 cache:                           192 MiB (192 instances)
L3 cache:                           768 MiB (24 instances)
NUMA node(s):                       2
NUMA node0 CPU(s):                  0-95
NUMA node1 CPU(s):                  96-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.0
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] Could not collect

transformers_version: 4.47.0
upper_git_hash: null

tokenizer_pad_token:
[
  "<|end_of_text|>",
  "128001"
]

tokenizer_eos_token:
[
  "<|end_of_text|>",
  "128001"
]

tokenizer_bos_token:
[
  "<|begin_of_text|>",
  "128000"
]

eot_token_id: 128001
max_length: 131072

task_hashes:
{
  "gsm8k": "7acf08a400c7a97fa29fef2256877679b4ae6b1e0bd5343713ce97aa9046c469"
}

model_source: hf
model_name: meta-llama/Llama-3.2-3B
model_name_sanitized: meta-llama__Llama-3.2-3B
hf_log_model_name: null
system_instruction: null
system_instruction_sha: null
fewshot_as_multiturn: false
chat_template: null
chat_template_sha: null
start_time: 759340.222666
end_time: 764433.054194
total_evaluation_time_seconds: 5092.831527994014
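The `date` column reads as a Unix epoch timestamp, while `start_time`/`end_time` look like relative clock readings whose difference reproduces `total_evaluation_time_seconds`. Checking both against this row's values:

```python
from datetime import datetime, timezone

date = 1734103215.14859                    # "date" column of this row
start, end = 759340.222666, 764433.054194  # start_time / end_time

stamp = datetime.fromtimestamp(date, tz=timezone.utc).isoformat()
print(stamp)  # 2024-12-13T15:20:15.148590+00:00

elapsed = end - start
print(elapsed)  # ≈ 5092.8315, matching total_evaluation_time_seconds
```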
Row 2 (GSM8K test split, 1,319 questions)

results:
{
  "gsm8k": {
    "alias": "gsm8k",
    "exact_match,strict-match": 0,
    "exact_match_stderr,strict-match": 0,
    "exact_match,flexible-extract": 0.10841546626231995,
    "exact_match_stderr,flexible-extract": 0.008563852506627495
  }
}

group_subtasks:
{
  "gsm8k": []
}

configs:
{
  "gsm8k": {
    "task": "gsm8k",
    "tag": [
      "math_word_problems"
    ],
    "dataset_path": "gsm8k",
    "dataset_name": "main",
    "training_split": "train",
    "test_split": "test",
    "fewshot_split": "train",
    "doc_to_text": "Question: {{question}}\nAnswer:",
    "doc_to_target": "{{answer}}",
    "description": "",
    "target_delimiter": " ",
    "fewshot_delimiter": "\n\n",
    "num_fewshot": 0,
    "metric_list": [
      {
        "metric": "exact_match",
        "aggregation": "mean",
        "higher_is_better": true,
        "ignore_case": true,
        "ignore_punctuation": false,
        "regexes_to_ignore": [
          ",",
          "\\$",
          "(?s).*#### ",
          "\\.$"
        ]
      }
    ],
    "output_type": "generate_until",
    "generation_kwargs": {
      "until": [
        "Question:",
        "</s>",
        "<|im_end|>"
      ],
      "do_sample": false,
      "temperature": 0,
      "top_p": 1
    },
    "repeats": 1,
    "filter_list": [
      {
        "name": "strict-match",
        "filter": [
          {
            "function": "regex",
            "regex_pattern": "#### (\\-?[0-9\\.\\,]+)",
            "group_select": null
          },
          {
            "function": "take_first",
            "regex_pattern": null,
            "group_select": null
          }
        ]
      },
      {
        "name": "flexible-extract",
        "filter": [
          {
            "function": "regex",
            "regex_pattern": "(-?[$0-9.,]{2,})|(-?[0-9]+)",
            "group_select": -1
          },
          {
            "function": "take_first",
            "regex_pattern": null,
            "group_select": null
          }
        ]
      }
    ],
    "should_decontaminate": false,
    "metadata": {
      "version": 3
    }
  }
}

versions:
{
  "gsm8k": 3
}

n-shot:
{
  "gsm8k": 0
}

higher_is_better:
{
  "gsm8k": {
    "exact_match": true
  }
}

n-samples:
{
  "gsm8k": {
    "original": 1319,
    "effective": 1319
  }
}

config:
{
  "model": "hf",
  "model_args": "pretrained=meta-llama/Llama-3.2-3B,dtype=float16",
  "model_num_parameters": 3212749824,
  "model_dtype": "torch.float16",
  "model_revision": "main",
  "model_sha": "13afe5124825b4f3751f836b40dafda64c1ed062",
  "batch_size": "1",
  "batch_sizes": [],
  "device": null,
  "use_cache": null,
  "limit": null,
  "bootstrap_iters": 100000,
  "gen_kwargs": null,
  "random_seed": 0,
  "numpy_seed": 1234,
  "torch_seed": 1234,
  "fewshot_seed": 1234
}

git_hash: 0b99443
date: 1734110658.501217

pretty_env_info:
(identical to the environment report in row 1 above: same machine, drivers, and library versions)
transformers_version: 4.47.0
upper_git_hash: null

tokenizer_pad_token:
[
  "<|end_of_text|>",
  "128001"
]

tokenizer_eos_token:
[
  "<|end_of_text|>",
  "128001"
]

tokenizer_bos_token:
[
  "<|begin_of_text|>",
  "128000"
]

eot_token_id: 128001
max_length: 131072

task_hashes:
{
  "gsm8k": "84d59512bae5be7ab7a8aa6ee025db3460ce8947c48d851de0474f8f29fc2668"
}

model_source: hf
model_name: meta-llama/Llama-3.2-3B
model_name_sanitized: meta-llama__Llama-3.2-3B
hf_log_model_name: null
system_instruction: null
system_instruction_sha: null
fewshot_as_multiturn: false
chat_template: null
chat_template_sha: null
start_time: 766781.339171
end_time: 767712.27869
total_evaluation_time_seconds: 930.9395187930204
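The two rows score the same checkpoint on different splits (flexible-extract: 0.1235 on 7,473 train questions vs. 0.1084 on 1,319 test questions). A back-of-the-envelope two-proportion z-score from the reported values (not part of the harness output) suggests the gap is within sampling noise:

```python
import math

# Flexible-extract score and reported stderr for each run.
p1, se1 = 0.12351130737321023, 0.003806344450172962  # train split
p2, se2 = 0.10841546626231995, 0.008563852506627495  # test split

z = (p1 - p2) / math.sqrt(se1**2 + se2**2)
print(round(z, 2))  # ~1.6: below the usual 1.96 threshold at the 5% level
```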