---
dataset_info:
  features:
    - name: image_id
      dtype: string
    - name: label
      dtype: int32
    - name: clip_model
      dtype: string
    - name: clip_features
      list: float32
    - name: vector_dim
      dtype: int32
    - name: timestamp
      dtype: timestamp[ns]
  splits:
    - name: clip_vit_b32_train
      num_bytes: 2723761042
      num_examples: 1281167
    - name: clip_vit_laion_b32_train
      num_bytes: 2789100559
      num_examples: 1281167
    - name: clip_vit_laion_b32_validation
      num_bytes: 108850000
      num_examples: 50000
    - name: clip_vit_b16_train
      num_bytes: 2777570056
      num_examples: 1281167
    - name: clip_vit_b16_validation
      num_bytes: 108400000
      num_examples: 50000
    - name: clip_vit_l14_train
      num_bytes: 4090766231
      num_examples: 1281167
    - name: clip_vit_l14_validation
      num_bytes: 159650000
      num_examples: 50000
    - name: clip_vit_laion_bigg14_train
      num_bytes: 6728689084
      num_examples: 1281167
    - name: clip_vit_laion_bigg14_validation
      num_bytes: 262600000
      num_examples: 50000
    - name: clip_vit_b32_validation
      num_bytes: 108400000
      num_examples: 50000
    - name: clip_vit_b32_test
      num_bytes: 216800000
      num_examples: 100000
    - name: clip_vit_b16_test
      num_bytes: 216800000
      num_examples: 100000
    - name: clip_vit_laion_b32_test
      num_bytes: 217700000
      num_examples: 100000
    - name: clip_vit_l14_test
      num_bytes: 319300000
      num_examples: 100000
    - name: clip_vit_laion_h14_test
      num_bytes: 422500000
      num_examples: 100000
  download_size: 25438949728
  dataset_size: 21250886972
configs:
  - config_name: default
    data_files:
      - split: clip_vit_b32_train
        path: data/clip_vit_b32_train-*
      - split: clip_vit_b32_validation
        path: data/clip_vit_b32_validation-*
      - split: clip_vit_laion_b32_train
        path: data/clip_vit_laion_b32_train-*
      - split: clip_vit_laion_b32_validation
        path: data/clip_vit_laion_b32_validation-*
      - split: clip_vit_b16_train
        path: data/clip_vit_b16_train-*
      - split: clip_vit_b16_validation
        path: data/clip_vit_b16_validation-*
      - split: clip_vit_l14_train
        path: data/clip_vit_l14_train-*
      - split: clip_vit_l14_validation
        path: data/clip_vit_l14_validation-*
      - split: clip_vit_laion_bigg14_train
        path: data/clip_vit_laion_bigg14_train-*
      - split: clip_vit_laion_bigg14_validation
        path: data/clip_vit_laion_bigg14_validation-*
      - split: clip_vit_b32_test
        path: data/clip_vit_b32_test-*
      - split: clip_vit_b16_test
        path: data/clip_vit_b16_test-*
      - split: clip_vit_laion_b32_test
        path: data/clip_vit_laion_b32_test-*
      - split: clip_vit_l14_test
        path: data/clip_vit_l14_test-*
      - split: clip_vit_laion_h14_test
        path: data/clip_vit_laion_h14_test-*
task_categories:
  - feature-extraction
  - image-feature-extraction
license: mit
tags:
  - features
  - image_features
  - extracted_features
  - precomputed_features
  - imagenet
  - imagenet_features
  - clip_vit
  - variants
size_categories:
  - 1M<n<10M
---

Update: 10/2/2025

After grilling me for 20 minutes, Claude said I'm not being careful enough with my dataset curation, so I've included the preparer script as well.

Claude Sonnet 4.5 is kind of a chad.

Update: 9/26/2025

Having to download this whole repo is annoying, so I'm making sure the splits are named train/val/test (where they exist) and that each split name carries the CLIP model name.
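Given that naming scheme, pulling a single split can be sketched with a small helper. The repo id below is a placeholder, not the actual dataset path:

```python
def split_name(model: str, part: str) -> str:
    """Compose a split name such as 'clip_vit_b32_validation'."""
    return f"{model}_{part}"

def load_split(repo_id: str, model: str, part: str):
    # repo_id is whatever this dataset is published under -- placeholder here.
    from datasets import load_dataset  # Hugging Face `datasets` library
    return load_dataset(repo_id, split=split_name(model, part))

# e.g. load_split("user/this-dataset", "clip_vit_b32", "validation")
```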

Older non-dated updates

Everything was extracted with torch configured for deterministic algorithms, using seed 42 on an A100 in Colab; so any variance from expectation comes down to CUDA nondeterminism.
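A minimal sketch of that determinism setup, using the standard PyTorch flags (the author's exact script isn't reproduced here):

```python
import os
import torch

# Some CUDA ops require this env var before deterministic mode will work.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

torch.manual_seed(42)                        # seed used for extraction
torch.use_deterministic_algorithms(True)     # error out on nondeterministic ops
torch.backends.cudnn.benchmark = False       # disable autotuned (nondeterministic) kernels
```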

It's a little quirky:

- Most models have train, test, and val splits; some are missing one or more.
- Most of the splits have a proper `image_id` MD5 for verification.
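The exact hashing scheme isn't documented here; a plausible sketch, assuming `image_id` is the hex MD5 digest of the raw image bytes:

```python
import hashlib

def image_id(image_bytes: bytes) -> str:
    # Assumption: image_id is the MD5 of the raw image bytes, hex-encoded.
    return hashlib.md5(image_bytes).hexdigest()
```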

The prompts used were the direct, literal classification names;

no "a photo of" templates or any such variation; just the classification text.
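Concretely, the prompt list is just the raw class names (the class names below are illustrative):

```python
class_names = ["tench", "goldfish", "tiger shark"]  # illustrative ImageNet class names

templated = [f"a photo of a {c}" for c in class_names]  # NOT what was used
literal = list(class_names)                             # what was actually used
```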

This is a series of CLIP-ViT extracted feature maps from a 256x256 cropped-and-resized ImageNet variant hosted here on Hugging Face.

I ran the processor at 224x224 and then extracted features for the entire dataset batch-sequentially, while simultaneously capturing the classifiers and classifications associated with the images for downstream testing and assessment.
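The batch-sequential pass can be sketched generically; `encode` stands in for the CLIP image forward pass (e.g. `model.get_image_features` after the processor), which is not reproduced here:

```python
from typing import Callable, List, Sequence

def extract_features(items: Sequence,
                     encode: Callable[[Sequence], List[list]],
                     batch_size: int = 256) -> List[list]:
    """Run `encode` over `items` one batch at a time, preserving order."""
    features: List[list] = []
    for start in range(0, len(items), batch_size):
        batch = items[start:start + batch_size]
        features.extend(encode(batch))  # one feature vector per item
    return features
```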

Academic and research purpose use only.

clip-vit-large-patch14 variants do exist in the splits.

clip-vit-bigG is the 1280-dim variant and it does exist; it took quite a while to extract, and it is in fact missing its test split. Sorry about that.

There are several clip-vit-base variants from different sources; each was extracted using the same process as the others.
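As a downstream sketch, here is nearest-class lookup over precomputed vectors in pure Python, using toy rows shaped like this dataset's records (`clip_features`, `label`); the real vectors are much higher-dimensional:

```python
import math
from typing import List, Sequence

def cosine(a: Sequence[float], b: Sequence[float]) -> float:
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest_label(feature: Sequence[float], rows: List[dict]) -> int:
    # rows mimic dataset records with 'clip_features' and 'label' fields
    return max(rows, key=lambda r: cosine(feature, r["clip_features"]))["label"]
```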