---
license: cc0-1.0
dataset_info:
  features:
    - name: uuid
      dtype: string
    - name: image
      dtype: image
    - name: reference
      dtype: string
  splits:
    - name: train
      num_bytes: 3513634757
      num_examples: 1000
    - name: val
      num_bytes: 866045160
      num_examples: 250
    - name: test
      num_bytes: 1722946225
      num_examples: 500
  download_size: 6101723890
  dataset_size: 6102626142
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: val
        path: data/val-*
      - split: test
        path: data/test-*
task_categories:
  - image-to-text
language:
  - en
tags:
  - art
size_categories:
  - 1K<n<10K
---

# Dataset Card for docent

This dataset contains works of art paired with expert-written, detailed descriptions from the U.S. National Gallery of Art, published as part of DOCENT. It was introduced in "PoSh: Using Scene Graphs To Guide LLMs-as-a-Judge For Detailed Image Descriptions". A full description of its collection methodology is available in the paper: https://arxiv.org/abs/2510.19060.

## Dataset Details

- Language: English
- License: CC0 1.0

## Dataset Sources

- Paper: https://arxiv.org/abs/2510.19060
- Repository: https://github.com/amith-ananthram/posh

## Uses

The intended use of this dataset is as a benchmark for evaluating detailed image description, particularly for artwork. It contains three splits: a training set of 1,000 images, a validation set of 250 images, and a test set of 500 images. When evaluating model generations, we recommend reporting PoSh scores (https://github.com/amith-ananthram/posh) or another reproducible metric that correlates more strongly with the human judgments in https://huggingface.co/datasets/amitha/docent-eval-coarse.
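Assuming the dataset is hosted on the Hugging Face Hub under the repo id `amitha/docent` (inferred from this card's location, not stated explicitly), the splits described above can be loaded with the `datasets` library. A minimal sketch:

```python
def load_docent(repo_id: str = "amitha/docent"):
    """Download DOCENT and return a DatasetDict with train/val/test splits.

    The default repo id is an assumption inferred from this card; pass
    your own if the dataset lives under a different namespace.
    """
    from datasets import load_dataset  # pip install datasets

    return load_dataset(repo_id)


# Split sizes stated in this card's metadata.
EXPECTED_NUM_EXAMPLES = {"train": 1000, "val": 250, "test": 500}
```

After loading, each split should contain the number of rows listed above, with `uuid`, `image`, and `reference` fields per row.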

## Dataset Structure

Each row in the dataset corresponds to a work of art.

- `uuid`: a unique identifier for the work of art
- `image`: an image of the work of art (useful for multimodal metrics)
- `reference`: an expert-written reference description of the artwork from the U.S. National Gallery of Art
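As a sketch of this schema, a single row can be modeled as a plain dictionary. The values below are hypothetical; in the real dataset, the `image` field holds a decoded `PIL.Image` once loaded via the `datasets` library:

```python
# Hypothetical example row mirroring the card's schema; values are illustrative only.
example_row = {
    "uuid": "0000-example-uuid",  # unique identifier (string)
    "image": None,                # decoded image (PIL.Image) in the real dataset
    "reference": (                # expert-written description from the NGA
        "A detailed, expert-written description of the artwork."
    ),
}

# Every row exposes exactly these three features.
assert set(example_row) == {"uuid", "image", "reference"}
```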

## Dataset Creation

### Curation Rationale

This dataset was collected to evaluate detailed image description, especially for artwork.

### Source Data

The images/artwork are all in the public domain and provided by the U.S. National Gallery of Art.

The expert-written references were published by the U.S. National Gallery of Art as part of their Open Data Initiative (https://github.com/NationalGalleryOfArt/opendata).

### Annotations

#### Annotation process

The expert-written reference descriptions were composed according to the U.S. National Gallery of Art's Accessibility Guidelines: https://www.nga.gov/visit/accessibility/collection-image-descriptions.

#### Who are the annotators?

An expert in art history from the U.S. National Gallery of Art.

## Bias, Risks, and Limitations

While this work aims to benefit accessibility applications for blind and low-vision users (the reference descriptions were written according to the U.S. National Gallery of Art's Accessibility Guidelines: https://www.nga.gov/visit/accessibility/collection-image-descriptions), we acknowledge that it assumes a one-size-fits-all approach to assistive text. Ideally, such a benchmark would include different styles of accessibility text, more representative of diverse user needs. However, it is our hope that, because the reference descriptions are extremely detailed, models that perform well in this more challenging setting will be able to adapt to a wide range of description needs.

Additionally, as with other computer vision systems, this work could theoretically be applied to surveillance contexts, but our focus on detailed description does not introduce novel privacy risks beyond those inherent to existing image analysis technologies. The primary intended application, improving accessibility, aligns with beneficial societal outcomes.

## Citation

BibTeX:

```bibtex
@misc{ananthram2025poshusingscenegraphs,
  title={PoSh: Using Scene Graphs To Guide LLMs-as-a-Judge For Detailed Image Descriptions},
  author={Amith Ananthram and Elias Stengel-Eskin and Lorena A. Bradford and Julia Demarest and Adam Purvis and Keith Krut and Robert Stein and Rina Elster Pantalony and Mohit Bansal and Kathleen McKeown},
  year={2025},
  eprint={2510.19060},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2510.19060},
}
```

APA:

Ananthram, A., Stengel-Eskin, E., Bradford, L. A., Demarest, J., Purvis, A., Krut, K., Stein, R., Pantalony, R. E., Bansal, M., & McKeown, K. (2025). PoSh: Using Scene Graphs To Guide LLMs-as-a-Judge For Detailed Image Descriptions. arXiv preprint arXiv:2510.19060.

## Dataset Card Authors

Amith Ananthram

## Dataset Card Contact

amith@cs.columbia.edu