---
tags:
  - ocr
  - document-processing
  - deepseek
  - deepseek-ocr
  - markdown
  - uv-script
  - generated
---
Document OCR using DeepSeek-OCR
This dataset contains markdown-formatted OCR results for the images in NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset, generated with DeepSeek-OCR.
Processing Details
- Source Dataset: NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset
- Model: deepseek-ai/DeepSeek-OCR
- Number of Samples: 100
- Processing Time: 3.0 min
- Processing Date: 2025-10-22 18:00 UTC
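These numbers are consistent with the throughput reported under Performance below; as a quick back-of-the-envelope check:

# 100 images in 3.0 minutes
images = 100
minutes = 3.0
print(round(images / (minutes * 60), 2))  # 0.56 images/second, roughly the ~0.6 reported under Performance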
Configuration
- Image Column: image
- Output Column: markdown
- Dataset Split: train
- Batch Size: 512
- Resolution Mode: large
- Base Size: 1280
- Image Size: 1280
- Crop Mode: False
- Max Model Length: 8,192 tokens
- Max Output Tokens: 8,192
- GPU Memory Utilization: 80.0%
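For reference, here is a minimal sketch of how these settings might map onto a vLLM engine and sampling configuration; the variable names and the trust_remote_code flag are assumptions for illustration, not the script's actual code:

from vllm import LLM, SamplingParams

# Illustrative mapping of the configuration above onto vLLM arguments
llm = LLM(
    model="deepseek-ai/DeepSeek-OCR",
    max_model_len=8192,            # Max Model Length
    gpu_memory_utilization=0.80,   # GPU Memory Utilization
    trust_remote_code=True,        # assumption: the model ships custom code
)
sampling = SamplingParams(max_tokens=8192)  # Max Output Tokens
batch_size = 512                            # images submitted per generate() call (Batch Size)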
Model Information
DeepSeek-OCR is a state-of-the-art document OCR model that excels at:
- LaTeX equations - Mathematical formulas preserved in LaTeX format
- Tables - Extracted and formatted as HTML/markdown
- Document structure - Headers, lists, and formatting maintained
- Image grounding - Spatial layout and bounding box information
- Complex layouts - Multi-column and hierarchical structures
- Multilingual - Supports multiple languages
Resolution Modes
- Tiny (512×512): Fast processing, 64 vision tokens
- Small (640×640): Balanced speed/quality, 100 vision tokens
- Base (1024×1024): High quality, 256 vision tokens
- Large (1280×1280): Maximum quality, 400 vision tokens
- Gundam (dynamic): Adaptive multi-tile processing for large documents
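The modes above amount to a small lookup from mode name to target size and vision-token budget; the dictionary below is an illustrative summary of that list, not code taken from the script:

# Illustrative summary of the resolution modes listed above
RESOLUTION_MODES = {
    "tiny":  {"size": 512,  "vision_tokens": 64},
    "small": {"size": 640,  "vision_tokens": 100},
    "base":  {"size": 1024, "vision_tokens": 256},
    "large": {"size": 1280, "vision_tokens": 400},
    # "gundam" tiles large pages dynamically, so its token count varies per image
}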
Dataset Structure
The dataset contains all original columns plus:
- markdown: The extracted text in markdown format with preserved structure
- inference_info: JSON list tracking all OCR models applied to this dataset
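The Usage example below reads two fields from each inference_info entry, column_name and model_id; a single entry therefore looks roughly like the sketch below (other fields may also be present):

import json

# Hypothetical shape of one inference_info entry; only column_name and model_id
# are confirmed by the Usage example below
entry = {"column_name": "markdown", "model_id": "deepseek-ai/DeepSeek-OCR"}
print(json.dumps([entry], indent=2))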
Usage
from datasets import load_dataset
import json
# Load the dataset
dataset = load_dataset("<output-dataset>", split="train")  # replace <output-dataset> with this dataset's ID
# Access the markdown text
for example in dataset:
    print(example["markdown"])
    break
# View all OCR models applied to this dataset
inference_info = json.loads(dataset[0]["inference_info"])
for info in inference_info:
    print(f"Column: {{info['column_name']}} - Model: {{info['model_id']}}")
Reproduction
This dataset was generated using the uv-scripts/ocr DeepSeek OCR vLLM script:
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr-vllm.py \
    NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset \
    <output-dataset> \
    --resolution-mode large \
    --image-column image
Performance
- Processing Speed: ~0.6 images/second
- Processing Method: Batch processing with vLLM (2-3x speedup over sequential)
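Below is a minimal sketch of what batch processing with vLLM looks like, assuming PIL images from the source dataset and an assumed prompt format; the script's actual prompt construction and batching logic may differ:

from datasets import load_dataset
from vllm import LLM, SamplingParams

ds = load_dataset("NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset", split="train")
images = [ds[i]["image"] for i in range(8)]  # small batch for illustration

llm = LLM(model="deepseek-ai/DeepSeek-OCR", trust_remote_code=True, gpu_memory_utilization=0.80)
sampling = SamplingParams(max_tokens=8192)

# One batched generate() call lets vLLM schedule all requests together,
# which is where the speedup over a sequential per-image loop comes from.
requests = [
    {"prompt": "<image>\nConvert the document to markdown.",  # assumed prompt format
     "multi_modal_data": {"image": img}}
    for img in images
]
outputs = llm.generate(requests, sampling)
markdown = [out.outputs[0].text for out in outputs]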
Generated with 🤗 UV Scripts
