---
license: cdla-permissive-2.0
task_categories:
  - image-text-to-text
tags:
  - code
  - ocr
size_categories:
  - 1M<n<10M
pretty_name: SynthCodeNet
---

# SynthCodeNet
 
SynthCodeNet is a multimodal dataset created for training the SmolDocling model. It consists of over 9.3 million synthetically generated image-text pairs covering code snippets in 56 programming languages. The code was drawn from permissively licensed sources, and the images were rendered synthetically at 120 DPI using LaTeX and Pygments to ensure visual diversity.
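For a sense of how the Pygments half of such a rendering pipeline might work, here is a minimal sketch that turns a snippet into a PNG with Pygments' `ImageFormatter`. The font size and other options are illustrative assumptions, not the dataset's actual rendering parameters (which also include LaTeX-based rendering).

```python
from pygments import highlight
from pygments.lexers import get_lexer_by_name
from pygments.formatters import ImageFormatter

code = 'def greet(name):\n    return f"Hello, {name}!"\n'

# Render the snippet to a PNG; ImageFormatter requires Pillow.
# font_size and line_numbers are illustrative choices, not the
# dataset's actual rendering settings.
formatter = ImageFormatter(image_format="PNG", font_size=14, line_numbers=False)
png_bytes = highlight(code, get_lexer_by_name("python"), formatter)

with open("snippet.png", "wb") as f:
    f.write(png_bytes)
```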
## Dataset Statistics
- Total samples: 9,334,257
- Training set: 8,400,838
- Validation set: 466,703
- Test set: 466,716
- Modalities: Image, Text
- Image generation: Synthetic (LaTeX, Pygments)
## Programming Languages & Sample Counts
| Language | Samples | Language | Samples | Language | Samples | 
|---|---|---|---|---|---|
| Ada | 20,094 | Dart | 20,415 | Matlab | 1,170 | 
| Awk | 22,334 | Dockerfile | 99,459 | MoonScript | 6,237 | 
| Bash | 98,950 | Elixir | 20,387 | Nim | 37,236 | 
| C | 599,096 | Erlang | 20,039 | OCaml | 32,297 | 
| C# | 303,720 | FORTRAN | 34,023 | ObjectiveC | 158,398 | 
| C++ | 698,870 | Forth | 5,548 | Octave | 2,537 | 
| CMake | 19,910 | Go | 333,722 | PHP | 249,566 | 
| COBOL | 5,153 | HTML | 245,228 | Pascal | 28,254 | 
| CSS | 236,596 | Haskell | 39,848 | Perl | 33,938 | 
| Ceylon | 8,369 | Haxe | 20,070 | Prolog | 2,058 | 
| Clojure | 20,765 | Java | 698,421 | Python | 1,797,063 | 
| Crystal | 24,720 | JavaScript | 530,899 | Racket | 4,340 | 
| Cuda | 142,344 | Julia | 29,681 | Ruby | 348,976 | 
| Cython | 22,136 | Kotlin | 292,986 | Rust | 344,491 | 
| D | 20,338 | Lisp | 29,749 | SML | 19,333 | 
| Lua | 25,328 | SQL | 493,412 | YAML | 249,011 | 
| Scala | 273,825 | Scheme | 23,242 | VisualBasic | 13,908 | 
| Swift | 25,374 | TypeScript | 255,475 | XML | 246,209 | 
| bc | 249 | dc | 1,713 | | |
## Data Format
Each dataset entry is structured as follows:
```
{
  "images": [PIL Image],
  "texts": [
    {
      "assistant": "<loc_x0><loc_y0><loc_x1><loc_y1><_Language_>CODE_SNIPPET</code>",
      "source": "SynthCodeNetNoImageTag",
      "user": "<code>"
    }
  ]
}
```
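As a usage sketch, here is how an entry could be loaded and its `assistant` string parsed. The Hub repository ID `ds4sd/SynthCodeNet` and the numeric form of the location tokens are assumptions for illustration; check the dataset page for the actual path and token format.

```python
import re
from datasets import load_dataset

# Repository ID is an assumption; substitute the dataset's actual Hub path.
ds = load_dataset("ds4sd/SynthCodeNet", split="train", streaming=True)
sample = next(iter(ds))

image = sample["images"][0]          # PIL image of the rendered snippet
assistant = sample["texts"][0]["assistant"]

# Illustrative parse: four location tokens, a language tag such as
# <_Python_>, then the transcribed code terminated by </code>.
match = re.match(
    r"(?:<loc_\d+>){4}<_(?P<lang>[^_]+)_>(?P<code>.*)</code>",
    assistant,
    re.DOTALL,
)
if match:
    print(match.group("lang"))
    print(match.group("code")[:200])
```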
## Intended Use

- Training multimodal models for document understanding, specifically:
  - Code snippet extraction and transcription
 
## Citation
If you use SynthCodeNet, please cite:
```bibtex
@article{nassar2025smoldocling,
  title={SmolDocling: An ultra-compact vision-language model for end-to-end multi-modal document conversion},
  author={Nassar, Ahmed and Marafioti, Andres and Omenetti, Matteo and Lysak, Maksym and Livathinos, Nikolaos and Auer, Christoph and Morin, Lucas and de Lima, Rafael Teixeira and Kim, Yusik and Gurbuz, A Said and others},
  journal={arXiv preprint arXiv:2503.11576},
  year={2025}
}
```

