LICENSE
ADDED
@@ -0,0 +1,21 @@

MIT License

Copyright (c) 2025 DeepSeek

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
README.md
CHANGED
@@ -1,14 +1,180 @@
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->

<div align="center">
  <img src="assets/logo.svg" width="60%" alt="DeepSeek AI" />
</div>
<hr>
<div align="center">
  <a href="https://www.deepseek.com/" target="_blank">
    <img alt="Homepage" src="assets/badge.svg" />
  </a>
  <a href="https://huggingface.co/deepseek-ai/DeepSeek-OCR" target="_blank">
    <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" />
  </a>
</div>

<div align="center">
  <a href="https://discord.gg/Tc7c45Zzu5" target="_blank">
    <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" />
  </a>
  <a href="https://twitter.com/deepseek_ai" target="_blank">
    <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" />
  </a>
</div>

<p align="center">
  <a href="https://huggingface.co/deepseek-ai/DeepSeek-OCR"><b>📥 Model Download</b></a> |
  <a href="https://github.com/deepseek-ai/DeepSeek-OCR/blob/main/DeepSeek_OCR_paper.pdf"><b>📄 Paper Link</b></a> |
  <a href="https://arxiv.org/abs/2510.18234"><b>📄 Arxiv Paper Link</b></a>
</p>

<h2>
<p align="center">
  DeepSeek-OCR: Contexts Optical Compression
</p>
</h2>

<p align="center">
  <img src="assets/fig1.png" style="width: 1000px" align="center">
</p>
<p align="center">
  <em>Explore the boundaries of visual-text compression.</em>
</p>

## Release

- [2025/10/20] 🚀🚀🚀 We release DeepSeek-OCR, a model for investigating the role of vision encoders from an LLM-centric viewpoint.

## Contents

- [Install](#install)
- [vLLM Inference](#vllm-inference)
- [Transformers Inference](#transformers-inference)

## Install

> Our environment is CUDA 11.8 + torch 2.6.0.

1. Clone this repository and navigate to the DeepSeek-OCR folder:
```bash
git clone https://github.com/deepseek-ai/DeepSeek-OCR.git
```
2. Create and activate a conda environment:
```shell
conda create -n deepseek-ocr python=3.12.9 -y
conda activate deepseek-ocr
```
3. Install packages. First download the vllm-0.8.5 [whl](https://github.com/vllm-project/vllm/releases/tag/v0.8.5), then:
```shell
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu118
pip install vllm-0.8.5+cu118-cp38-abi3-manylinux1_x86_64.whl
pip install -r requirements.txt
pip install flash-attn==2.7.3 --no-build-isolation
```
**Note:** if you want the vLLM and Transformers code to run in the same environment, you can ignore an installation error such as `vllm 0.8.5+cu118 requires transformers>=4.51.1`.

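Before moving on, it can help to confirm the pinned stack actually landed. A minimal sanity check, assuming the versions above (this is not part of the official instructions):

```python
# Sanity-check the pinned stack: torch 2.6.0 built against CUDA 11.8,
# plus importable vllm and flash-attn.
import torch
import vllm
import flash_attn

print(torch.__version__)          # expect 2.6.0+cu118
print(torch.version.cuda)         # expect 11.8
print(torch.cuda.is_available())  # expect True on a GPU machine
print(vllm.__version__)           # expect 0.8.5+cu118
print(flash_attn.__version__)     # expect 2.7.3
```
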
## vLLM-Inference
|
| 89 |
+
- VLLM:
|
| 90 |
+
>**Note:** change the INPUT_PATH/OUTPUT_PATH and other settings in the DeepSeek-OCR-master/DeepSeek-OCR-vllm/config.py
|
| 91 |
+
```Shell
|
| 92 |
+
cd DeepSeek-OCR-master/DeepSeek-OCR-vllm
|
| 93 |
+
```
|
| 94 |
+
1. image: streaming output
|
| 95 |
+
```Shell
|
| 96 |
+
python run_dpsk_ocr_image.py
|
| 97 |
+
```
|
| 98 |
+
2. pdf: concurrency ~2500tokens/s(an A100-40G)
|
| 99 |
+
```Shell
|
| 100 |
+
python run_dpsk_ocr_pdf.py
|
| 101 |
+
```
|
| 102 |
+
3. batch eval for benchmarks
|
| 103 |
+
```Shell
|
| 104 |
+
python run_dpsk_ocr_eval_batch.py
|
| 105 |
+
```
|
| 106 |
+
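As a rough illustration of the edit the note above asks for: INPUT_PATH and OUTPUT_PATH are the names it mentions, but any other detail shown here is a hypothetical placeholder, so check the actual `config.py` for the real schema:

```python
# DeepSeek-OCR-vllm/config.py -- illustrative sketch only.
# Only INPUT_PATH/OUTPUT_PATH are taken from the note above; the example
# values are placeholders, not the repo's defaults.
INPUT_PATH = '/path/to/your/images_or_pdfs'  # what to OCR
OUTPUT_PATH = '/path/to/results'             # where the output is written
```
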
## Transformers Inference

```python
from transformers import AutoModel, AutoTokenizer
import torch
import os

os.environ["CUDA_VISIBLE_DEVICES"] = '0'
model_name = 'deepseek-ai/DeepSeek-OCR'

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, _attn_implementation='flash_attention_2', trust_remote_code=True, use_safetensors=True)
model = model.eval().cuda().to(torch.bfloat16)

# prompt = "<image>\nFree OCR. "
prompt = "<image>\n<|grounding|>Convert the document to markdown. "
image_file = 'your_image.jpg'
output_path = 'your/output/dir'

res = model.infer(tokenizer, prompt=prompt, image_file=image_file, output_path=output_path, base_size=1024, image_size=640, crop_mode=True, save_results=True, test_compress=True)
```
Or run the bundled script:
```shell
cd DeepSeek-OCR-master/DeepSeek-OCR-hf
python run_dpsk_ocr.py
```

## Support Modes

The current open-source model supports the following modes (see the sketch after this list for how they map onto `model.infer` arguments):

- Native resolution:
  - Tiny: 512×512 (64 vision tokens) ✅
  - Small: 640×640 (100 vision tokens) ✅
  - Base: 1024×1024 (256 vision tokens) ✅
  - Large: 1280×1280 (400 vision tokens) ✅
- Dynamic resolution:
  - Gundam: n×640×640 + 1×1024×1024 ✅

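These modes appear to correspond to the `base_size`/`image_size`/`crop_mode` arguments of `model.infer` in the Transformers example (the Gundam row matches the arguments used there). A minimal sketch of that mapping, assuming the native modes simply disable cropping; the per-mode values are our reading of the list above, so verify against the repo:

```python
# Hypothetical mode table: values are inferred from the mode list and from the
# Gundam-mode arguments (base_size=1024, image_size=640, crop_mode=True) used
# in the Transformers example. Treat it as an assumption, not the official API.
MODES = {
    'tiny':   dict(base_size=512,  image_size=512,  crop_mode=False),  # 64 vision tokens
    'small':  dict(base_size=640,  image_size=640,  crop_mode=False),  # 100 vision tokens
    'base':   dict(base_size=1024, image_size=1024, crop_mode=False),  # 256 vision tokens
    'large':  dict(base_size=1280, image_size=1280, crop_mode=False),  # 400 vision tokens
    'gundam': dict(base_size=1024, image_size=640,  crop_mode=True),   # n×640×640 + 1×1024×1024
}

# Reusing model/tokenizer/prompt/paths from the Transformers example:
res = model.infer(tokenizer, prompt=prompt, image_file=image_file,
                  output_path=output_path, save_results=True, **MODES['small'])
```
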
## Prompts examples

```python
# document: <image>\n<|grounding|>Convert the document to markdown.
# other image: <image>\n<|grounding|>OCR this image.
# without layouts: <image>\nFree OCR.
# figures in document: <image>\nParse the figure.
# general: <image>\nDescribe this image in detail.
# rec: <image>\nLocate <|ref|>xxxx<|/ref|> in the image.
#      e.g. xxxx = '先天下之忧而忧' (a classical Chinese phrase to locate)
```

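For example, to run layout-free OCR instead of markdown conversion, only the prompt string in the Transformers example changes:

```python
# Layout-free OCR: same pipeline as the Transformers example above,
# swapping in the "Free OCR" prompt from the list.
prompt = "<image>\nFree OCR. "
res = model.infer(tokenizer, prompt=prompt, image_file=image_file,
                  output_path=output_path, base_size=1024, image_size=640,
                  crop_mode=True, save_results=True)
```
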
## Visualizations

<table>
  <tr>
    <td><img src="assets/show1.jpg" style="width: 500px"></td>
    <td><img src="assets/show2.jpg" style="width: 500px"></td>
  </tr>
  <tr>
    <td><img src="assets/show3.jpg" style="width: 500px"></td>
    <td><img src="assets/show4.jpg" style="width: 500px"></td>
  </tr>
</table>

## Acknowledgement

We would like to thank [Vary](https://github.com/Ucas-HaoranWei/Vary/), [GOT-OCR2.0](https://github.com/Ucas-HaoranWei/GOT-OCR2.0/), [MinerU](https://github.com/opendatalab/MinerU), [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR), [OneChart](https://github.com/LingyvKong/OneChart), and [Slow Perception](https://github.com/Ucas-HaoranWei/Slow-Perception) for their valuable models and ideas.

We also appreciate the benchmarks: [Fox](https://github.com/ucaslcl/Fox) and [OmniDocBench](https://github.com/opendatalab/OmniDocBench).

## Citation

```bibtex
@article{wei2024deepseek-ocr,
  title={DeepSeek-OCR: Contexts Optical Compression},
  author={Wei, Haoran and Sun, Yaofeng and Li, Yukun},
  journal={arXiv preprint arXiv:2510.18234},
  year={2025}
}
```