Layout Analysis Inference

#46 opened by saikanov

Hi, I’m currently exploring this model and using vLLM as the inference engine for PaddleOCR-VL 0.9B.

I noticed that the layout analysis model seems to run on the client side, which could be problematic for production use.
Is there a native way to run the layout analysis inside the same Docker container as the inference engine?

Or should I manually host it by creating a small API for the layout model, adding it to the Docker setup, and connecting it to the vLLM server through Docker’s internal network?
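In case it helps to make that concrete, here is a rough sketch of the "small layout API" idea I have in mind. It is only an illustration: the `PP-DocLayoutV2` model name, the `paddlex.create_model` call, the `/layout` route, and the `vllm` hostname are my own assumptions, not something confirmed by the docs.

```python
# Hypothetical layout-analysis micro-service living in the same Docker network
# as the vLLM server. Model name and PaddleX calls are assumptions on my side.
import base64
import tempfile

from fastapi import FastAPI
from paddlex import create_model
from pydantic import BaseModel

app = FastAPI()

# Assumed layout model name: PaddleOCR-VL describes PP-DocLayoutV2 as its layout stage.
layout_model = create_model("PP-DocLayoutV2")


class LayoutRequest(BaseModel):
    image_b64: str  # page image, base64-encoded


@app.post("/layout")
def analyze_layout(req: LayoutRequest):
    # Decode the page image to a temporary file and run layout detection on it.
    with tempfile.NamedTemporaryFile(suffix=".png") as tmp:
        tmp.write(base64.b64decode(req.image_b64))
        tmp.flush()
        results = list(layout_model.predict(tmp.name, batch_size=1))
    # Return the prediction dicts (assuming the result objects expose a `json` view);
    # the caller would then crop the detected regions and send them to the vLLM
    # server, e.g. http://vllm:8000/v1 over the Docker-internal network.
    return {"results": [res.json for res in results]}
```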

Thanks, best regards!

https://huggingface.co/PaddlePaddle/PaddleOCR-VL/discussions/39

I see, so the idea is to just run PaddleX on the server side, right?
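Something along these lines, I suppose. The `vl_rec_backend` / `vl_rec_server_url` parameter names and the `vllm` hostname and port are assumptions from my reading of the docs, so they may need adjusting:

```python
# Sketch: run the whole PaddleOCR-VL pipeline server-side, with the layout step
# executed locally and the VL recognition delegated to the vLLM server that is
# reachable over Docker's internal network. Parameter names are assumptions.
from paddleocr import PaddleOCRVL

pipeline = PaddleOCRVL(
    vl_rec_backend="vllm-server",             # assumed parameter name
    vl_rec_server_url="http://vllm:8000/v1",  # assumed compose service name / port
)

for res in pipeline.predict("sample_page.png"):
    res.print()
    res.save_to_markdown(save_path="output")
```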
