VLM deployment with the vLLM framework

#1
by Vanshtiwari912 - opened

Hi everyone, I want to deploy Qwen2.5-VL-7B in production using the vLLM framework. I'd like to test it locally first, but my RTX 3050 GPU only has 4 GB of VRAM. Is there any way to test it for free?
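
For context, this is roughly the kind of launch I had in mind for a local test. It's only a minimal sketch: the model ID `Qwen/Qwen2.5-VL-7B-Instruct` and the parameter values are my assumptions, and as noted in the comments it will not actually fit on my card.

```python
from vllm import LLM

# Rough sketch of the intended test setup, assuming the Hugging Face model ID
# Qwen/Qwen2.5-VL-7B-Instruct. The 7B weights alone need far more than 4 GB of
# VRAM, so this will not fit on an RTX 3050 as written.
llm = LLM(
    model="Qwen/Qwen2.5-VL-7B-Instruct",
    max_model_len=8192,           # cap context length to shrink the KV cache
    gpu_memory_utilization=0.90,  # fraction of GPU memory vLLM may claim
)
```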

This is my first time deploying a VLM/LLM. The use case is OCR and key-value pair extraction of required fields from forms, roughly like the sketch below. Since this is my first deployment, I would also appreciate advice on best practices for choosing a serving framework.
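
For the extraction step, this is the kind of request I expect to send once a server is running. It's a sketch against vLLM's OpenAI-compatible API (started with `vllm serve`); the base URL, image URL, and field names are placeholders for my actual forms.

```python
from openai import OpenAI

# vLLM exposes an OpenAI-compatible endpoint when launched with `vllm serve`;
# the base_url, image URL, and field names below are placeholders.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-VL-7B-Instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": "https://example.com/form.png"}},
            {"type": "text",
             "text": "Read this form and return the fields name, date and "
                     "invoice_number as JSON."},
        ],
    }],
)
print(response.choices[0].message.content)
```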
Thank you.
