Run with llama.cpp server
#13 opened about 1 month ago by Ali4ai
Does it support multi-GPU inference?
#12 opened about 2 months ago by micczzz
Need a demo for the visual grounding task
#11 opened about 2 months ago by zanepoe
Max sequence length?
#10 opened about 2 months ago by hanshupe
Not able to run the model using the API keys
#9 opened 2 months ago by noblefool
How can we run this model on macOS or any machine that does not have a GPU?
#7 opened 2 months ago by merhanjan
Compatibility with Turing GPUs
#6 opened 2 months ago by Ddopez
Quantization support
#5 opened 2 months ago by princemjp
Local Installation Video and Testing - Step by Step
#4 opened 2 months ago by fahdmirzac
Update paper link and correct citation in model card
#3 opened 2 months ago by nielsr
Fine-Tuning?
#2 opened 2 months ago by hanshupe
vLLM support
#1 opened 2 months ago by princemjp