Image-to-text (OCR) functionality omits top-most line for recognition / output
I uploaded a PNG of handwritten notes via LM Studio and the model returned the text, but in almost all cases the top line of the document is missing, especially when it sits close to the margin.
I have tried several documents with varying results; the problem appears to be unique to this model.
Other, more purpose-trained models such as olmocr-7b DO recognise and output the topmost line.
Same here. It omits either the top line or the bottom line.
Hi,
Apologies for the late reply, and thanks for reaching out to us. The above issue may be due to one of the following reasons.
Fixed Resolution and Resizing: The Gemma 3 vision encoder operates at a fixed resolution (896 × 896 pixels). When you upload a high-resolution PNG, especially one with a non-square aspect ratio, the image must be resized, scaled, or cropped to fit this square input.
Crop Heuristics (Loss of Edge Detail): The pre-processing pipeline (even with techniques like Pan & Scan) can use heuristics to identify the main content and crop aggressively. If your handwritten notes run right up against the margin of the PNG canvas, this automatic cropping or resizing can mistake edge-bound text for unimportant border content and discard it.
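To give a rough sense of the first point, the sketch below estimates how much a page shrinks when fit into a square encoder input. The 896-pixel figure comes from the discussion above; the A4-at-300-DPI page dimensions are an illustrative assumption, not something specific to your file:

```python
# Gemma 3's vision encoder input size, per the explanation above.
ENCODER_SIZE = 896

def downscale_factor(width: int, height: int, target: int = ENCODER_SIZE) -> float:
    """Factor by which the longer side shrinks when the image is
    fit (aspect-preserving) into a target x target square input."""
    return max(width, height) / target

# Assumption for illustration: a scanned A4 page at 300 DPI is
# roughly 2480 x 3508 pixels.
factor = downscale_factor(2480, 3508)
# The page shrinks by roughly 3.9x, so thin handwriting near the
# edges can end up only a few pixels tall after pre-processing.
```

This is only a back-of-the-envelope estimate; the real pipeline (with Pan & Scan tiling) is more involved, but the intuition about lost edge detail is the same.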
Recommended Workaround:
Since the issue is with the input handling of the gemma-3-12b-it model, the most reliable fix is to modify your input image:
Add a Wide Margin: Before uploading the PNG to LM Studio, open it in an image editor and add a generous white border/margin (e.g., 10-20% of the image height) to the top, bottom, and sides.
Objective: By placing the handwritten text clearly away from the physical edge of the PNG file, you prevent the model's internal cropping/resizing from accidentally cutting off the first line.
Thanks.
