error 3437340512

#1
by TryTrT - opened

When loading the model using SvelteKit:

import { onMount } from "svelte";
import { pipeline } from "@huggingface/transformers";

onMount(async () => {
  try {
    generator = await pipeline(
      "text-generation",
      "onnx-community/gemma-3-1b-it-ONNX-GQA",
      { 
        dtype: "q4",
        progress_callback: updateFileStatuses
      }
    );
    console.log("Model loaded");
    modelLoaded.set(true);
  } catch (error) {
    console.error("Model loading error:", error);
  }
});

An error appears in the console with only the numeric message 3437340512.
How can I fix this?

@TryTrT for mine only q8 works

for mine also only q8 wasm works, have you tried others?

Sorry for the late reply! I have the same issue—only Q8 works for me too. Let me know if you find a fix!

Same here—only Q8 works, and I haven’t found a solution for the others either. If you figure it out, please let me know!

Hello @Xenova ! Has there been any progress on running this model with WebGPU? For now I'm only able to run it with q8 on WASM.

ONNX Community org

Oh yes -- I probably should mention: this model works better on WebGPU (without specifying a device, it runs on the CPU in WASM).

    generator = await pipeline(
      "text-generation",
      "onnx-community/gemma-3-1b-it-ONNX-GQA",
      { 
        dtype: "q4", // or "q4f16"
        device: "webgpu", // <-- NEW
        progress_callback: updateFileStatuses
      }
    );
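For pages that also need to work in browsers without WebGPU, here is a minimal fallback sketch. The `pickBackend` helper and the WebGPU→q4 / WASM→q8 pairing are assumptions drawn from what was reported in this thread, not an official Transformers.js API:

```javascript
// Hedged sketch: pick a Transformers.js backend based on WebGPU support.
// pickBackend is a hypothetical helper; the dtype pairing reflects what
// posters in this thread reported working, not documented requirements.
function pickBackend(nav) {
  const hasWebGPU = !!(nav && "gpu" in nav); // navigator.gpu exists only when WebGPU is available
  return hasWebGPU
    ? { device: "webgpu", dtype: "q4" } // q4 / q4f16 were only reported working on WebGPU
    : { device: "wasm", dtype: "q8" };  // q8 was the only dtype reported working on WASM
}

// Browser usage:
//   const { device, dtype } = pickBackend(navigator);
//   generator = await pipeline(
//     "text-generation",
//     "onnx-community/gemma-3-1b-it-ONNX-GQA",
//     { device, dtype, progress_callback: updateFileStatuses }
//   );
```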

@Xenova When I run the model with q4 quantization I get the error 3436702080, and with q4f16 quantization I get: RuntimeError: Aborted(). Build with -sASSERTIONS for more info.

Does this happen to you as well? I'm on the latest version of transformers.js (3.7.5).

ONNX Community org

Could you try dtype: "q4f16"?

Yes, I get the following error when using dtype: "q4f16":
[screenshot of the error]