Runtime error

Exit code: 1. Reason:

[shard download progress output trimmed; final line:]
model-00005-of-00005.safetensors: 100%|██████████| 1.24G/1.24G [00:03<00:00, 357MB/s]

Traceback (most recent call last):
  File "/app/app.py", line 52, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 604, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 277, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4971, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/qwen3/modeling_qwen3.py", line 435, in __init__
    super().__init__(config)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2076, in __init__
    self.config._attn_implementation_internal = self._check_and_adjust_attn_implementation(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2686, in _check_and_adjust_attn_implementation
    applicable_attn_implementation = self.get_correct_attn_implementation(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2714, in get_correct_attn_implementation
    self._flash_attn_2_can_dispatch(is_init_check)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2432, in _flash_attn_2_can_dispatch
    raise ValueError(
ValueError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: Flash Attention 2 is not available on CPU. Please make sure torch can access a CUDA device.
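The model shards downloaded fine; the crash happens at model init because the app requests FlashAttention 2 (via attn_implementation="flash_attention_2" or an equivalent config toggle) while the container is CPU-only, and FlashAttention 2 only dispatches on CUDA devices. A minimal sketch of a fix, which selects the attention backend based on whether CUDA is actually available; the model id below is a placeholder, since the traceback only shows that /app/app.py line 52 loads a Qwen3 checkpoint:

import torch
from transformers import AutoModelForCausalLM

# Placeholder model id; substitute whatever /app/app.py actually loads.
MODEL_ID = "Qwen/Qwen3-1.7B"

# FlashAttention 2 requires a CUDA device. On CPU-only hardware, fall back to
# PyTorch's built-in scaled-dot-product attention ("sdpa"), which runs anywhere.
attn_impl = "flash_attention_2" if torch.cuda.is_available() else "sdpa"

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    attn_implementation=attn_impl,
)

Alternatively, run the Space on GPU hardware and the original attn_implementation="flash_attention_2" setting will dispatch as intended.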
