Runtime error

Exit code: 1. Reason:

use `force_download=True`.
warnings.warn(
[download progress bars elided: tokenizer_config.json, vocab.json, merges.txt, tokenizer.json, and config.json all downloaded to 100%]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
/usr/local/lib/python3.10/site-packages/transformers/quantizers/auto.py:155: UserWarning: You passed `quantization_config` or equivalent parameters to `from_pretrained` but the model you're loading already has a `quantization_config` attribute. The `quantization_config` from the model will be used.
  warnings.warn(warning_msg)
Traceback (most recent call last):
  File "/home/user/app/app.py", line 115, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 561, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3024, in from_pretrained
    hf_quantizer.validate_environment(
  File "/usr/local/lib/python3.10/site-packages/transformers/quantizers/quantizer_bnb_4bit.py", line 62, in validate_environment
    raise ImportError(
ImportError: Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://pypi.org/simple/ bitsandbytes`
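The final ImportError names the fix directly: the environment is missing `accelerate` and needs an up-to-date `bitsandbytes`. Since the traceback points at `/home/user/app/app.py`, this looks like a Hugging Face Space, where dependencies are typically declared in the Space's requirements.txt. A minimal sketch of that file, assuming an unpinned package set (the exact packages and versions your app needs may differ):

```
# requirements.txt (sketch) — dependencies assumed from the traceback, not from the actual app
transformers
accelerate      # required by the bitsandbytes quantizer, per the ImportError
bitsandbytes    # the error asks for the latest version from PyPI
torch           # assumed: transformers model loading needs a backend
```

Outside a Space, the two pip commands quoted in the error message itself (`pip install accelerate` and `pip install -i https://pypi.org/simple/ bitsandbytes`) accomplish the same thing. The earlier UserWarning is separate and non-fatal: the checkpoint already ships its own `quantization_config`, so the one passed to `from_pretrained` is ignored and can be dropped from the call at app.py line 115.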
