CUDA out of memory when running the gpt-oss model on a Colab T4
#99 opened by sumeetm
I'm trying to run gpt-oss-20b on a Colab T4 and hitting the memory issue below. Was anyone able to resolve this?
Loading checkpoint shards:   0%  0/3 [00:00<?, ?it/s]
---------------------------------------------------------------------------
OutOfMemoryError Traceback (most recent call last)
/tmp/ipython-input-2717120482.py in <cell line: 0>()
      4
      5 tokenizer = AutoTokenizer.from_pretrained(model_id)
----> 6 model = AutoModelForCausalLM.from_pretrained(
      7     model_id,
      8     torch_dtype="auto",

9 frames
/usr/local/lib/python3.11/dist-packages/transformers/integrations/mxfp4.py in convert_moe_packed_tensors(blocks, scales, dtype, rows_per_chunk)
    121
    122     # nibble indices -> int64
--> 123     idx_lo = (blk & 0x0F).to(torch.long)
    124     idx_hi = (blk >> 4).to(torch.long)
    125
OutOfMemoryError: CUDA out of memory. Tried to allocate 1.98 GiB. GPU 0 has a total capacity of 14.74 GiB of which 1.47 GiB is free. Process 7925 has 13.27 GiB memory in use. Of the allocated memory 11.95 GiB is allocated by PyTorch, and 1.21 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
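For reference, the failing call in the traceback corresponds roughly to this minimal sketch (the exact model_id string and the device_map argument are assumptions, since only a few lines of my cell appear above):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"  # assumed from the thread title

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # picks up the checkpoint's dtype / MXFP4 config
    device_map="auto",    # assumption: typical Colab usage, not shown in the traceback
)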
I have tried the following to free up memory, still no luck:
import os
# Only takes effect if set before the first CUDA allocation, i.e. before the model starts loading.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import gc
import torch

gc.collect()
torch.cuda.empty_cache()
It won't work, because "openai/gpt-oss-20b" requires at least ~16 GB of GPU memory, so on Colab you need at least an L4 GPU. A T4 only has about 15 GB (14.74 GiB usable, as shown in the error).
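A quick way to confirm how much memory the runtime's GPU actually exposes (the L4 figure in the comment is approximate, not something reported in this thread):

import torch

props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 1024**3:.2f} GiB total")
# A T4 reports ~14.7 GiB, matching the "total capacity of 14.74 GiB" in the error above;
# an L4 exposes roughly 22 GiB, which leaves headroom for gpt-oss-20b.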
Read this link carefully: