# GPT-OSS-20B BigCodeBench LoRA Adapter
LoRA adapter weights fine-tuned from `openai/gpt-oss-20b` on the BigCodeBench v0.1.4 split (~1.1K samples).
## Training Summary
- Steps: 100
- Final `train_loss`: 0.7833
- Runtime: 3,717 s (~62 min)
- Samples/sec: 0.43
- Total FLOPs: 6.83e16
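
For context, below is a minimal sketch of how an adapter like this could be produced with `peft` and `trl`. The LoRA hyperparameters, batch settings, dataset split, and field names are assumptions for illustration, not the exact recipe used for these weights:

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer

base = 'openai/gpt-oss-20b'
model = AutoModelForCausalLM.from_pretrained(base, device_map='auto', torch_dtype='auto')

# BigCodeBench v0.1.4 as stated above; the dataset id, split name, and field
# names are assumptions based on the public BigCodeBench release.
dataset = load_dataset('bigcode/bigcodebench', split='v0.1.4')
dataset = dataset.map(lambda ex: {'text': ex['instruct_prompt'] + ex['canonical_solution']})

# Assumed LoRA hyperparameters; the card does not state rank, alpha, or target modules.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=['q_proj', 'k_proj', 'v_proj', 'o_proj'],
    task_type='CAUSAL_LM',
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=lora_config,
    args=SFTConfig(
        max_steps=100,                    # matches the step count reported above
        per_device_train_batch_size=1,    # assumed
        gradient_accumulation_steps=16,   # assumed
        output_dir='gptoss-bigcodebench-20b-lora',
    ),
)
trainer.train()
trainer.save_model('gptoss-bigcodebench-20b-lora')
```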
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = 'openai/gpt-oss-20b'
adapter = 'unlimitedbytes/gptoss-bigcodebench-20b-lora'

# Load the base model, then attach the LoRA adapter on top of it.
model = AutoModelForCausalLM.from_pretrained(base, device_map='auto', torch_dtype='auto')
model = PeftModel.from_pretrained(model, adapter)
tokenizer = AutoTokenizer.from_pretrained(base)

messages = [
    {'role': 'system', 'content': 'You are a helpful coding assistant.'},
    {'role': 'user', 'content': 'Write a Python function to add two numbers.'},
]

# Build the chat prompt and generate a completion.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors='pt').to(model.device)
out = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=False))
```
Merge the adapter into the base weights for standalone use:

```python
# Fold the LoRA weights into the base model and drop the PEFT wrapper.
model = model.merge_and_unload()
model.save_pretrained('merged-model')
```
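
After merging, the full model can be reloaded without `peft`. Saving the tokenizer alongside the weights is an addition here so the merged directory is self-contained:

```python
from transformers import AutoModelForCausalLM

# Save the tokenizer next to the merged weights, then reload as a plain model.
tokenizer.save_pretrained('merged-model')
merged = AutoModelForCausalLM.from_pretrained('merged-model', device_map='auto', torch_dtype='auto')
```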
## Limitations
- 100 training steps only; not fully converged.
- Adapter only, no merged full weights.
- Outputs may include control tokens (see the decoding note below).
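
If the raw decode shows control tokens, passing `skip_special_tokens=True` (instead of `False` as in the usage example above) drops them:

```python
# Decode the same output without special/control tokens.
print(tokenizer.decode(out[0], skip_special_tokens=True))
```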
## License
Apache-2.0 for the base model; the BigCodeBench dataset is subject to its own license terms.