# GPT-OSS-Code-Reasoning-20B
Fine-tuned from `openai/gpt-oss-20b` on `nvidia/OpenCodeReasoning-2` (OCR-2), combining the `python` and `cpp` splits. Each sample reconstructs the upstream question and uses the dataset's `r1_generation` as the assistant response. Training used TRL's `SFTTrainer`; data preparation details appear below.
This model was trained in a chat format. Recommended structure:

```python
messages = [
    {"role": "system", "content": "You are an expert competitive programmer. Read the problem and produce a correct, efficient solution. Include reasoning if helpful."},
    {"role": "user", "content": problem_text},
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
```
If you prefer plain text, place the problem text after a brief instruction, but chat format generally yields better results.
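For example, a minimal plain-text prompt might look like this (the instruction wording is illustrative, not a prescribed prompt):

```python
# Brief instruction followed by the problem statement.
prompt = "Solve this competitive programming problem:\n\n" + problem_text
```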
You can specify the reasoning effort in `apply_chat_template` (supported values: `"low"`, `"medium"` (default), or `"high"`):

```python
messages = [
    {"role": "system", "content": "Always respond in riddles"},
    {"role": "user", "content": "Explain why the meaning of life is 42"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
    reasoning_effort="high",
).to(model.device)
generated = model.generate(**inputs, max_new_tokens=500)
print(tokenizer.decode(generated[0][inputs["input_ids"].shape[-1]:]))
```
Full example:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "GetSoloTech/GPT-OSS-Code-Reasoning-20B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

problem_text = """
You are given an array of integers ... (your problem here)
"""

messages = [
    {"role": "system", "content": "You are an expert competitive programmer. Read the problem and produce a correct, efficient solution. Include reasoning if helpful."},
    {"role": "user", "content": problem_text},
]

input_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    reasoning_effort="medium",
)
inputs = tokenizer([input_text], return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=768,
    do_sample=True,  # required for temperature/top_p to take effect
    temperature=0.3,
    top_p=0.9,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
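Because gpt-oss models emit their reasoning in a separate harmony channel, you may want only the final answer. A minimal sketch, assuming the standard harmony channel markers (verify them against your tokenizer version):

```python
# Decode without skipping special tokens so channel markers survive,
# then keep only the text after the final-answer channel marker.
raw = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:])
marker = "<|channel|>final<|message|>"  # harmony format; assumed unchanged in this fine-tune
answer = raw.split(marker, 1)[-1] if marker in raw else raw
print(answer.strip())
```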
Generation tips:

- `max_new_tokens`: 512–1024 for full solutions; shorter for hints.

Data preparation:

- Source: `nvidia/OpenCodeReasoning-2` with `python` and `cpp` splits, taking `--take_samples` examples per split.
- Question reconstruction falls back to `open-r1/codeforces` when needed.
- Builds `messages` and a formatted `text` field with the tokenizer's chat template.
- Splits train/eval with `train_test_split` according to `--eval_ratio` (see the sketch below).
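A rough sketch of this pipeline (the function name and field access are assumptions based on the list above; only the flag names `--take_samples` and `--eval_ratio` come from the card):

```python
from datasets import load_dataset, concatenate_datasets

def build_dataset(tokenizer, take_samples: int = 1000, eval_ratio: float = 0.05):
    # Take the first `take_samples` examples from each language split.
    parts = [
        load_dataset("nvidia/OpenCodeReasoning-2", split=f"{name}[:{take_samples}]")
        for name in ("python", "cpp")
    ]
    ds = concatenate_datasets(parts)

    def to_chat(example):
        # Assumed field names; when the question text is not stored inline,
        # the real script falls back to open-r1/codeforces (lookup omitted here).
        messages = [
            {"role": "user", "content": example.get("question") or ""},
            {"role": "assistant", "content": example["r1_generation"]},
        ]
        return {
            "messages": messages,
            "text": tokenizer.apply_chat_template(messages, tokenize=False),
        }

    ds = ds.map(to_chat)
    return ds.train_test_split(test_size=eval_ratio)
```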
Frameworks:

- Unsloth (`FastLanguageModel`) for efficient 4-bit loading and fast PEFT.
- TRL (`SFTTrainer`) for straightforward supervised fine-tuning.

Datasets: `nvidia/OpenCodeReasoning-2` (OCR-2), `open-r1/codeforces`
Base model: `openai/gpt-oss-20b`
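A minimal sketch of that setup, assuming recent Unsloth and TRL APIs; the hyperparameters are illustrative, not the values used to train this model:

```python
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer

# 4-bit base model load via Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    "openai/gpt-oss-20b",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters for parameter-efficient fine-tuning.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

splits = build_dataset(tokenizer)  # from the data preparation sketch above

trainer = SFTTrainer(
    model=model,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    args=SFTConfig(
        output_dir="gpt-oss-code-reasoning-sft",
        dataset_text_field="text",  # the chat-templated field built above
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
    ),
)
trainer.train()
```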