user | created_at | body | issue_number | __index_level_0__
---|---|---|---|---|
jxmorris12
| 2025-06-10T19:32:44 |
I have this problem too! It forced me to switch to vLLM. The relevant docs are here: https://huggingface.co/docs/trl/en/grpo_trainer#-option-2-colocate-mode
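For anyone landing here, a minimal sketch of the colocate setup those docs describe (assuming a TRL version where `GRPOConfig` exposes these options):
```python
# Hedged sketch of vLLM colocate mode, per the docs linked above; `vllm_mode` and
# `vllm_gpu_memory_utilization` are assumed to exist in the installed TRL version.
from trl import GRPOConfig

training_args = GRPOConfig(
    output_dir="out",
    use_vllm=True,
    vllm_mode="colocate",              # run vLLM inside the training process, no separate server
    vllm_gpu_memory_utilization=0.3,   # leave room for the policy weights and optimizer states
)
```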
| 3,034 | 61 |
AndreiCComan
| 2025-03-10T16:23:51 |
@JinyuanSun I had a similar issue in #2856, which has been fixed. Could you try to run the same MRE with the latest changes (i.e., learning rate etc.) I posted there?
| 3,031 | 62 |
wyuzh
| 2025-03-18T12:10:25 |
Same issue.
#2856 is not the same issue, since we want to perform GRPO on a fine-tuned PeftModel, not perform GRPO together with PEFT.
| 3,031 | 63 |
DingZhenChen-code
| 2025-03-20T12:51:56 |
Same issue. How can we continue training on a fine-tuned PeftModel whose LoRA module is not merged?
Maybe resuming from a checkpoint would help.
| 3,031 | 64 |
cliang-huanglab
| 2025-05-26T05:28:19 |
Same issue. Have you found a solution?
| 3,031 | 65 |
EttoreCaputo
| 2025-06-10T09:45:01 |
@JinyuanSun After many attempts, this worked for me:
```python
# LOAD THE BASE MODEL AND APPLY THE PREVIOUS FINE-TUNED (SFT in my case) PEFT MODEL
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "BASE MODEL HERE (unsloth in my case)",
max_seq_length = max_seq_length,
load_in_4bit = True,
fast_inference = True,
max_lora_rank = lora_rank,
gpu_memory_utilization = 0.5,
)
model = PeftModel.from_pretrained(
model,
"ADAPTER PATH HERE",
is_trainable = True,
)
FastLanguageModel.patch_peft_model(model)
```
```python
#GRPO TRAIN
training_args = GRPOConfig(
use_vllm = True,
... #other params
)
trainer = GRPOTrainer(
model = model,
processing_class = tokenizer,
reward_funcs = [
...#reward functions
],
args = training_args,
train_dataset = dataset,
)
trainer.train()
```
| 3,031 | 66 |
qgallouedec
| 2025-03-11T14:07:37 |
That's a good point.
That's also what's done in open-instruct: https://github.com/allenai/open-instruct/blob/6d5320539f23a6dd55c892fd35e7e86907569af1/open_instruct/grpo_vllm_thread_ray_gtrl.py#L777C9-L777C37
Ideally, we would like to have some curves to show this gap, so if someone has any, feel free to share.
| 3,029 | 67 |
HuggingFaceDocBuilderDev
| 2025-03-11T15:37:01 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3029). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,029 | 68 |
deekshaVarshney
| 2025-03-07T15:19:09 |
@kashif
| 3,027 | 69 |
qgallouedec
| 2025-03-07T13:48:04 |
If I summarize your article, the term KL doesn't seem correct to you because the sampling is done under $π_{\mathrm{old}}$ and not with $π_θ$?
Note that in practice (and this is the default setting), $μ=1$ (which implies $π_{\mathrm{old}} = π_{\theta}$), so this issue doesn't arise.
In the general case, we would need to find a way to perform importance sampling on the KL term—is that your idea?
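For reference, the per-token KL term in question (as written in the DeepSeekMath paper) is
$$\mathbb{D}_{\mathrm{KL}}\big[\pi_\theta \,\|\, \pi_{\mathrm{ref}}\big] = \frac{\pi_{\mathrm{ref}}(o_{i,t} \mid q, o_{i,<t})}{\pi_\theta(o_{i,t} \mid q, o_{i,<t})} - \log\frac{\pi_{\mathrm{ref}}(o_{i,t} \mid q, o_{i,<t})}{\pi_\theta(o_{i,t} \mid q, o_{i,<t})} - 1,$$
estimated on tokens $o_{i,t}$ sampled from $\pi_{\mathrm{old}}$.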
| 3,025 | 70 |
zanghyu
| 2025-03-07T16:41:07 |
> If I summarize your article, the term KL doesn't seem correct to you because the sampling is done under $\pi_{\mathrm{old}}$ and not with $\pi_\theta$? Note that in practice (and this is the default setting), $\mu = 1$ (implies $\pi_{\mathrm{old}} = \pi_\theta$), this issue doesn't arise. In the general case, we would need to find a way to perform importance sampling on the KL term—is that your idea?
Yes, exactly. So the current implementation of GRPO is just an on-policy version; it does not match the original one in the GRPO paper.
| 3,025 | 71 |
qgallouedec
| 2025-03-07T17:59:36 |
In the DeepSeek Math paper, they use the same KL term, no?
| 3,025 | 72 |
zanghyu
| 2025-03-07T18:29:18 |
> In the DeepSeek Math paper, they use the same KL term, no?
I got your point. Yeah, they use the same KL term, while the equation in their paper shows that their samples come from the old policy distribution. So the default implementation in this repo is okay (as it is on-policy), but it is hard to say how to implement an off-policy version, right?
| 3,025 | 73 |
qgallouedec
| 2025-03-07T18:46:34 |
Maybe with some kind of importance sampling?
| 3,025 | 74 |
zanghyu
| 2025-03-08T02:39:55 |
> Maybe with some kind of importance sampling?
$$\nabla_\theta\,\mathbb{E}_{\pi_\theta}[\log\pi_\theta - \log\pi_\text{ref}]=\mathbb{E}_{\pi_\theta}[(\log\pi_\theta-\log\pi_\text{ref})\cdot \nabla_\theta \log\pi_\theta]$$
So we only need to add the log-prob difference between $\log\pi_\theta$ and $\log\pi_\text{ref}$ to the reward function. By doing so, we don't need to re-sample: we can just use the samples from the old policy, and since the term is added to the reward, it naturally gets multiplied by the importance-sampling coefficient, so everything is fine. It's quite simple.
---
The formula above doesn't seem to render right...
| 3,025 | 75 |
qgallouedec
| 2025-03-07T13:17:11 |
Thanks for reporting. The easiest is indeed to turn it off. Another way is to call `LLM.llm_engine.reset_prefix_cache()` (suggested by @hmellor) after the new weights are loaded. If someone wants to try this and if it works, a PR would be welcome
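A rough sketch of where such a call could go (the weight-loading step below is a placeholder, not TRL's actual code):
```python
# Illustrative only: reset vLLM's prefix cache right after new policy weights are loaded, so
# KV blocks cached under the stale weights are never reused. The weight-sync step is a
# placeholder for whatever mechanism the trainer uses.
from vllm import LLM

llm = LLM(model="Qwen/Qwen2-0.5B-Instruct", enable_prefix_caching=True)
# ... load the freshly trained policy weights into the engine here (trainer-specific) ...
llm.llm_engine.reset_prefix_cache()  # the call suggested by @hmellor
```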
| 3,024 | 76 |
thepowerfuldeez
| 2025-06-01T14:10:33 |
Looks like this is already done: https://github.com/huggingface/trl/blob/7359ddcc6f80aaac7606ac1d9489909b054bbed9/trl/trainer/grpo_trainer.py#L944-L948
Would be nice, though, to make it configurable via an argument, for cases where the system prompt is long and benefits from prefix caching.
| 3,024 | 77 |
hmellor
| 2025-06-03T09:48:39 |
The linked code block shows that prefix caching is still enabled by default, but that the prefix cache is reset when the weights change.
@qgallouedec is this issue considered resolved now?
| 3,024 | 78 |
qgallouedec
| 2025-06-03T14:02:23 |
Yes, this one can be closed
| 3,024 | 79 |
HuggingFaceDocBuilderDev
| 2025-03-07T11:25:59 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3023). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,023 | 80 |
qgallouedec
| 2025-04-10T15:26:41 |
The most common approach now is to use vLLM, so I'm closing this PR.
| 3,023 | 81 |
qgallouedec
| 2025-03-07T16:25:40 |
Can you share some code and results?
| 3,021 | 82 |
JosephChenHub
| 2025-04-17T10:42:21 |
<img width="891" alt="Image" src="https://github.com/user-attachments/assets/88993907-21d6-4e3e-ace6-98889ffffbff" />
We have a similar observation. The above curves show two settings:
- GH200: per device batch size=16, gradient accumulation steps = 2, world size=8, num_generations=8 => batch size = 32
- A100: per device batch size=2, gradient accumulation steps = 8, world size=16, num_generations=8 => batch size = 32
I guess this is because of how the advantage enters the loss function:
```
per_token_loss1 = coef_1 * advantages.unsqueeze(1)
per_token_loss2 = coef_2 * advantages.unsqueeze(1)
per_token_loss = -torch.min(per_token_loss1, per_token_loss2)
if self.beta != 0.0:
per_token_loss = per_token_loss + self.beta * per_token_kl
loss = (per_token_loss * completion_mask).sum() / completion_mask.sum()
```
Let's say you have gradient accumulation steps = 2 and two batch samples D1, D2 with advantages A1, A2; then
loss((D1, D2), (A1, A2)) != loss(D1, A1) + loss(D2, A2)
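A toy numeric check (made-up numbers) of why the globally normalized loss is not additive across gradient-accumulation micro-batches: the shared `completion_mask.sum()` denominator couples the samples.
```python
import torch

per_token_loss = torch.tensor([[1.0, 1.0, 0.0],    # D1: 2 valid tokens
                               [2.0, 2.0, 2.0]])   # D2: 3 valid tokens
completion_mask = torch.tensor([[1.0, 1.0, 0.0],
                                [1.0, 1.0, 1.0]])

joint = (per_token_loss * completion_mask).sum() / completion_mask.sum()
per_sample = (per_token_loss * completion_mask).sum(-1) / completion_mask.sum(-1)
print(joint.item())              # 1.6 -> (2 + 6) / 5
print(per_sample.mean().item())  # 1.5 -> mean of (1.0, 2.0), a different value
```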
| 3,021 | 83 |
JosephChenHub
| 2025-04-17T11:03:33 |
I noticed that the updated version is ```loss = ((per_token_loss * completion_mask).sum(-1) / completion_mask.sum(-1).clamp(min=1.0)).mean()```
maybe it has been resolved.
| 3,021 | 84 |
loxs123
| 2025-05-02T02:19:39 |
> I noticed that the updated version is `loss = ((per_token_loss * completion_mask).sum(-1) / completion_mask.sum(-1).clamp(min=1.0)).mean()`
>
> maybe it has been resolved.
```python
if self.loss_type == "grpo":
loss = ((per_token_loss * completion_mask).sum(-1) / completion_mask.sum(-1).clamp(min=1.0)).mean()
elif self.loss_type == "bnpo":
loss = (per_token_loss * completion_mask).sum() / completion_mask.sum().clamp(min=1.0)
elif self.loss_type == "dr_grpo":
loss = (per_token_loss * completion_mask).sum() / (per_token_loss.size(0) * self.max_completion_length)
else:
raise ValueError(f"Unknown loss type: {self.loss_type}")
```
The complete code is shown above, so the issue you just mentioned likely doesn’t exist for `grpo_loss`/`dr_grpo`. However, the `bnpo` (and probably `dapo` as well) loss still seems to have the issue. Additionally, `grpo_loss` and `bnpo_loss` are inherently two different loss calculation methods. I believe that `loss = ((per_token_loss * completion_mask).sum(-1) / completion_mask.sum(-1).clamp(min=1.0)).mean()` is not a correction to the original loss, but rather follows the original GRPO algorithm.
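Written out, the three normalizations in that snippet are
$$\mathcal{L}_{\mathrm{GRPO}} = \frac{1}{G}\sum_{i=1}^{G}\frac{1}{|o_i|}\sum_{t=1}^{|o_i|}\ell_{i,t},\qquad \mathcal{L}_{\mathrm{BNPO}} = \frac{\sum_{i=1}^{G}\sum_{t=1}^{|o_i|}\ell_{i,t}}{\sum_{i=1}^{G}|o_i|},\qquad \mathcal{L}_{\mathrm{Dr.GRPO}} = \frac{1}{G\,T_{\max}}\sum_{i=1}^{G}\sum_{t=1}^{|o_i|}\ell_{i,t},$$
where $\ell_{i,t}$ is the per-token loss, $G$ is the number of completions in the batch, $|o_i|$ is the length of completion $i$, and $T_{\max}$ is `max_completion_length`.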
| 3,021 | 85 |
AMindToThink
| 2025-03-06T20:44:01 |
Here's the problem part of the documentation: [here](https://huggingface.co/docs/trl/en/sft_trainer#:~:text=dataset%20%3D%20load_dataset(%22lucasmccabe%2Dlmi/CodeAlpaca%2D20k%22%2C%20split%3D%22train%22)
| 3,019 | 86 |
CloseChoice
| 2025-05-05T18:11:35 |
This is fixed
| 3,019 | 87 |
tchang1997
| 2025-03-10T13:54:09 |
+1 — As a hack, I've been getting around this by defining new reward functions and setting `reward_weight` to zero (so it still gets logged, but doesn't affect the "actual" reward).
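A rough sketch of that trick, assuming the `reward_weights` argument of `GRPOConfig` (names here are illustrative):
```python
# Hedged sketch of the zero-weight trick: `length_metric` is computed and logged like any
# other reward, but its 0.0 entry in reward_weights keeps it out of the training signal.
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):           # the "actual" reward
    return [-abs(20 - len(completion)) for completion in completions]

def length_metric(completions, **kwargs):        # logged only, weighted by 0.0 below
    return [float(len(completion)) for completion in completions]

training_args = GRPOConfig(output_dir="out", reward_weights=[1.0, 0.0])
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=[reward_len, length_metric],
    args=training_args,
    train_dataset=dataset,                       # assumed defined as elsewhere in the thread
)
```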
| 3,018 | 88 |
qgallouedec
| 2025-03-06T15:53:58 |
Let's say that you have 8 GPUs; in the limit you can have `per_device_batch_size=1` and `num_generations=8`, and set the number of gradient accumulation steps to any value.
> Currently `per_device_train_batch_size` must be a multiple of `num_generations` which can severely limit how large you can make it before
That's not exactly it: it's `per_device_train_batch_size * num_devices` that must be a multiple of `num_generations`.
While I understand the motivation, I think it's not straightforward to implement.
| 3,017 | 89 |
JamesBowerXanda
| 2025-03-06T16:19:59 |
Ah yes, sorry, I forgot about the number of devices. Though this doesn't change much, right? We just amend my statement to:
`num_devices * per_device_train_batch_size * gradient_accumulation_steps` must be a multiple of `num_generations`.
Is it complicated because currently the prepare_inputs method does both the generation and score calculation, and then the inputs are passed straight to the compute_loss method by the Trainer superclass?
I can see how it could cause more issues than it is worth, having to fiddle with the core pipeline just for one trainer. I just thought I would bring it up because I noticed how much smoother training seemed when I was able to increase the number of generations with smaller models, and this seemed to be the big bottleneck to that.
| 3,017 | 90 |
qgallouedec
| 2025-03-06T18:06:00 |
> Is it complicated because currently the prepare_inputs method does both the generation and score calculation then the inputs are passed straight to the compute_loss method by the Trainer superclass?
Yes that's correct
> I was able to up the number of generations using smaller models and this seemed to be the big bottleneck to that.
You can actually increase the number of generations quite high. For example, if you have 8 GPUs that can each handle 4 generations, you can use up to 32 generations per prompt.
| 3,017 | 91 |
JamesBowerXanda
| 2025-03-07T09:05:00 |
Ok, I understand, thanks for your prompt responses.
Unfortunately I am mostly interested in using this on my personal GPU, so I am not using multi-GPU clusters.
Thanks for your time, I am happy for the issue to be closed since it is not deemed feasible.
| 3,017 | 92 |
qgallouedec
| 2025-03-07T09:11:09 |
With 1 GPU, the best you can do is to set `num_generations=per_device_train_batch_size`, and set the `gradient_accumulation_steps` depending on the desired effective batch size. Example:
```
per_device_train_batch_size = 8
num_generations = 8
gradient_accumulation_steps = 16
```
To have an effective batch size of 128
| 3,017 | 93 |
JamesBowerXanda
| 2025-03-07T09:28:08 |
I understand this, but it doesn't solve the issue of the loss function being an estimate based on a sample size of 8.
Based on the GRPO loss formulation, the expectation we estimate is conditional on the input prompt, as are the advantage calculations, and just increasing the gradient accumulation to 16 gives us 16 high-variance estimates of the expectation rather than one low-variance estimate.
I hope this makes sense. As I said before, I can see why this is deemed not worth it, since most large-scale use cases can probably afford to just increase the number of GPUs. I had just hoped it would be an easier adjustment that would allow us hobbyists to stick closer to the theory of the paper.
| 3,017 | 94 |
qgallouedec
| 2025-03-07T10:20:17 |
Then you should increase `num_generations`. By default it's 8, but in the DeepSeek Math paper, they use 64. Of course, you'll probably be limited by compute here if you only have 1 GPU.
| 3,017 | 95 |
qgallouedec
| 2025-03-07T10:25:20 |
> I had just hoped it would be an easier adjustment
In fact, this is tricky, as it would involve sampling, generating and calculating the advantage for the whole batch, then iterating somehow over the batch. It's not impossible, but it adds an implementation complexity that I don't think is justified.
In my experience, playing with a low `num_generations` gives good results.
| 3,017 | 96 |
JamesBowerXanda
| 2025-03-07T11:00:04 |
Forgive my naivety, but would it not be as simple as overriding the `training_step` method of `GRPOTrainer`? The base `Trainer` one is:
```python
def training_step(
self, model: nn.Module, inputs: Dict[str, Union[torch.Tensor, Any]], num_items_in_batch=None
) -> torch.Tensor:
"""
Perform a training step on a batch of inputs.
Subclass and override to inject custom behavior.
Args:
model (`nn.Module`):
The model to train.
inputs (`Dict[str, Union[torch.Tensor, Any]]`):
The inputs and targets of the model.
The dictionary will be unpacked before being fed to the model. Most models expect the targets under the
argument `labels`. Check your model's documentation for all accepted arguments.
Return:
`torch.Tensor`: The tensor with training loss on this batch.
"""
model.train()
if hasattr(self.optimizer, "train") and callable(self.optimizer.train):
self.optimizer.train()
inputs = self._prepare_inputs(inputs)
if is_sagemaker_mp_enabled():
loss_mb = smp_forward_backward(model, inputs, self.args.gradient_accumulation_steps)
return loss_mb.reduce_mean().detach().to(self.args.device)
with self.compute_loss_context_manager():
loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
del inputs
if (
self.args.torch_empty_cache_steps is not None
and self.state.global_step % self.args.torch_empty_cache_steps == 0
):
if is_torch_xpu_available():
torch.xpu.empty_cache()
elif is_torch_mlu_available():
torch.mlu.empty_cache()
elif is_torch_musa_available():
torch.musa.empty_cache()
elif is_torch_npu_available():
torch.npu.empty_cache()
elif is_torch_mps_available(min_version="2.0"):
torch.mps.empty_cache()
else:
torch.cuda.empty_cache()
kwargs = {}
# For LOMO optimizers you need to explicitly use the learning rate
if self.args.optim in [OptimizerNames.LOMO, OptimizerNames.ADALOMO]:
kwargs["learning_rate"] = self._get_learning_rate()
if self.args.n_gpu > 1:
loss = loss.mean() # mean() to average on multi-gpu parallel training
if self.use_apex:
with amp.scale_loss(loss, self.optimizer) as scaled_loss:
scaled_loss.backward()
else:
# Finally we need to normalize the loss for reporting
if not self.model_accepts_loss_kwargs and self.compute_loss_func is None:
loss = loss / self.args.gradient_accumulation_steps
# Turning off loss scaling w.r.t. gradient accumulation when DeepSpeed is enabled
# https://github.com/huggingface/transformers/pull/35808
if self.accelerator.distributed_type == DistributedType.DEEPSPEED:
kwargs["scale_wrt_gas"] = False
self.accelerator.backward(loss, **kwargs)
return loss.detach()
```
to something like:
```python
def training_step(
self, model: nn.Module, inputs: Dict[str, Union[torch.Tensor, Any]], num_items_in_batch=None
) -> torch.Tensor:
"""
Perform a training step on a batch of inputs.
Subclass and override to inject custom behavior.
Args:
model (`nn.Module`):
The model to train.
inputs (`Dict[str, Union[torch.Tensor, Any]]`):
The inputs and targets of the model.
The dictionary will be unpacked before being fed to the model. Most models expect the targets under the
argument `labels`. Check your model's documentation for all accepted arguments.
Return:
`torch.Tensor`: The tensor with training loss on this batch.
"""
model.train()
if hasattr(self.optimizer, "train") and callable(self.optimizer.train):
self.optimizer.train()
inputs = self._prepare_inputs(inputs)
if is_sagemaker_mp_enabled():
loss_mb = smp_forward_backward(model, inputs, self.args.gradient_accumulation_steps)
return loss_mb.reduce_mean().detach().to(self.args.device)
# CHANGED: Split the inputs into mini-batches
mini_batch_size = self.args.per_device_train_batch_size * self.args.n_gpu
mini_batch_inputs = []
for i in range(inputs["prompt_ids"].shape[0] // mini_batch_size):
mini_batch_inputs.append(
{
key: value[i * mini_batch_size : (i + 1) * mini_batch_size] for key, value in inputs.items()
}
)
losses = []
del inputs
# CHANGED: Iterate over the mini-batches for loss calculation and gradient backward pass
for inputs in mini_batch_inputs:
with self.compute_loss_context_manager():
loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
del inputs
if (
self.args.torch_empty_cache_steps is not None
and self.state.global_step % self.args.torch_empty_cache_steps == 0
):
if is_torch_xpu_available():
torch.xpu.empty_cache()
elif is_torch_mlu_available():
torch.mlu.empty_cache()
elif is_torch_musa_available():
torch.musa.empty_cache()
elif is_torch_npu_available():
torch.npu.empty_cache()
elif is_torch_mps_available(min_version="2.0"):
torch.mps.empty_cache()
else:
torch.cuda.empty_cache()
kwargs = {}
# For LOMO optimizers you need to explicitly use the learning rate
if self.args.optim in [OptimizerNames.LOMO, OptimizerNames.ADALOMO]:
kwargs["learning_rate"] = self._get_learning_rate()
if self.args.n_gpu > 1:
loss = loss.mean() # mean() to average on multi-gpu parallel training
if self.use_apex:
with amp.scale_loss(loss, self.optimizer) as scaled_loss:
scaled_loss.backward()
else:
# Finally we need to normalize the loss for reporting
if not self.model_accepts_loss_kwargs and self.compute_loss_func is None:
loss = loss / self.args.gradient_accumulation_steps
# Turning off loss scaling w.r.t. gradient accumulation when DeepSpeed is enabled
# https://github.com/huggingface/transformers/pull/35808
if self.accelerator.distributed_type == DistributedType.DEEPSPEED:
kwargs["scale_wrt_gas"] = False
self.accelerator.backward(loss, **kwargs)
# CHANGED: Append the loss to the list so that we can average it later and return the same value as before
losses.append(loss.detach())
# CHANGED: Average the losses and return the same value as before
loss = torch.stack(losses).mean()
return loss.detach()
```
I have added comments starting with `# CHANGED:` to all parts I have edited from the trainers method.
| 3,017 | 97 |
JamesBowerXanda
| 2025-03-07T11:04:13 |
Sorry, I am not trying to be a pain. As I said previously I am happy for you to close this if it is just a no go. Just thought I would offer the suggestion in case it helped.
| 3,017 | 98 |
qgallouedec
| 2025-03-07T11:11:11 |
It might work, but that's the complexity I want to avoid. Forking the repo might be the best option here. Or subclass `GRPOTrainer` to override the `training_step` method.
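Roughly (sketch only, method body elided):
```python
# Sketch of the subclassing route: keep TRL as-is and carry the mini-batch splitting in your
# own trainer. The body here just delegates; the custom loop from the proposal above would
# replace the super() call.
from trl import GRPOTrainer

class ChunkedGRPOTrainer(GRPOTrainer):
    def training_step(self, model, inputs, num_items_in_batch=None):
        # split `inputs` into mini-batches and backward() each one here (see proposal above)
        return super().training_step(model, inputs, num_items_in_batch)
```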
| 3,017 | 99 |
JamesBowerXanda
| 2025-03-07T11:17:52 |
Ok, I am happy to do that. I won't bog you down anymore on this.
| 3,017 | 100 |
ingambe
| 2025-03-16T21:13:30 |
Actually, having the minibatch size restricted by the number of trajectories is very limiting.
Depending on the problem, if the variance is large or the reward is very sparse, 8 iterations will not cut it.
| 3,017 | 101 |
jaeminSon
| 2025-04-07T06:03:58 |
If I understand correctly, per_device_train_batch_size is an integer, which means a single GPU should be able to handle a backprop. An H100 has roughly 80GB of memory, and I encountered a GPU OOM with the Qwen2-7B model. If I'm correct, this could be quite a constraint, as bigger models cannot be run.
| 3,017 | 102 |
jarrelscy
| 2025-04-13T23:44:59 |
Hi @JamesBowerXanda, I ran into a similar thing and needed a larger generation batch size. I've implemented something which you can run as follows. As mentioned above, I overrode training_step within GRPOTrainer for this to work.
```python
# train_grpo.py
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer
dataset = load_dataset("trl-lib/tldr", split="train")
# Define the reward function, which rewards completions that are close to 20 characters
def reward_len(completions, **kwargs):
return [-abs(20 - len(completion)) for completion in completions]
training_args = GRPOConfig(output_dir="Qwen2-0.5B-GRPO",
logging_steps=10,
per_device_train_batch_size=16, # needs to be a multiple of num_generations
num_generations=8, # needs to be a multiple of num_generations_chunks
num_generations_chunks=8)
trainer = GRPOTrainer(
model="Qwen/Qwen2-0.5B-Instruct",
reward_funcs=reward_len,
args=training_args,
train_dataset=dataset,
)
trainer.train()
```
You can find it [here](https://github.com/huggingface/trl/pull/3288)
| 3,017 | 103 |
skoshx
| 2025-03-06T14:47:19 |
This is the offending code in `online_dpo_trainer.py`:
```py
def _generate(self, model, prompts):
eos_token_id = self.processing_class.eos_token_id
pad_token_id = self.processing_class.pad_token_id
# Apply chat template and tokenize the input. We do this on-the-fly to enable the use of reward models and
# policies with different tokenizers / chat templates.
inputs = [{"prompt": prompt} for prompt in prompts]
inputs = [maybe_apply_chat_template(x, self.processing_class) for x in inputs]
inputs = [self.tokenize_row(x, model.config.is_encoder_decoder, self.processing_class) for x in inputs]
inputs = self.data_collator(inputs)
# Sample 2 completions per prompt of size `max_new_tokens` from the model
inputs = self._prepare_inputs(inputs)
prompt_ids = inputs["prompt_input_ids"].repeat(2, 1)
prompt_mask = inputs["prompt_attention_mask"].repeat(2, 1)
with unwrap_model_for_generation(
model, self.accelerator, gather_deepspeed3_params=self.args.ds3_gather_for_generation
) as unwrapped_model:
output = unwrapped_model.generate(
input_ids=prompt_ids,
attention_mask=prompt_mask,
generation_config=self.generation_config,
)
completion_ids = output[:, prompt_ids.size(1) :]
completion_ids, completion_mask = truncate_right(completion_ids, eos_token_id, pad_token_id)
return prompt_ids, prompt_mask, completion_ids, completion_mask
```
I fixed the error by moving the input tokenization and collation logic inside the `unwrap_model_for_generation` block.
```py
with unwrap_model_for_generation(
model, self.accelerator, gather_deepspeed3_params=self.args.ds3_gather_for_generation
) as unwrapped_model:
# Apply chat template and tokenize the input. We do this on-the-fly to enable the use of reward models and
# policies with different tokenizers / chat templates.
inputs = [{"prompt": prompt} for prompt in prompts]
inputs = [maybe_apply_chat_template(x, self.processing_class) for x in inputs]
inputs = [self.tokenize_row(x, model.config.is_encoder_decoder, self.processing_class) for x in inputs]
inputs = self.data_collator(inputs)
# Sample 2 completions per prompt of size `max_new_tokens` from the model
inputs = self._prepare_inputs(inputs)
prompt_ids = inputs["prompt_input_ids"].repeat(2, 1)
prompt_mask = inputs["prompt_attention_mask"].repeat(2, 1)
output = unwrapped_model.generate(
input_ids=prompt_ids,
attention_mask=prompt_mask,
generation_config=self.generation_config,
)
```
That seemed to work, but then I got a rather bad-looking error:
```
AttributeError: 'DeepSpeedZeRoOffload' object has no attribute '_register_hooks_recursively'
[rank0]: Traceback (most recent call last):
[rank0]: File "/mnt/ml-data/crafty/simple/docs_dpo_online_repro.py", line 28, in <module>
[rank0]: trainer.train()
[rank0]: File "/mnt/ml-data/crafty/simple/repro-venv/lib/python3.11/site-packages/transformers/trainer.py", line 2241, in train
[rank0]: return inner_training_loop(
[rank0]: ^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/mnt/ml-data/crafty/simple/repro-venv/lib/python3.11/site-packages/transformers/trainer.py", line 2548, in _inner_training_loop
[rank0]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/mnt/ml-data/crafty/simple/repro-venv/lib/python3.11/site-packages/trl/trainer/online_dpo_trainer.py", line 538, in training_step
[rank0]: prompt_ids, prompt_mask, completion_ids, completion_mask = self._generate(model, prompts)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/mnt/ml-data/crafty/simple/repro-venv/lib/python3.11/site-packages/trl/trainer/online_dpo_trainer.py", line 482, in _generate
[rank0]: with unwrap_model_for_generation(
[rank0]: File "/usr/lib/python3.11/contextlib.py", line 144, in __exit__
[rank0]: next(self.gen)
[rank0]: File "/mnt/ml-data/crafty/simple/repro-venv/lib/python3.11/site-packages/trl/models/utils.py", line 213, in unwrap_model_for_generation
[rank0]: add_hooks(model)
[rank0]: File "/mnt/ml-data/crafty/simple/repro-venv/lib/python3.11/site-packages/trl/models/utils.py", line 174, in add_hooks
[rank0]: optimizer_offload._register_hooks_recursively(optimizer_offload.module)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: AttributeError: 'DeepSpeedZeRoOffload' object has no attribute '_register_hooks_recursively'
```
So I tried using a lower DeepSpeed stage (1) and a smaller model, so they would fit on one GPU:
```py
# train_online_dpo.py
from datasets import load_dataset
from trl import OnlineDPOConfig, OnlineDPOTrainer, PairRMJudge
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import BasePairwiseJudge
class DummyPairwiseJudge(BasePairwiseJudge):
def judge(self, prompts: list[str], completions: list[list[str]], shuffle_order: bool = True) -> list[int]:
return [0 for prompt in prompts]
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
# model = AutoModelForCausalLM.from_pretrained("unsloth/Meta-Llama-3.1-8B-Instruct")
# tokenizer = AutoTokenizer.from_pretrained("unsloth/Meta-Llama-3.1-8B-Instruct")
# Explicitly defining `ref_model` because of error "ValueError: DeepSpeed ZeRO-3 is enabled and is not compatible with `create_reference_model()`. Please instantiate your reference model directly with `AutoModelForCausalLM.from_pretrained()`."
# ref_model = AutoModelForCausalLM.from_pretrained("unsloth/Meta-Llama-3.1-8B-Instruct")
ref_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
train_dataset = load_dataset("trl-lib/ultrafeedback-prompt", split="train")
training_args = OnlineDPOConfig(output_dir="Qwen2-0.5B-OnlineDPO", logging_steps=10, bf16=True)
trainer = OnlineDPOTrainer(
model=model, judge=DummyPairwiseJudge(), args=training_args, processing_class=tokenizer, train_dataset=train_dataset, ref_model=ref_model
)
trainer.train()
```
This trains successfully:
```bash
{'loss': 0.6932, 'grad_norm': 46.673831939697266, 'learning_rate': 4.823521106875618e-07, 'objective/kl': 0.9423828125, 'objective/entropy': 255.7, 'objective/non_score_reward': -0.09420166015625, 'rewards/chosen': -0.001299285888671875, 'rewards/rejected': -0.00139923095703125, 'rewards/accuracies': 0.45625, 'rewards/margins': 0.000103759765625, 'logps/chosen': -53.7, 'logps/rejected': -56.8, 'val/contain_eos_token': 0.29375, 'beta': 0.09999999999999999, 'epoch': 0.11}
4%|████▋ | 255/7083 [10:28<4:36:33, 2.43s/it]
```
So basically, it seems like using DeepSpeed Stage 3 just doesn't work. And it's a shame, because even 7B models can't be fine-tuned without quantization, even with A100 80GB GPUs...
| 3,016 | 104 |
skoshx
| 2025-03-06T17:08:01 |
🎉 Update:
Quickly reading through the DeepSpeed codebase gave me the understanding that the `DeepSpeedZeRoOffload` class automatically registers hooks upon instance creation, so I removed the `optimizer_offload._register_hooks_recursively(optimizer_offload.module)` line (`add_hooks` can be disregarded entirely in `trl/models/utils.py`), and now Online DPO works with DeepSpeed ZeRO Stage 3.
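Roughly, the patch amounts to guarding (or dropping) that call in `add_hooks`; a sketch of a guarded variant, with attribute paths as I understood them rather than verified against every DeepSpeed version:
```python
# Sketch only: newer DeepSpeed registers ZeRO-3 hooks inside DeepSpeedZeRoOffload.__init__,
# so the explicit re-registration is only attempted when the old private method still exists.
def add_hooks(model):
    optimizer_offload = model.optimizer.parameter_offload  # assumed attribute path
    if hasattr(optimizer_offload, "_register_hooks_recursively"):   # older DeepSpeed
        optimizer_offload._register_hooks_recursively(optimizer_offload.module)
    elif hasattr(optimizer_offload, "_register_deepspeed_module"):  # newer DeepSpeed (#2963)
        optimizer_offload._register_deepspeed_module(optimizer_offload.module)
```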
The above training with the `unsloth/Meta-Llama-3.1-8B-Instruct` model on a 2xA100 (80GB) node would take about 53 hours to complete:
```
0%| | 6/7083 [03:00<53:37:49, 27.28s/it]
```
I'm happy to open a PR to make these fixes, but would love the input of a maintainer to maybe shed some light on potential problems from these patches, since I haven't worked that long on the TRL repo.
| 3,016 | 105 |
qgallouedec
| 2025-03-06T17:59:42 |
Is it related to #2963?
| 3,016 | 106 |
skoshx
| 2025-03-06T19:55:55 |
The second part is related, but that won't fix the original "AttributeError: 'dict' object has no attribute 'is_encoder_decoder'" error.
Also, I see that PR was merged, but I'm still not convinced it's even needed to call `self._register_deepspeed_module(self.module)`, like they do in that PR, since it gets called automatically in `__init__`? Am I missing something?
[Code line where hooks are automatically set up](https://github.com/deepspeedai/DeepSpeed/blob/c2c81993948fc28385542196c8544fb442017987/deepspeed/runtime/zero/parameter_offload.py#L177)
| 3,016 | 107 |
qgallouedec
| 2025-03-06T08:00:54 |
Thanks for reporting, how would you fix that?
| 3,015 | 108 |
Boltzmachine
| 2025-03-06T19:48:19 |
I clamp it for now
| 3,015 | 109 |
vagitablebirdcode
| 2025-03-14T09:55:29 |
I recommend implementing a `SoftClip` method in PyTorch, similar to the one in TensorFlow Probability, for truncation; its formula is similar to the following:
(formula image omitted)
This activation function ensures that the output is smooth over the entire domain, which prevents gradient explosion during backpropagation.
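A minimal PyTorch sketch of one such smooth clamp (a softplus-based variant; TFP's exact `SoftClip` formula may differ):
```python
# Sketch only: smoothly squash x into (low, high) so the gradient never hits a hard zero at
# the bounds, unlike torch.clamp. This is one softplus-based variant, not TFP's exact formula.
import torch
import torch.nn.functional as F

def soft_clip(x: torch.Tensor, low: float, high: float, hinge: float = 1.0) -> torch.Tensor:
    lower = low + hinge * F.softplus((x - low) / hinge)       # keeps values above `low`
    return high - hinge * F.softplus((high - lower) / hinge)  # keeps values below `high`
```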
| 3,015 | 110 |
Alex-HaochenLi
| 2025-05-18T10:34:56 |
Hi @Boltzmachine, I met the same issue. I am wondering how you set the clamp value?
| 3,015 | 111 |
August-murr
| 2025-03-06T07:59:50 |
@qgallouedec I'm going to have to ask you to reproduce, or at least rerun, the code you used to train https://github.com/huggingface/trl/pull/2873#issuecomment-2663793035, so I can clarify whether the problem is on my side (my script) or in TRL.
| 3,013 | 112 |
AndreiCComan
| 2025-03-06T17:47:02 |
@August-murr I had a similar issue in #2856 which has been fixed. Could you try to run the same MRE I posted in #2856 and confirm you are facing the same issue?
| 3,013 | 113 |
cuiyuhao1996
| 2025-03-18T02:52:42 |
I ran into the same problem, even with the latest update.
| 3,013 | 114 |
cuiyuhao1996
| 2025-03-18T02:54:40 |
Have you solved the problem? :)
| 3,013 | 115 |
August-murr
| 2025-03-18T12:05:25 |
> Have you solved the problem? :)
@qgallouedec said he was working on it
@qgallouedec any updates?
| 3,013 | 116 |
Techie5879
| 2025-04-05T06:55:13 |
Current setup:
vLLM model running on GPU 0, and in another notebook I have set GPU 1 as the only visible device (for training). This follows https://github.com/huggingface/trl/blob/main/docs/source/speeding_up_training.md
```
training_args = GRPOConfig(
output_dir="Llama-3.2-1B-GRPO4",
logging_steps=1,
save_steps=500,
learning_rate=5e-7,
adam_beta1 = 0.9,
adam_beta2 = 0.99,
weight_decay = 0.1,
warmup_ratio = 0.05,
max_grad_norm = 0.1,
max_steps = 10000,
per_device_train_batch_size=6,
num_generations=6,
lr_scheduler_type="cosine",
push_to_hub=False,
bf16=True,
report_to="wandb",
use_vllm=True,
max_prompt_length = max_prompt_length,
max_completion_length = 512,
)
```
```
trainer = GRPOTrainer(
model=MODEL_ID,
processing_class=tokenizer,
reward_funcs=[
xmlcount_reward_func,
soft_format_reward_func,
strict_format_reward_func,
int_reward_func,
correctness_reward_func,
],
args=training_args,
train_dataset=dataset,
# peft_config=lora_config,
)
```
With the PEFT config, training just doesn't seem to work well; without it, training works much better and rewards increase. trl = 0.16.0, peft = 0.15.1
| 3,013 | 117 |
qgallouedec
| 2025-04-05T17:52:09 |
We usually use a higher learning rate when using PEFT. Could you try this?
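Something along these lines, with an illustrative value rather than a tuned recommendation:
```python
# Illustrative only: keep the GRPO setup above but bump the learning rate for LoRA training.
# 1e-5 is a placeholder; the thread does not prescribe a specific value.
from peft import LoraConfig
from trl import GRPOConfig

lora_config = LoraConfig(r=64, lora_alpha=64, task_type="CAUSAL_LM")
training_args = GRPOConfig(
    output_dir="Llama-3.2-1B-GRPO4",
    learning_rate=1e-5,   # higher than the 5e-7 used in the full fine-tuning config above
    use_vllm=True,
    bf16=True,
)
```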
| 3,013 | 118 |
Techie5879
| 2025-04-05T18:26:44 |
@qgallouedec I've tried about 2e-5 with Llama 3.2 1B, using LoRA rank 64.
Do you recommend something else/going higher?
| 3,013 | 119 |
qgallouedec
| 2025-03-07T16:36:09 |
This can be considered; have you tried implementing it?
| 3,010 | 120 |
radna0
| 2025-03-07T16:39:35 |
@qgallouedec I’m still experimenting with LMDeploy for inference, so not yet.
| 3,010 | 121 |
HuggingFaceDocBuilderDev
| 2025-03-11T14:34:17 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3009). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,009 | 122 |
qgallouedec
| 2025-03-22T18:19:41 |
## Benchmark packing
```python
import timeit
import numpy as np
from datasets import Dataset
from trl.data_utils import pack_examples, pack_dataset
# Create a larger dataset with sequence lengths following a gamma distribution
num_samples = 10_000
# Generate sequence lengths following a gamma distribution
seq_lengths = np.random.gamma(shape=5, scale=20, size=num_samples) # mean will be 100
seq_lengths = np.clip(seq_lengths, 10, None).astype(int) # Clip to [10, inf)
# Generate input sequences with random lengths based on gamma distribution
examples = {
"input_ids": [list(range(length)) for length in seq_lengths],
"attention_mask": [[1] * length for length in seq_lengths],
}
dataset = Dataset.from_dict(examples)
max_length = 128 # Set a fixed packing length
# Benchmark pack_dataset
time_pack_dataset = timeit.timeit(lambda: pack_dataset(dataset, max_length), number=10)
# Benchmark dataset.map with pack_examples
time_pack_examples = timeit.timeit(
lambda: dataset.map(pack_examples, batched=True, fn_kwargs={"seq_length": max_length}), number=10
)
print(f"pack_dataset time: {time_pack_dataset:.4f} seconds")
print(f"dataset.map(pack_examples) time: {time_pack_examples:.4f} seconds")
```
```
pack_dataset time: 0.0667 seconds
dataset.map(pack_examples) time: 19.3734 seconds
Speedup: 290.46x
```
| 3,009 | 123 |
qgallouedec
| 2025-03-22T18:22:40 |
## Benchmark truncate
```python
import timeit
import numpy as np
from datasets import Dataset
from trl.data_utils import truncate_dataset
def truncate_examples(example, max_length):
return {key: example[key][:max_length] for key in ["input_ids", "attention_mask"]}
# Create a larger dataset with sequence lengths following a gamma distribution
num_samples = 10_000
# Generate sequence lengths following a gamma distribution
seq_lengths = np.random.gamma(shape=5, scale=20, size=num_samples) # mean will be 100
seq_lengths = np.clip(seq_lengths, 10, None).astype(int) # Clip to [10, inf)
# Generate input sequences with random lengths based on gamma distribution
examples = {
"input_ids": [list(range(length)) for length in seq_lengths],
"attention_mask": [[1] * length for length in seq_lengths],
}
dataset = Dataset.from_dict(examples)
max_length = 128 # Set a fixed truncation length
# Benchmark truncate_dataset
time_truncate_dataset = timeit.timeit(lambda: truncate_dataset(dataset, max_length), number=10)
# Benchmark dataset.map with truncate_examples
time_truncate_examples = timeit.timeit(
lambda: dataset.map(truncate_examples, batched=True, fn_kwargs={"max_length": max_length}), number=10
)
print(f"truncate_dataset time: {time_truncate_dataset:.4f} seconds")
print(f"dataset.map(truncate_examples) time: {time_truncate_examples:.4f} seconds")
print(f"Speedup: {time_truncate_examples / time_truncate_dataset:.2f}x")
```
```
truncate_dataset time: 0.0611 seconds
dataset.map(truncate_examples) time: 6.3807 seconds
Speedup: 104.47x
```
| 3,009 | 124 |
qgallouedec
| 2025-03-05T17:22:33 |
Thanks for reporting. I can't reproduce right now. Can you try to provide the full code, with a dataset and a model, that allows me to reproduce? Also, try downgrading to vLLM 0.7.2 and pulling the latest commit from trl. Looking forward to knowing whether it solves the issue.
| 3,008 | 125 |
iamansinha
| 2025-03-12T08:24:20 |
@qgallouedec Thanks for your reply!
[Line 705 of grpo_trainer.py](https://github.com/huggingface/trl/blob/3f0695a4ca6f27bd1b7d0280c71960e7aff0d298/trl/trainer/grpo_trainer.py#L705):
`device = self.accelerator.device` was giving just `"cuda"`.
So, I was able to patch the error by manually setting `device = 'cuda:0'` before Line 751.
I found out that I was facing this problem only with the 2xA100 setup, and not with another machine with 4xA100. So it might be a machine-specific issue if you are unable to reproduce this error. Closing this issue for now.
| 3,008 | 126 |
luckyyangrun
| 2025-03-18T06:27:34 |
I face the same issue with 2x4090.
| 3,008 | 127 |
Vanchrn
| 2025-03-22T03:26:38 |
same
| 3,008 | 128 |
HuggingFaceDocBuilderDev
| 2025-03-03T18:28:14 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3003). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,003 | 129 |
OctoSabercat
| 2025-03-03T17:38:34 |
@bot /style
| 3,002 | 130 |
HelloWorldLTY
| 2025-03-03T20:04:50 |
Hi, did you try the model and have any ideas? Thanks.
| 2,999 | 131 |
tastelikefeet
| 2025-03-14T02:43:39 |
Maybe you can try our framework based on trl: https://github.com/modelscope/ms-swift/blob/main/examples/train/grpo/full_vllm_qwenvl.sh
We support training a 72B model with 4 A100 GPUs:
https://github.com/modelscope/ms-swift/blob/main/examples/train/grpo/train_72b_4gpu.sh
| 2,999 | 132 |
Wangbiao2
| 2025-03-15T10:27:54 |
> May be you can try our framework based on trl: https://github.com/modelscope/ms-swift/blob/main/examples/train/grpo/full_vllm_qwenvl.sh We support train a 72B model with 4 A100 GPUs: https://github.com/modelscope/ms-swift/blob/main/examples/train/grpo/train_72b_4gpu.sh
Thank you!
| 2,999 | 133 |