user | created_at | body | issue_number | __index_level_0__
---|---|---|---|---
not-lain
| 2024-11-05T10:58:03 |
thanks for the reply, I will close this one and open a new issue in the transformers library
| 2,313 | 600 |
qgallouedec
| 2024-11-05T17:54:27 |
That's true, the documentation is outdated. Further information here: https://github.com/huggingface/trl/issues/2314#issuecomment-2456683497. Closing in favour of #2314 (mostly duplicate)
| 2,312 | 601 |
tcz
| 2024-11-26T11:10:07 |
How would you make an `eval_data_collator` from DataCollatorForCompletionOnlyLM?
| 2,311 | 602 |
qgallouedec
| 2024-11-05T17:51:57 |
That's correct, thanks for reporting. Are you willing to submit a PR that fixes that?
| 2,309 | 603 |
qgallouedec
| 2024-11-22T17:43:44 |
Closed by #2360
| 2,309 | 604 |
staas-dnm
| 2024-11-02T05:54:23 |
Oh, I opened this with the wrong GitHub ID.
Closing the issue.
If you can, please remove this issue.
| 2,308 | 605 |
qgallouedec
| 2024-11-05T17:49:27 |
Indeed, it's not currently supported. And unless it's widely demanded, I don't think it will be.
Having said that, I think you can easily implement it. The following should work:
1. set `precompute_ref_log_probs=True` in `DPOConfig`
2. add a new parameter `ref_processing_class` in `DPOTrainer`
3. in `DPOTrainer.__init__`, create a new tokenized dataset with `ref_processing_class`, something like:
```python
fn_kwargs = {
"processing_class": ref_processing_class, # <-
"max_prompt_length": args.max_prompt_length,
"max_completion_length": args.max_completion_length,
# for enc-dec, we add the special tokens ([bos_token] + prompt + [eos_token]; completion + [eos_token])
"add_special_tokens": self.is_encoder_decoder,
}
self.ref_train_dataset = train_dataset.map( # <-
self.tokenize_row if not self.is_vision_model else self.process_row,
fn_kwargs=fn_kwargs,
num_proc=self.dataset_num_proc,
writer_batch_size=10,
desc="Tokenizing train dataset",
)
```
4. modify the ref-precomputing part here
https://github.com/huggingface/trl/blob/74e20cbbbcbac7ac8d426df09eda5f310c637def/trl/trainer/dpo_trainer.py#L682-L692
with
```python
data_loader = self.accelerator.prepare(DataLoader(self.ref_train_dataset, **dataloader_params))
```
| 2,307 | 606 |
HuggingFaceDocBuilderDev
| 2024-11-01T16:52:02 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2306). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 2,306 | 607 |
HuggingFaceDocBuilderDev
| 2024-11-01T10:35:40 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2305). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 2,305 | 608 |
c3ianwu
| 2024-11-01T23:23:17 |
Ok I think I have found the issue.
In my code I was calling `with unwrap_model_for_generation(model, accelerator) as unwrapped_model` multiple times in different places, opening and closing with the context manager e.g.
```python
with unwrap_model_for_generation(model, accelerator) as unwrapped_model:
    # generate A
    # do some stuff
with unwrap_model_for_generation(model, accelerator) as unwrapped_model:
    # generate B
    # do more stuff
```
This problem seems to go away by just calling the context manager once:
```python
with unwrap_model_for_generation(model, accelerator) as unwrapped_model:
    # generate A
    # do some stuff
    # generate B
    # do more stuff
```
| 2,304 | 609 |
qgallouedec
| 2024-11-05T09:45:36 |
> Code based on a forked version of trl.
> As this is based on my own modified version of trl I realise you might not be of much help
Indeed. Unfortunately we don't have time for tech support. The most we can do is help with the original codebase.
Great that you've found the solution! Thanks for sharing it!
| 2,304 | 610 |
HuggingFaceDocBuilderDev
| 2024-10-31T18:15:58 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2303). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 2,303 | 611 |
HuggingFaceDocBuilderDev
| 2024-10-31T14:03:37 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2302). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 2,302 | 612 |
HuggingFaceDocBuilderDev
| 2024-10-31T11:26:07 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2301). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 2,301 | 613 |
HuggingFaceDocBuilderDev
| 2024-10-31T09:33:01 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2300). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 2,300 | 614 |
qgallouedec
| 2024-11-05T17:38:17 |
Thanks for reporting, should be fixed in #2328
| 2,299 | 615 |
HuggingFaceDocBuilderDev
| 2024-10-30T16:46:23 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2298). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 2,298 | 616 |
qgallouedec
| 2024-10-31T08:27:16 |
Failing tests are expected and will be fixed later (in #2288, for example)
| 2,298 | 617 |
qgallouedec
| 2024-10-30T16:01:25 |
I think the readme should be kept as minimal as possible. While this graph is nice, I don't think it belongs in the readme. But happy to merge if @lewtun @edbeeching or @kashif think the opposite.
| 2,297 | 618 |
lewtun
| 2024-11-19T10:45:06 |
Yes I agree with @qgallouedec that we'd like to keep the README as lean as possible. Thank you very much for the proposal in any case! Closing this for now
| 2,297 | 619 |
qgallouedec
| 2024-11-05T09:39:06 |
Can you shortly describe what your collator is doing?
| 2,296 | 620 |
qgallouedec
| 2024-11-22T17:45:28 |
I'm closing because there is not enough information to answer the request. Feel free to reopen an issue, specifying your question more precisely.
| 2,296 | 621 |
qgallouedec
| 2024-11-05T18:10:50 |
I understand the motivation behind this proposal, but I feel that warning would make more sense directly in the tokenizer. At least, that's where I'd look for it. What's more, this data collator is only used in specific cases. Having it in a collator (which is initially designed to collate data, not process it) seems strange to me.
| 2,295 | 622 |
SwayamInSync
| 2024-11-07T10:31:38 |
Hey @qgallouedec ,
Thanks, and yeah, that totally makes sense. I can instead try to put this in the tokenizer, or add it as a check before starting training with Trainer.
Feel free to close it here
| 2,295 | 623 |
qgallouedec
| 2024-11-07T11:27:31 |
Good. Please link this PR if you push this elsewhere
| 2,295 | 624 |
tanaybaswa
| 2024-11-21T23:17:30 |
I have a similar issue trying to fine tune a 12B model on 8xH100s
| 2,294 | 625 |
SwayamInSync
| 2024-10-29T14:02:43 |
On inspection, it seems the addition of an extra pad token is causing the vocab size mismatch:
```python
if not tokenizer.pad_token_id:
tokenizer.add_special_tokens({"pad_token": "[PAD]"})
tokenizer.pad_token_id = tokenizer.convert_tokens_to_ids("[PAD]")
```
I think it may be nice to have a check and a proper error message :)
I can drop a PR if needed; closing this issue.
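A minimal sketch of what such a check could look like (the helper name and error message are illustrative, not TRL's actual API):

```python
def check_vocab_alignment(tokenizer_vocab_size: int, model_embedding_size: int) -> None:
    """Raise a descriptive error when added special tokens (e.g. '[PAD]')
    leave the tokenizer with more tokens than the model's embedding matrix."""
    if tokenizer_vocab_size > model_embedding_size:
        raise ValueError(
            f"Tokenizer vocab size ({tokenizer_vocab_size}) exceeds the model's "
            f"embedding size ({model_embedding_size}). Call "
            "model.resize_token_embeddings(len(tokenizer)) after adding special tokens."
        )
```

Running such a check right after adding the pad token would surface the mismatch immediately instead of failing later during training.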
| 2,293 | 626 |
qgallouedec
| 2024-10-29T12:16:26 |
Thanks for reporting. Next time, please share your system info (as requested in the [contribution guide](https://github.com/huggingface/trl/blob/main/CONTRIBUTING.md) and in the issue template). It would have been especially relevant here.
You're most likely using Transformers v4.46, which is not compatible with TRL<v0.12 (about to be released). Make sure to downgrade transformers
```
pip install transformers"<=4.45"
```
**OR**
Upgrade to TRL >= 0.12 (this won't work before the release)
```
pip install trl">=0.12"
```
for ref, this issue has been solved in #2246
| 2,292 | 627 |
MonolithFoundation
| 2024-10-30T02:25:29 |
Hi, I'm using transformers 4.47 and trl 0.11.4.
Could you indicate when 0.12 will be released, and why this error happens?
| 2,292 | 628 |
monk1337
| 2024-11-04T04:55:23 |
> Thanks for reporting. Next time, please share your system info (as requested in the [contribution guide](https://github.com/huggingface/trl/blob/main/CONTRIBUTING.md) and in the issue template). It would have been especially relevant here.
>
> You're most likely using Transformers v4.46, which is not compatible with TRL<v0.12 (about to be released). Make sure to downgrade transformers
>
> ```
> pip install transformers"<=4.45"
> ```
>
> **OR**
>
> Upgrade to TRL>0.12 (this won't work before the release)
>
> ```
> pip install trl">=0.12"
> ```
>
> for ref, this issue has been solved in #2246
Worked for me as well. I was using unsloth and getting this error.
| 2,292 | 629 |
MonolithFoundation
| 2024-11-04T07:12:21 |
I still didn't get the root reason for this. The API changes so rapidly.
| 2,292 | 630 |
qgallouedec
| 2024-11-04T09:30:03 |
In our `trl` trainers, we had the following method:
```python
def get_batch_samples(self, model, batch):
```
However, with the recent addition in [Hugging Face Transformers PR #34198](https://github.com/huggingface/transformers/pull/34198), `Trainer` now includes a new `get_batch_samples` method:
```python
def get_batch_samples(self, epoch_iterator, num_batches):
```
This new method has the same name but a different purpose and parameter structure.
Since our `trl` trainer inherits from the Transformers `Trainer` class, our original `get_batch_samples` method in `trl` is unintentionally overriding the new method in `Trainer`. This causes a conflict: when `self.get_batch_samples(epoch_iterator, num_batches)` is called, it actually tries to use our `trl` method signature (`get_batch_samples(model, batch)`) instead. This results in the following:
- `epoch_iterator` (expected by the new method as a generator) is passed as the `model` parameter.
- `num_batches` (expected as an integer) is passed as the `batch` parameter.
Consequently, when the method tries to execute `model.generate(...)`, it raises an `AttributeError` because `model` is now a generator (inherited from `epoch_iterator`) rather than an expected model with a `.generate` method. This leads to the error:
```
policy_output = model.generate(
^^^^^^^^^^^^^^
AttributeError: 'generator' object has no attribute 'generate'
```
To resolve this, we needed to rename the method in #2246
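A toy reproduction of the shadowing (class names are illustrative; the real classes live in `transformers` and `trl`):

```python
class BaseTrainer:
    # Models the new upstream method added in transformers PR #34198
    def get_batch_samples(self, epoch_iterator, num_batches):
        return [next(epoch_iterator) for _ in range(num_batches)]

class LegacyTRLTrainer(BaseTrainer):
    # Models the pre-existing trl method: same name, different purpose,
    # so it unintentionally overrides the parent method.
    def get_batch_samples(self, model, batch):
        return model.generate(batch)

trainer = LegacyTRLTrainer()
epoch_iterator = (batch for batch in [[1, 2], [3, 4]])
# Upstream code calls the new signature, but method resolution finds the
# subclass version: the generator lands in `model`, which has no `.generate`.
try:
    trainer.get_batch_samples(epoch_iterator, 1)
except AttributeError as err:
    print(err)  # 'generator' object has no attribute 'generate'
```

Renaming the subclass method (as done in #2246) removes the collision, so the inherited upstream method is used again.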
| 2,292 | 631 |
MonolithFoundation
| 2024-11-05T02:49:00 |
@qgallouedec So it is! However, after I upgraded trl to the master branch, the error still persists. Why?
| 2,292 | 632 |
qgallouedec
| 2024-11-05T09:18:31 |
Please share your system info with `trl env`
| 2,292 | 633 |
maziyarpanahi
| 2024-11-08T08:50:03 |
~~I am getting this error as well and I am also confused with the versions, backward compatibility, and the fix. What is the combination of `transformers` and `trl` libraries that resolves this issue? (which versions should we install for these 2 libraries so we don't see the error today)~~
Installed from the master and it worked. tnx
| 2,292 | 634 |
qgallouedec
| 2024-11-08T10:34:29 |
```
pip install --upgrade trl
```
| 2,292 | 635 |
qgallouedec
| 2024-10-31T15:02:25 |
Please share the MRE and your system info
| 2,291 | 636 |
mohit-raghavendra
| 2024-10-31T22:23:06 |
I observed this too when I passed a `TrainingArguments` object to `DPOTrainer` instead of a `DPOConfig` object. Using `DPOConfig` fixes the issue.
| 2,291 | 637 |
qgallouedec
| 2024-11-05T09:41:11 |
`DPOTrainer` expects `DPOConfig` for `args`. `TrainingArguments` is not supported.
| 2,291 | 638 |
qgallouedec
| 2024-10-28T19:02:20 |
Thanks for reporting @danib08, it has been taken into account in #2162
| 2,290 | 639 |
qgallouedec
| 2024-11-05T18:13:24 |
What's the status of this PR? I've converted it to a draft, since that seems to reflect its current state.
| 2,289 | 640 |
qgallouedec
| 2024-11-18T13:13:51 |
I'm closing because there's no recent activity. Feel free to open a new PR if the status of this proposal changes.
| 2,289 | 641 |
HuggingFaceDocBuilderDev
| 2024-10-30T15:13:48 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2288). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 2,288 | 642 |
qgallouedec
| 2024-10-30T15:57:04 |
If at least one of you @muellerzr @SunMarc can take a look please 🙏
| 2,288 | 643 |
HuggingFaceDocBuilderDev
| 2024-10-27T17:35:29 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2287). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 2,287 | 644 |
HuggingFaceDocBuilderDev
| 2024-10-26T20:28:04 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2286). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 2,286 | 645 |
PhilipMay
| 2024-10-27T17:00:19 |
I don't think the CI problems have anything to do with the changes in this PR...
| 2,286 | 646 |
qgallouedec
| 2024-11-05T18:17:15 |
Thanks @PhilipMay! Do you mind updating your branch? I don't have write access to it.
| 2,286 | 647 |
HuggingFaceDocBuilderDev
| 2024-10-28T10:49:49 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2285). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 2,285 | 648 |
qgallouedec
| 2024-10-25T13:00:16 |
Wonderful! Thanks @ccs96307
Can you also replace `pytest.raises(...)` by `self.assertRaises(...)`?
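For reference, the two idioms are equivalent; a self-contained sketch of the unittest style being asked for (test name and body are illustrative):

```python
import unittest

class ExampleTest(unittest.TestCase):
    def test_invalid_input_raises(self):
        # unittest equivalent of `with pytest.raises(ValueError): ...`
        with self.assertRaises(ValueError):
            int("not a number")

# Run the single test case programmatically
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ExampleTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```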
| 2,283 | 649 |
HuggingFaceDocBuilderDev
| 2024-10-25T13:08:16 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2283). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 2,283 | 650 |
qgallouedec
| 2024-10-25T13:32:59 |
and make sure to run `make precommit`
| 2,283 | 651 |
ccs96307
| 2024-10-25T14:59:16 |
Hi @qgallouedec, thank you so much for taking the time to review my PR. I really appreciate your suggestions.
I'll replace `pytest.raises(...)` with `self.assertRaises(...)` as you recommended, and will also make sure to run `make precommit` to get everything aligned with the project's guidelines. Thanks again for your helpful feedback—I’ll get these changes pushed soon!
| 2,283 | 652 |
ccs96307
| 2024-10-26T06:48:24 |
Hi @qgallouedec, I've noticed that the `tests (3.11, windows-latest)` failed due to the following error:
```
FAILED tests/test_nash_md_trainer.py::TestNashMDTrainer::test_nash_md_trainer_judge_training_0_standard_prompt_only - ValueError: Cannot find pytorch_model.bin or model.safetensors in C:\Users\runneradmin\.cache\huggingface\hub\llm-blender\PairRM
FAILED tests/test_nash_md_trainer.py::TestNashMDTrainer::test_nash_md_trainer_judge_training_1_conversational_prompt_only - ValueError: Cannot find pytorch_model.bin or model.safetensors in C:\Users\runneradmin\.cache\huggingface\hub\llm-blender\PairRM
```
These errors seem to be unrelated to my changes, as the tests passed locally and the files I edited do not directly involve this functionality. I suspect this might be a network issue or a caching problem on Windows?
Could this be a common issue you've seen before? If there's anything I need to change or investigate further, please let me know.
| 2,283 | 653 |
qgallouedec
| 2024-10-28T15:15:48 |
> Could this be a common issue you've seen before? If there's anything I need to change or investigate further, please let me know.
Yes, don't worry, it's not related to your PR; it will be solved in #2276
| 2,283 | 654 |
August-murr
| 2024-10-28T07:12:44 |
@lewtun
@qgallouedec
Feedback would be appreciated!
| 2,282 | 655 |
qgallouedec
| 2024-11-05T18:21:30 |
Thanks a lot @August-murr for the work. Can you add documentation and tests?
| 2,282 | 656 |
HuggingFaceDocBuilderDev
| 2024-11-05T18:24:42 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2282). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 2,282 | 657 |
August-murr
| 2024-11-05T20:10:44 |
> Thanks a lot @August-murr for the work. Can you add documentation, and test?
I've already added most of the docs; as for the tests, unfortunately I won't be able to do them for a few days, and if nobody else adds them, I'll do it later.
| 2,282 | 658 |
August-murr
| 2024-11-14T18:22:33 |
The tests I added validate the success of the merge, and I can expand them if necessary.
I also added docs to the callbacks file but was unable to produce the HTML file similar to the [callback docs](https://huggingface.co/docs/trl/main/en/callbacks) so I'd appreciate it if you could confirm whether the docs are properly generated or not.
| 2,282 | 659 |
August-murr
| 2024-11-18T09:14:53 |
> Thanks for iterating @August-murr ! The PR LGTM now and once the CI is green & @qgallouedec approves, I think we can merge it
The tests without optional dependency failed because Mergekit is an optional dependency
| 2,282 | 660 |
kashif
| 2024-11-18T09:27:03 |
@August-murr in the `import_utils` you can define a new `is_mergekit_available` helper and then in the tests you can skip the tests if its not available
| 2,282 | 661 |
qgallouedec
| 2024-11-18T13:07:35 |
Like here:
https://github.com/huggingface/trl/blob/6f8fe59aebc1153c6000c922b8edc4bb11efd506/trl/import_utils.py#L39-L40
https://github.com/huggingface/trl/blob/6f8fe59aebc1153c6000c922b8edc4bb11efd506/tests/testing_utils.py#L42-L46
https://github.com/huggingface/trl/blob/6f8fe59aebc1153c6000c922b8edc4bb11efd506/tests/test_judges.py#L62-L63
Don't hesitate to ask for help if you want the maintainers to do it for you.
| 2,282 | 662 |
August-murr
| 2024-11-18T13:30:28 |
> Like here:
>
> https://github.com/huggingface/trl/blob/6f8fe59aebc1153c6000c922b8edc4bb11efd506/trl/import_utils.py#L39-L40
>
> https://github.com/huggingface/trl/blob/6f8fe59aebc1153c6000c922b8edc4bb11efd506/tests/testing_utils.py#L42-L46
>
> https://github.com/huggingface/trl/blob/6f8fe59aebc1153c6000c922b8edc4bb11efd506/tests/test_judges.py#L62-L63
>
> Don't hesitate to ask for help if you want the maintainers to do it for you.
Done!
| 2,282 | 663 |
qgallouedec
| 2024-11-18T13:35:47 |
Nice, thanks! Just running some tests, waiting for the CI to be green, and we're good to merge (expect some commits from me on this branch)
| 2,282 | 664 |
qgallouedec
| 2024-11-18T14:47:30 |
Another question that came up during the review: why have a new configuration class when we can use the mergekit one directly? I'm afraid of confusing the user, and I'm tempted to use:
```python
from mergekit import MergeConfiguration
from trl import MergeModelCallback
merge_callback = MergeModelCallback(MergeConfiguration())
```
| 2,282 | 665 |
August-murr
| 2024-11-18T17:11:24 |
> Another question that came up during the review: why have a new configuration class when we can use the mergekit one directly? I'm afraid of confusing the user, tempted to use :
>
> ```python
> from mergekit import MergeConfiguration
> from trl import MergeModelCallback
>
> merge_callback = MergeModelCallback(MergeConfiguration())
> ```
Actually, ease of use for the user was the reason I had to write the class in mergekit_utils: mergekit uses a YAML file to get its merge config, which is easier to implement but more complicated for the user.
And if you wanted to use `MergeConfiguration` directly from mergekit:
```python
from mergekit.config import MergeConfiguration
merge_config_dict = {
"dtype": "float16",
"merge_method": "linear",
"models": [
{"model": "path_to_model_1", "parameters": {"weight": 0.4}},
{"model": "path_to_model_2", "parameters": {"weight": 0.6}},
],
}
config = MergeConfiguration.model_validate(merge_config_dict)
```
As you add more parameters to the configuration, the dictionary becomes increasingly nested.
The current implementation, although harder to maintain, simplifies everything for the user:
```python
from trl.mergekit_utils import MergeConfig
config = MergeConfig("linear")
config.policy_model_weight = 0.4
config.target_model_weight = 0.6
```
| 2,282 | 666 |
qgallouedec
| 2024-11-19T11:03:06 |
That makes sense.
Do you think we can get the best of both worlds by making `trl.MergeConfig` inherit from `mergekit.config.MergeConfiguration`?
| 2,282 | 667 |
August-murr
| 2024-11-19T13:02:02 |
> That makes sense.
> Do you think we can get the best of both worlds by making `trl.MergeConfig` inherit from `mergekit.config.MergeConfiguration`?
I'll figure it out.
| 2,282 | 668 |
August-murr
| 2024-11-19T19:10:37 |
> That makes sense. Do you think we can get the best of both worlds by making `trl.MergeConfig` inherit from `mergekit.config.MergeConfiguration`?
The main issue with using Mergekit's `MergeConfiguration` directly is that it's not really designed to work on its own. It relies heavily on dictionaries, usually loaded from a YAML file, or on a bunch of `mergekit` classes to set things up:
```python
class MergeConfiguration(BaseModel):
merge_method: str
slices: Optional[List[OutputSliceDefinition]] = None
models: Optional[List[InputModelDefinition]] = None
parameters: Optional[Dict[str, ParameterSetting]] = None
base_model: Optional[ModelReference] = None
dtype: Optional[str] = None
tokenizer_source: Union[
Literal["union"], Literal["base"], ModelReference, None
] = None
tokenizer: Optional[TokenizerConfig] = None
chat_template: Optional[str] = None
out_dtype: Optional[str] = None
```
If someone wanted to set up the configuration manually, they’d either need to:
1. Write or add to a YAML file, or
2. Write a big, nested dictionary themselves (which only gets more complicated as you add more details), or
3. Use multiple classes from `mergekit` (e.g., `OutputSliceDefinition`, `InputModelDefinition`, etc.), as seen [here](https://github.com/arcee-ai/mergekit/blob/57e7d14e2a732f532970e2c9dada00e2d8f15a7a/mergekit/config.py#L85).
Neither option is user-friendly.
I admit the current implementation looks messy, but the alternative would create more complications for the user. Maybe in future versions, the Mergekit team will make `MergeConfiguration` simpler and easier to work with.
| 2,282 | 669 |
August-murr
| 2024-11-20T16:28:46 |
@qgallouedec
Anything else you'd want me to do?
| 2,282 | 670 |
qgallouedec
| 2024-11-21T11:21:56 |
LGTM thanks!
I've just applied some minor refinements:
- compat with windows file path
- use tmp dir in tests
- sort imports and function
- common method for saving and pushing in the callback
- add "trl" to model tags
| 2,282 | 671 |
August-murr
| 2024-11-21T11:53:12 |
@qgallouedec
About the failed tests:
The tests do not fail on Ubuntu; they only fail on Windows. I realized that the issue arose from a permission error in the temporary directory when trying to delete the merged files, specifically `model.safetensors`.
| 2,282 | 672 |
qgallouedec
| 2024-11-21T11:58:22 |
> @qgallouedec About the failed tests: The tests do not fail on Ubuntu; they only fail on Windows. I realized that the issue arose from a permission error from the temporary directory when trying to delete the merged files, specifically the `model.safetensors.`
Ah thanks, I was debugging, but I don't have access to a Windows VM right now (which explains https://github.com/huggingface/trl/pull/2282/commits/fa5bafe617793ed340303cf0ebded6ac03cab39f). Any idea how to solve it?
| 2,282 | 673 |
qgallouedec
| 2024-11-21T12:41:58 |
Found a solution with a57d88a1b317785fa85e3b09bd463ecb0b9eef06
| 2,282 | 674 |
August-murr
| 2024-11-21T13:10:22 |
@qgallouedec
Sorry I wasn't able to sort it out myself.
| 2,282 | 675 |
qgallouedec
| 2024-11-21T14:32:33 |
No worry, thanks a lot for this nice addition!
| 2,282 | 676 |
asparius
| 2024-12-10T19:06:24 |
This issue appears in PPO as well. It was introduced in 0.12.1, which is the rewritten version of PPO; in the old version, `masked_mean` was used. I have also checked the PR and changelog, and there was no mention of this. @qgallouedec, can you enlighten us?
| 2,281 | 677 |
qgallouedec
| 2024-11-05T10:38:20 |
Is the use of this type of procedure common in the community/literature? Do you have any reference results?
| 2,280 | 678 |
qgallouedec
| 2024-10-25T14:37:34 |
Thanks for this. Indeed I realized it while working on #2209
| 2,279 | 679 |
HuggingFaceDocBuilderDev
| 2024-10-25T14:40:44 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2279). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 2,279 | 680 |
seanexp
| 2024-10-25T02:58:57 |
What is the primary difference between this PR and #1628 ?
| 2,278 | 681 |
mnoukhov
| 2024-10-25T14:06:30 |
This is an updated, multi-GPU extension of #1628. It is also joint work between @vwxyzjn and me!
Instead of keeping the vllm models on the same GPU, we move them to another one. It also uses the more flexible `vllm_utils.py` written by @vwxyzjn in `allenai/open_instruct` (https://github.com/allenai/open-instruct/blob/main/open_instruct/vllm_utils.py), which allows using any version of `vllm`, as opposed to the fixed `0.4.2` from #1628.
Finally, this has been tested and verified to match regular Online DPO performance while being faster and more efficient, see our new preprint https://arxiv.org/abs/2410.18252
| 2,278 | 682 |
HuggingFaceDocBuilderDev
| 2024-10-28T13:17:03 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2278). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 2,278 | 683 |
Shiguang-Guo
| 2024-12-06T09:04:43 |
Hi, I have a question: if I have multiple machines with 8 cards each, what would the allocation of GPUs for vllm look like? Or is this feature currently supported?
| 2,278 | 684 |
Shiguang-Guo
| 2024-12-16T03:01:14 |
> Hi, I have a question, if I have multiple machines with 8 cards, how would the allocation of gpus for vllm look like? Or is this feature currently supported?
I solved this problem. The key point is to replace the part about `group_ranks` in `custom_initialize_model_parallel` and `init_world_group` in `vllm_utils.py` with `group_ranks = [[torch.distributed.get_rank()]]`. Maybe you can update it to the new version
| 2,278 | 685 |
fzyzcjy
| 2024-12-18T12:37:59 |
Hi, is it possible to use single GPU for both training and inference? Thanks!
| 2,278 | 686 |
fzyzcjy
| 2024-12-31T09:23:02 |
Hi, is there any updates? Thanks!
| 2,278 | 687 |
HuggingFaceDocBuilderDev
| 2024-10-25T13:20:43 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2277). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 2,277 | 688 |
HuggingFaceDocBuilderDev
| 2024-10-24T20:48:03 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2276). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 2,276 | 689 |
qgallouedec
| 2024-10-25T10:11:11 |
Results for a gemma reward model
```
accelerate launch examples/scripts/dpo_online.py \
--model_name_or_path Qwen/Qwen2-0.5B-Instruct \
--reward_model_path Ray2333/GRM-Gemma-2B-rewardmodel-ft \
--dataset_name trl-lib/ultrafeedback-prompt \
--learning_rate 5.0e-7 \
--logging_steps 10 \
--output_dir Qwen2-0.5B-OnlineDPO-GRM-Gemma \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 2 \
--warmup_ratio 0.1 \
--missing_eos_penalty 1.0 \
--push_to_hub
```
https://wandb.ai/huggingface/huggingface/runs/520cnnjl
For ref, with Pair RM judge instead:
```
accelerate launch examples/scripts/dpo_online.py \
--model_name_or_path Qwen/Qwen2-0.5B-Instruct \
--judge pair_rm \
--dataset_name trl-lib/ultrafeedback-prompt \
--learning_rate 5.0e-7 \
--logging_steps 10 \
--output_dir Qwen2-0.5B-OnlineDPO-PairRM \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 2 \
--warmup_ratio 0.1 \
--push_to_hub
```
https://wandb.ai/huggingface/huggingface/runs/ffd4u5wa
<img width="1685" alt="Screenshot 2024-10-25 at 14 30 30" src="https://github.com/user-attachments/assets/433ba62a-8d76-48eb-9172-e0e61c3c9d3a">
| 2,276 | 690 |
qgallouedec
| 2024-10-28T15:00:07 |
> Have you done a test run of e.g. trying to optimise Qwen2.5-0.5B-Instruct with the 7B ArmoRM model?
ArmoRM is a custom classifier (the code for using it is not standard), so our `get_reward` function probably won't work for it. However, by modifying the code a little, I still managed to use it, and this is what I get:
https://wandb.ai/huggingface/huggingface/runs/merlfqgx (screenshot to come)
```
accelerate launch examples/scripts/dpo_online.py \
--model_name_or_path Qwen/Qwen2-0.5B-Instruct \
--reward_model_path RLHFlow/ArmoRM-Llama3-8B-v0.1 \
--dataset_name trl-lib/ultrafeedback-prompt \
--learning_rate 5.0e-7 \
--logging_steps 10 \
--output_dir Qwen2-0.5B-OnlineDPO-AutoRM \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 2 \
--warmup_ratio 0.1 \
--missing_eos_penalty 1.0 \
--push_to_hub
```
<img width="1189" alt="Screenshot 2024-10-28 at 16 50 30" src="https://github.com/user-attachments/assets/da2deffd-8c84-42e5-a996-18ba47629b95">
| 2,276 | 691 |
qgallouedec
| 2024-10-24T20:30:53 |
The issue has been solved with #2246
TRL 0.11.4 is not compatible with Transformers 4.46.
We will release TRL 0.12 very soon
| 2,275 | 692 |
swamymushini
| 2024-10-30T17:15:44 |
What is the working fix for this issue now? Which library versions can we use as a temporary solution? Should we downgrade transformers?
| 2,275 | 693 |
bibhudutta-p
| 2024-10-30T17:19:19 |
Yes, use the latest version of TRL and v4.45.2 of Transformers. This fixed it for me.
| 2,275 | 694 |
swamymushini
| 2024-10-30T17:21:53 |
> Yes, use the latest version of TRL and v4.45.2 of Transformers. This fixed it for me.
You mean TRL 0.11.4?
| 2,275 | 695 |
bibhudutta-p
| 2024-10-30T17:31:34 |
yes
| 2,275 | 696 |
swamymushini
| 2024-10-30T17:33:43 |
> yes
Really thanks.. it worked for me..
| 2,275 | 697 |
qgallouedec
| 2024-10-24T18:49:40 |
Nice! Thanks @zhanwenchen!
| 2,274 | 698 |
HuggingFaceDocBuilderDev
| 2024-10-24T18:54:10 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2274). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 2,274 | 699 |