Could you provide the Docker images?

#2
by lycfight - opened

Is nebius/SWE-rebench a newly extended SWE-bench-style dataset that is independent of nebius/SWE-bench-extra?
Could you provide the corresponding Docker images?

Nebius org

@lycfight This is a new, extended dataset built by enhancing the SWE-bench-extra data pipelines.
An official release with full details is coming soon — stay tuned!

I previously attempted to extend the dataset as well, but ran into some difficulties when configuring the environment in the constants file. The official SWE-bench documentation does not clearly explain the details.

In our previous discussion under nebius/SWE-bench-extra, we talked about the limitations of using a single default constants configuration: for many repositories the default setup fails, so only a small fraction of instances pass validation, and after filtering we ended up with only 10–20% usable data.

You managed to collect 21.3k instances, which is really impressive. Could you share how you collected them so efficiently, especially how you handled the environment configuration that the constants normally cover?

Nebius org

Hi!
Yes, that's true: the default installation configuration has its limitations. In the current version we use an automatic installation pipeline, so each task now has its own installation recipe (see the install_config column). These constants are no longer hardcoded; they are stored directly inside each task.
You can read more details about the collection process in our tech report.
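
For anyone who wants to inspect these per-task recipes, here is a minimal sketch using the datasets library; the split name and any field other than install_config are assumptions, so check the dataset card for the actual schema:

```python
from datasets import load_dataset

# Load SWE-rebench from the Hugging Face Hub. The split name is an
# assumption; check the dataset card for the actual splits.
ds = load_dataset("nebius/SWE-rebench", split="test")

task = ds[0]
print(task["instance_id"])     # SWE-bench-style identifier (assumed field name)
print(task["install_config"])  # per-task installation recipe
```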

Hi Nebius team, thanks for the awesome work! I want to follow up: are there any updates on releasing the Docker images for both SWE-rebench and SWE-bench-extra, so we can run evaluation and training on them?

Thanks so much!

Nebius org

Hi! Thank you for your kind words!
You can already use the SWE-bench fork to build the images yourself. We’re also planning to publish the Docker images to a public registry in the upcoming release — so stay tuned!
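
As a rough reference, building the images with the fork could look like the sketch below; it assumes the fork keeps upstream SWE-bench's prepare_images entry point, so the module path and flags are assumptions rather than confirmed fork behavior. Check the fork's README for the exact invocation.

```python
import subprocess

# Sketch: build the task images via the fork's evaluation harness.
# `swebench.harness.prepare_images` and its flags mirror upstream
# SWE-bench and are assumptions here; consult the fork's README.
subprocess.run(
    [
        "python", "-m", "swebench.harness.prepare_images",
        "--dataset_name", "nebius/SWE-rebench",
        "--max_workers", "8",  # parallel builds; tune to your machine
    ],
    check=True,  # raise if the build command fails
)
```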

Thanks again for your interest!

Nebius org

@tech1984 We have also started releasing our Docker images.
First, we released images and task instances for SWE-rebench-leaderboard.
You can find them here.
In our next release, we will upload all the images that we used in our RL training.

Nebius org

Hi! @lycfight @tech1984
We’ve published a cleaned subset of ≈7,500 Docker images – the set we actively use for RL training. You can pull them from Docker Hub:
https://hub.docker.com/repositories/swerebench
(same naming scheme as the original SWE-bench)

To run them, either use our SWE-bench fork (README has a one-line example) or copy the small test_cmd/log-parser changes into your own setup.
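
As a sketch of the second option, pulling a published image and running its tests with the Docker SDK could look like this; the image tag and test command below are placeholders, not real instance names:

```python
import docker

client = docker.from_env()

# Hypothetical tag: real tags follow SWE-bench's naming scheme
# (sweb.eval.<arch>.<instance_id>); pick a concrete one from the hub.
image = "swerebench/sweb.eval.x86_64.example_instance"
client.images.pull(image)

# Run the per-instance test command; `test_cmd` stands in for the
# command shipped with each task instance. Note: a nonzero exit
# (failing tests) raises docker.errors.ContainerError.
test_cmd = "/bin/bash -c 'cd /testbed && python -m pytest'"  # placeholder
logs = client.containers.run(image, command=test_cmd, remove=True)

# Feed the raw output to your log parser (this is where the fork's
# test_cmd/log-parser changes apply).
print(logs.decode())
```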

We don't plan to upload the full ~20K images right now, but you can build the remaining images yourself if you wish. We're preparing a larger, more refined release. If you train on this subset, please drop me a note at [email protected]; your feedback will help shape the next version.

Hi @ibragim-bad, thanks so much for following up on this! I just finished building all 20.1k images (though some of them failed and I will rebuild those) and will test them internally. Thanks so much for the awesome work!

BTW, just curious: do you plan to open-source the tooling you use to build those images, or the process for deriving the install_config, requirements, and environment so we can build the images ourselves? An easier way to craft this data would be REALLY helpful.

Hi @tech1984 ,

Thanks for the feedback on the SWE-rebench dataset and images!

Regarding your question about our tooling for building images: our pipeline primarily uses the install_config and is similar to the build process in the SWE-bench fork. You can find more details about our data collection process in our paper: https://arxiv.org/abs/2505.20411. The appendix also contains the prompts we use to extract the install_config.

By the way, do you use the dataset for RL training?
We recently released a paper showing how we boosted Qwen Instruct performance from 11% → 39% using RL and a subset from SWE-rebench: https://arxiv.org/abs/2508.03501

If you can share your use cases for the dataset, that would be extremely helpful for us as we prepare our next expanded release.
Feel free to reach out directly at [email protected] – I’d be glad to hear about how you’re using it and discuss ideas.
