Edit3D-Bench
Paper | Project Page | Code
Edit3D-Bench is a benchmark for 3D editing evaluation, introduced in the paper VoxHammer: Training-Free Precise and Coherent 3D Editing in Native 3D Space. The dataset comprises 100 high-quality 3D models: 50 selected from Google Scanned Objects (GSO) and 50 from PartObjaverse-Tiny. For each model, we provide three distinct editing prompts. Each prompt is accompanied by a complete set of annotated 3D assets, including:
- the original 3D asset, with rendered images
- a 3D mask specifying the editing region, with rendered images
- a 2D mask of the edit region
- a 2D edited image generated by FLUX.1 Fill
Preview
Explore our dataset on the Project Page.
Which tasks will benefit from our dataset?
- 3D Editing
⚙️ Getting Started
Download the Dataset
To download the full dataset, use the commands below. If you encounter any issues, please refer to the official Hugging Face documentation.
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
# If prompted for a password, use a Hugging Face access token (read access is sufficient for cloning).
# Generate one from your settings: https://huggingface.co/settings/tokens
git clone https://huggingface.co/datasets/huanngzh/Edit3D-Bench
# To clone without downloading large files (fetching only their pointers):
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/huanngzh/Edit3D-Bench
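Alternatively, you can fetch the dataset with the huggingface_hub Python library. The sketch below is a minimal example; the local_dir destination is just an illustrative path.

# Alternative to git: download a snapshot with huggingface_hub
# (pip install huggingface_hub).
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="huanngzh/Edit3D-Bench",
    repo_type="dataset",
    local_dir="./Edit3D-Bench",  # example destination; adjust as needed
)
print(f"Dataset downloaded to {local_path}")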
Dataset Structure
data/
├── metadata.json # Dataset metadata file (dataset, object_name, prompt)
├── GSO/ # Google Scanned Objects dataset
│ ├── [object_name]/
│ │ ├── source_model/ # Original 3D model
│ │ │ ├── model.glb # Original 3D model file (GLB format)
│ │ │ ├── render/ # Original model rendered images
│ │ │ ├── video_rgb.mp4 # Original model RGB video
│ │ │ ├── video_normal.mp4 # Original model normal video
│ │ │ └── video_mask.mp4 # Original model mask video
│ │ ├── prompt_1/ # Annotation for prompt 1
│ │ │ ├── 2d_edit.png # 2D edited image
│ │ │ ├── 2d_mask.png # 2D mask image for editing
│ │ │ ├── 2d_render.png # 2D render image of original model
│ │ │ ├── 2d_visual.png # 2D visualization image
│ │ │ ├── 3d_edit_region.glb # 3D edit region model
│ │ │ └── render/ # Rendered images of 3D mask
│ │ ├── prompt_2/ # Annotation for prompt 2
│ │ └── prompt_3/ # Annotation for prompt 3
│ └── ...
└── PartObjaverse-Tiny/ # PartObjaverse-Tiny dataset
├── [object_id]/
│ ├── source_model/ # Original 3D model
│ ├── prompt_1/ # Annotation for prompt 1
│ ├── prompt_2/ # Annotation for prompt 2
│ └── prompt_3/ # Annotation for prompt 3
└── ...
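After downloading, the tree above can be traversed with the Python standard library. The sketch below is a minimal example, assuming the repo was cloned to ./Edit3D-Bench; metadata.json is documented to carry (dataset, object_name, prompt), but its exact per-entry schema is an assumption, so the loop relies only on the directory layout.

import json
from pathlib import Path

root = Path("./Edit3D-Bench/data")  # example path to the cloned dataset

# metadata.json holds dataset, object_name, and prompt per entry;
# treating it as a single JSON document is an assumption.
metadata = json.loads((root / "metadata.json").read_text())

# Walk every object and its three prompt annotations using the layout above.
for dataset_name in ["GSO", "PartObjaverse-Tiny"]:
    for object_dir in sorted((root / dataset_name).iterdir()):
        if not object_dir.is_dir():
            continue
        source_glb = object_dir / "source_model" / "model.glb"
        for prompt_dir in sorted(object_dir.glob("prompt_*")):
            edit_image = prompt_dir / "2d_edit.png"          # 2D edited image
            edit_region = prompt_dir / "3d_edit_region.glb"  # 3D edit region
            print(dataset_name, object_dir.name, prompt_dir.name,
                  source_glb.exists(), edit_image.exists(), edit_region.exists())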
Evaluation
See our GitHub repo for details.
🧷 Citation
@article{li2025voxhammer,
title = {VoxHammer: Training-Free Precise and Coherent 3D Editing in Native 3D Space},
author = {Li, Lin and Huang, Zehuan and Feng, Haoran and Zhuang, Gengxiong and Chen, Rui and Guo, Chunchao and Sheng, Lu},
journal = {arXiv preprint arXiv:2508.19247},
year = {2025},
url = {https://huggingface.co/papers/2508.19247}
}