Marigold Depth v1-0 Model Card


NEW: Marigold Depth v1-1 Model

This is a model card for the marigold-depth-v1-0 model for monocular depth estimation from a single image. The model is fine-tuned from the stable-diffusion-2 model as described in our CVPR'2024 paper titled "Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation".

  • Play with the interactive Hugging Face Spaces demo: check out how the model works with example images or upload your own.
  • Use it with diffusers to compute the results with a few lines of code (a minimal example follows this list).
  • Get to the bottom of things with our official codebase.
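A minimal usage sketch with diffusers is shown below. It assumes a recent diffusers release that ships the Marigold pipelines, a CUDA device, and a placeholder input path (`input.jpg`); the fp16 variant flag mirrors the common Marigold examples.

```python
import diffusers
import torch

# Load the Marigold depth pipeline from the Hub (fp16 weights on a CUDA device).
pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
    "prs-eth/marigold-depth-v1-0", variant="fp16", torch_dtype=torch.float16
).to("cuda")

# Any RGB image; "input.jpg" is a placeholder path (a URL also works).
image = diffusers.utils.load_image("input.jpg")

# The pipeline resizes the input to the recommended processing resolution internally.
depth = pipe(image)

# Save a colorized visualization and a 16-bit PNG of the affine-invariant depth map.
vis = pipe.image_processor.visualize_depth(depth.prediction)
vis[0].save("depth_colored.png")
depth_16bit = pipe.image_processor.export_depth_to_16bit_png(depth.prediction)
depth_16bit[0].save("depth_16bit.png")
```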

Model Details

  • Developed by: Bingxin Ke, Anton Obukhov, Shengyu Huang, Nando Metzger, Rodrigo Caye Daudt, Konrad Schindler.
  • Model type: Generative latent diffusion-based affine-invariant monocular depth estimation from a single image.
  • Language: English.
  • License: Apache License, Version 2.0.
  • Model Description: This model can be used to generate an estimated depth map of an input image.
    • Resolution: Even though any resolution can be processed, the model inherits the base diffusion model's effective resolution of roughly 768 pixels. This means that for optimal predictions, any larger input image should be resized to make the longer side 768 pixels before feeding it into the model.
    • Steps and scheduler: This model was designed for use with the DDIM scheduler and between 10 and 50 denoising steps. Good predictions can also be obtained with just one step, either by overriding the "timestep_spacing": "trailing" setting in the scheduler configuration file or by adding pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing") after the pipeline is loaded and before its first use (a sketch combining this override with ensembling follows the citation block below). For compatibility reasons, this v1-0 model is kept identical to the paper setting; the newer v1-1 model provides optimal settings for all possible step configurations.
    • Outputs:
      • Affine-invariant depth map: The predicted values are between 0 and 1, interpolating between the near and far planes of the model's choice.
      • Uncertainty map: Produced only when multiple predictions are ensembled with ensemble size larger than 2.
  • Resources for more information: Project Website, Paper, Code.
  • Cite as:
@InProceedings{ke2023repurposing,
    title={Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation},
    author={Bingxin Ke and Anton Obukhov and Shengyu Huang and Nando Metzger and Rodrigo Caye Daudt and Konrad Schindler},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2024}
}
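The single-step scheduler override and the ensembled uncertainty output mentioned above can be combined as in the following sketch. It assumes a recent diffusers release with the Marigold pipelines and their image-processor visualization helpers; the input path is a placeholder.

```python
import diffusers
import torch

pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
    "prs-eth/marigold-depth-v1-0", variant="fp16", torch_dtype=torch.float16
).to("cuda")

# Switch the DDIM scheduler to "trailing" timestep spacing so that this v1-0
# checkpoint produces good predictions with a single denoising step.
pipe.scheduler = diffusers.DDIMScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing"
)

image = diffusers.utils.load_image("input.jpg")  # placeholder path

# ensemble_size > 2 additionally yields the uncertainty map described above.
depth = pipe(
    image,
    num_inference_steps=1,
    ensemble_size=5,
    output_uncertainty=True,
)

vis = pipe.image_processor.visualize_depth(depth.prediction)
vis[0].save("depth_colored.png")
unc = pipe.image_processor.visualize_uncertainty(depth.uncertainty)
unc[0].save("uncertainty.png")
```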