nielsr (HF Staff) committed
Commit 0e9f3ac · verified · 1 Parent(s): 23d9d5e

Improve model card: add pipeline tag, update license, links, and usage


This PR significantly improves the model card for the Camera Depth Model (CDM) by:

* Updating the `license` to `apache-2.0` in the metadata, correcting it to match the license stated in the GitHub repository.
* Adding `pipeline_tag: depth-estimation` to the metadata, which improves discoverability on the Hugging Face Hub at https://huggingface.co/models?pipeline_tag=depth-estimation (the resulting front matter is sketched after this description).
* Linking directly to the official Hugging Face paper page: [Manipulation as in Simulation: Enabling Accurate Geometry Perception in Robots](https://huggingface.co/papers/2509.02530).
* Adding a prominent link to the main GitHub repository: https://github.com/ByteDance-Seed/manip-as-in-sim-suite.
* Including a "Sample Usage" section with a code snippet extracted from the project's GitHub README to demonstrate how to perform depth inference.

These additions provide more comprehensive information and improve the model's visibility and usability for researchers and practitioners.
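
For reference, here is a minimal sketch of the YAML front matter that results from the two metadata changes above; the field names and values are taken directly from the diff below, and any other fields already present in the card are omitted.

```yaml
---
license: apache-2.0             # updated from cc-by-nc-4.0 to match the GitHub repository
pipeline_tag: depth-estimation  # lists the model under the depth-estimation pipeline on the Hub
---
```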

Files changed (1): README.md (+20 -4)
@@ -1,9 +1,25 @@
  ---
- license: cc-by-nc-4.0
+ license: apache-2.0
+ pipeline_tag: depth-estimation
  ---

- This repository contains the camera depth model of the paper Manipulation as in Simulation: Enabling Accurate Geometry Perception in Robots.
+ This repository contains the Camera Depth Model (CDM) of the paper [Manipulation as in Simulation: Enabling Accurate Geometry Perception in Robots](https://huggingface.co/papers/2509.02530).

- Model inference guide: https://github.com/ByteDance-Seed/manip-as-in-sim-suite/tree/main/cdm
+ Camera Depth Models (CDMs) are proposed as simple plugins for daily-use depth cameras: they take RGB images and raw depth signals as input and output denoised, accurate metric depth. This enables accurate geometry perception in robots by effectively bridging the sim-to-real gap for manipulation tasks.

- Project page: https://manipulation-as-in-simulation.github.io
+ Project page: https://manipulation-as-in-simulation.github.io/
+ Code: https://github.com/ByteDance-Seed/manip-as-in-sim-suite
+
+ ## Sample Usage
+
+ To run depth inference on RGB-D camera data, use the `infer.py` script provided in the `cdm` directory of the main repository.
+
+ ```bash
+ cd cdm
+ python infer.py \
+     --encoder vitl \
+     --model-path /path/to/model.pth \
+     --rgb-image /path/to/rgb.jpg \
+     --depth-image /path/to/depth.png \
+     --output result.png
+ ```
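
As a convenience, the command added in the Sample Usage section can be preceded by cloning the code repository and fetching the CDM checkpoint from the Hub. The sketch below is illustrative only: the Hub repository id and checkpoint filename are placeholders (they are not stated in this card), and `huggingface-cli download` assumes a reasonably recent `huggingface_hub` installation.

```bash
# Clone the code repository that provides infer.py (URL taken from the card).
git clone https://github.com/ByteDance-Seed/manip-as-in-sim-suite
cd manip-as-in-sim-suite/cdm

# Placeholder repo id / filename: substitute the actual CDM checkpoint location.
huggingface-cli download <repo-id> <checkpoint>.pth --local-dir ./checkpoints

# Run inference as in the Sample Usage section, pointing --model-path at the
# downloaded weights.
python infer.py \
    --encoder vitl \
    --model-path ./checkpoints/<checkpoint>.pth \
    --rgb-image /path/to/rgb.jpg \
    --depth-image /path/to/depth.png \
    --output result.png
```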