---
pipeline_tag: image-to-3d
tags:
- code
extra_gated_prompt: >-
### MeshCoder COMMUNITY LICENSE AGREEMENT MeshCoder Release Date: November 3,
2025 All the data and code within this repo are under [CC-BY-NC-SA
4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
extra_gated_fields:
First Name: text
Last Name: text
Email: text
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
Research interest: text
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected, stored, processed, and shared in accordance with the MeshCoder Privacy Policy: checkbox
extra_gated_description: >-
The information you provide will be collected, stored, processed and shared in
accordance with the MeshCoder Privacy Policy.
extra_gated_button_content: Submit
license: cc-by-nc-sa-4.0
language:
- en
datasets:
- InternRobotics/MeshCoderDataset
---
# MeshCoder: LLM-Powered Structured Mesh Code Generation from Point Clouds
[Project Page](https://daibingquan.github.io/MeshCoder) | [Paper](https://huggingface.co/papers/2508.14879) | [Model](https://huggingface.co/InternRobotics/MeshCoder)
[Bingquan Dai*](https://openreview.net/profile?id=%7EBingQuan_Dai1), [Li Luo*](https://openreview.net/profile?id=%7ELuo_Li1), [Qihong Tang](https://openreview.net/profile?id=%7EQihong_Tang1),
[Jie Wang](https://roywangj.github.io/), [Xinyu Lian](https://openreview.net/profile?id=~Xinyu_Lian1), [Hao Xu](https://hoytxu.me/), [Minghan Qin](https://minghanqin.github.io/), [Xudong Xu](https://sheldontsui.github.io/), [Bo Dai](https://daibo.info/), [Haoqian Wang†](https://www.sigs.tsinghua.edu.cn/whq_en/main.htm), [Zhaoyang Lyu†](https://zhaoyanglyu.github.io/), [Jiangmiao Pang](https://oceanpang.github.io/)
\* Equal contribution
† Corresponding author
Project lead: Zhaoyang Lyu
## Overview
MeshCoder is a framework that converts 3D point clouds into editable Blender Python scripts, enabling programmatic reconstruction and editing of complex human-made objects. It addresses the limitations of prior code-based shape reconstruction with three ingredients: expressive APIs for modeling intricate geometries, a large-scale dataset of 1 million object-code pairs spanning 41 categories, and a multimodal LLM trained to generate accurate, part-segmented code from point clouds. MeshCoder outperforms existing methods in reconstruction quality, supports intuitive shape and topology editing through code modification, and strengthens the 3D reasoning capabilities of LLMs.
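For illustration only, the sketch below shows the flavor of a part-segmented Blender Python script for a simple table. MeshCoder generates code against its own expressive shape APIs (documented in the GitHub repository), so the standard ``bpy`` primitives, part names, and dimensions here are stand-ins, not the actual model output.
```python
# Illustrative sketch only: a part-segmented Blender script for a simple table.
# MeshCoder emits code against its own expressive shape APIs (see the GitHub
# repo); standard bpy primitives are used here purely as a stand-in.
# Run inside Blender's Python environment.
import bpy

def add_table_top(width=1.0, depth=0.6, thickness=0.05, height=0.75):
    bpy.ops.mesh.primitive_cube_add(size=1.0, location=(0, 0, height))
    top = bpy.context.active_object
    top.name = "table_top"
    top.scale = (width, depth, thickness)
    return top

def add_table_leg(x, y, radius=0.03, height=0.75):
    bpy.ops.mesh.primitive_cylinder_add(radius=radius, depth=height,
                                         location=(x, y, height / 2))
    leg = bpy.context.active_object
    leg.name = f"table_leg_{x:+.2f}_{y:+.2f}"
    return leg

# One named object per part, so shape and topology edits stay local to a part.
add_table_top()
for x in (-0.45, 0.45):
    for y in (-0.25, 0.25):
        add_table_leg(x, y)
```
Because each part is a separately named object created by its own call, editing the script (e.g. changing a leg radius or removing a leg) changes the reconstructed shape in a localized, predictable way.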
## Usage
See the GitHub repository https://github.com/InternRobotics/MeshCoder for installation, training, and inference instructions.
This repository contains the following artifacts:
- ``config.yaml`` and ``shape_tokenizer.pt``: configuration file and pretrained weights of the shape tokenizer.
- ``adapter_config.json`` and ``adapter_model.safetensors``: configuration file and pretrained weights of the LoRA adapter.
- ``Llama3.2-1B``: original weights of the Llama-3.2-1B base model.
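The snippet below is a minimal sketch of how these pieces might be assembled with Hugging Face ``transformers`` and ``peft``. The shape tokenizer's model class and the way point-cloud tokens are fed to the LLM are defined in the GitHub repository, so the ``ShapeTokenizer`` construction is left as a commented placeholder and the local paths are assumptions; follow the repository's inference instructions for the actual entry point.
```python
# Sketch only: assumes a local snapshot of this model repo and that the GitHub
# checkout provides a ShapeTokenizer class matching config.yaml / shape_tokenizer.pt.
import torch
import yaml
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

repo_dir = "MeshCoder"  # hypothetical local snapshot of this model repo

# 1) Base LLM and its text tokenizer (Llama3.2-1B weights shipped in the repo).
base_model = AutoModelForCausalLM.from_pretrained(f"{repo_dir}/Llama3.2-1B")
text_tokenizer = AutoTokenizer.from_pretrained(f"{repo_dir}/Llama3.2-1B")

# 2) LoRA adapter trained for point-cloud-to-code generation
#    (reads adapter_config.json / adapter_model.safetensors from repo_dir).
model = PeftModel.from_pretrained(base_model, repo_dir)

# 3) Shape tokenizer: configuration plus pretrained weights.
with open(f"{repo_dir}/config.yaml") as f:
    shape_cfg = yaml.safe_load(f)
shape_state = torch.load(f"{repo_dir}/shape_tokenizer.pt", map_location="cpu")
# shape_tokenizer = ShapeTokenizer(shape_cfg)   # class defined in the GitHub repo
# shape_tokenizer.load_state_dict(shape_state)
```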
## Join Us
We are seeking engineers, interns, researchers, and PhD candidates. If you are interested in 3D content generation, please send your resume to lvzhaoyang@pjlab.org.cn.
## Citation
```bibtex
@article{dai2025meshcoder,
  title={MeshCoder: LLM-Powered Structured Mesh Code Generation from Point Clouds},
  author={Dai, Bingquan and Luo, Li Ray and Tang, Qihong and Wang, Jie and Lian, Xinyu and Xu, Hao and Qin, Minghan and Xu, Xudong and Dai, Bo and Wang, Haoqian and others},
  journal={arXiv preprint arXiv:2508.14879},
  year={2025}
}
```