🤖 BlenderLLM: Training Large Language Models for Computer-Aided Design with Self-improvement

BlenderLLM is built on Qwen2.5-Coder-7B-Instruct as the base model. It was fine-tuned on the BlendNet training dataset and further optimized through Self-improvement to achieve its best performance.

For more details, please visit our GitHub repository or refer to our arXiv paper.
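As a minimal sketch, the model can be queried like other Qwen2.5-based chat models via Hugging Face `transformers`. The prompt wording, generation settings, and the `build_messages` helper below are assumptions for illustration, not taken from the official repository:

```python
# Hedged sketch: asking BlenderLLM to generate a Blender Python script.
# Assumes the standard transformers chat interface; exact prompt format
# and decoding parameters may differ from the authors' setup.

def build_messages(instruction: str) -> list:
    """Wrap a natural-language CAD request as a single-turn chat message list."""
    return [{"role": "user", "content": instruction}]

if __name__ == "__main__":
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "FreedomIntelligence/BlenderLLM"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )

    # Example instruction (hypothetical).
    messages = build_messages("Create a simple wooden chair.")
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=512)
    # Decode only the newly generated tokens (the Blender script).
    script = tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    print(script)
```

The generated text is intended to be a Blender Python (`bpy`) script that can be run inside Blender to produce the requested 3D model.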

📖 Citation

@misc{du2024blenderllmtraininglargelanguage,
      title={BlenderLLM: Training Large Language Models for Computer-Aided Design with Self-improvement}, 
      author={Yuhao Du and Shunian Chen and Wenbo Zan and Peizhao Li and Mingxuan Wang and Dingjie Song and Bo Li and Yan Hu and Benyou Wang},
      year={2024},
      eprint={2412.14203},
      archivePrefix={arXiv},
      primaryClass={cs.HC},
      url={https://arxiv.org/abs/2412.14203}, 
}

We are from the School of Data Science (SDS), the Chinese University of Hong Kong, Shenzhen (CUHKSZ).

Model size: 7.62B parameters (Safetensors, BF16)