Update README.md
---
license: apache-2.0
datasets:
- smirki/UI_REASONING_v1.01
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-32B-Instruct
tags:
- code
- ui
- generation
- uigen
library_name: transformers
---

# **Model Card for UIGEN-T1.1**

New and improved reasoning traces, better UI generation, smarter decisions, and better code generation. Trained on a dataset of 700+ examples.

Use budget forcing: append the word "think" to the end of the assistant generation to keep the model producing more reasoning, and append "answer" to make it write the code.

SFT was run on 1× H100 for 1 hour.
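The budget-forcing pattern above amounts to plain prompt construction. A minimal sketch, assuming the chat-tag template given later in this card; `build_prompt` is a hypothetical helper name, not part of any library:

```python
# Hypothetical helper sketching the budget-forcing pattern: ending the assistant
# turn with "think" keeps the model reasoning, while "answer" pushes it to start
# writing the code. Tag names are taken from the template in this card.

def build_prompt(question: str, force: str = "think") -> str:
    """Build a UIGEN-T1.1-style prompt; force is 'think' or 'answer'."""
    return (
        "<|im_start|>user\n"
        f"{question}<|im_end|>\n"
        "<|im_start|>assistant\n"
        f"<|im_start|>{force}\n"
    )

print(build_prompt("Make a dark-themed dashboard for an oil rig.", force="answer"))
```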

## **Model Summary**

UIGEN-T1.1-Qwen-32B is a **32-billion-parameter transformer model** fine-tuned from **Qwen2.5-Coder-32B-Instruct**. It is designed for **reasoning-based UI generation**, leveraging a complex chain-of-thought approach to produce **robust HTML- and CSS-based UI components**. Currently, it is limited to **basic applications such as dashboards, landing pages, and sign-up forms**.

## **Model Details**

### **Model Description**

UIGEN-T1.1-Qwen-32B generates **HTML- and CSS-based UI layouts** by reasoning through design principles. While it has a strong **chain-of-thought reasoning process**, it is currently **limited to text-based UI elements and simpler frontend applications**. The model **excels at dashboards, landing pages, and sign-up forms**, but **lacks advanced interactivity** (e.g., JavaScript-heavy functionality).
- **Dataset by:** [smirki](https://huggingface.co/smirki)
- **Developed by:** [smirki](https://huggingface.co/smirki)
- **Training procedures and scripts by:** [smirki](https://huggingface.co/smirki)
- **Trained by:** [qingy2024](https://huggingface.co/qingy2024)
- **Model by:** [qingy2024](https://huggingface.co/qingy2024)
- **Shared by:** [qingy2024](https://huggingface.co/qingy2024)
- **Model type:** Transformer-based
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Qwen/Qwen2.5-Coder-32B-Instruct

### **Model Sources**

- **Repository:** (Will be uploaded to GitHub soon)
- **Hosted on:** [Hugging Face](https://huggingface.co/qingy2024)
- **Demo:** Coming soon

## **Uses**

### **Direct Use**

- Generates HTML and CSS code for **basic UI elements**
- Best suited for **dashboards, landing pages, and sign-up forms**
- Requires **manual post-processing** to refine UI outputs
- **May require appending the word "answer" to the input prompt** for better inference
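One way to do that manual post-processing is to split the raw generation at the answer tag and keep only the code. A small sketch; `extract_answer` is a hypothetical helper, with the tag names taken from the template in this card:

```python
# Hypothetical post-processing helper: drop the reasoning trace and return only
# the code that follows the answer tag in a raw UIGEN-T1.1 generation.

def extract_answer(generation: str) -> str:
    """Return the text after the answer tag, or the whole string if absent."""
    marker = "<|im_start|>answer"
    _, sep, answer = generation.partition(marker)
    return answer.strip() if sep else generation.strip()

raw = (
    "<|im_start|>think\nI will use a dark palette...<|im_end|>\n"
    "<|im_start|>answer\n<html><body>...</body></html>"
)
print(extract_answer(raw))  # → <html><body>...</body></html>
```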

### **Downstream Use (optional)**

- Can be fine-tuned further for **specific frontend frameworks (React, Vue, etc.)**
- May be integrated into **no-code/low-code UI generation tools**

### **Out-of-Scope Use**

- Not suitable for **complex frontend applications** involving JavaScript-heavy interactions
- May not generate **fully production-ready** UI code
- **Limited design variety**: biased towards **basic frontend layouts**

## **Bias, Risks, and Limitations**

### **Biases**

- **Strong bias towards basic frontend design patterns** (may not generate creative or advanced UI layouts)
- **May produce repetitive designs** due to limited training scope

### **Limitations**

- **Artifacting issues:** some outputs may contain formatting artifacts
- **Limited generalization:** performs best at **HTML + CSS UI generation**, but **not robust for complex app logic**
- **May require prompt engineering** (e.g., appending "answer" to the input for better results)

## **How to Get Started with the Model**

### **Example Model Template**

```plaintext
<|im_start|>user
{question}<|im_end|>
<|im_start|>assistant
<|im_start|>think
{reasoning}<|im_end|>
<|im_start|>answer
```

### **Basic Inference Code**

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "qingy2024/UIGEN-T1.1-Qwen-32B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = """<|im_start|>user
Make a dark-themed dashboard for an oil rig.<|im_end|>
<|im_start|>assistant
<|im_start|>think
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# max_new_tokens must be greater than ~12k to fit the reasoning trace plus the code
outputs = model.generate(**inputs, max_new_tokens=12012, do_sample=True, temperature=0.7)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## **Training Details**

### **Training Data**

- **Based on:** Qwen2.5-Coder-32B-Instruct
- **Fine-tuned on:** UI-related datasets with reasoning-based HTML/CSS examples

### **Training Procedure**

- **Preprocessing:** standard text tokenization using Hugging Face Transformers
- **Training precision:** **bf16 mixed precision**, quantized to q8
- **Training method:** full-precision LoRA for 1 epoch, then merged to 16-bit (this model)

## **Evaluation**

### **Testing Data, Factors & Metrics**

- **Testing data:** internal UI design-related datasets
- **Evaluation factors:** bias towards basic UI components, robustness in reasoning, output quality
- **Metrics:** subjective evaluation based on UI structure, correctness, and usability

### **Results**

- **Strengths:**
  - Good at reasoning-based UI layouts
  - Generates structured and valid HTML/CSS
- **Weaknesses:**
  - Limited design diversity
  - Artifacting in outputs

## **Technical Specifications**

### **Model Architecture and Objective**

- **Architecture:** transformer-based LLM fine-tuned for UI reasoning
- **Objective:** generate **robust frontend UI layouts with chain-of-thought reasoning**

### **Compute Infrastructure**

- **Hardware requirements:** > 24 GB VRAM recommended
- **Software requirements:**
  - Transformers library (Hugging Face)
  - PyTorch
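A back-of-the-envelope, weights-only estimate (ignoring KV cache and activations) shows why quantization matters for fitting a 32B model into a small VRAM budget; the bytes-per-parameter figures are the standard bf16/q8/q4 sizes, not numbers from this card:

```python
# Rough VRAM needed just to hold 32B parameters at common load precisions.
params = 32e9  # 32 billion parameters

for name, bytes_per_param in [("bf16", 2), ("q8", 1), ("q4", 0.5)]:
    gb = params * bytes_per_param / 1e9
    print(f"{name}: ~{gb:.0f} GB")
# → bf16: ~64 GB, q8: ~32 GB, q4: ~16 GB
```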

## **Citation**

If you use this model, please cite:

**BibTeX:**

```bibtex
@misc{smirki_UIGEN-T1.1,
  title={UIGEN-T1.1: Chain-of-Thought UI Generation Model},
  author={smirki},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/smirki/UIGEN-T1.11}
}
```

## **More Information**

- **GitHub repository:** (Coming soon)
- **Web demo:** (Coming soon)

## **Model Card Authors**

- **Author:** smirki

## **Model Card Contact**

- **Contact:** [smirki](https://huggingface.co/smirki), [qingy2024](https://huggingface.co/qingy2024)

---