Improve model card for Re:Form: add metadata, links, and usage example
This PR significantly enhances the model card for the `Re:Form` model by:
* Adding `pipeline_tag: text-generation` and `library_name: transformers` to the metadata, which improves discoverability and integration with Hugging Face Hub features (see the snippet after this list).
* Including relevant tags such as `code-generation`, `formal-verification`, `reinforcement-learning`, and `dafny` for better categorization.
* Providing a concise model description summarizing the paper's abstract.
* Including the project page link ([https://veri-code.github.io/ReForm-page](https://veri-code.github.io/ReForm-page)) and updating the GitHub repository link to point to the main project repository ([https://github.com/Veri-Code/Veri-Code](https://github.com/Veri-Code/Veri-Code)).
* Adding the overall pipeline image from the project's GitHub repository for visual context.
* Providing a runnable Python code snippet to demonstrate basic usage with the `transformers` library for Dafny code generation.
* Including the full BibTeX citation for proper academic attribution.
* Removing the unnecessary "File information" section to keep the model card clean and focused.
These updates greatly improve the model's documentation and user experience on the Hugging Face Hub.
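
For instance, with `pipeline_tag` and `library_name` set, Hub tooling can route the checkpoint straight into the high-level `transformers` pipeline API. A minimal sketch (the checkpoint name mirrors the usage example in the card below):

```python
from transformers import pipeline

# The pipeline task matches the new `pipeline_tag: text-generation` metadata.
generator = pipeline(
    "text-generation",
    model="Veri-Code/sft_0.5B",  # example checkpoint from the Re:Form family
    trust_remote_code=True,
)

# Ask for a short Dafny completion.
out = generator("method Max(a: int, b: int) returns (m: int)\n", max_new_tokens=64)
print(out[0]["generated_text"])
```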
The diff (`@@ -1,3 +1,83 @@`) replaces the previous three-line card, which contained nothing but the `license: mit` frontmatter, with the following content:

---
license: mit
pipeline_tag: text-generation
library_name: transformers
tags:
- code-generation
- formal-verification
- reinforcement-learning
- dafny
---

# Re:Form -- Reducing Human Priors in Scalable Formal Software Verification with RL in LLMs: A Preliminary Study on Dafny

This repository hosts the `sft_0.5B` model, part of the **Re:Form** framework, which focuses on enhancing formal software verification with Large Language Models (LLMs) and Reinforcement Learning (RL).

**Re:Form** addresses the limitations of informal-language-based LLMs by grounding them in rigorous formal systems such as Dafny. This enables automatic, mathematically provable verification of the LLMs' reasoning processes and outcomes, which is crucial for large-scale, reliable formal software verification. The framework introduces an automatic and scalable data curation pipeline, along with RL designs that integrate feedback from a formal language verifier to reduce reliance on human priors.

- **Paper**: [Re:Form -- Reducing Human Priors in Scalable Formal Software Verification with RL in LLMs: A Preliminary Study on Dafny](https://huggingface.co/papers/2507.16331)
- **Project Page**: [https://veri-code.github.io/ReForm-page](https://veri-code.github.io/ReForm-page)
- **Code**: [https://github.com/Veri-Code/Veri-Code](https://github.com/Veri-Code/Veri-Code)

<p align="center">
  <img src="https://github.com/Veri-Code/Veri-Code/raw/main/assets/pipeline.png" alt="Overall Pipeline" width="700"/>
  <br>
  <em>Overall Pipeline of Re:Form</em>
</p>

## Usage

You can load the `Re:Form` model with the `transformers` library and use it for Dafny code generation:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Veri-Code/sft_0.5B"  # Example checkpoint; others available include sft_1.5B, sft_3B, sft_7B, sft_14B, and 14B-RL-entropy

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Use bfloat16 for optimal performance if supported
    device_map="auto",
    trust_remote_code=True,
)

# Example prompt for Dafny code generation: ask the model to complete
# a simple Max method whose postconditions pin down the result.
prompt = (
    "method Max(a: int, b: int) returns (m: int)\n"
    "  ensures m == a || m == b\n"
    "  ensures m >= a && m >= b\n"
    "{\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate the method body
generated_ids = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
    eos_token_id=tokenizer.eos_token_id,
)

generated_text = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
print(generated_text)
```
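
For reference, a completion that the Dafny verifier would accept looks like the sketch below. This is illustrative only (the `Max` prompt comes from the snippet above); sampled outputs vary and should always be re-checked by the verifier:

```dafny
method Max(a: int, b: int) returns (m: int)
  ensures m == a || m == b
  ensures m >= a && m >= b
{
  // Return the larger argument; both postconditions follow directly.
  if a >= b {
    m := a;
  } else {
    m := b;
  }
}
```

Saving a candidate to a file such as `Max.dfy` and running `dafny verify Max.dfy` (Dafny 4.x CLI) confirms that the generated body actually satisfies the `ensures` clauses.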

## Citation

If you use this work in your research, please cite:

```bibtex
@misc{yan2025reformreducinghuman,
  title={Re:Form -- Reducing Human Priors in Scalable Formal Software Verification with RL in LLMs: A Preliminary Study on Dafny},
  author={Chuanhao Yan and Fengdi Che and Xuhan Huang and Xu Xu and Xin Li and Yizhi Li and Xingwei Qu and Jingzhe Shi and Zhuangzhuang He and Chenghua Lin and Yaodong Yang and Binhang Yuan and Hang Zhao and Yu Qiao and Bowen Zhou and Jie Fu},
  year={2025},
  eprint={2507.16331},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2507.16331},
}
```