# STP-Lean-7B LoRA (v1)

A rank-16 LoRA adapter fine-tuned from kfdong/STP_model_Lean_0320.


## Model Details

| Field | Value |
|---|---|
| Developed by | HAIE Lab |
| Model type | 7B-parameter causal LM + LoRA (r = 16, α = 32) |
| Languages | Lean syntax & English commentary |
| Finetuned from | kfdong/STP_model_Lean_0320 |
| Precision | BF16 · Flash-Attention v2 |
| Context length | 1792 tokens |
| Hardware | 1 × H100 80 GB |
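
For readers who want to reproduce the adapter setup, the configuration above corresponds to a `peft` `LoraConfig` along the following lines. This is a sketch: `target_modules` is an assumption (the usual attention projections for a 7B decoder), as the card does not state which layers were adapted.

```python
from peft import LoraConfig

# Sketch of the adapter config implied by the table above.
# target_modules is assumed (a common choice for 7B decoder models);
# the card does not say which projections were adapted.
lora_config = LoraConfig(
    r=16,                  # LoRA rank
    lora_alpha=32,         # α = 2 × r
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
)
```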

## Training Hyperparameters

| Setting | Value |
|---|---|
| Precision / regime | bf16 mixed precision |
| Epochs | 1 |
| Max sequence length | 1792 tokens (right padding) |
| Per-device train batch size | 6 |
| Per-device eval batch size | 2 |
| Gradient accumulation steps | 1 (effective batch = 6) |
| Optimizer | AdamW |
| Learning-rate schedule | 2 × 10⁻⁴ peak, cosine decay, 3 % warm-up |
| Weight decay | 0.01 |
| LoRA rank / α / dropout | r = 16, α = 32 (2 × r), dropout = 0.05 |
| Gradient checkpointing | Enabled (memory-efficient) |
| Flash-Attention v2 | Enabled |
| Logging | Every 50 steps |
| Evaluation strategy | Once per epoch |
| Save strategy | Once per epoch |
| Seed | 42 |
| Hardware | 1 × H100 80 GB |
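
These settings translate roughly into the following `transformers` `TrainingArguments`. This is a reconstruction from the table, not the original training script; `output_dir` is a placeholder, and the 1792-token max length is enforced by the tokenizer/collator rather than by `TrainingArguments`.

```python
from transformers import TrainingArguments

# Reconstructed from the table above; not the original training script.
args = TrainingArguments(
    output_dir="stp-lean-7b-lora",      # placeholder
    bf16=True,
    num_train_epochs=1,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=1,      # effective batch = 6
    optim="adamw_torch",
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    weight_decay=0.01,
    gradient_checkpointing=True,
    logging_steps=50,
    eval_strategy="epoch",              # `evaluation_strategy` on older transformers
    save_strategy="epoch",
    seed=42,
)
```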

## Results (1 epoch)

| Metric | Value |
|---|---|
| Final train loss | 1.1432 |
| Final train accuracy (token-level) | 0.7157 |
| First-step loss (warm-up) | 1.6098 |
| Tokens processed | 168,856,138 |
| Grad norm (final) | 0.3202 |
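
Here "token-level accuracy" is presumably the standard teacher-forced metric: the fraction of positions where the model's argmax prediction matches the next token, ignoring padded positions. A minimal sketch of that computation (names are illustrative, not from the training code):

```python
import torch

def token_accuracy(logits: torch.Tensor, labels: torch.Tensor,
                   ignore_index: int = -100) -> float:
    """Fraction of positions where argmax(logits) == label, skipping masked tokens.

    logits: (batch, seq_len, vocab); labels: (batch, seq_len).
    Shift by one so position t predicts token t + 1, as in causal-LM training.
    """
    preds = logits[:, :-1, :].argmax(dim=-1)   # prediction at t targets token t + 1
    target = labels[:, 1:]
    mask = target != ignore_index              # drop padding / prompt-masked positions
    return (preds[mask] == target[mask]).float().mean().item()
```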


## How to Get Started

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "haielab/STP_model_Lean_0320-conjecture-base-FineTune-new-config"

# 1️⃣ Tokenizer ─ leave default right padding for STP
tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
tok.pad_token = tok.eos_token      # STP uses </s> as PAD

# 2️⃣ Load base-plus-LoRA adapter on GPU (BF16)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",             # auto-dispatch to available GPU(s)
)

# 3️⃣ Build a Lean-style prompt
prompt = "<user>Theorem foo …</user><assistant>"
inputs = tok(prompt, return_tensors="pt").to(model.device)

# 4️⃣ Sample the next proof steps
out = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,                # required for temperature / top_p to take effect
    temperature=0.7,
    top_p=0.9,
    pad_token_id=tok.eos_token_id, # silences the missing-pad-token warning
)
print(tok.decode(out[0], skip_special_tokens=True))
```
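
If you would rather attach the adapter to the base checkpoint explicitly (or the repo ships only the LoRA weights), `peft` can load it on top of kfdong/STP_model_Lean_0320. A sketch, assuming a standard PEFT adapter layout in the repo:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_id = "kfdong/STP_model_Lean_0320"
adapter_id = "haielab/STP_model_Lean_0320-conjecture-base-FineTune-new-config"

base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach LoRA weights
model = model.merge_and_unload()   # optional: fold LoRA into the base for faster inference
```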