# Wraith Coder 7B

Wraith Coder 7B is a signal-dense code generation model fine-tuned from Qwen2.5-Coder-7B-Instruct.

## Quick Start

### Installation

```bash
pip install transformers torch accelerate
```

### Basic Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "vanta-research/wraith-coder-7b",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("vanta-research/wraith-coder-7b")

messages = [
    {"role": "user", "content": "Implement binary search with complexity analysis."}
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### Ollama Deployment

```bash
# Create the Ollama model from a Modelfile that references a GGUF export (Q4_K_M quantization recommended)
ollama create wraith-coder:7b -f Modelfile

# Run inference
ollama run wraith-coder:7b "Implement an LRU cache with O(1) operations"
```
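
Once the model is created, it can also be queried programmatically through Ollama's local REST API. The snippet below is a minimal sketch, assuming the `wraith-coder:7b` model was created as above and the Ollama server is running on its default port (11434); the prompt is illustrative.

```python
# Minimal sketch: query a locally served Ollama model over its REST API.
# Assumes `ollama create wraith-coder:7b -f Modelfile` has been run and the
# Ollama server is listening on the default http://localhost:11434.
import json
import urllib.request

payload = {
    "model": "wraith-coder:7b",
    "prompt": "Implement an LRU cache with O(1) operations.",
    "stream": False,  # return the whole completion as a single JSON object
}

request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    result = json.load(response)

print(result["response"])  # the generated completion text
```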

## Key Features

- **62.6% more concise** than base Qwen2.5-Coder-7B while maintaining correctness
- **60% complexity analysis coverage** across diverse coding challenges
- **Multiple solution approaches** with trade-off discussions
- **Systems programming knowledge** integrated throughout
- **Production-ready** output suited to senior engineering workflows

## Performance Highlights

| Metric | Base Qwen | Wraith Coder | Improvement |
|--------|-----------|--------------|-------------|
| Avg Response Length | 2,900 chars | 1,084 chars | 62.6% shorter |
| Complexity Analysis | 40% | 60% | +50% coverage |
| Multiple Approaches | 35% | 65% | +86% frequency |
| Trade-off Discussion | 45% | 75% | +67% frequency |

## Documentation

Full documentation available in [README.md](./README.md)

## License

Apache 2.0

## Citation

```bibtex
@misc{wraith-coder-7b,
  author = {Vanta Research},
  title = {Wraith Coder 7B: Signal-Dense Code Generation through Iterative Fine-Tuning},
  year = {2025},
  publisher = {Hugging Face}
}
```