---
tags:
  - computer-engineering
  - llama-3
  - 3b
  - lora
  - 4bit
license: llama3.2
license_link: https://llama.meta.com/llama3/license
base_model:
  - meta-llama/Llama-3.2-3B-Instruct
datasets:
  - Wikitext-2-raw-v1
  - STEM-AI-mtl
  - custom-computer-engineering-corpus
  - technical-documentation
  - hardware-specs
---

# 🖥️ Llama-3.2-3B-Computer-Engineering-LLM

A specialized AI assistant for computer engineering: Meta-Llama-3.2-3B-Instruct fine-tuned with 4-bit quantization and LoRA adapters.

## 📜 License Compliance Notice

This model is derived from Meta's Llama 3.2 and is governed by the Llama 3.2 Community License. By using this model, you agree to:

- Not use the model or its outputs to improve other LLMs
- Not use the model for commercial purposes without a separate agreement
- Include attribution to Meta and this project
- Accept the license's acceptable use policy

## 🛠️ Technical Specifications

### Architecture

| Component | Implementation Details |
| --- | --- |
| Base Model | Meta-Llama-3.2-3B-Instruct |
| Quantization | 4-bit via BitsAndBytes |
| Adapter | LoRA (r=16, alpha=32) |
| Training Framework | PyTorch + Hugging Face ecosystem |
| Context Window | 8,192 tokens |

### Training Data

- Curated computer engineering corpus
- Key domains covered:
  - Computer architecture
  - Embedded systems
  - VLSI design
  - Hardware description languages
  - Low-level programming


## 🚀 Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Irfanuruchi/Llama-3.2-3B-Computer-Engineering-LLM"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto"
)

prompt = """You are a computer engineering expert. Explain concisely:
Q: What's the difference between RISC and CISC architectures?
A:"""

# Move inputs to whatever device the model was dispatched to
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=150,
    temperature=0.7,
    do_sample=True
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
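
Because the base model is an Instruct variant, prompts can also be formatted with the tokenizer's chat template instead of a raw string. A sketch (the system message wording is illustrative):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Irfanuruchi/Llama-3.2-3B-Computer-Engineering-LLM"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [
    {"role": "system", "content": "You are a computer engineering expert. Answer concisely."},
    {"role": "user", "content": "What's the difference between RISC and CISC architectures?"},
]

# apply_chat_template inserts the Llama 3 role tokens and generation prompt
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=150, temperature=0.7, do_sample=True)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```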

## Responsible Use

This model inherits all use restrictions from the Llama 3.2 license. Special considerations:

- Not for production deployment without a compliance review
- Outputs should be verified by domain experts
- Knowledge cutoff: July 2024

## Citation

If using this model in research, please cite:

```bibtex
@misc{llama3.2-computer-eng,
  author = {Irfanuruchi},
  title = {Llama-3.2-3B-Computer-Engineering-LLM},
  year = {2025},
  publisher = {HuggingFace},
  howpublished = {\url{https://huggingface.co/Irfanuruchi/Llama-3.2-3B-Computer-Engineering-LLM}}
}
```