# 🏭 Phi-3 Mini Fine-tuned for Industrial Anomaly Detection
Fine-tuned version of Microsoft's Phi-3-mini-4k-instruct using QLoRA (Quantized Low-Rank Adaptation) for industrial IoT anomaly detection and interpretable diagnostics.
## 📋 Model Description
This model specializes in analyzing industrial sensor data and network telemetry to detect anomalies, identify potential security threats, and provide actionable insights for industrial automation systems.
**Key Features:**
- 🎯 Industrial anomaly classification
- 🔒 Security threat detection
- 📊 Sensor data interpretation
- 🚨 Real-time diagnostic recommendations
- 💡 Explainable AI responses
## 🔧 Training Details

### Base Model
- Architecture: Phi-3-mini-4k-instruct (3.8B parameters)
- Context Length: 4096 tokens
- Quantization: 4-bit NF4 with double quantization
### Fine-tuning Configuration
- Method: QLoRA (Quantized Low-Rank Adaptation)
- LoRA Rank: 32
- LoRA Alpha: 64
- Target Modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
- Dropout: 0.05
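With this rank and these target modules, the adapter size can be sanity-checked with quick arithmetic. The sketch below assumes the standard unfused Phi-3-mini dimensions (hidden size 3072, intermediate size 8192, 32 decoder layers) — verify these against the model's `config.json` before relying on the numbers:

```python
# Rough count of trainable LoRA parameters implied by the configuration above.
# ASSUMED dims (not stated in this card): hidden=3072, intermediate=8192,
# 32 decoder layers, unfused projection layout matching the target-module list.
R = 32  # LoRA rank

# (in_features, out_features) per targeted module in one decoder layer
modules = {
    "q_proj": (3072, 3072),
    "k_proj": (3072, 3072),
    "v_proj": (3072, 3072),
    "o_proj": (3072, 3072),
    "gate_proj": (3072, 8192),
    "up_proj": (3072, 8192),
    "down_proj": (8192, 3072),
}

# Each LoRA adapter adds two matrices: A (r x d_in) and B (d_out x r)
per_layer = sum(R * (d_in + d_out) for d_in, d_out in modules.values())
total = per_layer * 32  # 32 decoder layers
print(total)                   # total trainable adapter parameters
print(f"{total / 3.8e9:.2%}")  # fraction of the 3.8B base parameters
```

Under those assumptions the adapters come to roughly 60M trainable parameters, about 1.6% of the 3.8B base weights — which is what makes QLoRA fine-tuning feasible on a single GPU.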
### Training Parameters
- Epochs: 5
- Batch Size: 4 per device
- Gradient Accumulation: 4 steps (effective batch size: 16)
- Learning Rate: 2e-5
- Optimizer: paged_adamw_8bit
- Scheduler: Cosine with warmup (100 steps)
- Mixed Precision: BF16
### Dataset
- Name: Edge-Industrial-Anomaly-Phi3
- Training Samples: 10,749
- Evaluation Samples: 1,195
- Format: Conversational (user/assistant turns)
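For illustration, a conversational training sample in this style might look like the following — the field names and the assistant text are hypothetical, not the actual dataset schema:

```python
# Hypothetical example of a user/assistant conversational sample.
# Field names ("messages", "role", "content") are an assumption based on the
# common chat-format convention, not taken from the dataset itself.
sample = {
    "messages": [
        {
            "role": "user",
            "content": "Sensor Readings: Temperature: 95°C, Vibration: 5.8 m/s, "
                       "Pressure: 120 kPa, Flow Rate: 6.2 L/min",
        },
        {
            "role": "assistant",
            "content": "Anomaly detected: temperature exceeds the normal operating "
                       "range; inspect the cooling loop before continuing operation.",
        },
    ]
}
print(sample["messages"][0]["role"], "->", sample["messages"][1]["role"])
```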
## 📊 Evaluation Results
| Metric | Value |
|---|---|
| Eval Loss | 2.3992 |
| Token Accuracy | 54.51% |
| Eval Runtime | 81.12s |
| Samples/Second | 14.73 |
## 🚀 Usage
### Using Transformers (Recommended)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "YOUR_USERNAME/phi3-industrial-anomaly",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "YOUR_USERNAME/phi3-industrial-anomaly",
    trust_remote_code=True,
)

# Prepare input in the Phi-3 chat format
prompt = """<|user|>
Sensor Readings: Temperature: 95°C, Vibration: 5.8 m/s, Pressure: 120 kPa, Flow Rate: 6.2 L/min
<|end|>
<|assistant|>"""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate response
outputs = model.generate(
    **inputs,
    max_new_tokens=150,
    temperature=0.7,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Using PEFT (Load Adapters Only)

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
import torch

# Load the base model with the LoRA adapters applied
model = AutoPeftModelForCausalLM.from_pretrained(
    "YOUR_USERNAME/phi3-industrial-anomaly",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "YOUR_USERNAME/phi3-industrial-anomaly",
    trust_remote_code=True,
)
# Generation then proceeds exactly as in the Transformers example above.
```
### Example Prompts

**Network Security Analysis:**

```
<|user|>
Network Telemetry: Arp.Opcode: 0.0, Icmp.Checksum: 0.0, Suspicious packet patterns detected
<|end|>
<|assistant|>
```

**Sensor Diagnostics:**

```
<|user|>
Sensor Readings: Temperature: 110°C, Vibration: 7.2 m/s, Pressure: 85 kPa, Flow Rate: 3.1 L/min
<|end|>
<|assistant|>
```
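In practice you will want to render live sensor values into this template programmatically. Here is a minimal sketch of such a helper — the function name and the example readings are illustrative, not part of the model's API:

```python
# Hypothetical helper that renders sensor readings into the Phi-3 chat
# template used by the prompts above. Names and units are illustrative.
def build_sensor_prompt(readings: dict) -> str:
    body = ", ".join(f"{name}: {value}" for name, value in readings.items())
    return f"<|user|>\nSensor Readings: {body}\n<|end|>\n<|assistant|>"

prompt = build_sensor_prompt({
    "Temperature": "110°C",
    "Vibration": "7.2 m/s",
    "Pressure": "85 kPa",
    "Flow Rate": "3.1 L/min",
})
print(prompt)
```

Keeping the rendered string byte-identical to the training template (same labels, same `<|user|>`/`<|end|>`/`<|assistant|>` markers) matters, since the Limitations section notes the model performs best on data formatted as seen during training.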
## 🎯 Use Cases
- Industrial IoT Monitoring: Real-time anomaly detection in manufacturing plants
- Predictive Maintenance: Early warning systems for equipment failure
- Security Operations: Network intrusion detection in OT/IT environments
- Edge Deployment: Lightweight inference on industrial gateways and edge devices
- Smart Manufacturing: Quality control and process optimization
## 🛠️ Edge Deployment

### Model Formats Available
- PyTorch (this repo): Full model for transformers
- GGUF: For llama.cpp and edge devices (see releases)
- ONNX: For optimized inference (convert with Optimum)
### Hardware Requirements
- GPU Inference: 8GB+ VRAM (with quantization)
- CPU Inference: 16GB+ RAM
- Edge Devices: Compatible with Jetson Nano, Raspberry Pi 5, Industrial PCs
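These figures follow from simple weight-memory arithmetic. The estimate below covers weights only — KV cache, activations, and framework overhead come on top, which is why the guidance above leaves headroom:

```python
# Back-of-the-envelope weight-memory estimate for a 3.8B-parameter model.
# Weights only: KV cache, activations, and runtime overhead are excluded.
params = 3.8e9
bytes_per_param = {"bf16": 2.0, "int8": 1.0, "nf4": 0.5}

weight_gib = {fmt: params * b / 2**30 for fmt, b in bytes_per_param.items()}
for fmt, gib in weight_gib.items():
    print(f"{fmt}: ~{gib:.1f} GiB of weights")
```

At bf16 the weights alone are ~7.1 GiB, which explains the 8GB+ VRAM guidance; 4-bit NF4 quantization cuts the weights to ~1.8 GiB, bringing the model within reach of the listed edge devices.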
## 📈 Performance Considerations
- Quantization: Model uses 4-bit quantization for efficient memory usage
- Inference Speed: ~14.7 samples/second during evaluation on NVIDIA RTX-class GPUs
- Context Window: 4096 tokens (sufficient for detailed sensor logs)
- Generation: Typical response time 2-5 seconds on GPU
## ⚠️ Limitations
- Model may require domain-specific fine-tuning for your specific industrial environment
- Best performance with sensor data in the format seen during training
- Evaluation accuracy (54.51%) suggests room for improvement with more training epochs
- Not suitable for safety-critical decisions without human oversight
## 🔄 Version History

- v1.0 (2026-01-06): Initial release
  - 5 epochs of QLoRA fine-tuning
  - LoRA rank 32, alpha 64
  - Trained on Edge-Industrial-Anomaly-Phi3 dataset
## 📄 Citation

If you use this model, please cite:

```bibtex
@misc{phi3-industrial-anomaly-2026,
  author       = {Your Name},
  title        = {Phi-3 Mini Fine-tuned for Industrial Anomaly Detection},
  year         = {2026},
  publisher    = {HuggingFace},
  howpublished = {\url{https://huggingface.co/YOUR_USERNAME/phi3-industrial-anomaly}}
}
```
## 📜 License
This model is released under the MIT License. The base Phi-3 model is subject to Microsoft's Phi-3 license.
## 🙏 Acknowledgments
- Microsoft Research: For the Phi-3-mini-4k-instruct base model
- Hugging Face: For the transformers and PEFT libraries
- Dataset: ssam17/Edge-Industrial-Anomaly-Phi3
## 📞 Contact
For questions, issues, or collaboration opportunities, please open an issue in the repository or contact the model author.