Model Card for Ethencam/lora-deepseek-qwen-1.5B
This LoRA adapter was trained on the DeepSeek-R1-Distill-Qwen-1.5B model to provide better replies to engineering questions from a smaller LLM.
Model Details
Model Description
- Developed by: Just some student
- Model type: PEFT
- Language(s) (NLP): [More Information Needed]
- License: [More Information Needed]
- Finetuned from model [optional]: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
Model Sources [optional]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]
Uses
[More Information Needed]
Out-of-Scope Use
[More Information Needed]
Bias, Risks, and Limitations
[More Information Needed]
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
How to Get Started with the Model
Use the code below to get started with the model.
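The adapter weights are loaded on top of the base model with PEFT. The snippet below is a minimal sketch, assuming the adapter is published as `Ethencam/lora-deepseek-qwen-1.5B` and that `transformers` and `peft` are installed; adjust the repository ID or use a local path to the adapter as needed.

```python
# Minimal sketch: load the base model and attach this LoRA adapter.
# Requires `transformers`, `peft`, and `torch` (e.g. `pip install transformers peft torch`).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
adapter_id = "Ethencam/lora-deepseek-qwen-1.5B"  # this repository, or a local path to the adapter

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

# Attach the LoRA weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Example engineering prompt (illustrative only).
prompt = "Explain the difference between stress and strain in materials engineering."
inputs = tokenizer(prompt, return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```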
Training Details
Training Data
[More Information Needed]
Training Procedure
Training Hyperparameters
- Training regime: [More Information Needed]
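The exact hyperparameters are not recorded here. For orientation only, the sketch below shows how a LoRA configuration is typically defined with PEFT for this base model; every value in it (rank, alpha, dropout, target modules) is a hypothetical placeholder, not the setting actually used for this adapter.

```python
# Illustrative only: a typical LoRA setup with PEFT for a causal LM.
# None of these values are the hyperparameters actually used to train this adapter.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B")

lora_config = LoraConfig(
    r=16,              # hypothetical LoRA rank
    lora_alpha=32,     # hypothetical scaling factor
    lora_dropout=0.05, # hypothetical dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # common attention projections in Qwen-style models
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # prints the small fraction of parameters a LoRA adapter trains
```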
Metrics
- Training loss: 1.399200
- Evaluation loss: 1.393927
Results
[More Information Needed]
- Hardware Type: 2 x NVIDIA T4
- Hours used: 2
- Cloud Provider: Kaggle
- Compute Region: Russia
Framework versions
- PEFT 0.14.0
Model tree for Ethencam/lora-deepseek-qwen-1.5B
- Base model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B