Quantization made by Richard Erkhov.
VanillaKD-Pretrain-Qwen-1.2B - GGUF
- Model creator: https://huggingface.co/MiniLLM/
- Original model: https://huggingface.co/MiniLLM/VanillaKD-Pretrain-Qwen-1.2B/
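These GGUF files can be run with llama.cpp or its Python bindings. A minimal sketch using llama-cpp-python, assuming a downloaded quant named `VanillaKD-Pretrain-Qwen-1.2B.Q4_K_M.gguf` (the actual filename depends on which quantization you pick):

```python
# Minimal sketch: run a GGUF quant with llama-cpp-python.
# The model_path filename is an assumption; use the quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="VanillaKD-Pretrain-Qwen-1.2B.Q4_K_M.gguf",
    n_ctx=2048,  # context window size
)

result = llm("The Pile is a large open-source dataset", max_tokens=64)
print(result["choices"][0]["text"])
```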
Original model description:
library_name: transformers
license: apache-2.0
datasets:
- monology/pile-uncopyrighted
- MiniLLM/pile-tokenized
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
VanillaKD-Pretrain-Qwen-1.2B
VanillaKD-Pretrain-Qwen-1.2B is a 1.2B-parameter model with the Qwen architecture, pre-trained with vanilla token-level knowledge distillation on the Pile for 50B tokens. The teacher model is Qwen1.5-1.8B.
We also open-source the tokenized pre-training corpus for reproducibility.
It is used as the baseline for MiniPLM-Qwen-1.2B.
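For the original (non-GGUF) checkpoint, the card's metadata (`library_name: transformers`, `pipeline_tag: text-generation`) suggests standard causal-LM loading; a minimal sketch, with illustrative generation settings rather than the authors' recipe:

```python
# Minimal sketch: load the original checkpoint with transformers.
# Generation settings here are illustrative, not the authors' recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniLLM/VanillaKD-Pretrain-Qwen-1.2B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Knowledge distillation is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```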
Evaluation
MiniPLM models achieve better performance given the same computation and scale well across model sizes.
Other Baselines
Citation
@article{miniplm,
title={MiniPLM: Knowledge Distillation for Pre-Training Language Models},
author={Yuxian Gu and Hao Zhou and Fandong Meng and Jie Zhou and Minlie Huang},
journal={arXiv preprint arXiv:2410.17215},
year={2024}
}