---
library_name: transformers
license: apache-2.0
datasets:
- Manual-Dataset-Creation-Project/Malum-230
- llm-jp/oasst2-33k-ja
language:
- ja
base_model:
- Qwen/Qwen2.5-7B
inference: false
---
# Matsu-7B
## Description
Matsu-7B is an instruction-tuned model built on Qwen2.5-7B, trained on the oasst2-33k-ja and Malum-230 datasets.
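Since the base model is Qwen2.5-7B, Matsu-7B presumably uses Qwen's ChatML-style chat template (this is an assumption; the template is not stated on this card). A minimal sketch of that prompt format, which `tokenizer.apply_chat_template` would normally produce for you:

```python
def build_prompt(messages):
    """Render a list of {"role", "content"} dicts in ChatML form,
    ending with an open assistant turn for the model to complete.

    Assumes Matsu-7B keeps the base Qwen2.5 chat template unchanged.
    """
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Generation prompt: the assistant turn the model should fill in.
    prompt += "<|im_start|>assistant\n"
    return prompt

messages = [
    {"role": "system", "content": "あなたは誠実で優秀な日本語アシスタントです。"},
    {"role": "user", "content": "日本で一番高い山は？"},
]
print(build_prompt(messages))
```

In practice you would load the model with `transformers` (`AutoTokenizer` / `AutoModelForCausalLM`) and call `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` rather than formatting the prompt by hand.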
## Series
| Name | Link |
| --- | --- |
| Malum-230 | [Manual-Dataset-Creation-Project/Malum-230](https://huggingface.co/datasets/Manual-Dataset-Creation-Project/Malum-230) |
| Take-7B | [Manual-Dataset-Creation-Project/Take-7B](https://huggingface.co/Manual-Dataset-Creation-Project/Take-7B) |
## Contributors
- [Sudy](https://huggingface.co/sudy-super)
- [ほーりーふぉっくす](https://huggingface.co/Holy-fox)
## Acknowledgments
We would like to express our gratitude to [VOLTMIND](https://voltmind.jp/) for providing the computational resources used to train this model.