This model is a fine-tuned LLaMA-7B model, released under a non-commercial license (see the LICENSE file). You should only use this model after having been granted access to the base LLaMA model by filling out this form.

This model is a semantic parser for Wikidata: it translates natural-language questions into SPARQL queries. Refer to the following for more information:

GitHub repository: https://github.com/stanford-oval/wikidata-emnlp23

Paper: https://aclanthology.org/2023.emnlp-main.353/
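A minimal usage sketch with the Hugging Face `transformers` library is shown below. The Alpaca-style prompt template and the post-processing of the decoded text are assumptions; consult the GitHub repository above for the exact prompt format used during training.

```python
# Hedged sketch: querying this model as a text-to-SPARQL semantic parser.
# ASSUMPTION: the Alpaca-style instruction template below mirrors the
# training format; verify against the wikidata-emnlp23 repository.

MODEL_ID = "stanford-oval/llama-7b-wikiwebquestions"


def build_prompt(question: str) -> str:
    """Wrap a natural-language question in an Alpaca-style instruction prompt."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{question}\n\n"
        "### Response:\n"
    )


def generate_sparql(question: str, max_new_tokens: int = 256) -> str:
    """Load the model and decode a SPARQL query.

    Requires enough GPU memory for the 6.74B FP16 weights; imports are
    deferred so the prompt helper stays usable without torch installed.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(build_prompt(question), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    text = tokenizer.decode(output[0], skip_special_tokens=True)
    # Keep only the model's answer after the response marker.
    return text.split("### Response:")[-1].strip()


if __name__ == "__main__":
    print(build_prompt("What country is the Eiffel Tower in?"))
```

The deferred imports keep the prompt-building helper lightweight; only `generate_sparql` pulls in `torch` and `transformers`.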


This model is trained on the WikiWebQuestions dataset and the Stanford Alpaca dataset.

Model size: 6.74B parameters (FP16, Safetensors)