---
tags:
- transformers
- sentence-transformers
language:
- en
license: apache-2.0
library_name: transformers
base_model:
- RetroMAE
model-index:
- name: kpr-retromae
results:
---
# Knowledgeable Embedding: kpr-retromae
## Introduction
**Injecting dynamically updatable entity knowledge into embeddings to enhance RAG**
A key limitation of large language models (LLMs) is their inability to capture less-frequent or up-to-date entity knowledge, often leading to factual inaccuracies and hallucinations. Retrieval-augmented generation (RAG), which incorporates external knowledge through retrieval, is a common approach to mitigate this issue.
However, RAG typically relies on embedding-based retrieval, and since embedding models are themselves built on language models, they struggle with the same queries involving less-frequent entities and often fail to retrieve the very knowledge needed to address this limitation.
**Knowledgeable Embedding** addresses this challenge by injecting real-world entity knowledge into embeddings, making them more *knowledgeable*.
**The entity knowledge is pluggable and can be dynamically updated.**
For further details, refer to [our paper](https://arxiv.org/abs/2507.03922) or [GitHub repository](https://github.com/knowledgeable-embedding/knowledgeable-embedding).
## Model List
| Model | Model Size | Base Model |
| --- | --- | --- |
| [knowledgeable-ai/kpr-bert-base-uncased](https://huggingface.co/knowledgeable-ai/kpr-bert-base-uncased) | 112M | [bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) |
| [knowledgeable-ai/kpr-retromae](https://huggingface.co/knowledgeable-ai/kpr-retromae) | 112M | [RetroMAE](https://huggingface.co/Shitao/RetroMAE) |
| [knowledgeable-ai/kpr-bge-base-en](https://huggingface.co/knowledgeable-ai/kpr-bge-base-en) | 112M | [bge-base-en](https://huggingface.co/BAAI/bge-base-en) |
| [knowledgeable-ai/kpr-bge-base-en-v1.5](https://huggingface.co/knowledgeable-ai/kpr-bge-base-en-v1.5) | 112M | [bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) |
| [knowledgeable-ai/kpr-bge-large-en-v1.5](https://huggingface.co/knowledgeable-ai/kpr-bge-large-en-v1.5) | 340M | [bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) |
For practical use, we recommend the `knowledgeable-ai/kpr-bge-*` models, which significantly outperform state-of-the-art models on queries involving less-frequent entities while performing comparably on other queries, as reported in [our paper](https://arxiv.org/abs/2507.03922).
The model sizes above do not include the entity embeddings, since these are stored in CPU memory and have a negligible impact on runtime performance. See [this page](https://github.com/knowledgeable-embedding/knowledgeable-embedding/wiki/Internals-of-Knowledgeable-Embedding) for details.
## Model Details
- Model Name: kpr-retromae
- Base Model: [RetroMAE](https://huggingface.co/Shitao/RetroMAE)
- Maximum Sequence Length: 512
- Embedding Dimension: 768
## Usage
This model can be used via [Hugging Face Transformers](https://github.com/huggingface/transformers) or [Sentence Transformers](https://github.com/UKPLab/sentence-transformers):
### Hugging Face Transformers
```python
from transformers import AutoTokenizer, AutoModel
import torch

MODEL_NAME_OR_PATH = "knowledgeable-ai/kpr-retromae"

input_texts = [
    "Who founded Dominican Liberation Party?",
    "Who owns Mompesson House?",
]

# Load model and tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME_OR_PATH, trust_remote_code=True)
model = AutoModel.from_pretrained(MODEL_NAME_OR_PATH, trust_remote_code=True)

# Preprocess the text
preprocessed_inputs = tokenizer(input_texts, return_tensors="pt", padding=True)

# Compute embeddings
with torch.no_grad():
    embeddings = model.encode(**preprocessed_inputs)
print("Embeddings:", embeddings)
```
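The embeddings can then be used for dense retrieval by scoring candidate passages against the queries. Below is a minimal sketch that continues from the snippet above; the passage texts are made up for illustration, and cosine similarity is used as an example scoring function (the scoring used in the paper may differ):
```python
import torch
import torch.nn.functional as F

# Hypothetical passages to rank against the queries above (illustrative only)
passages = [
    "The Dominican Liberation Party was founded by Juan Bosch.",
    "Mompesson House is owned by the National Trust.",
]

# Encode the passages with the same tokenizer and model as above
passage_inputs = tokenizer(passages, return_tensors="pt", padding=True)
with torch.no_grad():
    passage_embeddings = model.encode(**passage_inputs)

# Cosine similarity between every query and every passage;
# higher scores indicate a closer match
scores = F.normalize(embeddings, dim=-1) @ F.normalize(passage_embeddings, dim=-1).T
print("Similarity scores:", scores)
```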
### Sentence Transformers
```python
from sentence_transformers import SentenceTransformer

MODEL_NAME_OR_PATH = "knowledgeable-ai/kpr-retromae"

input_texts = [
    "Who founded Dominican Liberation Party?",
    "Who owns Mompesson House?",
]

# Load model from the Hugging Face Hub
model = SentenceTransformer(MODEL_NAME_OR_PATH, trust_remote_code=True)

# Compute embeddings
embeddings = model.encode(input_texts)
print("Embeddings:", embeddings)
```
**IMPORTANT:** This code will be supported in versions of Sentence Transformers later than v5.1.0, which have not yet been released at the time of writing. Until then, please install the library directly from GitHub:
```bash
pip install git+https://github.com/UKPLab/sentence-transformers.git
```
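As a usage sketch, and assuming the resulting embeddings behave like standard Sentence Transformers output, retrieval can be performed with the library's built-in utilities. The passages below are made up for illustration:
```python
from sentence_transformers import SentenceTransformer, util

MODEL_NAME_OR_PATH = "knowledgeable-ai/kpr-retromae"

queries = [
    "Who founded Dominican Liberation Party?",
    "Who owns Mompesson House?",
]
# Hypothetical passages, made up for illustration
passages = [
    "The Dominican Liberation Party was founded by Juan Bosch.",
    "Mompesson House is owned by the National Trust.",
]

model = SentenceTransformer(MODEL_NAME_OR_PATH, trust_remote_code=True)

query_embeddings = model.encode(queries)
passage_embeddings = model.encode(passages)

# Retrieve the best-matching passage for each query (cosine similarity by default)
hits = util.semantic_search(query_embeddings, passage_embeddings, top_k=1)
for query, query_hits in zip(queries, hits):
    best = query_hits[0]
    print(query, "->", passages[best["corpus_id"]], f"(score: {best['score']:.3f})")
```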
## License
This model is licensed under the Apache License, Version 2.0.
## Citation
If you use this model in your research, please cite the following paper:
[Dynamic Injection of Entity Knowledge into Dense Retrievers](https://arxiv.org/abs/2507.03922)
```bibtex
@article{yamada2025kpr,
  title={Dynamic Injection of Entity Knowledge into Dense Retrievers},
  author={Ikuya Yamada and Ryokan Ri and Takeshi Kojima and Yusuke Iwasawa and Yutaka Matsuo},
  journal={arXiv preprint arXiv:2507.03922},
  year={2025}
}
```