---
tags:
  - transformers
  - sentence-transformers
language:
  - en
license: apache-2.0
library_name: transformers
base_model:
  - bge-base-en-v1.5
model-index:
  - name: kpr-bge-base-en-v1.5
    results: null
---

# Knowledgeable Embedding: kpr-bge-base-en-v1.5

## Introduction

**Injecting dynamically updatable entity knowledge into embeddings to enhance RAG**

A key limitation of large language models (LLMs) is their inability to capture less-frequent or up-to-date entity knowledge, often leading to factual inaccuracies and hallucinations. Retrieval-augmented generation (RAG), which incorporates external knowledge through retrieval, is a common approach to mitigate this issue.

RAG typically relies on embedding-based retrieval, but embedding models are themselves built on language models and share the same weakness: they struggle with queries involving less-frequent entities, often failing to retrieve the very knowledge needed to overcome this limitation.

Knowledgeable Embedding addresses this challenge by injecting real-world entity knowledge into embeddings, making them more knowledgeable.

The entity knowledge is pluggable and can be dynamically updated.
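
To make "pluggable" concrete, here is a minimal illustrative sketch. The names and data structures below are our assumptions for illustration only, not the KPR API: the point is that entity vectors live in an external lookup table that can be overwritten at any time, without retraining the encoder.

```python
import torch

# Illustrative sketch only -- these names are hypothetical, not the KPR API.
# Entity knowledge is a plain lookup table that lives outside the encoder
# weights, so individual entries can be added or replaced at any time.
entity_embeddings: dict[str, torch.Tensor] = {
    "Dominican Liberation Party": torch.randn(768),
    "Mompesson House": torch.randn(768),
}

def update_entity(name: str, vector: torch.Tensor) -> None:
    """Swap in fresh entity knowledge without touching the encoder."""
    entity_embeddings[name] = vector

# Updating knowledge is a dictionary write, not a training run.
update_entity("Mompesson House", torch.randn(768))
```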

For further details, refer to [our paper](https://arxiv.org/abs/2507.03922) or GitHub repository.

## Model List

For practical use, we recommend `knowledgeable-ai/kpr-bge-*`, which significantly outperforms state-of-the-art models on queries involving less-frequent entities while performing comparably on other queries, as reported in our paper.

In the reported model sizes, we do not count the entity embeddings, since they are stored in CPU memory and have a negligible impact on runtime performance. See this page for details.
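
As a rough, purely hypothetical check (our addition; it assumes entity-embedding parameters carry "entity" in their names, which may not match the actual implementation), one could compare parameter counts with and without such tables:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "knowledgeable-ai/kpr-bge-base-en-v1.5", trust_remote_code=True
)

# Hypothetical: assumes entity-embedding parameters are named accordingly.
total = sum(p.numel() for p in model.parameters())
non_entity = sum(
    p.numel() for name, p in model.named_parameters() if "entity" not in name
)
print(f"All parameters: {total:,}; excluding entity embeddings: {non_entity:,}")
```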

## Model Details

- **Model Name:** kpr-bge-base-en-v1.5
- **Base Model:** [bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)
- **Maximum Sequence Length:** 512 tokens
- **Embedding Dimension:** 768

## Usage

This model can be used via Hugging Face Transformers or Sentence Transformers:

### Hugging Face Transformers

```python
from transformers import AutoTokenizer, AutoModel
import torch

MODEL_NAME_OR_PATH = "knowledgeable-ai/kpr-bge-base-en-v1.5"

input_texts = [
    "Who founded Dominican Liberation Party?",
    "Who owns Mompesson House?"
]

# Load model and tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME_OR_PATH, trust_remote_code=True)
model = AutoModel.from_pretrained(MODEL_NAME_OR_PATH, trust_remote_code=True)

# Preprocess the text
preprocessed_inputs = tokenizer(input_texts, return_tensors="pt", padding=True)

# Compute embeddings
with torch.no_grad():
    embeddings = model.encode(**preprocessed_inputs)

print("Embeddings:", embeddings)
```

### Sentence Transformers

```python
from sentence_transformers import SentenceTransformer

MODEL_NAME_OR_PATH = "knowledgeable-ai/kpr-bge-base-en-v1.5"

input_texts = [
    "Who founded Dominican Liberation Party?",
    "Who owns Mompesson House?"
]

# Load model from the Hugging Face Hub
model = SentenceTransformer(MODEL_NAME_OR_PATH, trust_remote_code=True)

# Compute embeddings
embeddings = model.encode(input_texts)

print("Embeddings:", embeddings)
```

**IMPORTANT:** This code requires a Sentence Transformers version later than v5.1.0, which has not yet been released at the time of writing. Until then, please install the library directly from GitHub:

```bash
pip install git+https://github.com/UKPLab/sentence-transformers.git
```
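
After installation, the embeddings work with the standard Sentence Transformers utilities. For example (our addition, assuming the model exposes the usual `similarity` helper available in recent Sentence Transformers releases):

```python
# Pairwise similarity scores; cosine similarity is the default metric.
scores = model.similarity(embeddings, embeddings)
print(scores)
```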

## License

This model is licensed under the Apache License, Version 2.0.

## Citation

If you use this model in your research, please cite the following paper: [Dynamic Injection of Entity Knowledge into Dense Retrievers](https://arxiv.org/abs/2507.03922)

```bibtex
@article{yamada2025kpr,
  title={Dynamic Injection of Entity Knowledge into Dense Retrievers},
  author={Ikuya Yamada and Ryokan Ri and Takeshi Kojima and Yusuke Iwasawa and Yutaka Matsuo},
  journal={arXiv preprint arXiv:2507.03922},
  year={2025}
}
```