---
base_model:
  - Qwen/Qwen2.5-Coder-0.5B
license: cc-by-nc-4.0
---



Jina AI: Your Search Foundation, Supercharged!

The code embedding model trained by Jina AI.

Jina Code Embeddings: A Small but Performant Code Embedding Model

Intended Usage & Model Info

jina-code-embeddings is an embedding model for code retrieval. The model supports various types of code retrieval (text-to-code, code-to-code, code-to-text, code-to-completion) and technical question answering across 15+ programming languages.

Built on Qwen/Qwen2.5-Coder-0.5B, jina-code-embeddings-0.5b features:

  • Multilingual support (15+ programming languages) and compatibility with a wide range of domains, including web development, software development, machine learning, data science, and educational coding problems.
  • Task-specific instruction prefixes for NL2Code, Code2Code, Code2NL, Code2Completion, and Technical QA, which can be selected at inference time.
  • Flexible embedding size: dense embeddings are 896-dimensional by default but can be truncated to as few as 64 dimensions with minimal performance loss (a truncation sketch follows this list).
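
For example, Matryoshka-style truncation amounts to keeping the first k dimensions and re-normalizing. A minimal sketch (truncate_to_matryoshka_dim is a hypothetical helper for illustration, not part of the model API):

import numpy as np

def truncate_to_matryoshka_dim(embeddings, dim=128):
    # Keep the first `dim` dimensions and re-normalize to unit length
    truncated = np.asarray(embeddings)[:, :dim]
    return truncated / np.linalg.norm(truncated, axis=1, keepdims=True)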

Summary of features:

| Feature                    | Jina Code Embeddings 0.5B                        |
|----------------------------|--------------------------------------------------|
| Base Model                 | Qwen2.5-Coder-0.5B                               |
| Supported Tasks            | nl2code, code2code, code2nl, code2completion, qa |
| Model DType                | BFloat16                                         |
| Max Sequence Length        | 32768                                            |
| Embedding Vector Dimension | 896                                              |
| Matryoshka Dimensions      | 64, 128, 256, 512, 896                           |
| Pooling Strategy           | Last-token pooling                               |
| Attention Mechanism        | FlashAttention2                                  |

Usage

Requirements

The following Python packages are required:

  • transformers>=4.53.0
  • torch>=2.7.1

Optional / Recommended

  • flash-attention: installing flash-attention is recommended for faster and more memory-efficient inference, but it is not required (an install example follows this list).
  • sentence-transformers: install this package if you want to use the model via the sentence-transformers interface.
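
If you want flash-attention, it is usually installed from PyPI in the same way as the other requirements; for example (assumes a CUDA toolchain is available):

# !pip install flash-attn --no-build-isolation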
via transformers (AutoModel with trust_remote_code=True)
# !pip install transformers>=4.53.0 torch>=2.7.1

from transformers import AutoModel
import torch

# Initialize the model
model = AutoModel.from_pretrained("jinaai/jina-code-embeddings-0.5b", trust_remote_code=True)
model.to("cuda")

# Configure truncate_dim, max_length, batch_size in the encode function if needed

# Encode query
query_embeddings = model.encode(
    ["print hello world in python"],
    task="nl2code",
    prompt_name="query",
)

# Encode passage
passage_embeddings = model.encode(
    ["print('Hello World!')"],
    task="nl2code",
    prompt_name="passage",
)
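
To rank passages against a query, compute the cosine similarity between the two sets of embeddings. A minimal sketch, assuming encode() returns numpy arrays (if it returns torch tensors, convert with .cpu().numpy() first):

import numpy as np

# Normalize defensively in case the vectors are not unit-length already
q = np.asarray(query_embeddings)
p = np.asarray(passage_embeddings)
q = q / np.linalg.norm(q, axis=1, keepdims=True)
p = p / np.linalg.norm(p, axis=1, keepdims=True)
print(q @ p.T)  # cosine similarity matrix of shape (num_queries, num_passages)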
via transformers (using Qwen2Model without trust_remote_code)
# !pip install transformers>=4.53.0 torch>=2.7.1

import torch
import torch.nn.functional as F

from transformers.models.qwen2 import Qwen2Model
from transformers.models.qwen2.tokenization_qwen2_fast import Qwen2TokenizerFast

INSTRUCTION_CONFIG = {
    "nl2code": {
        "query": "Find the most relevant code snippet given the following query:\n",
        "passage": "Candidate code snippet:\n"
    },
    "qa": {
        "query": "Find the most relevant answer given the following question:\n",
        "passage": "Candidate answer:\n"
    },
    "code2code": {
        "query": "Find an equivalent code snippet given the following code snippet:\n",
        "passage": "Candidate code snippet:\n"
    },
    "code2nl": {
        "query": "Find the most relevant comment given the following code snippet:\n",
        "passage": "Candidate comment:\n"
    },
    "code2completion": {
        "query": "Find the most relevant completion given the following start of code snippet:\n",
        "passage": "Candidate completion:\n"
    }
}

# Truncation length used for tokenization in this example; the model supports sequences up to 32768 tokens
MAX_LENGTH = 8192

def cosine_similarity(x, y):
    x = F.normalize(x, p=2, dim=1)
    y = F.normalize(y, p=2, dim=1)
    return x @ y.T

# Pool the hidden state of each sequence's last non-padding token (handles left- and right-padded batches)
def last_token_pool(last_hidden_states, attention_mask):
    left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
    if left_padding:
        return last_hidden_states[:, -1]
    else:
        sequence_lengths = attention_mask.sum(dim=1) - 1
        batch_size = last_hidden_states.shape[0]
        return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]

def add_instruction(instruction, query):
    return f'{instruction}{query}'

# The queries and documents to embed
queries = [
    add_instruction(INSTRUCTION_CONFIG["nl2code"]["query"], "print hello world in python"),
    add_instruction(INSTRUCTION_CONFIG["nl2code"]["query"], "initialize array of 5 zeros in c++")
]
documents = [
    add_instruction(INSTRUCTION_CONFIG["nl2code"]["passage"], "print('Hello World!')"),
    add_instruction(INSTRUCTION_CONFIG["nl2code"]["passage"], "int arr[5] = {0, 0, 0, 0, 0};")
]
all_inputs = queries + documents

tokenizer = Qwen2TokenizerFast.from_pretrained('jinaai/jina-code-embeddings-0.5b')
model = Qwen2Model.from_pretrained('jinaai/jina-code-embeddings-0.5b')

batch_dict = tokenizer(
    all_inputs,
    padding=True,
    truncation=True,
    max_length=MAX_LENGTH,
    return_tensors="pt",
)
batch_dict.to(model.device)
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
query_embeddings = embeddings[:2]
passage_embeddings = embeddings[2:]

# Compute the (cosine) similarity between the query and document embeddings
scores = cosine_similarity(query_embeddings, passage_embeddings)
print(scores)
# tensor([[0.8168, 0.1236],
#         [0.1204, 0.5525]], grad_fn=<MmBackward0>)
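
If flash-attention is installed, the backbone can also be loaded in bfloat16 with FlashAttention2 enabled, mirroring the sentence-transformers configuration below. A sketch (assumes a CUDA device is available):

# Optional: load in bfloat16 with FlashAttention2 on a CUDA GPU
model = Qwen2Model.from_pretrained(
    "jinaai/jina-code-embeddings-0.5b",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="cuda",
)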
via sentence-transformers
# !pip install sentence_transformers>=5.0.0 torch>=2.7.1

import torch
from sentence_transformers import SentenceTransformer

# Load the model
model = SentenceTransformer(
    "jinaai/jina-code-embeddings-0.5b",
    model_kwargs={
        "torch_dtype": torch.bfloat16,
        "attn_implementation": "flash_attention_2",
        "device_map": "cuda"
    }
)

# The queries and documents to embed
queries = [
    "print hello world in python",
    "initialize array of 5 zeros in c++"
]
documents = [
    "print('Hello World!')",
    "int arr[5] = {0, 0, 0, 0, 0};"
]

query_embeddings = model.encode(queries, prompt_name="nl2code_query")
document_embeddings = model.encode(documents, prompt_name="nl2code_document")

# Compute the (cosine) similarity between the query and document embeddings
similarity = model.similarity(query_embeddings, document_embeddings)
print(similarity)
# tensor([[0.8157, 0.1222],
#         [0.1201, 0.5500]])
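
To make use of the Matryoshka embedding sizes through sentence-transformers, you can pass truncate_dim when loading the model. A short sketch (128 is one of the supported dimensions listed above):

# Optional: truncate embeddings to 128 Matryoshka dimensions at load time
model_128 = SentenceTransformer(
    "jinaai/jina-code-embeddings-0.5b",
    truncate_dim=128,
    model_kwargs={"torch_dtype": torch.bfloat16, "device_map": "cuda"}
)
query_embeddings_128 = model_128.encode(queries, prompt_name="nl2code_query")
print(query_embeddings_128.shape)  # (2, 128)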

Training & Evaluation

Please refer to the jina-code-embeddings technical report for training details and benchmark results.

Contact

Join our Discord community and chat with other community members about ideas.