spacemanidol committed · verified
Commit 98a993a · 1 Parent(s): d5780b3

Update README.md

Files changed (1): README.md (+116 -3)

---
task_categories:
- question-answering
language:
- en
tags:
- TREC-RAG
- RAG
- MSMARCO
- MSMARCOV2.1
- Snowflake
- gte
- gte-en-v1.5
pretty_name: TREC-RAG-Embedding-Baseline gte-en-v1.5
size_categories:
- 100M<n<1B
configs:
- config_name: corpus
  data_files:
  - split: train
    path: corpus/*
---

# Alibaba GTE-Large-V1.5 Embeddings for MSMARCO V2.1 for TREC-RAG

This dataset contains the embeddings for the MSMARCO-V2.1 dataset, which is used as the corpus for [TREC RAG](https://trec-rag.github.io/).
All embeddings are created using [GTE Large V1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) and are intended to serve as a simple baseline for dense retrieval-based methods.
Note that the embeddings are not normalized, so you will need to normalize them before use.
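
For example, a minimal numpy sketch of that L2-normalization step (the helper name is ours, not part of the dataset; 1024 matches the output width of gte-large-en-v1.5):
```python
import numpy as np

def l2_normalize(embeddings: np.ndarray) -> np.ndarray:
    """L2-normalize a (num_vectors, dim) array of stored embeddings row by row."""
    return embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

# Stand-in vectors with the same width as gte-large-en-v1.5 embeddings (1024)
vectors = np.random.rand(4, 1024).astype(np.float32)
print(np.linalg.norm(l2_normalize(vectors), axis=1))  # each row now has unit norm
```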

## Retrieval Performance
Retrieval performance for the TREC DL21-23 query sets, MSMARCO-v2 Dev, and the Raggy queries can be found below, with BM25 as a baseline. For both systems, retrieval is at the segment level and Doc Score = max(passage score).
Retrieval is done via dot product and happens in BF16.
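
The sketch below is a rough illustration of that scoring scheme, not the official evaluation code: it scores segments against a query with a BF16 dot product and aggregates to document scores by taking the maximum passage score per document. The helper name and the `docid#segment` id format are assumptions made for illustration.
```python
import torch

def doc_scores_from_segments(query_emb, seg_embs, seg_ids):
    """Score segments with a BF16 dot product, then set Doc Score = max(passage score)."""
    q = torch.as_tensor(query_emb, dtype=torch.bfloat16)    # (dim,)
    s = torch.as_tensor(seg_embs, dtype=torch.bfloat16)     # (num_segments, dim)
    seg_scores = (s @ q).float().tolist()                   # dot product per segment

    doc_scores = {}
    for seg_id, score in zip(seg_ids, seg_scores):
        parent_doc = seg_id.split("#")[0]                    # hypothetical 'docid#segment' id format
        doc_scores[parent_doc] = max(doc_scores.get(parent_doc, float("-inf")), score)
    return doc_scores

# Toy usage with made-up vectors and segment ids
print(doc_scores_from_segments([0.1, 0.2], [[0.3, 0.4], [0.5, 0.6]], ["doc_0#0", "doc_0#1"]))
```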

## Loading the dataset

### Loading the document embeddings

You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset("spacemanidol/msmarco-v2.1-gte-large-en-v1.5", split="train")
```

Or you can also stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset("spacemanidol/msmarco-v2.1-gte-large-en-v1.5", split="train", streaming=True)
for doc in docs:
    doc_id = doc['docid']
    url = doc['url']
    text = doc['text']
    emb = doc['embedding']
```

Note: the full dataset corpus is ~620GB, so it will take a while to download and may not fit on some devices.

## Search
A full search example (over the first 100 documents streamed from the corpus):
```python
from datasets import load_dataset
import torch
from transformers import AutoModel, AutoTokenizer
import numpy as np

top_k = 100
docs_stream = load_dataset("spacemanidol/msmarco-v2.1-gte-large-en-v1.5", split="train", streaming=True)

docs = []
doc_embeddings = []

for doc in docs_stream:
    docs.append(doc)
    doc_embeddings.append(doc['embedding'])
    if len(docs) >= top_k:
        break

doc_embeddings = np.asarray(doc_embeddings)

model = AutoModel.from_pretrained('Alibaba-NLP/gte-large-en-v1.5', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/gte-large-en-v1.5')
model.eval()

query_prefix = ''
queries = ['how do you clean smoke off walls']
queries_with_prefix = ["{}{}".format(query_prefix, i) for i in queries]
query_tokens = tokenizer(queries_with_prefix, padding=True, truncation=True, return_tensors='pt', max_length=512)

# Compute query embeddings (CLS pooling)
with torch.no_grad():
    query_embeddings = model(**query_tokens)[0][:, 0]

# Normalize embeddings (the stored corpus embeddings are not normalized)
query_embeddings = torch.nn.functional.normalize(query_embeddings, p=2, dim=1).numpy()
doc_embeddings = doc_embeddings / np.linalg.norm(doc_embeddings, axis=1, keepdims=True)

# Compute dot scores between the query embedding and document embeddings
dot_scores = np.matmul(query_embeddings, doc_embeddings.transpose())[0]
top_k_hits = np.argpartition(dot_scores, -top_k)[-top_k:].tolist()

# Sort top_k_hits by dot score
top_k_hits.sort(key=lambda x: dot_scores[x], reverse=True)

# Print results
print("Query:", queries[0])
for doc_id in top_k_hits:
    print(docs[doc_id]['docid'])
    print(docs[doc_id]['text'])
    print(docs[doc_id]['url'], "\n")
```