tomaarsen and alvarobartt committed
Commit 4328cf2 · verified · 1 Parent(s): 84fccfe

Add `text-embeddings-inference` tag & snippet (#14)

- Add `text-embeddings-inference` tag & snippet (54591597ec6b010533f416d3aba6347e70666397)
- embeddings models -> embedding models (b6cfe6936aeb7c9e45d1099430094b23ccf9712b)


Co-authored-by: Alvaro Bartolome <[email protected]>

Files changed (1)
  1. README.md (+31 -3)
README.md CHANGED

@@ -57,6 +57,7 @@ tags:
 - feature-extraction
 - sentence-similarity
 - transformers
+- text-embeddings-inference
 language_bcp47:
 - fr-ca
 - pt-br
@@ -100,9 +101,9 @@ from transformers import AutoTokenizer, AutoModel
 import torch
 
 
-#Mean Pooling - Take attention mask into account for correct averaging
+# Mean Pooling - Take attention mask into account for correct averaging
 def mean_pooling(model_output, attention_mask):
-    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
+    token_embeddings = model_output[0] # First element of model_output contains all token embeddings
     input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
     return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
 
@@ -121,7 +122,7 @@ encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tenso
 with torch.no_grad():
     model_output = model(**encoded_input)
 
-# Perform pooling. In this case, average pooling
+# Perform pooling. In this case, mean pooling
 sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
 
 print("Sentence embeddings:")
@@ -129,6 +130,33 @@ print(sentence_embeddings)
 ```
 
 
+## Usage (Text Embeddings Inference (TEI))
+
+[Text Embeddings Inference (TEI)](https://github.com/huggingface/text-embeddings-inference) is a blazing fast inference solution for text embedding models.
+
+- CPU:
+```bash
+docker run -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-latest --model-id sentence-transformers/paraphrase-multilingual-mpnet-base-v2 --pooling mean --dtype float16
+```
+
+- NVIDIA GPU:
+```bash
+docker run --gpus all -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cuda-latest --model-id sentence-transformers/paraphrase-multilingual-mpnet-base-v2 --pooling mean --dtype float16
+```
+
+Send a request to `/v1/embeddings` to generate embeddings via the [OpenAI Embeddings API](https://platform.openai.com/docs/api-reference/embeddings/create):
+```bash
+curl http://localhost:8080/v1/embeddings \
+    -H "Content-Type: application/json" \
+    -d '{
+        "model": "sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
+        "input": "This is an example sentence"
+    }'
+```
+
+Or check the [Text Embeddings Inference API specification](https://huggingface.github.io/text-embeddings-inference/) instead.
+
+
 
 ## Full Model Architecture
 ```
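
Since the added snippet documents an OpenAI-compatible `/v1/embeddings` route, the same request can be made from Python. Below is a minimal sketch (not part of the commit) using `requests`, assuming the TEI container from the commands above is running on `localhost:8080`; the response is parsed per the OpenAI Embeddings API schema referenced in the diff.

```python
# Minimal sketch: query the TEI server via its OpenAI-compatible
# /v1/embeddings route (assumes the container above runs on localhost:8080).
import requests

response = requests.post(
    "http://localhost:8080/v1/embeddings",
    json={
        "model": "sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
        "input": "This is an example sentence",
    },
)
response.raise_for_status()

# Per the OpenAI Embeddings API schema, vectors are under data[i].embedding.
embedding = response.json()["data"][0]["embedding"]
print(len(embedding))  # dimensionality of the sentence embedding
```

Any OpenAI-compatible client should work against this route in the same way, since TEI mirrors that API's request and response shapes.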