sahithkumar7 committed on
Commit 36894ac · verified · 1 Parent(s): 2b881e2

Add new SentenceTransformer model

1_Pooling/config.json ADDED

```json
{
  "word_embedding_dimension": 768,
  "pooling_mode_cls_token": false,
  "pooling_mode_mean_tokens": true,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
```
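The config above enables only `pooling_mode_mean_tokens`, i.e. the sentence embedding is the attention-masked average of the token embeddings. A minimal NumPy sketch of that pooling step (the toy arrays are illustrative, not real model outputs):

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings over the sequence, ignoring padding positions.

    token_embeddings: (batch, seq_len, dim)
    attention_mask:   (batch, seq_len), 1 for real tokens, 0 for padding
    """
    mask = attention_mask[:, :, None].astype(token_embeddings.dtype)  # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=1)                    # (batch, dim)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)                    # avoid division by zero
    return summed / counts

# Toy example: 1 sentence, 3 tokens (the last is padding), dim 2
tokens = np.array([[[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]]])
mask = np.array([[1, 1, 0]])
print(mean_pool(tokens, mask))  # [[2. 3.]] — the padding token is excluded
```

The Pooling module in the model below applies exactly this kind of masked mean over the 768-dimensional MPNet token embeddings.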
README.md ADDED
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:80
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: microsoft/mpnet-base
widget:
- source_sentence: How many different active substances were detected in surface water
    across all catchment areas?
  sentences:
  - 'metabolites were not detected in the water bodies.

    2.1.1. Antibiotics/Enzyme-Inhibitors and

    Abacavir in Surface-Water

    Fifty detections were found in all catchment areas in surface water, which corresponds
    to 15 different active substances:

    12 antibiotics, two enzyme inhibitors, and one antiviral. The number of detections
    per sampling station ranged from 0 to 7

    different active substances. The Ave river-Prazins (Santo Tirso) and Serzedelo
    I and II (Guimarães) as well as Ria

    Formosa-coastal water (Faro and Olhão), each one with two sampling sites, showed
    the most detected compounds in'
  - '2. Results

    2.1. Frequency of Detections:

    Antibiotics/Enzyme-Inhibitors and Abacavir

    in Surface-Groundwater

    During the screening framework beyond the antibiotics/enzyme-inhibitors, the antiviral
    abacavir was detected. Therefore,

    given the relevance of this compound, it was included in the present study. Although
    enzyme inhibitors belong to the

    antibiotic group, their specific pharmacological properties and detection were
    sorted apart. In the present study, antibiotic

    metabolites were not detected in the water bodies.

    2.1.1. Antibiotics/Enzyme-Inhibitors and

    Abacavir in Surface-Water'
  - 'surface water. The relatively higher detection of substances downstream of the
    effluent discharge points compared with a

    low detection in upstream samples could be attributed to the low efficiency in
    urban wastewater treatment plants or

    agricultural pressure. The environmental impact is more critical due to active
    substances in drinking water or premix

    medicated feeds in the veterinary site.

    Furthermore, the detection of substances of exclusive human use (abacavir, tazobactam
    and cilastatin) prove the weak'
- source_sentence: What group of pharmaceuticals was sulfamethazine matched to when
    its quantity was missing?
  sentences:
  - 'ciprofloxacin

    43%

    (3/7), enrofloxacin, norfloxacin, trimethoprim, lincomycin (29% (2/7), abacavir
    and tetracycline

    14% (1/7). The enzyme inhibitors, namely clavulanic acid and cilastatin, were
    detected once in an urban region located

    well. This catchment point showed the most significant

    number of pharmaceuticals. West/Tejo and Centre were the regions with the most
    considerable number of substances in

    groundwater, accounting for 43%. All groundwater

    samples were contaminated by at least one antibiotic. Supplemental Tables S2 and
    S4 contain a detailed description of

    the'
  - 'clarithromycin) were the only ones that demonstrated the potential to concentrate
    in living organisms (log Kow ≥ 3) [14].

    All the remaining antibiotics showed a relatively low log Kow and were expected
    to be present mainly in surface water.

    However, the soil mobility/adsorption detected The detected pharmaceuticals showed
    high to moderate water solubility

    and are small ionisable molecules (MW ≤ 900 g/mol). Regarding the octanol/water
    partitioning coefficient (log Kow) data,'
  - 'missing quantity for sulfamethazine, the sulfonamides group has been matched.

    Consumption (Kg) of the detected pharmaceuticals in Portugal (2017).

    1 Amount from ESVAC Report-2017; 2 Match the sulfonamides amount; NA-not available.

    Amount of detected pharmaceuticals consumption per Portuguese region. Amount of
    detected pharmaceuticals

    consumption per Portuguese region.'
- source_sentence: What directive sets environmental quality standards for substances
    in surface waters?
  sentences:
  - 'As much as the specificities of each member state should be considered this issue
    has become one of the European

    community''s main concerns [8].

    The strategies against water pollution are provided in the Water Framework Directive
    [9] and the Directive on

    Environmental Quality Standards that set environmental quality standards (EQS)
    for the substances in surface waters

    and confirm their designation as priority or priority hazardous substances [10].
    Evidence of potential impacts and'
  - 'seems to undertake a similar fate in the environment.

    Nevertheless, due to stronger adsorption, with higher emergence in sediment, its
    occurrence in the surface water is lower

    [71]. The use of tetracyclines, mainly as medicated premix and oral solution for
    food-producing animals [72], and the very

    low bioavailability (e.g. in pig feed) [43] contribute to increasing its release
    into the environment. Regarding macrolides,

    erythromycin and clarithromycin exhibit a remarkable frequency of detection in
    surface water samples. The most'
  - 'low flows; otherwise, POCIS might be damage. In ground-waters was used one POCIS
    unit/well. Due to the high sorption

    capacity, POCIS was deployed approximately for 30 days, allowing the polar organic
    compounds adsorbed to be in the

    equilibrium stage with the active substances in an aqueous medium. In the laboratory,
    POCIS disks were frozen until

    extraction.

    4.2.2. Qualitative Analysis Method Used

    for the Characterisation of Antibiotics in

    Surface-Groundwater'
- source_sentence: What is the molecular weight range of the detected pharmaceuticals?
  sentences:
  - '2.3. Physicochemical Properties and Key Pharmacokinetic Features of Detected
    Pharmaceuticals 2.3. Physicochemical

    Properties and Key Pharmacokinetic Features of Detected Pharmaceuticals

    The detected pharmaceuticals showed high to moderate water solubility and are
    small ionisable molecules (MW ≤ 900

    g/mol). Regarding the octanol/water partitioning coefficient (log Kow) data, macrolide
    antibiotics (azithromycin and

    clarithromycin) were the only ones that demonstrated the potential to concentrate
    in living organisms (log Kow ≥ 3) [14].'
  - 'As much as the specificities of each member state should be considered this issue
    has become one of the European

    community''s main concerns [8].

    The strategies against water pollution are provided in the Water Framework Directive
    [9] and the Directive on

    Environmental Quality Standards that set environmental quality standards (EQS)
    for the substances in surface waters

    and confirm their designation as priority or priority hazardous substances [10].
    Evidence of potential impacts and'
  - 'passive samplers in groundwater considered the well technical features; the depth
    and groundwater level were previously

    determined since they should be detected at the superficial levels. The passive
    sampler was placed using a water level

    meter, 2 m below the groundwater level. The sampler always remained immersed in
    water, avoiding extractions and the

    regional lowering of the water table [104]. For the sampling stations, sites of
    different environmental pressures were

    considered, specifically urban, agricultural area/animal production, and aquaculture.
    The information regarding the'
- source_sentence: What was the most frequently identified pharmaceutical in the groundwater
    samples?
  sentences:
  - 'Pharmacokinetic characteristics may represent key features in understanding antibiotics
    occurrence [62]. Most antibiotics

    are not completely metabolised in humans and animals; thus, a high percentage
    of the active substance (40-90%) is

    excreted in urine/faeces in the unchanged form. These molecules are discharged
    into water and soil through wastewater,

    animal manure, and sewage sludge, frequently used as fertilisers to agricultural
    lands. Also, it is expected that the

    hospital effluent will contribute partly to the pharmaceutical load in the wastewater
    treatment plant influence [63].'
  - 'many domestic and livestock animals. Several formulations of powder for administration
    in drinking water and medicated

    premix are available for poultry and pigs. The excretion of amoxicillin is predominantly
    renal; more than 80% of the parent

    drug is recovered unchanged in the urine. While bioavailability of 75 to 80% is
    reported in humans, a low value (~30%)

    was observed in pigs, calves, foals, and pigeons [26,52]. Maybe this last group
    of animals contribute more sharply to the'
  - 'from one to five compounds. The most frequently identified pharmaceuticals, in
    decreasing order, were ciprofloxacin 43%

    (3/7), enrofloxacin, norfloxacin, trimethoprim, lincomycin (29% (2/7), abacavir
    and tetracycline 14% (1/7). The enzyme

    inhibitors, namely clavulanic acid and cilastatin, were detected once in an urban
    region located well. This catchment point

    showed the most significant number of pharmaceuticals. West/Tejo and Centre were
    the regions with the most

    considerable number of substances in groundwater, accounting for 43%. All groundwater
    samples were contaminated by'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on microsoft/mpnet-base
  results:
  - task:
      type: triplet
      name: Triplet
    dataset:
      name: initial test
      type: initial_test
    metrics:
    - type: cosine_accuracy
      value: 0.7799999713897705
      name: Cosine Accuracy
  - task:
      type: triplet
      name: Triplet
    dataset:
      name: final test
      type: final_test
    metrics:
    - type: cosine_accuracy
      value: 0.8199999928474426
      name: Cosine Accuracy
    - type: cosine_accuracy
      value: 0.8999999761581421
      name: Cosine Accuracy
    - type: cosine_accuracy
      value: 0.8999999761581421
      name: Cosine Accuracy
    - type: cosine_accuracy
      value: 0.9200000166893005
      name: Cosine Accuracy
---

# SentenceTransformer based on microsoft/mpnet-base

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) <!-- at revision 6996ce1e91bd2a9c7d7f61daec37463394f73f09 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'MPNetModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sahithkumar7/mpnet-base-matryoshka-iter02")
# Run inference
sentences = [
    'What was the most frequently identified pharmaceutical in the groundwater samples?',
    'from one to five compounds. The most frequently identified pharmaceuticals, in decreasing order, were ciprofloxacin 43%\n(3/7), enrofloxacin, norfloxacin, trimethoprim, lincomycin (29% (2/7), abacavir and tetracycline 14% (1/7). The enzyme\ninhibitors, namely clavulanic acid and cilastatin, were detected once in an urban region located well. This catchment point\nshowed the most significant number of pharmaceuticals. West/Tejo and Centre were the regions with the most\nconsiderable number of substances in groundwater, accounting for 43%. All groundwater samples were contaminated by',
    'Pharmacokinetic characteristics may represent key features in understanding antibiotics occurrence [62]. Most antibiotics\nare not completely metabolised in humans and animals; thus, a high percentage of the active substance (40-90%) is\nexcreted in urine/faeces in the unchanged form. These molecules are discharged into water and soil through wastewater,\nanimal manure, and sewage sludge, frequently used as fertilisers to agricultural lands. Also, it is expected that the\nhospital effluent will contribute partly to the pharmaceutical load in the wastewater treatment plant influence [63].',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.8234, 0.5626],
#         [0.8234, 1.0000, 0.6069],
#         [0.5626, 0.6069, 1.0000]])
```

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Triplet

* Datasets: `initial_test`, `final_test`, `final_test`, `final_test` and `final_test`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)

| Metric              | initial_test | final_test |
|:--------------------|:-------------|:-----------|
| **cosine_accuracy** | **0.78**     | **0.92**   |
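Cosine accuracy here is the fraction of (anchor, positive, negative) triplets where the anchor is closer to the positive than to the negative under cosine similarity. A minimal NumPy sketch of that metric on precomputed embeddings (the 2-D toy vectors are illustrative, not model outputs):

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Row-wise cosine similarity between two (n, dim) arrays."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return (a * b).sum(axis=1)

def triplet_cosine_accuracy(anchors, positives, negatives) -> float:
    """Fraction of triplets with sim(anchor, positive) > sim(anchor, negative)."""
    return float(np.mean(cosine_sim(anchors, positives) > cosine_sim(anchors, negatives)))

# Toy embeddings: the first triplet is ranked correctly, the second is not
anchors   = np.array([[1.0, 0.0], [0.0, 1.0]])
positives = np.array([[0.9, 0.1], [1.0, 0.0]])
negatives = np.array([[0.0, 1.0], [0.1, 0.9]])
print(triplet_cosine_accuracy(anchors, positives, negatives))  # 0.5
```

The table's 0.78/0.92 values were produced by `TripletEvaluator` over the held-out triplets in the same way.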

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### json

* Dataset: json
* Size: 80 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 80 samples:
  |         | anchor | positive | negative |
  |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
  | type    | string | string | string |
  | details | <ul><li>min: 9 tokens</li><li>mean: 16.14 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 125.65 tokens</li><li>max: 218 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 122.97 tokens</li><li>max: 211 tokens</li></ul> |
* Samples:
  | anchor | positive | negative |
  |:-----------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
  | <code>Which two macrolide antibiotics are frequently detected in surface water samples?</code> | <code>seems to undertake a similar fate in the environment.<br>Nevertheless, due to stronger adsorption, with higher emergence in sediment, its occurrence in the surface water is lower<br>[71]. The use of tetracyclines, mainly as medicated premix and oral solution for food-producing animals [72], and the very<br>low bioavailability (e.g. in pig feed) [43] contribute to increasing its release into the environment. Regarding macrolides,<br>erythromycin and clarithromycin exhibit a remarkable frequency of detection in surface water samples. The most</code> | <code>Nonetheless, besides the sorption capacity, these antibiotics have high solubility in water. Crucial routes for these<br>substances into the environment are manure from animal production and sewage sludge from wastewater treatment<br>plant (WWTP) used as fertilisers. Therefore, these substances have been evidenced in topsoil samples [68]. These<br>quinolones and other antibiotics, for instance, norfloxacin and tetracycline, have been identified in groundwater samples<br>despite being influenced by sorption processes. They were not readily degraded; instead, the input into groundwater</code> |
  | <code>What antimicrobial drugs were identified in the survey besides macrolides?</code> | <code>is one of the most frequently pharmaceutical in representative rivers [74,75]. The three macrolides identified in our<br>detection survey are included since 2018 in the first 'watch list' [76].<br>Another group of antimicrobial drugs identified in our survey were sulfamethoxazole/trimethoprim and sulfamethazine.<br>Sulfamethoxazole/trimethoprim are often used combined since the effectiveness of sulfonamides is enhanced. In the<br>present study, the detection of both substances was comparable; however, trimethoprim was detected in groundwater.</code> | <code>upstream samples obtained in rural locations was demonstrated and could be attributed to a low efficiency in the urban<br>wastewater treatment plants or due to agricultural pressure.<br>The higher frequency of detection for most substances was observed in the Ave river and Ria Formosa, confirming that<br>several effluents impact these water bodies from urban wastewater treatment plants and livestock production.<br>Pharmacokinetic characteristics may represent key features in understanding antibiotics occurrence [62]. Most antibiotics</code> |
  | <code>How long was the observational period of the antibiotic survey in Portugal?</code> | <code>of antibiotics and their metabolites in surface- groundwater. It seeks to reflect the current demographic, spatial, drug<br>consumption, and drug profile on an observational period of 3 years in Portugal. The greatest challenge of this survey<br>data will be to promote the ecopharmacovigilance framework development shortly to implement measures for avoiding<br>misuse/overuse of antibiotics and slow down emission and antibiotic resistance.<br>2. Results<br>2.1. Frequency of Detections:<br>Antibiotics/Enzyme-Inhibitors and Abacavir<br>in Surface-Groundwater</code> | <code>despite being influenced by sorption processes. They were not readily degraded; instead, the input into groundwater<br>could be due to livestock farming pressure, namely by spreading manure in the soil or the possible sewage sludge<br>application in the area. High clay and low sand content in soils can decrease the mobility of pharmaceuticals, which is<br>attributed to clay intense exchange capacity. Thus, soil properties (e.g. particle composition) are a significant, influential</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```
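Because the loss was applied at dims 768/512/256/128/64, the embeddings are Matryoshka-style: a prefix of each vector can be kept and re-normalized for cheaper storage and search, at a modest accuracy cost. A minimal NumPy sketch of that truncation step on precomputed embeddings (the random vectors stand in for `model.encode(...)` output):

```python
import numpy as np

def truncate_embeddings(embeddings: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components of each embedding and re-normalize to unit length."""
    truncated = embeddings[:, :dim]
    return truncated / np.linalg.norm(truncated, axis=1, keepdims=True)

# Stand-in for model output: (3, 768) float vectors
rng = np.random.default_rng(0)
full = rng.normal(size=(3, 768))

small = truncate_embeddings(full, 256)
print(small.shape)                    # (3, 256)
print(np.linalg.norm(small, axis=1))  # all ~1.0 after re-normalization
```

Recent sentence-transformers versions can also do this for you via the `truncate_dim` argument when loading the model.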
436
+
437
+ ### Evaluation Dataset
438
+
439
+ #### json
440
+
441
+ * Dataset: json
442
+ * Size: 20 evaluation samples
443
+ * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
444
+ * Approximate statistics based on the first 20 samples:
445
+ | | anchor | positive | negative |
446
+ |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
447
+ | type | string | string | string |
448
+ | details | <ul><li>min: 11 tokens</li><li>mean: 16.4 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 76 tokens</li><li>mean: 113.65 tokens</li><li>max: 148 tokens</li></ul> | <ul><li>min: 89 tokens</li><li>mean: 118.8 tokens</li><li>max: 162 tokens</li></ul> |
449
+ * Samples:
450
+ | anchor | positive | negative |
451
+ |:-----------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
452
+ | <code>What percentage of unchanged excretion did the most significant number of detected substances show?</code> | <code>coefficients were not available for lincomycin, clavulanic acid and cilastatin.<br>Physicochemical properties of detected pharmaceuticals.<br>1 Data retrieved from [16]; 2 Data retrieved from [17]; 3 Data retrieved from [18]; 4 Data retrieved from [19]; 5<br>Data retrieved from [20];<br>6 Data retrieved from [21]; 7 Data retrieved from [22]; 8 Data retrieved from [23]; 9 Data retrieved from [24]; 10<br>Data retrieved from [25];<br>NA-not available.<br>The most significant number of detected substances showed a percentage of unchanged excretion higher than 40%.</code> | <code>1. Introduction<br>Antibiotics are a critical component of human and veterinary modern medicine, developed to produce desirable or<br>beneficial effects on infections induced by pathogens. Like most pharmaceuticals, antibiotics tend to be small organic<br>polar compounds, generally ionisable, ordinarily subject to a metabolism or biotransformation process by the organism to<br>be eliminated more efficiently [1,2]. The excretion of these compounds and their metabolites occurs mainly through urine,</code> |
453
+ | <code>How many kilograms of abacavir were detected in Portugal in 2017?</code> | <code>Regarding the different regions, it has been concluded that North and West/Tejo were the regions with the higher<br>consuming values. Both regions presented a significant value (33%) for the abacavir. For the detected antiviral abacavir,<br>an amount of 1458 kg has been observed.<br>Regarding antibiotics used in veterinary medicine, the regional amount was not available. Likewise, due to the reported<br>missing quantity for sulfamethazine, the sulfonamides group has been matched.<br>Consumption (Kg) of the detected pharmaceuticals in Portugal (2017).</code> | <code>43%<br>(3/7), enrofloxacin, norfloxacin, trimethoprim, lincomycin (29% (2/7), abacavir and tetracycline<br>14% (1/7). The enzyme inhibitors, namely clavulanic acid and cilastatin, were detected once in an urban region located<br>well. This catchment point showed the most significant<br>number of pharmaceuticals. West/Tejo and Centre were the regions with the most considerable number of substances in<br>groundwater, accounting for 43%. All groundwater<br>samples were contaminated by at least one antibiotic. Supplemental Tables S2 and S4 contain a detailed description of<br>the</code> |
454
+ | <code>What must marketing authorisation procedures for medicines include since 2006?</code> | <code>substances in passive samplers [7]. Since 2006, marketing authorisation procedures for both human and veterinary<br>medicines must include an environmental risk assessment that comprises a prospective exposure assessment,<br>underestimating the possible impact and the occurrence of antibiotics after years of consumption. Ultimately, the potential<br>risk may not be correctly anticipated. It becomes urgent to generate new data, mainly to refine exposure assessments.<br>As much as the specificities of each member state should be considered this issue has become one of the European</code> | <code>clarithromycin/erythromycin, tetracycline, sulfamethoxazole, and abacavir. In groundwater, enrofloxacin/ciprofloxacin,<br>norfloxacin, trimethoprim, lincomycin, abacavir and tetracycline were recovered. Metabolites were not detected in water<br>bodies. Noticeable was the detection of enzyme inhibitors, tazobactam and cilastatin, which are both for exclusive<br>hospital use. The North region and Algarve (South) were the areas with the most significant frequency of substances in<br>surface water. The relatively higher detection of substances downstream of the effluent discharge points compared with a</code> |
455
+ * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
456
+ ```json
457
+ {
458
+ "loss": "MultipleNegativesRankingLoss",
459
+ "matryoshka_dims": [
460
+ 768,
461
+ 512,
462
+ 256,
463
+ 128,
464
+ 64
465
+ ],
466
+ "matryoshka_weights": [
467
+ 1,
468
+ 1,
469
+ 1,
470
+ 1,
471
+ 1
472
+ ],
473
+ "n_dims_per_step": -1
474
+ }
475
+ ```
476
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+ 
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 16
+ - `per_device_eval_batch_size`: 16
+ - `num_train_epochs`: 1
+ - `warmup_ratio`: 0.1
+ - `fp16`: True
+ - `batch_sampler`: no_duplicates
+ 
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+ 
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 16
+ - `per_device_eval_batch_size`: 16
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 1
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: True
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`: 
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: no_duplicates
+ - `multi_dataset_batch_sampler`: proportional
+ - `router_mapping`: {}
+ - `learning_rate_mapping`: {}
+ 
+ </details>
+ 
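With `warmup_ratio: 0.1` and `lr_scheduler_type: linear`, the learning rate ramps up over the first 10% of optimizer steps and then decays linearly to zero. A small self-contained sketch of that per-step multiplier (it mirrors, but is not copied from, transformers' linear schedule with warmup; `total = 1000` is purely illustrative, the actual run had far fewer steps):

```python
# Illustrative linear-warmup / linear-decay learning-rate multiplier.
def linear_schedule(step: int, total_steps: int, warmup_ratio: float = 0.1) -> float:
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Ramp from 0 to 1 over the warmup phase.
        return step / max(1, warmup_steps)
    # Decay from 1 to 0 over the remaining steps.
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

total = 1000  # illustrative step count, not this run's
print(linear_schedule(50, total))    # mid-warmup
print(linear_schedule(100, total))   # warmup finished, full learning rate
print(linear_schedule(1000, total))  # end of training
```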
+ ### Training Logs
+ | Epoch | Step | Training Loss | initial_test_cosine_accuracy | final_test_cosine_accuracy |
+ |:-----:|:----:|:-------------:|:----------------------------:|:--------------------------:|
+ | -1 | -1 | - | 0.7800 | - |
+ | 0.2 | 1 | 15.6011 | - | - |
+ | 0.4 | 2 | 12.9289 | - | - |
+ | 0.6 | 3 | 15.1921 | - | - |
+ | 0.8 | 4 | 14.4243 | - | - |
+ | 1.0 | 5 | 16.8067 | - | - |
+ | -1 | -1 | - | - | 0.8200 |
+ | 0.2 | 1 | 14.317 | - | - |
+ | 0.4 | 2 | 12.326 | - | - |
+ | 0.6 | 3 | 14.0337 | - | - |
+ | 0.8 | 4 | 11.1261 | - | - |
+ | 1.0 | 5 | 8.9671 | - | - |
+ | 1.2 | 6 | 10.716 | - | - |
+ | 1.4 | 7 | 9.496 | - | - |
+ | 1.6 | 8 | 9.0035 | - | - |
+ | 1.8 | 9 | 7.3839 | - | - |
+ | 2.0 | 10 | 11.0917 | - | - |
+ | -1 | -1 | - | - | 0.9000 |
+ | 0.2 | 1 | 11.3791 | - | - |
+ | 0.4 | 2 | 5.6417 | - | - |
+ | 0.6 | 3 | 5.7289 | - | - |
+ | 0.8 | 4 | 3.5917 | - | - |
+ | 1.0 | 5 | 2.3028 | - | - |
+ | -1 | -1 | - | - | 0.9200 |
+ 
+ ### Framework Versions
+ - Python: 3.11.13
+ - Sentence Transformers: 5.0.0
+ - Transformers: 4.52.4
+ - PyTorch: 2.6.0+cu124
+ - Accelerate: 1.8.1
+ - Datasets: 3.6.0
+ - Tokenizers: 0.21.2
+ 
+ ## Citation
+ 
+ ### BibTeX
+ 
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+ 
+ #### MatryoshkaLoss
+ ```bibtex
+ @misc{kusupati2024matryoshka,
+     title={Matryoshka Representation Learning},
+     author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
+     year={2024},
+     eprint={2205.13147},
+     archivePrefix={arXiv},
+     primaryClass={cs.LG}
+ }
+ ```
+ 
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+ 
+ <!--
+ ## Glossary
+ 
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+ 
+ <!--
+ ## Model Card Authors
+ 
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+ 
+ <!--
+ ## Model Card Contact
+ 
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "architectures": [
+     "MPNetModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "mpnet",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "relative_attention_num_buckets": 32,
+   "torch_dtype": "float32",
+   "transformers_version": "4.52.4",
+   "vocab_size": 30527
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "model_type": "SentenceTransformer",
+   "__version__": {
+     "sentence_transformers": "5.0.0",
+     "transformers": "4.52.4",
+     "pytorch": "2.6.0+cu124"
+   },
+   "prompts": {
+     "query": "",
+     "document": ""
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
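`"similarity_fn_name": "cosine"` means two embeddings are compared by cosine similarity, i.e. the dot product of L2-normalized vectors. A minimal NumPy illustration (not the library code; the toy vectors are chosen only to make the values easy to check):

```python
# Pairwise cosine similarity between rows of two matrices.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    a_n = a / np.linalg.norm(a, axis=1, keepdims=True)  # normalize each row
    b_n = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a_n @ b_n.T                                  # (len(a), len(b)) matrix

a = np.array([[1.0, 0.0], [1.0, 1.0]])
b = np.array([[1.0, 0.0], [0.0, 1.0]])
print(cosine_similarity(a, b))  # diagonal: 1.0 and ~0.7071
```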
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d696bb0710c9eb042719b39dd1c9021b2540b83db02ee42b3a7306abddf49ff4
+ size 437967672
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   }
+ ]
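This pipeline runs the MPNet transformer first, then the `1_Pooling` module; per `1_Pooling/config.json` (`pooling_mode_mean_tokens: true`), that module mean-pools the token embeddings using the attention mask. A hedged NumPy sketch of what that pooling step computes (names and the all-ones toy input are illustrative, not the library implementation):

```python
# Masked mean pooling: average token vectors, counting only non-padding tokens.
import numpy as np

def mean_pooling(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """token_embeddings: (batch, seq_len, hidden); attention_mask: (batch, seq_len) of 0/1."""
    mask = attention_mask[..., None].astype(float)   # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=1)   # sum over valid tokens
    counts = np.clip(mask.sum(axis=1), 1e-9, None)   # avoid division by zero
    return summed / counts                           # (batch, hidden)

tokens = np.ones((2, 4, 768))                        # toy "transformer outputs"
mask = np.array([[1, 1, 0, 0], [1, 1, 1, 1]])        # first sequence has 2 real tokens
pooled = mean_pooling(tokens, mask)
print(pooled.shape)  # (2, 768)
```

Masking matters: without it, padding vectors would be averaged in and shorter sequences would get diluted embeddings.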
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "cls_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,66 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<pad>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "104": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "30526": {
+       "content": "<mask>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "<s>",
+   "do_lower_case": true,
+   "eos_token": "</s>",
+   "extra_special_tokens": {},
+   "mask_token": "<mask>",
+   "model_max_length": 512,
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "MPNetTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff