ek-id committed
Commit 3c5a49e · 1 Parent(s): 1508b83

Add Transformers.js and WebNN example to README.md

Files changed (1): README.md (+31 −6)
README.md CHANGED
@@ -10,6 +10,7 @@ license: apache-2.0
 pipeline_tag: text-classification
 tags:
 - Intel
+- transformers.js
 model-index:
 - name: polite-guard
   results:
@@ -20,10 +21,10 @@ model-index:
       type: polite-guard
     metrics:
     - type: accuracy
-      value: 92.4
+      value: 92
      name: Accuracy
    - type: f1
-      value: 92.4
+      value: 92
      name: F1 Score
 ---
 # Polite Guard
@@ -82,8 +83,8 @@ The code for the synthetic data generation and fine-tuning can be found [here](h
 
 Here are the key performance metrics of the model on the test dataset containing both synthetic and manually annotated data:
 
-- **Accuracy**: 92.4% on the Polite Guard test dataset.
-- **F1-Score**: 92.4% on the Polite Guard test dataset.
+- **Accuracy**: 92% on the Polite Guard test dataset.
+- **F1-Score**: 92% on the Polite Guard test dataset.
 
 ## How to Use
 
@@ -92,9 +93,33 @@ You can use this model directly with a pipeline for categorizing text into class
 ```python
 from transformers import pipeline
 
-classifier = pipeline("text-classification", model="Intel/polite-guard")
+classifier = pipeline("text-classification", "Intel/polite-guard")
 text = "Your input text"
-print(classifier(text))
+output = classifier(text)
+print(output)
+```
+
+The next example demonstrates how to run this model in the browser using Hugging Face's `transformers.js` library with `webnn-gpu` for hardware acceleration.
+
+```html
+<!DOCTYPE html>
+<html>
+<body>
+<h1>WebNN Transformers.js Intel/polite-guard</h1>
+<script type="module">
+import { pipeline } from "https://cdn.jsdelivr.net/npm/@huggingface/transformers";
+
+const classifier = await pipeline("text-classification", "Intel/polite-guard", {
+  dtype: "fp32",
+  device: "webnn-gpu", // You can also try: "webgpu", "webnn", "webnn-npu", "webnn-cpu", "wasm"
+});
+
+const text = "Your input text";
+const output = await classifier(text);
+console.log(`${text}: ${output[0].label}`);
+</script>
+</body>
+</html>
 ```
 ## Articles
 
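For context on the snippets changed above: a Transformers `text-classification` pipeline returns a list of dicts with `label` and `score` keys, which is why the browser example can read `output[0].label`. A minimal sketch of handling that shape, using placeholder values (not real Polite Guard output, which would require downloading the model):

```python
# Placeholder result mimicking the shape returned by
# pipeline("text-classification", ...): a list of {"label", "score"} dicts.
# "polite" / 0.97 are illustrative values only.
output = [{"label": "polite", "score": 0.97}]

# Read the top prediction, as the browser example does with output[0].label.
top = output[0]
print(f"{top['label']} ({top['score']:.2f})")  # → polite (0.97)
```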