Add/update the quantized ONNX model files and README.md for Transformers.js v3
## Applied Quantizations
### ✅ Based on `model.onnx` *with* slimming
↳ ✅ `q4f16`: `model_q4f16.onnx` (added)
- README.md +3 -3
- onnx/model_q4f16.onnx +3 -0
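For reference, a minimal sketch of how the new `q4f16` weights could be selected from Transformers.js v3; the `dtype` pipeline option and its `'q4f16'` value are assumed here to resolve to the `onnx/model_q4f16.onnx` file added in this commit:

```js
import { pipeline } from '@huggingface/transformers';

// Assumed mapping: dtype 'q4f16' -> onnx/model_q4f16.onnx added in this commit.
const transcriber = await pipeline(
  'automatic-speech-recognition',
  'Xenova/wav2vec2-base-960h',
  { dtype: 'q4f16' },
);
```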
README.md
CHANGED
````diff
@@ -7,15 +7,15 @@ https://huggingface.co/facebook/wav2vec2-base-960h with ONNX weights to be compatible with Transformers.js.
 
 ## Usage (Transformers.js)
 
-If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
+If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
 ```bash
-npm i @xenova/transformers
+npm i @huggingface/transformers
 ```
 
 You can then use the model for speech recognition with:
 
 ```js
-import { pipeline } from '@xenova/transformers';
+import { pipeline } from '@huggingface/transformers';
 
 // Create an Automatic Speech Recognition pipeline
 const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/wav2vec2-base-960h');
````
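The hunk above ends at the pipeline creation. For context, a hedged sketch of how transcription is then invoked; the sample audio URL is illustrative and not part of this commit:

```js
// Transcribe an audio file; any URL or path supported by Transformers.js should work.
// The sample WAV below is an illustrative placeholder, not part of this commit.
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav';
const output = await transcriber(url);
console.log(output.text);
```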
onnx/model_q4f16.onnx
ADDED
````diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d5a7a3e9f32bdb9cca98404fa006d6e4bafae233c3f3171aacbc7a34e90d5f28
+size 66469883
````
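The added file is a Git LFS pointer; its `oid` and `size` can be checked against the resolved download. A minimal Node sketch, assuming the usual `resolve/main` Hub URL for this repository's `onnx/` folder:

```js
import { createHash } from 'node:crypto';

// Assumed download URL (standard Hub "resolve" pattern for this repo's onnx/ folder).
const url = 'https://huggingface.co/Xenova/wav2vec2-base-960h/resolve/main/onnx/model_q4f16.onnx';

const buf = Buffer.from(await (await fetch(url)).arrayBuffer());
const sha256 = createHash('sha256').update(buf).digest('hex');

// Compare against the LFS pointer fields above.
console.log('size matches:', buf.length === 66469883);
console.log('sha256 matches:',
  sha256 === 'd5a7a3e9f32bdb9cca98404fa006d6e4bafae233c3f3171aacbc7a34e90d5f28');
```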