Add/update the quantized ONNX model files and README.md for Transformers.js v3 (#1)
Opened by whitphx (HF Staff)
Applied Quantizations

✅ Based on `model.onnx` with slimming

↳ ❌ int8: `model_int8.onnx` (added, but the JS-based E2E test failed)
```
/home/ubuntu/src/tjsmigration/node_modules/.pnpm/[email protected]/node_modules/onnxruntime-node/dist/backend.js:25
__classPrivateFieldGet(this, _OnnxruntimeSessionHandler_inferenceSession, "f").loadModel(pathOrBuffer, options);
                                                                                       ^

Error: Could not find an implementation for ConvInteger(10) node with name '/model/backbone/conv_encoder/model/conv1/Conv_quant'
    at new OnnxruntimeSessionHandler (/home/ubuntu/src/tjsmigration/node_modules/.pnpm/[email protected]/node_modules/onnxruntime-node/dist/backend.js:25:92)
    at Immediate.<anonymous> (/home/ubuntu/src/tjsmigration/node_modules/.pnpm/[email protected]/node_modules/onnxruntime-node/dist/backend.js:67:29)
    at process.processImmediate (node:internal/timers:485:21)

Node.js v22.16.0
```
↳ ✅ uint8: `model_uint8.onnx` (added)

↳ ✅ q4: `model_q4.onnx` (added)

↳ ✅ q4f16: `model_q4f16.onnx` (added)

↳ ✅ bnb4: `model_bnb4.onnx` (added)
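For context on how the added files are consumed: Transformers.js v3 selects a quantized variant by a `dtype` option, and the filenames above follow the `model_<dtype>.onnx` convention. A minimal sketch of that mapping (the helper name `quantizedFileName` is hypothetical, written only to mirror the filenames added in this PR):

```javascript
// Hypothetical helper mirroring the filename convention of the files in this
// PR: each quantization dtype maps to model_<dtype>.onnx, while the
// unquantized fp32 variant stays plain model.onnx.
function quantizedFileName(dtype) {
  return dtype === "fp32" ? "model.onnx" : `model_${dtype}.onnx`;
}

// The variants whose E2E load succeeded in the report above:
for (const dtype of ["uint8", "q4", "q4f16", "bnb4"]) {
  console.log(quantizedFileName(dtype)); // model_uint8.onnx, model_q4.onnx, ...
}
```

In Transformers.js v3 the variant is typically chosen at pipeline creation time, e.g. `pipeline(task, model, { dtype: "q4" })`. The `model_int8.onnx` file was also added, but per the log above it fails to load under onnxruntime-node because no `ConvInteger` kernel implementation is available there.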
Xenova changed pull request status to merged.