This repository contains https://huggingface.co/WhereIsAI/UAE-Large-V1 with ONNX weights, making it compatible with Transformers.js.

Usage (Transformers.js)

If you haven't already, you can install the Transformers.js JavaScript library from NPM using:

npm i @xenova/transformers

You can then use the model to compute embeddings like this:

import { pipeline } from '@xenova/transformers';

// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/UAE-Large-V1', {
    quantized: true, // Set this to false to use the full (unquantized) model
});

// Compute sentence embeddings
const sentences = ['That is a happy person', 'That is a very happy person'];
const output = await extractor(sentences, { pooling: 'cls' });
console.log(output);
// Tensor {
//   dims: [ 2, 1024 ],
//   type: 'float32',
//   data: Float32Array(2048) [ -0.1308155655860901, 0.44334232807159424, ... ],
//   size: 2048
// }
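
If you want unit-length embeddings (for example, so that cosine similarity reduces to a plain dot product), the feature-extraction pipeline also accepts a normalize option; a minimal sketch:

// Compute L2-normalized sentence embeddings
const normalized = await extractor(sentences, { pooling: 'cls', normalize: true });
console.log(normalized.dims);
// [ 2, 1024 ]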

You can then compute the cosine similarity between the two embeddings:

import { cos_sim } from '@xenova/transformers';
console.log(cos_sim(output[0].data, output[1].data));
// 0.9586893906734091

You can convert the output Tensor to a nested JavaScript array using .tolist():

console.log(output.tolist());
// [
//   [ -0.1308155655860901, 0.44334232807159424, -0.12212765961885452, ... ],
//   [ 0.03931744396686554,   0.30553528666496277,  -0.19462820887565613, ... ]
// ]
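
Putting these pieces together, here is a minimal semantic-search sketch (the query and document strings are just illustrative) that embeds a query and a few documents, then ranks the documents by cosine similarity to the query:

import { pipeline, cos_sim } from '@xenova/transformers';

// Illustrative query and documents
const query = 'That person seems cheerful';
const docs = ['That is a happy person', 'That is a very happy person', 'It is raining today'];

// Embed the query and documents with CLS pooling, as above
const extractor = await pipeline('feature-extraction', 'Xenova/UAE-Large-V1');
const queryOutput = await extractor(query, { pooling: 'cls' });
const docOutput = await extractor(docs, { pooling: 'cls' });

// Rank the documents by cosine similarity to the query
const ranked = docs
    .map((text, i) => ({ text, score: cos_sim(queryOutput[0].data, docOutput[i].data) }))
    .sort((a, b) => b.score - a.score);
console.log(ranked);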

Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using 🤗 Optimum and structuring your repo like this one (with ONNX weights located in a subfolder named onnx).
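
As a rough sketch of that workflow (the output directory name here is just an example), you could export the original model with the Optimum CLI and then place the resulting *.onnx files in an onnx subfolder of your repo:

pip install optimum[exporters]
optimum-cli export onnx --model WhereIsAI/UAE-Large-V1 ./uae-large-v1-onnx/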
