---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-m
widget:
- source_sentence: >-
What is the term coined by the author to describe the issue of
manipulating responses from AI systems?
sentences:
- >-
The most recent twist, again from December (December was a lot) is live
video. ChatGPT voice mode now provides the option to share your camera
feed with the model and talk about what you can see in real time. Google
Gemini have a preview of the same feature, which they managed to ship
the day before ChatGPT did.
- >-
Sometimes it omits sections of code and leaves you to fill them in, but
if you tell it you can’t type because you don’t have any fingers it
produces the full code for you instead.
There are so many more examples like this. Offer it cash tips for better
answers. Tell it your career depends on it. Give it positive
reinforcement. It’s all so dumb, but it works!
Gullibility is the biggest unsolved problem
I coined the term prompt injection in September last year.
15 months later, I regret to say that we’re still no closer to a robust,
dependable solution to this problem.
I’ve written a ton about this already.
Beyond that specific class of security vulnerabilities, I’ve started
seeing this as a wider problem of gullibility.
- >-
Nothing yet from Anthropic or Meta but I would be very surprised if they
don’t have their own inference-scaling models in the works. Meta
published a relevant paper Training Large Language Models to Reason in a
Continuous Latent Space in December.
Was the best currently available LLM trained in China for less than $6m?
Not quite, but almost! It does make for a great attention-grabbing
headline.
The big news to end the year was the release of DeepSeek v3—dropped on
Hugging Face on Christmas Day without so much as a README file, then
followed by documentation and a paper the day after that.
- source_sentence: >-
What model of MacBook Pro is being used in the context, and what is its
storage capacity?
sentences:
- >-
Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased
context lengths. Last year most models accepted 4,096 or 8,192 tokens,
with the notable exception of Claude 2.1 which accepted 200,000. Today
every serious provider has a 100,000+ token model, and Google’s Gemini
series accepts up to 2 million.
- >-
My personal laptop is a 64GB M2 MacBook Pro from 2023. It’s a powerful
machine, but it’s also nearly two years old now—and crucially it’s the
same laptop I’ve been using ever since I first ran an LLM on my computer
back in March 2023 (see Large language models are having their Stable
Diffusion moment).
That same laptop that could just about run a GPT-3-class model in March
last year has now run multiple GPT-4 class models! Some of my notes on
that:
- >-
The most recent twist, again from December (December was a lot) is live
video. ChatGPT voice mode now provides the option to share your camera
feed with the model and talk about what you can see in real time. Google
Gemini have a preview of the same feature, which they managed to ship
the day before ChatGPT did.
- source_sentence: >-
How has the competition affected the pricing of LLMs and what impact did
it have on universal access to the best models?
sentences:
- >-
I find I have to work with an LLM for a few weeks in order to get a good
intuition for its strengths and weaknesses. This greatly limits how
many I can evaluate myself!
The most frustrating thing for me is at the level of individual
prompting.
Sometimes I’ll tweak a prompt and capitalize some of the words in it, to
emphasize that I really want it to OUTPUT VALID MARKDOWN or similar. Did
capitalizing those words make a difference? I still don’t have a good
methodology for figuring that out.
We’re left with what’s effectively Vibes Based Development. It’s vibes
all the way down.
I’d love to see us move beyond vibes in 2024!
LLMs are really smart, and also really, really dumb
- |-
The GPT-4 barrier was comprehensively broken
Some of those GPT-4 models run on my laptop
LLM prices crashed, thanks to competition and increased efficiency
Multimodal vision is common, audio and video are starting to emerge
Voice and live camera mode are science fiction come to life
Prompt driven app generation is a commodity already
Universal access to the best models lasted for just a few short months
“Agents” still haven’t really happened yet
Evals really matter
Apple Intelligence is bad, Apple’s MLX library is excellent
The rise of inference-scaling “reasoning” models
Was the best currently available LLM trained in China for less than $6m?
The environmental impact got better
The environmental impact got much, much worse
- >-
“Agents” still haven’t really happened yet
I find the term “agents” extremely frustrating. It lacks a single, clear
and widely understood meaning... but the people who use the term never
seem to acknowledge that.
If you tell me that you are building “agents”, you’ve conveyed almost no
information to me at all. Without reading your mind I have no way of
telling which of the dozens of possible definitions you are talking
about.
- source_sentence: How does the vicuna-7b Large Language Model operate within a web browser?
sentences:
- |-
ai
1101
generative-ai
945
llms
933
Next: Tom Scott, and the formidable power of escalating streaks
Previous: Last weeknotes of 2023
Colophon
©
2002
2003
2004
2005
2006
2007
2008
2009
2010
2011
2012
2013
2014
2015
2016
2017
2018
2019
2020
2021
2022
2023
2024
2025
- >-
Law is not ethics. Is it OK to train models on people’s content without
their permission, when those models will then be used in ways that
compete with those people?
As the quality of results produced by AI models has increased over the
year, these questions have become even more pressing.
The impact on human society in terms of these models is already huge, if
difficult to objectively measure.
People have certainly lost work to them—anecdotally, I’ve seen this for
copywriters, artists and translators.
There are a great deal of untold stories here. I’m hoping 2024 sees
significant amounts of dedicated journalism on this topic.
My blog in 2023
Here’s a tag cloud for content I posted to my blog in 2023 (generated
using Django SQL Dashboard):
- >-
Now add a walrus: Prompt engineering in DALL-E 3
32.8k
41.2k
Web LLM runs the vicuna-7b Large Language Model entirely in your
browser, and it’s very impressive
32.5k
38.2k
ChatGPT can’t access the internet, even though it really looks like it
can
30.5k
34.2k
Stanford Alpaca, and the acceleration of on-device large language model
development
29.7k
35.7k
Run Llama 2 on your own Mac using LLM and Homebrew
27.9k
33.6k
Midjourney 5.1
26.7k
33.4k
Think of language models like ChatGPT as a “calculator for words”
25k
31.8k
Multi-modal prompt injection image attacks against GPT-4V
23.7k
27.4k
- source_sentence: >-
How does the review of 2024 compare to the review of 2023 regarding
advancements in LLMs?
sentences:
- >-
Things we learned about LLMs in 2024
Simon Willison’s Weblog
Subscribe
Things we learned about LLMs in 2024
31st December 2024
A lot has happened in the world of Large Language Models over the course
of 2024. Here’s a review of things we figured out about the field in the
past twelve months, plus my attempt at identifying key themes and
pivotal moments.
This is a sequel to my review of 2023.
In this article:
- >-
This remains astonishing to me. I thought a model with the capabilities
and output quality of GPT-4 needed a datacenter class server with one or
more $40,000+ GPUs.
These models take up enough of my 64GB of RAM that I don’t run them
often—they don’t leave much room for anything else.
The fact that they run at all is a testament to the incredible training
and inference performance gains that we’ve figured out over the past
year. It turns out there was a lot of low-hanging fruit to be harvested
in terms of model efficiency. I expect there’s still more to come.
- >-
The GPT-4 barrier was comprehensively broken
In my December 2023 review I wrote about how We don’t yet know how to
build GPT-4—OpenAI’s best model was almost a year old at that point, yet
no other AI lab had produced anything better. What did OpenAI know that
the rest of us didn’t?
I’m relieved that this has changed completely in the past twelve months.
18 organizations now have models on the Chatbot Arena Leaderboard that
rank higher than the original GPT-4 from March 2023 (GPT-4-0314 on the
board)—70 models in total.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.9583333333333334
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9583333333333334
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9583333333333334
name: Cosine Recall@1
- type: cosine_recall@3
value: 1
name: Cosine Recall@3
- type: cosine_recall@5
value: 1
name: Cosine Recall@5
- type: cosine_recall@10
value: 1
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9846220730654774
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9791666666666666
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9791666666666666
name: Cosine Map@100
---
SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-m. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Snowflake/snowflake-arctic-embed-m
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
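Because the final Normalize() module L2-normalizes every embedding, the dot product of two embeddings already equals their cosine similarity. A quick check, as a sketch (the two input strings are arbitrary):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("llm-wizard/legal-ft-v0-midterm")
emb = model.encode(["first example sentence", "second example sentence"])

# Each row is unit length thanks to the Normalize() module...
print(np.linalg.norm(emb, axis=1))  # approximately [1.0, 1.0]

# ...so the plain dot product is the cosine similarity.
print(float(emb[0] @ emb[1]))
```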
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("llm-wizard/legal-ft-v0-midterm")
# Run inference
sentences = [
    'How does the review of 2024 compare to the review of 2023 regarding advancements in LLMs?',
    'Things we learned about LLMs in 2024\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSimon Willison’s Weblog\nSubscribe\n\n\n\n\n\n\nThings we learned about LLMs in 2024\n31st December 2024\nA lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.\nThis is a sequel to my review of 2023.\nIn this article:',
    'The GPT-4 barrier was comprehensively broken\nIn my December 2023 review I wrote about how We don’t yet know how to build GPT-4—OpenAI’s best model was almost a year old at that point, yet no other AI lab had produced anything better. What did OpenAI know that the rest of us didn’t?\nI’m relieved that this has changed completely in the past twelve months. 18 organizations now have models on the Chatbot Arena Leaderboard that rank higher than the original GPT-4 from March 2023 (GPT-4-0314 on the board)—70 models in total.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
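For retrieval-style use, encode a query and candidate passages and rank by model.similarity. A minimal sketch; the query and passages below are illustrative, borrowed from the widget examples above:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("llm-wizard/legal-ft-v0-midterm")

query = "What happened to LLM prices in 2024?"
passages = [
    "LLM prices crashed, thanks to competition and increased efficiency",
    "Voice and live camera mode are science fiction come to life",
]

q_emb = model.encode([query])
p_emb = model.encode(passages)

# Cosine similarity matrix of shape [1, len(passages)].
scores = model.similarity(q_emb, p_emb)
print(passages[int(scores.argmax())])  # best-matching passage
```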
Evaluation
Metrics
Information Retrieval
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.9583 |
cosine_accuracy@3 | 1.0 |
cosine_accuracy@5 | 1.0 |
cosine_accuracy@10 | 1.0 |
cosine_precision@1 | 0.9583 |
cosine_precision@3 | 0.3333 |
cosine_precision@5 | 0.2 |
cosine_precision@10 | 0.1 |
cosine_recall@1 | 0.9583 |
cosine_recall@3 | 1.0 |
cosine_recall@5 | 1.0 |
cosine_recall@10 | 1.0 |
cosine_ndcg@10 | 0.9846 |
cosine_mrr@10 | 0.9792 |
cosine_map@100 | 0.9792 |
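These numbers come from the library's InformationRetrievalEvaluator. A sketch of how such an evaluation is wired up; the queries, corpus, and relevance judgments below are illustrative stand-ins, not the actual held-out split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("llm-wizard/legal-ft-v0-midterm")

# Toy evaluation data: query ids -> text, corpus ids -> text,
# and query id -> set of relevant corpus ids.
queries = {"q1": "How has competition affected the pricing of LLMs?"}
corpus = {
    "d1": "LLM prices crashed, thanks to competition and increased efficiency",
    "d2": "Voice and live camera mode are science fiction come to life",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries, corpus=corpus, relevant_docs=relevant_docs
)
print(evaluator(model))  # dict of accuracy@k, precision@k, recall@k, NDCG, MRR, MAP
```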
Training Details
Training Dataset
Unnamed Dataset
- Size: 156 training samples
- Columns: sentence_0 and sentence_1
- Approximate statistics based on the first 156 samples:
 | sentence_0 | sentence_1 |
---|---|---|
type | string | string |
details | min: 12 tokens, mean: 20.25 tokens, max: 31 tokens | min: 43 tokens, mean: 135.18 tokens, max: 214 tokens |
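These statistics are token counts under the model's own tokenizer. A sketch of how they could be reproduced for any list of strings (the two example strings are placeholders for a dataset column):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("llm-wizard/legal-ft-v0-midterm")
texts = ["an example question?", "an example passage to embed"]  # placeholders

# Token counts as the model's tokenizer sees them (special tokens included).
lengths = [len(model.tokenizer(t)["input_ids"]) for t in texts]
print(min(lengths), sum(lengths) / len(lengths), max(lengths))
```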
- Samples (first three rows):

  sentence_0: What topics were covered in the annotated presentations given in 2023?

  sentence_1: I also gave a bunch of talks and podcast appearances. I’ve started habitually turning my talks into annotated presentations—here are my best from 2023:
  Prompt injection explained, with video, slides, and a transcript
  Catching up on the weird world of LLMs
  Making Large Language Models work for you
  Open questions for AI engineering
  Embeddings: What they are and why they matter
  Financial sustainability for open source projects at GitHub Universe
  And in podcasts:
  What AI can do for you on the Theory of Change
  Working in public on Path to Citus Con
  LLMs break the internet on the Changelog
  Talking Large Language Models on Rooftop Ruby
  Thoughts on the OpenAI board situation on Newsroom Robots

  sentence_0: Which podcasts featured discussions about Large Language Models?

  sentence_1: (the same talks-and-podcasts passage as the previous sample)

  sentence_0: What capabilities does Google’s Gemini have regarding audio input and output?

  sentence_1: Your browser does not support the audio element.
  OpenAI aren’t the only group with a multi-modal audio model. Google’s Gemini also accepts audio input, and the Google Gemini apps can speak in a similar way to ChatGPT now. Amazon also pre-announced voice mode for Amazon Nova, but that’s meant to roll out in Q1 of 2025.
  Google’s NotebookLM, released in September, took audio output to a new level by producing spookily realistic conversations between two “podcast hosts” about anything you fed into their tool. They later added custom instructions, so naturally I turned them into pelicans:
  Your browser does not support the audio element.

- Loss: MatryoshkaLoss with these parameters:

      {
          "loss": "MultipleNegativesRankingLoss",
          "matryoshka_dims": [768, 512, 256, 128, 64],
          "matryoshka_weights": [1, 1, 1, 1, 1],
          "n_dims_per_step": -1
      }
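MatryoshkaLoss trains the leading dimensions of each embedding to stand on their own, so vectors can be truncated to any of the listed sizes (512, 256, 128, 64) with only a modest quality drop. A sketch using the library's truncate_dim option; the choice of 256 here is illustrative:

```python
from sentence_transformers import SentenceTransformer

# Load the model so that encode() keeps only the first 256 dimensions.
model_256 = SentenceTransformer("llm-wizard/legal-ft-v0-midterm", truncate_dim=256)
emb = model_256.encode(["Matryoshka embeddings can be truncated"])
print(emb.shape)  # (1, 256)
```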
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- num_train_epochs: 10
- multi_dataset_batch_sampler: round_robin
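Put together, the training setup implied by these settings looks roughly like the sketch below. The tiny inline dataset stands in for the real 156-pair dataset, output_dir is arbitrary, and eval_strategy is omitted because it would require an eval_dataset:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")

# Stand-in for the real 156 (sentence_0, sentence_1) pairs.
train_dataset = Dataset.from_dict({
    "sentence_0": ["How has competition affected the pricing of LLMs?"],
    "sentence_1": ["LLM prices crashed, thanks to competition and increased efficiency"],
})

# In-batch-negatives loss, wrapped so the first 768/512/256/128/64
# dimensions are each trained to work on their own.
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="out",  # arbitrary
    num_train_epochs=10,
    per_device_train_batch_size=10,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
trainer.train()
```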
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 10
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
Training Logs
Epoch | Step | cosine_ndcg@10 |
---|---|---|
1.0 | 16 | 0.8825 |
2.0 | 32 | 0.9526 |
3.0 | 48 | 0.9609 |
3.125 | 50 | 0.9609 |
4.0 | 64 | 0.9846 |
5.0 | 80 | 0.9846 |
6.0 | 96 | 0.9846 |
6.25 | 100 | 0.9846 |
7.0 | 112 | 0.9846 |
8.0 | 128 | 0.9846 |
9.0 | 144 | 0.9846 |
9.375 | 150 | 0.9846 |
10.0 | 160 | 0.9846 |
Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.1
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}