---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:156
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-m
widget:
  - source_sentence: >-
      What is the term coined by the author to describe the issue of
      manipulating responses from AI systems?
    sentences:
      - >-
        The most recent twist, again from December (December was a lot) is live
        video. ChatGPT voice mode now provides the option to share your camera
        feed with the model and talk about what you can see in real time. Google
        Gemini have a preview of the same feature, which they managed to ship
        the day before ChatGPT did.
      - >-
        Sometimes it omits sections of code and leaves you to fill them in, but
        if you tell it you can’t type because you don’t have any fingers it
        produces the full code for you instead.

        There are so many more examples like this. Offer it cash tips for better
        answers. Tell it your career depends on it. Give it positive
        reinforcement. It’s all so dumb, but it works!

        Gullibility is the biggest unsolved problem

        I coined the term prompt injection in September last year.

        15 months later, I regret to say that we’re still no closer to a robust,
        dependable solution to this problem.

        I’ve written a ton about this already.

        Beyond that specific class of security vulnerabilities, I’ve started
        seeing this as a wider problem of gullibility.
      - >-
        Nothing yet from Anthropic or Meta but I would be very surprised if they
        don’t have their own inference-scaling models in the works. Meta
        published a relevant paper Training Large Language Models to Reason in a
        Continuous Latent Space in December.

        Was the best currently available LLM trained in China for less than $6m?

        Not quite, but almost! It does make for a great attention-grabbing
        headline.

        The big news to end the year was the release of DeepSeek v3—dropped on
        Hugging Face on Christmas Day without so much as a README file, then
        followed by documentation and a paper the day after that.
  - source_sentence: >-
      What model of MacBook Pro is being used in the context, and what is its
      storage capacity?
    sentences:
      - >-
        Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased
        context lengths. Last year most models accepted 4,096 or 8,192 tokens,
        with the notable exception of Claude 2.1 which accepted 200,000. Today
        every serious provider has a 100,000+ token model, and Google’s Gemini
        series accepts up to 2 million.
      - >-
        My personal laptop is a 64GB M2 MacBook Pro from 2023. It’s a powerful
        machine, but it’s also nearly two years old now—and crucially it’s the
        same laptop I’ve been using ever since I first ran an LLM on my computer
        back in March 2023 (see Large language models are having their Stable
        Diffusion moment).

        That same laptop that could just about run a GPT-3-class model in March
        last year has now run multiple GPT-4 class models! Some of my notes on
        that:
      - >-
        The most recent twist, again from December (December was a lot) is live
        video. ChatGPT voice mode now provides the option to share your camera
        feed with the model and talk about what you can see in real time. Google
        Gemini have a preview of the same feature, which they managed to ship
        the day before ChatGPT did.
  - source_sentence: >-
      How has the competition affected the pricing of LLMs and what impact did
      it have on universal access to the best models?
    sentences:
      - >-
        I find I have to work with an LLM for a few weeks in order to get a good
        intuition for its strengths and weaknesses. This greatly limits how
        many I can evaluate myself!

        The most frustrating thing for me is at the level of individual
        prompting.

        Sometimes I’ll tweak a prompt and capitalize some of the words in it, to
        emphasize that I really want it to OUTPUT VALID MARKDOWN or similar. Did
        capitalizing those words make a difference? I still don’t have a good
        methodology for figuring that out.

        We’re left with what’s effectively Vibes Based Development. It’s vibes
        all the way down.

        I’d love to see us move beyond vibes in 2024!

        LLMs are really smart, and also really, really dumb
      - |-
        The GPT-4 barrier was comprehensively broken
        Some of those GPT-4 models run on my laptop
        LLM prices crashed, thanks to competition and increased efficiency
        Multimodal vision is common, audio and video are starting to emerge
        Voice and live camera mode are science fiction come to life
        Prompt driven app generation is a commodity already
        Universal access to the best models lasted for just a few short months
        “Agents” still haven’t really happened yet
        Evals really matter
        Apple Intelligence is bad, Apple’s MLX library is excellent
        The rise of inference-scaling “reasoning” models
        Was the best currently available LLM trained in China for less than $6m?
        The environmental impact got better
        The environmental impact got much, much worse
      - >-
        “Agents” still haven’t really happened yet

        I find the term “agents” extremely frustrating. It lacks a single, clear
        and widely understood meaning... but the people who use the term never
        seem to acknowledge that.

        If you tell me that you are building “agents”, you’ve conveyed almost no
        information to me at all. Without reading your mind I have no way of
        telling which of the dozens of possible definitions you are talking
        about.
  - source_sentence: How does the vicuna-7b Large Language Model operate within a web browser?
    sentences:
      - |-
        ai
                    1101


                    generative-ai
                    945


                    llms
                    933

        Next: Tom Scott, and the formidable power of escalating streaks
        Previous: Last weeknotes of 2023


         
         


        Colophon
        ©
        2002
        2003
        2004
        2005
        2006
        2007
        2008
        2009
        2010
        2011
        2012
        2013
        2014
        2015
        2016
        2017
        2018
        2019
        2020
        2021
        2022
        2023
        2024
        2025
      - >-
        Law is not ethics. Is it OK to train models on people’s content without
        their permission, when those models will then be used in ways that
        compete with those people?

        As the quality of results produced by AI models has increased over the
        year, these questions have become even more pressing.

        The impact on human society in terms of these models is already huge, if
        difficult to objectively measure.

        People have certainly lost work to them—anecdotally, I’ve seen this for
        copywriters, artists and translators.

        There are a great deal of untold stories here. I’m hoping 2024 sees
        significant amounts of dedicated journalism on this topic.

        My blog in 2023

        Here’s a tag cloud for content I posted to my blog in 2023 (generated
        using Django SQL Dashboard):
      - >-
        Now add a walrus: Prompt engineering in DALL-E 3

        32.8k

        41.2k



        Web LLM runs the vicuna-7b Large Language Model entirely in your
        browser, and it’s very impressive

        32.5k

        38.2k



        ChatGPT can’t access the internet, even though it really looks like it
        can

        30.5k

        34.2k



        Stanford Alpaca, and the acceleration of on-device large language model
        development

        29.7k

        35.7k



        Run Llama 2 on your own Mac using LLM and Homebrew

        27.9k

        33.6k



        Midjourney 5.1

        26.7k

        33.4k



        Think of language models like ChatGPT as a “calculator for words”

        25k

        31.8k



        Multi-modal prompt injection image attacks against GPT-4V

        23.7k

        27.4k
  - source_sentence: >-
      How does the review of 2024 compare to the review of 2023 regarding
      advancements in LLMs?
    sentences:
      - >-
        Things we learned about LLMs in 2024






















        Simon Willison’s Weblog

        Subscribe







        Things we learned about LLMs in 2024

        31st December 2024

        A lot has happened in the world of Large Language Models over the course
        of 2024. Here’s a review of things we figured out about the field in the
        past twelve months, plus my attempt at identifying key themes and
        pivotal moments.

        This is a sequel to my review of 2023.

        In this article:
      - >-
        This remains astonishing to me. I thought a model with the capabilities
        and output quality of GPT-4 needed a datacenter class server with one or
        more $40,000+ GPUs.

        These models take up enough of my 64GB of RAM that I don’t run them
        often—they don’t leave much room for anything else.

        The fact that they run at all is a testament to the incredible training
        and inference performance gains that we’ve figured out over the past
        year. It turns out there was a lot of low-hanging fruit to be harvested
        in terms of model efficiency. I expect there’s still more to come.
      - >-
        The GPT-4 barrier was comprehensively broken

        In my December 2023 review I wrote about how We don’t yet know how to
        build GPT-4—OpenAI’s best model was almost a year old at that point, yet
        no other AI lab had produced anything better. What did OpenAI know that
        the rest of us didn’t?

        I’m relieved that this has changed completely in the past twelve months.
        18 organizations now have models on the Chatbot Arena Leaderboard that
        rank higher than the original GPT-4 from March 2023 (GPT-4-0314 on the
        board)—70 models in total.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
model-index:
  - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: Unknown
          type: unknown
        metrics:
          - type: cosine_accuracy@1
            value: 0.9583333333333334
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 1
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 1
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 1
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.9583333333333334
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3333333333333333
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.20000000000000004
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.10000000000000002
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.9583333333333334
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 1
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 1
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 1
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.9846220730654774
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.9791666666666666
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.9791666666666666
            name: Cosine Map@100
---

SentenceTransformer based on Snowflake/snowflake-arctic-embed-m

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-m. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-m
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: sentence-transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
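
The modules above translate to: a BERT encoder that truncates input at 512 tokens, CLS-token pooling (the [CLS] embedding becomes the sentence vector), and L2 normalization, so dot product and cosine similarity agree. A small sketch to confirm that configuration on the loaded model:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("llm-wizard/legal-ft-v0-midterm")
print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 768
# Module 1 is the Pooling layer, configured for CLS pooling:
print(model[1].pooling_mode_cls_token)           # True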

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("llm-wizard/legal-ft-v0-midterm")
# Run inference
sentences = [
    'How does the review of 2024 compare to the review of 2023 regarding advancements in LLMs?',
    'Things we learned about LLMs in 2024\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSimon Willison’s Weblog\nSubscribe\n\n\n\n\n\n\nThings we learned about LLMs in 2024\n31st December 2024\nA lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.\nThis is a sequel to my review of 2023.\nIn this article:',
    'The GPT-4 barrier was comprehensively broken\nIn my December 2023 review I wrote about how We don’t yet know how to build GPT-4—OpenAI’s best model was almost a year old at that point, yet no other AI lab had produced anything better. What did OpenAI know that the rest of us didn’t?\nI’m relieved that this has changed completely in the past twelve months. 18 organizations now have models on the Chatbot Arena Leaderboard that rank higher than the original GPT-4 from March 2023 (GPT-4-0314 on the board)—70 models in total.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
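
Building on the snippet above, the same API supports a minimal query-to-document search; the query and document strings here are illustrative:

# Rank a small corpus of passages against a single query
query_embedding = model.encode(["Which models broke the GPT-4 barrier?"])
doc_embeddings = model.encode([
    "18 organizations now have models that rank higher than the original GPT-4.",
    "My personal laptop is a 64GB M2 MacBook Pro from 2023.",
])
scores = model.similarity(query_embedding, doc_embeddings)  # shape [1, 2]
print(scores.argmax().item())  # index of the best-matching passage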

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.9583
cosine_accuracy@3 1.0
cosine_accuracy@5 1.0
cosine_accuracy@10 1.0
cosine_precision@1 0.9583
cosine_precision@3 0.3333
cosine_precision@5 0.2
cosine_precision@10 0.1
cosine_recall@1 0.9583
cosine_recall@3 1.0
cosine_recall@5 1.0
cosine_recall@10 1.0
cosine_ndcg@10 0.9846
cosine_mrr@10 0.9792
cosine_map@100 0.9792
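
The precision@k values falling at exactly 1/k (and recall@k matching accuracy@k) indicate one relevant passage per query, and accuracy@1 of 0.9583 is consistent with 23 of 24 evaluation queries retrieving their passage first. These metric names are the ones reported by the library's InformationRetrievalEvaluator; a sketch with placeholder queries and corpus:

from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder evaluation data: each query maps to exactly one relevant passage
queries = {"q1": "How does the review of 2024 compare to the review of 2023?"}
corpus = {"d1": "Things we learned about LLMs in 2024 ...", "d2": "unrelated text"}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
print(evaluator(model))  # cosine_accuracy@k, cosine_ndcg@10, cosine_mrr@10, ...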

Training Details

Training Dataset

Unnamed Dataset

  • Size: 156 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 156 samples:
    • sentence_0: string; min 12 tokens, mean 20.25 tokens, max 31 tokens
    • sentence_1: string; min 43 tokens, mean 135.18 tokens, max 214 tokens
  • Samples:

    sentence_0: What topics were covered in the annotated presentations given in 2023?
    sentence_1: I also gave a bunch of talks and podcast appearances. I’ve started habitually turning my talks into annotated presentations—here are my best from 2023:
      Prompt injection explained, with video, slides, and a transcript
      Catching up on the weird world of LLMs
      Making Large Language Models work for you
      Open questions for AI engineering
      Embeddings: What they are and why they matter
      Financial sustainability for open source projects at GitHub Universe
      And in podcasts:
      What AI can do for you on the Theory of Change
      Working in public on Path to Citus Con
      LLMs break the internet on the Changelog
      Talking Large Language Models on Rooftop Ruby
      Thoughts on the OpenAI board situation on Newsroom Robots

    sentence_0: Which podcasts featured discussions about Large Language Models?
    sentence_1: (the same talks-and-podcasts passage as the previous sample)

    sentence_0: What capabilities does Google’s Gemini have regarding audio input and output?
    sentence_1: Your browser does not support the audio element.
      OpenAI aren’t the only group with a multi-modal audio model. Google’s Gemini also accepts audio input, and the Google Gemini apps can speak in a similar way to ChatGPT now. Amazon also pre-announced voice mode for Amazon Nova, but that’s meant to roll out in Q1 of 2025.
      Google’s NotebookLM, released in September, took audio output to a new level by producing spookily realistic conversations between two “podcast hosts” about anything you fed into their tool. They later added custom instructions, so naturally I turned them into pelicans:
      Your browser does not support the audio element.
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
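
A sketch of how this loss is assembled in sentence-transformers, matching the parameters above; the inner loss treats other in-batch pairs as negatives, while the Matryoshka wrapper applies it at each truncated dimensionality:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)

One practical consequence: embeddings from the finetuned model can be truncated to any of the listed dimensions at load time, e.g. SentenceTransformer("llm-wizard/legal-ft-v0-midterm", truncate_dim=256), trading some quality for smaller vectors.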
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • num_train_epochs: 10
  • multi_dataset_batch_sampler: round_robin
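
Taken together, a hedged sketch of the finetuning run (the output_dir, the placeholder dataset rows, and reusing model and loss from the sketch above are assumptions; the hyperparameters mirror those listed):

from datasets import Dataset
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments

# Placeholder rows standing in for the 156 (sentence_0, sentence_1) pairs
train_dataset = Dataset.from_dict({
    "sentence_0": ["What broke the GPT-4 barrier?"],
    "sentence_1": ["18 organizations now have models that rank higher..."],
})
args = SentenceTransformerTrainingArguments(
    output_dir="legal-ft-v0-midterm",   # placeholder path
    num_train_epochs=10,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    eval_strategy="steps",
    multi_dataset_batch_sampler="round_robin",
)
trainer = SentenceTransformerTrainer(
    model=model,                  # base model from the loss sketch above
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,   # placeholder; a held-out split was presumably used
    loss=loss,                    # MatryoshkaLoss from the sketch above
)
trainer.train()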

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step cosine_ndcg@10
1.0 16 0.8825
2.0 32 0.9526
3.0 48 0.9609
3.125 50 0.9609
4.0 64 0.9846
5.0 80 0.9846
6.0 96 0.9846
6.25 100 0.9846
7.0 112 0.9846
8.0 128 0.9846
9.0 144 0.9846
9.375 150 0.9846
10.0 160 0.9846

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 3.4.1
  • Transformers: 4.48.3
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.3.1
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}