---
base_model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
library_name: transformers
tags:
  - exl2
  - mergekit
  - merge
license: other
license_name: eva-llama3.3
language:
  - en
datasets:
  - anthracite-org/kalo-opus-instruct-22k-no-refusal
  - Nopm/Opus_WritingStruct
  - Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
  - Gryphe/Sonnet3.5-Charcard-Roleplay
  - Gryphe/ChatGPT-4o-Writing-Prompts
  - Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
  - Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
  - nothingiisreal/Reddit-Dirty-And-WritingPrompts
  - allura-org/Celeste-1.x-data-mixture
  - cognitivecomputations/dolphin-2.9.3
---

# EVA-LLaMA-3.33-70B-v0.1 - EXL2 5.0bpw

This is a 5.0bpw EXL2 quant of [EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1).

Details about the model can be found on the original model page linked above.
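To run this quant with the exllamav2 Python API, a minimal sketch is shown below. It assumes a recent exllamav2 release; the local model directory path, cache length, and generation settings are placeholders rather than anything specific to this repo.

```python
# Minimal sketch: load an EXL2 quant with exllamav2 and generate a short completion.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "/path/to/EVA-LLaMA-3.33-70B-v0.1-exl2-5.0bpw"  # placeholder path

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # lazy cache so autosplit can size it
model.load_autosplit(cache)                # split the 70B weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Once upon a time,", max_new_tokens=200))
```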

## Perplexity Scoring

Below are the perplexity scores for the EXL2 models. A lower score is better.

| Quant Level | Perplexity Score |
|-------------|------------------|
| 5.0         | 5.5571           |
| 4.5         | 5.6595           |
| 4.0         | 5.7828           |
| 3.5         | 6.1092           |
| 3.0         | 10.1510          |
| 2.75        | 14.8796          |
| 2.5         | 8.3789           |
| 2.25        | 8.7860           |
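For context on the metric itself: perplexity is the exponential of the mean negative log-likelihood per token over an evaluation text, so lower values mean the quant assigns higher probability to the reference text. The snippet below only illustrates that definition with made-up log-probabilities; it is not the script used to produce the scores above.

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the mean negative log-likelihood per token."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Toy example with invented per-token natural-log probabilities.
print(perplexity([-1.7, -1.6, -1.8, -1.75]))  # ~5.5, roughly the range of the higher-bpw quants
```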