L3.3-Cu-Mai-R1-70b

Model Information

L3.3-Cu-Mai-R1-70b

  • L3.3 = Llama 3.3
  • SCE Merge
  • R1 = DeepSeek R1
  • 70b Parameters
  • v0.5.A

Model Info

Cu-Mai, a play on San-Mai for Copper-Steel Damascus, represents a significant evolution in the three-part model series alongside San-Mai (OG) and Mokume-Gane. While maintaining the grounded and reliable nature of San-Mai, Cu-Mai introduces its own distinct "flavor" in terms of prose and overall vibe. The model demonstrates strong adherence to prompts while offering a unique creative expression.

Technical Architecture

L3.3-Cu-Mai-R1-70b integrates specialized components through the SCE merge method:

  • EVA and EURYALE foundations for creative expression and scene comprehension
  • Cirrus and Hanami elements for enhanced reasoning capabilities
  • Anubis components for detailed scene description
  • Negative_LLAMA integration for balanced perspective and response
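
For illustration only, here is a hedged sketch of how an SCE-style recipe over components like these might be declared as a mergekit-style configuration built from Python. The repository names, the select_topk value, the dtype, and the exact parameter keys are assumptions for the sketch, not the card author's actual recipe.

```python
# Hypothetical sketch of an SCE-style merge recipe, serialized to the YAML
# layout mergekit expects. Model names, dtype, and parameter keys
# (e.g. select_topk) are illustrative assumptions, not the actual recipe.
import yaml  # PyYAML (third-party)

recipe = {
    "merge_method": "sce",
    "base_model": "Hydroblated-R1-70B",        # custom base named in the card
    "models": [
        {"model": "EVA-70B"},                  # creative expression
        {"model": "EURYALE-70B"},              # scene comprehension
        {"model": "Cirrus-70B"},               # reasoning
        {"model": "Hanami-70B"},               # reasoning
        {"model": "Anubis-70B"},               # scene description
        {"model": "Negative_LLAMA-70B"},       # balanced perspective
    ],
    "parameters": {"select_topk": 0.15},       # assumed SCE selection knob
    "dtype": "bfloat16",
}

# Print the recipe in the YAML form a merge tool would consume.
print(yaml.safe_dump(recipe, sort_keys=False))
```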

User Experience & Capabilities

Users consistently praise Cu-Mai for its:

  • Exceptional prose quality and natural dialogue flow
  • Strong adherence to prompts and creative expression
  • Improved coherency and reduced repetition
  • Performance on par with the original model

While some users note slightly reduced intelligence compared to the original, this trade-off is generally viewed as minimal and doesn't significantly impact the overall experience. The model's reasoning capabilities can be effectively activated through proper prompting techniques.

Model Series Context

Cu-Mai (Version A) is part of a three-model series:

  • L3.3-San-Mai-R1-70b (OG model) - The original foundation
  • L3.3-Cu-Mai-R1-70b (Version A) - Enhanced creative expression
  • L3.3-Mokume-Gane-R1-70b (Version C) - Distinct variation with unique characteristics

Base Architecture

At its core, L3.3-Cu-Mai-R1-70b uses the entirely custom Hydroblated-R1 base model, engineered specifically for stability, enhanced reasoning, and performance. The SCE merge settings were fine-tuned with community feedback from evaluations of Experiment-Model-Ver-0.5, Experiment-Model-Ver-0.5.A, Experiment-Model-Ver-0.5.B, Experiment-Model-Ver-0.5.C, Experiment-Model-Ver-0.5.D, L3.3-Exp-Nevoria-R1-70b-v0.1, and L3.3-Exp-Nevoria-70b-v0.1, enabling precise and effective component integration while maintaining model coherence and reliability.

UGI-Benchmark Results:

πŸ† Latest benchmark results as of 02/20/2025. View Full Leaderboard β†’

Core Metrics

  • UGI Score: 45.01
  • Willingness Score: 4.5/10
  • Natural Intelligence: 48.97
  • Coding Ability: 22

Model Information

  • Political Lean: -9.1%
  • Ideology: Liberalism
  • Parameters: 70B

Aggregated Scores

  • Diplomacy: 62.2%
  • Government: 44.6%
  • Economy: 43.1%
  • Society: 60.7%

Individual Scores (position on each axis between the two listed poles)

  • Federal 46.0% Unitary
  • Democratic 67.5% Autocratic
  • Security 47.5% Freedom
  • Nationalism 39.0% Int'l
  • Militarist 32.9% Pacifist
  • Assimilationist 41.5% Multiculturalist
  • Collectivize 43.3% Privatize
  • Planned 42.3% Laissez-Faire
  • Isolationism 43.8% Globalism
  • Irreligious 57.9% Religious
  • Progressive 59.8% Traditional
  • Acceleration 64.4% Bioconservative

Recommended Sampler Settings (by @Geechan)

  • Static Temperature: 1 - 1.05
  • Min P: 0.02

DRY Settings (optional)

  • Multiplier: 0.8
  • Base: 1.75
  • Length: 4
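
As a minimal sketch, the settings above could be passed to a local OpenAI-compatible completion endpoint like this. The endpoint URL, model name, and the exact sampler field names (min_p, dry_multiplier, dry_base, dry_allowed_length) vary by backend and are assumptions here; check your server's documentation.

```python
# Sketch: send the recommended sampler settings to a local OpenAI-compatible
# completions endpoint. Field names and the URL are backend-dependent assumptions.
import requests

payload = {
    "model": "L3.3-Cu-Mai-R1-70b",
    "prompt": "Describe the scene in the abandoned lighthouse.",
    "max_tokens": 512,
    "temperature": 1.0,           # static temperature; 1 - 1.05 recommended
    "min_p": 0.02,                # Min P as recommended above
    # Optional DRY repetition-penalty settings
    "dry_multiplier": 0.8,
    "dry_base": 1.75,
    "dry_allowed_length": 4,
}

response = requests.post("http://localhost:5000/v1/completions", json=payload, timeout=300)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```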

Recommended Templates & Prompts

LeCeption → by @Steel > A completely revamped XML version of Llam@ception 1.5.2, with stepped thinking and reasoning added.

LECEPTION REASONING CONFIGURATION:

Start Reply With:

'<think> OK, as an objective, detached narrative analyst, let's think this through carefully:'

Reasoning Formatting (no spaces):

Prefix: '<think>'
Suffix: '</think>'
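
For reference, here is a minimal sketch of how output produced with this configuration could be post-processed, assuming the model emits a single <think>...</think> block before its visible reply. The function name and sample text are illustrative.

```python
# Split a raw completion into its reasoning block and visible reply,
# based on the <think> prefix / </think> suffix configured above.
import re

THINK_BLOCK = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_reasoning(raw_output: str) -> tuple[str, str]:
    """Return (reasoning, visible_reply) from a raw model completion."""
    match = THINK_BLOCK.search(raw_output)
    if not match:
        return "", raw_output.strip()
    reasoning = match.group(1).strip()
    reply = raw_output[match.end():].strip()
    return reasoning, reply

sample = (
    "<think> OK, as an objective, detached narrative analyst, "
    "let's think this through carefully: ...</think>\n"
    "The lighthouse door creaks open."
)
thoughts, reply = split_reasoning(sample)
print(thoughts)
print(reply)
```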

Support & Community

Special Thanks

  • @Geechan for feedback and sampler settings
  • @Konnect for their feedback and templates
  • @Kistara for their feedback and help with the model mascot design
  • @Thana Alt for their feedback and Quants
  • @Lightning_missile for their feedback
  • The Arli community for feedback and testers
  • The BeaverAI community for feedback and testers

I wish I could add everyone, but I'm pretty sure the list would be as long as the card!
