This model is the result of a new kind of model optimization, applied to Meta-Llama-3.1-8B as the base model.

A paper on the technique is currently being written.

This research was supported with hardware from the appliedAI Institute, whose goal is to generate and communicate high-quality knowledge about trustworthy AI.

Quickstart

```python
import transformers
import torch

# Load this model from the Hub.
model_id = "dnhkng/RYS-Llama-3.1-8B-Instruct"

# Build a text-generation pipeline in bfloat16 and place the weights automatically.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

pipeline("Hey how are you doing today?")
```

SHAMELESS ADVERTISING BREAK

I’m on the hunt for new challenges and a chance to dive into some exciting research opportunities. Oh, and did I mention I just snagged a top spot on the Open LLM leaderboard? πŸŽ‰

Profile

Innovation enthusiast, AI strategist, and interdisciplinary-tech nerd – that's me! With over a decade of experience in research and project management, my professional journey has been shaped by a passion for artificial intelligence and its potential to transform industries. A solid background in AI and machine learning, coupled with a knack for innovation and problem-solving (and a healthy dose of curiosity), means I'm excited to bring my skills to a new team.

Originally from Australia, where I earned my degrees in Organic Chemistry and Biochemistry, I moved to Germany in 2004 and continued my academic path with a PhD in Chemistry at the Max Planck Institute of Biochemistry. Today, I leverage that educational background and diverse industry experience to drive AI innovations across a wide range of applications. Hobbies? Lots: I've built the world's most powerful espresso machine and am working to bring GLaDOS to life.


I'm based out of Munich, Germany, but I would be interested in working remotely for a team with more compute than my 2x 4090s πŸš€

Reach out via LinkedIn - Dr David Noel Ng

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 26.44 |
| IFEval (0-Shot)     | 76.85 |
| BBH (3-Shot)        | 31.09 |
| MATH Lvl 5 (4-Shot) | 11.33 |
| GPQA (0-Shot)       |  2.35 |
| MuSR (0-Shot)       |  7.68 |
| MMLU-PRO (5-Shot)   | 29.33 |
Model size: 8.68B params · Tensor type: BF16 (safetensors)
Β·
Inference Providers NEW
This model is not currently available via any of the supported Inference Providers.
The model cannot be deployed to the HF Inference API: The model has no library tag.
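The parameter count and dtype above can be verified locally; a minimal sketch (assumes enough RAM to materialize the bfloat16 weights):

```python
import torch
from transformers import AutoModelForCausalLM

# Load the weights in bfloat16 and count parameters.
model = AutoModelForCausalLM.from_pretrained(
    "dnhkng/RYS-Llama-3.1-8B-Instruct", torch_dtype=torch.bfloat16
)
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.2f}B params, dtype: {model.dtype}")  # expect ~8.68B, torch.bfloat16
```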

Model tree for dnhkng/RYS-Llama-3.1-8B-Instruct

Quantizations: 3 models
