
Aleph Alpha is dedicated to building sovereign and trustworthy AI systems. Our research has produced state-of-the-art multi-modal models (MAGMA), explainability techniques for transformer-based models (AtMan), and a comprehensive evaluation framework for large-scale model assessment. We have also researched how to move beyond traditional tokenizers: our work on tokenizer-free architectures uses byte-level trigrams to create models that are more resilient and adaptable to non-English languages and new domains. Key models demonstrating the effectiveness of our Hierarchical Autoregressive Transformer (HAT) architecture include:

  • llama-3_1-tfree-hat models: This family replaces the Llama 3.1 tokenizer with our HAT architecture. The 8b-dpo variant is tuned for helpfulness and reduced refusals in sensitive applications, while the larger 70b-sft variant is trained on English and German for improved text compression and adaptability.

  • TFree-HAT-Pretrained-7B-Base: This 7B model was pretrained from scratch on English and German text and supports a context length of up to 32,900 words. It shows strong proficiency in German and outperforms Llama 3.1 on many English benchmarks.
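To make the byte-level trigram idea concrete, here is a minimal, illustrative sketch of how a word can be decomposed into overlapping byte trigrams and hashed into a fixed-size embedding table. The boundary markers, hash function, and table size are assumptions for illustration only, not Aleph Alpha's actual HAT implementation:

```python
import hashlib


def byte_trigrams(word: str) -> list[bytes]:
    """Split a word's UTF-8 bytes into overlapping trigrams.

    The \\x02/\\x03 boundary markers are an illustrative choice (not the
    HAT scheme) that lets word-initial and word-final trigrams differ
    from word-internal ones.
    """
    data = b"\x02" + word.encode("utf-8") + b"\x03"
    return [data[i : i + 3] for i in range(len(data) - 2)]


def trigram_ids(word: str, table_size: int = 1 << 16) -> list[int]:
    """Hash each trigram into a fixed-size embedding table.

    Using a hash instead of a learned vocabulary means any byte
    sequence, in any language or domain, maps to some embedding row,
    so there are no out-of-vocabulary failures.
    """
    return [
        int.from_bytes(hashlib.blake2b(t, digest_size=4).digest(), "big")
        % table_size
        for t in byte_trigrams(word)
    ]


# A 3-byte word plus two boundary bytes yields exactly three trigrams,
# and a non-ASCII word like "Käse" still decomposes cleanly at the
# byte level.
print(byte_trigrams("cat"))
print(trigram_ids("Käse"))
```

This robustness to arbitrary byte sequences is what makes such architectures attractive for non-English languages and new domains, where a fixed tokenizer vocabulary would fragment unfamiliar words.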

We also published a state-of-the-art German dataset (data, arXiv) that can be used to enhance the German-language capabilities of LLMs.

Our future work is dedicated to advancing reasoning models, de-biasing frontier models, understanding the role of data in model training, evaluating models comprehensively and realistically, pushing the boundaries of small models, and advancing tokenizer-free architectures. We will continue to concentrate on building transparent, trustworthy, and auditable systems that give users greater control over, and insight into, the decision-making processes of AI models.

Want to shape the future of sovereign AI? Work with us.