About Us

Aleph Alpha was founded in 2019 with the mission to research and build sovereign, human-centric AI for a better world. Our international team of scientists, engineers, and innovators researches and builds transformative AI solutions.

Core Values

Since Aleph Alpha was founded in 2019, our work has been guided by two values central to our mission.

Join Our Team

Jonas Andrulis
Chief Executive Officer & Founder
Samuel Weinbach
Co-Founder & Co-Chief Research Officer
Carsten Dirks
Chief Operating Officer
Tobias Haar
General Counsel
Dr. Yasser Jadidi
Co-Chief Research Officer
Christopher Kraenzler
Vice President Product
Andy Lippert
Vice President Infrastructure
Ramin Mirza
Chief Customer Officer
Benjamin Oppl
Vice President People & Culture

Our History

2024
September
Launch of T-Free
A new architecture that allows LLMs to learn low-resource languages, out-of-distribution knowledge and rare knowledge
2024
August
Launch of PhariaAI
PhariaAI is an end-to-end stack for generative AI systems and innovations in an enterprise-ready setup, with unique capabilities for control, transparency and compliance
2024
July
Launch of F13
The generative AI assistant for over 500,000 civil servants
2023
November
$500M Series B funding round
2023
April
Introduction of the world’s first explainability function for LLMs
2022
October
Launch of LUMI
The world’s first generative AI chatbot for the public sector
2022
September
Opening of alpha ONE
The fastest European commercial AI Data Centre. Located in Bavaria and equipped with 512 NVIDIA A100 GPUs, alpha ONE offers 7.625 petaflops of computational power.
2022
June
Launch of creance.ai
Generative AI solution for compliance
2022
April
First major AI model launch
Aleph Alpha releases Luminous, the world’s first multimodal and multilingual large language model
2021
July
€23M Series A funding round
2021
January
€5.3M Seed funding round
Aleph Alpha secures its first round of funding to build Europe-based generative AI technology
2019
January
Aleph Alpha is founded
Apple R&D Manager and serial entrepreneur Jonas Andrulis and Deloitte AI Expert Samuel Weinbach found Aleph Alpha in Heidelberg, Germany, with the vision to research and build the foundational technology for an era of strong AI

Aleph Alpha products are built to ensure technological sovereignty in the AI era for the world’s best enterprises and governments.

Our Commitment to AI Compliance and Excellence

We are actively aligning our practices with upcoming regulatory frameworks, including the EU AI Act, and have obtained ISO 27001 certification as a testament to our exceptional commitment to information security and regulatory compliance.

Insights

Research

T-Free: Hierarchical Autoregressive Transformers for Language Fairness and Sovereignty

In this blog post, we want to take a closer look at a tokenizer-free approach, which we proposed in a recent paper and termed Hierarchical Autoregressive Transformers (HAT). In particular, we want to showcase how such a model can be pre-trained in English and efficiently adapted to learn a new, previously unseen language.
Research

In awe at the scale of these tensors – a gentle introduction to Unit-Scaled Maximal Update Parametrization

Together with Graphcore, we recently developed u-μP as a new paradigm to parametrize neural networks in terms of width and depth. Our approach combines μP, developed by G. Yang et al., with Unit Scaling, a concept introduced by Graphcore.
Research

Words don’t come easy (… to LLMs): Universal Text-Encoding for dynamic, multi-lingual alphabets revolutionizing efficiency and effectiveness for LLM training and inference

The remarkable advancements of Large Language Models (LLMs) frequently capture attention as they become valuable collaborators in daily situations, all while progressing towards breakthroughs beyond simple language completion.
Research

Introducing Pharia-1-LLM: transparent and compliant

We are pleased to announce our new foundation model family that includes Pharia-1-LLM-7B-control and Pharia-1-LLM-7B-control-aligned, now publicly available under the Open Aleph License, which explicitly allows for non-commercial research and educational use.
Research

Open-sourcing Codebase Scaling for Non-commercial Research

Aleph Alpha’s model training codebase Scaling is publicly available under the Open Aleph License, which explicitly allows for non-commercial research and educational use. Scaling was used to develop our concurrently released new models Pharia-1-LLM-control and Pharia-1-LLM-control-aligned.
Research

Quality Diversity through AI Feedback

Language models carry implicit distributional biases based on their training data, which can reinforce existing norms. In this work, we take one step towards addressing the challenge of unwanted biases by enabling language models to return outputs with a broader spectrum of attribute traits, specified by a user. This is achieved by asking language models to evaluate and modify their outputs.