Advancing Safe and Transparent Generative AI

A collaboration between Aleph Alpha Research and TU Darmstadt to pioneer the future of explainable AI

Lab 1141 is a joint initiative between Aleph Alpha Research and TU Darmstadt, created to push the boundaries of explainable and interpretable Generative AI. With expertise from both academia and industry, we aim to deliver cutting-edge innovations that prioritize safety and transparency in AI applications.

Research Areas

Our research in Lab 1141 is focused on three main areas that are critical to the future of AI.

AI Safety, Alignment, and Security

This area focuses on ensuring that AI systems operate reliably, ethically, and in line with human values, while also addressing potential risks and vulnerabilities.

AI Robustness and Hallucination Detection

This area explores the robustness and reliability of AI models, with a focus on understanding model confidence and preventing errors or hallucinations.

Explainable & Interpretable AI

This area focuses on developing high-performing AI systems that are transparent, trustworthy, and understandable to humans.

Collaborate with Lab 1141

Lab 1141 represents a unique collaboration between Aleph Alpha’s industrial expertise and TU Darmstadt’s academic excellence. We bring together resources, knowledge, and talent to develop AI solutions that bridge the gap between theory and practice.

PhD Fellowships

Funded positions are available at TU Darmstadt through Aleph Alpha Research Fellowships.

Internships and Symposia

We offer PhD students and researchers opportunities to work closely with our teams through internships at Aleph Alpha and joint symposia.

Highlight Contributions

Research

Introducing Pharia-1-LLM: transparent and compliant

We are pleased to announce our new foundation model family that includes Pharia-1-LLM-7B-control and Pharia-1-LLM-7B-control-aligned, now publicly available under the Open Aleph License, which explicitly allows for non-commercial research and educational use.
Research

In awe at the scale of these tensors – a gentle introduction to Unit-Scaled Maximal Update Parametrization

Together with Graphcore, we recently developed u-μP, a new paradigm for parametrizing neural networks in terms of width and depth. Our approach combines μP, developed by G. Yang et al., with Unit Scaling, a concept introduced by Graphcore.
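To give a flavor of the Unit Scaling half of this idea, here is a minimal NumPy sketch. It is an illustration only, not the u-μP implementation: instead of baking a 1/sqrt(fan_in) factor into the weight initialization, the weights are kept unit-variance and the scaling factor is moved into the forward pass, so activations stay near unit scale regardless of layer width. The function name `unit_scaled_linear` is our own for this sketch.

```python
import numpy as np

def unit_scaled_linear(x, w):
    # Unit Scaling (sketch): weights are initialized with unit variance,
    # and the usual 1/sqrt(fan_in) factor is applied in the forward pass
    # instead, keeping the output near unit scale for any width.
    fan_in = w.shape[0]
    return (x @ w) / np.sqrt(fan_in)

rng = np.random.default_rng(0)
width = 4096
x = rng.standard_normal((1024, width))   # unit-variance inputs
w = rng.standard_normal((width, width))  # unit-variance weights, no init-time scaling
y = unit_scaled_linear(x, w)
# y.std() stays close to 1 even though fan_in is large
```

Because the scale correction lives in the computation rather than the initialization, the same unit-scale property holds as the width hyperparameter changes, which is the kind of width-independence that μP-style parametrizations aim for.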
Research

T-Free: Hierarchical Autoregressive Transformers for Language Fairness and Sovereignty

In this blog post, we want to take a closer look at a tokenizer-free approach, which we proposed in a recent paper and termed Hierarchical Autoregressive Transformers (HAT). In particular, we want to showcase how such a model can be pre-trained in English and efficiently adapted to learn a new, previously unseen language.
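To make the tokenizer-free idea concrete, here is a toy sketch of the kind of input pipeline such a model operates on. This is a hypothetical illustration, not the HAT code: text is split into words and each word is represented as its raw UTF-8 bytes, so no fixed tokenizer vocabulary is needed and any previously unseen language can be encoded out of the box.

```python
def byte_chunks(text):
    # Tokenizer-free input (sketch): split on whitespace and encode each
    # word as raw UTF-8 byte values. A hierarchical model can then run a
    # small byte-level encoder per chunk feeding a word-level backbone.
    return [list(word.encode("utf-8")) for word in text.split()]

# Works for any script without a learned vocabulary:
chunks = byte_chunks("hello Universität")
```

Because the byte alphabet is universal, adapting to a new language is a matter of training rather than rebuilding a vocabulary, which is the fairness and sovereignty angle the post highlights.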