Q
We're a research lab working on generalization, the core open problem in AI.
We're building learning algorithms beyond gradient descent, where generalization scales with compute alone. The goal is orders-of-magnitude improvements in data efficiency, working toward a practical approximation of Solomonoff Induction. Read more about our research direction.
Our investors include Jeff Dean, YC, and many others. If you're interested, reach out at research@qlabs.sh
Progress
02
NanoGPT Slowrun
March 2026
An open effort to implement data-efficient learning algorithms, reaching 5.5x data efficiency in the first week and still improving.
01
The Generalization Problem
August 2025
Our core framework for solving generalization, combining search with generalization theory to create algorithms that approach optimal learning.