Introducing Superalignment

We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. To solve this problem within four years, we’re starting a new team, co-led by Ilya Sutskever and Jan Leike, and dedicating 20% of the compute we’ve secured to date to this effort. We’re looking for excellent ML researchers and engineers to join us.

Our goal is to build a roughly human-level automated alignment researcher.

We are dedicating 20% of the compute we’ve secured to date over the next four years to solving the problem of superintelligence alignment.

This new team’s work is in addition to existing work at OpenAI aimed at improving the safety of current models like ChatGPT, as well as understanding and mitigating other risks from AI such as misuse, economic disruption, disinformation, bias and discrimination, addiction and overreliance, and others.
