Tags (46 posts): Research Agendas, Game Theory, Center on Long-Term Risk (CLR), Risks of Astronomical Suffering (S-risks), Mechanism Design, Suffering, Fairness, Blackmail / Extortion, Group Rationality, Terminology / Jargon (meta), Reading Group, Mind Crime
Tags (65 posts): Iterated Amplification, Debate (AI safety technique), Factored Cognition, Humans Consulting HCH, Ought, Adversarial Collaboration, Delegation
Karma | Title | Author | Posted | Comments
44 | «Boundaries», Part 3b: Alignment problems in terms of boundaries | Andrew_Critch | 6d | 2
33 | My AGI safety research—2022 review, '23 plans | Steven Byrnes | 6d | 6
285 | On how various plans miss the hard bits of the alignment challenge | So8res | 5mo | 81
181 | Some conceptual alignment research projects | Richard_Ngo | 3mo | 14
174 | Unifying Bargaining Notions (1/2) | Diffractor | 4mo | 38
16 | Theories of impact for Science of Deep Learning | Marius Hobbhahn | 19d | 0
80 | Threat-Resistant Bargaining Megapost: Introducing the ROSE Value | Diffractor | 2mo | 11
125 | «Boundaries», Part 1: a key missing concept from utility theory | Andrew_Critch | 4mo | 26
84 | Unifying Bargaining Notions (2/2) | Diffractor | 4mo | 11
20 | Distilled Representations Research Agenda | Hoagy | 2mo | 2
33 | Announcing: Mechanism Design for AI Safety - Reading Group | Rubi J. Hudson | 4mo | 3
73 | CLR's recent work on multi-agent systems | JesseClifton | 1y | 1
143 | The Commitment Races problem | Daniel Kokotajlo | 3y | 39
102 | Our take on CHAI's research agenda in under 1500 words | Alex Flint | 2y | 19
42 | Notes on OpenAI's alignment plan | Alex Flint | 12d | 5
25 | Take 9: No, RLHF/IDA/debate doesn't solve outer alignment. | Charlie Steiner | 8d | 14
63 | A Library and Tutorial for Factored Cognition with Language Models | stuhlmueller | 2mo | 0
49 | Ought will host a factored cognition "Lab Meeting" | jungofthewon | 3mo | 1
61 | Rant on Problem Factorization for Alignment | johnswentworth | 4mo | 48
114 | Supervise Process, not Outcomes | stuhlmueller | 8mo | 8
49 | A Small Negative Result on Debate | Sam Bowman | 8mo | 11
111 | Debate update: Obfuscated arguments problem | Beth Barnes | 1y | 21
114 | My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda | Chi Nguyen | 2y | 21
78 | Imitative Generalisation (AKA 'Learning the Prior') | Beth Barnes | 1y | 14
66 | Why I'm excited about Debate | Richard_Ngo | 1y | 12
44 | Garrabrant and Shah on human modeling in AGI | Rob Bensinger | 1y | 10
70 | A guide to Iterated Amplification & Debate | Rafael Harth | 2y | 10
91 | Writeup: Progress on AI Safety via Debate | Beth Barnes | 2y | 18