Tags: AI (2040 posts), Careers, Audio, Infra-Bayesianism, Interviews, SERI MATS, Redwood Research, Formal Proof, Organization Updates, AXRP, Adversarial Examples, Domain Theory, AI Takeoff (197 posts), AI Timelines, Dialogue (format), DeepMind
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 344 | (My understanding of) What Everyone in Technical Alignment is Doing and Why | Thomas Larsen | 3mo | 83 |
| 314 | How To Get Into Independent Research On Alignment/Agency | johnswentworth | 1y | 33 |
| 310 | Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover | Ajeya Cotra | 5mo | 89 |
| 303 | What should you change in response to an "emergency"? And AI risk | AnnaSalamon | 5mo | 60 |
| 265 | A challenge for AGI organizations, and a challenge for readers | Rob Bensinger | 19d | 30 |
| 259 | We Choose To Align AI | johnswentworth | 11mo | 15 |
| 247 | DeepMind: Generally capable agents emerge from open-ended play | Daniel Kokotajlo | 1y | 53 |
| 245 | Visible Thoughts Project and Bounty Announcement | So8res | 1y | 104 |
| 243 | Don't die with dignity; instead play to your outs | Jeffrey Ladish | 8mo | 58 |
| 237 | larger language models may disappoint you [or, an eternally unfinished draft] | nostalgebraist | 1y | 29 |
| 235 | The Plan | johnswentworth | 1y | 77 |
| 235 | Ngo and Yudkowsky on alignment difficulty | Eliezer Yudkowsky | 1y | 143 |
| 235 | Contra Hofstadter on GPT-3 Nonsense | rictic | 6mo | 22 |
| 232 | AI alignment is distinct from its near-term applications | paulfchristiano | 7d | 5 |
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 364 | DeepMind alignment team opinions on AGI ruin arguments | Vika | 4mo | 34 |
| 287 | Two-year update on my personal AI timelines | Ajeya Cotra | 4mo | 60 |
| 269 | Why I think strong general AI is coming soon | porby | 2mo | 126 |
| 255 | Are we in an AI overhang? | Andy Jones | 2y | 109 |
| 247 | Why Agent Foundations? An Overly Abstract Explanation | johnswentworth | 9mo | 54 |
| 217 | What do ML researchers think about AI in 2022? | KatjaGrace | 4mo | 33 |
| 212 | Fun with +12 OOMs of Compute | Daniel Kokotajlo | 1y | 78 |
| 207 | Draft report on AI timelines | Ajeya Cotra | 2y | 56 |
| 195 | A concrete bet offer to those with short AI timelines | Matthew Barnett | 8mo | 104 |
| 195 | Brain Efficiency: Much More than You Wanted to Know | jacob_cannell | 11mo | 87 |
| 191 | Yudkowsky and Christiano discuss "Takeoff Speeds" | Eliezer Yudkowsky | 1y | 181 |
| 181 | Biology-Inspired AGI Timelines: The Trick That Never Works | Eliezer Yudkowsky | 1y | 143 |
| 174 | human psycholinguists: a critical appraisal | nostalgebraist | 2y | 59 |
| 167 | Jeff Hawkins on neuromorphic AGI within 20 years | Steven Byrnes | 3y | 24 |