Tags (25 posts): LessWrong, Centre for the Study of Existential Risk, Future of Life Institute, Centre for the Governance of AI, Lightcone Infrastructure, Center for Security and Emerging Technology, Rationality community

Tags (8 posts): Defense in depth, Future of Humanity Institute, Anders Sandberg, Compound existential risk
Posts (karma · title — author · age · comments):

27 · An appraisal of the Future of Life Institute AI existential risk program — PabloAMC · 9d · 0 comments
5 · CFAR Anki deck — Will Aldred · 7d · 3 comments
158 · Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22] — Habryka · 1y · 7 comments
212 · Shallow evaluations of longtermist organizations — NunoSempere · 1y · 34 comments
36 · Consider participating in ACX Meetups Everywhere — Habryka · 4mo · 1 comment
83 · What (standalone) LessWrong posts would you recommend to most EA community members? — Vaidehi Agarwalla · 10mo · 19 comments
80 · The LessWrong Team is now Lightcone Infrastructure, come work with us! — Habryka · 1y · 2 comments
85 · The Centre for the Governance of AI is becoming a nonprofit — MarkusAnderljung · 1y · 7 comments
9 · Looping — Jarred Filmer · 2mo · 4 comments
45 · Low-Commitment Less Wrong Book (EG Article) Club — Jeremy · 10mo · 25 comments
14 · FLI is hiring a new Director of US Policy — aaguirre · 4mo · 0 comments
46 · The Centre for the Governance of AI has Relaunched — GovAI · 1y · 0 comments
21 · I'm interviewing Max Tegmark about AI safety and more. What should I ask him? — Robert_Wiblin · 7mo · 2 comments
55 · LessWrong is now a book, available for pre-order! — jacobjacob · 2y · 1 comment
45 · What "defense layers" should governments, AI labs, and businesses use to prevent catastrophic AI failures? — alexlintz · 1y · 3 comments
8 · The aestivation hypothesis for resolving Fermi's paradox (Sandberg, Armstrong & Cirkovic, 2017) — Will Aldred · 7mo · 0 comments
32 · Cotton-Barratt, Daniel & Sandberg, 'Defence in Depth Against Human Extinction' — Pablo · 2y · 3 comments
28 · The Web of Prevention — Will Bradshaw · 2y · 7 comments
13 · [link] Centre for the Governance of AI 2020 Annual Report — MarkusAnderljung · 1y · 5 comments
27 · Combination Existential Risks — ozymandias · 3y · 5 comments
9 · Future of Humanity Institute is hiring — Andrew_SB · 7y · 6 comments
3 · FHI is hiring a project manager — kdbscott · 5y · 0 comments