Tags (162 posts): Donation writeup, Impact assessment, Donor lotteries, AI Impacts, Nonlinear Fund, Machine Intelligence Research Institute, Ought, Berkeley Existential Risk Initiative, OpenAI, AI interpretability, Survival and Flourishing, Global Catastrophic Risk Institute
Tags (39 posts): Charity evaluation, LessWrong, Future of Humanity Institute, Future of Life Institute, Centre for the Study of Existential Risk, Centre for the Governance of AI, Defense in depth, All-Party Parliamentary Group for Future Generations, Rationality community, Anders Sandberg, Centre for Long-Term Resilience, Lightcone Infrastructure
Karma | Title | Author | Posted | Comments
197 | 2021 AI Alignment Literature Review and Charity Comparison | Larks | 12mo | 18
171 | Listen to more EA content with The Nonlinear Library | Kat Woods | 1y | 89
163 | 2020 AI Alignment Literature Review and Charity Comparison | Larks | 1y | 16
148 | EA needs a hiring agency and Nonlinear will fund you to start one | Kat Woods | 11mo | 12
147 | Donor Lottery Debrief | TimothyTelleenLawton | 2y | 17
140 | Did OpenPhil ever publish their in-depth review of their three-year OpenAI grant? | Markus Amalthea Magnuson | 5mo | 2
132 | Valuing research works by eliciting comparisons from EA researchers | NunoSempere | 9mo | 22
117 | 2017 Donor Lottery Report | AdamGleave | 4y | 9
116 | My Q1 2019 EA Hotel donation | vipulnaik | 3y | 6
111 | We're Redwood Research, we do applied alignment research, AMA | Buck | 1y | 49
105 | Apply to the second ML for Alignment Bootcamp (MLAB 2) in Berkeley [Aug 15 - Fri Sept 2] | Buck | 7mo | 7
104 | ARC is hiring alignment theory researchers | Paul_Christiano | 1y | 3
100 | Relative Impact of the First 10 EA Forum Prize Winners | NunoSempere | 1y | 35
90 | Impact is very complicated | Justis | 7mo | 12
212 | Shallow evaluations of longtermist organizations | NunoSempere | 1y | 34
158 | Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22] | Habryka | 1y | 7
85 | The Centre for the Governance of AI is becoming a nonprofit | MarkusAnderljung | 1y | 7
83 | What (standalone) LessWrong posts would you recommend to most EA community members? | Vaidehi Agarwalla | 10mo | 19
80 | The LessWrong Team is now Lightcone Infrastructure, come work with us! | Habryka | 1y | 2
77 | Proposal: Impact List -- like the Forbes List except for impact via donations | Elliot_Olds | 6mo | 30
63 | Concerns about AMF from GiveWell reading - Part 3 | JPHoughton | 11mo | 6
55 | LessWrong is now a book, available for pre-order! | jacobjacob | 2y | 1
46 | The Centre for the Governance of AI has Relaunched | GovAI | 1y | 0
45 | What “defense layers” should governments, AI labs, and businesses use to prevent catastrophic AI failures? | alexlintz | 1y | 3
45 | Low-Commitment Less Wrong Book (EG Article) Club | Jeremy | 10mo | 25
36 | Consider participating in ACX Meetups Everywhere | Habryka | 4mo | 1
32 | Cotton-Barratt, Daniel & Sandberg, 'Defence in Depth Against Human Extinction' | Pablo | 2y | 3
30 | We’re discontinuing the standout charity designation | GiveWell | 1y | 11