162 posts
Topics: Donation writeup, Impact assessment, Donor lotteries, AI Impacts, Nonlinear Fund, Machine Intelligence Research Institute, Ought, Berkeley Existential Risk Initiative, OpenAI, AI interpretability, Survival and Flourishing, Global Catastrophic Risk Institute
39 posts
Topics: Charity evaluation, LessWrong, Future of Humanity Institute, Future of Life Institute, Centre for the Study of Existential Risk, Centre for the Governance of AI, Defense in depth, All-Party Parliamentary Group for Future Generations, Rationality community, Anders Sandberg, Centre for Long-Term Resilience, Lightcone Infrastructure
Karma | Title | Author | Posted | Comments
21 | Why mechanistic interpretability does not and cannot contribute to long-term AGI safety (from messages with a friend) | Remmelt | 1d | 2
50 | A Barebones Guide to Mechanistic Interpretability Prerequisites | Neel Nanda | 21d | 1
53 | The Slippery Slope from DALLE-2 to Deepfake Anarchy | stecas | 1mo | 11
47 | Join the interpretability research hackathon | Esben Kran | 1mo | 0
22 | The limited upside of interpretability | Peter S. Park | 1mo | 3
35 | Mildly Against Donor Lotteries | Jeff Kaufman | 1mo | 20
126 | Did OpenPhil ever publish their in-depth review of their three-year OpenAI grant? | Markus Amalthea Magnuson | 5mo | 2
51 | Common misconceptions about OpenAI | Jacob_Hilton | 3mo | 2
8 | Is there a demo of "You can't fetch the coffee if you're dead"? | Ram Rachum | 1mo | 3
109 | Apply to the second ML for Alignment Bootcamp (MLAB 2) in Berkeley [Aug 15 - Fri Sept 2] | Buck | 7mo | 7
170 | EA needs a hiring agency and Nonlinear will fund you to start one | Kat Woods | 11mo | 12
165 | 2021 AI Alignment Literature Review and Charity Comparison | Larks | 12mo | 18
94 | Impact is very complicated | Justis | 7mo | 12
6 | How likely are malign priors over objectives? [aborted WIP] | David Johnston | 1mo | 0
26 | An appraisal of the Future of Life Institute AI existential risk program | PabloAMC | 9d | 0
3 | CFAR Anki deck | Will Aldred | 7d | 3
77 | Proposal: Impact List -- like the Forbes List except for impact via donations | Elliot_Olds | 6mo | 30
32 | Consider participating in ACX Meetups Everywhere | Habryka | 4mo | 1
190 | Shallow evaluations of longtermist organizations | NunoSempere | 1y | 34
140 | Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22] | Habryka | 1y | 7
17 | Looping | Jarred Filmer | 2mo | 4
2 | How can one evaluate a charity's capacity to utilize funds beyond its annual budget? | haywyer | 1mo | 0
67 | What (standalone) LessWrong posts would you recommend to most EA community members? | Vaidehi Agarwalla | 10mo | 19
60 | Concerns about AMF from GiveWell reading - Part 3 | JPHoughton | 11mo | 6
14 | FLI is hiring a new Director of US Policy | aaguirre | 4mo | 0
39 | Low-Commitment Less Wrong Book (EG Article) Club | Jeremy | 10mo | 25
78 | The Centre for the Governance of AI is becoming a nonprofit | MarkusAnderljung | 1y | 7
65 | The LessWrong Team is now Lightcone Infrastructure, come work with us! | Habryka | 1y | 2