Tags (162 posts): Donation writeup, Impact assessment, Donor lotteries, AI Impacts, Nonlinear Fund, Machine Intelligence Research Institute, Ought, Berkeley Existential Risk Initiative, OpenAI, AI interpretability, Survival and Flourishing, Global Catastrophic Risk Institute
Tags (39 posts): Charity evaluation, LessWrong, Future of Humanity Institute, Future of Life Institute, Centre for the Study of Existential Risk, Centre for the Governance of AI, Defense in depth, All-Party Parliamentary Group for Future Generations, Rationality community, Anders Sandberg, Centre for Long-Term Resilience, Lightcone Infrastructure
Posts:
Why mechanistic interpretability does not and cannot contribute to long-term AGI safety (from messages with a friend) · Remmelt · 1d · 26 karma · 2 comments
The limited upside of interpretability · Peter S. Park · 1mo · 19 karma · 3 comments
Mildly Against Donor Lotteries · Jeff Kaufman · 1mo · 26 karma · 20 comments
AMA: Ought · stuhlmueller · 4mo · 36 karma · 52 comments
The Slippery Slope from DALLE-2 to Deepfake Anarchy · stecas · 1mo · 52 karma · 11 comments
Is there a demo of "You can't fetch the coffee if you're dead"? · Ram Rachum · 1mo · 9 karma · 3 comments
A Barebones Guide to Mechanistic Interpretability Prerequisites · Neel Nanda · 21d · 41 karma · 1 comment
Listen to more EA content with The Nonlinear Library · Kat Woods · 1y · 193 karma · 89 comments
Common misconceptions about OpenAI · Jacob_Hilton · 3mo · 46 karma · 2 comments
Valuing research works by eliciting comparisons from EA researchers · NunoSempere · 9mo · 86 karma · 22 comments
Did OpenPhil ever publish their in-depth review of their three-year OpenAI grant? · Markus Amalthea Magnuson · 5mo · 112 karma · 2 comments
The Survival and Flourishing Fund grant applications open until August 23rd ($8m-$12m planned for dispersal) · Larks · 1y · 11 karma · 3 comments
Apply to the second ML for Alignment Bootcamp (MLAB 2) in Berkeley [Aug 15 - Fri Sept 2] · Buck · 7mo · 113 karma · 7 comments
Charity Navigator acquired ImpactMatters and is starting to mention "cost-effectiveness" as important · Luke Freeman · 2y · 42 karma · 6 comments
CFAR Anki deck · Will Aldred · 7d · 1 karma · 3 comments
Looping · Jarred Filmer · 2mo · 25 karma · 4 comments
Proposal: Impact List -- like the Forbes List except for impact via donations · Elliot_Olds · 6mo · 77 karma · 30 comments
Consider participating in ACX Meetups Everywhere · Habryka · 4mo · 28 karma · 1 comment
The LessWrong Team is now Lightcone Infrastructure, come work with us! · Habryka · 1y · 50 karma · 2 comments
An appraisal of the Future of Life Institute AI existential risk program · PabloAMC · 9d · 25 karma · 0 comments
Centre for the Study of Existential Risk update · Sean_o_h · 6y · 15 karma · 2 comments
The Centre for the Governance of AI has Relaunched · GovAI · 1y · 58 karma · 0 comments
Learning From Less Wrong: Special Threads, and Making This Forum More Useful · Evan_Gaensbauer · 8y · 6 karma · 21 comments
FLI is hiring a new Director of US Policy · aaguirre · 4mo · 14 karma · 0 comments
I'm interviewing Max Tegmark about AI safety and more. What should I ask him? · Robert_Wiblin · 7mo · 15 karma · 2 comments
Lesswrong Diaspora survey · elo · 6y · 5 karma · 5 comments
LessWrong is now a book, available for pre-order! · jacobjacob · 2y · 41 karma · 1 comment
New positions and recent hires at the Centre for the Study of Existential Risk · Sean_o_h · 7y · 10 karma · 2 comments