Branch 1 (201 posts): Donation writeup, Impact assessment, Donor lotteries, AI Impacts, Charity evaluation, Future of Humanity Institute, LessWrong, Centre for the Study of Existential Risk, Future of Life Institute, Nonlinear Fund, Machine Intelligence Research Institute, Centre for the Governance of AI

Branch 2 (90 posts): Effective Altruism Funds, Long-Term Future Fund, Effective Altruism Infrastructure Fund, University groups, Building the field of AI safety, AI Safety Camp, Student projects, Longtermist Entrepreneurship Fellowship
Posts:

Karma | Title | Author | Age | Comments
3 | CFAR Anki deck | Will Aldred | 7d | 3
21 | Why mechanistic interpretability does not and cannot contribute to long-term AGI safety (from messages with a friend) | Remmelt | 1d | 2
22 | The limited upside of interpretability | Peter S. Park | 1mo | 3
35 | Mildly Against Donor Lotteries | Jeff Kaufman | 1mo | 20
41 | AMA: Ought | stuhlmueller | 4mo | 52
53 | The Slippery Slope from DALLE-2 to Deepfake Anarchy | stecas | 1mo | 11
8 | Is there a demo of "You can't fetch the coffee if you're dead"? | Ram Rachum | 1mo | 3
50 | A Barebones Guide to Mechanistic Interpretability Prerequisites | Neel Nanda | 21d | 1
17 | Looping | Jarred Filmer | 2mo | 4
182 | Listen to more EA content with The Nonlinear Library | Kat Woods | 1y | 89
51 | Common misconceptions about OpenAI | Jacob_Hilton | 3mo | 2
77 | Proposal: Impact List -- like the Forbes List except for impact via donations | Elliot_Olds | 6mo | 30
109 | Valuing research works by eliciting comparisons from EA researchers | NunoSempere | 9mo | 22
126 | Did OpenPhil ever publish their in-depth review of their three-year OpenAI grant? | Markus Amalthea Magnuson | 5mo | 2
116 | The EA Infrastructure Fund seems to have paused its grantmaking and approved grant payments. Why? | Markus Amalthea Magnuson | 14d | 7
23 | Analysis of AI Safety surveys for field-building insights | Ash Jafari | 15d | 6
69 | Public reports are now optional for EA Funds grantees | abergal | 1y | 32
65 | Update on Harvard AI Safety Team and MIT AI Alignment | Xander Davies | 18d | 3
81 | AI Safety groups should imitate career development clubs | Joshc | 1mo | 5
31 | Stress Externalities More in AI Safety Pitches | NickGabs | 2mo | 13
68 | Long-Term Future Fund: December 2021 grant recommendations | abergal | 4mo | 19
124 | Announcing the AI Safety Field Building Hub, a new effort to provide AISFB projects, mentorship, and funding | Vael Gates | 4mo | 6
58 | EA Funds has a Public Grants Database | calebp | 2mo | 7
64 | Announcing an Empirical AI Safety Program | Joshc | 3mo | 7
65 | Why You Should Give a TEDX Talk | Kearney Capuano | 4mo | 6
68 | The Tree of Life: Stanford AI Alignment Theory of Change | Gabriel Mukobi | 5mo | 5
5 | Oxford / Cambridge Union Societies | dotsam | 3mo | 3
23 | Challenges for EA Student Group Organizers | michel | 4mo | 4