Tags (201 posts): Donation writeup, Impact assessment, Donor lotteries, AI Impacts, Charity evaluation, Future of Humanity Institute, LessWrong, Centre for the Study of Existential Risk, Future of Life Institute, Nonlinear Fund, Machine Intelligence Research Institute, Centre for the Governance of AI
Tags (90 posts): Effective Altruism Funds, Long-Term Future Fund, Effective Altruism Infrastructure Fund, University groups, Building the field of AI safety, AI Safety Camp, Student projects, Longtermist Entrepreneurship Fellowship
Karma | Title | Author | Age | Comments
21 | Why mechanistic interpretability does not and cannot contribute to long-term AGI safety (from messages with a friend) | Remmelt | 1d | 2
26 | An appraisal of the Future of Life Institute AI existential risk program | PabloAMC | 9d | 0
50 | A Barebones Guide to Mechanistic Interpretability Prerequisites | Neel Nanda | 21d | 1
53 | The Slippery Slope from DALLE-2 to Deepfake Anarchy | stecas | 1mo | 11
47 | Join the interpretability research hackathon | Esben Kran | 1mo | 0
22 | The limited upside of interpretability | Peter S. Park | 1mo | 3
35 | Mildly Against Donor Lotteries | Jeff Kaufman | 1mo | 20
126 | Did OpenPhil ever publish their in-depth review of their three-year OpenAI grant? | Markus Amalthea Magnuson | 5mo | 2
51 | Common misconceptions about OpenAI | Jacob_Hilton | 3mo | 2
3 | CFAR Anki deck | Will Aldred | 7d | 3
8 | Is there a demo of "You can't fetch the coffee if you're dead"? | Ram Rachum | 1mo | 3
109 | Apply to the second ML for Alignment Bootcamp (MLAB 2) in Berkeley [Aug 15 - Fri Sept 2] | Buck | 7mo | 7
170 | EA needs a hiring agency and Nonlinear will fund you to start one | Kat Woods | 11mo | 12
165 | 2021 AI Alignment Literature Review and Charity Comparison | Larks | 12mo | 18
27 | Results for a survey of tool use and workflows in alignment research | jacquesthibs | 1d | 0
116 | The EA Infrastructure Fund seems to have paused its grantmaking and approved grant payments. Why? | Markus Amalthea Magnuson | 14d | 7
81 | Announcing the Cambridge Boston Alignment Initiative [Hiring!] | kuhanj | 18d | 0
65 | Update on Harvard AI Safety Team and MIT AI Alignment | Xander Davies | 18d | 3
23 | Analysis of AI Safety surveys for field-building insights | Ash Jafari | 15d | 6
81 | AI Safety groups should imitate career development clubs | Joshc | 1mo | 5
13 | Grantmaking Bowl: An EA Student Competition Idea | Cullen_OKeefe | 14d | 0
38 | [Closing Nov 20th] University Group Accelerator Program Applications are Open | jessica_mccurdy | 1mo | 0
58 | EA Funds has a Public Grants Database | calebp | 2mo | 7
124 | Announcing the AI Safety Field Building Hub, a new effort to provide AISFB projects, mentorship, and funding | Vael Gates | 4mo | 6
70 | Establishing Oxford’s AI Safety Student Group: Lessons Learnt and Our Model | Wilkin1234 | 3mo | 0
128 | Announcing the Harvard AI Safety Team | Xander Davies | 5mo | 4
64 | Announcing an Empirical AI Safety Program | Joshc | 3mo | 7
84 | Some advice the CEA groups team gives to new university group organizers | jessica_mccurdy | 4mo | 3