291 posts · Tags: Donation writeup, Effective Altruism Funds, Impact assessment, Long-Term Future Fund, Donor lotteries, Effective Altruism Infrastructure Fund, AI Impacts, University groups, Charity evaluation, Future of Humanity Institute, Centre for the Study of Existential Risk, LessWrong
1439 posts · Tags: Effective altruism funding, Criticism of effective altruism culture, Prize, FTX Foundation, Effective altruism culture, Certificate of impact, Grantmaking, Markets for altruism, Sam Bankman-Fried, Future Fund, Funding opportunities, Mechanism design
26 · Results for a survey of tool use and workflows in alignment research · jacquesthibs · 1d · 0 comments
26 · Why mechanistic interpretability does not and cannot contribute to long-term AGI safety (from messages with a friend) · Remmelt · 1d · 2 comments
98 · The EA Infrastructure Fund seems to have paused its grantmaking and approved grant payments. Why? · Markus Amalthea Magnuson · 14d · 7 comments
78 · Announcing the Cambridge Boston Alignment Initiative [Hiring!] · kuhanj · 18d · 0 comments
63 · Update on Harvard AI Safety Team and MIT AI Alignment · Xander Davies · 18d · 3 comments
25 · An appraisal of the Future of Life Institute AI existential risk program · PabloAMC · 9d · 0 comments
41 · A Barebones Guide to Mechanistic Interpretability Prerequisites · Neel Nanda · 21d · 1 comment
82 · AI Safety groups should imitate career development clubs · Joshc · 1mo · 5 comments
24 · Analysis of AI Safety surveys for field-building insights · Ash Jafari · 15d · 6 comments
16 · Grantmaking Bowl: An EA Student Competition Idea · Cullen_OKeefe · 14d · 0 comments
37 · [Closing Nov 20th] University Group Accelerator Program Applications are Open · jessica_mccurdy · 1mo · 0 comments
52 · The Slippery Slope from DALLE-2 to Deepfake Anarchy · stecas · 1mo · 11 comments
57 · Join the interpretability research hackathon · Esben Kran · 1mo · 0 comments
77 · Establishing Oxford’s AI Safety Student Group: Lessons Learnt and Our Model · Wilkin1234 · 3mo · 0 comments
104 · Process for Returning FTX Funds Announced · Molly · 22h · 9 comments
11 · [The Guardian] FTX seeks to claw back donations to politicians and charities · Guy Raveh · 4h · 3 comments
211 · I'm less approving of the EA community now than before the FTX collapse · throwaway790 · 4d · 90 comments
32 · Should I set up a fund? · Doggypile · 1d · 8 comments
7 · Posit: Most AI safety people should work on alignment/safety challenges for AI tools that already have users (Stable Diffusion, GPT) · nonzerosum · 6h · 1 comment
24 · Introducing the Effective Institutions Project Innovation Fund, a new regranting option for donors · IanDavidMoss · 1d · 0 comments
267 · Learning from non-EAs who seek to do good · Siobhan_M · 12d · 22 comments
825 · We must be very clear: fraud in the service of effective altruism is unacceptable · evhub · 1mo · 90 comments
758 · Some comments on recent FTX-related events · Holden Karnofsky · 1mo · 77 comments
742 · The FTX Future Fund team has resigned · Nick_Beckstead · 1mo · 290 comments
121 · Reflections on Vox's "How effective altruism let SBF happen" · Richard Y Chappell · 8d · 34 comments
530 · The FTX crisis highlights a deeper cultural problem within EA - we don't sufficiently value good governance · Fods12 · 1mo · 55 comments
104 · Cryptocurrency is not all bad. We should stay away from it anyway. · titotal · 9d · 38 comments
477 · IMPCO, don't injure yourself by returning FTXFF money for services you already provided · EliezerYudkowsky · 1mo · 43 comments