33 posts
Tags: LessWrong · Future of Humanity Institute · Centre for the Study of Existential Risk · Future of Life Institute · Centre for the Governance of AI · Defense in depth · Lightcone Infrastructure · Center for Security and Emerging Technology · Rationality community · Anders Sandberg · Compound existential risk
6 posts
Tags: Charity evaluation · All-Party Parliamentary Group for Future Generations · Centre for Long-Term Resilience
- An appraisal of the Future of Life Institute AI existential risk program (PabloAMC, 9d) · 25 points · 0 comments
- Looping (Jarred Filmer, 2mo) · 25 points · 4 comments
- Shallow evaluations of longtermist organizations (NunoSempere, 1y) · 168 points · 34 comments
- Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22] (Habryka, 1y) · 122 points · 7 comments
- Consider participating in ACX Meetups Everywhere (Habryka, 4mo) · 28 points · 1 comment
- CFAR Anki deck (Will Aldred, 7d) · 1 point · 3 comments
- What (standalone) LessWrong posts would you recommend to most EA community members? (Vaidehi Agarwalla, 10mo) · 51 points · 19 comments
- The Centre for the Governance of AI has Relaunched (GovAI, 1y) · 58 points · 0 comments
- The Centre for the Governance of AI is becoming a nonprofit (MarkusAnderljung, 1y) · 71 points · 7 comments
- FLI is hiring a new Director of US Policy (aaguirre, 4mo) · 14 points · 0 comments
- The LessWrong Team is now Lightcone Infrastructure, come work with us! (Habryka, 1y) · 50 points · 2 comments
- Low-Commitment Less Wrong Book (EG Article) Club (Jeremy, 10mo) · 33 points · 25 comments
- The aestivation hypothesis for resolving Fermi’s paradox (Sandberg, Armstrong & Cirkovic, 2017) (Will Aldred, 7mo) · 22 points · 0 comments
- Halifax, NS – Monthly Rationalist, EA, and ACX Meetup Kick-Off (Ideopunk, 2mo) · 4 points · 0 comments
- Proposal: Impact List -- like the Forbes List except for impact via donations (Elliot_Olds, 6mo) · 77 points · 30 comments
- Concerns about AMF from GiveWell reading - Part 3 (JPHoughton, 11mo) · 57 points · 6 comments
- How can one evaluate a charity's capacity to utilize funds beyond its annual budget? (haywyer, 1mo) · 4 points · 0 comments
- We’re discontinuing the standout charity designation (GiveWell, 1y) · 12 points · 11 comments
- How we verify charities' claims (Animal Charity Evaluators, 11mo) · 4 points · 0 comments
- Incentivizing Charity Cooperation (Dawn Drescher, 7y) · 10 points · 1 comment