World Optimization (1687 posts): Ethics & Morality, Economics, Financial Investing, Postmortems & Retrospectives, Effective Altruism, Software Tools, Mechanism Design, Voting Theory, Ambition, Metaethics, Efficient Market Hypothesis
Community (1598 posts): Site Meta, Machine Intelligence Research Institute (MIRI), Events (Community), Center for Applied Rationality (CFAR), Petrov Day, Meetups & Local Communities (topic), Wiki/Tagging, LessWrong Review, Ritual, Secular Solstice, Updated Beliefs (examples of)
Karma | Title | Author | Age | Comments
23 | CEA Disambiguation | jefftk | 1d | 0
58 | Key Mostly Outward-Facing Facts From the Story of VaccinateCA | Zvi | 6d | 2
13 | The Risk of Orbital Debris and One (Cheap) Way to Mitigate It | clans | 1d | 1
136 | Be less scared of overconfidence | benkuhn | 20d | 20
22 | Introducing Shrubgrazer | jefftk | 4d | 0
42 | patio11's "Observations from an EA-adjacent (?) charitable effort" | RobertM | 10d | 0
133 | Sadly, FTX | Zvi | 1mo | 17
59 | Summary of a new study on out-group hate (and how to fix it) | AllAmericanBreakfast | 16d | 30
38 | Machine Learning Consent | jefftk | 12d | 14
114 | Speculation on Current Opportunities for Unusually High Impact in Global Health | johnswentworth | 1mo | 31
77 | Utilitarianism Meets Egalitarianism | Scott Garrabrant | 29d | 10
14 | Beyond a better world | Davidmanheim | 6d | 7
33 | Our 2022 Giving | jefftk | 17d | 0
53 | Geometric Exploration, Arithmetic Exploitation | Scott Garrabrant | 26d | 4
62 | The True Spirit of Solstice? | Raemon | 1d | 23
18 | Boston Solstice 2022 Retrospective | jefftk | 2d | 2
22 | Vaguely interested in Effective Altruism? Please Take the Official 2022 EA Survey | Peter Wildeford | 4d | 4
15 | Looking for an alignment tutor | JanBrauner | 3d | 2
17 | "Starry Night" Solstice Cookies | maia | 3d | 0
15 | There have been 3 planes (billionaire donors) and 2 have crashed | Trevor1 | 3d | 8
73 | Probably good projects for the AI safety ecosystem | Ryan Kidd | 15d | 15
80 | The LessWrong 2021 Review: Intellectual Circle Expansion | Ruby | 19d | 53
16 | Consider working more hours and taking more stimulants | Arjun Panickssery | 5d | 9
103 | LW Beta Feature: Side-Comments | jimrandomh | 26d | 47
100 | LessWrong readers are invited to apply to the Lurkshop | Jonas Vollmer | 28d | 38
122 | The Alignment Community Is Culturally Broken | sudo -i | 1mo | 67
56 | Update on Harvard AI Safety Team and MIT AI Alignment | Xander Davies | 18d | 4
243 | So, geez there's a lot of AI content these days | Raemon | 2mo | 133