Topics (57 posts): Cluelessness, Fermi estimate, Risk aversion, Model uncertainty, Disentanglement research, Indirect long-term effects, Philanthropic diversification, Crucial consideration, Moral offsetting, Meat-eater problem, Broad vs. narrow interventions, Hilary Greaves
Topics (142 posts): Rationality, Epistemic deference, Epistemology, Statistics, Cognitive bias, Optimizer's curse, Bayesian epistemology, Scope neglect, Independent impression, Giving and happiness, Thinking at the margin, Conflict theory vs. mistake theory
Karma | Title | Author | Posted | Comments
141 | Clarifications on diminishing returns and risk aversion in giving | Robert_Wiblin | 25d | 25
116 | Evidence, cluelessness, and the long term - Hilary Greaves | james | 2y | 85
115 | A practical guide to long-term planning – and suggestions for longtermism | weeatquince | 1y | 12
107 | EAs underestimate uncertainty in cause prioritisation | freedomandutility | 3mo | 20
78 | Meat Externalities | Richard Y Chappell | 5mo | 12
62 | Hedging against deep and moral uncertainty | MichaelStJules | 2y | 11
57 | What's your prior probability that "good things are good" (for the long-term future)? | Linch | 10mo | 12
56 | Complex cluelessness as credal fragility | Gregory Lewis | 1y | 50
52 | What should we call the other problem of cluelessness? | Owen Cotton-Barratt | 1y | 22
46 | Doing good while clueless | Milan_Griffes | 4y | 8
45 | Why does GiveWell not provide lower and upper estimates for the cost-effectiveness of its top charities? | Vasco Grilo | 4mo | 8
45 | Introduction to Fermi estimates | NunoSempere | 3mo | 5
43 | Should marginal longtermist donations support fundamental or intervention research? | MichaelA | 2y | 4
41 | Concerns with Difference-Making Risk Aversion | Charlotte | 6mo | 1
Karma | Title | Author | Posted | Comments
209 | Reality is often underpowered | Gregory Lewis | 3y | 17
208 | Flimsy Pet Theories, Enormous Initiatives | Ozzie Gooen | 1y | 57
204 | List of ways in which cost-effectiveness estimates can be misleading | saulius | 3y | 30
167 | Beware surprising and suspicious convergence | Gregory Lewis | 6y | 22
160 | Global health is important for the epistemic foundations of EA, even for longtermists | Owen Cotton-Barratt | 6mo | 16
154 | EA should blurt | RobBensinger | 28d | 26
140 | Deference Culture in EA | Joey | 6mo | 23
133 | Some thoughts on deference and inside-view models | Buck | 2y | 31
131 | Independent impressions | MichaelA | 1y | 7
120 | Invisible impact loss (and why we can be too error-averse) | Lizka | 2mo | 14
116 | When reporting AI timelines, be clear who you're (not) deferring to | Sam Clarke | 2mo | 20
114 | In defence of epistemic modesty | Gregory Lewis | 5y | 49
102 | Deferring | Owen Cotton-Barratt | 7mo | 40
98 | Limits to Legibility | Jan_Kulveit | 5mo | 3