57 posts · Tags: Cluelessness, Fermi estimate, Risk aversion, Model uncertainty, Disentanglement research, Indirect long-term effects, Philanthropic diversification, Crucial consideration, Moral offsetting, Meat-eater problem, Broad vs. narrow interventions, Hilary Greaves
142 posts · Tags: Rationality, Epistemic deference, Epistemology, Statistics, Cognitive bias, Optimizer's curse, Bayesian epistemology, Scope neglect, Independent impression, Giving and happiness, Thinking at the margin, Conflict theory vs. mistake theory
Karma · Title · Author · Posted · Comments
170 · Clarifications on diminishing returns and risk aversion in giving · Robert_Wiblin · 25d · 25
125 · A practical guide to long-term planning – and suggestions for longtermism · weeatquince · 1y · 12
115 · Evidence, cluelessness, and the long term - Hilary Greaves · james · 2y · 85
106 · EAs underestimate uncertainty in cause prioritisation · freedomandutility · 3mo · 20
91 · Meat Externalities · Richard Y Chappell · 5mo · 12
73 · Hedging against deep and moral uncertainty · MichaelStJules · 2y · 11
73 · What's your prior probability that "good things are good" (for the long-term future)? · Linch · 10mo · 12
69 · Complex cluelessness as credal fragility · Gregory Lewis · 1y · 50
67 · What should we call the other problem of cluelessness? · Owen Cotton-Barratt · 1y · 22
54 · Concerns with Difference-Making Risk Aversion · Charlotte · 6mo · 1
54 · Should marginal longtermist donations support fundamental or intervention research? · MichaelA · 2y · 4
47 · Guesstimate Algorithm for Medical Research · Elizabeth · 2mo · 2
44 · Introduction to Fermi estimates · NunoSempere · 3mo · 5
42 · Why does GiveWell not provide lower and upper estimates for the cost-effectiveness of its top charities? · Vasco Grilo · 4mo · 8
Karma · Title · Author · Posted · Comments
239 · Flimsy Pet Theories, Enormous Initiatives · Ozzie Gooen · 1y · 57
200 · Reality is often underpowered · Gregory Lewis · 3y · 17
168 · Beware surprising and suspicious convergence · Gregory Lewis · 6y · 22
164 · List of ways in which cost-effectiveness estimates can be misleading · saulius · 3y · 30
158 · Global health is important for the epistemic foundations of EA, even for longtermists · Owen Cotton-Barratt · 6mo · 16
146 · EA should blurt · RobBensinger · 28d · 26
142 · Invisible impact loss (and why we can be too error-averse) · Lizka · 2mo · 14
139 · Independent impressions · MichaelA · 1y · 7
134 · Some thoughts on deference and inside-view models · Buck · 2y · 31
134 · Deference Culture in EA · Joey · 6mo · 23
128 · When reporting AI timelines, be clear who you're (not) deferring to · Sam Clarke · 2mo · 20
118 · Deferring · Owen Cotton-Barratt · 7mo · 40
110 · In defence of epistemic modesty · Gregory Lewis · 5y · 49
107 · Limits to Legibility · Jan_Kulveit · 5mo · 3