AI forecasting (62 posts)

Related tags: Paul Christiano (23 posts), Eliezer Yudkowsky, AI takeoff, Epoch, Fund for Alignment Research (FAR)
| Karma | Title | Author | Posted | Comments |
|------:|-------|--------|--------|---------:|
| 131 | Survey on AI existential risk scenarios | Sam Clarke | 1y | 7 |
| 129 | Samotsvety's AI risk forecasts | elifland | 3mo | 30 |
| 101 | AI timelines via bioanchors: the debate in one place | Will Aldred | 4mo | 6 |
| 93 | A Bird's Eye View of the ML Field [Pragmatic AI Safety #2] | ThomasW | 7mo | 2 |
| 80 | A concern about the “evolutionary anchor” of Ajeya Cotra’s report on AI timelines. | NunoSempere | 4mo | 43 |
| 79 | 2022 AI expert survey results | Zach Stein-Perlman | 4mo | 7 |
| 67 | AI Forecasting Research Ideas | Jaime Sevilla | 1mo | 1 |
| 64 | Disagreement with bio anchors that lead to shorter timelines | mariushobbhahn | 1mo | 1 |
| 62 | AI Timelines: Where the Arguments, and the "Experts," Stand | Holden Karnofsky | 1y | 2 |
| 53 | Roodman's Thoughts on Biological Anchors | lukeprog | 3mo | 7 |
| 51 | Report on Semi-informative Priors for AI timelines (Open Philanthropy) | Tom_Davidson | 1y | 6 |
| 45 | Ajeya's TAI timeline shortened from 2050 to 2040 | Zach Stein-Perlman | 4mo | 2 |
| 41 | Metaculus is building a team dedicated to AI forecasting | christian | 2mo | 0 |
| 41 | What are the numbers in mind for the super-short AGI timelines so many long-termists are alarmed about? | Evan_Gaensbauer | 8mo | 3 |
| Karma | Title | Author | Posted | Comments |
|------:|-------|--------|--------|---------:|
| 280 | On Deference and Yudkowsky's AI Risk Estimates | Ben Garfinkel | 6mo | 188 |
| 184 | Announcing Epoch: A research organization investigating the road to Transformative AI | Jaime Sevilla | 5mo | 11 |
| 73 | Introducing the Fund for Alignment Research (We're Hiring!) | AdamGleave | 5mo | 3 |
| 54 | MIRI announces new "Death With Dignity" strategy (Yudkowsky, 2022) | Will Aldred | 5mo | 0 |
| 49 | Discussion with Eliezer Yudkowsky on AGI interventions | RobBensinger | 1y | 35 |
| 49 | Grokking “Semi-informative priors over AI timelines” | anson | 6mo | 1 |
| 43 | Continuity Assumptions | Jan_Kulveit | 6mo | 4 |
| 39 | Grokking “Forecasting TAI with biological anchors” | anson | 6mo | 0 |
| 36 | Knowing About Biases Can Hurt People (Yudkowsky, 2007) | Will Aldred | 4mo | 1 |
| 35 | Yudkowsky and Christiano discuss "Takeoff Speeds" | EliezerYudkowsky | 1y | 0 |
| 33 | Shulman and Yudkowsky on AI progress | CarlShulman | 1y | 0 |
| 31 | MIRI Conversations: Technology Forecasting & Gradualism (Distillation) | TheMcDouglas | 5mo | 9 |
| 30 | My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda | Chi | 2y | 3 |
| 20 | "Slower tech development" can be about ordering, gradualness, or distance from now | MichaelA | 1y | 3 |