AI alignment (178 posts)

Related topics: AI risk, AI safety, AI forecasting, Artificial intelligence, Eliezer Yudkowsky, Paul Christiano, Swiss Existential Risk Initiative, AI takeoff, Digital person, Ethics of artificial intelligence, Dual-use, Epoch
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 65 | AGI Timelines in Governance: Different Strategies for Different Timeframes | simeon_c | 1d | 17 |
| 64 | High-level hopes for AI alignment | Holden Karnofsky | 21h | 1 |
| -3 | AGI Isn’t Close - Future Fund Worldview Prize | Toni MUENDEL | 2d | 14 |
| 33 | How would you estimate the value of delaying AGI by 1 day, in marginal GiveWell donations? | AnonymousAccount | 4d | 19 |
| 326 | Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest | Jason Schukraft | 29d | 18 |
| 60 | We should say more than “x-risk is high” | OllieBase | 4d | 6 |
| 82 | All AGI Safety questions welcome (especially basic ones) [~monthly thread] | robertskmiles | 1mo | 94 |
| 26 | Existential AI Safety is NOT separate from near-term applications | stecas | 7d | 9 |
| 37 | Have your timelines changed as a result of ChatGPT? | Chris Leong | 15d | 18 |
| 109 | ‘Dissolving’ AI Risk – Parameter Uncertainty in AI Future Forecasting | Froolow | 2mo | 63 |
| 53 | Two contrasting models of “intelligence” and future growth | Magnus Vinding | 26d | 17 |
| 57 | Race to the Top: Benchmarks for AI Safety | isaduan | 16d | 8 |
| 27 | Questions about AI that bother me | Eleni_A | 7d | 5 |
| 186 | A challenge for AGI organizations, and a challenge for readers | RobBensinger | 19d | 13 |
| 6 | Share your requests for ChatGPT | Kate Tran | 15d | 4 |
| 40 | The religion problem in AI alignment | Geoffrey Miller | 3mo | 27 |
| 10 | Why AGIs utility can't outweigh humans' utility? | Alex P | 3mo | 26 |
| 142 | Lessons learned from talking to >100 academics about AI safety | mariushobbhahn | 2mo | 17 |
| 34 | AI alignment with humans... but with which humans? | Geoffrey Miller | 3mo | 15 |
| 5 | Why not to solve alignment by making superintelligent humans? | Patricio | 2mo | 12 |
| 31 | Two reasons we might be closer to solving alignment than it seems | Kat Woods | 2mo | 18 |
| 71 | 7 traps that (we think) new alignment researchers often fall into | Akash | 2mo | 13 |
| 174 | How might we align transformative AI if it’s developed very soon? | Holden Karnofsky | 3mo | 16 |
| 103 | Alignment 201 curriculum | richard_ngo | 2mo | 8 |
| 18 | Should I force myself to work on AGI alignment? | Isaac Benson | 3mo | 17 |
| 10 | Who would you have on your dream team for solving AGI Alignment? | Greg_Colbourn | 3mo | 14 |
| 3 | Does the idea of AGI that benevolently control us appeal to EA folks? | Noah Scales | 5mo | 20 |
| 28 | A stubborn unbeliever finally gets the depth of the AI alignment problem | aelwood | 2mo | 7 |