Tags: AI boxing (1 post), AI safety (99 posts)
Score | Title | Author | Posted | Comments
2 | Mutual Assured Destruction used against AGI | L3opard | 2mo | 5
65 | AGI Timelines in Governance: Different Strategies for Different Timeframes | simeon_c | 1d | 17
82 | All AGI Safety questions welcome (especially basic ones) [~monthly thread] | robertskmiles | 1mo | 94
53 | Two contrasting models of “intelligence” and future growth | Magnus Vinding | 26d | 17
57 | Race to the Top: Benchmarks for AI Safety | isaduan | 16d | 8
186 | A challenge for AGI organizations, and a challenge for readers | RobBensinger | 19d | 13
34 | "Write a critical post about Effective Altruism, and offer suggestions on how to improve the movement." | David van Beveren | 14d | 6
46 | Estimating the Current and Future Number of AI Safety Researchers | Stephen McAleese | 2mo | 28
81 | AI Safety Seems Hard to Measure | Holden Karnofsky | 9d | 2
65 | What does it mean for an AGI to be 'safe'? | So8res | 2mo | 21
60 | What Do AI Safety Pitches Not Get About Your Field? | Aris Richardson | 3mo | 19
18 | Benefits/Risks of Scott Aaronson's Orthodox/Reform Framing for AI Alignment | Jeremy | 29d | 5
67 | AI Safety Ideas: A collaborative AI safety research platform | Apart Research | 2mo | 13
20 | Takeaways from a survey on AI alignment resources | DanielFilan | 1mo | 9
5 | AI Safety Pitches post ChatGPT | ojorgensen | 15d | 2