Tags: AI boxing (1 post) · AI safety (99 posts)
Karma  Comments  Posted  Author             Title
    4         5  2mo     L3opard            Mutual Assured Destruction used against AGI
   66        17  1d      simeon_c           AGI Timelines in Governance: Different Strategies for Different Timeframes
   78         2  9d      Holden Karnofsky   AI Safety Seems Hard to Measure
  165        13  19d     RobBensinger       A challenge for AGI organizations, and a challenge for readers
   51         8  16d     isaduan            Race to the Top: Benchmarks for AI Safety
   13         0  8d      Akash              12 career advising questions that may (or may not) be helpful for people interested in alignment research
   29         6  14d     David van Beveren  "Write a critical post about Effective Altruism, and offer suggestions on how to improve the movement."
   55        17  26d     Magnus Vinding     Two contrasting models of “intelligence” and future growth
   58         1  28d     Akash              Announcing AI Alignment Awards: $100k research contests about goal misgeneralization & corrigibility
   27         0  21d     So8res             Distinguishing test from training
   69        94  1mo     robertskmiles      All AGI Safety questions welcome (especially basic ones) [~monthly thread]
   46         1  1mo     Joshc              Applications are now open for Intro to ML Safety Spring 2023
   67        13  2mo     Apart Research     AI Safety Ideas: A collaborative AI safety research platform
    6         2  15d     ojorgensen         AI Safety Pitches post ChatGPT
   72         9  2mo     Benjamin Hilton    Anonymous advice: If you want to reduce AI risk, should you take roles that advance AI capabilities?