Tags: AI boxing (1 post) · AI safety (99 posts)
Score | Title                                                                                                    | Author           | Posted | Comments
------|----------------------------------------------------------------------------------------------------------|------------------|--------|---------
    6 | Mutual Assured Destruction used against AGI                                                              | L3opard          | 2mo    | 5
   67 | AGI Timelines in Governance: Different Strategies for Different Timeframes                               | simeon_c         | 1d     | 17
   75 | AI Safety Seems Hard to Measure                                                                          | Holden Karnofsky | 9d     | 2
  144 | A challenge for AGI organizations, and a challenge for readers                                           | RobBensinger     | 19d    | 13
   45 | Race to the Top: Benchmarks for AI Safety                                                                | isaduan          | 16d    | 8
   57 | Two contrasting models of “intelligence” and future growth                                               | Magnus Vinding   | 26d    | 17
   14 | 12 career advising questions that may (or may not) be helpful for people interested in alignment research | Akash           | 8d     | 0
   24 | "Write a critical post about Effective Altruism, and offer suggestions on how to improve the movement."  | David van Beveren | 14d   | 6
   50 | Announcing AI Alignment Awards: $100k research contests about goal misgeneralization & corrigibility     | Akash            | 28d    | 1
   56 | All AGI Safety questions welcome (especially basic ones) [~monthly thread]                               | robertskmiles    | 1mo    | 94
   20 | Distinguishing test from training                                                                        | So8res           | 21d    | 0
   67 | AI Safety Ideas: A collaborative AI safety research platform                                             | Apart Research   | 2mo    | 13
   44 | Applications are now open for Intro to ML Safety Spring 2023                                             | Joshc            | 1mo    | 1
   67 | Anonymous advice: If you want to reduce AI risk, should you take roles that advance AI capabilities?     | Benjamin Hilton  | 2mo    | 9
   80 | What Do AI Safety Pitches Not Get About Your Field?                                                      | Aris Richardson  | 3mo    | 19