Tags: AI risk (143 posts) · AI safety (100 posts) · AI boxing
Score | Title | Author | Posted | Comments
------|-------|--------|--------|---------
64  | High-level hopes for AI alignment | Holden Karnofsky | 21h | 1
33  | How would you estimate the value of delaying AGI by 1 day, in marginal GiveWell donations? | AnonymousAccount | 4d | 19
326 | Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest | Jason Schukraft | 29d | 18
60  | We should say more than “x-risk is high” | OllieBase | 4d | 6
109 | ‘Dissolving’ AI Risk – Parameter Uncertainty in AI Future Forecasting | Froolow | 2mo | 63
50  | Apply for the ML Winter Camp in Cambridge, UK [2-10 Jan] | Nathan_Barnard | 18d | 11
88  | Thoughts on AGI organizations and capabilities work | RobBensinger | 13d | 7
144 | AGI and Lock-In | Lukas_Finnveden | 1mo | 26
4   | Who will be in charge once alignment is achieved? | trurl | 4d | 2
38  | The first AGI will be a buggy mess | titotal | 4mo | 21
65  | Part 1: The AI Safety community has four main work groups, Strategy, Governance, Technical and Movement Building | PeterSlattery | 25d | 7
53  | Meta AI announces Cicero: Human-Level Diplomacy play (with dialogue) | Jacy | 28d | 10
220 | My Most Likely Reason to Die Young is AI X-Risk | AISafetyIsNotLongtermist | 5mo | 62
50  | Why does no one care about AI? | Olivia Addy | 4mo | 46
65  | AGI Timelines in Governance: Different Strategies for Different Timeframes | simeon_c | 1d | 17
82  | All AGI Safety questions welcome (especially basic ones) [~monthly thread] | robertskmiles | 1mo | 94
53  | Two contrasting models of “intelligence” and future growth | Magnus Vinding | 26d | 17
57  | Race to the Top: Benchmarks for AI Safety | isaduan | 16d | 8
186 | A challenge for AGI organizations, and a challenge for readers | RobBensinger | 19d | 13
34  | "Write a critical post about Effective Altruism, and offer suggestions on how to improve the movement." | David van Beveren | 14d | 6
46  | Estimating the Current and Future Number of AI Safety Researchers | Stephen McAleese | 2mo | 28
81  | AI Safety Seems Hard to Measure | Holden Karnofsky | 9d | 2
65  | What does it mean for an AGI to be 'safe'? | So8res | 2mo | 21
60  | What Do AI Safety Pitches Not Get About Your Field? | Aris Richardson | 3mo | 19
18  | Benefits/Risks of Scott Aaronson's Orthodox/Reform Framing for AI Alignment | Jeremy | 29d | 5
67  | AI Safety Ideas: A collaborative AI safety research platform | Apart Research | 2mo | 13
20  | Takeaways from a survey on AI alignment resources | DanielFilan | 1mo | 9
5   | AI Safety Pitches post ChatGPT | ojorgensen | 15d | 2