Topics: AI risk (143 posts) · AI safety (100 posts) · AI boxing
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 34 | High-level hopes for AI alignment | Holden Karnofsky | 21h | 1 |
| 23 | How would you estimate the value of delaying AGI by 1 day, in marginal GiveWell donations? | AnonymousAccount | 4d | 19 |
| 244 | Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest | Jason Schukraft | 29d | 18 |
| 38 | We should say more than “x-risk is high” | OllieBase | 4d | 6 |
| 83 | ‘Dissolving’ AI Risk – Parameter Uncertainty in AI Future Forecasting | Froolow | 2mo | 63 |
| 50 | Apply for the ML Winter Camp in Cambridge, UK [2-10 Jan] | Nathan_Barnard | 18d | 11 |
| 66 | Thoughts on AGI organizations and capabilities work | RobBensinger | 13d | 7 |
| 98 | AGI and Lock-In | Lukas_Finnveden | 1mo | 26 |
| 12 | Who will be in charge once alignment is achieved? | trurl | 4d | 2 |
| 56 | The first AGI will be a buggy mess | titotal | 4mo | 21 |
| 75 | Part 1: The AI Safety community has four main work groups, Strategy, Governance, Technical and Movement Building | PeterSlattery | 25d | 7 |
| 45 | Meta AI announces Cicero: Human-Level Diplomacy play (with dialogue) | Jacy | 28d | 10 |
| 232 | My Most Likely Reason to Die Young is AI X-Risk | AISafetyIsNotLongtermist | 5mo | 62 |
| 58 | Why does no one care about AI? | Olivia Addy | 4mo | 46 |
| 67 | AGI Timelines in Governance: Different Strategies for Different Timeframes | simeon_c | 1d | 17 |
| 56 | All AGI Safety questions welcome (especially basic ones) [~monthly thread] | robertskmiles | 1mo | 94 |
| 57 | Two contrasting models of “intelligence” and future growth | Magnus Vinding | 26d | 17 |
| 45 | Race to the Top: Benchmarks for AI Safety | isaduan | 16d | 8 |
| 144 | A challenge for AGI organizations, and a challenge for readers | RobBensinger | 19d | 13 |
| 24 | "Write a critical post about Effective Altruism, and offer suggestions on how to improve the movement." | David van Beveren | 14d | 6 |
| 58 | Estimating the Current and Future Number of AI Safety Researchers | Stephen McAleese | 2mo | 28 |
| 75 | AI Safety Seems Hard to Measure | Holden Karnofsky | 9d | 2 |
| 41 | What does it mean for an AGI to be 'safe'? | So8res | 2mo | 21 |
| 80 | What Do AI Safety Pitches Not Get About Your Field? | Aris Richardson | 3mo | 19 |
| 12 | Benefits/Risks of Scott Aaronson's Orthodox/Reform Framing for AI Alignment | Jeremy | 29d | 5 |
| 67 | AI Safety Ideas: A collaborative AI safety research platform | Apart Research | 2mo | 13 |
| 16 | Takeaways from a survey on AI alignment resources | DanielFilan | 1mo | 9 |
| 7 | AI Safety Pitches post ChatGPT | ojorgensen | 15d | 2 |