Related topics: AI risk (143 posts) · AI safety (100 posts) · AI boxing
Karma | Title | Author | Posted | Comments
64 | High-level hopes for AI alignment | Holden Karnofsky | 21h | 1
60 | We should say more than “x-risk is high” | OllieBase | 4d | 6
326 | Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest | Jason Schukraft | 29d | 18
33 | How would you estimate the value of delaying AGI by 1 day, in marginal GiveWell donations? | AnonymousAccount | 4d | 19
88 | Thoughts on AGI organizations and capabilities work | RobBensinger | 13d | 7
50 | Apply for the ML Winter Camp in Cambridge, UK [2-10 Jan] | Nathan_Barnard | 18d | 11
65 | Part 1: The AI Safety community has four main work groups, Strategy, Governance, Technical and Movement Building | PeterSlattery | 25d | 7
144 | AGI and Lock-In | Lukas_Finnveden | 1mo | 26
22 | ChatGPT interviewed on TV | Will Aldred | 11d | 1
53 | Meta AI announces Cicero: Human-Level Diplomacy play (with dialogue) | Jacy | 28d | 10
109 | ‘Dissolving’ AI Risk – Parameter Uncertainty in AI Future Forecasting | Froolow | 2mo | 63
261 | Why EAs are skeptical about AI Safety | Lukas Trötzmüller | 5mo | 31
23 | Concrete actions to improve AI governance: the behaviour science approach | AlexanderSaeri | 19d | 0
18 | Probably good projects for the AI safety ecosystem | Ryan Kidd | 15d | 0
65 | AGI Timelines in Governance: Different Strategies for Different Timeframes | simeon_c | 1d | 17
186 | A challenge for AGI organizations, and a challenge for readers | RobBensinger | 19d | 13
81 | AI Safety Seems Hard to Measure | Holden Karnofsky | 9d | 2
57 | Race to the Top: Benchmarks for AI Safety | isaduan | 16d | 8
34 | "Write a critical post about Effective Altruism, and offer suggestions on how to improve the movement." | David van Beveren | 14d | 6
66 | Announcing AI Alignment Awards: $100k research contests about goal misgeneralization & corrigibility | Akash | 28d | 1
53 | Two contrasting models of “intelligence” and future growth | Magnus Vinding | 26d | 17
12 | 12 career advising questions that may (or may not) be helpful for people interested in alignment research | Akash | 8d | 0
34 | Distinguishing test from training | So8res | 21d | 0
82 | All AGI Safety questions welcome (especially basic ones) [~monthly thread] | robertskmiles | 1mo | 94
48 | Applications are now open for Intro to ML Safety Spring 2023 | Joshc | 1mo | 1
77 | Anonymous advice: If you want to reduce AI risk, should you take roles that advance AI capabilities? | Benjamin Hilton | 2mo | 9
67 | AI Safety Ideas: A collaborative AI safety research platform | Apart Research | 2mo | 13
47 | Superintelligent AI is necessary for an amazing future, but far from sufficient | So8res | 1mo | 5