Topics (384 posts): AI risk, AI safety, AI forecasting, Artificial intelligence, Eliezer Yudkowsky, Paul Christiano, Swiss Existential Risk Initiative, AI takeoff, Digital person, Ethics of artificial intelligence, Dual-use, Epoch

AI alignment (178 posts)
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 67 | AGI Timelines in Governance: Different Strategies for Different Timeframes | simeon_c | 1d | 17 |
| 34 | High-level hopes for AI alignment | Holden Karnofsky | 21h | 1 |
| 38 | We should say more than “x-risk is high” | OllieBase | 4d | 6 |
| 32 | How important are accurate AI timelines for the optimal spending schedule on AI risk interventions? | Tristan Cook | 4d | 0 |
| 75 | AI Safety Seems Hard to Measure | Holden Karnofsky | 9d | 2 |
| 244 | Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest | Jason Schukraft | 29d | 18 |
| 144 | A challenge for AGI organizations, and a challenge for readers | RobBensinger | 19d | 13 |
| 16 | What we owe the microbiome | TeddyW | 3d | 0 |
| 23 | How would you estimate the value of delaying AGI by 1 day, in marginal GiveWell donations? | AnonymousAccount | 4d | 19 |
| 66 | Thoughts on AGI organizations and capabilities work | RobBensinger | 13d | 7 |
| 52 | ChatGPT interviewed on TV | Will Aldred | 11d | 1 |
| 30 | Existential AI Safety is NOT separate from near-term applications | stecas | 7d | 9 |
| 25 | Questions about AI that bother me | Eleni_A | 7d | 5 |
| 12 | Who will be in charge once alignment is achieved? | trurl | 4d | 2 |
| 130 | Lessons learned from talking to >100 academics about AI safety | mariushobbhahn | 2mo | 17 |
| 85 | Alignment 201 curriculum | richard_ngo | 2mo | 8 |
| 134 | How might we align transformative AI if it’s developed very soon? | Holden Karnofsky | 3mo | 16 |
| 73 | 7 traps that (we think) new alignment researchers often fall into | Akash | 2mo | 13 |
| 8 | Share your requests for ChatGPT | Kate Tran | 15d | 4 |
| 57 | Public-facing Censorship Is Safety Theater, Causing Reputational Damage | Yitz | 2mo | 7 |
| 36 | A stubborn unbeliever finally gets the depth of the AI alignment problem | aelwood | 2mo | 7 |
| 49 | EA’s brain-over-body bias, and the embodied value problem in AI alignment | Geoffrey Miller | 3mo | 1 |
| 45 | Two reasons we might be closer to solving alignment than it seems | Kat Woods | 2mo | 18 |
| 28 | The Vitalik Buterin Fellowship in AI Existential Safety is open for applications! | Cynthia Chen | 2mo | 0 |
| 37 | Interpreting Neural Networks through the Polytope Lens | Sid Black | 2mo | 0 |
| 40 | The religion problem in AI alignment | Geoffrey Miller | 3mo | 27 |
| 51 | The alignment problem from a deep learning perspective | richard_ngo | 4mo | 0 |
| 59 | Quantilizers: A Safer Alternative to Maximizers for Limited Optimization (Taylor, 2015) | Will Aldred | 5mo | 0 |