Tags: Q&A (format) (19 posts), Reading Group (30 posts), Superintelligence, Automation
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 67 | All AGI Safety questions welcome (especially basic ones) [~monthly thread] | Robert Miles | 1mo | 100 |
| 84 | All AGI safety questions welcome (especially basic ones) [July 2022] | plex | 5mo | 130 |
| 22 | All AGI safety questions welcome (especially basic ones) [Sept 2022] | plex | 3mo | 47 |
| 73 | Q&A with Shane Legg on risks from AI | XiXiDu | 11y | 24 |
| 6 | Steelmanning Marxism/Communism | Suh_Prance_Alot | 6mo | 8 |
| 39 | Consequentialism FAQ | Scott Alexander | 11y | 124 |
| 59 | Q&A with Jürgen Schmidhuber on risks from AI | XiXiDu | 11y | 45 |
| 86 | Introducing the AI Alignment Forum (FAQ) | habryka | 4y | 8 |
| 33 | Q&A with new Executive Director of Singularity Institute | lukeprog | 11y | 182 |
| 37 | Diana Fleischman and Geoffrey Miller - Audience Q&A | Jacob Falkovich | 3y | 14 |
| 27 | Aella on Rationality and the Void | Jacob Falkovich | 3y | 8 |
| 22 | Singularity FAQ | lukeprog | 11y | 35 |
| 112 | Transcription of Eliezer's January 2010 video Q&A | curiousepic | 11y | 9 |
| 53 | Transcription and Summary of Nick Bostrom's Q&A | daenerys | 11y | 10 |
| -8 | AGI Impossible due to Energy Constrains | TheKlaus | 20d | 13 |
| 21 | Why Do People Think Humans Are Stupid? | DragonGod | 3mo | 39 |
| 16 | Are Human Brains Universal? | DragonGod | 3mo | 28 |
| 6 | Would a Misaligned SSI Really Kill Us All? | DragonGod | 3mo | 7 |
| 8 | A Critique of AI Alignment Pessimism | ExCeph | 5mo | 1 |
| 5 | The Limits of Automation | milkandcigarettes | 6mo | 1 |
| 16 | Superintelligent AGI in a box - a question. | Dmytry | 10y | 77 |
| 12 | Superintelligence 19: Post-transition formation of a singleton | KatjaGrace | 7y | 35 |
| 14 | Superintelligence 8: Cognitive superpowers | KatjaGrace | 8y | 96 |
| 11 | Superintelligence 23: Coherent extrapolated volition | KatjaGrace | 7y | 97 |
| 9 | Superintelligence 14: Motivation selection methods | KatjaGrace | 8y | 28 |
| 8 | Superintelligence 20: The value-loading problem | KatjaGrace | 7y | 21 |
| 14 | Superintelligence 29: Crunch time | KatjaGrace | 7y | 27 |
| 11 | Superintelligence 25: Components list for acquiring values | KatjaGrace | 7y | 12 |