Tags: Q&A (format) (19 posts) · Reading Group (30 posts) · Superintelligence · Automation
Karma | Title | Author | Posted | Comments
34 | All AGI Safety questions welcome (especially basic ones) [~monthly thread] | Robert Miles | 1mo | 100
38 | All AGI safety questions welcome (especially basic ones) [July 2022] | plex | 5mo | 130
17 | All AGI safety questions welcome (especially basic ones) [Sept 2022] | plex | 3mo | 47
86 | Introducing the AI Alignment Forum (FAQ) | habryka | 4y | 8
43 | Diana Fleischman and Geoffrey Miller - Audience Q&A | Jacob Falkovich | 3y | 14
39 | Aella on Rationality and the Void | Jacob Falkovich | 3y | 8
127 | Transcription of Eliezer's January 2010 video Q&A | curiousepic | 11y | 9
93 | Q&A with Shane Legg on risks from AI | XiXiDu | 11y | 24
67 | Transcription and Summary of Nick Bostrom's Q&A | daenerys | 11y | 10
67 | Q&A with Jürgen Schmidhuber on risks from AI | XiXiDu | 11y | 45
59 | Q&A with experts on risks from AI #1 | XiXiDu | 10y | 67
46 | Q&A with Stan Franklin on risks from AI | XiXiDu | 11y | 10
43 | Q&A with Abram Demski on risks from AI | XiXiDu | 10y | 71
43 | Consequentialism FAQ | Scott Alexander | 11y | 124
20 | Why Do People Think Humans Are Stupid? | DragonGod | 3mo | 39
15 | Are Human Brains Universal? | DragonGod | 3mo | 28
68 | Intermittent Distillations #4: Semiconductors, Economics, Intelligence, and Technological Progress. | Mark Xu | 1y | 9
55 | The Scout Mindset - read-along | weft | 1y | 44
9 | A Critique of AI Alignment Pessimism | ExCeph | 5mo | 1
4 | Would a Misaligned SSI Really Kill Us All? | DragonGod | 3mo | 7
4 | The Limits of Automation | milkandcigarettes | 6mo | 1
1 | Notion Templates for Reading Groups | Kyal | 2mo | 0
58 | Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities | KatjaGrace | 8y | 233
42 | Superintelligence reading group | KatjaGrace | 8y | 2
36 | [LINK] Wait But Why - The AI Revolution Part 2 | Adam Zerner | 7y | 88
31 | Request: Sequences book reading group | iarwain1 | 7y | 31
30 | Superintelligence 5: Forms of Superintelligence | KatjaGrace | 8y | 114
22 | Superintelligence 7: Decisive strategic advantage | KatjaGrace | 8y | 60