General Intelligence (46 posts)
Related tags: Superstimuli, Hope

| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 149 | Is Clickbait Destroying Our General Intelligence? | Eliezer Yudkowsky | 4y | 60 |
| 97 | Two explanations for variation in human abilities | Matthew Barnett | 3y | 28 |
| 96 | Two Neglected Problems in Human-AI Safety | Wei_Dai | 4y | 24 |
| 94 | The Octopus, the Dolphin and Us: a Great Filter tale | Stuart_Armstrong | 8y | 236 |
| 91 | AGI and Friendly AI in the dominant AI textbook | lukeprog | 11y | 27 |
| 80 | Just Lose Hope Already | Eliezer Yudkowsky | 15y | 78 |
| 73 | How special are human brains among animal brains? | zhukeepa | 2y | 38 |
| 68 | Artificial Addition | Eliezer Yudkowsky | 15y | 129 |
| 65 | Adaptation-Executers, not Fitness-Maximizers | Eliezer Yudkowsky | 15y | 33 |
| 60 | Ngo and Yudkowsky on scientific reasoning and pivotal acts | Eliezer Yudkowsky | 10mo | 13 |
| 60 | The Limits of Intelligence and Me: Domain Expertise | ChrisHallquist | 9y | 79 |
| 58 | My Best and Worst Mistake | Eliezer Yudkowsky | 14y | 17 |
| 57 | Might humans not be the most intelligent animals? | Matthew Barnett | 2y | 41 |
| 55 | Productivity as a function of ability in theoretical fields | Stefan_Schubert | 8y | 34 |

AI Services (CAIS) (18 posts)
Related tags: Narrow AI, Delegation
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 88 | Reframing Superintelligence: Comprehensive AI Services as General Intelligence | Rohin Shah | 3y | 75 |
| 72 | Comments on CAIS | Richard_Ngo | 3y | 14 |
| 51 | [Link] Book Review: Reframing Superintelligence (SSC) | ioannes | 3y | 9 |
| 38 | Drexler on AI Risk | PeterMcCluskey | 3y | 10 |
| 34 | What are CAIS' boldest near/medium-term predictions? | jacobjacob | 3y | 17 |
| 27 | AI Services as a Research Paradigm | VojtaKovarik | 2y | 12 |
| 21 | The economy as an analogy for advanced AI systems | rosehadshar | 1mo | 0 |
| 20 | Take 6: CAIS is actually Orwellian. | Charlie Steiner | 13d | 5 |
| 17 | Robin Hanson on Lumpiness of AI Services | DanielFilan | 3y | 2 |
| 13 | The reward function is already how well you manipulate humans | Kerry | 2mo | 9 |
| 12 | Could utility functions be for narrow AI only, and downright antithetical to AGI? | chaosmage | 5y | 38 |
| 10 | Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence | avturchin | 4y | 7 |
| 8 | Danger(s) of theorem-proving AI? | Yitz | 9mo | 9 |
| 7 | Are there substantial research efforts towards aligning narrow AIs? | Rossin | 1y | 4 |