AI (2040 posts)

Tags: Careers, Audio, Infra-Bayesianism, Interviews, SERI MATS, Redwood Research, Formal Proof, Organization Updates, AXRP, Adversarial Examples, Domain Theory, AI Takeoff (197 posts), AI Timelines, Dialogue (format), DeepMind
Karma | Title | Author | Age | Comments
62 | Towards Hodge-podge Alignment | Cleo Nardo | 1d | 20
153 | The next decades might be wild | Marius Hobbhahn | 5d | 21
3 | I believe some AI doomers are overconfident | FTPickle | 6h | 4
37 | The "Minimal Latents" Approach to Natural Abstractions | johnswentworth | 22h | 6
37 | Existential AI Safety is NOT separate from near-term applications | scasper | 7d | 15
12 | Will Machines Ever Rule the World? MLAISU W50 | Esben Kran | 4d | 4
92 | Trying to disambiguate different questions about whether RLHF is "good" | Buck | 6d | 39
221 | AGI Safety FAQ / all-dumb-questions-allowed thread | Aryeh Englander | 6mo | 514
8 | Why mechanistic interpretability does not and cannot contribute to long-term AGI safety (from messages with a friend) | Remmelt | 1d | 6
159 | Using GPT-Eliezer against ChatGPT Jailbreaking | Stuart_Armstrong | 14d | 77
27 | If Wentworth is right about natural abstractions, it would be bad for alignment | Wuschel Schulz | 12d | 5
92 | Revisiting algorithmic progress | Tamay | 7d | 6
59 | Predicting GPU performance | Marius Hobbhahn | 6d | 24
33 | Is the AI timeline too short to have children? | Yoreth | 6d | 20
134 | Updating my AI timelines | Matthew Barnett | 15d | 40
27 | How important are accurate AI timelines for the optimal spending schedule on AI risk interventions? | Tristan Cook | 4d | 2
269 | Why I think strong general AI is coming soon | porby | 2mo | 126
1 | Will the first AGI agent have been designed as an agent (in addition to an AGI)? | nahoj | 17d | 8
155 | Planes are still decades away from displacing most bird jobs | guzey | 25d | 13
51 | Human-level Diplomacy was my fire alarm | Lao Mein | 27d | 15
8 | Benchmarks for Comparing Human and AI Intelligence | ViktorThink | 9d | 4
72 | Disagreement with bio anchors that lead to shorter timelines | Marius Hobbhahn | 1mo | 16
191 | Yudkowsky and Christiano discuss "Takeoff Speeds" | Eliezer Yudkowsky | 1y | 181
287 | Two-year update on my personal AI timelines | Ajeya Cotra | 4mo | 60
-4 | AGI in our lifetimes is wishful thinking | niknoble | 1mo | 21
11 | Is the speed of training large models going to increase significantly in the near future due to Cerebras Andromeda? | Amal | 1mo | 11
364 | DeepMind alignment team opinions on AGI ruin arguments | Vika | 4mo | 34
12 | Foresight for AGI Safety Strategy: Mitigating Risks and Identifying Golden Opportunities | jacquesthibs | 15d | 4