Tags: AI (2040 posts), Careers, Audio, Infra-Bayesianism, Interviews, SERI MATS, Redwood Research, Formal Proof, Organization Updates, AXRP, Adversarial Examples, Domain Theory, AI Takeoff (197 posts), AI Timelines, Dialogue (format), DeepMind
Karma | Title | Author | Posted | Comments
84 | Towards Hodge-podge Alignment | Cleo Nardo | 1d | 20
198 | The next decades might be wild | Marius Hobbhahn | 5d | 21
6 | I believe some AI doomers are overconfident | FTPickle | 6h | 4
41 | The "Minimal Latents" Approach to Natural Abstractions | johnswentworth | 22h | 6
52 | Existential AI Safety is NOT separate from near-term applications | scasper | 7d | 15
11 | Will Machines Ever Rule the World? MLAISU W50 | Esben Kran | 4d | 4
89 | Trying to disambiguate different questions about whether RLHF is “good” | Buck | 6d | 39
282 | AGI Safety FAQ / all-dumb-questions-allowed thread | Aryeh Englander | 6mo | 514
19 | Why mechanistic interpretability does not and cannot contribute to long-term AGI safety (from messages with a friend) | Remmelt | 1d | 6
190 | Using GPT-Eliezer against ChatGPT Jailbreaking | Stuart_Armstrong | 14d | 77
25 | If Wentworth is right about natural abstractions, it would be bad for alignment | Wuschel Schulz | 12d | 5
111 | Revisiting algorithmic progress | Tamay | 7d | 6
74 | Predicting GPU performance | Marius Hobbhahn | 6d | 24
35 | Is the AI timeline too short to have children? | Yoreth | 6d | 20
125 | Updating my AI timelines | Matthew Barnett | 15d | 40
33 | How important are accurate AI timelines for the optimal spending schedule on AI risk interventions? | Tristan Cook | 4d | 2
394 | Why I think strong general AI is coming soon | porby | 2mo | 126
4 | Will the first AGI agent have been designed as an agent (in addition to an AGI)? | nahoj | 17d | 8
182 | Planes are still decades away from displacing most bird jobs | guzey | 25d | 13
48 | Human-level Diplomacy was my fire alarm | Lao Mein | 27d | 15
13 | Benchmarks for Comparing Human and AI Intelligence | ViktorThink | 9d | 4
88 | Disagreement with bio anchors that lead to shorter timelines | Marius Hobbhahn | 1mo | 16
209 | Yudkowsky and Christiano discuss "Takeoff Speeds" | Eliezer Yudkowsky | 1y | 181
332 | Two-year update on my personal AI timelines | Ajeya Cotra | 4mo | 60
0 | AGI in our lifetimes is wishful thinking | niknoble | 1mo | 21
18 | Is the speed of training large models going to increase significantly in the near future due to Cerebras Andromeda? | Amal | 1mo | 11
432 | DeepMind alignment team opinions on AGI ruin arguments | Vika | 4mo | 34
25 | Foresight for AGI Safety Strategy: Mitigating Risks and Identifying Golden Opportunities | jacquesthibs | 15d | 4