2040 posts
Tags: AI · Careers · Audio · Infra-Bayesianism · Interviews · SERI MATS · Redwood Research · Formal Proof · Organization Updates · AXRP · Adversarial Examples · Domain Theory

197 posts
Tags: AI Takeoff · AI Timelines · Dialogue (format) · DeepMind
Karma | Title | Author | Posted | Comments
40 | Towards Hodge-podge Alignment | Cleo Nardo | 1d | 20
108 | The next decades might be wild | Marius Hobbhahn | 5d | 21
0 | I believe some AI doomers are overconfident | FTPickle | 6h | 4
33 | The "Minimal Latents" Approach to Natural Abstractions | johnswentworth | 22h | 6
22 | Existential AI Safety is NOT separate from near-term applications | scasper | 7d | 15
13 | Will Machines Ever Rule the World? MLAISU W50 | Esben Kran | 4d | 4
95 | Trying to disambiguate different questions about whether RLHF is "good" | Buck | 6d | 39
160 | AGI Safety FAQ / all-dumb-questions-allowed thread | Aryeh Englander | 6mo | 514
-3 | Why mechanistic interpretability does not and cannot contribute to long-term AGI safety (from messages with a friend) | Remmelt | 1d | 6
128 | Using GPT-Eliezer against ChatGPT Jailbreaking | Stuart_Armstrong | 14d | 77
29 | If Wentworth is right about natural abstractions, it would be bad for alignment | Wuschel Schulz | 12d | 5
73 | Revisiting algorithmic progress | Tamay | 7d | 6
44 | Predicting GPU performance | Marius Hobbhahn | 6d | 24
31 | Is the AI timeline too short to have children? | Yoreth | 6d | 20
143 | Updating my AI timelines | Matthew Barnett | 15d | 40
21 | How important are accurate AI timelines for the optimal spending schedule on AI risk interventions? | Tristan Cook | 4d | 2
144 | Why I think strong general AI is coming soon | porby | 2mo | 126
-2 | Will the first AGI agent have been designed as an agent (in addition to an AGI)? | nahoj | 17d | 8
128 | Planes are still decades away from displacing most bird jobs | guzey | 25d | 13
54 | Human-level Diplomacy was my fire alarm | Lao Mein | 27d | 15
3 | Benchmarks for Comparing Human and AI Intelligence | ViktorThink | 9d | 4
56 | Disagreement with bio anchors that lead to shorter timelines | Marius Hobbhahn | 1mo | 16
173 | Yudkowsky and Christiano discuss "Takeoff Speeds" | Eliezer Yudkowsky | 1y | 181
242 | Two-year update on my personal AI timelines | Ajeya Cotra | 4mo | 60
-8 | AGI in our lifetimes is wishful thinking | niknoble | 1mo | 21
4 | Is the speed of training large models going to increase significantly in the near future due to Cerebras Andromeda? | Amal | 1mo | 11
296 | DeepMind alignment team opinions on AGI ruin arguments | Vika | 4mo | 34
-1 | Foresight for AGI Safety Strategy: Mitigating Risks and Identifying Golden Opportunities | jacquesthibs | 15d | 4