Tags: AI (2040 posts), Careers, Audio, Infra-Bayesianism, Interviews, SERI MATS, Redwood Research, Formal Proof, Organization Updates, AXRP, Adversarial Examples, Domain Theory, AI Takeoff (197 posts), AI Timelines, Dialogue (format), DeepMind
Karma | Title | Author | Posted | Comments
84 | Towards Hodge-podge Alignment | Cleo Nardo | 1d | 20
41 | The "Minimal Latents" Approach to Natural Abstractions | johnswentworth | 22h | 6
5 | Podcast: Tamera Lanham on AI risk, threat models, alignment proposals, externalized reasoning oversight, and working at Anthropic | Akash | 2h | 0
198 | The next decades might be wild | Marius Hobbhahn | 5d | 21
265 | AI alignment is distinct from its near-term applications | paulfchristiano | 7d | 5
6 | I believe some AI doomers are overconfident | FTPickle | 6h | 4
5 | Career Scouting: Housing Coordination | koratkar | 5h | 0
13 | Take 12: RLHF's use is evidence that orgs will jam RL at real-world problems. | Charlie Steiner | 19h | 0
71 | Proper scoring rules don’t guarantee predicting fixed points | Johannes_Treutlein | 4d | 2
19 | Why mechanistic interpretability does not and cannot contribute to long-term AGI safety (from messages with a friend) | Remmelt | 1d | 6
323 | A challenge for AGI organizations, and a challenge for readers | Rob Bensinger | 19d | 30
107 | Okay, I feel it now | g1 | 7d | 14
111 | Revisiting algorithmic progress | Tamay | 7d | 6
89 | Trying to disambiguate different questions about whether RLHF is “good” | Buck | 6d | 39
33 | How important are accurate AI timelines for the optimal spending schedule on AI risk interventions? | Tristan Cook | 4d | 2
125 | Updating my AI timelines | Matthew Barnett | 15d | 40
182 | Planes are still decades away from displacing most bird jobs | guzey | 25d | 13
394 | Why I think strong general AI is coming soon | porby | 2mo | 126
432 | DeepMind alignment team opinions on AGI ruin arguments | Vika | 4mo | 34
88 | Disagreement with bio anchors that lead to shorter timelines | Marius Hobbhahn | 1mo | 16
67 | Human-level Full-Press Diplomacy (some bare facts). | Cleo Nardo | 28d | 7
110 | Caution when interpreting Deepmind's In-context RL paper | Sam Marks | 1mo | 6
332 | Two-year update on my personal AI timelines | Ajeya Cotra | 4mo | 60
25 | Foresight for AGI Safety Strategy: Mitigating Risks and Identifying Golden Opportunities | jacquesthibs | 15d | 4
48 | Human-level Diplomacy was my fire alarm | Lao Mein | 27d | 15
13 | Benchmarks for Comparing Human and AI Intelligence | ViktorThink | 9d | 4
239 | What do ML researchers think about AI in 2022? | KatjaGrace | 4mo | 33
5 | AI overhangs depend on whether algorithms, compute and data are substitutes or complements | NathanBarnard | 4d | 0