Community (48 posts)
Center for Human-Compatible AI (CHAI)
Moral Uncertainty
Regulation and AI Risk
Grants & Fundraising Opportunities
Future of Humanity Institute (FHI)
Population Ethics
Utilitarianism
The SF Bay Area
Future of Life Institute (FLI)
Disagreement
Events (Community)
Agent Foundations (11 posts)
Machine Intelligence Research Institute (MIRI)
Cognitive Reduction
Dissolving the Question
Top posts tagged Community:

Karma | Title | Author | Age | Comments
177 | 2018 AI Alignment Literature Review and Charity Comparison | Larks | 4y | 26
139 | Full-time AGI Safety! | Steven Byrnes | 1y | 3
133 | 2019 AI Alignment Literature Review and Charity Comparison | Larks | 3y | 18
116 | Call for research on evaluating alignment (funding + advice available) | Beth Barnes | 1y | 11
105 | Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22] | habryka | 1y | 4
82 | Comparing Utilities | abramdemski | 2y | 31
71 | AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah | Palus Astra | 2y | 27
70 | Encultured AI Pre-planning, Part 1: Enabling New Benchmarks | Andrew_Critch | 4mo | 2
70 | [AN #69] Stuart Russell's new book on why we need to replace the standard model of AI | Rohin Shah | 3y | 12
68 | AGI Safety Fundamentals curriculum and application | Richard_Ngo | 1y | 0
64 | Apply for research internships at ARC! | paulfchristiano | 11mo | 0
62 | Jobs: Help scale up LM alignment research at NYU | Sam Bowman | 7mo | 1
61 | AI risk hub in Singapore? | Daniel Kokotajlo | 2y | 18
59 | Seeking Interns/RAs for Mechanistic Interpretability Projects | Neel Nanda | 4mo | 0
Top posts tagged Agent Foundations:

Karma | Title | Author | Age | Comments
197 | Why Agent Foundations? An Overly Abstract Explanation | johnswentworth | 9mo | 54
146 | The Rocket Alignment Problem | Eliezer Yudkowsky | 4y | 42
120 | What I’ll be doing at MIRI | evhub | 3y | 6
61 | Challenges with Breaking into MIRI-Style Research | Chris_Leong | 11mo | 15
45 | Prize and fast track to alignment research at ALTER | Vanessa Kosoy | 3mo | 4
45 | Grokking the Intentional Stance | jbkjr | 1y | 20
45 | Clarifying the Agent-Like Structure Problem | johnswentworth | 2mo | 14
44 | Another take on agent foundations: formalizing zero-shot reasoning | zhukeepa | 4y | 20
30 | On motivations for MIRI's highly reliable agent design research | jessicata | 5y | 1
27 | My current take on the Paul-MIRI disagreement on alignability of messy AI | jessicata | 5y | 0
23 | Bridging Expected Utility Maximization and Optimization | Whispermute | 4mo | 5