Community
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 99 | Announcing the Introduction to ML Safety course | Dan H | 4mo | 6 |
| 94 | Introducing the ML Safety Scholars Program | Dan H | 7mo | 2 |
| 93 | Full-time AGI Safety! | Steven Byrnes | 1y | 3 |
| 91 | AI Safety and Neighboring Communities: A Quick-Start Guide, as of Summer 2022 | Sam Bowman | 3mo | 2 |
| 87 | Announcing AI Alignment Awards: $100k research contests about goal misgeneralization & corrigibility | Akash | 28d | 20 |
| 85 | Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22] | habryka | 1y | 4 |
| 68 | ARC is hiring! | paulfchristiano | 1y | 2 |
| 66 | AGI Safety Fundamentals curriculum and application | Richard_Ngo | 1y | 0 |
| 63 | Seeking Interns/RAs for Mechanistic Interpretability Projects | Neel Nanda | 4mo | 0 |
| 58 | Jobs: Help scale up LM alignment research at NYU | Sam Bowman | 7mo | 1 |
| 58 | Apply for research internships at ARC! | paulfchristiano | 11mo | 0 |
| 54 | Encultured AI Pre-planning, Part 1: Enabling New Benchmarks | Andrew_Critch | 4mo | 2 |
| 50 | Applications for AI Safety Camp 2022 Now Open! | adamShimi | 1y | 3 |
| 38 | What does GPT-3 understand? Symbol grounding and Chinese rooms | Stuart_Armstrong | 1y | 15 |
| 53 | AI risk hub in Singapore? | Daniel Kokotajlo | 2y | 18 |