Tags: Community (33 posts) · Events (Community) (2 posts)
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 23 | Event [Berkeley]: Alignment Collaborator Speed-Meeting | AlexMennen | 1d | 2 |
| 17 | Looking for an alignment tutor | JanBrauner | 3d | 2 |
| 51 | Announcing AI Alignment Awards: $100k research contests about goal misgeneralization & corrigibility | Akash | 28d | 20 |
| 25 | A newcomer’s guide to the technical AI safety field | zeshen | 1mo | 1 |
| 57 | AI Safety and Neighboring Communities: A Quick-Start Guide, as of Summer 2022 | Sam Bowman | 3mo | 2 |
| 70 | Encultured AI Pre-planning, Part 1: Enabling New Benchmarks | Andrew_Critch | 4mo | 2 |
| 59 | Seeking Interns/RAs for Mechanistic Interpretability Projects | Neel Nanda | 4mo | 0 |
| 39 | Announcing the Introduction to ML Safety course | Dan H | 4mo | 6 |
| 62 | Jobs: Help scale up LM alignment research at NYU | Sam Bowman | 7mo | 1 |
| 105 | Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22] | habryka | 1y | 4 |
| 52 | Introducing the ML Safety Scholars Program | Dan H | 7mo | 2 |
| 139 | Full-time AGI Safety! | Steven Byrnes | 1y | 3 |
| 64 | Apply for research internships at ARC! | paulfchristiano | 11mo | 0 |
| 68 | AGI Safety Fundamentals curriculum and application | Richard_Ngo | 1y | 0 |
| 11 | *New* Canada AI Safety & Governance community | Wyatt Tessari L'Allié | 3mo | 0 |
| 22 | Announcing Web-TAISU, May 13-17 | Linda Linsefors | 2y | 3 |