Tags: PIBBSS (2 posts) · AI Alignment Fieldbuilding (16 posts)
Score | Title | Author | Posted | Comments
32 | AI alignment as “navigating the space of intelligent behaviour” | Nora_Ammann | 3mo | 0
19 | PIBBSS (AI alignment) is hiring for a Project Manager | Nora_Ammann | 3mo | 0
44 | Reflections on the PIBBSS Fellowship 2022 | Nora_Ammann | 9d | 0
280 | Lessons learned from talking to >100 academics about AI safety | Marius Hobbhahn | 2mo | 16
11 | AI Safety Movement Builders should help the community to optimise three factors: contributors, contributions and coordination | peterslattery | 5d | 0
82 | Takeaways from a survey on AI alignment resources | DanielFilan | 1mo | 9
165 | Most People Start With The Same Few Bad Ideas | johnswentworth | 3mo | 30
190 | The inordinately slow spread of good AGI conversations in ML | Rob Bensinger | 6mo | 66
212 | Reshaping the AI Industry | Thane Ruthenis | 6mo | 34
37 | A newcomer’s guide to the technical AI safety field | zeshen | 1mo | 1
26 | AI Safety Unconference NeurIPS 2022 | Orpheus | 1mo | 0
170 | Transcripts of interviews with AI researchers | Vael Gates | 7mo | 8
24 | Are alignment researchers devoting enough time to improving their research capacity? | Carson Jones | 1mo | 3
56 | [An email with a bunch of links I sent an experienced ML researcher interested in learning about Alignment / x-safety.] | David Scott Krueger (formerly: capybaralet) | 3mo | 1
22 | The Vitalik Buterin Fellowship in AI Existential Safety is open for applications! | Cynthia Chen | 2mo | 0
91 | ML Alignment Theory Program under Evan Hubinger | Oliver Zhang | 1y | 3