Tags: PIBBSS (2 posts), AI Alignment Fieldbuilding (16 posts)
Score | Title | Author | Posted | Comments
18 | AI alignment as “navigating the space of intelligent behaviour” | Nora_Ammann | 3mo | 0
9 | PIBBSS (AI alignment) is hiring for a Project Manager | Nora_Ammann | 3mo | 0
31 | Reflections on the PIBBSS Fellowship 2022 | Nora_Ammann | 9d | 0
207 | Lessons learned from talking to >100 academics about AI safety | Marius Hobbhahn | 2mo | 16
73 | Takeaways from a survey on AI alignment resources | DanielFilan | 1mo | 9
161 | Most People Start With The Same Few Bad Ideas | johnswentworth | 3mo | 30
4 | AI Safety Movement Builders should help the community to optimise three factors: contributors, contributions and coordination | peterslattery | 5d | 0
161 | The inordinately slow spread of good AGI conversations in ML | Rob Bensinger | 6mo | 66
30 | A newcomer’s guide to the technical AI safety field | zeshen | 1mo | 1
160 | Transcripts of interviews with AI researchers | Vael Gates | 7mo | 8
24 | AI Safety Unconference NeurIPS 2022 | Orpheus | 1mo | 0
143 | Reshaping the AI Industry | Thane Ruthenis | 6mo | 34
46 | [An email with a bunch of links I sent an experienced ML researcher interested in learning about Alignment / x-safety.] | David Scott Krueger (formerly: capybaralet) | 3mo | 1
21 | The Vitalik Buterin Fellowship in AI Existential Safety is open for applications! | Cynthia Chen | 2mo | 0
13 | Are alignment researchers devoting enough time to improving their research capacity? | Carson Jones | 1mo | 3
82 | ML Alignment Theory Program under Evan Hubinger | Oliver Zhang | 1y | 3