Tags: PIBBSS (2 posts) · AI Alignment Fieldbuilding (16 posts)
- 32 · AI alignment as “navigating the space of intelligent behaviour” (Nora_Ammann, 3mo, 0 comments)
- 19 · PIBBSS (AI alignment) is hiring for a Project Manager (Nora_Ammann, 3mo, 0 comments)
- 165 · Most People Start With The Same Few Bad Ideas (johnswentworth, 3mo, 30 comments)
- 82 · Takeaways from a survey on AI alignment resources (DanielFilan, 1mo, 9 comments)
- 280 · Lessons learned from talking to >100 academics about AI safety (Marius Hobbhahn, 2mo, 16 comments)
- 190 · The inordinately slow spread of good AGI conversations in ML (Rob Bensinger, 6mo, 66 comments)
- 24 · Are alignment researchers devoting enough time to improving their research capacity? (Carson Jones, 1mo, 3 comments)
- 37 · A newcomer’s guide to the technical AI safety field (zeshen, 1mo, 1 comment)
- 56 · [An email with a bunch of links I sent an experienced ML researcher interested in learning about Alignment / x-safety.] (David Scott Krueger (formerly: capybaralet), 3mo, 1 comment)
- 23 · What are all the AI Alignment and AI Safety Communication Hubs? (Gunnar_Zarncke, 6mo, 5 comments)
- 91 · ML Alignment Theory Program under Evan Hubinger (Oliver Zhang, 1y, 3 comments)
- 11 · AI Safety Movement Builders should help the community to optimise three factors: contributors, contributions and coordination (peterslattery, 5d, 0 comments)
- 170 · Transcripts of interviews with AI researchers (Vael Gates, 7mo, 8 comments)
- 26 · AI Safety Unconference NeurIPS 2022 (Orpheus, 1mo, 0 comments)
- 44 · Reflections on the PIBBSS Fellowship 2022 (Nora_Ammann, 9d, 0 comments)
- 212 · Reshaping the AI Industry (Thane Ruthenis, 6mo, 34 comments)