Tags: PIBBSS (2 posts) · AI Alignment Fieldbuilding (16 posts)
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 4 | AI alignment as “navigating the space of intelligent behaviour” | Nora_Ammann | 3mo | 0 |
| -1 | PIBBSS (AI alignment) is hiring for a Project Manager | Nora_Ammann | 3mo | 0 |
| 157 | Most People Start With The Same Few Bad Ideas | johnswentworth | 3mo | 30 |
| 150 | Transcripts of interviews with AI researchers | Vael Gates | 7mo | 8 |
| 134 | Lessons learned from talking to >100 academics about AI safety | Marius Hobbhahn | 2mo | 16 |
| 132 | The inordinately slow spread of good AGI conversations in ML | Rob Bensinger | 6mo | 66 |
| 74 | Reshaping the AI Industry | Thane Ruthenis | 6mo | 34 |
| 73 | ML Alignment Theory Program under Evan Hubinger | Oliver Zhang | 1y | 3 |
| 64 | Takeaways from a survey on AI alignment resources | DanielFilan | 1mo | 9 |
| 36 | [An email with a bunch of links I sent an experienced ML researcher interested in learning about Alignment / x-safety.] | David Scott Krueger (formerly: capybaralet) | 3mo | 1 |
| 27 | What are all the AI Alignment and AI Safety Communication Hubs? | Gunnar_Zarncke | 6mo | 5 |
| 23 | A newcomer’s guide to the technical AI safety field | zeshen | 1mo | 1 |
| 22 | AI Safety Unconference NeurIPS 2022 | Orpheus | 1mo | 0 |
| 20 | The Vitalik Buterin Fellowship in AI Existential Safety is open for applications! | Cynthia Chen | 2mo | 0 |
| 18 | Reflections on the PIBBSS Fellowship 2022 | Nora_Ammann | 9d | 0 |
| 2 | Are alignment researchers devoting enough time to improving their research capacity? | Carson Jones | 1mo | 3 |