Tags: Distillation & Pedagogy (26 posts), AI Alignment Fieldbuilding (32 posts), Community Outreach, Marketing, PIBBSS
Distillation & Pedagogy posts (karma · title · author · age · comments):

151 · Call For Distillers · johnswentworth · 8mo · 42 comments
39 · Seeking PCK (Pedagogical Content Knowledge) · CFAR!Duncan · 4mo · 9 comments
7 · Distillation Experiment: Chunk-Knitting · AllAmericanBreakfast · 1mo · 1 comment
22 · (Summary) Sequence Highlights - Thinking Better on Purpose · qazzquimby · 4mo · 3 comments
39 · Features that make a report especially helpful to me · lukeprog · 8mo · 0 comments
30 · How to get people to produce more great exposition? Some strategies and their assumptions · riceissa · 6mo · 10 comments
141 · DARPA Digital Tutor: Four Months to Total Technical Expertise? · JohnBuridan · 2y · 19 comments
12 · Exposition as science: some ideas for how to make progress · riceissa · 5mo · 0 comments
48 · Think like an educator about code quality · Adam Zerner · 1y · 8 comments
44 · Expansive translations: considerations and possibilities · ozziegooen · 2y · 15 comments
40 · What are Examples of Great Distillers? · adamShimi · 2y · 12 comments
42 · How to teach things well · Neel Nanda · 2y · 15 comments
51 · Paternal Formats · abramdemski · 3y · 35 comments
20 · 99% shorter · philh · 1y · 0 comments
113 · I Converted Book I of The Sequences Into A Zoomer-Readable Format · dkirmani · 1mo · 27 comments
AI Alignment Fieldbuilding posts (karma · title · author · age · comments):

18 · Reflections on the PIBBSS Fellowship 2022 · Nora_Ammann · 9d · 0 comments
134 · Lessons learned from talking to >100 academics about AI safety · Marius Hobbhahn · 2mo · 16 comments
64 · Takeaways from a survey on AI alignment resources · DanielFilan · 1mo · 9 comments
157 · Most People Start With The Same Few Bad Ideas · johnswentworth · 3mo · 30 comments
132 · The inordinately slow spread of good AGI conversations in ML · Rob Bensinger · 6mo · 66 comments
150 · Transcripts of interviews with AI researchers · Vael Gates · 7mo · 8 comments
22 · AI Safety Unconference NeurIPS 2022 · Orpheus · 1mo · 0 comments
23 · A newcomer’s guide to the technical AI safety field · zeshen · 1mo · 1 comment
36 · [An email with a bunch of links I sent an experienced ML researcher interested in learning about Alignment / x-safety.] · David Scott Krueger (formerly: capybaralet) · 3mo · 1 comment
74 · Reshaping the AI Industry · Thane Ruthenis · 6mo · 34 comments
20 · The Vitalik Buterin Fellowship in AI Existential Safety is open for applications! · Cynthia Chen · 2mo · 0 comments
73 · ML Alignment Theory Program under Evan Hubinger · Oliver Zhang · 1y · 3 comments
27 · What are all the AI Alignment and AI Safety Communication Hubs? · Gunnar_Zarncke · 6mo · 5 comments