Tags: Distillation & Pedagogy (26 posts), AI Alignment Fieldbuilding (32 posts), Community Outreach, Marketing, PIBBSS
| Score | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 192 | Call For Distillers | johnswentworth | 8mo | 42 |
| 35 | Seeking PCK (Pedagogical Content Knowledge) | CFAR!Duncan | 4mo | 9 |
| 9 | Distillation Experiment: Chunk-Knitting | AllAmericanBreakfast | 1mo | 1 |
| 32 | (Summary) Sequence Highlights - Thinking Better on Purpose | qazzquimby | 4mo | 3 |
| 200 | DARPA Digital Tutor: Four Months to Total Technical Expertise? | JohnBuridan | 2y | 19 |
| 37 | Features that make a report especially helpful to me | lukeprog | 8mo | 0 |
| 26 | How to get people to produce more great exposition? Some strategies and their assumptions | riceissa | 6mo | 10 |
| 101 | How to teach things well | Neel Nanda | 2y | 15 |
| 14 | Exposition as science: some ideas for how to make progress | riceissa | 5mo | 0 |
| 44 | Think like an educator about code quality | Adam Zerner | 1y | 8 |
| 5 | How To Know What the AI Knows - An ELK Distillation | Fabien Roger | 3mo | 0 |
| 43 | Expansive translations: considerations and possibilities | ozziegooen | 2y | 15 |
| 32 | What are Examples of Great Distillers? | adamShimi | 2y | 12 |
| 50 | Paternal Formats | abramdemski | 3y | 35 |
| 31 | Reflections on the PIBBSS Fellowship 2022 | Nora_Ammann | 9d | 0 |
| 166 | I Converted Book I of The Sequences Into A Zoomer-Readable Format | dkirmani | 1mo | 27 |
| 207 | Lessons learned from talking to >100 academics about AI safety | Marius Hobbhahn | 2mo | 16 |
| 73 | Takeaways from a survey on AI alignment resources | DanielFilan | 1mo | 9 |
| 161 | Most People Start With The Same Few Bad Ideas | johnswentworth | 3mo | 30 |
| 4 | AI Safety Movement Builders should help the community to optimise three factors: contributors, contributions and coordination | peterslattery | 5d | 0 |
| 161 | The inordinately slow spread of good AGI conversations in ML | Rob Bensinger | 6mo | 66 |
| 30 | A newcomer’s guide to the technical AI safety field | zeshen | 1mo | 1 |
| 160 | Transcripts of interviews with AI researchers | Vael Gates | 7mo | 8 |
| 24 | AI Safety Unconference NeurIPS 2022 | Orpheus | 1mo | 0 |
| 143 | Reshaping the AI Industry | Thane Ruthenis | 6mo | 34 |
| 46 | [An email with a bunch of links I sent an experienced ML researcher interested in learning about Alignment / x-safety.] | David Scott Krueger (formerly: capybaralet) | 3mo | 1 |
| 21 | The Vitalik Buterin Fellowship in AI Existential Safety is open for applications! | Cynthia Chen | 2mo | 0 |
| 13 | Are alignment researchers devoting enough time to improving their research capacity? | Carson Jones | 1mo | 3 |