Distillation & Pedagogy (26 posts)
AI Alignment Fieldbuilding (32 posts)
Community Outreach
Marketing
PIBBSS
Karma | Title | Author | Posted | Comments
233 | Call For Distillers | johnswentworth | 8mo | 42
11 | Distillation Experiment: Chunk-Knitting | AllAmericanBreakfast | 1mo | 1
42 | (Summary) Sequence Highlights - Thinking Better on Purpose | qazzquimby | 4mo | 3
31 | Seeking PCK (Pedagogical Content Knowledge) | CFAR!Duncan | 4mo | 9
259 | DARPA Digital Tutor: Four Months to Total Technical Expertise? | JohnBuridan | 2y | 19
160 | How to teach things well | Neel Nanda | 2y | 15
35 | Features that make a report especially helpful to me | lukeprog | 8mo | 0
11 | How To Know What the AI Knows - An ELK Distillation | Fabien Roger | 3mo | 0
22 | How to get people to produce more great exposition? Some strategies and their assumptions | riceissa | 6mo | 10
16 | Exposition as science: some ideas for how to make progress | riceissa | 5mo | 0
40 | Think like an educator about code quality | Adam Zerner | 1y | 8
42 | Expansive translations: considerations and possibilities | ozziegooen | 2y | 15
49 | Paternal Formats | abramdemski | 3y | 35
24 | What are Examples of Great Distillers? | adamShimi | 2y | 12
44 | Reflections on the PIBBSS Fellowship 2022 | Nora_Ammann | 9d | 0
219 | I Converted Book I of The Sequences Into A Zoomer-Readable Format | dkirmani | 1mo | 27
280 | Lessons learned from talking to >100 academics about AI safety | Marius Hobbhahn | 2mo | 16
11 | AI Safety Movement Builders should help the community to optimise three factors: contributors, contributions and coordination | peterslattery | 5d | 0
82 | Takeaways from a survey on AI alignment resources | DanielFilan | 1mo | 9
165 | Most People Start With The Same Few Bad Ideas | johnswentworth | 3mo | 30
190 | The inordinately slow spread of good AGI conversations in ML | Rob Bensinger | 6mo | 66
212 | Reshaping the AI Industry | Thane Ruthenis | 6mo | 34
37 | A newcomer’s guide to the technical AI safety field | zeshen | 1mo | 1
26 | AI Safety Unconference NeurIPS 2022 | Orpheus | 1mo | 0
170 | Transcripts of interviews with AI researchers | Vael Gates | 7mo | 8
24 | Are alignment researchers devoting enough time to improving their research capacity? | Carson Jones | 1mo | 3
56 | [An email with a bunch of links I sent an experienced ML researcher interested in learning about Alignment / x-safety.] | David Scott Krueger (formerly: capybaralet) | 3mo | 1
22 | The Vitalik Buterin Fellowship in AI Existential Safety is open for applications! | Cynthia Chen | 2mo | 0