Tags: Bounties & Prizes (active) (6 posts) · AI Safety Public Materials (8 posts) · AI-assisted Alignment
| Karma | Title | Author | Age | Comments |
|---|---|---|---|---|
| 36 | Prizes for ML Safety Benchmark Ideas | joshc | 1mo | 3 |
| 68 | NeurIPS ML Safety Workshop 2022 | Dan H | 4mo | 2 |
| 52 | $20K In Bounties for AI Safety Public Materials | Dan H | 4mo | 7 |
| 135 | How much chess engine progress is about adapting to bigger computers? | paulfchristiano | 1y | 23 |
| 8 | Distribution Shifts and The Importance of AI Safety | Leon Lang | 2mo | 2 |
| 22 | [$20K in Prizes] AI Safety Arguments Competition | Dan H | 7mo | 543 |
| 90 | [Link] Why I’m optimistic about OpenAI’s alignment approach | janleike | 15d | 13 |
| 12 | Research request (alignment strategy): Deep dive on "making AI solve alignment for us" | JanBrauner | 19d | 3 |
| 4 | Alignment with argument-networks and assessment-predictions | Tor Økland Barstad | 7d | 3 |
| 85 | Beliefs and Disagreements about Automating Alignment Research | Ian McKenzie | 3mo | 4 |
| 127 | Godzilla Strategies | johnswentworth | 6mo | 65 |
| 5 | AI-assisted list of ten concrete alignment things to do right now | lcmgcd | 3mo | 5 |
| 6 | Getting from an unaligned AGI to an aligned AGI? | Tor Økland Barstad | 6mo | 7 |
| 2 | Making it harder for an AGI to "trick" us, with STVs | Tor Økland Barstad | 5mo | 5 |