AI-assisted Alignment (12 posts)

Karma | Title | Author | Posted | Comments
184 | Godzilla Strategies | johnswentworth | 6mo | 65
11 | Alignment with argument-networks and assessment-predictions | Tor Økland Barstad | 7d | 3
21 | Research request (alignment strategy): Deep dive on "making AI solve alignment for us" | JanBrauner | 19d | 3
17 | Provably Honest - A First Step | Srijanak De | 1mo | 2
13 | Getting from an unaligned AGI to an aligned AGI? | Tor Økland Barstad | 6mo | 7
12 | AI-assisted list of ten concrete alignment things to do right now | lcmgcd | 3mo | 5
105 | Beliefs and Disagreements about Automating Alignment Research | Ian McKenzie | 3mo | 4
28 | Making it harder for an AGI to "trick" us, with STVs | Tor Økland Barstad | 5mo | 5
21 | Discussion on utilizing AI for alignment | elifland | 3mo | 3
11 | Sufficiently many Godzillas as an alignment strategy | 142857 | 3mo | 3
12 | Would you ask a genie to give you the solution to alignment? | sudo -i | 3mo | 1
4 | Infinite Possibility Space and the Shutdown Problem | magfrump | 2mo | 0
Ought (10 posts)

Karma | Title | Author | Posted | Comments
120 | Supervise Process, not Outcomes | stuhlmueller | 8mo | 8
52 | Ought will host a factored cognition “Lab Meeting” | jungofthewon | 3mo | 1
52 | Factored Cognition | stuhlmueller | 4y | 6
89 | Solving Math Problems by Relay | bgold | 2y | 26
6 | The Stack Overflow of Factored Cognition | rmoehn | 3y | 4
10 | [AN #86]: Improving debate and factored cognition through human experiments | Rohin Shah | 2y | 0
98 | Ought: why it matters and ways to help | paulfchristiano | 3y | 7
23 | Automating reasoning about the future at Ought | jungofthewon | 2y | 0
30 | Update on Ought's experiments on factored evaluation of arguments | Owain_Evans | 2y | 0
26 | The Majority Is Always Wrong | Eliezer Yudkowsky | 15y | 54