Tags: Complexity of Value, Value Drift, Whole Brain Emulation, Motivations, LessWrong Review, Futurism, Psychology, Superstimuli
Score | Title | Author | Posted | Comments
55 | Alignment allows "nonrobust" decision-influences and doesn't require robust grading | TurnTrout | 21d | 27
40 | Understanding and avoiding value drift | TurnTrout | 3mo | 9
130 | Shard Theory: An Overview | David Udell | 4mo | 34
69 | The two-layer model of human values, and problems with synthesizing preferences | Kaj_Sotala | 2y | 16
2 | Chatbots or set answers, not WBEs | Stuart_Armstrong | 7y | 0
25 | Would I think for ten thousand years? | Stuart_Armstrong | 3y | 13
85 | Two Neglected Problems in Human-AI Safety | Wei_Dai | 4y | 24
12 | Towards deconfusing values | Gordon Seidoh Worley | 2y | 4
36 | Broad Picture of Human Values | Thane Ruthenis | 4mo | 5
7 | Working towards AI alignment is better | Johannes C. Mayer | 11d | 2
55 | Review of 'But exactly how complex and fragile?' | TurnTrout | 1y | 0
35 | Can there be an indescribable hellworld? | Stuart_Armstrong | 3y | 19
68 | Three AI Safety Related Ideas | Wei_Dai | 4y | 38
73 | But exactly how complex and fragile? | KatjaGrace | 3y | 32
34 | Acknowledging Human Preference Types to Support Value Learning | Nandi Sabrina Erin | 4y | 4