42 posts: Value Learning, The Pointers Problem, Kolmogorov Complexity
14 posts: Metaethics, Meta-Philosophy, Philosophy, Perceptual Control Theory
Score | Title | Author | Posted | Comments
60 | Don't design agents which exploit adversarial inputs | TurnTrout | 1mo | 61
60 | Beyond Kolmogorov and Shannon | Alexander Gietelink Oldenziel | 1mo | 14
32 | People care about each other even though they have imperfect motivational pointers? | TurnTrout | 1mo | 25
42 | Different perspectives on concept extrapolation | Stuart_Armstrong | 8mo | 7
104 | The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables | johnswentworth | 2y | 43
67 | Parsing Chris Mingard on Neural Networks | Alex Flint | 1y | 27
26 | How an alien theory of mind might be unlearnable | Stuart_Armstrong | 11mo | 35
46 | Normativity | abramdemski | 2y | 11
14 | Value extrapolation, concept extrapolation, model splintering | Stuart_Armstrong | 9mo | 1
17 | Morally underdefined situations can be deadly | Stuart_Armstrong | 1y | 8
10 | AIs should learn human preferences, not biases | Stuart_Armstrong | 8mo | 1
68 | Preface to the sequence on value learning | Rohin Shah | 4y | 6
64 | Clarifying "AI Alignment" | paulfchristiano | 4y | 82
41 | Using vector fields to visualise preferences and make them consistent | MichaelA | 2y | 32
30 | What Should AI Owe To Us? Accountable and Aligned AI Systems via Contractualist AI Alignment | xuan | 3mo | 15
30 | AI Alignment, Philosophical Pluralism, and the Relevance of Non-Western Philosophy | xuan | 1y | 21
30 | Recursive Quantilizers II | abramdemski | 2y | 15
62 | Some Thoughts on Metaphilosophy | Wei_Dai | 3y | 27
27 | Deconfusing Human Values Research Agenda v1 | Gordon Seidoh Worley | 2y | 12
24 | Gricean communication and meta-preferences | Charlie Steiner | 2y | 0
23 | Deliberation as a method to find the "actual preferences" of humans | riceissa | 3y | 5
27 | A theory of human values | Stuart_Armstrong | 3y | 13
21 | Impossible moral problems and moral authority | Charlie Steiner | 3y | 8
18 | Meta-preferences two ways: generator vs. patch | Charlie Steiner | 2y | 0
16 | Can we make peace with moral indeterminacy? | Charlie Steiner | 3y | 8
14 | The Value Definition Problem | Sammy Martin | 3y | 6
20 | My take on agent foundations: formalizing metaphilosophical competence | zhukeepa | 4y | 6
17 | RFC: Philosophical Conservatism in AI Alignment Research | Gordon Seidoh Worley | 4y | 13