Tags: AI Sentience (0 posts) · AI (1854 posts)
Karma | Title | Author | Posted | Comments
29 | Take 12: RLHF's use is evidence that orgs will jam RL at real-world problems. | Charlie Steiner | 19h | 0
33 | The "Minimal Latents" Approach to Natural Abstractions | johnswentworth | 22h | 6
40 | Towards Hodge-podge Alignment | Cleo Nardo | 1d | 20
199 | AI alignment is distinct from its near-term applications | paulfchristiano | 7d | 5
108 | The next decades might be wild | Marius Hobbhahn | 5d | 21
15 | Solution to The Alignment Problem | Algon | 1d | 0
95 | Trying to disambiguate different questions about whether RLHF is “good” | Buck | 6d | 39
22 | Event [Berkeley]: Alignment Collaborator Speed-Meeting | AlexMennen | 1d | 2
54 | High-level hopes for AI alignment | HoldenKarnofsky | 5d | 3
207 | A challenge for AGI organizations, and a challenge for readers | Rob Bensinger | 19d | 30
73 | Revisiting algorithmic progress | Tamay | 7d | 6
51 | «Boundaries», Part 3b: Alignment problems in terms of boundaries | Andrew_Critch | 6d | 2
128 | Using GPT-Eliezer against ChatGPT Jailbreaking | Stuart_Armstrong | 14d | 77
59 | Okay, I feel it now | g1 | 7d | 14