natolambert/rlhf-book

A textbook on reinforcement learning from human feedback (RLHF), with a focus on AI alignment research.

Language: TeX
Categories: AI & Machine Learning, LLM Frameworks
License: MIT

Stars: 1.7K
Forks: 152
Created: May 24, 2024
Last Updated: Mar 2, 2026

Project Analytics

Stars Growth (1 Month): +142 (+9.2% change)
Avg Daily Growth (1 Month): +5.1 stars per day
Fork/Star Ratio (All Time): 9.0% (normal engagement)
Lifetime Growth: 2.6 stars/day over 651 days
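As a rough illustration, the growth figures above can be derived from the raw counts. The helper below is a hypothetical sketch, not part of the project; its inputs mirror the figures shown on this page, and because the star total is rounded ("1.7K"), the computed percentages may differ slightly from the displayed values.

```python
def project_analytics(stars_total: int, forks_total: int,
                      stars_gained: int, window_days: int,
                      lifetime_days: int) -> dict:
    """Derive the growth metrics shown in a project-analytics panel.

    Hypothetical helper: field names and the 28-day window are assumptions,
    not taken from this page's implementation.
    """
    # Stars at the start of the window, used as the baseline for % change.
    baseline = stars_total - stars_gained
    return {
        "monthly_change_pct": round(stars_gained / baseline * 100, 1),
        "avg_daily_growth": round(stars_gained / window_days, 1),
        "fork_star_ratio_pct": round(forks_total / stars_total * 100, 1),
        "lifetime_stars_per_day": round(stars_total / lifetime_days, 1),
    }

# Example with this page's figures (1.7K stars, 152 forks, +142 in ~28 days,
# 651 days since creation):
metrics = project_analytics(1700, 152, 142, 28, 651)
```

With the rounded inputs, this yields about 5.1 stars/day over the month, an 8.9% fork/star ratio, and 2.6 stars/day lifetime, consistent with the panel above.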

Charts (data not shown): Stars Over Time, Forks Over Time, Open Issues Over Time, Pull Requests Over Time, Commits Over Time

AI-Generated Tags

ai
alignment
rlhf
machine-learning
reinforcement-learning
textbook

