Explore Projects

Discover 2 open source projects

Active filters (1): Search: safe-reinforcement-learning

Showing 1-2 of 2 projects

PKU-Alignment/safe-rlhf

A safe reinforcement learning from human feedback (RLHF) system for aligning large language models with human values.

1.6K
Stable
Python
LLM Frameworks
Reinforcement Learning
#ai-safety #large-language-models #reinforcement-learning

PKU-Alignment/omnisafe

OmniSafe is an infrastructural framework for accelerating safe reinforcement learning research.

1.1K
Experimental
Python
Reinforcement Learning
Constraint Satisfaction Problem
PyTorch
#safe-reinforcement-learning #benchmark #constraint-rl
