llm-attacks/llm-attacks

Universal and transferable attacks on aligned language models, useful for security researchers.

Language: Python · Categories: AI & Machine Learning, LLM Frameworks · License: MIT

Stars: 4.5K
Forks: 604
Created: Jul 27, 2023
Last Updated: Aug 2, 2024

Project Analytics

Stars Growth (1 Month): +50 (+1.1% change)
Avg Daily Growth (1 Month): +1.8 stars per day
Fork/Star Ratio (All Time): 13.3% (good engagement)
Lifetime Growth: 4.8 stars/day over 954 days
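The derived metrics above are simple ratios over the raw counts. A minimal sketch of that arithmetic, assuming a one-month window of 28 days and an exact star count of 4,541 (an assumption consistent with the displayed 4.5K and 13.3%; the function names are illustrative, not the site's actual code):

```python
def fork_star_ratio(forks: int, stars: int) -> float:
    """All-time fork/star ratio as a percentage."""
    return 100 * forks / stars

def avg_daily_growth(stars_gained: int, days: int) -> float:
    """Average stars gained per day over a window."""
    return stars_gained / days

# Numbers from the listing; 4541 stars and a 28-day month are assumptions.
print(round(fork_star_ratio(604, 4541), 1))   # 13.3
print(round(avg_daily_growth(50, 28), 1))     # 1.8
```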

[Charts: stars, forks, open issues, pull requests, and commits over time]

AI-Generated Tags

language-models
security
adversarial-attacks
alignment
research

