An open-source evaluation and testing library for LLM agents.
An open-source framework to help security professionals and engineers identify risks in generative AI systems.
A playground for running AI red teaming exercises, including infrastructure setup.
An AI-powered security toolkit for LLM vulnerability scanning and red teaming.
A powerful tool for automated LLM fuzzing to help developers and security researchers identify and mitigate potential jailbreaks.
An offensive/defensive security toolset for discovering, performing reconnaissance on, and ethically assessing AI agents.