This repository contains jailbreak prompts for AI systems, likely intended for adversarial testing or security research.
Sliver is an adversary emulation framework written in Go that can be used for red team engagements.
A Python library for machine learning security, providing tools for adversarial attacks and defenses.
A data augmentation library for natural language processing (NLP) tasks, enabling developers to improve model performance.
TextAttack is a Python framework for adversarial attacks, data augmentation, and model training in NLP.
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX.
A unified evaluation framework for large language models, focused on prompt engineering and model robustness.
A PyTorch implementation of adversarial attacks, aimed at developers working on deep learning projects.
A curated reading list for security, safety, and privacy of large language models (LLMs) and AI systems.
A collection of must-read papers on adversarial attacks and defenses for natural language processing.
A toolbox to generate adversarial examples that fool neural networks in various ML frameworks.
A toolbox for adversarial robustness research, focused on building more secure machine learning models.
A PyTorch library for attacking and defending deep learning models against adversarial examples.
This repository contains detailed adversary simulations of APT campaigns targeting various critical sectors.
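Several of the toolkits listed above implement gradient-based evasion attacks such as the Fast Gradient Sign Method (FGSM). A minimal sketch of the core idea, using plain NumPy on a toy logistic-regression model rather than any of the listed libraries (all names and values here are illustrative):

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast Gradient Sign Method on a logistic-regression model.

    Computes x_adv = x + eps * sign(grad_x loss), i.e. shifts the
    input by eps in the direction that most increases the loss.
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probability of class 1
    grad_x = (p - y) * w           # gradient of cross-entropy w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy model: classifies x as class 1 when w @ x + b > 0.
w = np.ones(4)
b = 0.0
x = np.array([0.1, 0.2, 0.1, 0.1])  # clean input, true label 1

x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.3)
# For y = 1 the gradient points along -w, so every coordinate is
# shifted down by eps, flipping the sign of the model's score.
```

Real attack libraries generalize this pattern to deep networks via automatic differentiation, and add iterative and constrained variants (PGD, C&W), but the loss-gradient step above is the common building block.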