Explore Projects

Discover 3 open source projects

Active filters (1):
Search: llm-evaluation-frameworkร—
Clear all

Showing 1-3 of 3 projects

confident-ai/deepeval

A Python framework for evaluating and benchmarking large language models (LLMs) and their capabilities.

13.9K
Active
Python
LLM Frameworks
Python
#llm-evaluation#benchmarking#python-framework

promptfoo/promptfoo

A framework for testing and evaluating large language models, prompts, and AI agents for security and performance.

10.8K
Active
TypeScript
LLM Frameworks
TypeScript
#llm-evaluation#prompt-engineering#red-teaming

msoedov/agentic_security

An AI-powered security toolkit for LLM vulnerability scanning and red teaming.

1.8K
Active
Python
LLM Frameworks
Security Research
Python
#llm-security#llm-vulnerability-scanner#llm-fuzzing

Stay in the loop

Get weekly updates on trending AI coding tools and projects.