Explore Projects

Discover 1 open source project

Active filters (1):
Search: llm-evaluation-metrics

Showing 1-1 of 1 projects

confident-ai/deepeval

A Python framework for evaluating and benchmarking large language models (LLMs) and their capabilities.

13.9K · Active · Python
LLM Frameworks · Python
#llm-evaluation #benchmarking #python-framework
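As a rough illustration of the kind of reference-based metric LLM evaluation frameworks like this compute, here is a minimal token-overlap F1 scorer in plain Python. This is a generic sketch of the concept only, not deepeval's actual API; the function name and logic are this example's own.

```python
from collections import Counter


def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1, a common reference-based LLM evaluation metric.

    Tokenizes both strings by whitespace, counts the multiset overlap,
    and returns the harmonic mean of precision and recall.
    """
    pred = prediction.lower().split()
    ref = reference.lower().split()
    if not pred or not ref:
        # Both empty counts as a perfect match; one empty counts as zero.
        return float(pred == ref)
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)


# Example: 4 of 6 tokens overlap on each side, so F1 = 2/3.
score = token_f1("the cat sat on the mat", "a cat sat on a mat")
print(round(score, 2))  # → 0.67
```

Real frameworks layer LLM-as-judge and retrieval-aware metrics on top of simple scorers like this, but the input/output shape (prediction vs. reference in, a 0-to-1 score out) is the same.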
