carlini/yet-another-applied-llm-benchmark

A benchmark for evaluating language models on a range of applied tasks, useful for developers building AI-powered apps.

Python · AI & Machine Learning · LLM Frameworks · GPL-3.0

Stars: 1.0K
Forks: 78
Created: Dec 18, 2023
Last Updated: Apr 27, 2025

Project Analytics

Stars Growth (1 Month): +3 (+0.3% change)
Avg Daily Growth (1 Month): +0.1 stars per day
Fork/Star Ratio (All Time): 7.5% (normal engagement)
Lifetime Growth: 1.3 stars/day over 810 days
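For reference, the derived metrics above follow from simple ratios. A minimal sketch, assuming an exact star count of 1,040 behind the rounded "1.0K" (hypothetical, chosen to be consistent with the displayed 7.5% ratio):

```python
# Hypothetical figures consistent with the card's rounded values.
stars = 1040     # assumed exact count behind the displayed "1.0K"
forks = 78       # from the card
age_days = 810   # lifetime in days, from the card

# Fork/Star ratio: forks divided by stars, shown as a percentage.
fork_star_ratio = forks / stars        # 0.075 -> "7.5%"

# Lifetime growth: average stars gained per day since creation.
lifetime_growth = stars / age_days     # ~1.28 -> "1.3 stars/day"

print(f"{fork_star_ratio:.1%}")        # 7.5%
print(f"{lifetime_growth:.1f}")        # 1.3
```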

Charts: stars, forks, open issues, pull requests, and commits over time.

AI-Generated Tags

language-models
evaluation
benchmark
llm
ai-development
testing
