ggml-org/llama.cpp

Run LLMs locally in C/C++ with high performance

Language: C++ · License: MIT
Categories: Local AI & Model Runners · Local Inference Engines

Stars: 96.8K
Forks: 15.2K
Created: Mar 10, 2023
Last Updated: Mar 5, 2026

Project Analytics

Stars Growth (1 Month): +2.3K (+2.5%)
Avg Daily Growth (1 Month): +82.8 stars per day
Fork/Star Ratio (All Time): 15.8% (good engagement)
Lifetime Growth: 88.6 stars/day over 1.1K days
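The derived metrics above follow directly from the raw counts and dates. A minimal sketch of the arithmetic, assuming the rounded figures shown on the page (so results differ slightly from the displayed 15.8% and 88.6):

```python
from datetime import date

# Rounded figures taken from the listing above
stars, forks = 96_800, 15_200
created, updated = date(2023, 3, 10), date(2026, 3, 5)

days = (updated - created).days        # project lifetime in days (~1.1K)
fork_star_ratio = forks / stars * 100  # forks as a percentage of stars
lifetime_growth = stars / days         # average stars gained per day

print(f"{days} days, {fork_star_ratio:.1f}% fork/star, {lifetime_growth:.1f} stars/day")
```

With the rounded inputs this yields roughly 15.7% and 88.7 stars/day; the page presumably computes from exact counts, hence the small discrepancy.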

Charts: Stars, Forks, Open Issues, Pull Requests, and Commits Over Time.

Topics (AI-Generated Tags): llama.cpp, ggml, C++, LLM inference, local AI, model runners
