Find which LLM models run on your hardware. Supports 157 models across 30 providers.
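The core question such a tool answers is whether a model's weights fit in available memory. A minimal sketch of that check, assuming a simple rule of thumb (parameter count × quantization width, plus a runtime overhead factor — the factor and function names here are illustrative, not the tool's actual method):

```python
def estimate_vram_gb(params_b: float, bits: int = 4, overhead: float = 1.2) -> float:
    """Rough VRAM estimate in GB for a model.

    params_b: parameter count in billions (e.g. 7 for a 7B model)
    bits:     quantization width per weight (4 for Q4, 16 for fp16)
    overhead: multiplier for KV cache and runtime buffers (assumed)
    """
    weight_bytes = params_b * 1e9 * bits / 8
    return weight_bytes * overhead / 1e9

def fits(params_b: float, bits: int, vram_gb: float) -> bool:
    """True if the estimated footprint fits in the given VRAM budget."""
    return estimate_vram_gb(params_b, bits) <= vram_gb

# A 7B model quantized to 4 bits needs roughly 4.2 GB and fits on an 8 GB GPU;
# a 70B model at fp16 (~168 GB) does not fit on a 24 GB GPU.
print(fits(7, 4, 8))    # True
print(fits(70, 16, 24)) # False
```

Real estimators also account for context length (KV cache grows with it) and per-backend overhead, which this sketch folds into a single constant.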
- Stars: 11.5K
- Forks: 647
- Created: Feb 15, 2026
- Last updated: Mar 5, 2026
- Stars per day: +368.3% change over 19 days (normal engagement)
Related projects:

- Run LLMs locally in C/C++ with high performance
- Run local LLMs on any device with GPT4All
- Web UI for local AI with multiple backends and offline capabilities
- Run frontier AI models locally across devices using RDMA and tensor parallelism