Run LLMs locally in C/C++ with high performance
Stars: 96.8K
Forks: 15.2K
Created: Mar 10, 2023
Last updated: Mar 5, 2026
Stars per day: +2.5% change; good engagement sustained over 1.1K days
Related projects:
- GPT4All: run local LLMs on any device
- Web UI for local AI with multiple backends and offline capabilities
- Run frontier AI models locally across devices using RDMA and tensor parallelism
- Open-source ChatGPT replacement with local AI model support