Run frontier AI models locally across devices using RDMA and tensor parallelism
Stars: 42.1K
Forks: 2.9K
Created: Jun 24, 2024
Last updated: Mar 5, 2026
Growth: +2.5% change in stars per day, measured over 620 days (normal engagement)
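The description mentions tensor parallelism, the technique of splitting one model layer's weights across several devices so each computes only part of the result. A minimal conceptual sketch follows, using NumPy arrays on a single machine to stand in for devices; this is an illustration of column-parallel splitting in general, not the project's actual RDMA-based implementation.

```python
import numpy as np

# Conceptual sketch of tensor (column) parallelism: one linear layer's
# weight matrix is split column-wise across two simulated "devices".
# Each computes a partial output; concatenating the shards reproduces
# the single-device result.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))    # input activations
W = rng.standard_normal((8, 16))   # full weight matrix of the layer

# Shard the weight columns across two devices.
W0, W1 = np.split(W, 2, axis=1)

# Each device computes its partial matmul independently.
y0 = x @ W0
y1 = x @ W1

# "All-gather": concatenating the shards matches the unsharded layer.
y = np.concatenate([y0, y1], axis=1)
assert np.allclose(y, x @ W)
```

In a real multi-device setup, the final concatenation step is a network collective; running it over RDMA, as the description indicates, avoids CPU copies on that critical path.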
Related projects:
- Run LLMs locally in C/C++ with high performance
- Run local LLMs on any device with GPT4All
- Web UI for local AI with multiple backends and offline capabilities
- Open-source ChatGPT replacement with local AI model support