Showing 1-20 of 20 trending projects
Find which LLM models run on your hardware. Supports 157 models across 30 providers.
Autonomous AI assistant infrastructure in Zig: a fast, minimal, self-contained runtime for building AI agents.
Run LLMs locally in C/C++ with high performance
Run Claude AI agents on ultra-low-power chips with local-first memory and privacy.
Rust implementation of OpenClaw focusing on privacy-preserving AI model execution and security hardening.
Personal AI assistant platform with multi-app chat integration and extensible capabilities for local or cloud deployment.
Run frontier AI models locally across devices using RDMA and tensor parallelism
Cross-platform voice-to-text app with local & cloud AI models, privacy-first architecture
Lightweight AI assistant for ESP32 microcontrollers with GPIO, scheduling, custom tools, and memory.
Pure C inference engine for the Mistral Voxtral 4B speech-to-text model, with minimal dependencies
Fast local neural text-to-speech engine for offline voice synthesis
Reliable model swapping for local LLM servers: seamlessly switch between llama.cpp, vLLM, and compatible backends
Web UI for local AI with multiple backends and offline capabilities
macOS offline speech-to-text app using local ML: no cloud, fully private voice dictation
lib-pku/libpku: A collection of course materials and resources for vibe coders.
Run billion-parameter LLMs on embedded devices with extreme quantization for edge inference
Standalone Windows executables for Whisper speech-to-text and diarization, with no Python setup required.