Web UI for local AI with multiple backends and offline capabilities
Run frontier AI models locally across devices using RDMA and tensor parallelism
lib-pku/libpku: A collection of course materials and resources for vibe coders.
Find which LLM models run on your hardware. Supports 157 models across 30 providers.
Personal AI assistant platform with multi-app chat integration and extensible capabilities for local or cloud deployment.
Autonomous AI assistant infrastructure in Zig: fast, minimal, self-contained runtime for building AI agents.
Rust implementation of OpenClaw focusing on privacy-preserving AI model execution and security hardening.
Run Claude AI agents on ultra-low-power chips with local-first memory and privacy.
Fast local neural text-to-speech engine for offline voice synthesis
Standalone Windows executables for Whisper speech-to-text & diarization without Python setup.
Reliable model swapping for local LLM servers: seamlessly switch between llama.cpp, vLLM, and compatible backends.
Lightweight AI assistant for ESP32 microcontrollers with GPIO, scheduling, custom tools, and memory.
Cross-platform voice-to-text app with local & cloud AI models, privacy-first architecture
Pure C inference engine for Mistral Voxtral 4B speech-to-text model with minimal dependencies
macOS offline speech-to-text app using local ML: no cloud, fully private voice dictation
Run billion-parameter LLMs on embedded devices with extreme quantization for edge inference