Find which LLM models run on your hardware. Supports 157 models across 30 providers.
Personal AI assistant platform with multi-app chat integration and extensible capabilities for local or cloud deployment.
Autonomous AI assistant infrastructure in Zig—fast, minimal, self-contained runtime for building AI agents.
Plugin for Stable Diffusion WebUI that optimizes large image generation through tiled diffusion and VAE techniques.
AI-powered stock analysis tool with multi-source LLM support, sentiment analysis, and local data processing.
Rust implementation of OpenClaw focusing on privacy-preserving AI model execution and security hardening.
Native macOS runtime for running local/cloud AI models with MCP tool sharing and always-on workflows.
Run Claude AI agents on ultra-low-power chips with local-first memory and privacy.
Fast local neural text-to-speech engine for offline voice synthesis.
AI subtitle generator for video with DaVinci Resolve integration and speaker diarization; runs locally.
Standalone Windows executables for Whisper speech-to-text & diarization without Python setup.
Native multimodal model for high-quality image generation with text-to-image capabilities.
Reliable model swapping for local LLM servers - seamlessly switch between llama.cpp, vLLM, and compatible backends.
Desktop GUI for OpenClaw AI agents - turns CLI agent orchestration into a graphical interface.
Personal AI assistant in Rust with multi-provider LLMs, memory, sandboxed execution & MCP tools.
Large language model by Alibaba Cloud's Qwen team for advanced NLP and AI applications.
Lightweight AI assistant for ESP32 microcontrollers with GPIO, scheduling, custom tools, and memory.
Cross-platform voice-to-text app with local & cloud AI models and a privacy-first architecture.
Pure C inference engine for the Mistral Voxtral 4B speech-to-text model with minimal dependencies.
macOS offline speech-to-text app using local ML—no cloud, fully private voice dictation