Open-source ChatGPT replacement with local AI model support
VybeGuide.ai: a discovery platform for vibe coders
Unified framework for building enterprise RAG pipelines with small, specialized models
A self-hosted, offline, ChatGPT-like chatbot powered by Llama 2 with no data leaving your device.
Production-ready toolkit for local AI inference
Unified, production-ready inference API to run open-source language, speech, and multimodal models on cloud, on-prem, or your laptop.
A private & local AI personal knowledge management app for high-entropy people, with a focus on vibe coders.
A web interface for chatting with Alpaca through llama.cpp, with a fully dockerized setup and easy-to-use API.
A free, open-source Rust inference server compatible with the OpenAI API, suitable for vibe coders
A free, open-source AI code completion plugin for Visual Studio Code that rivals GitHub Copilot.
AGiXT is a comprehensive AI agent automation platform that streamlines AI integration and task orchestration.
An open-source language server that empowers software engineers with AI-powered functionality, not replacing them.
A C++ library for building local AI inference platforms with support for ONNX models.
RamaLama simplifies local serving of AI models and enables their use for inference in production via containers.
Reliable model swapping for local LLM servers: seamlessly switch between llama.cpp, vLLM, and compatible backends
A Python library that provides state-of-the-art compression techniques and efficient LLM inference on Intel platforms to build chatbots quickly.
A developer-focused platform for text-to-speech, RAG, and LLMs, with local-first architecture.
A native macOS app that allows you to chat with your favorite LLaMA language models.
Open-source LLM load balancer and serving platform for self-hosting LLMs at scale.
A JavaScript tool for calculating token/s and GPU memory requirements for large language models like LLaMa.