mostlygeek/llama-swap

Reliable model swapping for local LLM servers - seamlessly switch between llama.cpp, vLLM, and compatible backends
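llama-swap sits in front of local inference backends as an OpenAI-compatible proxy: clients send standard chat completion requests, and the `model` field in the request determines which configured backend the proxy runs, stopping the previous one if needed. The sketch below shows that client side in Go; the listen address and model name are illustrative placeholders, not values taken from the project.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The "model" field is what drives the swap: requesting a model
	// name mapped to a different backend causes llama-swap to shut
	// down the running server and launch the configured one.
	// "localhost:8080" and "llama-3.1-8b" are placeholders.
	payload, err := json.Marshal(map[string]any{
		"model": "llama-3.1-8b",
		"messages": []map[string]string{
			{"role": "user", "content": "Say hello in one sentence."},
		},
	})
	if err != nil {
		panic(err)
	}

	resp, err := http.Post(
		"http://localhost:8080/v1/chat/completions", // standard OpenAI-compatible route
		"application/json",
		bytes.NewReader(payload),
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```

Because the proxy speaks the OpenAI API, any existing OpenAI client library can be pointed at it by overriding the base URL; no llama-swap-specific SDK is needed.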

Language: Go
Categories: Local AI & Model Runners, Local Inference Engines
License: MIT
Stars: 2.6K
Forks: 191
Created: Oct 4, 2024
Last Updated: Mar 2, 2026

Project Analytics

Stars Growth (1 Month): +155 (+6.4%)
Avg Daily Growth (1 Month): +8.6 stars per day
Fork/Star Ratio (All Time): 7.4% (normal engagement)
Lifetime Growth: 4.9 stars/day over 519 days

Charts (not reproduced here): Stars Over Time, Forks Over Time, Open Issues Over Time

AI-Generated Tags: local-llm, model-swapping, llama-cpp, vllm, openai-compatible, inference
