Showing 181-200 of 351 projects
A scalable autoregressive image generation model based on the Llama architecture that outperforms comparable diffusion models.
Run AI models like LLaMA locally on your machine with Node.js bindings for llama.cpp, with optional JSON-schema enforcement on model output.
A Verilog-based GPGPU hardware project for accelerating AI/ML workloads.
A library for training PyTorch models with differential privacy, enabling privacy-preserving machine learning.
uTensor is a TinyML inference library for microcontrollers and edge devices, enabling embedded AI applications.
OminiControl is a minimal, universal control framework for diffusion transformer models such as FLUX.
A diffusion transformer model for generating high-quality 4K text-to-image art.
A Python extension for the Stable Diffusion AI model, focused on the DreamBooth fine-tuning technique.
A TensorFlow template application for building deep learning models and deploying them to production.
Flux 2 is a pure C inference engine for an image generation model.
Official inference repository for FLUX.2, a family of image generation models.
A Python library for state-of-the-art generative image models.
A comprehensive guide for building production-ready RAG-based LLM applications using the Ray framework.
LongWriter is a fine-tuned large language model (LLM) that can generate coherent long-form text of 10,000+ words from long-context input.
ONNX.js allows developers to run ONNX machine learning models using JavaScript in the browser or Node.js.
A complete installer for Automatic1111's popular Stable Diffusion WebUI, a tool for AI-powered image generation.
A C# library that provides a real-time interface to the Automatic1111 Stable Diffusion API for AI-powered image generation.
PPL Quantization Tool (PPQ) is a powerful offline neural network quantization tool.
A Kotlin library that runs Stable Diffusion on Android devices with Snapdragon NPU acceleration.
A WebGPU-accelerated ONNX inference runtime written entirely in Rust, ready for native and web deployment.