An open-source implementation of the LoRA (Low-Rank Adaptation) technique for fine-tuning large language models.
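To make the technique concrete, here is a minimal, hypothetical sketch of a LoRA layer in PyTorch. The class name, the rank r, and the alpha scaling are illustrative defaults, not this project's actual API: LoRA freezes the pretrained weight and learns only a small low-rank update B @ A alongside it.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA layer: y = W x + (alpha / r) * B (A x).

    The pretrained weight W stays frozen; only the low-rank factors
    A (r x in_features) and B (out_features x r) are trained.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero, so the adapted model initially matches the base model
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: wrap an existing layer; only the adapter factors receive gradients.
layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(4, 768))
```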
A library for fine-tuning diffusion models like Stable Diffusion using Low-Rank Adaptation (LoRA) for quick and efficient model personalization.
An open-source library for quantizing diffusion models to 4-bit precision, absorbing outliers through low-rank components.
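As a rough sketch of the outlier-absorption idea (assumed from the description, not taken from this library's code): a high-precision low-rank component captures the weight matrix's dominant directions, so the residual has a narrower value range and loses less accuracy under 4-bit quantization. The function name and rank choice below are hypothetical.

```python
import torch

def quantize_with_low_rank_residual(W: torch.Tensor, rank: int = 4, n_bits: int = 4):
    """Illustrative decomposition W ~= L + Q(W - L).

    L is a rank-r component kept in high precision; the residual W - L
    is quantized symmetrically to n_bits. Sketch of the general idea only.
    """
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    L = U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank]  # high-precision low-rank branch
    residual = W - L
    qmax = 2 ** (n_bits - 1) - 1
    scale = residual.abs().max() / qmax
    q = torch.clamp(torch.round(residual / scale), -qmax - 1, qmax)
    return L, q, scale  # reconstruct as L + q * scale

W = torch.randn(128, 128)
L, q, s = quantize_with_low_rank_residual(W)
mean_err = (W - (L + q * s)).abs().mean()
```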
A memory-efficient library for training large language models (LLMs) using gradient low-rank projection.
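The memory saving comes from keeping optimizer work in a low-rank subspace of the gradient. A simplified, hypothetical single step of the idea (a real implementation would keep optimizer moments in the compact space and refresh the projector only every few hundred steps):

```python
import torch

def low_rank_gradient_step(weight, grad, rank=4, lr=1e-3, proj=None):
    """Illustrative gradient low-rank projection step.

    Project the full gradient onto a rank-r subspace from its SVD,
    form the update there, then project back to the full space.
    """
    if proj is None:
        # top-r left singular vectors of the gradient span the subspace
        U, _, _ = torch.linalg.svd(grad, full_matrices=False)
        proj = U[:, :rank]                 # (m, r) projector
    low_rank_grad = proj.T @ grad          # (r, n) compact gradient state
    update = proj @ low_rank_grad          # back to full (m, n) shape
    return weight - lr * update, proj

W = torch.randn(64, 32)
G = torch.randn(64, 32)
W, P = low_rank_gradient_step(W, G)
```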
This repository provides tools and models for training LoRA (Low-Rank Adaptation) adapters on large language models such as LLaMA and ChatGLM, enabling AI-powered code generation and assistance.