Explore Projects

Discover 5 open source projects

Active filters (1):
Search: low-rank

Showing 1-5 of 5 projects

microsoft/LoRA

An open-source implementation of the LoRA (Low-Rank Adaptation) technique for fine-tuning large language models.

13.3K
Archived
Python
LLM Frameworks
PyTorch
#low-rank-adaptation #language-model #fine-tuning
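The low-rank adaptation idea behind this project can be sketched in a few lines: rather than updating a large frozen weight matrix `W`, LoRA trains two small factors `A` and `B` so the effective weight becomes `W + BA`. This is a minimal numpy sketch of the technique, not the library's actual PyTorch API; all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 4  # r << d_in: the low-rank bottleneck

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-initialized

def lora_forward(x, scale=1.0):
    # Base path plus low-rank update: equivalent to (W + scale * B @ A) @ x,
    # but never materializes the full d_out x d_in update matrix.
    return W @ x + scale * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted model starts out identical to the base model.
assert np.allclose(lora_forward(x), W @ x)
```

Only `A` and `B` (here `r * (d_in + d_out)` parameters) are trained, which is why LoRA fine-tuning fits in far less memory than full fine-tuning.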

cloneofsimo/lora

A library for fine-tuning diffusion models like Stable Diffusion using Low-Rank Adaptation (LoRA) for quick and efficient model personalization.

7.5K
Archived
Jupyter Notebook
Fine-tuning
Inference
Jupyter Notebook
#diffusion #stable-diffusion #fine-tuning

nunchaku-ai/nunchaku

An open-source library for quantizing diffusion models to 4-bit precision, absorbing outliers through low-rank components.

3.7K
Active
Python
Diffusion Models
Quantization
PyTorch
#diffusion-models #quantization #mlops
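The "absorbing outliers through low-rank components" idea can be illustrated with a toy decomposition: split a weight matrix into a full-precision low-rank branch (which soaks up the large-magnitude directions) plus a 4-bit residual. This is a hedged numpy sketch of the general technique, not the library's actual kernels or API.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_4bit(w):
    # Symmetric 4-bit quantization: round to 16 levels scaled by the max magnitude.
    scale = np.abs(w).max() / 7.0
    return np.clip(np.round(w / scale), -8, 7) * scale

W = rng.standard_normal((64, 64))
W[0, 0] = 50.0  # a single outlier inflates the quantization scale for every element

# Naive 4-bit quantization of the whole matrix:
err_plain = np.linalg.norm(W - quantize_4bit(W))

# Keep a rank-r branch in full precision; it captures the outlier-carrying
# directions, leaving a well-conditioned residual for quantization.
r = 4
U, S, Vt = np.linalg.svd(W)
L = U[:, :r] * S[:r]            # (64, r) full-precision low-rank factor
R = W - L @ Vt[:r]              # residual with the outlier energy removed
W_hat = L @ Vt[:r] + quantize_4bit(R)
err_lowrank = np.linalg.norm(W - W_hat)

# The low-rank branch makes 4-bit quantization far more accurate.
assert err_lowrank < err_plain
```

The memory cost of the extra branch is only `r * (m + n)` full-precision values, while the bulk of the matrix stays at 4 bits.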

jiaweizzhao/GaLore

A memory-efficient library for training large language models (LLMs) using gradient low-rank projection.

1.7K
Archived
Python
LLM Frameworks
Python
#machine-learning #large-language-models #memory-efficiency
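Gradient low-rank projection can be sketched simply: project each full gradient into a small rank-r subspace, keep the optimizer state (the memory-heavy part) in that subspace, and project back only for the weight update. A minimal numpy sketch of the idea, with illustrative names; the actual library is PyTorch-based and refreshes the projector periodically during training.

```python
import numpy as np

rng = np.random.default_rng(0)

m, n, r = 64, 64, 4
W = rng.standard_normal((m, n))
G = rng.standard_normal((m, n))           # full gradient of W

# Compute a rank-r basis from the gradient's SVD (refreshed periodically in practice):
U, _, _ = np.linalg.svd(G, full_matrices=False)
P = U[:, :r]                               # (m, r) projection basis

# Optimizer state (a momentum buffer here) lives in the small space:
g_small = P.T @ G                          # (r, n) projected gradient
momentum = np.zeros((r, n))
momentum = 0.9 * momentum + g_small

# Project back to full size only for the weight update itself:
lr = 1e-2
W -= lr * (P @ momentum)

# The persistent optimizer state is r x n instead of m x n.
assert momentum.size < G.size
```

For large models the optimizer state dominates training memory, so shrinking it from `m × n` to `r × n` per layer is where the savings come from.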

unit-mesh/unit-minions

Tools and models for training LoRA (Low-Rank Adaptation) adapters for large language models such as LLaMA and ChatGLM, enabling AI-powered code generation and assistance.

1.1K
Archived
Jupyter Notebook
LLM Frameworks
Fine-tuning
Jupyter Notebook
#llm #lora #code-generation
