mit-han-lab/llm-awq

A library for efficient weight quantization of large language models to accelerate inference on edge devices.
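To make the description concrete: weight-only quantization stores each weight group as low-bit integer codes plus a per-group scale and zero-point, and dequantizes on the fly at inference. The sketch below shows generic group-wise INT4 round-to-nearest quantization in NumPy; it is illustrative only and is not llm-awq's API (AWQ additionally rescales salient channels using activation statistics before quantizing).

```python
import numpy as np

def quantize_groupwise(w, group_size=128, bits=4):
    """Quantize weights to unsigned int codes with per-group scale/zero-point.
    Illustrative sketch of weight-only quantization, not llm-awq's actual API."""
    qmax = 2 ** bits - 1
    w = w.reshape(-1, group_size)                  # split row into groups
    wmin = w.min(axis=1, keepdims=True)
    wmax = w.max(axis=1, keepdims=True)
    scale = (wmax - wmin) / qmax                   # one scale per group
    scale = np.where(scale == 0, 1.0, scale)       # guard constant groups
    zero = np.round(-wmin / scale)                 # one zero-point per group
    q = np.clip(np.round(w / scale) + zero, 0, qmax).astype(np.uint8)
    return q, scale, zero

def dequantize(q, scale, zero):
    """Reconstruct approximate float weights from codes."""
    return (q.astype(np.float32) - zero) * scale

# Round-trip a random weight row and measure the reconstruction error.
rng = np.random.default_rng(0)
w = rng.normal(size=(1, 1024)).astype(np.float32)
q, scale, zero = quantize_groupwise(w)
w_hat = dequantize(q, scale, zero).reshape(w.shape)
err = np.abs(w - w_hat).max()
```

With 4 bits the codes fit in half a byte each (here stored uint8 for clarity; real kernels pack two codes per byte), and the worst-case error per weight is half a quantization step.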

Python
AI & Machine Learning
LLM Frameworks
MIT

Stars: 3.5K
Forks: 301
Created: Jun 1, 2023
Last Updated: Jul 17, 2025

Project Analytics

Stars Growth (1 Month): +19 (+0.6% change)
Avg Daily Growth (1 Month): +0.7 stars per day
Fork/Star Ratio (All Time): 8.7% (normal engagement)
Lifetime Growth: 3.4 stars/day over 1.0K days

Charts: Stars, Forks, Open Issues, Pull Requests, and Commits over time.

AI-Generated Tags

llm
compression
acceleration
quantization
edge-inference
performance-optimization

