deepseek-ai/FlashMLA

Efficient multi-head latent attention (MLA) decoding kernels for LLM inference, optimized for variable-length sequence serving.

Language: C++
Categories: AI & Machine Learning, LLM Frameworks
License: MIT

Stars: 12.5K
Forks: 994
Created: Feb 21, 2025
Last Updated: Feb 6, 2026

Project Analytics

Stars Growth (1 Month): +60 (+0.5% change)
Avg Daily Growth (1 Month): +2.1 stars per day
Fork/Star Ratio (All Time): 7.9% (normal engagement)
Lifetime Growth: 33.0 stars/day over 379 days

Trend charts: Stars, Forks, Open Issues, Pull Requests, and Commits over time.

AI-Generated Tags: latent-attention, multi-head-attention, ai-coding, performance-optimization, c++
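
For readers new to the project's core technique: multi-head latent attention (MLA) shrinks the key/value cache by storing one low-rank latent vector per token and expanding it back into per-head keys and values inside the attention computation, which is the step kernels like FlashMLA fuse and accelerate on GPU. The sketch below illustrates the idea in plain PyTorch; every name, shape, and simplification (no RoPE, no paged cache, random weights) is an assumption made for illustration and does not reflect FlashMLA's actual API.

```python
# Illustrative sketch of the multi-head latent attention (MLA) idea that
# kernels like FlashMLA accelerate. NOT FlashMLA's API; names and shapes
# are assumptions for illustration only.
import torch

batch, seq_len, d_model = 2, 128, 512
n_heads, d_head, d_latent = 8, 64, 64  # latent is far smaller than 2 * n_heads * d_head

# Random weights stand in for trained projections.
w_dkv = torch.randn(d_model, d_latent) * 0.02           # down-projection to the shared latent
w_uk  = torch.randn(d_latent, n_heads * d_head) * 0.02  # up-projection to per-head keys
w_uv  = torch.randn(d_latent, n_heads * d_head) * 0.02  # up-projection to per-head values
w_q   = torch.randn(d_model, n_heads * d_head) * 0.02   # query projection

hidden = torch.randn(batch, seq_len, d_model)

# The cache holds only the compressed latent: (batch, seq_len, d_latent)
# instead of (batch, seq_len, 2 * n_heads * d_head) full keys and values.
kv_latent_cache = hidden @ w_dkv

# One decode step: the newest token attends over the cached latents,
# which are expanded to per-head keys/values on the fly.
q = (hidden[:, -1:] @ w_q).view(batch, 1, n_heads, d_head).transpose(1, 2)
k = (kv_latent_cache @ w_uk).view(batch, seq_len, n_heads, d_head).transpose(1, 2)
v = (kv_latent_cache @ w_uv).view(batch, seq_len, n_heads, d_head).transpose(1, 2)

scores = (q @ k.transpose(-2, -1)) / d_head ** 0.5
out = (scores.softmax(dim=-1) @ v).transpose(1, 2).reshape(batch, 1, n_heads * d_head)
print(out.shape)  # torch.Size([2, 1, 512])
```

The payoff is cache size: only d_latent values per token are stored rather than 2 * n_heads * d_head, and the expansion plus attention shown above is what the project's fused kernels perform in a single decode-time pass (per the README, targeting NVIDIA Hopper GPUs).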
