Relaxed-System-Lab/Flash-Sparse-Attention

Efficient implementations of Native Sparse Attention, a key component in large language models.
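Native Sparse Attention restricts each query to a subset of key/value blocks instead of attending over the full sequence. The snippet below is a minimal illustrative sketch of that block-selection idea in plain NumPy, not the repo's actual implementation; the function name, block size, and top-k selection scheme are all hypothetical simplifications.

```python
import numpy as np

def topk_block_sparse_attention(q, k, v, block_size=4, top_k=2):
    """Hypothetical sketch of block-sparse attention.

    Each query attends only to the top_k key blocks whose mean
    key vector scores highest against the query, rather than to
    every key in the sequence.
    """
    seq_len, d = k.shape
    n_blocks = seq_len // block_size
    # Block-level relevance: score the mean key of each block against q.
    k_blocks = k.reshape(n_blocks, block_size, d)
    block_scores = (k_blocks.mean(axis=1) @ q) / np.sqrt(d)  # (n_blocks,)
    # Keep only the top_k highest-scoring blocks.
    selected = np.argsort(block_scores)[-top_k:]
    idx = np.concatenate(
        [np.arange(b * block_size, (b + 1) * block_size) for b in selected]
    )
    # Dense softmax attention over the selected keys/values only.
    scores = (k[idx] @ q) / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ v[idx]
```

With `top_k=2` and `block_size=4` on a length-16 sequence, each query touches 8 keys instead of 16; efficient implementations like this repository's fuse the selection and attention steps into GPU kernels rather than gathering indices in Python.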

Language: Python
Categories: AI & Machine Learning, LLM Frameworks
License: Apache-2.0

Stars: 982
Forks: 14
Created: Aug 17, 2025
Last Updated: Sep 29, 2025

Project Analytics

Stars Growth (1 Month): -61 (-5.8% change)
Avg Daily Growth (1 Month): -2.2 stars per day
Fork/Star Ratio (All Time): 1.4% (normal engagement)
Lifetime Growth: 4.9 stars/day over 201 days
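The derived metrics follow directly from the raw counts on this page; a quick sanity check using the stars (982), forks (14), and repository age (201 days) shown above:

```python
stars, forks, days = 982, 14, 201

# Fork/Star Ratio (All Time): forks as a share of stars.
fork_star_ratio = forks / stars * 100   # ~1.4%

# Lifetime Growth: average stars accrued per day since creation.
lifetime_growth = stars / days          # ~4.9 stars/day
```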

[Charts: Stars, Forks, Open Issues, Pull Requests, and Commits over time]

AI-Generated Tags

machine-learning
attention-mechanism
large-language-models
performance-optimization
open-source
