A tutorial on deep learning by renowned professor Hung-yi Lee, covering a wide range of AI and machine learning topics.
Removes unnecessary files from node_modules to reduce project size and speed up installs.
A high-performance library for efficient neural network pruning and compression across LLMs, vision models, and more.
A sparsity-aware deep learning inference runtime for CPUs, optimized for performance and efficiency.
Optimizes large language models for low-bit precision and sparsity, improving model compression techniques.
AIMET is an open-source library for advanced quantization and compression techniques in trained neural network models.
A curated list of neural network pruning resources for developers interested in model acceleration and compression.
SparseML provides a library for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models.
A Python library for optimizing deep learning models for faster inference on deployment platforms like TensorRT.
A curated list of efficient and compressed large language models for developers to explore.
A JavaScript-based Gobang (Five-in-a-Row) AI game, built using the Alpha-Beta pruning algorithm.
Easy-to-use, config-driven CLI tool for the Restic backup system, with support for deduplication and incremental backups.
Practical course on using Large Language Models (LLMs) with tools like LangChain, HuggingFace, and PEFT.
This repository provides a model pruning technique for the YOLOv3 object detection model on the Oxford Hand dataset.
An open-source toolbox and benchmark for model compression and acceleration in PyTorch.
PaddleSlim is an open-source library for deep model compression and architecture search.
A toolkit to optimize machine learning models for deployment, including quantization and pruning.
A powerful benchmark for Monte Carlo Tree Search in sequential decision-making scenarios.
A Python library for channel and layer pruning of YOLOv3 and YOLOv4 models, with support for knowledge distillation.
A PyTorch library that re-examines network pruning techniques and their effect on model performance.