An open-source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, and hyperparameter tuning (a minimal tuning sketch follows this list).
Efficient AI model backbones developed by Huawei's Noah's Ark Lab, including GhostNet, TNT (Transformer in Transformer), and MLP-based architectures.
A collection of resources and tools for knowledge distillation, a technique that compresses a model by transferring knowledge from a larger teacher to a smaller student (see the distillation sketch after this list).
A high-performance library for efficient neural network pruning and compression across LLMs, vision models, and more.
Pretrained language models and optimization techniques for large-scale distributed AI/ML development.
An automatic model compression framework for developing smaller and faster AI applications.
A curated list of neural network pruning resources for developers interested in model acceleration and compression.
A comprehensive collection of resources for model quantization research and optimization.
A flexible PyTorch library for exploring deep and shallow knowledge distillation experiments.
A PyTorch implementation of various Knowledge Distillation (KD) methods for model compression and transfer learning.
A toolkit to optimize machine learning models for deployment, including quantization and pruning (a quantization example follows this list).
A Python toolkit for building and fine-tuning deep learning models for NLP tasks such as question answering.
Efficient computing methods developed by Huawei Noah's Ark Lab for model compression and optimization.
A library for accelerating deep neural networks through channel pruning, a model compression technique (see the pruning sketch below).
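The hyperparameter tuning that AutoML toolkits like the first entry automate reduces, in its simplest form, to a sample-evaluate-keep-best loop. Below is a toy random-search sketch, not any toolkit's actual API; the search space and score function are placeholders:

```python
import random

# Hypothetical search space; in a real AutoML toolkit this would be
# your model configuration and a validation-metric callback.
space = {"lr": [1e-4, 1e-3, 1e-2], "hidden": [64, 128, 256]}

def score(cfg):
    # Placeholder objective: a real run would train a model with cfg
    # and return its validation accuracy.
    return -abs(cfg["lr"] - 1e-3) + cfg["hidden"] / 1000

best_cfg, best_score = None, float("-inf")
for _ in range(10):  # random search: sample, evaluate, keep the best
    cfg = {k: random.choice(v) for k, v in space.items()}
    s = score(cfg)
    if s > best_score:
        best_cfg, best_score = cfg, s
print(best_cfg, best_score)
```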
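Knowledge distillation, referenced by several entries above, trains a small student to match a large teacher's softened output distribution. Here is a minimal sketch of the classic soft-target loss (Hinton et al.) in plain PyTorch, not tied to any specific library above; the temperature and blend weight are illustrative defaults:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soften both distributions with temperature T; kl_div expects
    # log-probabilities for the input and probabilities for the target.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitudes stay comparable across T
    hard = F.cross_entropy(student_logits, labels)  # ordinary supervised loss
    return alpha * soft + (1 - alpha) * hard
```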
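For the deployment-oriented quantization several entries mention, stock PyTorch offers post-training dynamic quantization as a one-call starting point; the tiny model here is purely illustrative:

```python
import torch
import torch.nn as nn

# A small example model; any module with nn.Linear layers works the same way.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

# Post-training dynamic quantization: weights are stored as int8 and
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x).shape)  # torch.Size([1, 10])
```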
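Channel pruning, as in the last entry, removes whole convolution output channels so the network becomes structurally smaller rather than merely sparser. A minimal sketch of L1-norm filter selection (in the spirit of "Pruning Filters for Efficient ConvNets"); the keep ratio is illustrative, and re-wiring downstream layers, which the libraries above automate, is omitted:

```python
import torch
import torch.nn as nn

def prune_conv_channels(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    # Keep the output channels whose filters have the largest L1 norms
    # and return a smaller Conv2d with those weights copied over.
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # one norm per filter
    keep = torch.topk(norms, n_keep).indices.sort().values

    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned

conv = nn.Conv2d(16, 32, 3, padding=1)
smaller = prune_conv_channels(conv, keep_ratio=0.25)
print(smaller)  # Conv2d(16, 8, kernel_size=(3, 3), ...)
```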