DeepSpeed optimizes deep learning training and inference with distributed computing techniques.
A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration.
An accelerator for local LLM inference and fine-tuning on Intel XPUs, with seamless integration into popular LLM frameworks.
LMDeploy is a toolkit for compressing, deploying, and serving large language models (LLMs).
An open-source implementation of large language models with a focus on model parallelism and efficiency.
Example models demonstrating how to use DeepSpeed for AI development.
A safe reinforcement learning from human feedback (RLHF) system for aligning large language models with human values.
An ongoing research project for training transformer language models at scale, including BERT and GPT-2.
An open-source framework for building knowledgeable large language models with fine-tuning capabilities.