PyTorch Lightning simplifies deep learning training and deployment at scale.
Build and run Docker containers that leverage NVIDIA GPUs.
An open-source machine learning engineering reference with resources for training, deploying, and scaling AI models.
CUDA on non-NVIDIA GPUs: a Rust library for running CUDA applications on a variety of GPU architectures.
TensorRT-LLM provides a Python API and optimizations to efficiently run large language models on NVIDIA GPUs.
Advanced offline password cracker supporting hundreds of hash and cipher types across multiple platforms.
NVIDIA TensorRT is an SDK for high-performance deep learning inference on NVIDIA GPUs.
A minimal GPU design in Verilog to learn how GPUs work from the ground up.
A high-performance gradient boosting library for machine learning tasks on CPUs and GPUs.
TensorRT implementation of popular deep learning networks for efficient inference on GPUs.
An open-source implementation of large language models with a focus on model parallelism and efficiency.
OptiScaler bridges upscaling and frame-generation technologies across GPUs, supporting DLSS2+, XeSS, FSR2+, and more.
Postgres-based database with GPU acceleration for machine learning and AI applications.
A flexible, high-performance deep learning framework for Python that runs on GPUs.
Unlock vGPU functionality for consumer-grade GPUs to enable advanced GPU-accelerated workloads.
A high-performance machine learning library for CUDA-enabled GPUs, accelerating data science workflows.
A high-performance zero-knowledge proof acceleration library for C++ and Rust developers working on cryptographic applications.
A high-performance GPU-accelerated fluid dynamics simulation library for scientific computing and visualization.
Optimize AI inference performance on GPUs with this Python library for selecting and tuning inference engines.
This Python script removes restrictions on NVENC video encoding sessions for consumer-grade GPUs.