A minimal implementation of the Flash Attention algorithm in CUDA for efficient AI model inference.
A Python library for optical character recognition (OCR) using a CNN-based seq2seq model with visual attention, compatible with Google Cloud ML Engine.
A PyTorch implementation of the original Transformer model with interactive visualizations.
A library that uses Hierarchical Attention Networks for document classification.
A comprehensive benchmark for spatio-temporal predictive learning, with a focus on AI-powered weather forecasting and video prediction.
A deep learning-based library for efficient lane detection, using self-attention distillation.
An unmaintained database sharding extension for PostgreSQL, superseded by the Citus distributed database.
Experts.js provides an easy way to create and deploy OpenAI assistants as modular AI agents with expanded memory and attention.
rebar is an Erlang build tool that provides dependency management and other build-related functionality.
A curated list of efficient attention modules for building AI-powered applications with transformers.
An implementation of the Graph Transformer architecture, which brings attention mechanisms to graph neural networks.
An experimental AGI project that reframes intelligence as entropy reduction rather than attention modeling.
A Python implementation of Self-Attention GAN (SAGAN), a generative adversarial network that incorporates self-attention mechanisms.
This repository contains implementations of Neural Process variants for machine learning research.
A Python library that implements Spatial Transformer Networks, differentiable modules that let neural networks learn spatial transformations of their input.
Efficient implementations of Native Sparse Attention, a sparse attention mechanism for efficient long-context modeling in large language models.