A Python framework for efficient inference with omni-modal AI models.
A fully automated AI-powered short video engine for generating videos from text, images, and audio.
Generates high-quality human motion videos with confidence-aware pose guidance.
ViMax is an all-in-one tool for agentic video generation, letting developers take on the roles of director, screenwriter, producer, and video generator in a single workflow.
A Python framework for high-performance auto-regressive diffusion model-based image and video generation.
An open-source, real-time, and streaming interactive world model for developers building AI-powered applications.
Official implementation of a paper on expressive talking head generation using diffusion models.
ReCamMaster is a novel video generation model that enables camera-controlled generative rendering from a single input video.
A Python-based library for generating videos from code, focused on the vibe coding and AI tools ecosystem.
Official implementation of a controllable character video synthesis model using spatial decomposed modeling.
An open-source benchmarking tool for evaluating video generation models.
Phantom is a video generation tool that maintains subject consistency through cross-modal text-video alignment.
An implementation of a pose-guided text-to-video generation model using the LAION Pose Dataset.
MagicTime is a diffusion-based time-lapse video generation model for metamorphic video simulation.
Official server for the MiniMax Model Context Protocol (MCP) that enables powerful AI capabilities like text-to-speech, image generation, and video generation.
MiniSora is a community exploring the implementation and future development of Sora-style video generation models.
A Python library for accelerating inference of video diffusion models using timestep embedding caching.
This repository contains code for a paper on motion representations for articulated animation, a key component in AI-driven animation and video generation.
A multimodal-driven architecture for customized video generation, enabling developers to create unique AI-powered videos.
StableAvatar is an end-to-end video diffusion transformer that generates high-quality audio-driven avatar videos.