Try LongCat-Video Live Demo
Experience the power of LongCat-Video in real-time. Generate videos from text or images and see the amazing results.
What is LongCat-Video
LongCat-Video is a foundational video generation model from Meituan with 13.6B parameters. It features a unified architecture supporting Text-to-Video, Image-to-Video, and Video-Continuation tasks within a single model. LongCat-Video natively supports long video generation, producing minutes-long videos without color drifting or quality degradation, and generates 720p, 30fps videos efficiently using a coarse-to-fine generation strategy.
- Text-to-Video Generation: Transform text prompts into high-quality videos with advanced neural networks. Create engaging visual content from simple text descriptions.
- Image-to-Video Generation: Bring static images to life by generating smooth video animations. Create dynamic content from single images with intelligent motion synthesis.
- Video Continuation: Extend existing videos seamlessly without quality loss or color drifting. Generate minutes-long videos with consistent quality and natural transitions.
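The coarse-to-fine strategy mentioned above can be sketched as a coarse draft pass followed by upsample-and-refine passes along the temporal and spatial axes. The stage resolutions, frame counts, and the `generate`/`refine` callables below are illustrative assumptions, not the model's actual pipeline:

```python
import numpy as np

def upsample(video: np.ndarray, t: int, h: int, w: int) -> np.ndarray:
    """Nearest-neighbor upsample a (T, H, W, C) clip along time and space."""
    ti = np.linspace(0, video.shape[0] - 1, t).round().astype(int)
    hi = np.linspace(0, video.shape[1] - 1, h).round().astype(int)
    wi = np.linspace(0, video.shape[2] - 1, w).round().astype(int)
    return video[ti][:, hi][:, :, wi]

def coarse_to_fine(generate, refine, stages):
    """Run one coarse generation pass, then repeatedly upsample and refine.

    generate: () -> low-resolution, low-frame-rate draft clip
    refine:   (clip) -> clip of the same shape, denoised/sharpened
    stages:   list of (frames, height, width) targets, coarse to fine
    """
    clip = generate()
    for t, h, w in stages:
        clip = refine(upsample(clip, t, h, w))
    return clip

# Toy run with stand-in generator/refiner; a real schedule would end at 720p.
draft = lambda: np.zeros((15, 90, 160, 3))   # coarse draft: 15 frames at 160x90
identity = lambda clip: clip                  # placeholder for a refiner network
out = coarse_to_fine(draft, identity, [(30, 180, 320), (30, 720, 1280)])
print(out.shape)  # (30, 720, 1280, 3)
```

Spending most compute at low resolution and only the final passes at 720p is what makes this schedule cheaper than generating every frame at full resolution from the start.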
Model Comparison
Compare LongCat-Video with other state-of-the-art video generation models across different tasks and metrics.
Text-to-Video Performance
| Model | Accessibility | Architecture | Parameters | Text-Alignment | Visual Quality | Motion Quality | Overall Quality |
|---|---|---|---|---|---|---|---|
| Veo3 | Proprietary | - | - | 3.99 | 3.23 | 3.86 | 3.48 |
| PixVerse-V5 | Proprietary | - | - | 3.81 | 3.13 | 3.81 | 3.36 |
| Wan 2.2-T2V-A14B | Open Source | MoE | 28B (14B Active) | 3.70 | 3.26 | 3.78 | 3.35 |
| LongCat-Video | Open Source | Dense | 13.6B | 3.76 | 3.25 | 3.74 | 3.38 |
Image-to-Video Performance
| Model | Accessibility | Architecture | Parameters | Image-Alignment | Text-Alignment | Visual Quality | Motion Quality | Overall Quality |
|---|---|---|---|---|---|---|---|---|
| Seedance 1.0 | Proprietary | - | - | 4.12 | 3.70 | 3.22 | 3.77 | 3.35 |
| Hailuo-02 | Proprietary | - | - | 4.18 | 3.85 | 3.18 | 3.80 | 3.27 |
| Wan 2.2-I2V-A14B | Open Source | MoE | 28B (14B Active) | 4.18 | 3.33 | 3.23 | 3.79 | 3.26 |
| LongCat-Video | Open Source | Dense | 13.6B | 4.04 | 3.49 | 3.27 | 3.59 | 3.17 |
Quick Start
Follow these steps to set up LongCat-Video and start generating videos in minutes.
```
git clone https://github.com/meituan-longcat/LongCat-Video.git
cd LongCat-Video
```
Key Features of LongCat-Video
Advanced AI-powered video generation capabilities designed for creators and developers worldwide.
Unified Multi-Task Framework
Single model supporting Text-to-Video, Image-to-Video, and Video-Continuation tasks with consistent high performance across all generation modes.
Long Video Generation
Natively pretrained on Video-Continuation tasks to produce minutes-long videos without color drifting or quality degradation.
High-Quality Output
Generate 720p, 30fps videos with professional quality using advanced neural network architecture and optimization techniques.
Efficient Inference
Fast video generation within minutes using a coarse-to-fine generation strategy along the temporal and spatial axes with GPU acceleration.
Strong Performance
Powered by multi-reward GRPO optimization achieving performance comparable to leading open-source and commercial video generation models.
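At its core, GRPO scores a group of samples for the same prompt and normalizes each sample's reward against the group's mean and spread; a "multi-reward" variant combines several reward signals before normalizing. The reward names and weights below are illustrative assumptions, not the rewards LongCat-Video actually uses:

```python
import statistics

def grpo_advantages(group_rewards, weights):
    """Group-relative advantages for one prompt's sample group.

    group_rewards: per-sample dicts of reward_name -> score
    weights:       reward_name -> weight for combining the signals
    """
    combined = [sum(w * r[name] for name, w in weights.items())
                for r in group_rewards]
    mean = statistics.mean(combined)
    std = statistics.pstdev(combined) or 1.0  # guard against zero spread
    return [(c - mean) / std for c in combined]

# Four sampled videos for one prompt, scored on two hypothetical rewards.
rewards = [
    {"visual": 3.2, "motion": 3.7},
    {"visual": 3.5, "motion": 3.6},
    {"visual": 2.9, "motion": 3.8},
    {"visual": 3.3, "motion": 3.5},
]
adv = grpo_advantages(rewards, {"visual": 0.5, "motion": 0.5})
print(sum(adv))  # ~0.0: advantages are centered within the group
```

Because the baseline comes from the group itself rather than a learned value model, samples are pushed toward whatever scores above their own group's average, which is what lets several reward signals be balanced in one objective.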
Open Source & MIT License
Fully open-source model with MIT license available on GitHub and Hugging Face for research and commercial applications.
What People Are Saying About LongCat-Video on X
Join the conversation about LongCat-Video on social media
🚀 LongCat-Video Now Open-Source: Text/Image-to-Video + Video Continuation in One Model
— Meituan LongCat (@Meituan_LongCat) October 25, 2025
🏆 Text/Image-to-Video Performance Hits Open-Source SOTA
🎬 Minutes-Long High-Quality Videos: No Color Drift/Quality Loss (Industry-Standout)
⚙ 13.6B Params | Strong Open-Source DiT-Based… pic.twitter.com/rJXv7DiVZx
Chinese doordash dropping MIT license foundation video models???
— Vaibhav (VB) Srivastav (@reach_vb) October 25, 2025
“We introduce LongCat-Video, a foundational video generation model with 13.6B parameters, delivering strong performance across Text-to-Video, Image-to-Video, and Video-Continuation generation tasks.”…
🇨🇳 Chinese doordash Meituan launched LongCat-Video on @huggingface under MIT License.
— Rohan Paul (@rohanpaul_ai) October 25, 2025
A small 13.6B model that unifies Text-to-Video, Image-to-Video, and Video-Continuation, targeting minutes-long coherent clips and fast 720p 30fps output.
It frames every task as continuing… pic.twitter.com/Q0b71C2VWA
Congrats to @Meituan_LongCat on achieving extremely low cost and fast generation speed for LongCat-Flash — powered by SGLang, FlashInfer kernels and #opensource innovation.
— NVIDIA AI Developer (@NVIDIAAIDev) September 8, 2025
We are excited to continue collaborating with @lmsysorg and the community to upstream optimizations. Read… https://t.co/M1wGVgR2No
🎉 Congrats to Meituan LongCat team on launching LongCat-Flash-Chat — a 560B MoE model now open-sourced!
— LMSYS Org (@lmsysorg) September 2, 2025
Powered by SGLang inference acceleration, it achieves high efficiency and strong benchmark results.
Details in the blog 👉 https://t.co/O5iMvBhiIn https://t.co/aEHttUHnpv
Meituan just open sourced their new MoE LLM LongCat on @huggingface
— Tiezhen WANG (@Xianbao_QIAN) August 30, 2025
It's exciting to see new players! The model looks very interesting too with technical report. https://t.co/DduHMQxw5F pic.twitter.com/QMq0K8qJa0
LongCat Flash Chat Available Now on Chutes https://t.co/cRa7rR48BQ
— Chutes (@chutes_ai) August 31, 2025
$0.1999 USD IN
$0.8001 USD OUT
Try it out now with PAYG or an active subscription, which starts at $3 for 300 requests/day. https://t.co/pRo9IcGzSi pic.twitter.com/Hn7m8CnqEO
