Unveiling the Two "Superpowers" Behind AI Video Creation

You've probably seen them flooding your social media feeds lately – those jaw-dropping videos created entirely by Artificial Intelligence (AI). Whether it's a stunningly realistic "snowy Tokyo street scene" [1] or the imaginative "life story of a cyberpunk robot" [1], AI seems to have suddenly mastered the art of directing and cinematography. The videos are getting smoother, more detailed, and incredibly cinematic [2]. It makes you wonder: how on Earth did AI learn to conjure up moving pictures like this?

The "Secret Struggle" of Making Videos

Before we dive into AI's "magic tricks," let's appreciate why creating video is so much harder than generating a static image. It's not just about making pretty pictures; it's about making those pictures move convincingly and coherently [4].

Think about it: a video is a sequence of still images, or "frames." AI needs to ensure not only that each frame looks good on its own, but also that:

    1. Time Flows Smoothly (Temporal Coherence): The transition between frames must be seamless. Objects need to move logically, without teleporting or flickering erratically [10]. Just like an actor walking across the screen – the motion has to be continuous.
    2. Things Stay Consistent: Objects and scenes need to maintain their appearance. A character's shirt shouldn't randomly change color, and the background shouldn't morph without reason [11].
    3. It (Mostly) Obeys Physics: The movement should generally follow the basic laws of physics we understand: balls fall down, water flows [4]. Current AI isn't perfect here, but it's getting better.
    4. It Needs LOTS of Data and Power: Video files are huge, and training AI to understand and generate them requires immense computing power and vast datasets [5].

Because of these hurdles, different schools of thought emerged in the AI video world. Right now, two main families of models dominate, each with a unique approach and its own set of strengths and weaknesses [17].

The Two Schools: Autoregressive (AR) vs. Diffusion

Imagine our AI artist wants to create a video. They have two main methods:

  • Method 1: The Storyteller or Sequential Painter. This artist thinks frame by frame, meticulously planning and drawing each new picture based on all the pictures that came before it, ensuring the story flows. We call this the Autoregressive (AR) approach [17].
  • Method 2: The Sculptor or Photo Restorer. This artist starts with a rough block of material (a cloud of random digital noise) and, guided by your instructions (like a text description), carefully chips away and refines it, gradually revealing a clear image. This is the Diffusion method [17].

Let's get to know these two artistic styles.

Style 1: The Autoregressive (AR) "Sequential Storytelling" Method

The core idea of AR models is simple: predict the next thing based on everything that came before [27]. For video, this means that when the AI generates frame #N, it looks back at frames #1 through #N-1 [29]. This method naturally respects the timeline and cause-and-effect nature of video: it is sequential and causal.

    • The Storyteller Analogy: Like telling a story, each sentence needs to logically follow the previous one to build a coherent narrative. AR models try to make each frame a sensible continuation of the ones before it.
    • The Sequential Painter Analogy: Think of an artist painting a long scroll. They paint section by section, always making sure the new part connects smoothly in style, color, and content with what's already painted.

How it Works (Simplified):

Some earlier AR models worked by first "breaking down" complex images or video frames into simpler units called "visual tokens" [5]. Imagine creating a visual dictionary where each token represents a basic visual pattern. The AR model then learns, much like learning a language, to predict which "visual token" should come next [5].
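To make the "next visual token" idea concrete, here is a minimal toy sketch of the generation loop. Everything in it (the four-word "visual dictionary," the scoring function) is a made-up stand-in; real models use learned Transformers over codebooks with thousands of tokens.

```python
import random

VISUAL_VOCAB = ["sky", "tree", "road", "car"]  # stand-in "visual dictionary"

def predict_next_token(history):
    """Stand-in for a trained model: score every candidate token given the
    tokens generated so far and pick the most likely one."""
    scores = {tok: random.random() + (1.0 if history and tok == history[-1] else 0.0)
              for tok in VISUAL_VOCAB}
    return max(scores, key=scores.get)

def generate_frame_tokens(prompt_tokens, tokens_per_frame=8):
    """Generate one frame's worth of visual tokens, one at a time, each
    conditioned on everything generated before it (the AR principle)."""
    history = list(prompt_tokens)
    for _ in range(tokens_per_frame):
        history.append(predict_next_token(history))
    return history[len(prompt_tokens):]

print(generate_frame_tokens(["sky", "road"]))
```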

However, this "break-and-reassemble" approach can lose fine details. That's why newer AR models, like the much-discussed NOVA [45] and FAR [50], are trying to skip the discrete "token" step altogether and work directly with the continuous flow of visual information [52]. They're even borrowing ideas from diffusion models, using similar mathematical goals (loss functions) to guide their learning [15]. It's as if our storyteller stopped working from a limited phrasebook and started speaking in a richer, more nuanced language. This "non-quantized" approach aims to combine the coherence strength of AR with the high-fidelity potential of diffusion [52].

AR's Pros:

    • Naturally Coherent: Because it generates frame by frame, AR excels at keeping the video's timeline smooth and logical [50].
    • Flexible Length: In theory, AR models can keep generating indefinitely, producing videos of any length, as long as you have the computing power [29].
    • Shares DNA with Language Models: AR models, especially those built on the popular Transformer architecture [5], work much like the powerful Large Language Models (LLMs). This may let them benefit more easily from LLM training techniques and scaling principles [27].

AR's Cons:

    • Slow Generation: The frame-by-frame process makes generation relatively slow, especially for high-resolution or long videos [55].
    • Early Mistakes Can Mislead: If the model makes a small error early on, that error can get carried forward and amplified in later frames, causing the video to drift off-topic or become inconsistent [29].
    • Past Quality Issues: Older AR models relying on discrete tokens sometimes struggled with visual quality due to information loss during tokenization [11]. However, as mentioned, newer non-quantized methods are tackling this [52].

Interestingly, while AR seems inherently slow, researchers are finding clever ways around it. For instance, the NOVA model uses a "spatial set-by-set" prediction method, generating chunks of visual information within a frame in parallel rather than pixel by pixel [35]. Techniques like parallel decoding [56] and caching intermediate results (KV caching) [55] are also speeding things up. Some studies even claim that optimized AR models can now be faster than traditional diffusion models at inference [38]! This suggests AR's slowness may be more of an engineering challenge than a fundamental limit.
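The caching idea is easy to picture with a toy example. In the sketch below (hypothetical names, with a trivial stand-in for the real per-token computation), each step only encodes the newest token and reuses cached work for everything generated earlier, which is what KV caching does inside a real Transformer with per-layer key/value tensors.

```python
def expensive_encode(token):
    """Stand-in for computing a token's key/value representation
    (in a real model this is a forward pass through attention layers)."""
    return sum(ord(c) for c in token)

class CachedDecoder:
    def __init__(self):
        self.kv_cache = []  # representations of every token seen so far

    def step(self, new_token):
        # Only the newest token gets encoded; all earlier work is reused.
        self.kv_cache.append(expensive_encode(new_token))
        # Toy "attention": blend the new token with all cached representations.
        return sum(self.kv_cache) / len(self.kv_cache)

decoder = CachedDecoder()
for tok in ["frame-1", "frame-2", "frame-3"]:
    print(decoder.step(tok))
```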

Style 2: The Diffusion "Refining the Rough" Method

Diffusion models have been the stars of the image generation world and are now major players in video too [4]. Their core idea is a bit counter-intuitive: first break it, then fix it [17].

Imagine you have a clear video. The "forward process" in diffusion involves gradually adding random "noise" to it, step by step, until it becomes a completely chaotic mess, like TV static [29].

What the AI learns is the "reverse process": starting from pure noise, it iteratively removes the noise, step by step, guided by your instructions (like a text prompt), eventually "restoring" a clear, meaningful video [29].

    • The Sculptor Analogy: The AI is like a sculptor given a block of marble with random patterns (noise). Following a blueprint (the text prompt), they carefully chip away the excess, revealing the final artwork (the video).
    • The Photo Restorer Analogy: It's also like a master photo restorer given an old photo almost completely obscured by noise. Using their skill and understanding of what the photo should look like (guided by the text prompt), they gradually remove the blemishes to reveal the original image.

How it Works (Simplified):

The key word for diffusion is iteration. Getting from random noise to a clear video involves many small denoising steps (often dozens to thousands of them) [29].
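Here is a toy numerical sketch of those two processes, assuming a made-up one-dimensional "video" and a stand-in denoiser (a real model is a trained neural network conditioned on the text prompt and operating on images, not three numbers):

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.array([1.0, -0.5, 0.25])      # stand-in for a "clear video"

# Forward process: keep mixing in noise until only static remains.
noisy = clean.copy()
for _ in range(50):
    noisy = 0.9 * noisy + 0.3 * rng.normal(size=noisy.shape)

# Reverse process: start from pure noise and take many small denoising steps.
def toy_denoiser(x):
    """Stand-in for the learned network: points back toward the clean signal.
    A real model learns this direction from data and the text prompt."""
    return clean - x

sample = rng.normal(size=clean.shape)     # pure noise
for _ in range(50):
    sample = sample + 0.05 * toy_denoiser(sample)   # one small denoising step

print(np.round(noisy, 3), np.round(sample, 3))
```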

To make this more efficient, many top models, like Stable Diffusion and Sora [1], use a technique called latent diffusion (Latent Diffusion Models, or LDMs) [5]. Instead of working directly on the huge pixel data, they first use an "encoder" to compress the video into a smaller, abstract "latent space." They do the heavy lifting (adding and removing noise) in this compact space, and then use a "decoder" to turn the result back into a full-pixel video. It's like our sculptor making a small clay model first – much more manageable [16]!
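The pipeline shape is simple even if the real components are not. Below is a minimal sketch with toy stand-ins for the learned encoder, denoiser, and decoder (real LDMs use a trained VAE and a large denoising network):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(frames):
    """Toy 'encoder': squeeze each 64x64 frame down to a handful of numbers."""
    return frames.reshape(frames.shape[0], -1)[:, ::512]   # (T, 8) latent

def denoise_in_latent(latent, steps=20):
    """The heavy iterative work happens here, in the small latent space."""
    target = np.zeros_like(latent)        # toy stand-in for the learned target
    for _ in range(steps):
        latent = latent + 0.1 * (target - latent)
    return latent

def decode(latent, frame_shape=(64, 64)):
    """Toy 'decoder': expand the latent back out to full-resolution frames."""
    reps = int(np.prod(frame_shape) / latent.shape[1])
    return np.repeat(latent, reps, axis=1).reshape(latent.shape[0], *frame_shape)

frames = rng.random((8, 64, 64))          # 8 toy "frames" of 64x64 pixels
latent = encode(frames)                   # tiny compared with the pixel data
latent = denoise_in_latent(latent)
output = decode(latent)
print(frames.shape, latent.shape, output.shape)   # (8, 64, 64) (8, 8) (8, 64, 64)
```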

Architecture-wise, diffusion models often started with U-Net-like structures built from convolutional neural networks (CNNs) [15], but they are increasingly adopting the powerful Transformer architecture (creating Diffusion Transformers, or DiTs) [29] as their core "sculpting" tool.

Diffusion's Pros:

    • Stunning Visual Quality: Diffusion models currently lead the pack in generating images and videos with incredible visual fidelity and rich detail [29].
    • Handles Complexity Well: They are often better at rendering complex textures, lighting, and scene structures [4].
    • Stable Training: Compared to some earlier generative techniques like GANs, training diffusion models is generally more stable and less prone to issues like "mode collapse" [29].

Diffusion's Cons:

    • Slow Generation (Sampling): The iterative denoising process takes time, making video generation lengthy [55]. Fine sculpting requires patience.
    • Temporal Coherence Is Still Tricky: While individual frames might look great, ensuring perfect smoothness and natural motion across a long video remains a challenge [5]. The sculptor might focus too much on one part and forget how it fits the whole.
    • Needs Serious Computing Power: Training and running diffusion models demand significant computational resources (like powerful GPUs) [5], making them less accessible [57].

To tackle the slowness, researchers are in a race to speed things up. Besides latent diffusion, techniques like Consistency Models [11] aim to learn a "shortcut," allowing the model to jump from noise to a high-quality result in just one or a few steps instead of hundreds. Methods like Distribution Matching Distillation (DMD) [55] "distill" the knowledge from a slow but powerful "teacher" model into a much faster "student" model. The goal is near-real-time generation without sacrificing too much quality [55].
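The intuition behind these shortcuts can be shown with a toy comparison: a "teacher" that needs many small steps versus a "student" that has learned to jump almost straight to the answer. Everything below is a made-up stand-in, not any specific consistency or distillation method:

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([1.0, -0.5, 0.25])        # stand-in for a "clean" result

def teacher_sample(steps=100):
    """Slow path: many small denoising updates."""
    x = rng.normal(size=target.shape)
    for _ in range(steps):
        x = x + 0.05 * (target - x)
    return x

def student_sample():
    """Fast path: a distilled model covers the same ground in one big step."""
    x = rng.normal(size=target.shape)
    return x + 0.95 * (target - x)

print(np.round(teacher_sample(), 3))
print(np.round(student_sample(), 3))
```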

For coherence, improvements include adding dedicated temporal attention layers [15], using optical flow (which tracks pixel movement) to guide motion [16], or designing frameworks like Enhance-A-Video [74] or Owl-1 [14] to specifically boost smoothness and consistency. It seems that after mastering static image quality, making videos move realistically and tell a coherent story is the next big frontier for diffusion models.
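To give a feel for what a temporal attention layer does, here is a stripped-down numpy version: for each spatial position, every frame attends to that same position in all the other frames, which is how information gets shared along the time axis (real layers add learned query/key/value projections and sit inside a much larger network):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention(frames):
    """frames: (T, H, W, C). Attend across the T (time) axis independently
    for each spatial location, so frames can 'look at' each other."""
    T, H, W, C = frames.shape
    x = frames.reshape(T, -1, C).transpose(1, 0, 2)      # (H*W, T, C)
    scores = x @ x.transpose(0, 2, 1) / np.sqrt(C)       # (H*W, T, T)
    mixed = softmax(scores) @ x                          # (H*W, T, C)
    return mixed.transpose(1, 0, 2).reshape(T, H, W, C)

video = np.random.default_rng(0).random((4, 8, 8, 3))   # 4 tiny frames
print(temporal_attention(video).shape)                   # (4, 8, 8, 3)
```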

Which Style to Choose? Storytelling vs. Sculpting

So, which approach is "better"? It depends on what you value most.

Here's a quick comparison:

AR vs. Diffusion at a Glance

| Feature   | Autoregressive (AR) Models         | Diffusion Models                    |
|-----------|------------------------------------|-------------------------------------|
| Core idea | Sequential prediction              | Iterative denoising                 |
| Analogy   | Storyteller / sequential painter   | Sculptor / photo restorer           |
| Strength  | Temporal coherence / flow          | Visual quality / detail             |
| Weakness  | Slow sampling / error accumulation | Slow sampling / coherence challenge |

If you prioritize a smooth, logical flow, especially for longer videos, AR's sequential nature might be more suitable [50]. If you're after the absolute best visual detail and realism in each frame, diffusion currently tends to hold the edge [17]. But remember, both are evolving fast and borrowing from each other.

The Best of Both Worlds: When Storytellers Meet Sculptors

Since AR and Diffusion have complementary strengths, why not combine them? [29]

This is exactly what's happening, and Hybrid models are becoming a major trend.

    • Idea 1: Divide and Conquer. Let an AR model sketch the overall plot and motion (the "storyboard"), then have a Diffusion model fill in the high-quality visual details [50].
    • Idea 2: AR Framework, Diffusion Engine. Keep the AR frame-by-frame structure, but instead of predicting discrete tokens, use Diffusion-like methods to predict the continuous visual information at each step [44]. Models like NOVA and FAR lean this way (see the sketch after this list).
    • Idea 3: Diffusion Framework, AR Principles. Use a Diffusion model but incorporate AR ideas, like enforcing stricter frame-to-frame dependencies (causal attention) or making the noise process time-aware [29]. AR-Diffusion [29] and CausVid [55] are examples.
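Here's a toy sketch of Idea 2, the "AR framework, diffusion engine" combination: an outer frame-by-frame loop, where each new frame is produced by a short iterative denoising loop conditioned on the previous frame. The conditioning rule below is a made-up stand-in, not the actual mechanism of NOVA, FAR, or any other named model:

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(frame, prev_frame):
    """Stand-in for one learned denoising update, conditioned on the previous
    frame so the new frame stays a plausible continuation of it."""
    return frame + 0.2 * (prev_frame - frame)

def generate_frame(prev_frame, steps=10):
    frame = rng.normal(size=prev_frame.shape)   # start from pure noise
    for _ in range(steps):                      # diffusion-style inner loop
        frame = denoise_step(frame, prev_frame)
    return frame

video = [rng.random((8, 8))]                    # a given first frame
for _ in range(3):                              # AR-style outer loop over time
    video.append(generate_frame(video[-1]))
print(len(video), video[-1].shape)              # 4 frames of shape (8, 8)
```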

The sheer number of models with names blending AR and Diffusion concepts (AR-Diffusion, ARDiT, DiTAR, LanDiff, MarDini, ART-V, CausVid, Transfusion, HART, etc.) [29] shows this is where much of the action is. It's less about choosing one side and more about finding the smartest way to combine their powers.

The Road Ahead: Challenges and Dreams for AI Video

Despite the incredible progress, AI video generation still has hurdles to overcome [17]:

    • Making Longer Videos: Most AI videos are still short. Generating minutes-long (or longer!) videos that stay coherent and interesting is a huge challenge [29].
    • Better Control and Faithfulness: Getting the AI to exactly follow complex instructions (like "a Shiba Inu wearing a beret and black turtleneck" [47]) or specific actions and emotions is tricky. AI can still misunderstand or "hallucinate" things that aren't in the prompt [29].
    • Faster Generation: For practical use, especially in interactive tools, AI needs to generate videos much faster than it currently does [5].
    • Understanding Real-World Physics: AI needs a better grasp of how things work in the real world. Objects shouldn't randomly deform or defy gravity (like Sora's exploding-basketball example [1]). Giving AI "common sense" is key to true realism [4].

But the future possibilities are dazzling:

    • Personalized Content: Imagine AI creating a short film based on your idea, starring you [14]. Or generating educational videos perfectly tailored to your learning style.
    • Empowering Creatives: Giving artists, designers, and filmmakers powerful new tools to bring their visions to life [2].
    • Building Virtual Worlds: AI could go beyond just showing the world to actually simulating it, creating "World Models" that understand cause and effect [14]. This has huge implications for scientific simulation, game development, and training autonomous systems [5]. This shift from "image generation" to "world simulation" reveals a deeper ambition: not just mimicking reality, but understanding its rules [4].
    • Unified Multimodal AI: Future AI might seamlessly understand and generate text, images, video, and audio all within one unified system [11].

Achieving these dreams hinges heavily on improving efficiency. Generating long videos, enabling real-time interaction, and building complex world models all require immense computing power. Making these models faster and cheaper to run isn't just convenient; it's essential for unlocking their full potential [5].

Conclusion: A New Era of Visual Storytelling

AI video generation is advancing at breakneck speed, constantly pushing the boundaries of what's possible [4]. Whether it's the sequential "storyteller" approach of AR models, the refining "sculptor" method of Diffusion models, or the clever combinations found in Hybrid models [17], AI is learning to weave light and shadow with pixels, and to tell stories through motion.

We're witnessing the dawn of a new era in visual storytelling. AI won't just change how we consume media; it will empower everyone with unprecedented creative tools. Of course, with great power comes great responsibility. We must also consider how to use these tools ethically, ensuring they foster creativity and understanding rather than deception and harm [13].

The future is unfolding frame by frame. The next AI-directed blockbuster might just start with an idea you have right now. Let's watch this space!

Works cited

[1] Asynchronous Video Generation with Auto-Regressive Diffusion - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2503.07418v1

[2] [2503.07418] AR-Diffusion: Asynchronous Video Generation with Auto-Regressive Diffusion - arXiv, accessed on April 28, 2025, https://arxiv.org/abs/2503.07418

[3] AR-Diffusion: Asynchronous Video Generation with Auto-Regressive Diffusion | Request PDF - ResearchGate, accessed on April 28, 2025, https://www.researchgate.net/publication/389748070_AR-Diffusion_Asynchronous_Video_Generation_with_Auto-Regressive_Diffusion

[4] Video Diffusion Models: A Survey - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2405.03150v2

[5] Video Is Worth a Thousand Images: Exploring the Latest Trends in Long Video Generation - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2412.18688

[6] Autoregressive Models in Vision: A Survey - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2411.05902v1

[7] A Survey on Vision Autoregressive Model - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2411.08666v1

[8] SimpleAR: Pushing the Frontier of Autoregressive Visual Generation through Pretraining, SFT, and RL - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2504.11455v1

[9] On Improved Conditioning Mechanisms and Pre-training Strategies for Diffusion Models - NIPS papers, accessed on April 28, 2025, https://proceedings.neurips.cc/paper_files/paper/2024/file/18023809c155d6bbed27e443043cdebf-Paper-Conference.pdf

[10] Opportunities and challenges of diffusion models for generative AI - Oxford Academic, accessed on April 28, 2025, https://academic.oup.com/nsr/article/11/12/nwae348/7810289?login=false

[11] Video Diffusion Models - A Survey - OpenReview, accessed on April 28, 2025, https://openreview.net/pdf?id=sgDFqNTdaN

[12] The Best of Both Worlds: Integrating Language Models and Diffusion Models for Video Generation - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2503.04606v1

[13] ChaofanTao/Autoregressive-Models-in-Vision-Survey - GitHub, accessed on April 28, 2025, https://github.com/ChaofanTao/Autoregressive-Models-in-Vision-Survey

[14] [2412.09600] Owl-1: Omni World Model for Consistent Long Video Generation - arXiv, accessed on April 28, 2025, https://arxiv.org/abs/2412.09600

[15] arXiv:2412.07772v2 [cs.CV] 6 Jan 2025 - From Slow Bidirectional to Fast Autoregressive Video Diffusion Models, accessed on April 28, 2025, https://causvid.github.io/causvid_paper.pdf

[16] SimpleAR: Pushing the Frontier of Autoregressive Visual Generation through Pretraining, SFT, and RL - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2504.11455

[17] Phenaki - SERP AI, accessed on April 28, 2025, https://serp.ai/tools/phenaki/

[18] openreview.net, accessed on April 28, 2025, https://openreview.net/pdf/9cc7b12b9ea33c67f8286cd28b98e72cf43d8a0f.pdf

[19] Bridging Continuous and Discrete Tokens for Autoregressive Visual Generation, accessed on April 28, 2025, https://www.researchgate.net/publication/390038718_Bridging_Continuous_and_Discrete_Tokens_for_Autoregressive_Visual_Generation

[20] Autoregressive Video Generation without Vector Quantization ..., accessed on April 28, 2025, https://openreview.net/forum?id=JE9tCwe3lp

[21] Long-Context Autoregressive Video Modeling with Next-Frame Prediction - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2503.19325v1

[22] Language Model Beats Diffusion — Tokenizer is Key to Visual Generation - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2310.05737

[23] Bridging Continuous and Discrete Tokens for Autoregressive Visual Generation - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2503.16430v2

[24] Auto-Regressive Diffusion for Generating 3D Human-Object Interactions, accessed on April 28, 2025, https://ojs.aaai.org/index.php/AAAI/article/view/32322/34477

[25] Fast Autoregressive Video Generation with Diagonal Decoding - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2503.14070v1

[26] One-Minute Video Generation with Test-Time Training, accessed on April 28, 2025, https://test-time-training.github.io/video-dit/assets/ttt_cvpr_2025.pdf

[27] Photorealistic Video Generation with Diffusion Models - European Computer Vision Association, accessed on April 28, 2025, https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/10270.pdf

[28] arXiv:2412.03758v2 [cs.CV] 24 Feb 2025, accessed on April 28, 2025, https://www.arxiv.org/pdf/2412.03758v2

[29] Advancing Auto-Regressive Continuation for Video Frames - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2412.03758v1

[30] From Slow Bidirectional to Fast Autoregressive Video Diffusion Models - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2412.07772v2

[31] Enhance-A-Video: Better Generated Video for Free - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2502.07508v3

[32] [D] The Tech Behind The Magic : How OpenAI SORA Works : r/MachineLearning - Reddit, accessed on April 28, 2025, https://www.reddit.com/r/MachineLearning/comments/1bqmn86/d_the_tech_behind_the_magic_how_openai_sora_works/

[33] Delving Deep into Diffusion Transformers for Image and Video Generation - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2312.04557v1

[34] CVPR Poster Upscale-A-Video: Temporal-Consistent Diffusion Model for Real-World Video Super-Resolution - CVPR 2025, accessed on April 28, 2025, https://cvpr.thecvf.com/virtual/2024/poster/31563

[35] SwiftTry: Fast and Consistent Video Virtual Try-On with Diffusion Models - AAAI Publications, accessed on April 28, 2025, https://ojs.aaai.org/index.php/AAAI/article/view/32663/34818

[36] Latte: Latent Diffusion Transformer for Video Generation - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2401.03048v2

[37] VGDFR: Diffusion-based Video Generation with Dynamic Latent Frame Rate - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2504.12259v1

[38] [2501.00103] LTX-Video: Realtime Video Latent Diffusion - arXiv, accessed on April 28, 2025, https://arxiv.org/abs/2501.00103

[39] LTX-Video: Realtime Video Latent Diffusion - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2501.00103v1

[40] Magic Mirror: ID-Preserved Video Generation in Video Diffusion Transformers - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2501.03931v1

[41] LaMD: Latent Motion Diffusion for Image-Conditional Video Generation - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2304.11603v2

[42] Video-Bench: Human-Aligned Video Generation Benchmark - ResearchGate, accessed on April 28, 2025, https://www.researchgate.net/publication/390569999_Video-Bench_Human-Aligned_Video_Generation_Benchmark

[43] Advancements in diffusion models for high-resolution image and short form video generation, accessed on April 28, 2025, https://gsconlinepress.com/journals/gscarr/sites/default/files/GSCARR-2024-0441.pdf

[44] NeurIPS Poster StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation, accessed on April 28, 2025, https://neurips.cc/virtual/2024/poster/94916

[45] FrameBridge: Improving Image-to-Video Generation with Bridge Models | OpenReview, accessed on April 28, 2025, https://openreview.net/forum?id=oOQavkQLQZ

[46] Learning Spatial Adaptation and Temporal Coherence in Diffusion Models for Video Super-Resolution - CVPR 2024 Open Access Repository, accessed on April 28, 2025, https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Learning_Spatial_Adaptation_and_Temporal_Coherence_in_Diffusion_Models_for_CVPR_2024_paper.html

[47] Subject-driven Video Generation via Disentangled Identity and Motion - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2504.17816v1

[48] AR-Diffusion: Asynchronous Video Generation with Auto-Regressive Diffusion - alphaXiv, accessed on April 28, 2025, https://www.alphaxiv.org/overview/2503.07418

[49] Phenaki - Reviews, Pricing, Features - SERP, accessed on April 28, 2025, https://serp.co/reviews/phenaki.video/

[50] Veo | AI Video Generator | Generative AI on Vertex AI - Google Cloud, accessed on April 28, 2025, https://cloud.google.com/vertex-ai/generative-ai/docs/video/generate-videos

[51] Generate videos in Gemini and Whisk with Veo 2 - Google Blog, accessed on April 28, 2025, https://blog.google/products/gemini/video-generation/

[52] Sora: Creating video from text - OpenAI, accessed on April 28, 2025, https://openai.com/index/sora/

[53] Top AI Video Generation Models in 2025: A Quick T2V Comparison - Appy Pie Design, accessed on April 28, 2025, https://www.appypiedesign.ai/blog/ai-video-generation-models-comparison-t2v

[54] ART•V: Auto-Regressive Text-to-Video Generation with Diffusion Models - CVF Open Access, accessed on April 28, 2025, https://openaccess.thecvf.com/content/CVPR2024W/GCV/papers/Weng_ART-V_Auto-Regressive_Text-to-Video_Generation_with_Diffusion_Models_CVPRW_2024_paper.pdf

[55] Simplified and Generalized Masked Diffusion for Discrete Data - arXiv, accessed on April 28, 2025, https://arxiv.org/pdf/2406.04329

[56] Unified Multimodal Discrete Diffusion - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2503.20853

[57] Simple and Effective Masked Diffusion Language Models - arXiv, accessed on April 28, 2025, https://arxiv.org/pdf/2406.07524

[58] [2107.03006] Structured Denoising Diffusion Models in Discrete State-Spaces - arXiv, accessed on April 28, 2025, https://arxiv.org/abs/2107.03006

[59] Structured Denoising Diffusion Models in Discrete State-Spaces, accessed on April 28, 2025, https://proceedings.neurips.cc/paper/2021/file/958c530554f78bcd8e97125b70e6973d-Paper.pdf

[60] Your Absorbing Discrete Diffusion Secretly Models the Conditional Distributions of Clean Data - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2406.03736v2

[61] Fast Sampling via Discrete Non-Markov Diffusion Models with Predetermined Transition Time - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2312.09193v3

[62] [2406.03736] Your Absorbing Discrete Diffusion Secretly Models the Conditional Distributions of Clean Data - arXiv, accessed on April 28, 2025, https://arxiv.org/abs/2406.03736

[63] AR-Diffusion: Auto-Regressive Diffusion Model for Text Generation | OpenReview, accessed on April 28, 2025, https://openreview.net/forum?id=0EG6qUQ4xE

[64] Beyond Autoregression: Discrete Diffusion for Complex Reasoning and Planning - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2410.14157v3

[65] [R] Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution - Reddit, accessed on April 28, 2025, https://www.reddit.com/r/MachineLearning/comments/1ezyunc/r_discrete_diffusion_modeling_by_estimating_the/

[66] [2412.07772] From Slow Bidirectional to Fast Autoregressive Video Diffusion Models - arXiv, accessed on April 28, 2025, https://arxiv.org/abs/2412.07772

[67] Long-Context Autoregressive Video Modeling with Next-Frame Prediction - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2503.19325v2

[68] Long-Context Autoregressive Video Modeling with Next-Frame Prediction - arXiv, accessed on April 28, 2025, https://arxiv.org/abs/2503.19325

[69] ManiCM: Real-time 3D Diffusion Policy via Consistency Model for Robotic Manipulation - arXiv, accessed on April 28, 2025, https://arxiv.org/pdf/2406.01586?

[70] G-U-N/Awesome-Consistency-Models: Awesome List of ... - GitHub, accessed on April 28, 2025, https://github.com/G-U-N/Awesome-Consistency-Models

[71] showlab/Awesome-Video-Diffusion: A curated list of recent diffusion models for video generation, editing, and various other applications. - GitHub, accessed on April 28, 2025, https://github.com/showlab/Awesome-Video-Diffusion

[72] [PDF] EvalCrafter: Benchmarking and Evaluating Large Video Generation Models, accessed on April 28, 2025, https://www.semanticscholar.org/paper/66d927fdb6c2774131960c75275546fd5ee3dd72

[73] [2502.07508] Enhance-A-Video: Better Generated Video for Free - arXiv, accessed on April 28, 2025, https://arxiv.org/abs/2502.07508

[74] NeurIPS Poster FIFO-Diffusion: Generating Infinite Videos from Text without Training, accessed on April 28, 2025, https://nips.cc/virtual/2024/poster/93253

[75] StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text, accessed on April 28, 2025, https://openreview.net/forum?id=26oSbRRpEY

[76] Owl-1: Omni World Model for Consistent Long Video Generation - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2412.09600v1

[77] Ca2-VDM: Efficient Autoregressive Video Diffusion Model with Causal Generation and Cache Sharing - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2411.16375v1

[78] ViD-GPT: Introducing GPT-style Autoregressive Generation in Video Diffusion Models - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2406.10981v1

[79] TI2V-Zero: Zero-Shot Image Conditioning for Text-to-Video Diffusion Models - CVF Open Access, accessed on April 28, 2025, https://openaccess.thecvf.com/content/CVPR2024/papers/Ni_TI2V-Zero_Zero-Shot_Image_Conditioning_for_Text-to-Video_Diffusion_Models_CVPR_2024_paper.pdf

[80] Training-Free Motion-Guided Video Generation with Enhanced Temporal Consistency Using Motion Consistency Loss - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2501.07563v1

[81] DiTAR: Diffusion Transformer Autoregressive Modeling for Speech Generation - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2502.03930v1

[82] VBench-2.0: A Framework for Evaluating Intrinsic Faithfulness in Video Generation Models, accessed on April 28, 2025, https://www.reddit.com/r/artificial/comments/1jmgy6n/vbench20_a_framework_for_evaluating_intrinsic/

[83] NeurIPS Poster GenRec: Unifying Video Generation and Recognition with Diffusion Models, accessed on April 28, 2025, https://neurips.cc/virtual/2024/poster/94684

[84] Evaluation of Text-to-Video Generation Models: A Dynamics Perspective - OpenReview, accessed on April 28, 2025, https://openreview.net/forum?id=tmX1AUmkl6&noteId=MAb60mrdAJ

[85] [CVPR 2024] EvalCrafter: Benchmarking and Evaluating Large Video Generation Models - GitHub, accessed on April 28, 2025, https://github.com/evalcrafter/EvalCrafter

[86] [2412.18688] Video Is Worth a Thousand Images: Exploring the Latest Trends in Long Video Generation - arXiv, accessed on April 28, 2025, https://arxiv.org/abs/2412.18688

Posted by

立委

Dr. 立委 is a senior consultant on multimodal large-model applications. He was formerly VP of Engineering on the large-model team at 出门问问 (Mobvoi), focusing on large models and their AIGC applications. Before that, he spent 10 years as Chief Scientist at Netbase, where he directed the development of language-understanding and application systems for 18 languages that were robust, ran at line speed, scaled up to social-media big data, and grounded their semantics in opinion-mining products, making the company a front-runner in industrial NLP deployment in the US. Earlier, he was VP of R&D at Cymfony for eight years, where his team won first place in the first question-answering evaluation (TREC-8 QA Track) and he served as PI on 17 SBIR (Small Business Innovation Research) information-extraction projects.
