Runway AI Release Notes

Last updated: Dec 19, 2025

  • Dec 18, 2025
    • Parsed from source:
      Dec 18, 2025
    • Detected by Releasebot:
      Dec 19, 2025

    Runway AI

    New Audio Features

    Text to Speech is now available via the Audio tab in Tool mode alongside new audio Apps for SFX and Speech to Speech.

  • Dec 11, 2025
    • Parsed from source:
      Dec 11, 2025
    • Detected by Releasebot:
      Dec 12, 2025

    Runway AI

    Dec 11, 2025

    Paid Plans

    Gen-4.5

    Introducing Gen-4.5, the world's best video model. Now available for all paid plans.

  • Dec 11, 2025
    • Parsed from source:
      Dec 11, 2025
    • Detected by Releasebot:
      Dec 12, 2025

    Runway AI

    Introducing GWM-1

    Runway introduces GWM-1, a real-time, interactive General World Model family for Worlds, Avatars and Robotics, built on Gen-4.5 to simulate reality and train agents in virtual environments. It includes a robotics SDK, avatars with natural motion, and native audio and multi-shot video editing.

    Introducing GWM-1

    A state-of-the-art model built to interact with the real world.

    INTRODUCTION

    GWM-1: our state-of-the-art General World Model, built to simulate reality in real time. Interactive, controllable and general-purpose.

Two years ago, we introduced a new research direction: General World Models. A world model is an AI system that builds an internal representation of an environment and uses it to simulate future events within that environment. The aim of general world models is to represent and simulate a wide range of situations and interactions, like those encountered in the real world.

Today, we're announcing GWM-1, our first general world model family. GWM-1 is an autoregressive model built on top of Gen-4.5. It generates frame by frame, runs in real time, and can be controlled interactively with actions: camera pose, robot commands, audio.
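As a rough illustration of that autoregressive loop (not Runway's actual interface; `Action` and `predict_next_frame` below are hypothetical stand-ins for the model), each step consumes the frame history plus one action and emits the next frame:

```python
# Minimal sketch of an action-conditioned autoregressive rollout.
# All names here are hypothetical stand-ins, not Runway's API.
from dataclasses import dataclass
from typing import List

@dataclass
class Action:
    kind: str          # e.g. "camera", "robot", "audio" -- hypothetical labels
    payload: dict      # action parameters, e.g. {"yaw": 5.0}

def predict_next_frame(history: List[bytes], action: Action) -> bytes:
    """Stand-in for the world model's single-step prediction."""
    return f"frame|{len(history)}|{action.kind}".encode()

def rollout(first_frame: bytes, actions: List[Action]) -> List[bytes]:
    frames = [first_frame]
    for action in actions:            # frame by frame, conditioned on actions
        frames.append(predict_next_frame(frames, action))
    return frames

frames = rollout(b"frame|0|init",
                 [Action("camera", {"yaw": 5.0}),
                  Action("camera", {"yaw": 10.0})])
print(len(frames))  # 3
```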

    GWM-1 comes in three variants: GWM Worlds for explorable environments, GWM Avatars for conversational characters, and GWM Robotics for robotic manipulation. Today, these are separate post-trained models. We're working toward unifying many different domains and action spaces under a single base world model.

We believe that world models are at the frontier of progress in artificial intelligence. Language models alone won't solve the world's hardest problems: robotics, disease, scientific discovery. Real progress requires models that experience the world and learn from their mistakes, the same way that humans do. And this kind of trial and error can be massively accelerated when done in simulation rather than in the real world. World models offer the clearest path to general-purpose simulation.

    Key Features

    • Action-Conditioning
    • Camera
    • Events
    • Robot Pose
    • Speech

    Customizable Model

• World Models for Custom Actions + Domains
    • For access to fine-tuning, fill out this form

Up to 2 minutes of video at 720p

    Explore what GWM-1 can do

    GWM Robotics

    GWM Robotics is a learned simulator that generates synthetic data for scalable robot training and policy evaluation, removing the bottlenecks of physical hardware.

    GWM Robotics is a world model trained on robotics data that predicts video rollouts conditioned on robot actions.

    The model supports counterfactual generation, enabling exploration of alternative robot trajectories and outcomes.

    Synthetic data augmentation for policy training
    Use the world model to generate synthetic training data that augments your existing robotics datasets across multiple dimensions, including novel objects, task instructions, and environmental variations. This synthetic data improves the generalization capabilities and robustness of your trained policies without requiring expensive real-world data collection.
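A minimal sketch of this augmentation pattern, with entirely hypothetical names and data: a logged action sequence is re-rendered by a stand-in world-model call across combinations of objects, instructions and environments:

```python
# Hypothetical sketch of synthetic data augmentation with a world model.
import itertools

objects      = ["red mug", "blue bowl", "toy block"]
instructions = ["pick up the {obj}", "push the {obj} to the left"]
environments = ["wooden table", "cluttered kitchen counter"]

def generate_rollout(prompt: str, actions: list) -> dict:
    """Stand-in for a world-model call that returns a synthetic episode."""
    return {"prompt": prompt, "actions": actions, "frames": "..."}

real_actions = [{"gripper": 0.0}, {"gripper": 1.0}]  # logged robot actions
synthetic_dataset = [
    generate_rollout(inst.format(obj=obj) + f" on a {env}", real_actions)
    for obj, inst, env in itertools.product(objects, instructions, environments)
]
print(len(synthetic_dataset))  # 3 * 2 * 2 = 12 augmented episodes
```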

    Policy evaluation in simulation
    Test the performance of your policy models (such as VLA models like OpenVLA or OpenPi) directly within Runway's world model instead of deploying to physical robots. This approach is faster, more reproducible, and significantly safer than real-world testing while still providing realistic behavioral assessments.
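The evaluation loop this describes might look like the following sketch, where `WorldModel` and `Policy` are hypothetical stand-ins rather than real APIs: the policy picks actions, the learned simulator predicts the resulting observations, and success is scored at the end:

```python
# Hypothetical sketch of closed-loop policy evaluation in a learned simulator.
from typing import Dict

class WorldModel:
    """Stand-in: maps (observation, action) to the next predicted observation."""
    def step(self, obs: Dict, action: Dict) -> Dict:
        return {"t": obs["t"] + 1, "gripper": action["gripper"]}

class Policy:
    """Stand-in for a trained policy (e.g. a VLA model such as OpenVLA)."""
    def act(self, obs: Dict, instruction: str) -> Dict:
        return {"gripper": 1.0 if obs["t"] > 2 else 0.0}

def evaluate(policy: Policy, sim: WorldModel, instruction: str, horizon: int) -> bool:
    obs = {"t": 0, "gripper": 0.0}
    for _ in range(horizon):               # policy acts, simulator predicts
        obs = sim.step(obs, policy.act(obs, instruction))
    return obs["gripper"] == 1.0           # hypothetical success check

print(evaluate(Policy(), WorldModel(), "pick up the mug", horizon=5))  # True
```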

    GWM-1 Robotics SDK
    A Python SDK for Runway's robotics world model API that enables action-conditioned video generation using models trained on robotics data. The SDK supports multi-view video generation and long-context sequences, with an interface designed for seamless integration into modern robotic policy models.
    Request Access
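The SDK is access-gated and its interface isn't public, so the following is only a guess at the kind of call the description implies: action-conditioned generation over a long context, returning one frame stream per camera view. Every name below is hypothetical.

```python
# Hypothetical sketch of an action-conditioned, multi-view generation request.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RolloutRequest:                     # hypothetical request shape
    context_frames: List[bytes]           # long-context conditioning frames
    actions: List[Dict]                   # one robot action per future step
    camera_views: List[str] = field(default_factory=lambda: ["wrist", "front"])

def generate(request: RolloutRequest) -> Dict[str, List[bytes]]:
    """Stand-in for the SDK call: one predicted frame stream per camera view."""
    return {view: [b"frame"] * len(request.actions) for view in request.camera_views}

videos = generate(RolloutRequest(
    context_frames=[b"ctx0", b"ctx1"],
    actions=[{"joints": [0.0] * 7}, {"joints": [0.1] * 7}],
))
print({view: len(frames) for view, frames in videos.items()})
```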

    GWM-1

    For Real-time World Simulation and Exploration

A new frontier for open-ended interactive world simulation. A way of building infinite explorable realities in real time.

    Use cases

    • Gaming
    • Education
    • Training Agents
    • VR and Immersive Experiences
      GWM Worlds enables players to move freely through coherent, reactive worlds without the need to manually design every space.

GWM Worlds is a world model for real-time environment simulation. You give the model a static scene, and it generates an immersive, infinite, explorable space as you move through it, with geometry, lighting and physics, all in real time. You can travel to any place, real or imagined. You can become any agent: a person walking through a city, a drone flying over a snowy mountain, a robot navigating a warehouse.

What makes this work is consistency. When you explore an environment, you expect the world to stay coherent. Turn around, and what was behind you is still there. Walk forward and back, and you return to where you started. GWM Worlds maintains this spatial consistency across long sequences of movement.

And because it's a simulation, the environment can react. You can define the physics of a world with your input prompt, and the world will respond accurately. If you prompt the agent to ride a bike, it stays on the ground; if you prompt for flight, it can freely navigate the sky.

This is useful for interactive experiences, games, explorable worlds and immersive environments. But it's equally important for training agents. If you want to train an AI system to navigate and act in the physical world, you need a simulator in which to teach it. GWM Worlds can serve as that sandbox: an environment where agents can explore, make mistakes and learn.
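A toy sketch of the interactive loop this implies (a hypothetical `WorldSession`, not a real API), including the spatial-consistency property that revisiting a pose returns the same view:

```python
# Hypothetical sketch of a real-time explorable-world session.
from typing import Dict, Tuple

Pose = Tuple[float, float, float]  # (x, y, yaw) -- simplified camera pose

class WorldSession:
    """Stand-in for a session seeded with one static image
    (the seed image would condition generation; unused in this stub)."""
    def __init__(self, seed_image: bytes):
        self.cache: Dict[Pose, bytes] = {}

    def render(self, pose: Pose) -> bytes:
        # Spatial consistency: a revisited pose returns the same view.
        return self.cache.setdefault(pose, f"view@{pose}".encode())

session = WorldSession(seed_image=b"static-scene.png")
forward = session.render((0.0, 1.0, 0.0))
session.render((0.0, 2.0, 0.0))           # walk forward
back = session.render((0.0, 1.0, 0.0))    # walk back
assert forward == back                     # the world stayed coherent
```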

    GWM-1

For Real-time Avatars

    GWM Avatars is an audio-driven interactive video generation model that simulates natural human motion and expression for arbitrary photorealistic or stylized characters. The model renders realistic facial expressions, eye movements, lip-syncing and gestures during both speaking and listening, running for extended conversations without quality degradation.
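A minimal sketch of the audio-driven loop this suggests, with hypothetical names throughout: audio chunks stream in, frames stream out, and the avatar keeps rendering idle motion while listening:

```python
# Hypothetical sketch of audio-driven interactive avatar generation.
from typing import Iterator, Optional

def avatar_frames(audio_chunks: Iterator[Optional[bytes]]) -> Iterator[bytes]:
    """Stand-in generator: one video frame per 40 ms audio chunk (25 fps)."""
    for chunk in audio_chunks:
        if chunk is None:
            yield b"frame:listening"      # idle gestures, eye movement
        else:
            yield b"frame:speaking"       # lip-synced to the audio chunk

stream = [b"\x00" * 640, None, b"\x00" * 640]   # speech, pause, speech
print([f.decode() for f in avatar_frames(iter(stream))])
```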

    Use cases

    • Real-time tutoring and education
    • Customer support and service
    • Training simulations
    • Interactive entertainment and gaming
      Bring personalized tutors to life. Responsive characters that explain concepts, react to questions, and hold extended conversations with the natural expressions and gestures that make learning feel like a real dialogue.

    GWM Avatars is coming soon to the Runway web product and Runway API for integration into your own products and services.

    Gen-4.5 Updates

    • Native audio generation, audio editing and multi-shot video editing.
      Gen-4.5 now supports native audio generation and native audio editing. Not only will you be able to generate novel videos with audio, but you'll also be able to edit the audio of existing videos to suit your needs. Gen-4.5 also introduces multi-shot editing. With multi-shot editing, you can make a change in your initial scene and propagate that change throughout your entire video.

    • Native Audio
Gen-4.5 can generate realistic dialogue, compelling sound effects and immersive background audio, transforming the kinds of stories you can create with the model.

    • Audio Editing
Gen-4.5 now has the ability to edit the audio of existing videos to suit your needs.

• Multi-shot video editing
  Gen-4.5 can edit videos of arbitrary length, applying consistent transformations across multiple shots of arbitrary duration (see the sketch after this list).
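As a rough sketch of that multi-shot pattern (hypothetical names, not Runway's API), a single edit authored against the first shot is applied as a consistent transformation to every shot:

```python
# Hypothetical sketch of edit propagation across shots.
from dataclasses import dataclass
from typing import List

@dataclass
class Shot:
    index: int
    description: str

def apply_edit(shot: Shot, edit: str) -> Shot:
    """Stand-in for re-rendering one shot with the requested change."""
    return Shot(shot.index, f"{shot.description} [{edit}]")

shots = [Shot(0, "city street, day"), Shot(1, "rooftop, day"), Shot(2, "alley, day")]
edit = "change time of day to night"           # edit authored on the first shot
edited = [apply_edit(s, edit) for s in shots]  # propagated to all shots
for s in edited:
    print(s.index, s.description)
```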

    New Gen-4.5 capabilities are coming soon to the Runway web product.

    Fill out this form to request GWM-1 early access

  • Dec 4, 2025
    • Parsed from source:
      Dec 4, 2025
    • Detected by Releasebot:
      Dec 5, 2025
    • Modified by Releasebot:
      Dec 12, 2025

    Runway AI

    Dec 4, 2025

    Paid Plans

    Publish Workflows as Apps

    Turn your Workflows into Apps you can share with your workspace.

  • Dec 1, 2025
    • Parsed from source:
      Dec 1, 2025
    • Detected by Releasebot:
      Dec 1, 2025

    Runway AI

    Introducing Runway Gen-4.5: A new frontier for video generation.

    Runway unveils Gen-4.5, a high fidelity video generation model with precise prompt adherence, dynamic scene control and cinematic visuals. Built on NVIDIA GPUs, it delivers fast performance with broad control modes and rolling access.

    Introduction

    Introducing Runway Gen-4.5: A new frontier for video generation.
    State-of-the-art motion quality, prompt adherence and visual fidelity.

    Runway Gen-4.5 is the world's top-rated video model, offering unprecedented visual fidelity and creative control. It produces cinematic and highly realistic outputs while providing limitless creative freedom and precise control over every aspect of generation.


    Introducing Runway Gen-4.5
    Two years ago, we released Gen-1, the first publicly available video generation model. It enabled an entirely new form of creative expression and a new product category. Since then, we've led the industry in making video models more powerful and controllable, from significant base model improvements to new controls and general in-context capabilities.

    Runway Gen-4.5 pushes the frontier of video generation even further. It represents significant advances in both pre-training data efficiency and post-training techniques. Gen-4.5 sets new standards for dynamic, controllable action generation, temporal consistency and precise controllability across diverse generation modes. With 1,247 Elo points, Gen-4.5 currently holds the top position in the Artificial Analysis Text to Video benchmark, surpassing all other models.

    Gen-4.5 maintains the speed and efficiency of Gen-4, delivering breakthrough quality without compromising performance. Available at comparable pricing across all subscription plans, Gen-4.5 makes world-leading video generation accessible to creators and organizations at every scale. We’ll also be bringing all existing control modes (Image to Video, Keyframes, Video to Video and more) to Gen-4.5.

    Core Capabilities

    • Precise Prompt Adherence
Gen-4.5 achieves unprecedented physical accuracy and visual precision. Objects move with realistic weight, momentum and force. Liquids flow with proper dynamics. Surface details render with great fidelity. And fine details like hair strands and material weave remain coherent across motion and time.

    • Complex Scenes
      Intricate, multi-element scenes rendered with precision.

    • Detailed Compositions
      Precise placement and fluid motion for both objects and characters.

    • Physical Accuracy
      Realistic physics with believable collisions and natural movement.

    • Expressive Characters
      Nuanced emotions, natural gestures and lifelike facial detail.

    • Stylistic Control and Visual Consistency
      Gen-4.5 can handle a wide range of aesthetics, from photorealistic and cinematic to stylized animation, while maintaining a coherent visual language.

    • Photorealistic
      Visuals indistinguishable from real-world footage with lifelike detail and accuracy.

    • Non-photorealistic
      Stylized, expressive motion with artistic freedom unconstrained by realism.

    • Slice of Life
      Everyday scenes and environments with authentic, true-to-life detail.

    • Cinematic
      Emotionally powerful visuals with striking depth and cinematic polish.

Gen-4.5 Deployment

    • High-performance
    • Built on NVIDIA

    Gen-4.5 was developed entirely on NVIDIA GPUs across initial R&D, pre-training, post-training and inference. We collaborated extensively with NVIDIA to push the boundaries of what's possible in video diffusion model optimization, from training efficiency to inference speed.

    Inference runs on NVIDIA Hopper and Blackwell series GPUs, delivering optimized performance without compromising quality.


“This is an incredibly exciting time for video and world models.

We’re proud that Runway built their groundbreaking video and world model on NVIDIA GPUs, and are thrilled to see Runway revolutionize the video generation industry.

Together, we are partnering to advance the entire lifecycle of AI, from pre-training to post-training and inference.”

    Jensen Huang
    President and CEO of NVIDIA

    Select Early Access Enterprise Partners

    • Retail & Ecommerce
    • Marketing & Advertising
    • Broadcast
    • Gaming

    Gen-4.5 Limitations

    Despite the leap in capabilities, the model exhibits several limitations common to video generation models:

    • Causal reasoning: effects sometimes precede causes (e.g., a door opening before the handle is pressed).
    • Object permanence: objects may disappear or appear unexpectedly across frames (e.g., a cup vanishing after being occluded).
    • Success bias: actions disproportionately succeed (e.g., a poorly aimed kick still scoring a goal).

    These limitations are particularly important in our work on world models, which need to accurately represent the outcomes of actions taken in the environment. We are actively researching ways to address them.

    Generated with Runway Gen-4.5

    We are gradually rolling out access to Runway Gen-4.5.
    It will be available to everyone in the coming days.

    Fill out the form if you are interested in a custom Gen-4.5 model for your specific use case.

  • Nov 21, 2025
    • Parsed from source:
      Nov 21, 2025
    • Detected by Releasebot:
      Nov 21, 2025
    • Modified by Releasebot:
      Dec 12, 2025

    Runway AI

    Nov 21, 2025

    Paid Plans

    Workflows Updates

    Audio nodes, video upscaling nodes and three new featured Workflows available now.

  • Oct 24, 2025
    • Parsed from source:
      Oct 24, 2025
    • Detected by Releasebot:
      Oct 30, 2025
    • Modified by Releasebot:
      Dec 19, 2025

    Runway AI

    Workflows

Create your own custom node-based Workflows, chaining together multiple models, modalities and intermediary steps for even more control over your generations. Available now.
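A toy sketch of that chaining idea, using a hypothetical graph representation rather than Runway's actual Workflow format: each node transforms a shared state and feeds the next:

```python
# Hypothetical sketch of chaining models and intermediary steps as nodes.
from typing import Callable, Dict, List

Node = Callable[[Dict], Dict]

def text_to_image(state: Dict) -> Dict:
    return {**state, "image": f"img({state['prompt']})"}

def upscale(state: Dict) -> Dict:
    return {**state, "image": f"4x({state['image']})"}

def image_to_video(state: Dict) -> Dict:
    return {**state, "video": f"vid({state['image']})"}

def run_workflow(nodes: List[Node], inputs: Dict) -> Dict:
    state = dict(inputs)
    for node in nodes:          # each node's output feeds the next step
        state = node(state)
    return state

print(run_workflow([text_to_image, upscale, image_to_video],
                   {"prompt": "a lighthouse at dusk"}))
```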

  • Oct 14, 2025
    • Parsed from source:
      Oct 14, 2025
    • Detected by Releasebot:
      Nov 21, 2025
    • Modified by Releasebot:
      Dec 19, 2025

    Runway AI

    Apps

Apps are an ever-growing collection of use-case-specific workflows that make it easier than ever to get to great outputs. Apps are available now for web, with more releasing every week.

  • October 2025
    • No date parsed from source.
    • Detected by Releasebot:
      Oct 30, 2025

    Runway AI

    Introducing Runway Gen-4

    Runway Gen-4 launches a groundbreaking multi-scene AI that preserves consistent characters, objects, and environments across shots from one reference. It adds production-ready video, physics-aware world modeling, and fast GVFX, enabling seamless storytelling without extra training.

    Introducing Runway Gen-4

    Our next-generation series of AI models for media generation and world consistency.

    A new generation of consistent and controllable media is here.
    With Gen-4, you are now able to precisely generate consistent characters, locations and objects across scenes. Simply set your look and feel and the model will maintain coherent world environments while preserving the distinctive style, mood and cinematographic elements of each frame. Then, regenerate those elements from multiple perspectives and positions within your scenes.

Gen-4 can combine visual references with instructions to create new images and videos with consistent styles, subjects, locations and more, giving you unprecedented creative freedom to tell your story.

    All without the need for fine-tuning or additional training.

    RUNWAY GEN-4

    Narrative Capabilities

    A collection of short films and music videos made entirely with Gen-4 to test the model's narrative capabilities.

    One simple interface, endless workflows and capabilities

    WORKFLOW – CONSISTENT CHARACTERS

    Infinite character consistency with a single reference image
    Runway Gen-4 allows you to generate consistent characters across endless lighting conditions, locations and treatments. All with just a single reference image of your characters.

    WORKFLOW – CONSISTENT OBJECTS

    Whatever you want, everywhere you need it
    Place any object or subject in any location or condition you need. Whether you’re crafting scenes for long form narrative content or generating product photography, Runway Gen-4 makes it simple to generate consistently across environments.

    WORKFLOW – COVERAGE

    Get every angle of any scene
    To craft a scene, simply provide reference images of your subjects and describe the composition of your shot. Runway Gen-4 will do the rest.
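As an illustration only (the request shape below is hypothetical, not Runway's API), this workflow amounts to bundling subject references with a composition description:

```python
# Hypothetical sketch of a reference-plus-composition scene request.
from dataclasses import dataclass
from typing import List

@dataclass
class SceneRequest:                        # hypothetical request shape
    reference_images: List[str]            # e.g. character and location refs
    composition: str                       # desired framing and angle

def generate_shot(request: SceneRequest) -> str:
    """Stand-in for a Gen-4 call returning a generated shot."""
    return f"shot({request.composition}; refs={len(request.reference_images)})"

request = SceneRequest(
    reference_images=["hero.png", "warehouse.png"],
    composition="low-angle wide shot, hero walking toward camera",
)
print(generate_shot(request))
```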

    CAPABILITIES – PRODUCTION-READY VIDEO

    A new standard for quality and language understanding for video generation
Gen-4 excels at generating highly dynamic videos with realistic motion and subject, object and style consistency, with superior prompt adherence and best-in-class world understanding.

    CAPABILITIES – PHYSICS

    A step towards Universal Generative Models that understand the world
    Runway Gen-4 represents a significant milestone in the ability of visual generative models to simulate real world physics.

    WORKFLOW – GVFX

    A new kind of visual effects
    Fast, controllable and flexible video generation that can seamlessly sit beside live action, animated and VFX content.

