Runway AI Release Notes
Last updated: Mar 10, 2026
- Mar 9, 2026
- Date parsed from source: Mar 9, 2026
- First seen by Releasebot: Mar 10, 2026
Latest updates
Runway AI unveils Runway Characters, real-time intelligent avatars you can chat with and learn from, now available via the API and web demo.
All Plans
Runway Characters
Introducing Runway Characters. Real-time intelligent avatars you can talk with and learn from. Now available via the Runway API and as a demo on the web.
- Feb 27, 2026
- Date parsed from source: Feb 27, 2026
- First seen by Releasebot: Feb 28, 2026
Feb 27, 2026
Paid Plans
Nano Banana 2
The most advanced and consistent image generation and editing model. Now available in Runway.
- Feb 20, 2026
- Date parsed from source: Feb 20, 2026
- First seen by Releasebot: Feb 22, 2026
Feb 20, 2026
Paid Plans
New third party models
All of the world’s best models are now available right inside Runway, including Kling 3.0, Kling 2.6 Pro, Kling 2.5 Turbo Pro, WAN2.2 Animate, GPT-Image-1.5, Sora 2 Pro and many more. More models coming soon.
- Jan 22, 2026
- Date parsed from source: Jan 22, 2026
- First seen by Releasebot: Feb 28, 2026
Evaluating Recognition of AI-Generated Content
Gen-4.5 adds image-to-video capabilities and a public try site, signaling a new era in AI video generation. A new study shows most people can’t reliably tell real from AI video, underscoring responsibility and the push for verification standards.
AI video generation models have improved exponentially since we released Gen-2, the first publicly available text-to-video model, in early 2023. Two years ago, these models took several minutes to generate choppy, pixelated clips that were a few seconds long. Today, leading video generation models can reliably produce outputs that are virtually indistinguishable from real video.
This week, we released image-to-video capabilities for Gen-4.5, our latest base model. Today, we're publishing new research evaluating people's ability to determine whether a five-second video is real or was generated by our model. We're also launching a new site where anyone can try for themselves.
For this research study, we recruited a random sampling of 1,043 participants. Each participant viewed 20 videos (10 real, 10 generated) in randomized order and judged whether each was real or AI-generated. Each video was generated only once — the outputs were not edited, and no video was regenerated to improve quality or skew results.
Results
Over 90% of participants could not reliably distinguish Gen-4.5 outputs from real video.
Only 99 of 1,043 participants (9.5%) achieved statistically significant accuracy (>=15/20 correct, p < 0.05, binomial test). Overall detection accuracy was 57.1%, only slightly above chance. Performance was similar on real (58.0%) and generated (56.1%) videos, indicating no systematic detection strategy.
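For reference, the significance cutoff of at least 15 correct out of 20 follows from a one-sided binomial test against chance guessing (p = 0.5); a minimal sketch of that calculation, using only the Python standard library:

```python
from math import comb

def binomial_p_value(successes: int, trials: int, p: float = 0.5) -> float:
    """One-sided p-value: probability of getting at least `successes` correct by chance."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

print(binomial_p_value(15, 20))  # ~0.021, below 0.05, so >=15/20 counts as significant
print(binomial_p_value(14, 20))  # ~0.058, not significant at the 0.05 level
```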
Detection accuracy varied by content category. Human-related videos (faces, hands, actions) were easier to detect (58-65%), while animals and architecture fell below chance (45-47%) — participants were more likely to mistake generated videos for real than vice versa.
These findings represent a fundamental shift in how we should think about video authenticity. For years, we've been building toward General World Models. Realistic simulation is a prerequisite for solving hard problems in the physical world. Gen-4.5 is the most capable simulator we've built yet. But that capability comes with responsibility. When 90% of people cannot reliably distinguish synthetic from real footage—and when generated content in certain categories is more convincing than reality—detection is an inadequate strategy for trust and verification.
Conclusions
Video generation models will continue their exponential improvement, assuming we continue to scale training data and compute. The AI industry and society at large have reached a tipping point, where the average person cannot determine if a video is generated by AI or not.
From photography to Photoshop to traditional CGI, technology has consistently shifted public opinion on what makes a piece of content "real." As AI models continue to improve, we expect another, similar shift. We believe that foundational model developers, including Runway, have a responsibility to drive public conversation around the quality of model outputs, and explore how we can mitigate the societal challenges this technology will introduce while continuing to push the boundaries of AI research and innovation.
All Runway-generated outputs include C2PA metadata, allowing us to certify the origin and provenance of the content our models produce. This open technical standard is embraced by a wide variety of media companies and news organizations, but it is not infallible. We need to build new, more capable standards that preserve trust while enabling creative possibility. That requires technical solutions like C2PA, but also new literacies, updated editorial standards and ongoing dialogue about authenticity.
Moving forward, we're committed to three principles: maintaining transparency about our models' capabilities, collaborating with industry partners on verification standards and engaging directly with creators, enterprises and policymakers to establish new norms for synthetic media.
Methodology
Source videos were sampled from Filmpac across five content categories: faces, full-body human motion, animals, nature scenes and urban environments. For each category, we selected examples representative of content people often aim to generate. The first frame of each video was extracted and used as input to Gen-4.5 with default settings. Each video was generated once, with no regeneration or post-processing. Real and generated clips were trimmed to five seconds and matched in resolution. Participants could view each video for up to 10 seconds before making their judgment. Participants who achieved at least 75% accuracy (>=15/20 correct, p < 0.05, binomial test) were classified as successful detectors.
- Jan 21, 2026
- Date parsed from source: Jan 21, 2026
- First seen by Releasebot: Jan 22, 2026
- Modified by Releasebot: Feb 22, 2026
Jan 21, 2026
Paid Plans
Gen-4.5 Image to Video
You can now supply a first frame image alongside your text prompt with Gen-4.5.
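For API users, a minimal sketch of how supplying a first-frame image alongside a text prompt might look with the runwayml Python SDK; the model identifier, parameter values and status handling below are illustrative assumptions rather than confirmed Gen-4.5 API details, so check the official API reference before use:

```python
import time
from runwayml import RunwayML  # official Python SDK; reads RUNWAYML_API_SECRET from the environment

client = RunwayML()

# Assumption: a Gen-4.5 model identifier exposed by the API; parameter values are illustrative.
task = client.image_to_video.create(
    model="gen4_5",                                       # hypothetical identifier
    prompt_image="https://example.com/first_frame.png",   # the first frame to animate
    prompt_text="A slow dolly-in on the subject as rain begins to fall",
    ratio="1280:720",
    duration=5,
)

# Poll until the generation finishes, then print the output URL(s).
while (status := client.tasks.retrieve(task.id)).status not in ("SUCCEEDED", "FAILED"):
    time.sleep(5)
print(status.output)
```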
- Dec 18, 2025
- Date parsed from source: Dec 18, 2025
- First seen by Releasebot: Dec 19, 2025
- Modified by Releasebot: Feb 22, 2026
Dec 18, 2025
All Plans
New Audio Features
Text to Speech is now available via the Audio tab in Tool mode alongside new audio Apps for SFX and Speech to Speech.
- Dec 11, 2025
- Date parsed from source: Dec 11, 2025
- First seen by Releasebot: Dec 12, 2025
Dec 11, 2025
Paid Plans
Gen-4.5
Introducing Gen-4.5, the world's best video model. Now available for all paid plans.
- Dec 11, 2025
- Date parsed from source: Dec 11, 2025
- First seen by Releasebot: Dec 12, 2025
Introducing GWM-1
Runway introduces GWM-1, a real-time, interactive General World Model family for Worlds, Avatars and Robotics, built on Gen-4.5 to simulate reality and train agents in virtual environments. It includes a robotics SDK, avatars with natural motion, and native audio and multi-shot video editing.
Introducing GWM-1
A state-of-the-art model built to interact with the real world.
INTRODUCTION
GWM-1: our state-of-the-art General World Model, built to simulate reality in real time. Interactive, controllable and general-purpose.
Two years ago, we introduced a new research direction: General World Models. A world model is an AI system that builds an internal representation of an environment and uses it to simulate future events within that environment. The aim of general world models is to represent and simulate a wide range of situations and interactions, like those encountered in the real world.
Today, we're announcing GWM-1, our first general world model family. GWM-1 is an autoregressive model built on top of Gen-4.5. It generates frame by frame, runs in real time, and can be controlled interactively with actions: camera pose, robot commands, audio.
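To make the frame-by-frame, action-conditioned design concrete, here is an illustrative sketch of how an autoregressive rollout loop of this kind can be structured; the classes and methods are hypothetical stand-ins, not the GWM-1 API:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Action:
    """One step of control input (names hypothetical): camera motion, optional audio chunk."""
    camera_delta: np.ndarray          # e.g. [dx, dy, dz, yaw, pitch, roll]
    audio: np.ndarray | None = None   # optional waveform chunk driving an avatar

class ToyWorldModel:
    """Stand-in for an autoregressive world model: next frame = f(frame history, action)."""
    def step(self, frames: list[np.ndarray], action: Action) -> np.ndarray:
        # A real model would run a neural network here; this just shifts pixels
        # by the commanded camera motion to illustrate the interface.
        return np.roll(frames[-1], shift=int(action.camera_delta[0]), axis=1)

def rollout(model: ToyWorldModel, first_frame: np.ndarray, actions: list[Action]) -> list[np.ndarray]:
    """Generate frames one at a time, feeding each output back in as context."""
    frames = [first_frame]
    for action in actions:
        frames.append(model.step(frames, action))
    return frames

frames = rollout(
    ToyWorldModel(),
    first_frame=np.zeros((720, 1280, 3), dtype=np.uint8),
    actions=[Action(camera_delta=np.array([1, 0, 0, 0, 0, 0])) for _ in range(24)],
)
print(len(frames))  # 25 frames: the input frame plus one per action
```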
GWM-1 comes in three variants: GWM Worlds for explorable environments, GWM Avatars for conversational characters, and GWM Robotics for robotic manipulation. Today, these are separate post-trained models. We're working toward unifying many different domains and action spaces under a single base world model.
We believe that world models are at the frontier of progress in artificial intelligence. Language models alone won't solve the world's hardest problems: robotics, disease, scientific discovery. Real progress requires models that experience the world and learn from their mistakes, the same way that humans do. And this kind of trial and error can be massively accelerated when done in simulation, rather than in the real world. World models offer the clearest path to general-purpose simulation.
Key Features
- Action-Conditioning
- Camera
- Events
- Robot Pose
- Speech
Customizable Model
- World Models for Custom Actions +
- Domains
- For access to fine-tuning, fill out this form
Up to 2 minutes of video
720p
Explore what GWM-1 can do
GWM Robotics
GWM Robotics is a learned simulator that generates synthetic data for scalable robot training and policy evaluation, removing the bottlenecks of physical hardware.
GWM Robotics is a world model trained on robotics data that predicts video rollouts conditioned on robot actions.
The model supports counterfactual generation, enabling exploration of alternative robot trajectories and outcomes.
Synthetic data augmentation for policy training
Use the world model to generate synthetic training data that augments your existing robotics datasets across multiple dimensions, including novel objects, task instructions, and environmental variations. This synthetic data improves the generalization capabilities and robustness of your trained policies without requiring expensive real-world data collection.
Policy evaluation in simulation
Test the performance of your policy models (such as VLA models like OpenVLA or OpenPi) directly within Runway's world model instead of deploying to physical robots. This approach is faster, more reproducible, and significantly safer than real-world testing while still providing realistic behavioral assessments.
GWM-1 Robotics SDK
A Python SDK for Runway's robotics world model API that enables action-conditioned video generation using models trained on robotics data. The SDK supports multi-view video generation and long-context sequences, with an interface designed for seamless integration into modern robotic policy models.
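As a rough illustration of the policy-evaluation workflow described above, the sketch below rolls a policy out against a learned simulator instead of physical hardware; every name in it is a hypothetical stand-in rather than the actual GWM-1 Robotics SDK interface:

```python
import numpy as np

class LearnedSimulator:
    """Hypothetical stand-in for a robotics world model: predicts the next observation
    (camera frame) given the current observation and a commanded robot action."""
    def step(self, observation: np.ndarray, action: np.ndarray) -> np.ndarray:
        return np.clip(observation + action.mean(), 0, 255)  # placeholder dynamics

class Policy:
    """Hypothetical stand-in for a VLA-style policy mapping observations to actions."""
    def act(self, observation: np.ndarray) -> np.ndarray:
        return np.random.uniform(-1, 1, size=7)  # e.g. a 7-DoF end-effector command

def evaluate(policy: Policy, sim: LearnedSimulator, episodes: int = 10, horizon: int = 50) -> float:
    """Roll the policy out in simulation and report the fraction of 'successful' episodes.
    Success here is a placeholder check; a real evaluation would score task completion."""
    successes = 0
    for _ in range(episodes):
        obs = np.zeros((256, 256, 3), dtype=np.float32)
        for _ in range(horizon):
            obs = sim.step(obs, policy.act(obs))
        successes += int(obs.mean() > 0)
    return successes / episodes

print(evaluate(Policy(), LearnedSimulator()))
```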
Request Access
GWM-1
For Real-time World Simulation and Exploration
A new frontier for open-ended interactive world simulation. A way of building infinite explorable realities in real-time.
Use cases
- Gaming
- Education
- Training Agents
- VR and Immersive Experiences
GWM Worlds enables players to move freely through coherent, reactive worlds without the need to manually design every space.
GWM Worlds is a world model for real-time environment simulation. You give the model a static scene, and it generates an immersive, infinite, explorable space as you move through it, with geometry, lighting, physics. All in real time. You can travel to any place, real or imagined. You can become any agent: a person walking through a city, a drone flying over a snowy mountain, a robot navigating a warehouse.
What makes this work is consistency. When you explore an environment, you expect the world to stay coherent. Turn around, and what was behind you is still there. Walk forward and back, and you return to where you started. GWM Worlds maintains this spatial consistency across long sequences of movement.
And because it's a simulation, the environment can react. You can define the physics of a world with your input prompt, and the world will respond accurately. If you prompt the agent to ride a bike, it stays on the ground; if you prompt for flight, it can freely navigate the sky.
This is useful for interactive experiences, games, explorable worlds, immersive environments. But it's equally important for training agents. If you want to train an AI system to navigate and act in the physical world, you need a simulator in which to teach it. GWM Worlds can serve as that sandbox, an environment where agents can explore, make mistakes and learn.
GWM-1
For Real-time Avatars
GWM Avatars is an audio-driven interactive video generation model that simulates natural human motion and expression for arbitrary photorealistic or stylized characters. The model renders realistic facial expressions, eye movements, lip-syncing and gestures during both speaking and listening, running for extended conversations without quality degradation.
Use cases
- Real-time tutoring and education
- Customer support and service
- Training simulations
- Interactive entertainment and gaming
Bring personalized tutors to life. Responsive characters that explain concepts, react to questions, and hold extended conversations with the natural expressions and gestures that make learning feel like a real dialogue.
GWM Avatars is coming soon to the Runway web product and Runway API for integration into your own products and services.
Gen-4.5 Updates
Native audio generation, audio editing and multi-shot video editing.
Gen-4.5 now supports native audio generation and native audio editing. Not only will you be able to generate novel videos with audio, but you'll also be able to edit the audio of existing videos to suit your needs. Gen-4.5 also introduces multi-shot editing. With multi-shot editing, you can make a change in your initial scene and propagate that change throughout your entire video.
Native Audio
Gen-4.5 can generate realistic dialogue, compelling sound effects and immersive background audio, transforming the kinds of stories you can create with the model.
Audio Editing
Gen-4.5 can now edit the audio of existing videos to suit your needs.
Multi-shot video editing
Gen-4.5 can edit videos of arbitrary length, applying consistent transformations across multiple shots of arbitrary duration.
New Gen-4.5 capabilities are coming soon to the Runway web product.
Fill out this form to request GWM-1 early access
- Dec 4, 2025
- Date parsed from source: Dec 4, 2025
- First seen by Releasebot: Dec 5, 2025
- Modified by Releasebot: Jan 22, 2026
Paid Plans
Publish Workflows as Apps
Turn your Workflows into Apps you can share with your workspace.
- Dec 1, 2025
- Date parsed from source: Dec 1, 2025
- First seen by Releasebot: Dec 1, 2025
Introducing Runway Gen-4.5: A new frontier for video generation.
Runway unveils Gen-4.5, a high fidelity video generation model with precise prompt adherence, dynamic scene control and cinematic visuals. Built on NVIDIA GPUs, it delivers fast performance with broad control modes and rolling access.
Introduction
Introducing Runway Gen-4.5: A new frontier for video generation.
State-of-the-art motion quality, prompt adherence and visual fidelity.
Runway Gen-4.5 is the world's top-rated video model, offering unprecedented visual fidelity and creative control. It produces cinematic and highly realistic outputs while providing limitless creative freedom and precise control over every aspect of generation.
Introducing Runway Gen-4.5
Two years ago, we released Gen-1, the first publicly available video generation model. It enabled an entirely new form of creative expression and a new product category. Since then, we've led the industry in making video models more powerful and controllable, from significant base model improvements to new controls and general in-context capabilities.
Runway Gen-4.5 pushes the frontier of video generation even further. It represents significant advances in both pre-training data efficiency and post-training techniques. Gen-4.5 sets new standards for dynamic, controllable action generation, temporal consistency and precise controllability across diverse generation modes. With 1,247 Elo points, Gen-4.5 currently holds the top position in the Artificial Analysis Text to Video benchmark, surpassing all other models.
Gen-4.5 maintains the speed and efficiency of Gen-4, delivering breakthrough quality without compromising performance. Available at comparable pricing across all subscription plans, Gen-4.5 makes world-leading video generation accessible to creators and organizations at every scale. We’ll also be bringing all existing control modes (Image to Video, Keyframes, Video to Video and more) to Gen-4.5.
Core Capabilities
Precise Prompt Adherence
Gen-4.5 achieves unprecedented physical accuracy and visual precision. Objects move with realistic weight, momentum and force. Liquids flow with proper dynamics. Surface details render at great fidelity. And fine details like hair strands and material weave remain coherent across motion and time.
Complex Scenes
Intricate, multi-element scenes rendered with precision.
Detailed Compositions
Precise placement and fluid motion for both objects and characters.
Physical Accuracy
Realistic physics with believable collisions and natural movement.
Expressive Characters
Nuanced emotions, natural gestures and lifelike facial detail.
Stylistic Control and Visual Consistency
Gen-4.5 can handle a wide range of aesthetics, from photorealistic and cinematic to stylized animation, while maintaining a coherent visual language.
Photorealistic
Visuals indistinguishable from real-world footage with lifelike detail and accuracy.
Non-photorealistic
Stylized, expressive motion with artistic freedom unconstrained by realism.
Slice of Life
Everyday scenes and environments with authentic, true-to-life detail.
Cinematic
Emotionally powerful visuals with striking depth and cinematic polish.
Gen-4.5 Deployment
- High-performance
- Built on NVIDIA
Gen-4.5 was developed entirely on NVIDIA GPUs across initial R&D, pre-training, post-training and inference. We collaborated extensively with NVIDIA to push the boundaries of what's possible in video diffusion model optimization, from training efficiency to inference speed.
Inference runs on NVIDIA Hopper and Blackwell series GPUs, delivering optimized performance without compromising quality.
“This is an incredibly exciting time for video and world models. We’re proud that Runway built their groundbreaking video and world model on NVIDIA GPUs, and are thrilled to see Runway revolutionize the video generation industry. Together, we are partnering to advance the entire lifecycle of AI from pretraining, to post-training and inference.”
Jensen Huang
President and CEO of NVIDIA
Select Early Access Enterprise Partners
- Retail & Ecommerce
- Marketing & Advertising
- Broadcast
- Gaming
Gen-4.5 Limitations
Despite the leap in capabilities, the model exhibits several limitations common to video generation models:
- Causal reasoning: effects sometimes precede causes (e.g., a door opening before the handle is pressed).
- Object permanence: objects may disappear or appear unexpectedly across frames (e.g., a cup vanishing after being occluded).
- Success bias: actions disproportionately succeed (e.g., a poorly aimed kick still scoring a goal).
These limitations are particularly important in our work on world models, which need to accurately represent the outcomes of actions taken in the environment. We are actively researching ways to address them.
Generated with Runway Gen-4.5
We are gradually rolling out access to Runway Gen-4.5.
It will be available to everyone in the coming days.
Try now
Fill out the form if you are interested in a custom Gen-4.5 model for your specific use case.