The words "Reality Remixed" in bold, glowing letters with sparks and light streaks in the background.

AI video editing using Runway: Part Eight

Reality has been remixed!

We wrap up our series introducing one of the world’s most popular online generative AI video production toolkits by creating a short promotional video for a brand-new immersive experience.


The AI tools and models used in this series

We’ll use runway.ml to generate video clips and assemble them into a sequence, and we’ll use additional AI tools for initial ideation and to write the text prompts that help Runway generate high-quality video clips.

The main steps in the project:

  1. Generate an overview for an ‘immersive experience’ promotional video using perplexity.ai
  2. Generate still images using the flux-1.1-pro model at replicate.com to use as ‘first frame’ prompts at runway
  3. Generate short video clips using runway’s latest Gen-4 model
  4. Use runway’s online video editor to sequence the promotional video
  5. Use the speech-02-hd AI model at replicate.com to generate a realistic-sounding voiceover
  6. Use udio.com to generate a background music track
  7. Add text titles and export the video file


Part 8: The ‘completed’ promo video and Runway.ml in review

Here’s the promo video exported from Runway at the end of part seven of this series:

Before our review of the pros and cons of Runway (Summer 2025), here’s a reminder of the project workflow and the generative AI tools used in this series.

  • The video storyboard / basic blocking was generated using perplexity.ai
  • The video clips were generated with Runway’s ‘Gen-4’ model from a ‘first frame’ prompt image combined with a ‘camera guidance’ text prompt.
  • The ‘first frame’ still images were generated using the flux-1.1-pro model at replicate.com
  • The music track was generated at udio.com using a text prompt generated by perplexity.ai

In Review: runway.ml in June 2025

  • Video clips created using Runway’s latest ‘Gen-4’ model are impressive, displaying ‘natural’ human and facial movement.
  • The number of credits consumed to generate still images seems excessive compared to competing dedicated AI image models and platforms, including those available at replicate.com.
  • Generating six 5-second video clips, 8 candidate still images and around one minute of text-to-speech voiceover consumed approximately 540 of the 625 monthly credits provided by the $15/month ‘Standard’ plan, equating to roughly $13-worth of the monthly credit allowance.
  • Runway’s browser-based video editor is intuitive and could be all you need to create a video sequence with text and music. We did experience the sequence failing to play back in-browser (using an up-to-date version of Chrome on a two-year-old laptop); refreshing the browser tab resolved this.
  • For finer control and additional options, a ‘regular’ video editor such as Vegas Pro or Premiere Pro will be needed.
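To put the credit figures above in perspective, here is a minimal Python sketch of the cost arithmetic. The plan price, credit allowance and credits consumed come from this article; the per-credit value is simply derived from them, not a figure published by Runway.

```python
# Rough cost sanity check for the Runway usage described above.
# Plan figures are taken from the article; per-credit value is derived.

PLAN_PRICE_USD = 15.0   # 'Standard' plan, per month
PLAN_CREDITS = 625      # monthly credit allowance
CREDITS_USED = 540      # approximate credits this project consumed

usd_per_credit = PLAN_PRICE_USD / PLAN_CREDITS   # value of one credit
project_cost = CREDITS_USED * usd_per_credit     # dollar value consumed
remaining = PLAN_CREDITS - CREDITS_USED          # credits left this month

print(f"Per-credit value:  ${usd_per_credit:.3f}")
print(f"Project cost:      ${project_cost:.2f}")   # prints $12.96
print(f"Credits remaining: {remaining}")
```

So the six clips, eight still images and the voiceover together account for roughly $13 of the plan’s $15 monthly value, leaving only 85 credits to spare.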

Recommendations

We recommend using external AI tools to create still images (e.g. Replicate), generate high-quality text prompts (e.g. Perplexity.ai) and generate music (e.g. Udio.com); for video clip generation itself, Runway remains an excellent choice.


That’s the end of this series introducing Runway’s AI-based video generation and editing. See you next month as our Super Summer of Search begins!