“…by the end of this year (2023), anyone should be able to create photorealistic (AI) movies…”
That’s an excerpt of a forecast I wrote on May 19, 2023. A little over two months later, we can now create semi-realistic movies with AI that work for advertising and music videos.
While creating a short or feature-length movie is possible, it’s time-consuming: you need to render multiple four-second video segments (the current render limit for Runway Gen 2) to achieve your desired movement, lighting, and framing. Rendering glitches, primarily with hands and eyes, are being reduced almost daily, but for now they’re another reason you may need to render a dozen segments before one meets your requirements.
If you’re already familiar with the process and time needed to create still images with AI, then you won’t be surprised by the requirements of a video AI like Runway Gen 2.
Creating AI Video | Limitations and Glitches
- Hands and Eyes: Like earlier versions of Midjourney for still images, Runway has difficulty rendering eyes and hands.
- Video length: Currently, 4 seconds is the maximum. But there’s a workaround for this limitation, listed below in Process.
- Color Match and Grading: Each segment must be carefully color matched if you need a final video longer than 4 seconds.
Process
- Video Quality: Activate the following before starting your video production:
- Note: by activating these four items, the generation of your video will take longer.
- Fix seed between generations. Note: copy the seed value for later use.
- Interpolate
- Upscale
- Remove watermark
- Prompt: text, photo, or both: At this point in the development of the Runway AI, I recommend using an image with text. The results will be more movie-like, with fewer artifacts and glitches.
- Breaking the 4-second limit: Import the video into your editor and save the last frame as an image. Then use that image with a minimal text prompt, or none, to generate the next 4-second video. Rinse and repeat until you have the video length needed for your project.
- Color Match and Grading: When you use the last frame of the previous segment as the image source for the next one, the first frame Runway generates will not be an identical color match to the reference image. As with most video, whether captured with a camera or generated by AI, color matching and grading is time-consuming but worthwhile to obtain a cohesive look.
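The last-frame workaround can also be scripted outside a video editor. Below is a minimal sketch that builds an ffmpeg command to save a clip’s final frame as a still image (ffmpeg must be installed; the file names, function name, and 0.1-second end offset are my assumptions, not part of the Runway workflow):

```python
def last_frame_cmd(clip: str, still: str) -> list[str]:
    # -sseof -0.1 seeks to 0.1 s before the end of the input file,
    # -frames:v 1 writes a single video frame,
    # -update 1 tells ffmpeg to write one image file rather than a sequence.
    return ["ffmpeg", "-sseof", "-0.1", "-i", clip,
            "-frames:v", "1", "-update", "1", still]

# To run it on a real 4-second clip (uncomment when ffmpeg is available):
# import subprocess
# subprocess.run(last_frame_cmd("clip.mp4", "last_frame.png"), check=True)
```

The resulting `last_frame.png` can then be uploaded to Runway as the image prompt for the next 4-second segment.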
Example AI Video
Resources
While I wrote this article myself, I used Grammarly to make suggestions for improving it.
I created the image within the video by training the Midjourney image AI with a photograph I took during an editorial photoshoot and an AI prompt I wrote.
I then created the video by training the Runway AI, Gen 2, with the image and a prompt that I wrote.
The voiceover was generated by ElevenLabs from a script I wrote.
The background music was generated using the SoundRaw AI.
Note: I don’t receive compensation for mentioning companies or organizations in this article.