Post production for 360 video with an offline/online workflow

Offline/online workflows work well for 360 video projects, which often require more post-production work than traditional video productions.

Categories: Getting Started

Tags: Image Optimization, Post Production, Production, Video Editing
Skill Level:

Read Time: 5 Minutes

Updated 09/02/2022


Immersive video projects often involve huge media files and a lot of post-production time. Unlike monoscopic productions edited in modern software, it often isn’t practical to work with final-quality files through the whole pipeline, because doing so would consume too much time and too many computer resources. This is especially true for 360 media, which requires a time-consuming stitching process before the footage can even be viewed properly. An ‘offline/online’ workflow can help make these productions possible.

Offline media refers to lower-quality versions of source media. These versions can be rough, with issues to be fixed later in the post production pipeline once the edit is complete.

Online media is full-resolution, final-quality (stitched) video that is ready for other parts of the finishing process such as color grading and final VFX. The following guide breaks down the basic parts of the workflow.

1. Ingest

To protect against the possibility of data loss, it is common convention to download all media to at least two separate sources and to keep those sources in separate physical locations. Another common practice is to maintain the original folder structures from camera media because some camera manufacturers rely on the directory structure and camera-assigned filenames in their processing software. Offload apps such as ShotPut Pro or Hedge offer checksum verification during media offloading that is used to verify that the files were copied without errors from the camera media to drives.
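Dedicated offload apps handle this for you, but the underlying idea is simple. The following is a minimal Python sketch of a checksum-verified copy, hashing the source and destination after copying and comparing the digests; the file paths and chunk size are illustrative, not part of any real offload tool.

```python
import hashlib
import shutil
from pathlib import Path

def file_hash(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the hex MD5 digest of a file, read in chunks to keep memory flat."""
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verified_copy(src: Path, dst: Path) -> bool:
    """Copy src to dst, then re-hash both sides to confirm the copy is intact."""
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)  # copy2 preserves timestamps from the camera media
    return file_hash(src) == file_hash(dst)
```

In practice you would run a copy like this once per backup destination, so every physical location holds an independently verified copy.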

On some immersive camera rigs, each lens records to its own media. If you are using third-party stitching software, you may have to organize the files to prepare for stitching. This normally involves taking the files from each card and organizing them according to the ‘take.’ For camera systems that use multiple cards, this means taking the first file from card one, the first file from card two, and so on, and creating a ‘take’ folder that contains a single file from each camera.

Once the media is organized, the project is ready to stitch.
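As an illustration of the take-grouping step above, a Python sketch like the following pairs the Nth clip from each card into a numbered take folder. It assumes every card recorded its clips in the same order and has the same clip count; the folder naming is hypothetical, and real rigs may need manufacturer-specific rules.

```python
import shutil
from pathlib import Path

def group_takes(card_dirs: list[Path], dest: Path) -> list[Path]:
    """Pair the Nth clip from each card into a 'take_N' folder.

    Assumes each card recorded its clips in the same order, so the first
    file on every card belongs to take 1, the second to take 2, and so on.
    """
    # Sort each card's files so "first clip" means the same thing on every card
    per_card = [sorted(p for p in card.iterdir() if p.is_file()) for card in card_dirs]
    take_dirs = []
    # zip stops at the shortest card, so mismatched counts silently drop takes
    for i, clips in enumerate(zip(*per_card), start=1):
        take = dest / f"take_{i:03d}"
        take.mkdir(parents=True, exist_ok=True)
        for clip in clips:
            # Prefix with the card name, since cameras often reuse filenames
            shutil.copy2(clip, take / f"{clip.parent.name}_{clip.name}")
        take_dirs.append(take)
    return take_dirs
```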

2. Rough Stitch

Rendering out rough stitches of all the takes without spending time getting stitches perfect (painting out the crew, camera supports, and so on) can speed up productions and reduce overall cost. Most productions capture far more content than will make it into the final cut. By rendering out rough stitches, creative teams can decide which takes will make it into the final cut. Then, time only needs to be spent cleaning up video segments that were actually used. Using rough stitches can save a lot of editing time, rendering time, and hard drive space.

If stitching with camera manufacturer-supplied software, consider using a lower bitrate or a more compressed output format for your offline files to save hard drive space. These applications generally don’t give editors many options, but the rough output can help you spot shots with tricky stitching issues that you may want to take into more advanced software to fix during the online stage of the workflow.

IMPORTANT: Rendering out rough stitches for the entire duration of each recorded take allows for easy selection of precise edit in and out points.

While artifacts during the rough stitching process shouldn’t be a concern, it’s still a good idea to render rough stitches at a resolution that matches the finishing resolution so that the editing is done at the correct final resolution. If playing back 8K footage in real time is difficult (and it usually is), then use the editing software to create lower resolution proxies. This is an extra step, but it avoids having to reposition titles and adjust size-specific effects later in the finishing process.
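Proxy generation is usually handled inside the editing software itself, but as an illustration of what it does under the hood, the following Python sketch assembles an ffmpeg command for a reduced-resolution proxy. The file names are hypothetical; the `scale` filter's `iw`/`ih` expressions divide the input's own width and height, so a divisor of 2 turns an 8K (7680x3840) equirectangular frame into a much lighter 3840x1920 one.

```python
def proxy_command(src: str, dst: str, scale_divisor: int = 2) -> list[str]:
    """Build an ffmpeg argument list for a lower-resolution editing proxy."""
    return [
        "ffmpeg", "-i", src,
        # Scale relative to input width/height so the aspect ratio is preserved
        "-vf", f"scale=iw/{scale_divisor}:ih/{scale_divisor}",
        "-c:v", "libx264", "-crf", "23",  # compressed, edit-friendly delivery codec
        "-c:a", "copy",                   # leave the audio untouched
        dst,
    ]
```

Passing the returned list to `subprocess.run` would perform the actual transcode, assuming ffmpeg is installed.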

Adobe Premiere Pro Timeline. Image: Light Sail VR.

3. Editing

The edit stage of the post production workflow is very similar to the edit stage in traditional video production. Because stitching is so involved, the idea of ‘picture lock’ in immersive video production is more important than in standard workflows. Picture lock means that none of the visual cuts or timings are going to change; if you ‘unlock’ the picture once work has started on fine stitching, visual effects, and spatial audio mixing, you may end up having to re-do a lot.

After picture lock, fine stitching, color grading, VFX, and spatial audio mixing can begin.

4. Fine Stitch

All takes usually need to be carefully stitched and cleaned up. For 3D-180 media this usually involves small adjustments (if needed). For 360 media, this means spending time adjusting the stitch, if using third-party stitching software such as SGO’s Mistika VR. This is also the stage for removing tripods, compositing out lights, crew, and other unwanted elements, and stabilizing the image. Stitching is highly specialized, and engaging a dedicated stitcher can make a huge difference in the final quality of your video. A bad stitch can make a great creative edit unwatchable.

The final stitched videos are placed in the timeline to match the rough stitch cuts.

5. Color Grade

With all of the final media in place, the next phase is color grading. This can be done right inside the editorial software, or it can be sent to an external tool such as Blackmagic Design’s DaVinci Resolve Studio or Assimilate Scratch VR. At this point in the process, sharpening or denoising can also be applied.

If color has been completed in an external tool, you can choose whether to render the color-graded footage as individual clips or as a single file before bringing the footage back into the editing software for final mastering. Individual clips give more flexibility to make last-minute changes, but a single master file is easier to work with.

For more about color grading see articles in the Skills & Principles and the Create & Build categories.

6. Mastering

The second-to-last step involves marrying the completed audio mix and color-corrected footage with any titles or graphic elements. It is good practice to keep titles and graphic elements separate from the footage in case they need to change or be removed (e.g., for translation into other languages). An audio workflow may also require separating the mix into dialog, music, and sound effects stems so that those elements can be isolated later, for example if a trailer needs to be cut without access to the original editorial media and project file. If spatial audio has been mixed, this separation needs to be done in the DAW (digital audio workstation) software.

After all of this work, it’s good practice to render out the final video as a master using a high-quality codec such as ProRes 422 HQ. This master is later used to encode variations for distribution. The included audio is sometimes just a stereo mixdown of the spatial mix to serve as a reference, as the spatial versions are usually added during the encoding process.
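As a rough illustration of such a master render, the Python sketch below assembles an ffmpeg command using the `prores_ks` encoder (profile 3 corresponds to ProRes 422 HQ) with uncompressed PCM audio for the stereo reference mix. The file names are hypothetical, and real productions may need different audio channel layouts or metadata flags.

```python
def master_command(src: str, dst: str = "master.mov") -> list[str]:
    """Build an ffmpeg argument list for a ProRes 422 HQ master render."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "prores_ks", "-profile:v", "3",  # profile 3 = ProRes 422 HQ
        "-c:a", "pcm_s24le",                     # uncompressed 24-bit reference audio
        dst,
    ]
```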

7. Encoding

The final stage is encoding the video and audio for each delivery platform. For more about encoding and delivery, refer to Encoding immersive videos for Meta Quest 2.