Behind the scenes: Tested VR

Tested VR is an entertaining 3D 180 series shot in Adam Savage’s workshop. He shares details about some of his most interesting projects. The immersion is as good as it gets in VR.

Photo Courtesy of: Tested VR BTS, Joey Fameli (Left), Eric Cheng (Middle), Adam Savage (Right)

Categories: Case Studies
Tags: 180 Video

Read Time: 5 Minutes

Updated 09/02/2022


Not all projects work best in 360, so when shooting VR video of one of Adam Savage’s Tested episodes, a 3D 180 camera was used to put the viewer right by the workbench. The result is Tested VR, an entertaining 3D 180 series shot in Adam’s workshop. He shares details about some of his most interesting projects, and the immersion is as good as it gets in VR.

How did this project come to be?

We have been covering the VR space for quite a while, and we’ve been looking at the ways content creators used this new medium to tell immersive stories. We were intrigued by custom-built rigs making use of off-the-shelf cameras like GoPros, and with 360-degree cameras from Samsung and Ricoh, but it was only when turnkey stereo 180 cameras became available from Lenovo and Z CAM that we started experimenting with filming VR content ourselves. We quickly found that filming the process of making was compelling to watch in VR – the idea that you could put a viewer across a workbench from a craftsperson and invite them to be a spectator at a build project.

Photo 1 Left: Tested VR BTS, Brett Foxwell; Photo 2 Middle: Tested VR BTS, Griffon Ramsey; Photo 3 Right: Tested VR BTS, Andrew Freeman

Telling the stories of makers and the spaces they inhabit is core to Tested and Adam Savage’s mission, so we jumped at the opportunity Oculus gave us to visit some of our favorite makers and film them in 3D 180. Eric Cheng, Immersive Media Lead at Meta, guided us through the production requirements and best practices that he had developed in his 3D 180 experiments, and Adam Savage curated the group of makers we featured. Many of our operating procedures were born out of filming tests done with Adam in his workshop, which gave us a model to work from for this season.

Photo: Tested VR Behind the Scenes with Adam Savage

Why did you feel this was a project that needed to be told in immersive media?

We’ve had a lot of experience filming profiles and interviews of interesting makers, and a recurring theme is that the objects they make, as well as their processes, are often reflected in the workshops and spaces they work in. In traditional flatscreen video we can give a tour of a workshop and show a maker’s tools up close, but it’s not nearly as effective as 3D 180 video for giving the viewer a complete sense of space – the physical dimensions as well as the level of activity. Sometimes a maker’s workshop might be just a garage or home office; 3D 180 puts you right into that room, which we hope will help viewers understand that you don’t have to work in a big woodshop or warehouse to make things. Similarly, there are workshops that are bustling with activity, and the spatial audio puts the viewer in the middle of that as well, which has a whole different energy than when documented in 16:9.

Photo 1: Tested VR BTS, Brett Foxwell; Photo 2: Tested VR BTS, Brianna Chin

What prep did you do that you felt really paid off on this project? Storyboards? Previsualizations? Shot list? What is your process?

Previsualizing is different in many ways for VR production. We couldn’t really storyboard in a traditional sense, as those panels are designed for framed compositions, not 180 degrees, and we were pretty loose with shot listing. The decision on which shots to commit to was mainly built from the tips and tricks we learned from the previous episode we shot. We thought of it almost like a theater, where set decoration, set dressing, and available lighting is really key to creating compelling visuals.

Our pre-production process usually went like this:

In the days leading up to recording we would have a couple of calls with the subject. The first thing we’d try to do is explain the format and how it would impact the production. Even people who have done camera work before can get a little thrown off by this style of production. There isn’t the luxury of multiple cameras or b-roll to help cover up edits. There would be blocking and center staging but, to contradict ourselves, we’d also encourage them to make free use of the 180-degree environment. In the interest of a smooth production we’d do our best to give clarity and explain our process so they’re not surprised on the day. My go-to line would usually be, ‘Think of this less like we’re making a video and more like you’re on stage giving a live presentation. But the stage is your workshop, and the live audience is this weird-looking camera rig with two big lenses.’

Next we would talk about the subject matter. We would ask them to walk us through, in detail, the process of what we’re hoping to explore, and from there we would develop an editorial plan. That part is not too different from how we normally prepare these types of 16:9 videos, but the difference here is we would craft the editorial based on what we know looks good and feels immersive in VR. The particulates, the smoke, sparks, wood chips, large volumetric shapes; all those things would be the main driving force in editorial – because we’re trying to tell a story that feels like it can only properly be told in VR. Without those considerations we might as well be making a traditional 16:9 video.

Photo: Tested VR BTS, Rick Lyon

Once we feel like we have a handle on what editorial will look like we go through pictures of the shooting environment. We encourage folks to dress their shop – not in any kind of false way, but maybe move large sculptures or decorations away from the wall a bit. Populate the workshop so that the 3D really pops. That also allows us to move around the space a bit and give the user a different vantage point. If we were shooting in one big empty room things could seem repetitive or bland. This would usually be our last pre-shooting step, and sometimes that would happen on the night we arrive, the day before the shoot. If we do all the above steps correctly the shoot days should go pretty smoothly.


What were some challenges you faced in production and how did you solve them?

We were relatively lucky when it came to general production challenges. We worked so closely with the folks over at Oculus that we felt like we were ahead of a lot of the big production challenges. I think our biggest setback was forgetting a power cable, which sent us into a frenzy; no local camera store had one available because the gear was so unique. We ended up having to run power through an ethernet switch, which was a little cumbersome.

The camera build, while it remained relatively the same in concept, changed and morphed a bit as the season went on; we would find little issues here and there with the camera size and the way it was mounted. Finding the right tripod and counterweight to work with was a bit of a trial and error process as well.

Photo: Tested VR BTS, Joey Fameli (Left), Ryan Nagata (Right)

I think the biggest wildcard for us was trying to encourage the subject to complete as many thoughts as possible in one presentation, so that we could limit the amount of cutting per camera shot when it came time for post. For immersive media you really want to feel like you’re in the room with the person, and when you cut too much, or at inopportune times, you start to break the illusion. Because the people we were talking to were makers, not necessarily presenters or actors, that was a near impossible ask. We would do our best to guide them, as well as compromise on our own guidelines where we had to. Other than that, the two of us on set worked with both cameras and technology for so long that any problem we had we were able to troubleshoot pretty effectively.

One of the benefits of 3D 180 is the sense of intimacy you get from the presenter talking to the camera. Our direction was to ask the maker to speak to the camera rig as if they were a guest in their workshop. That meant relaxing a little bit and not being afraid to address the camera directly as well as bringing objects closer to the lens to allow the viewer to examine them. We had to occasionally remind the subjects to not look off camera at us, even if we were prompting them with questions. We realized how much these subjects want to make eye contact, and even though the camera has two lenses that give the illusion of eyes, it doesn’t give the same kind of acknowledgement or feedback cues that you would get from having a conversation with a real person.

Photo 1 Left: Tested VR BTS, Griffon Ramsey; Photo 2 Right: Tested VR BTS, Ryan Nagata

What camera(s) did you choose for this project and why?

For season one we shot Tested VR with the Z CAM K1 Pro kit for 3D 180, and supplemental 16:9 1080p footage was captured with a Sony FS5 and a Sony X70. That would be used for picture-in-picture within the VR space.

We chose the K1 Pro for its size, media requirements, and the infrastructure of software that was already developed. We knew we were going to be traveling a fair bit, and because of that we wanted equipment that could pack up tightly and also allow a second K1 Pro to be included as backup. As of right now, this equipment is still quite rare, so if something happened to our gear on the road it would be very hard to find parts or repairs. Investing money in a second “backup” camera that would travel with us seemed like good insurance. We also knew that some of the spaces we would be visiting would be small and tight, so we wanted to make our footprint as small as possible. This meant we could not only shoot comfortably but also open up multiple options for camera angles throughout the spaces. A larger rig would have prohibited such freedom.

Photo: Tested VR BTS, Andrew Freeman

The K1 Pro also shoots in a compressed codec, meaning that we can shoot a high volume of footage spread across multiple inexpensive SD cards. This was important because the ‘interview docu-style’ format we shot in required us to shoot hours and hours of footage to ensure that we had the right coverage to build the story in post. I believe our shooting ratio averaged around 24:1, which is high, but because a lot of what we were doing was experimental we needed a bit of redundancy in shots and camera angles in case something just didn’t work or didn’t play well in the final delivery. Having a manageable media size was also important for post-production, as we had to store the original clips of each camera (or ‘eyeball’), then stitch them together to create one 5.7K file (approximately double the storage size of one eyeball clip), as well as the rest of the assets, and have that all available on preferably one target drive to cut from. Of course with good DIT support and lots and lots of hard drive space available, you could shoot with higher-end cameras and record and cut in a format such as ProRes, but a part of the Tested VR project was to test out and develop a workflow that anyone who works on the prosumer level could adapt to, if they were inclined to try out 180 VR production.
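As a rough illustration of the media math above, the storage implied by a 24:1 shooting ratio can be sketched in a few lines. The 60 Mbps per-eye bitrate here is an assumed figure for a compressed codec, chosen purely for illustration, not the K1 Pro’s published specification:

```python
# Back-of-envelope media budget for a compressed-codec 3D 180 shoot.
# The 60 Mbps per-eye bitrate is an assumption for illustration only.

def capture_hours(final_minutes: float, shooting_ratio: float) -> float:
    """Hours of raw footage implied by a final runtime and a shooting ratio."""
    return final_minutes * shooting_ratio / 60.0

def storage_gb(hours: float, mbps: float) -> float:
    """Approximate storage in gigabytes for footage at `mbps` megabits/second."""
    return hours * 3600 * mbps / 8 / 1000  # megabits -> megabytes -> gigabytes

raw_hours = capture_hours(final_minutes=17.5, shooting_ratio=24)  # 7.0 hours
per_eye = storage_gb(raw_hours, mbps=60)   # source clips for one 'eyeball'
stitched = 2 * per_eye                     # stitched file ~ double one eye clip
total = 2 * per_eye + stitched             # both eyes + stitched master on one drive
```

Even at modest compressed bitrates, keeping both eyes plus the stitched master on a single target drive adds up quickly, which is why the codec choice mattered.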

Photo: Tested VR BTS

Finally, we chose the K1 Pro because of the software infrastructure that Z CAM has already developed. On the higher end, when shooting for a stereoscopic image using multiple cameras, you would normally send that footage off to a stereographer who would create a 3D image. While that creates a high-quality 3D alignment, it also adds steps to the post-production process and can be expensive. Again, with the intention of developing a workflow for the prosumer, we used the software that came with the Z CAM cameras, which provided automated stitching and exporting. The quality was good, and the process was painless and highly automated. The biggest decision you had to make was which codec to export to.

Photo 1 Left: Tested VR BTS, Adam Savage; Photo 2 Right: Tested VR BTS, Griffon Ramsey

The Z CAM K1 Pro – all of its limitations aside – allowed me as a traditional video editor to work in a system and workflow that was relatively familiar. For season two (which we are currently shooting) we did two of the episodes with the Z CAM K2 Pro, the bigger brother of the K1. That required a little more attention and care; it shoots in ProRes, on CFast cards, which means the file sizes are significantly bigger, and with many more options on each individual camera inside the body of the K2, you have to take proper care in calibrating the system correctly. Although the image quality is much better and ProRes gives you many more options in post, it’s not a system I would recommend to just anyone without taking the proper time to learn and test.

How many days were the shoots?

We started pre-production early in the process to iron out location, props, and style, and to talk through most of what the video was going to be. That let us be incredibly efficient with our time during production, which usually lasted 2-3 days.

What was the most exciting part of the production?

Uncertainty, along with trial and error, was probably the most exciting part of production. I know that kind of goes against what production is usually like; typically you hire specialists to perform production tasks very quickly and very effectively to maintain a budget and manage your time. But in this scenario none of us were really “specialists,” so to speak. We had technical knowledge of how to operate our gear and how to construct the story from an editorial standpoint, but the years of exposure to the medium, its cinematography, and its framing simply weren’t there, because it was all so new.

In the six months we were on this project we faced many unique challenges, such as lighting a 180-degree environment. We could no longer lean on rules of thumb like “3-point lighting” because so much was in frame. There was no way you could hide the lights.

Photo: Tested VR BTS, Brianna Chin (Left), Joey Fameli (Right)

We not only had to adapt to area lighting (while maintaining a certain contrast ratio so that the camera could resolve something that resembled a “natural” environment), we also had to solve problems with light color and tint bleeding throughout each set, which caused harsh transitions between different parts of the room. These are all things we would never have considered when framing up for a 16:9 video.

Photo: Tested VR BTS, Alexis Noriega

Also, framing subjects in general had to be entirely VR focused. This meant our roles as cinematographers had to be less about manipulating an image to our point of view, and more about placing the user to give them their best point of view. We had to ask ourselves, ‘Where in this room would someone want to stand? How close can we get them without causing discomfort?’ It was an active question on set; we were constantly having to quash our desire for ‘cool shots’ and keep in mind the need to give the audience the best point of view at all times, because, ultimately, that’s what is important in immersive VR.

These were interesting problems to solve, and it made every challenge a huge learning opportunity that hopefully will provide a new set of ‘guidelines’ for other VR content creators when they get stuck.

Post Production

Can you give us an insight into your post workflow (software used, mastering format, editing process). What worked well? What were the challenges that you either solved or learned from?

The post-production pipeline could be broken down into several stages:

  • Heavy Organization of Assets
  • Concatenating & Stitching
  • Master Sync of All Video (both eyes + stitch) to Audio
  • Assembly
  • Fine Tune & Asset Pass
  • Color Correction
  • Spatial Audio Editing
  • Exporting & Muxing

I used a combination of Adobe Bridge and a Windows batch rename tool to organize everything, so that assets could be traced back. This was important because, for example, 6 months after the project wrapped, a stereographer was hired to do a realignment of a few scenes to see if they could be made to “pop” a little better. Being able to trace the concatenated and stitched version back to the original source (individual ‘eyes’) to send to them made that process painless. Had things not been organized, the process would have been much trickier, especially considering that the stitching and concatenating steps are destructive to the file names.
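A traceable batch rename can be sketched in a few lines of Python. The folder layout and `EP01_0001`-style naming scheme here are hypothetical, not the production’s actual convention; the point is that the manifest preserves the original-to-renamed mapping that the stitching and concatenating steps would otherwise destroy:

```python
# Sketch: batch-rename clips while writing a CSV manifest, so a stitched
# file can be traced back to its source 'eye' clips later. The naming
# scheme (EP01_0001.mp4, etc.) is illustrative, not a real convention.
import csv
import os

def rename_with_manifest(folder: str, prefix: str, manifest_path: str) -> None:
    rows = []
    for i, name in enumerate(sorted(os.listdir(folder)), start=1):
        ext = os.path.splitext(name)[1]
        new_name = f"{prefix}_{i:04d}{ext}"
        os.rename(os.path.join(folder, name), os.path.join(folder, new_name))
        rows.append({"original": name, "renamed": new_name})
    # The manifest is the traceability record: renamed -> original source.
    with open(manifest_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["original", "renamed"])
        writer.writeheader()
        writer.writerows(rows)
```

Six months later, looking up which source ‘eyes’ produced a given stitched clip is then a single search in the CSV rather than guesswork.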

From there I stitched the files using Z CAM WonderStitch, a free application that works very well with the Z CAM K1 Pro cameras.

Once the stitched files were complete, I brought everything into Adobe Premiere and began syncing the various audio sources manually. There’s no option for timecode sync on the K1 Pro, and syncing programs like PluralEyes got quite confused by the spatial audio.
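For what it’s worth, when timecode isn’t available, offsets between recordings can also be estimated automatically by cross-correlating mono downmixes of the tracks. This is a sketch of that general technique, not the manual workflow described above:

```python
# Sketch: estimate how many samples one recording lags another by
# cross-correlating the waveforms. In practice you would downmix each
# source to mono and window a slate/clap region before correlating.
import numpy as np

def estimate_delay(ref: np.ndarray, other: np.ndarray) -> int:
    """Samples by which `other` starts later than `ref` (positive = lags)."""
    corr = np.correlate(ref, other, mode="full")
    return (len(other) - 1) - int(np.argmax(corr))

# Synthetic check: delay a noise signal by 100 samples.
rng = np.random.default_rng(0)
ref = rng.standard_normal(1000)
other = np.concatenate([np.zeros(100), ref])[:1000]
delay = estimate_delay(ref, other)  # -> 100
```

Spatial (ambisonic) sources are exactly where this kind of naive correlation can get confused, which matches the PluralEyes experience described above.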

From there, I would slowly chip away at the footage. I would do what I guess I can call a ‘vomit draft’ version – just a rough cut to get the broad shape of the story developed – then I would go through multiple times and fine-tune the edit until the story felt right. Up to this point I wouldn’t really be using a headset; I’d be looking at the Adobe Premiere program monitor with “VR” mode turned on, meaning it would give me a sort of normal-looking image instead of the two-eye stereo version.

Photo: Tested VR BTS, Adam Savage

While I did the final fine-tuning, color correction, and asset passes, I relied heavily on a headset, as that gave me a clearer picture of how things were looking. I did most of my color correction in Adobe Premiere, but for more complicated grading I would bounce out to DaVinci Resolve.

All the audio editing was done in a program called Reaper. The audio recording equipment was a Sennheiser AMBEO VR mic that recorded spatially, a lavalier mic on the subject, and a shotgun mic that recorded ambient sound plus the subject. All those sources were converted to spatialized audio using the Facebook Spatial Workstation plugins in Reaper, and then I would track those audio channels to their sources in real time. I would have a sort of keyframe-writing mode turned on, then I would watch the video through a preview and follow the source along with my mouse, mapping those sources to locations. The end result was spatialized audio. The only tracks that remained in head-locked stereo were voice-over and music.

Audio editing was always the last step, as the picture was usually locked by then. We would then export essentially five different things. The first three were a ProRes video master with no audio, a spatialized audio track from Reaper, and a stereo audio track from Reaper. Then, using a command line in FFmpeg, I would encode the video to a compressed H.264 version. I would then take the H.264 and both audio tracks and mux them together using the FB360 tool. That would give me the final export, with video and all audio combined, that would be used for delivery. Our delivery format was usually an .mkv container.
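The FFmpeg encode step can be sketched as a command builder like the one below. The CRF, preset, and pixel-format choices are illustrative defaults rather than the production’s actual settings, and the FB360 mux step is left out because its command-line interface isn’t covered here:

```python
# Sketch of the H.264 encode step as an ffmpeg argv list. CRF/preset/pix_fmt
# values are illustrative defaults, not the settings used on Tested VR.

def h264_encode_cmd(prores_master: str, out_path: str, crf: int = 18) -> list:
    return [
        "ffmpeg",
        "-i", prores_master,    # ProRes video master (picture only, no audio)
        "-c:v", "libx264",      # H.264 encoder
        "-crf", str(crf),       # quality-targeted rate control
        "-preset", "slow",      # slower preset = better compression efficiency
        "-pix_fmt", "yuv420p",  # widest playback compatibility
        "-an",                  # keep it silent; audio is muxed in later
        out_path,
    ]

cmd = h264_encode_cmd("EP01_master.mov", "EP01_h264.mp4")
# run with: subprocess.run(cmd, check=True)
```

Building the argv as a list keeps the step scriptable and avoids shell-quoting issues when paths contain spaces.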

We did experience a number of challenges along the way that we learned from. In the first season we had some big hiccups with color rendition across multiple headsets. Early on, the Oculus Quest didn’t have a link cable for live view, so I would attempt to grade an image, then export out a still frame to a headset like the Oculus Go to check the frame. I would realize it didn’t quite match, so I would add an adjustment curve to compensate for the variations in color displays, only to find everything looked different on another headset. I eventually created compensation LUTs so that I could preview what the image would look like on different headsets. That’s gone away now with Oculus Link; the image that I see in the headset is much more in tune with what the export will be. This really sped up the coloring process. Having a headset with a live view (and a system that could run it) is definitely key to getting the editing process to a reasonable speed.

How long did post production take?

Our end product would typically be about 15-20 minutes in length, cut down from around 4-6 hours of captured footage. Post-production typically took about a month, but that time has sped way up since then. We were relatively early with this type of content in 3D 180 VR, so there was a lot of going back and forth, trying out different things, experimenting with editing styles and graphics, and also just getting the tech to work.

Photo: Tested VR BTS, Alexis Noriega

We’re working on season 2 at the moment, and with our house style now developed, with the Oculus Quest supporting live view in Adobe Premiere, and with the institutional knowledge we’ve gained, episodes usually take us about 2 weeks.

Did you have to fix something in post? What was it?

The things that we’d often have to fix in post rarely dealt with the stereo image. Because we were using Z CAM’s WonderStitch software, most of the stereo work was automated and we couldn’t really touch the alignment settings. The things we did have to fix were little bits of weirdness we didn’t account for; sometimes a tripod leg would sneak into frame, or light would hit one of the lenses but not the other, which caused too much discomfort. For the tripod legs we would alter our overlay mask to cover that stuff up, but for light flaring in one lens there wasn’t much you could do. That is why you have to carefully evaluate each shot as you set up. In those situations we would have to ask ourselves if the shot was worth it, and how to minimize the problem.
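The overlay-mask fix for stray tripod legs can be illustrated with a simple feathered blend over the bottom rows of a frame. This is only a stand-in for the stitcher’s actual mask feature, with made-up dimensions and a made-up cover color:

```python
# Sketch: patch out gear that sneaks into the bottom of a frame by
# feathering a cover color over the lowest rows. A real pipeline would
# use the stitching software's overlay mask; this just shows the idea.
import numpy as np

def mask_bottom(frame: np.ndarray, rows: int, feather: int,
                color=(16, 16, 16)) -> np.ndarray:
    """Blend `color` over the bottom `rows` of `frame`, ramping in over
    `feather` rows so the patch has a soft edge instead of a hard line."""
    out = frame.astype(np.float32)
    h = frame.shape[0]
    alpha = np.zeros(h, dtype=np.float32)
    alpha[h - rows:] = 1.0                      # fully covered rows
    alpha[h - rows - feather:h - rows] = np.linspace(
        0.0, 1.0, feather, endpoint=False)      # soft ramp into the patch
    a = alpha[:, None, None]
    out = out * (1 - a) + np.array(color, dtype=np.float32) * a
    return out.astype(np.uint8)
```

The feather matters: a hard-edged patch is exactly the kind of ‘little bit of weirdness’ that draws the eye in a headset.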

Photo 1 Left: Tested VR BTS, Melissa Ng (Left), Brianna Chin (Right); Photo 2 Right: Tested VR BTS, Damien Zimmerman

I think the biggest problem we had came in the form of color crossover. In one episode in particular there were multiple different lights in the workshop, all with different colors that were bleeding into each other and creating a very unnatural-looking environment. On top of that, we had a large soft source of light on the subject. To our naked eyes things looked okay. Our eyes tend to neutralize color crossover in a way that feels normal to us, and the dynamic range of our vision lets us resolve things at a lower contrast. However, in the camera all the areas of the shop had large color cast spills with ugly crossovers, and they were also much darker than we would have liked; because we exposed for our subject, and had too much light on him, the rest of the world looked dark, lonely, and cold.

I couldn’t fix this in Adobe Premiere, so I bumped a couple of shots out to DaVinci Resolve and used an array of nodes to do targeted color correction. I told the software to take certain colors at certain values and saturations and bring them back toward something more neutral. Then I carefully adjusted curves and levels and used masking to make the world look more normal and inviting. I exported that out as a LUT, brought it into Premiere, and applied it via an adjustment layer. It seemed to work pretty well, but boy, what a pain! This was partly due to us not setting the stage correctly, but also because there weren’t great monitoring tools out yet for these cameras, and as we were in a bit of a rush we didn’t take the time to render out preview clips to view in the headset before saying “action!” That’s a mistake we will not make again.
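Conceptually, that targeted correction amounts to pulling pixels inside a hue window, above a saturation threshold, back toward neutral. This sketch shows the idea on raw RGB data; the hue window, threshold, and 50% pull are illustrative, and Resolve’s qualifiers are far more sophisticated:

```python
# Sketch of hue-targeted neutralization: pixels whose hue falls in a window
# and whose saturation exceeds a threshold are pulled toward neutral.
# Thresholds and the 50% pull amount are illustrative choices.
import colorsys
import numpy as np

def neutralize_hue(rgb: np.ndarray, hue_lo: float, hue_hi: float,
                   sat_min: float = 0.4, amount: float = 0.5) -> np.ndarray:
    """Desaturate pixels with hue in [hue_lo, hue_hi) and saturation > sat_min.
    `rgb` is float in [0, 1], shape (H, W, 3). Per-pixel loop for clarity."""
    out = rgb.copy()
    h, w, _ = rgb.shape
    for y in range(h):
        for x in range(w):
            hu, sa, va = colorsys.rgb_to_hsv(*rgb[y, x])
            if hue_lo <= hu < hue_hi and sa > sat_min:
                out[y, x] = colorsys.hsv_to_rgb(hu, sa * (1 - amount), va)
    return out
```

Exporting such a transform as a LUT, as described above, lets a slower tool’s correction be replayed cheaply in the editing application.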


How did you decide to distribute the project?

To distribute the series in headset we wanted to package it in a fun way that would explain the concept as well as give us a chance to introduce the VR audience at large to Tested and Adam’s workshop. We decided that an Oculus Go and Oculus Quest app would be ideal, and we worked with a developer who designed an interactive cardboard-themed recreation of Adam’s workshop that would serve as a menu interface as well as a landing place for the series. This also allowed us to make use of the controllers on the Oculus Quest, enabling viewers to pick up objects and artifacts to get more detail about the makers and get some more context about the videos outside of the content itself.

Photo: Tested VR BTS, Adam Savage

After the release of Tested VR, Oculus launched its own Media Studio platform for content creators to directly upload immersive video into the Oculus TV app, and we’ve uploaded the entirety of Tested VR there so viewers can stream the series without having to download the episodes first.

What did you learn from audience feedback?

We were really surprised by how much people enjoyed just visually exploring the spaces we filmed in, and by the different details that resonated with each viewer. Some people were picking out details on the floor, or trying to identify certain tools that appeared but weren’t used in the projects. It helped us realize the importance of staging each shot, and it also showed that viewers quickly gave themselves permission to lose eye contact with the presenter and look around. As much as immersive video has the power to simulate an intimate one-on-one conversation, it’s kind of an asymmetrical relationship between the viewer and presenter. It can look weird when a presenter doesn’t make eye contact with you during a piece to camera, but you have no problem looking away yourself, which isn’t something that normally happens in real-world interactions.