The best camera settings and workflows for 360 photography

360 photography can deliver incredibly rich, detailed images, but some of the techniques and best-practice methods differ from regular photography. Learn the key points here.

Categories: Skills & Principles
Tags: Image Optimization, Camera, Production, Stitching & Rendering
Read Time: 4 minutes

Updated 12/30/2022


Creating immersive photography with the best possible image quality can be a little demanding, but with practice and the right workflow choices the end results can be exceptional.

The big divide in terms of processes, workflows and advice for 360 photography is between ‘one shot’ 360 cameras and traditional DSLR or mirrorless cameras teamed with some kind of panoramic tripod head. Dedicated 360 cameras remove a lot of the post-production requirements from the workflow, but this generally means losing some of the control possible over capture and quality that a DSLR-style rig offers. (When working with a panoramic head see Set up a panoramic head to shoot high-resolution 360 photos to make sure it is adjusted correctly for your camera and lens.)

Set exposure to manual

Each photo that is part of a 360 scene should be shot with the same exposure settings. All one-shot 360 cameras will handle this for you, matching your exposure choices across their multiple lenses and sensors. But DSLRs and mirrorless cameras should have all exposure controls set to manual and locked to the same settings for each shot that goes into a composite 360 photo.

This is a fundamental requirement of best-practice professional 360 photo work, and the reason is simple: a 360 photo is created from at least two, and normally more, shots that are patched together to appear as one continuous image. If the exposure differs from one shot to the next, the result is likely to be uneven lighting at best and, at worst, a distracting patchwork of exposure areas.

Having said this, it’s not impossible to shoot using auto exposure settings. If there’s a lot of overlap between shots then the stitching software can blend lightness levels across the scene, and the results can look good. This lets you expose for a bright window in one direction and a dark corner in another. However, it comes at the expense of accurate, consistent tones across the shots – for example where a flat, evenly lit wall runs through both photos – and as a result it is not considered best professional practice.

Assess the brightness and shadow areas of the complete scene. You may wish to settle on an exposure that’s simply averaged between the extremes, or you may prefer to expose for the most important parts of the scene and deal with any bright and dark areas in post-production, or simply let them blow out or fill in.
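To make ‘averaging between the extremes’ concrete, the standard exposure-value formula EV = log2(N²/t) (aperture N, shutter time t, at ISO 100) lets you meter the brightest and darkest parts of the scene and split the difference. A minimal Python sketch; the metered f-stops and shutter speeds here are illustrative, not recommendations:

```python
import math

def ev(aperture: float, shutter_s: float) -> float:
    """Exposure value at ISO 100: EV = log2(N^2 / t)."""
    return math.log2(aperture**2 / shutter_s)

def shutter_for_ev(target_ev: float, aperture: float) -> float:
    """Shutter time (seconds) that hits target_ev at a fixed aperture."""
    return aperture**2 / 2**target_ev

bright = ev(8, 1/500)            # metering the bright window
dark = ev(8, 1/8)                # metering the shadow corner
mid = (bright + dark) / 2        # the averaged exposure
print(round(mid, 1), shutter_for_ev(mid, 8))
```

Keeping the aperture fixed and solving only for shutter time also keeps depth of field consistent between the frames of the composite.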

Bracket exposures

For scenes with light and dark extremes there’s a real risk that highlights will be blown out and shadows filled in. If there’s little or no movement in the scene, shooting bracketed exposures and blending them at some point in the production or post-production stage can be very helpful. The most common method is to take three shots per position, one at the normal exposure point, one two stops brighter, and one two stops darker. See what bracketing options your camera provides and practise shooting and processing these images.
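The arithmetic behind a ±2-stop bracket is simple: each stop doubles or halves the exposure, and varying shutter speed rather than aperture keeps depth of field constant across the frames. A small sketch, with an illustrative base shutter speed:

```python
def bracket_shutters(base_shutter_s: float, step_stops: float = 2, frames: int = 3):
    """Shutter times for a symmetric bracket around the base exposure.
    Each stop doubles or halves the shutter time; the aperture stays
    fixed so depth of field and vignetting match across frames."""
    half = frames // 2
    return [base_shutter_s * 2**(step_stops * i) for i in range(-half, half + 1)]

# Three frames around 1/60s at +/-2 stops: 1/240s, 1/60s, 1/15s
print(bracket_shutters(1/60))
```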

Some cameras can shoot and merge bracketed exposures in-camera. This can be a big timesaver, especially with 360 cameras that can also stitch and deliver the final equirectangular image. When time isn’t critical, when your camera can’t merge exposure brackets for you, or when you simply want more control over the outcome, you can merge them in the post-production stage using mainstream tools such as Adobe Lightroom or dedicated HDR utilities such as Photomatix Pro, Bracketeer or the command-line enfuse. The output from this stage can be a standard 8-bit JPEG or TIFF, or a 16-bit TIFF. The 16-bit option is preferred for the very best workflow quality, as it preserves more tonal detail through to the final web-ready virtual tour, but the file sizes of these interim 16-bit TIFFs are significant.
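As a rough illustration of what exposure-fusion tools such as enfuse do, the sketch below weights each pixel by how close it sits to mid-grey and averages across the bracketed frames. This is a deliberately simplified, single-scale version of the idea; real tools also weight by contrast and saturation and blend with multi-scale pyramids:

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Minimal exposure-fusion sketch (no multi-scale pyramids).
    `frames` is a list of float arrays in [0, 1], all the same shape,
    one per bracketed exposure."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    # Well-exposedness weight: a Gaussian centred on 0.5 (mid-grey),
    # so nearly blown or nearly black pixels contribute little
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma**2))
    weights /= weights.sum(axis=0, keepdims=True)
    # Per-pixel weighted average across the bracket
    return (weights * stack).sum(axis=0)
```

For two frames where a pixel reads 0.1 (underexposed) and 0.9 (overexposed), the weights are symmetric and the fused value lands at mid-grey.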

Why HDR usually doesn’t mean HDR

True 32-bit HDR (high dynamic range) imaging is typically used in 3D modelling for environment maps. These are used to create a wide range of effects, including reflections, specular highlights and diffuse lighting, but true 32-bit images are rarely used in conventional photographic output.

When people say HDR they normally mean ‘tone-mapped’ images that have had their highlight and shadow ranges modified to normalize the overall exposure effect and avoid burned-out highlights and filled-in shadows, either through artificial image processing or through merging different exposures. These are usually 8-bit images with 256 brightness levels in each RGB channel rather than the billions of brightness levels possible with true 32-bit HDR.
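The level counts are straightforward powers of two, which a couple of lines make explicit:

```python
def levels(bits: int) -> int:
    """Discrete brightness levels per channel for an integer bit depth."""
    return 2 ** bits

print(levels(8))   # 256 levels per channel (JPEG, 8-bit TIFF)
print(levels(16))  # 65536 levels per channel (16-bit TIFF)
# True 32-bit HDR stores floating-point values, so it is better thought
# of as covering an enormous physical brightness range than as a fixed
# count of evenly spaced levels.
```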

These HDR-style tone-mapped images are often characterized by vibrant colors and an increased sense of contrast and detail. When handled sensitively the results can feel natural and believable. However, HDR processing can be easily overdone or poorly executed, leading to images that look unnatural or exaggerated. As with any photography technique, it is important to use tone-mapping with care in order to create high-quality images.
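One classic global tone-mapping operator, Reinhard’s L/(1+L), shows the basic idea: unbounded linear luminance is compressed into the displayable [0, 1) range, with highlights rolled off progressively. A minimal sketch; production tone-mappers add local adaptation and colour handling on top of this:

```python
import numpy as np

def reinhard_tonemap(hdr, exposure=1.0):
    """Reinhard global operator: L / (1 + L).
    `hdr` holds linear radiance values >= 0; output is in [0, 1)."""
    scaled = np.asarray(hdr, dtype=np.float64) * exposure
    return scaled / (1.0 + scaled)

# Black stays black, mid-tones pass largely through, and even extreme
# highlight values are squeezed under 1.0 instead of clipping
print(reinhard_tonemap(np.array([0.0, 1.0, 1e6])))
```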

Shoot in RAW

If your camera can capture and deliver RAW files rather than JPEGs, that will provide a significant boost to the level of adjustment that can be made. RAW files do require an extra processing step, although serious photo work should always allow for image optimization whatever file format the camera saves.

The benefits of shooting in RAW rather than JPEG include:

  • Significantly greater dynamic range
  • Ability to alter white balance without any impact on image quality
  • No image compression artefacts

The drawbacks of shooting in RAW include:

  • Significantly larger file sizes
  • The need to process shots into regular bitmap file formats before use
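The white-balance point above comes down to RAW data being linear: correcting white balance is then just a per-channel gain, whereas the same operation on an 8-bit gamma-encoded JPEG clips and posterizes. A sketch with illustrative gain values (real multipliers come from the RAW metadata or a grey-card reading):

```python
import numpy as np

def apply_wb(linear_rgb, gains=(2.0, 1.0, 1.5)):
    """White balance on linear sensor data is a per-channel gain.
    `linear_rgb` is an array whose last axis is (R, G, B); the gains
    here are illustrative, not from any particular camera."""
    return np.asarray(linear_rgb, dtype=np.float64) * np.array(gains)

# A neutral grey patch under a cool illuminant is rebalanced by
# scaling the red and blue channels relative to green
print(apply_wb([1.0, 1.0, 1.0]))
```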

Optimize images before stitching

It is best to make all adjustments and optimizations to the images before stitching. Doing this after the composite 360-degree equirect has been generated can lead to visible seams where the sides meet, and it can also cause ‘cat butt’ vortex effects to the top and bottom (zenith and nadir) of the image when viewed in immersive VR. Global adjustments such as levels and curves are safe to make, but avoid features such as Clarity, Dehaze, sharpening or noise removal that work with local differences in the image.
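The distinction between safe global adjustments and risky local ones can be shown in a few lines: a levels or gamma mapping depends only on each pixel’s own value, so applying it identically to every source frame cannot create seams, while sharpening, Clarity or Dehaze depend on neighbouring pixels, which differ between overlapping frames. A sketch of a global levels adjustment (the parameter values are illustrative):

```python
import numpy as np

def global_levels(img, black=0.02, white=0.98, gamma=1.1):
    """A global levels adjustment: the same per-pixel mapping is applied
    everywhere, so running it identically on each source frame before
    stitching cannot introduce seams. `img` is a float array in [0, 1]."""
    # Remap the black and white points, then apply a gamma curve
    out = (np.asarray(img, dtype=np.float64) - black) / (white - black)
    return np.clip(out, 0.0, 1.0) ** (1.0 / gamma)
```

Running this with the same parameters on every frame of the composite leaves the overlap regions in agreement, which is exactly what local operations cannot guarantee.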

Once the shots have been processed they are ready to be stitched. See Using a DSLR or mirrorless camera to shoot and stitch a 360 photo with a panoramic head for help with this stage of the process.