Introduction

360° video uses video formats to encode moving image sequences that surround the user in virtual space. The user is able to freely rotate their head, determining the viewing angle. The viewing position may be fixed, may follow a pre-determined path (on rails), or may offer some degree of positional interactivity through "portals" which jump the viewer to another piece of 360° video content.

360° video can be created in a number of different ways:

  • Captured by a camera or array of camera lenses;

  • Generated as an export from 3D rendering software (e.g. Blender);

  • Generated from a real-time 3D game engine. The 360° video exported from a game engine can be the artistic end product (see, for instance, "Passage Park #7: Relocate" by Studer / van den Berg) or the documentation of a real-time 3D artwork.

Assessing 360° Video

  • How was the 360° video created?

    • Was it created through camera capture? If so, are the camera type and output format known?

    • Was it created using a 3D software tool, e.g. a 3D renderer or game engine? If so, are the capture and output formats of the video known?

    • Did the video undergo a stitching process, and is the software known?

    • Did the video undergo an editing process, and is the software known? Are the associated production assets available?

  • By inspecting file metadata (e.g. with ffprobe, as sketched after this list), are you able to determine key characteristics of the 360° video format? These are:

    • The projection format used, e.g. cubemap or equirectangular.

    • Whether the video is monoscopic or stereoscopic.

    • Which codec and pixel format has the video been encoded with?
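
ffprobe (part of FFmpeg) and similar tools such as ExifTool can expose these characteristics. The Python sketch below is one possible approach, assuming ffprobe is installed and that the file carries spherical metadata (for example, injected with Google's Spatial Media tools) which the installed FFmpeg build reports as stream side data; the exact side-data labels ("Spherical Mapping", "Stereo 3D") can vary between versions and files.

```python
import json
import subprocess
import sys

def probe_360_video(path):
    """Report codec, pixel format, projection and stereo layout via ffprobe.

    Assumes ffprobe is on PATH. Spherical/stereo details only appear if the
    file carries that metadata and this FFmpeg build exposes it as side data.
    """
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    info = json.loads(result.stdout)

    for stream in info.get("streams", []):
        if stream.get("codec_type") != "video":
            continue
        print(f"codec:      {stream.get('codec_name')}")
        print(f"pixel fmt:  {stream.get('pix_fmt')}")
        print(f"resolution: {stream.get('width')}x{stream.get('height')}")

        # Spherical and stereoscopic information, if present, appears as side data.
        for side_data in stream.get("side_data_list", []):
            kind = side_data.get("side_data_type", "")
            if "Spherical" in kind:
                print(f"projection: {side_data.get('projection')}")
            if "Stereo" in kind:
                print(f"stereo:     {side_data.get('type')}")

        # Some containers carry the stereo layout as a stream tag instead.
        stereo_tag = stream.get("tags", {}).get("stereo_mode")
        if stereo_tag:
            print(f"stereo_mode tag: {stereo_tag}")

if __name__ == "__main__":
    probe_360_video(sys.argv[1])
```

If no projection or stereo information is reported, the spherical metadata may simply be absent from the file rather than the video being non-360°; in that case the production documentation is the fallback source for these characteristics.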

Acquisition Checklist

The following measures can help ensure long-term access to 360° video:

  • Ensuring metadata has been captured describing the projection format and the mono/stereoscopic layout used (one way of recording this is sketched after this list).

  • Ensuring the video file received is the highest quality available.

  • Considering whether the source video files (pre-stitching) should also be acquired.
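
Where the projection and stereo layout are not reliably embedded in the file itself, one option is to record them in a small sidecar file at the point of acquisition. The sketch below writes a minimal JSON sidecar; the `.360.json` naming and the field names are illustrative local conventions, not an established standard.

```python
import json
from pathlib import Path

def write_360_sidecar(video_path, projection, stereo_layout,
                      stitching_software=None, source_files=None):
    """Write an illustrative JSON sidecar next to an acquired 360° video.

    The schema covers the checklist items above (projection, mono/stereoscopic
    layout, production context); adapt the fields to local practice.
    """
    video = Path(video_path)
    record = {
        "filename": video.name,
        "projection": projection,        # e.g. "equirectangular", "cubemap"
        "stereo_layout": stereo_layout,  # e.g. "monoscopic", "top-bottom"
        "stitching_software": stitching_software,
        "pre_stitching_sources": source_files or [],
    }
    sidecar = video.parent / (video.name + ".360.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar

# Example (hypothetical file and values):
# write_360_sidecar("relocate.mp4", "equirectangular", "monoscopic")
```

Keeping this record alongside the acquired file means the projection and layout remain documented even if container metadata is stripped during later migrations.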
