Pre-Production Process
Set Scanning / Location Capture
As part of a virtual production workflow, it is often necessary to create a digital version of a physical set, or a virtual version of a real-world location. These digital assets can be used in a multitude of ways:
- To scout camera positions, either in virtual reality, on a workstation or as part of a virtual camera session.
- To plan action sequences without needing access to the location.
- In previsualization, to create animated storyboards or fully animated sequences.
- As an on-set reference tool, using Simulcam / real-time compositing.
- As final-picture assets filmed with ICVFX.
- As reference for post-production, showing set pieces, light placement etc.
Photogrammetry
Photogrammetry is a process that uses overlapping still photographs or sequential video frames to reconstruct a three-dimensional object. The input data can be captured with consumer-grade cameras, and the same image data is used to texture the object, creating a high-resolution asset ready to be ingested into a digital workflow. Aerial photogrammetry uses a drone-mounted camera to capture the images, resulting in a large-scale landscape mesh.
- Assets created in this way cannot be used out-of-the-box for ICVFX. When photo-realistic assets are required, an Asset Artist will need to clean up and prep the object before it is used.
- Photogrammetry struggles with reflective objects, or featureless environments.
- A large number of images are required to calculate the most accurate result.
- The output object will be created at an arbitrary scale. An object of known size must be included in the scan to allow accurate scaling.
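The scale-correction step in the last note above amounts to simple arithmetic: measure a reference object of known size in the real world, measure the same object in the reconstructed scan, and scale every vertex by the ratio. A minimal sketch (the function names here are illustrative, not part of any photogrammetry package):

```python
def scale_factor(real_size_m: float, scanned_size: float) -> float:
    """Ratio that converts arbitrary scan units to metres."""
    return real_size_m / scanned_size

def rescale_vertices(vertices, factor):
    """Apply a uniform scale to a list of (x, y, z) vertices."""
    return [(x * factor, y * factor, z * factor) for x, y, z in vertices]

# Example: a 1 m calibration cube measures 2.5 units in the scan.
factor = scale_factor(1.0, 2.5)            # 0.4
mesh = [(0.0, 0.0, 0.0), (2.5, 0.0, 0.0)]  # scan-space vertices
print(rescale_vertices(mesh, factor))      # → [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
```

In practice, photogrammetry packages apply this as part of their ground-control or marker workflow, but the underlying operation is the same uniform scale.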
Image 1: An Aerial / Drone scan, showing camera positions
Image 2: An object scan, showing camera positions and images
Hardware required
A DSLR stills camera / drone camera (depending on the quality required and the accessibility of the location)
Software required
Reality Capture / Agisoft Metashape / Autodesk ReCap
Photogrammetry can also be used to scan actors, although a different hardware setup is required (multiple synchronized static cameras, rather than one camera in multiple locations).
LIDAR Scanning
LIDAR is an acronym for ‘light detection and ranging’ and uses dedicated hardware scanners that bounce laser pulses off surfaces to build a three-dimensional map of an environment. Depending on the scanner used, this will either be a greyscale mesh object, or a textured / colored mesh.
Hardware required:
A LIDAR scanner. (FARO, Leica)
Software required:
FARO Scene / Scanner manufacturer software
Notes:
- Assets created in this way cannot be used out-of-the-box for ICVFX. When photo-realistic assets are required, an Asset Artist will need to clean up and prep the object before it is used.
- LIDAR struggles with reflective surfaces, as it uses bounced light.
- Scans are ideally done with a clear set.
- Multiple static scans are created and merged in a post-processing step into a cohesive point cloud, which can then be meshed.
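The merge step in the last note can be illustrated with a minimal sketch: each static scan is a point cloud in its own coordinate frame, and a rigid transform (rotation plus translation, typically solved by the registration step in the scanner software) maps it into a shared frame before the clouds are concatenated. This is illustrative NumPy, not the FARO Scene API:

```python
import numpy as np

def apply_rigid_transform(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map an (N, 3) point cloud into the shared frame: p' = R @ p + t."""
    return points @ R.T + t

def merge_scans(scans):
    """Concatenate already-aligned scans into one point cloud."""
    return np.vstack(scans)

# Example: scan B was captured 2 m along +X from scan A,
# with the scanner rotated 90 degrees about the vertical (Z) axis.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([2.0, 0.0, 0.0])

scan_a = np.array([[0.0, 0.0, 0.0]])
scan_b = np.array([[1.0, 0.0, 0.0]])  # in scan B's local frame
merged = merge_scans([scan_a, apply_rigid_transform(scan_b, R, t)])
```

Real registration pipelines estimate `R` and `t` automatically (e.g. from checkerboard targets or cloud-to-cloud alignment); the point here is only that every scan ends up in one shared coordinate frame before meshing.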
Left Image: A FARO LIDAR scanner
Right Image: Resulting LIDAR data
Hybrid Scanning
If the highest accuracy is required, the best approach is a combination of the above. LIDAR scan data gives the highest scale and world-space accuracy, while adding high resolution images from a DSLR camera gives the highest texture accuracy. The data from both approaches can be combined using photogrammetry software and output as an accurate, high resolution textured asset.
If the asset will be used on camera, or absolute accuracy is required for technical visualization or detailed action planning, this is the recommended approach.
Virtual Art Department (VAD)
As early as 12 weeks prior to the first virtual production shoot day, production should be thinking about engaging the production designer, director, DP, and VAD. This core group working early and often will lead to a successful and efficient shoot. It is worth noting that ASVP does not have a full internal VAD team. We will work closely with production to build a team or source vendors to create assets for the project.
The VAD is used for many purposes including:
- Creating the virtual environments to be displayed on the LED volume to act as the digital background to complement the practical set. See below in the Virtual Environment section for a more detailed explanation.
- Figuring out the Volume placement, scene setup, what needs to be virtually built and what needs to be practically built.
- Driving the location scouting session with the filmmakers.
- Creating the virtual props which drive the creation of the practical props.
- Assisting the DP in pre-lighting both the virtual and practical sets, so that when the DP arrives on the lighting day with their crew, the exact look is ready.
- Assisting the Production Designer in realizing the design in full 3D and seeing it in Virtual Reality before creation of the large set pieces.
The time and cost required to create these full CG virtual environments vary greatly depending on complexity and methodology, just as they do for practical sets or visual effects work. A good rule of thumb is that creating a virtual environment takes roughly the same amount of time as it would take a visual effects company to create the same asset for a VFX shot. The big difference is that VAD environments must be completed prior to filming, unlike VFX sets.
Virtual Environments
The Virtual Environments are created by the VAD under the guidance of the Production Designer and utilizing lighting input from the DP. The virtual environments to be used in the Volume come in many flavors. The type of shots will determine what kind of virtual environment techniques will be used. Most virtual sets can be created in 6 to 12 weeks.
Static 360 Photography
A 360 degree panoramic photograph is created by combining multiple photographs captured at the desired real world location. This spherical photograph is then projected as a backdrop on the LED Volume.
- Pros: Correct lighting, reflections and background. Photorealistic. Inexpensive. Fast turnaround.
- Cons: No parallax. Extremely limited flexibility. No animation.
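Projecting a spherical photograph onto the Volume amounts to mapping each view direction to a pixel in the equirectangular panorama. A minimal sketch of that lookup, using the standard latitude/longitude mapping and independent of any particular media server:

```python
import math

def direction_to_equirect_uv(x: float, y: float, z: float):
    """Map a unit view direction (Y up, +Z forward) to (u, v) texture
    coordinates in [0, 1] on an equirectangular image.
    u sweeps longitude around the horizon; v sweeps latitude pole to pole."""
    lon = math.atan2(x, z)                    # -pi..pi, 0 when looking down +Z
    lat = math.asin(max(-1.0, min(1.0, y)))   # -pi/2..pi/2, clamped for safety
    u = lon / (2 * math.pi) + 0.5
    v = 0.5 - lat / math.pi
    return u, v

# Looking straight ahead (+Z) samples the centre of the panorama.
print(direction_to_equirect_uv(0.0, 0.0, 1.0))  # → (0.5, 0.5)
```

Because every direction maps to a fixed pixel regardless of where the camera stands, the backdrop exhibits no parallax, which is exactly the limitation noted above.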
Stitched Video Playback
Video plates are captured via a camera array and stitched together to playback seamlessly in the Volume. Alternatively, a rendered sequence of a camera moving through a fully CG environment can be used. The movies are rendered out spherically to be projected on the Volume. This is great for driving, flying or any types of traveling shots.
- Pros: Correct lighting and reflection. Moving background.
- Cons: No parallax. Extremely limited modifications.
Unreal CG Environment
A full CG environment is created using various techniques such as photogrammetry, LIDAR/laser scans, or building from scratch in a DCC application.
- Pros: Accurate parallax, lighting and reflection. Extremely flexible and adjustable. Not limited to being based on any real physical locations. Ability to virtually location scout in VR.
- Cons: Most expensive of the methods. Takes the longest amount of time to create. Not as realistic as the other two methods.