space_00

Exploratory Work.

This is the first of a series of explorations in which I’m attempting to discover and develop an aesthetic narrative that relies on the attentive perception of the expressive qualities of physical materials, architectonic spaces, light and aural architecture. In the context of my previous work, I believe these virtual environments are a place of convergence for the attentive spatial awareness of a soundscape, the sensorial experience of the configuration of a physical space, the sensorial narrative of detailed textures and materials revealed through lighting, and slow-paced, careful observation.

I’m interested in the aesthetic potential of realistic architectural structures intervened upon by surreal geometric forms that serve as containers or canvases for abstracted texture and material expression. “Pure” materials (e.g. metals, glass, natural materials in homogeneous form) are usually expressed through Platonic solids, whereas mixed materials (combining natural sources and evident human manufacture) are presented through organic and irregular geometry. The textures used to render these are produced from a combination of existing PBR texture libraries and my own photogrammetry-captured surfaces.

These environments are also sonically illuminated, to reveal their aural architecture, by the diffusion of multichannel generative sounds played through fixed and moving virtual speakers placed inside the space. The reverberation and occlusion of these sounds are synthesized in real time using the actual geometry of the architecture and the objects placed within it.
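For illustration, a “moving virtual speaker” in this setup can be as simple as an actor whose audio component slowly orbits a point, with spatialization, reverb and occlusion handled by the audio plugin configured for the project (here I assume Steam Audio, as described below). This is only a minimal sketch; the class and parameter names (AMovingSpeaker, OrbitRadius, OrbitSpeed) are hypothetical.

```cpp
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/AudioComponent.h"
#include "MovingSpeaker.generated.h"

UCLASS()
class AMovingSpeaker : public AActor
{
    GENERATED_BODY()

public:
    AMovingSpeaker()
    {
        PrimaryActorTick.bCanEverTick = true;
        Speaker = CreateDefaultSubobject<UAudioComponent>(TEXT("Speaker"));
        RootComponent = Speaker;
    }

    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        Origin = GetActorLocation();
    }

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);
        // Slowly orbit the speaker around its spawn point so the listener
        // hears the room's reverb and occlusion change with position.
        const float T = GetGameTimeSinceCreation() * OrbitSpeed;
        SetActorLocation(Origin + FVector(FMath::Cos(T), FMath::Sin(T), 0.f) * OrbitRadius);
    }

private:
    UPROPERTY(VisibleAnywhere)
    UAudioComponent* Speaker = nullptr;

    UPROPERTY(EditAnywhere)
    float OrbitRadius = 200.f;   // cm

    UPROPERTY(EditAnywhere)
    float OrbitSpeed = 0.25f;    // radians per second

    FVector Origin = FVector::ZeroVector;
};
```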

Ideally, and in order to elicit full immersion, these detailed, high-resolution environments (in both pixel count and audiovisual expressive content) would be experienced through a VR system. Since current technology still cannot easily render high-fidelity stereoscopic images in real time, I’m experimenting with producing still images and monoscopic framed virtual cinematics captured inside these environments. For the virtual cinematics, I’m exploring different types of camera movement and framing sequences, including fixed cameras, simulated rigs (cameras moving on simulated rails or cranes) and physically handheld virtual cameras driven by motion-tracking sensors. This last technique has the added benefit of letting me capture a virtual environment with natural camera movements, which I can use to express the quality of attention I expect a viewer to bring in order to fully experience these environments.
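As a sketch of the “simulated rig” case, a camera can be driven along a spline that stands in for a rail or crane path. The class below is hypothetical and only illustrates the idea, assuming Unreal’s USplineComponent and UCineCameraComponent; the actual shots are framed and paced by hand.

```cpp
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/SplineComponent.h"
#include "CineCameraComponent.h"
#include "RailCameraRig.generated.h"

UCLASS()
class ARailCameraRig : public AActor
{
    GENERATED_BODY()

public:
    ARailCameraRig()
    {
        PrimaryActorTick.bCanEverTick = true;
        Rail = CreateDefaultSubobject<USplineComponent>(TEXT("Rail"));
        RootComponent = Rail;
        Camera = CreateDefaultSubobject<UCineCameraComponent>(TEXT("Camera"));
        Camera->SetupAttachment(Rail);
    }

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);
        // Advance the camera along the rail at a constant, slow pace,
        // wrapping back to the start when the end of the spline is reached.
        Distance = FMath::Fmod(Distance + Speed * DeltaSeconds, Rail->GetSplineLength());
        const FVector Location = Rail->GetLocationAtDistanceAlongSpline(Distance, ESplineCoordinateSpace::World);
        const FRotator Rotation = Rail->GetRotationAtDistanceAlongSpline(Distance, ESplineCoordinateSpace::World);
        Camera->SetWorldLocationAndRotation(Location, Rotation);
    }

private:
    UPROPERTY(VisibleAnywhere)
    USplineComponent* Rail = nullptr;

    UPROPERTY(VisibleAnywhere)
    UCineCameraComponent* Camera = nullptr;

    UPROPERTY(EditAnywhere)
    float Speed = 30.f;      // cm per second along the rail

    float Distance = 0.f;    // current distance along the spline, in cm
};
```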

I’m using Unreal Engine as the main platform to assemble these environments. For texture, material and geometry creation I’m employing a variety of tools, including Agisoft Metashape (photogrammetry), Substance Designer, Substance Painter, Quixel Mixer, xNormal, Blender and Instant Meshes. The organic geometries are generated by mapping and manipulating displacement maps taken from the mixed materials and applying them as vertex displacement to simple, highly tessellated geometric shapes. Initially the rendering inside Unreal Engine used rasterized reflections combined with baked lightmaps (GPU Lightmass), but I later switched to fully raytraced dynamic shadows, global illumination, reflections and subsurface scattering.
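Reduced to its essence, the vertex-displacement step is just a push of each vertex along its normal by an amount read from the (manipulated) displacement map, which is what turns a simple primitive into an organic, irregular form. A minimal sketch of that operation on raw mesh data follows; in practice I do this with Unreal’s material and mesh tools, so the function below (DisplaceAlongNormals) and its height-sampling callback are purely illustrative.

```cpp
#include "CoreMinimal.h"

// Displace a tessellated mesh along its vertex normals, driven by a
// displacement map sampled at each vertex's UV coordinate.
// SampleHeight is expected to return a value in [0, 1].
void DisplaceAlongNormals(
    TArray<FVector>& Positions,
    const TArray<FVector>& Normals,
    const TArray<FVector2D>& UVs,
    TFunctionRef<float(const FVector2D&)> SampleHeight,
    float MidLevel = 0.5f,     // height value that leaves a vertex in place
    float Amplitude = 10.f)    // maximum displacement, in cm
{
    check(Positions.Num() == Normals.Num() && Positions.Num() == UVs.Num());

    for (int32 i = 0; i < Positions.Num(); ++i)
    {
        // Heights above MidLevel push the vertex outward; heights below pull it inward.
        const float Offset = (SampleHeight(UVs[i]) - MidLevel) * Amplitude;
        Positions[i] += Normals[i] * Offset;
    }
}
```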

The virtual camera setup consists of a handheld rig frame with an HTC Vive controller and a mobile device attached. The Vive controller is used to track the position of the rig in space and the mobile device is used as a viewfinder (via a remote desktop connection to the computer running Unreal Engine and the Virtual Camera plugin).
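The Virtual Camera plugin handles the wiring between the tracked controller, the engine and the mobile viewfinder; purely as an illustration of the underlying idea (the cine camera follows the tracked rig), a hand-rolled equivalent might look like the sketch below, assuming Unreal’s UMotionControllerComponent and UCineCameraComponent. The class name AHandheldCameraRig is hypothetical.

```cpp
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "MotionControllerComponent.h"
#include "CineCameraComponent.h"
#include "HandheldCameraRig.generated.h"

UCLASS()
class AHandheldCameraRig : public AActor
{
    GENERATED_BODY()

public:
    AHandheldCameraRig()
    {
        // The motion controller component follows the Vive controller's
        // tracked pose; the cine camera is rigidly attached to it, so the
        // virtual camera inherits the handheld movement of the physical rig.
        Tracker = CreateDefaultSubobject<UMotionControllerComponent>(TEXT("Tracker"));
        Tracker->MotionSource = FName(TEXT("Right")); // whichever hand holds the rig
        RootComponent = Tracker;

        Camera = CreateDefaultSubobject<UCineCameraComponent>(TEXT("Camera"));
        Camera->SetupAttachment(Tracker);
    }

private:
    UPROPERTY(VisibleAnywhere)
    UMotionControllerComponent* Tracker = nullptr;

    UPROPERTY(VisibleAnywhere)
    UCineCameraComponent* Camera = nullptr;
};
```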

For the sound synthesis, I’m using my own integration of Csound (through its C++ API) that runs inside Unreal Engine as a plugin. Csound produces multichannel audio that is then played through the virtual sources (speakers). The binaural spatialization of these sources is done using Steam Audio, with raytraced reverb and occlusion based on the room’s geometry. In Unreal Engine I’m using a recent addition to its audio API (UE 4.25) that allows sound patching between components, using pairs of FPatchInput and FPatchOutput C++ objects to connect the Csound USynthComponent generating multichannel audio with USynthComponent receivers in separate actors. Note: the videos recorded in this environment and included on this page don’t have any audio, since the audio aspects of this work are still in development, but I’ve included a demonstration video of the technique described in this paragraph.
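For illustration, that patching reduces to one FPatchOutput/FPatchInput pair per channel: the Csound side pushes each rendered block and the receiver pops it inside its OnGenerateAudio callback. The sketch below assumes UE 4.25’s Audio::FPatchInput / Audio::FPatchOutput from DSP/MultithreadedPatching.h; the struct and function names are hypothetical, and exact constructor signatures may vary between engine versions.

```cpp
#include "CoreMinimal.h"
#include "DSP/MultithreadedPatching.h"

// Shared between the Csound generator component and a receiver component.
// One pair per Csound output channel; each receiver plays back a single channel.
struct FCsoundChannelPatch
{
    Audio::FPatchOutputStrongPtr Output;   // read end (receiver side)
    Audio::FPatchInput           Input;    // write end (Csound side)

    explicit FCsoundChannelPatch(int32 MaxBufferedSamples)
        : Output(MakeShared<Audio::FPatchOutput, ESPMode::ThreadSafe>(MaxBufferedSamples))
        , Input(Output)
    {}
};

// Csound side: after rendering a block, push one channel into the patch so a
// receiver USynthComponent on another actor can play it.
void PushCsoundChannel(FCsoundChannelPatch& Patch, const float* ChannelSamples, int32 NumSamples)
{
    Patch.Input.PushAudio(ChannelSamples, NumSamples);
}

// Receiver side, called from the receiver's OnGenerateAudio: pop whatever the
// Csound component has produced; any missing samples are left silent.
int32 PopCsoundChannel(FCsoundChannelPatch& Patch, float* OutAudio, int32 NumSamples)
{
    FMemory::Memzero(OutAudio, sizeof(float) * NumSamples);
    Patch.Output->PopAudio(OutAudio, NumSamples, /*bUseLatestAudio=*/false);
    return NumSamples;
}
```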