VRSG's Core Image Generator Features
MetaVR Virtual Reality Scene Generator™ (VRSG™) supports the typical features required of image generators used in flight, ground-vehicle, and infantry training simulators, and in many other applications. Image generators are typically driven by a user's simulation host model, such as a flight model; VRSG renders the virtual world as specified by host parameters such as location and field of view.
VRSG's core image generator features include:
VRSG supports full-featured light points, developed with input from subject matter experts such as commercial and military pilots. All light point processing, including directional light points with unique per-FOV edge-attenuation behavior, runs entirely in vertex shader programs downloaded to the graphics chipset, providing exceptional performance. Light point features include:
VRSG provides realistic light lobes with per-pixel radial attenuation and per-vertex axial attenuation. Light lobes are flexible enough to model landing lights, taxi lights, headlights, and searchlights. They do not require multiple database render passes or hardware that stores alpha information in the frame buffer; instead, they are rendered in a single pass, so enabling a light lobe incurs minimal fill-rate or geometry-processing overhead. Multiple concurrent, independent, steerable light lobes can be configured on video cards that support Pixel Shader Model 5.0.
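The radial and axial attenuation described above can be sketched as a simple cone falloff. The function and constants below are illustrative assumptions only, not VRSG's actual shader code:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Illustrative light-lobe attenuation: radial falloff from the beam axis
// (computed per pixel in a real shader) multiplied by axial falloff with
// distance (computed per vertex). All names and constants are hypothetical.
float lobeIntensity(float angleFromAxisRad, float halfAngleRad,
                    float distance, float maxRange) {
    // Radial term: 1.0 on the beam axis, 0.0 at the cone edge.
    float radial = std::max(0.0f, 1.0f - angleFromAxisRad / halfAngleRad);
    // Axial term: linear falloff to zero at maximum range.
    float axial = std::max(0.0f, 1.0f - distance / maxRange);
    return radial * axial;
}
```

Because both terms are simple products evaluated in a single pass, enabling such a lobe adds little per-pixel cost, which is consistent with the single-pass rendering described above.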
VRSG supports a highly optimized dynamic lighting pipeline that blends per-vertex color with per-polygon material properties, combined with ambient lighting conditions and directional light sources, for efficient and convincing dynamic lighting effects.
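The blend described above can be illustrated with a standard Lambert-style per-vertex combination. This is a generic sketch under assumed names, not VRSG's actual pipeline code:

```cpp
#include <algorithm>
#include <cassert>

struct Color { float r, g, b; };

// Generic dynamic-lighting sketch: per-vertex color modulated by the
// polygon's material color, then scaled by an ambient term plus a
// directional (Lambertian) contribution. nDotL is the cosine of the angle
// between the surface normal and the light direction.
Color shadeVertex(Color vertexColor, Color material,
                  float ambient, float nDotL) {
    float light = ambient + std::max(0.0f, nDotL); // clamp back-facing light
    auto mul = [light](float v, float m) { return v * m * light; };
    return { mul(vertexColor.r, material.r),
             mul(vertexColor.g, material.g),
             mul(vertexColor.b, material.b) };
}
```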
VRSG supports environment and weather effects such as:
VRSG uses an ephemeris model to calculate sun position, moon position, star position, and moon phase from date, time, and geographic location. Lighting conditions can also be automatically calculated from date, time, and geographic location.
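As a rough illustration of what an ephemeris model computes, the sketch below estimates solar elevation from day of year, local solar time, and latitude using the textbook declination approximation. It is far simpler than a production ephemeris model, which also yields azimuth, moon position, and moon phase:

```cpp
#include <cassert>
#include <cmath>

// Approximate solar elevation angle in degrees, from day of year (1-365),
// local solar time in hours, and latitude in degrees. Uses the common
// declination approximation; a production ephemeris is far more precise.
double solarElevationDeg(int dayOfYear, double solarTimeHours,
                         double latitudeDeg) {
    const double kPi = 3.14159265358979323846;
    const double rad = kPi / 180.0;
    // Solar declination peaks near +/-23.44 degrees at the solstices.
    double decl = -23.44 * std::cos(rad * 360.0 / 365.0 * (dayOfYear + 10));
    // Hour angle: the sun moves 15 degrees per hour from solar noon.
    double hourAngle = 15.0 * (solarTimeHours - 12.0);
    double sinElev =
        std::sin(rad * latitudeDeg) * std::sin(rad * decl) +
        std::cos(rad * latitudeDeg) * std::cos(rad * decl) *
        std::cos(rad * hourAngle);
    return std::asin(sinElev) / rad;
}
```

Lighting conditions can then be derived from the computed elevation, for example by dimming ambient light as the sun drops toward and below the horizon.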
Using CIGI or DIS Set Data PDUs, users can instantiate multiple volumetric clouds from our library of more than 13 cloud models. A cloud can be positioned, oriented, scaled, and moved over time. Volumetric clouds are particle masses that model light absorption, creating a realistic reduction in visibility when flown through. VRSG features a real-time dynamic lighting model for clouds that models light absorption as a function of particle depth into the cloud along the line of sight to the sun. Clouds can cast shadows on the terrain.
Clouds can have an optional precipitation effect modeling either rainfall or snow. The precipitation effect is also a volumetric mass, extending from the cloud base to ground level, creating a realistic reduction in visibility during flight. Because the rain or snow effect is generated dynamically, it can be applied to any cloud instance at any cloud altitude.
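The absorption-versus-depth behavior described above is essentially Beer-Lambert attenuation. The sketch below is a generic illustration with a made-up extinction coefficient, not VRSG's cloud lighting code:

```cpp
#include <cassert>
#include <cmath>

// Beer-Lambert-style transmittance: the fraction of sunlight surviving a
// path of depthMeters through a cloud with the given extinction
// coefficient (per meter). Names and units here are illustrative.
double cloudTransmittance(double depthMeters, double extinctionPerMeter) {
    return std::exp(-extinctionPerMeter * depthMeters);
}
```

Evaluating this along the line of sight to the sun darkens the deep interior of a cloud relative to its sunlit edges, which is the qualitative effect the dynamic cloud lighting model produces.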
VRSG supports multiple mechanisms for adding 2D overlays to the 3D display. VRSG includes built-in overlays for many popular UAS systems and targeting pods. For CIGI integrators, VRSG supports a large portion of the CIGI 3.3 symbology opcodes. More advanced integrations can develop overlays using VRSG's Plugin API and the DirectX Graphics API.
MetaVR provides a plug-in mechanism for users who want to generate overlay graphics, such as sensor overlays, using a low-level graphics API. The end user develops a dynamic-link library (DLL), which the visual system loads at runtime. The visual system calls functions exported by the DLL, passing the thread of execution to the user-written code, which can then use Direct3D to render customized overlays.
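A plug-in built this way is, in outline, a DLL exporting C-linkage entry points that the visual system calls at load time and each frame. The function names and signatures below are hypothetical placeholders; the real ones are defined by MetaVR's Plugin API headers:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical plug-in entry points; the actual names and signatures come
// from MetaVR's Plugin API. The device pointer stands in for the Direct3D
// device the visual system would hand to the plug-in.
extern "C" {

// Called once when the visual system loads the DLL.
bool PluginInit(void* d3dDevice) {
    return d3dDevice != nullptr; // reject a null device
}

// Called every frame after the 3D scene is rendered, so the plug-in can
// draw 2D overlay graphics (e.g. a sensor reticle) with Direct3D.
void PluginRenderOverlay(void* d3dDevice, uint32_t width, uint32_t height) {
    (void)d3dDevice; (void)width; (void)height;
    // Issue Direct3D draw calls here for the custom overlay.
}

// Called once before the DLL is unloaded.
void PluginShutdown() {}

}
```

Using `extern "C"` keeps the exported names unmangled so the visual system can locate them with a plain symbol lookup when it loads the DLL.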
VRSG supports an unlimited number of viewports per channel. Multiple viewports on a single visual channel may overlap or be spatially disjoint. Viewports can be horizontally mirrored to support applications that demand it (such as a rear-view mirror) or display systems whose optics impose a horizontal reversal of the image.
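Horizontal mirroring amounts to reflecting the image about the viewport's vertical center line. A minimal sketch of that coordinate mapping (illustrative only, not VRSG code, which performs the reversal in its rendering pipeline):

```cpp
#include <cassert>

// Map an x pixel coordinate to its horizontally mirrored position within
// a viewport of the given width (valid coordinates are 0 .. width-1).
int mirrorX(int x, int viewportWidth) {
    return viewportWidth - 1 - x;
}
```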
VRSG supports the basic mission function requirements of simulators ranging from ground-based vehicles to fast-moving fixed-wing aircraft. Each VRSG channel supports one laser-range response per frame at 60 Hz, with single-frame latency, and one Above Ground Level (AGL) response per frame at 60 Hz. A library that can be integrated into the simulation host provides features such as point-to-point intervisibility, terrain height lookup, and collision detection with terrain or dynamic model geometry.
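Terrain height lookup and AGL queries of the kind described can be sketched against a simple regular-grid heightmap. The bilinear lookup below is a generic illustration under assumed units and names, not MetaVR's library API:

```cpp
#include <cassert>
#include <vector>

// Generic terrain-height sketch: bilinearly interpolate a row-major grid
// of elevation posts spaced postSpacing meters apart. Names and units
// are illustrative assumptions.
double terrainHeight(const std::vector<double>& posts, int cols,
                     double x, double y, double postSpacing) {
    int ix = static_cast<int>(x / postSpacing);
    int iy = static_cast<int>(y / postSpacing);
    double fx = x / postSpacing - ix;
    double fy = y / postSpacing - iy;
    double h00 = posts[iy * cols + ix];
    double h10 = posts[iy * cols + ix + 1];
    double h01 = posts[(iy + 1) * cols + ix];
    double h11 = posts[(iy + 1) * cols + ix + 1];
    double north = h00 + (h10 - h00) * fx;
    double south = h01 + (h11 - h01) * fx;
    return north + (south - north) * fy;
}

// AGL is simply altitude above the interpolated terrain surface.
double aboveGroundLevel(double altitudeMsl, const std::vector<double>& posts,
                        int cols, double x, double y, double postSpacing) {
    return altitudeMsl - terrainHeight(posts, cols, x, y, postSpacing);
}
```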
Mission rehearsal applications require the ability to see long distances (that is, to a far horizon) and to process large amounts of geospecific imagery draped over terrain elevation data. MetaVR's visual database format, the round-earth Metadesic format, meets these and other mission rehearsal requirements.
MetaVR VRSG real-time scene of an A-10C entity flying over the geospecific 2 cm per-pixel resolution synthetic 3D terrain of the Prospect Square area of the Yuma Proving Ground (YPG).
VRSG provides the ability to create a wireframe threat dome, in a bubble (spherical) or cylindrical shape, that represents the detection and lethal ranges of a surface-to-air missile (SAM) or similar threat system. You specify the radius of the dome in meters and the color of the dome in terms of the red, green, and blue components of the intended color.
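A cylindrical threat dome of the kind described can be visualized as rings of points at the configured radius. The sketch below generates one such ring and carries the radius and RGB parameters; all structure and function names are illustrative assumptions, not VRSG's configuration format:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct DomePoint { double x, y; };

// Illustrative threat-dome parameters: radius in meters and color as
// red/green/blue components, matching the configuration described above.
struct ThreatDome {
    double radiusMeters;
    float red, green, blue;
};

// Generate one horizontal wireframe ring of the dome as evenly spaced
// points on a circle of the dome's radius, centered on the threat system.
std::vector<DomePoint> domeRing(const ThreatDome& dome, int segments) {
    const double kPi = 3.14159265358979323846;
    std::vector<DomePoint> ring;
    for (int i = 0; i < segments; ++i) {
        double a = 2.0 * kPi * i / segments;
        ring.push_back({dome.radiusMeters * std::cos(a),
                        dome.radiusMeters * std::sin(a)});
    }
    return ring;
}
```

Stacking such rings at increasing altitudes (for a cylinder) or shrinking their radius with height (for a bubble) yields the wireframe shapes described above.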