
Operational Capability Gap:
•Today’s sensors and systems do not easily allow operators to continuously scan their surroundings for threats while focusing on the mission and retaining the ability to act first. They also do not allow easy access to mission-critical information while maintaining continuous situational awareness, which can reduce survivability.
•With the myriad of C4I displays/devices, operator focus is diluted and cognitive capacity is overloaded by multiple screens and information sources, decreasing situational awareness (SA) and effectiveness. A capability is needed that compiles information from these disparate sources and reduces cognitive workload, using AI/ML algorithms to integrate and present the key information the operator needs during each phase of a mission. Information must be displayed using intuitive symbology, be filterable to operator preferences, and remain within the operator’s field of view so the operator can keep sight of the objective/target.
•Operators currently must learn to operate several devices/systems to maximize their situational awareness. This includes mission planning (SOMPE) and execution systems (Tech-Intel… ATAK, navigation, radios, ODA intercoms, weapons/sights, NVGs, thermal imagers, SUAS/video feeds).
–Being task saturated, operators cannot become proficient with all the capabilities provided to them, affecting readiness. To share the knowledge workload, SME responsibilities are divided among team members, which is not optimal.
–A capability is needed that minimizes individual training on each device by optimizing outputs into actionable information the operator can consider and analyze to decide the best COA based on all system outputs. The new capability should reduce the training burden and shorten the tactical OODA loop so the operator can act first and rapidly prosecute the target. It should allow all team members to maximize use of every tech-intel capability available to them, maximize situational awareness, and increase mission effectiveness, providing a technology overmatch against our adversaries.
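As a loose illustration of the aggregation and filtering behavior called for above, the sketch below merges reports from several feeds and keeps only what matters for the current mission phase. The feed names, report kinds, phase filter, and priority weights are all hypothetical placeholders, not drawn from the source:

```python
from dataclasses import dataclass

@dataclass
class Report:
    source: str    # hypothetical feed name, e.g. "ATAK", "SUAS", "nav"
    kind: str      # hypothetical category: "threat", "waypoint", "status"
    priority: int  # 0 = most urgent

# Hypothetical per-phase filter: e.g. during "assault", suppress routine status.
PHASE_FILTER = {
    "infil":   {"threat", "waypoint", "status"},
    "assault": {"threat", "waypoint"},
}

def fuse(reports, phase):
    """Drop reports irrelevant to the current mission phase, then order
    the remainder so the display presents the most urgent items first."""
    kept = [r for r in reports if r.kind in PHASE_FILTER[phase]]
    return sorted(kept, key=lambda r: r.priority)

feeds = [
    Report("nav",  "waypoint", 2),
    Report("SUAS", "threat",   0),
    Report("ATAK", "status",   3),
]
for r in fuse(feeds, "assault"):
    print(r.source, r.kind)
```

A real implementation would of course draw priorities from doctrine and operator preference settings rather than fixed constants; the point is only that phase-aware filtering plus ranking is a small, testable core for reducing cognitive load.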

Operational Application/CONOP:
•Helmet-mounted, high-resolution see-through display that works with NVGs and synergistically integrates inputs from multiple sources (video, navigation, symbology, weapons sight, ATAK, etc.), presenting mixed reality symbology within the operator’s FOV to elevate SA, day or night.
•A Mixed Reality Augmentation System (MRAS) would be used across a variety of SOF operational directives, including airborne, surface mobility, and ground. This type of system would increase operators’ ability to conduct ISR at the team, squad, or platoon level in most or all operational environments.
•An Android-based MRAS enables simple integration with any platform/weapon system and allows quick target acquisition and target handover to kill chain/team members.
•A high-resolution binocular display from any input (video, navigational, symbology, weapons sight, ATAK, UAS, etc.) with embedded mixed reality symbology significantly elevates operator situational awareness (SA) and reconnaissance capabilities.
–Increases SA on the move (walking, driving, sailing, or during HALO/HAHO jumps), reduces cognitive load during HALO/HAHO, and enables easy navigation to the LZ by creating virtual tracks to follow, decreasing operator risk.
–MRAS will transform digital information display and sharing across team networks, increasing the lethality, mission effectiveness, and survivability of team members in austere conditions, day and night.
–Additionally allows use of Remote Weapons Stations and Unmanned Systems, leveraging their streaming sensor video feeds for greater reach without the operator losing SA of the immediate surroundings.
–The system’s variable-transmission visor supports day/night display operation in conjunction with existing NVGs.

Deficiency of Existing Technology: Current Augmented Reality (AR) systems suffer from two major drawbacks: they are too big, heavy, and power-hungry, AND they lack sufficient resolution/accuracy to support the AR/MR challenge.
•Current AR systems are not designed to work in conjunction with existing NVGs/night vision capabilities.
•Existing UMS either use VR goggles that block the operator’s view of the operating environment or provide poor UMS video resolution, reducing the operator’s SA.
•Other efforts to close this technology gap and meet the operational requirement are both immature and too expensive to offer a satisfactory near- or mid-term solution.
•AI/ML tools are only beginning to emerge, so no specific solution is available with a common interface to ingest and integrate multiple data streams from multiple sources.
•Traditionally, ISR/SA capabilities are developed and built as individual systems. Standards for common interfaces and data formats that support agnostic plug-and-play management of raw data from multiple systems have yet to be established.
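One existing candidate for such a common data format is the Cursor-on-Target (CoT) XML schema that ATAK already consumes. The sketch below builds a minimal CoT-style position event using only the Python standard library; attribute names follow the public CoT schema, but the uid, type code, coordinates, and error values are illustrative placeholders:

```python
import datetime
import xml.etree.ElementTree as ET

def cot_event(uid, lat, lon, cot_type="a-f-G-U-C", stale_s=60):
    """Build a minimal Cursor-on-Target-style event element.
    Attribute names follow the public CoT schema; values are illustrative."""
    now = datetime.datetime.now(datetime.timezone.utc)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    ev = ET.Element("event", {
        "version": "2.0",
        "uid": uid,
        "type": cot_type,            # CoT type code (here: friendly ground unit)
        "time": now.strftime(fmt),
        "start": now.strftime(fmt),
        "stale": (now + datetime.timedelta(seconds=stale_s)).strftime(fmt),
        "how": "m-g",                # machine-generated GPS-derived fix
    })
    ET.SubElement(ev, "point", {
        "lat": f"{lat:.6f}", "lon": f"{lon:.6f}",
        "hae": "0.0", "ce": "10.0", "le": "10.0",  # height / circular / linear error
    })
    return ET.tostring(ev, encoding="unicode")

print(cot_event("MRAS-001", 34.123400, -116.567800))
```

A shared, schema-validated message like this is the kind of interface that would let raw outputs from disparate sensors be ingested in a plug-and-play fashion rather than through per-system integrations.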
Logistics and Interoperability:
•The system must interface and be compatible with existing Tactical Networks, ISR capabilities, Navigation Devices, information systems such as ATAK, and tactical Comms.
•It should be single-person portable/operable with minimal support equipment, and ruggedized/highly reliable to withstand the majority of USSOCOM operating environments.
•If a system is damaged, the vendor will provide a hot swap within 4 weeks of receiving the damaged system.
•System and operator maintenance training should not exceed training requirements for similar systems.
