Spatial cognitive processing is a critical domain of human cognition concerned with how humans perceive their spatial environments and the relationships among the objects within them. Humans encode spatial relationships through egocentric processing, in which objects are located relative to one's own body, and through allocentric processing, in which objects are located relative to other objects. Using these encodings, whether egocentric or allocentric, humans strive to build a complete and accurate perception of a space. Other sources of information, such as the proprioceptive, somatosensory, and vestibular systems, further refine spatial perception. Accurate and complete spatial perception is critical not only to living safely and working efficiently in a space but also to navigating within and across spaces. In other words, spatial perception underpins both the mundane and the critical tasks of everyday life.
Spatial cues emanating from landmark objects in an environment are one such source of information: they help the human brain comprehend distance and size, judgments that underlie the major and minor decisions humans make in their routine lives. On Earth, objects such as roads, buildings, trees, street poles, cars, and people act as visual landmarks, offering cues that allow us to determine sizes and distances accurately and to make informed decisions. For instance, judging the speed of an oncoming vehicle, deciding when to brake while riding a bike, and working in synchrony with a piece of equipment all rely heavily on our ability to perceive relative distances and sizes. The availability of these visual landmarks, therefore, can influence how accurately and completely humans perceive distance and size.
With the advent of emerging technologies, workplaces are evolving rapidly and increasingly involve conditions that may deprive humans of these essential visuospatial cues. Polar regions and hot deserts on Earth, for instance, offer few or no visual landmarks, which may impair spatial perception. Lunar conditions and the environments of other planets such as Mars present landscapes that are unfamiliar to humans and devoid of familiar landmark objects, making the perception of distance and size challenging. Indeed, astronauts on a spacewalk lack not only visual landmarks but even a terrain that might offer some spatial cues. If spatial perception is distorted by the absence of visuospatial cues, humans may be unable to judge distance, size, and speed accurately, which may jeopardize their health, safety, and work productivity.
The main goal of this paper is to examine how human spatial perception, specifically size perception, is affected by different levels of availability of spatial cues. To this end, three spatial environments are simulated in Virtual Reality (VR) using the Unity 3D game engine: a city with all routine visual landmarks, a Martian terrain with an unfamiliar landscape and no familiar visual landmarks, and deep space with neither a base plane nor any visual landmarks. VR offers a powerful medium for simulating real-world conditions effectively and realistically, particularly those that cannot be experienced first-hand. The city, Mars, and space conditions are designated as the control group (CG), experiment group 1 (EX1), and experiment group 2 (EX2), respectively. Each condition embeds a size perception test in which participants manipulate the length, width, and height of a cube-like object to make it a perfect cube. One …
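To make the task concrete, the following is a minimal sketch of how such a size perception test could be scripted in Unity. It is an illustration only, not the study's actual implementation: the class name CubeAdjustTask, the keyboard bindings, and the scaleSpeed and tolerance parameters are all assumptions.

```csharp
using UnityEngine;

// Hypothetical sketch: the participant adjusts one dimension of a
// cube-like object at a time; the trial is complete when all three
// extents match within a tolerance, i.e. the object is a perfect cube.
public class CubeAdjustTask : MonoBehaviour
{
    public float scaleSpeed = 0.25f;   // assumed: scale change per second of key press
    public float tolerance  = 0.005f;  // assumed: max side deviation to count as a cube

    int axis = 0; // 0 = x (length), 1 = y (height), 2 = z (width)

    void Update()
    {
        // Tab cycles through which dimension is being adjusted.
        if (Input.GetKeyDown(KeyCode.Tab)) axis = (axis + 1) % 3;

        // Up/Down arrows grow or shrink the selected dimension.
        Vector3 s = transform.localScale;
        if (Input.GetKey(KeyCode.UpArrow))   s[axis] += scaleSpeed * Time.deltaTime;
        if (Input.GetKey(KeyCode.DownArrow)) s[axis] -= scaleSpeed * Time.deltaTime;
        transform.localScale = s;
    }

    // Error metric: largest deviation of any side from the mean side length.
    // Logging this value per trial would quantify size perception accuracy.
    public float CubeError()
    {
        Vector3 s = transform.localScale;
        float mean = (s.x + s.y + s.z) / 3f;
        return Mathf.Max(Mathf.Abs(s.x - mean),
                         Mathf.Abs(s.y - mean),
                         Mathf.Abs(s.z - mean));
    }

    public bool IsCube() => CubeError() <= tolerance;
}
```

In the actual VR setup, keyboard input would presumably be replaced by VR controller input, but the underlying error metric, how far the adjusted object deviates from a true cube, generalizes across input devices and across the CG, EX1, and EX2 conditions.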