Autonomous robots should be able to carry out localization and map building in highly heterogeneous environments. In this work, global-appearance descriptors are tested to perform the localization task. The approach relies on an omnidirectional vision sensor as the only source of information and on global appearance to describe the visual information. Global-appearance techniques consist of obtaining a single vector that describes the image as a whole. The main objective of this work is to propose and test new alternatives for building and handling such global descriptors. In previous experiments, the images were processed without considering the spatial distribution of the information. In contrast, the main hypothesis of this work is that the most relevant information lies in the central rows of the panoramic image. For this reason, the information in the central rows is given a higher weight than that in other zones of the image. The results show that this assumption is worth taking into account. The experiments are carried out with real images captured in two different heterogeneous environments where humans and robots work together simultaneously. Consequently, changes in the lighting conditions, people occluding the scene, and changes in the furniture may appear.
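As an illustration of the central-row weighting idea, the following sketch builds a simple global-appearance descriptor from a grayscale panoramic image. The Gaussian weighting profile, the `sigma_ratio` parameter, and the Fourier-magnitude descriptor are hypothetical choices made for this example; the paper does not prescribe a specific profile or descriptor here.

```python
import numpy as np

def row_weights(n_rows, sigma_ratio=0.25):
    """Gaussian profile centred on the middle row.

    Hypothetical choice: a Gaussian is one simple way to give the
    central rows of a panoramic image a higher weight than the rest.
    """
    rows = np.arange(n_rows)
    centre = (n_rows - 1) / 2.0
    sigma = sigma_ratio * n_rows
    return np.exp(-0.5 * ((rows - centre) / sigma) ** 2)

def weighted_global_descriptor(panoramic, n_components=32):
    """Global-appearance descriptor with central-row weighting.

    The descriptor (magnitude of the low-frequency DFT of the weighted
    column sums) is an illustrative stand-in, not the specific
    descriptor used in the paper.
    """
    img = panoramic.astype(np.float64)
    w = row_weights(img.shape[0])
    weighted = img * w[:, np.newaxis]       # emphasise central rows
    column_profile = weighted.sum(axis=0)   # one value per image column
    spectrum = np.abs(np.fft.rfft(column_profile))
    return spectrum[:n_components]

# Usage: localize a test image against a map of stored descriptors.
rng = np.random.default_rng(0)
map_images = [rng.random((64, 256)) for _ in range(5)]  # placeholder images
map_desc = np.stack([weighted_global_descriptor(im) for im in map_images])
test_desc = weighted_global_descriptor(map_images[2])
nearest = int(np.argmin(np.linalg.norm(map_desc - test_desc, axis=1)))
print("Best-matching map location:", nearest)  # -> 2
```

One convenient side effect of using the Fourier magnitude of a column profile: a rotation of the robot corresponds to a circular shift of the panoramic image's columns, which only changes the phase of the DFT, so the descriptor is rotation-invariant.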