Multimodal learning and inference from visual and remotely sensed data

D Rao, M De Deuge, … - The International Journal of Robotics Research, 2017 - journals.sagepub.com
Autonomous vehicles are often tasked to explore unseen environments, aiming to acquire and understand large amounts of visual image data and other sensory information. In such scenarios, remote sensing data may be available a priori, and can help to build a semantic model of the environment and plan future autonomous missions. In this paper, we introduce two multimodal learning algorithms to model the relationship between visual images taken by an autonomous underwater vehicle during a survey and remotely sensed acoustic bathymetry (ocean depth) data that is available prior to the survey. We present a multi-layer architecture to capture the joint distribution between the bathymetry and visual modalities. We then propose an extension based on gated feature learning models, which allows the model to cluster the input data in an unsupervised fashion and predict visual image features using just the ocean depth information. Our experiments demonstrate that multimodal learning improves semantic classification accuracy regardless of which modalities are available at classification time, allows for unsupervised clustering of either or both modalities, and can facilitate mission planning by enabling class-based or image-based queries.
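The abstract describes learning a shared representation over bathymetry and visual image features so that visual features can be predicted from depth alone. The sketch below is only an illustration of that general idea, not the authors' architecture or gated model: the two-branch encoder, layer sizes, fusion rule, and training loop are all assumptions made for demonstration.

```python
# Illustrative sketch (not the paper's implementation): a two-branch multimodal
# model that maps bathymetry patches and visual image features into a shared
# code, and reconstructs visual features from that code. When only bathymetry
# is observed, the model still produces a visual-feature prediction.
import torch
import torch.nn as nn

class MultimodalJointModel(nn.Module):
    def __init__(self, bathy_dim=64, visual_dim=256, shared_dim=128):
        super().__init__()
        # Modality-specific encoders (dimensions are placeholder assumptions).
        self.bathy_enc = nn.Sequential(nn.Linear(bathy_dim, 128), nn.ReLU(),
                                       nn.Linear(128, shared_dim))
        self.visual_enc = nn.Sequential(nn.Linear(visual_dim, 256), nn.ReLU(),
                                        nn.Linear(256, shared_dim))
        # Decoder from the shared code back to visual features, enabling
        # prediction when only bathymetry is available at query time.
        self.visual_dec = nn.Sequential(nn.Linear(shared_dim, 256), nn.ReLU(),
                                        nn.Linear(256, visual_dim))

    def forward(self, bathy, visual=None):
        # Shared code from bathymetry; average with the visual code when given.
        code = self.bathy_enc(bathy)
        if visual is not None:
            code = 0.5 * (code + self.visual_enc(visual))
        return self.visual_dec(code)

# Training sketch: minimise reconstruction error of the visual features.
model = MultimodalJointModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bathy = torch.randn(32, 64)      # placeholder bathymetry patches
visual = torch.randn(32, 256)    # placeholder visual image features
for _ in range(10):
    pred = model(bathy, visual)
    loss = nn.functional.mse_loss(pred, visual)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At mission-planning time, predict visual features from bathymetry alone.
predicted_visual = model(bathy)
```

The averaged fusion above stands in for the joint distribution and gated feature learning described in the abstract; those components would replace the simple linear branches in a faithful reproduction.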