Coverage path planning methods focusing on energy efficient and cooperative strategies for unmanned aerial vehicles

G Fevgas, T Lagkas, V Argyriou, P Sarigiannidis - Sensors, 2022 - mdpi.com
The coverage path planning (CPP) algorithms aim to cover the total area of interest with
minimum overlapping. The goal of the CPP algorithms is to minimize the total covering path …

Survey on coverage path planning with unmanned aerial vehicles

TM Cabreira, LB Brisolara, PR Ferreira Jr - Drones, 2019 - mdpi.com
Coverage path planning consists of finding the route which covers every point of a certain
area of interest. In recent times, Unmanned Aerial Vehicles (UAVs) have been employed in …

Multidrone aerial surveys of penguin colonies in Antarctica

K Shah, G Ballard, A Schmidt, M Schwager - Science Robotics, 2020 - science.org
Speed is essential in wildlife surveys due to the dynamic movement of animals throughout
their environment and potentially extreme changes in weather. In this work, we present a …

A multi-objective coverage path planning algorithm for UAVs to cover spatially distributed regions in urban environments

A Majeed, SO Hwang - Aerospace, 2021 - mdpi.com
This paper presents a multi-objective coverage flight path planning algorithm that finds
minimum length, collision-free, and flyable paths for unmanned aerial vehicles (UAVs) in …

Energy-Efficient Multi-UAV Multi-Region Coverage Path Planning Approach

G Ahmed, T Sheltami, A Mahmoud - Arabian Journal for Science and …, 2024 - Springer
Due to the high deployment flexibility and strong maneuverability, unmanned aerial vehicles
(UAVs) have gained significant attention in civilian and military applications. One of the …

A Tabu list strategy based DQN for AAV mobility in indoor single-path environment: implementation and performance evaluation

N Saito, T Oda, A Hirata, Y Nagai, M Hirota… - Internet of Things, 2021 - Elsevier
The Deep Q-Network (DQN) is one of the key methods in deep reinforcement
learning; it has a deep neural network structure used to estimate Q-values in …

Multi-underwater gliders coverage path planning based on ant colony optimization

H Ji, H Hu, X Peng - Electronics, 2022 - mdpi.com
Underwater gliders (UGs) are widely applied to regional exploration to find potential targets.
However, the complex marine environment and special movement patterns make it difficult …

Multi-UAV collaboration to survey Tibetan antelopes in Hoh Xil

R Huang, H Zhou, T Liu, H Sheng - Drones, 2022 - mdpi.com
Reducing the total mission time is essential in wildlife surveys owing to the dynamic
movement of animals throughout their migrating environment and potentially extreme …

Human–Machine Network Through Bio-Inspired Decentralized Swarm Intelligence and Heterogeneous Teaming in SAR Operations

ME Longa, A Tsourdos, G Inalhan - Journal of Intelligent & Robotic …, 2022 - Springer
Disaster management has always been a struggle due to unpredictable changing conditions
and chaotic occurrences that require real-time adaptation. Highly optimized missions and …

A LiDAR based mobile area decision method for TLS-DQN: improving control for AAV mobility

N Saito, T Oda, A Hirata, C Yukawa, E Kulla… - Advances on P2P …, 2022 - Springer
The Deep Q-Network (DQN) is one of the deep reinforcement learning algorithms;
it uses a deep neural network structure to estimate the Q-value in Q-learning. In the …