Motion planning in uncertain and dynamic environments is an essential capability for autonomous robots. Partially observable Markov decision processes (POMDPs) provide a …
For reinforcement learning in environments in which an agent has access to a reliable state signal, methods based on the Markov decision process (MDP) have had many successes. In …
We propose a theoretical framework for approximate planning and learning in partially observed systems. Our framework is based on the fundamental notion of information state …
H Kurniawati, Y Du, D Hsu… - The International Journal …, 2011 - journals.sagepub.com
Motion planning with imperfect state information is a crucial capability for autonomous robots to operate reliably in uncertain and dynamic environments. Partially observable Markov decision processes (POMDPs) provide a principled mathematical framework for motion planning of autonomous robots in uncertain and dynamic environments. They have been successfully applied to …
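The snippets above repeatedly invoke the POMDP framework; its core mechanism is the Bayesian belief update, which can be sketched in plain Python. The "tiger" domain used here is a standard textbook illustration, not taken from any of the cited papers, and all numbers are assumptions.

```python
# Hedged sketch: Bayes filter (belief update) for a tiny 2-state POMDP.
# b'(s') is proportional to O[a][s'][o] * sum_s T[a][s][s'] * b[s].

def belief_update(b, a, o, T, O):
    """Return the posterior belief after taking action a and observing o."""
    nb = []
    for s2 in range(len(b)):
        # Predict: probability of landing in s2 under action a.
        pred = sum(T[a][s][s2] * b[s] for s in range(len(b)))
        # Correct: weight by the observation likelihood.
        nb.append(O[a][s2][o] * pred)
    z = sum(nb)
    return [p / z for p in nb] if z > 0 else b

# Tiger domain: states {tiger-left, tiger-right}, action "listen" (index 0),
# observations {hear-left, hear-right}; listening is 85% accurate.
T = {0: [[1.0, 0.0], [0.0, 1.0]]}      # listening does not move the tiger
O = {0: [[0.85, 0.15], [0.15, 0.85]]}  # P(o | s', a=listen)

b = [0.5, 0.5]                    # uniform prior
b = belief_update(b, 0, 0, T, O)  # hear the tiger on the left
print(b)  # → [0.85, 0.15]
```

The belief vector is the sufficient statistic that lets POMDP planners treat a partially observed problem as an MDP over belief space.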
B Bonet, H Geffner - IJCAI, 2009 - www-i6.informatik.rwth-aachen.de
Point-based algorithms and RTDP-Bel are approximate methods for solving POMDPs that replace the full updates of parallel value iteration by faster and more effective updates at …
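The snippet above contrasts full value-iteration sweeps with updates at selected belief points. A minimal sketch of the point-based (alpha-vector) Bellman backup at a single belief, in generic Python with a hypothetical tiny domain (this is the textbook backup, not the exact algorithm of the cited paper):

```python
def backup(b, Gamma, S, A, Obs, T, O, R, gamma=0.95):
    """Point-based Bellman backup at belief b over alpha-vector set Gamma."""
    best = None
    for a in A:
        # For each observation, pick the alpha-vector in Gamma that is best
        # in the belief reached after taking a and seeing o.
        alpha_ao = {}
        for o in Obs:
            def val(alpha, o=o):
                return sum(b[s] * sum(T[a][s][s2] * O[a][s2][o] * alpha[s2]
                                      for s2 in S) for s in S)
            alpha_ao[o] = max(Gamma, key=val)
        # Assemble the backed-up vector for action a.
        alpha_a = [R[a][s] + gamma * sum(T[a][s][s2] * O[a][s2][o] *
                                         alpha_ao[o][s2]
                                         for o in Obs for s2 in S)
                   for s in S]
        v = sum(b[s] * alpha_a[s] for s in S)
        if best is None or v > best[0]:
            best = (v, alpha_a)
    return best[1]

# Trivial sanity check: one action, one observation, an all-zero initial
# alpha-vector; the backed-up vector reduces to the immediate reward.
S, A, Obs = [0, 1], [0], [0]
T = {0: [[1.0, 0.0], [0.0, 1.0]]}
O = {0: [[1.0], [1.0]]}
R = {0: [1.0, 2.0]}
alpha = backup([0.5, 0.5], [[0.0, 0.0]], S, A, Obs, T, O, R, gamma=0.9)
print(alpha)  # → [1.0, 2.0]
```

Performing this backup only at sampled belief points, rather than over the entire belief simplex, is what makes point-based methods tractable relative to full value iteration.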
H Bai, D Hsu, WS Lee, VA Ngo - … of Robotics IX: Selected Contributions of …, 2011 - Springer
Partially observable Markov decision processes (POMDPs) have been successfully applied to various robot motion planning tasks under uncertainty. However, most existing POMDP …
In a mixed environment of autonomous driverless vehicles and human-driven vehicles operating on the same road, identifying intentions of human drivers and interacting with …
Robotic exploration tasks involve inherent uncertainty. They typically include navigating through unknown terrain, searching for features that may or may not be present, and …