Supporting state-dependent action costs in planning admits a more compact representation of many tasks. We generalize the additive heuristic and compute it by embedding decision …
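The classical additive heuristic mentioned above can be illustrated with a minimal sketch. This computes the standard h_add for constant action costs via a fixed-point pass over a toy STRIPS-style domain; the actions and fact names are invented for illustration, and the state-dependent-cost generalization described in the snippet is not modeled here.

```python
# Hedged sketch: the classical additive heuristic h_add with constant action
# costs, computed as a simple fixed point. Toy STRIPS-style domain; the paper
# above generalizes h_add to state-dependent costs, which this does not model.

# Each action: (name, preconditions, add effects, cost)
ACTIONS = [
    ("pick", frozenset({"at_obj"}), frozenset({"holding"}), 1),
    ("move", frozenset(), frozenset({"at_obj"}), 1),
    ("drop", frozenset({"holding"}), frozenset({"delivered"}), 1),
]

def h_add(state, goal):
    INF = float("inf")
    facts = {f for _, pre, add, _ in ACTIONS for f in pre | add} | goal | state
    cost = {f: (0 if f in state else INF) for f in facts}
    changed = True
    while changed:
        changed = False
        for _, pre, add, c in ACTIONS:
            pre_cost = sum(cost[f] for f in pre)  # additive: sum, not max
            for f in add:
                if c + pre_cost < cost[f]:
                    cost[f] = c + pre_cost
                    changed = True
    return sum(cost[f] for f in goal)

print(h_add(frozenset(), {"delivered"}))  # move(1) + pick(1) + drop(1) = 3
```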
T Dodson, N Mattei, JT Guerin… - ACM Transactions on …, 2013 - dl.acm.org
A Markov Decision Process (MDP) policy prescribes, for each state, an action, ideally one that maximizes the expected utility accrued over time. In this article, we present a novel …
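Such a policy is conventionally obtained by dynamic programming; a minimal sketch using textbook value iteration follows. The two-state MDP (its states, actions, transitions, and rewards) is invented for illustration and is not the article's method.

```python
# Hedged sketch: classical value iteration on a toy two-state MDP, yielding a
# policy that maximizes expected discounted utility. All numbers are invented.

GAMMA = 0.9  # discount factor

# transitions[state][action] -> list of (probability, next_state, reward)
transitions = {
    "s0": {
        "stay": [(1.0, "s0", 0.0)],
        "go":   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)],
    },
    "s1": {
        "stay": [(1.0, "s1", 1.0)],
        "go":   [(1.0, "s0", 0.0)],
    },
}

def q_value(acts, a, V):
    # Expected immediate reward plus discounted value of the successor.
    return sum(p * (r + GAMMA * V[ns]) for p, ns, r in acts[a])

def value_iteration(eps=1e-6):
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, acts in transitions.items():
            best = max(q_value(acts, a, V) for a in acts)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    # Greedy policy: for each state, the action maximizing expected utility.
    policy = {s: max(acts, key=lambda a: q_value(acts, a, V))
              for s, acts in transitions.items()}
    return V, policy

V, policy = value_iteration()
print(policy)  # -> {'s0': 'go', 's1': 'stay'}
```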
V Strobel, A Kirsch - Knowledge Engineering Tools and Techniques for AI …, 2020 - Springer
The Planning Domain Definition Language (PDDL) is the state-of-the-art language for specifying planning problems in artificial intelligence research. Writing and maintaining …
Recent work has begun exploring the value of domain abstractions in Monte-Carlo Tree Search (MCTS) algorithms for probabilistic planning. These algorithms automatically …
Sample-based tree search (SBTS) is an approach to solving Markov decision problems based on constructing a lookahead search tree using random samples from a generative …
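The core idea of building a lookahead tree from samples of a generative model can be sketched in a few lines. The "risky"/"safe" toy dynamics, depth, and width below are invented; real SBTS variants (e.g. sparse sampling or UCT) differ in how the tree is grown and how samples are allocated.

```python
# Hedged sketch: a minimal sample-based lookahead over a generative model.
# Toy dynamics and parameters are invented for illustration only.
import random

def generative_model(state, action, rng):
    # Toy stochastic dynamics: "risky" sometimes pays off, "safe" is steady.
    if action == "risky":
        return (state + 2, 2.0) if rng.random() < 0.5 else (state, -0.5)
    return state + 1, 0.5

def sample_lookahead(state, depth, width, rng, gamma=0.95):
    """Estimate the value of `state` by sampling `width` successors per
    action, recursing to `depth`; return (value_estimate, best_action)."""
    if depth == 0:
        return 0.0, None
    best_val, best_act = float("-inf"), None
    for action in ("risky", "safe"):
        total = 0.0
        for _ in range(width):
            nxt, reward = generative_model(state, action, rng)
            future, _ = sample_lookahead(nxt, depth - 1, width, rng, gamma)
            total += reward + gamma * future
        val = total / width
        if val > best_val:
            best_val, best_act = val, action
    return best_val, best_act

rng = random.Random(0)
value, action = sample_lookahead(0, depth=3, width=8, rng=rng)
print(action, round(value, 2))
```

Note that the number of samples grows exponentially with depth in this naive form; practical algorithms reuse the tree and allocate samples adaptively.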
Replanning methods that determinize a stochastic planning problem and replan at each action step have long been known to provide strong baseline (and even competition …
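The determinize-and-replan scheme described in this snippet reduces to a short loop: determinize the stochastic problem, plan in the deterministic model, execute one action in the real stochastic world, and replan from the resulting state. The toy domain below (a goal counter with a failure probability) is invented for illustration.

```python
# Hedged sketch of a determinize-and-replan loop: plan in a determinized
# model (each action assumed to have its most likely outcome), execute one
# action stochastically, then replan. Toy domain; all numbers are invented.
import random

GOAL = 5

def plan_deterministic(state):
    # In the determinized model, "advance" always moves forward by 1,
    # so a plan is simply the remaining number of "advance" actions.
    return ["advance"] * max(0, GOAL - state)

def execute(state, action, rng):
    # Real stochastic dynamics: advancing fails 20% of the time.
    if action == "advance" and rng.random() < 0.8:
        return state + 1
    return state

def replan_agent(rng, max_steps=50):
    state, steps = 0, 0
    while state != GOAL and steps < max_steps:
        plan = plan_deterministic(state)      # replan from the current state
        state = execute(state, plan[0], rng)  # execute only the first action
        steps += 1
    return state, steps

rng = random.Random(1)
final_state, steps = replan_agent(rng)
print(final_state, steps)
```

The strength of the baseline comes from the replanning step: a failed action simply leaves the agent in a state from which a fresh deterministic plan is computed.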
Real-world robotics often operates in uncertain and dynamic environments, where generalisation across different scenarios is of practical interest. In the absence of a model …
Enabling intuitive, bidirectional communication with real-time feedback to convey intentions and goals is essential in human-robot collaboration (HRC). In this paper, we …
In real-world environments, the state is almost never completely known, and exploration is often expensive. Applying planning in these environments is consequently more difficult …