In this paper, we discuss the learning of generalised policies for probabilistic and classical planning problems using Action Schema Networks (ASNets). The ASNet is a neural network …
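The snippet cuts off before describing the architecture. As a rough illustration of the idea behind ASNets, namely alternating action and proposition layers with weights shared across all groundings of the same action schema, here is a minimal sketch; the hidden size, the use of max-pooling, and the toy relational structure are assumptions for illustration, not the authors' implementation.

```python
# Conceptual sketch of an ASNet-style layer structure (illustrative only):
# action modules read features of their related propositions, proposition
# modules max-pool over their related action modules, and one weight matrix
# is shared by every grounding of the same action schema.
import numpy as np

rng = np.random.default_rng(0)

# Toy grounding: two groundings of one schema "move", each related to 2 propositions.
ACTIONS = {"move-a-b": ["at-a", "at-b"], "move-b-c": ["at-b", "at-c"]}
PROPS = {"at-a": ["move-a-b"], "at-b": ["move-a-b", "move-b-c"], "at-c": ["move-b-c"]}
H = 4  # hidden size (assumed)

# One weight matrix per action schema, shared by all of its groundings.
W_schema = rng.standard_normal((H, 2 * H))  # each "move" grounding sees 2 propositions
b_schema = np.zeros(H)

def action_layer(prop_feats):
    """Compute a hidden vector for each ground action from its related propositions."""
    out = {}
    for act, props in ACTIONS.items():
        x = np.concatenate([prop_feats[p] for p in props])
        out[act] = np.maximum(0.0, W_schema @ x + b_schema)  # ReLU
    return out

def prop_layer(act_feats):
    """Pool the hidden vectors of related ground actions into each proposition."""
    return {prop: np.max(np.stack([act_feats[a] for a in acts]), axis=0)
            for prop, acts in PROPS.items()}

# Input proposition features (e.g. truth values and goal flags), then one
# action layer -> proposition layer pass; a full network stacks several such
# pairs and ends with a softmax over the applicable ground actions.
prop_feats = {p: rng.standard_normal(H) for p in PROPS}
act_feats = action_layer(prop_feats)
prop_feats = prop_layer(act_feats)
print({a: v.round(2) for a, v in act_feats.items()})
```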
Reinforcement learning in problems with symbolic state spaces is challenging due to the need for reasoning over long horizons. This paper presents a new approach that utilizes …
We consider the problem of generating optimal stochastic policies for Constrained Stochastic Shortest Path problems, which are a natural model for planning under uncertainty …
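For background, one standard way to obtain optimal stochastic policies for constrained SSPs is a linear program over occupancy measures. The formulation below is the textbook constrained-MDP/SSP dual LP, given only as context and not necessarily the method of the cited paper: $c_0$ is the primary cost, $c_j$ the secondary costs with bounds $u_j$, and $x(s,a)$ the expected number of times action $a$ is executed in state $s$.

```latex
\begin{align*}
\min_{x \ge 0} \quad & \sum_{s,a} x(s,a)\, c_0(s,a) \\
\text{s.t.} \quad & \sum_{a} x(s,a) - \sum_{s',a} P(s \mid s',a)\, x(s',a) = \mathbb{1}[s = s_0]
  && \forall s \in S \setminus G, \\
& \sum_{s,a} x(s,a)\, c_j(s,a) \le u_j && j = 1, \dots, m .
\end{align*}
```

A stochastic policy is then recovered as $\pi(a \mid s) = x(s,a) / \sum_{a'} x(s,a')$, which is also why constrained problems generally call for stochastic rather than deterministic policies.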
R Karia, RK Nayyar… - Advances in Neural …, 2022 - proceedings.neurips.cc
Several goal-oriented problems in the real world can be naturally expressed as Stochastic Shortest Path problems (SSPs). However, the computational complexity of solving SSPs …
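As background for this and the later SSP snippets: an SSP is usually defined by states $S$, actions $A$, transition probabilities $P(s' \mid s, a)$, strictly positive costs $c(s,a)$, an initial state $s_0$, and goal states $G$; an optimal policy minimises the expected cost of reaching a goal and satisfies the standard Bellman optimality equation (standard background, not text from the cited paper).

```latex
V^*(s) =
\begin{cases}
0, & s \in G, \\[2pt]
\min_{a \in A(s)} \Big[ c(s,a) + \sum_{s' \in S} P(s' \mid s,a)\, V^*(s') \Big], & \text{otherwise,}
\end{cases}
\qquad
\pi^*(s) \in \operatorname*{arg\,min}_{a \in A(s)} \Big[ c(s,a) + \sum_{s' \in S} P(s' \mid s,a)\, V^*(s') \Big].
```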
L Pineda, S Zilberstein - … of the International Conference on Automated …, 2014 - ojs.aaai.org
We introduce a family of MDP reduced models characterized by two parameters: the maximum number of primary outcomes per action that are fully accounted for and the …
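The snippet truncates before the second parameter. As a loose illustration of the "primary outcomes" idea (keep only the $k$ most likely outcomes of each action and renormalise, with single-outcome determinisation as the extreme case), here is a small sketch; the transition representation and the renormalisation rule are assumptions for illustration, not the reduced-model family of the cited paper, which additionally bounds how often the remaining exceptional outcomes are accounted for.

```python
# Illustrative sketch: reduce a probabilistic action to its k most likely
# ("primary") outcomes and renormalise. With k = 1 this collapses to the
# familiar most-likely-outcome determinisation. (Assumed representation; the
# cited family also tracks the remaining "exceptional" outcomes up to a bound.)
def reduce_action(outcomes, k):
    """outcomes: list of (next_state, probability); returns the reduced list."""
    primary = sorted(outcomes, key=lambda o: o[1], reverse=True)[:k]
    total = sum(p for _, p in primary)
    return [(s, p / total) for s, p in primary]

# Example: a "move" action that succeeds with 0.7, slips with 0.2, breaks with 0.1.
outcomes = [("at-target", 0.7), ("slipped", 0.2), ("broken", 0.1)]
print(reduce_action(outcomes, k=1))  # [('at-target', 1.0)]  -> determinisation
print(reduce_action(outcomes, k=2))  # two most likely outcomes, renormalised
```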
State-of-the-art methods for solving SSPs often work by limiting planning to restricted regions of the state space. The resulting problems can then be solved quickly, and the …
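To make the idea of planning over a restricted region concrete, here is a small hedged sketch of one common instantiation: build a sub-problem containing only the states within a fixed depth of the current state, and treat the fringe states as terminals whose cost would be estimated with a heuristic rather than planned for. The function names and data layout are assumptions for illustration, not the method of any particular cited paper.

```python
from collections import deque

def restricted_region(succ, s0, depth):
    """States reachable from s0 within `depth` applications of any action.

    succ(s) -> iterable of (action, [(next_state, prob), ...]).
    Returns (interior, fringe): states expanded within the horizon and the
    boundary states the sub-problem treats as terminals.
    """
    interior, fringe = set(), set()
    queue, seen = deque([(s0, 0)]), {s0}
    while queue:
        s, d = queue.popleft()
        if d == depth:
            fringe.add(s)
            continue
        interior.add(s)
        for _action, outcomes in succ(s):
            for s_next, _p in outcomes:
                if s_next not in seen:
                    seen.add(s_next)
                    queue.append((s_next, d + 1))
    return interior, fringe

# Toy chain: from state i, "step" moves to i+1 with p=0.8 or stays with p=0.2.
succ = lambda i: [("step", [(i + 1, 0.8), (i, 0.2)])]
print(restricted_region(succ, 0, depth=3))  # interior {0, 1, 2}, fringe {3}
```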
Stochastic Shortest Path Problems (SSPs) are a common representation for probabilistic planning problems. Two approaches can be used to solve SSPs: (i) consider all …
D Wang, J Lipor, G Dasarathy - 2019 IEEE Data Science …, 2019 - ieeexplore.ieee.org
We consider the problem of active learning in the context of spatial sampling, where the measurements are obtained by a mobile sampling unit. The goal is to localize the change …