G Konidaris - Current opinion in behavioral sciences, 2019 - Elsevier
A generally intelligent agent faces a dilemma: it requires a complex sensorimotor space to be capable of solving a wide range of problems, but many tasks are only feasible given the …

This open-source book represents our attempt to make deep learning approachable, teaching readers the concepts, the context, and the code. The entire book is drafted in …

We consider the problem of constructing abstract representations for planning in high-dimensional, continuous environments. We assume an agent equipped with a collection of …

We study the problem of representation learning in goal-conditioned hierarchical reinforcement learning. In such hierarchical structures, a higher-level controller solves tasks …

ME Taylor, P Stone - Journal of Machine Learning Research, 2009 - jmlr.org
The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in …

This paper addresses the problem of planning under uncertainty in large Markov Decision Processes (MDPs). Factored MDPs represent a complex state space using state variables …

C Grimm, A Barreto, S Singh… - Advances in neural …, 2020 - proceedings.neurips.cc
Learning models of the environment from data is often viewed as an essential component to building intelligent reinforcement learning (RL) agents. The common practice is to separate …

R Givan, T Dean, M Greig - Artificial intelligence, 2003 - Elsevier
Many stochastic planning problems can be represented using Markov Decision Processes (MDPs). A difficulty with using these MDP representations is that the common algorithms for …

D Abel, D Hershkowitz… - … Conference on Machine …, 2016 - proceedings.mlr.press
The combinatorial explosion that plagues planning and reinforcement learning (RL) algorithms can be moderated using state abstraction. Prohibitively large task …