Partially observable Markov decision processes and robotics

H Kurniawati - Annual Review of Control, Robotics, and …, 2022 - annualreviews.org
Planning under uncertainty is critical to robotics. The partially observable Markov decision
process (POMDP) is a mathematical framework for such planning problems. POMDPs are …
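
As context for several of the entries below: a POMDP is commonly specified by states, actions, observations, a transition model T, an observation model Z, and a reward function, and planners reason over a belief b, a probability distribution over states. A standard belief update after taking action a and receiving observation o, written here only as a reminder of the notation and not quoted from the review, is

    b'(s') \propto Z(o \mid s', a) \sum_{s} T(s' \mid s, a)\, b(s)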

Risk-averse Bayes-adaptive reinforcement learning

M Rigter, B Lacerda, N Hawes - Advances in Neural …, 2021 - proceedings.neurips.cc
In this work, we address risk-averse Bayes-adaptive reinforcement learning. We pose the
problem of optimising the conditional value at risk (CVaR) of the total return in Bayes …
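
For readers unfamiliar with the risk measure named in this snippet: CVaR at level alpha is the expected value of the worst alpha-fraction of outcomes. A minimal empirical estimator, sketched as a generic illustration rather than the paper's method (function name and defaults are assumptions), is

    import numpy as np

    def empirical_cvar(returns, alpha=0.1):
        # Mean of the worst alpha-fraction of sampled returns (lower tail,
        # the relevant tail when maximising return in a risk-averse way).
        returns = np.sort(np.asarray(returns, dtype=float))
        k = max(1, int(np.ceil(alpha * len(returns))))
        return returns[:k].mean()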

Online planning for constrained POMDPs with continuous spaces through dual ascent

A Jamgochian, A Corso, MJ Kochenderfer - Proceedings of the …, 2023 - ojs.aaai.org
Rather than augmenting rewards with penalties for undesired behavior, Constrained
Partially Observable Markov Decision Processes (CPOMDPs) plan safely by imposing …
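
Dual ascent for constrained planning typically maintains a Lagrange multiplier on each cost constraint and raises it whenever the estimated expected cost exceeds its budget. A generic single-constraint update of that form (a sketch with assumed names, not the authors' implementation) is

    def dual_ascent_step(lmbda, estimated_cost, cost_budget, step_size=0.1):
        # Projected subgradient ascent on the dual variable: the multiplier
        # grows while the constraint is violated and is clipped at zero.
        return max(0.0, lmbda + step_size * (estimated_cost - cost_budget))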

Voronoi progressive widening: efficient online solvers for continuous state, action, and observation POMDPs

MH Lim, CJ Tomlin, ZN Sunberg - 2021 60th IEEE conference …, 2021 - ieeexplore.ieee.org
This paper introduces Voronoi Progressive Widening (VPW), a generalization of Voronoi
optimistic optimization (VOO) and action progressive widening to partially observable …
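
Action progressive widening, which VPW generalizes, limits how many children a search node may have as a function of its visit count. The usual widening test, sketched here with assumed parameter names and default constants, looks like

    def should_widen(num_children, num_visits, k=4.0, alpha=0.5):
        # Add a new action (or observation) branch only while the number of
        # children grows sublinearly with the node's visit count.
        return num_children <= k * (num_visits ** alpha)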

Optimality guarantees for particle belief approximation of POMDPs

MH Lim, TJ Becker, MJ Kochenderfer, CJ Tomlin… - Journal of Artificial …, 2023 - jair.org
Partially observable Markov decision processes (POMDPs) provide a flexible representation
for real-world decision and control problems. However, POMDPs are notoriously difficult to …
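
The particle belief approximations analysed here represent the belief as weighted state samples. A generic particle-filter update of that kind (the generative transition model and observation likelihood are assumed callables; this is background, not the paper's algorithm) is

    import numpy as np

    def particle_belief_update(particles, weights, action, obs,
                               transition_sample, obs_likelihood):
        # Propagate each particle through the generative transition model,
        # then reweight by the observation likelihood and renormalise.
        new_particles = [transition_sample(s, action) for s in particles]
        new_weights = np.array([w * obs_likelihood(obs, sp, action)
                                for w, sp in zip(weights, new_particles)])
        total = new_weights.sum()
        if total <= 0.0:
            # Degenerate case (all likelihoods zero): fall back to uniform weights.
            new_weights = np.full(len(new_particles), 1.0 / len(new_particles))
        else:
            new_weights = new_weights / total
        return new_particles, new_weights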

Partially observable Markov decision processes (POMDPs) and robotics

H Kurniawati - arXiv preprint arXiv:2107.07599, 2021 - arxiv.org
Planning under uncertainty is critical to robotics. The Partially Observable Markov Decision
Process (POMDP) is a mathematical framework for such planning problems. It is powerful …

Risk-aware meta-level decision making for exploration under uncertainty

J Ott, SK Kim, A Bouman, O Peltzer… - … on Control, Decision …, 2024 - ieeexplore.ieee.org
Autonomous exploration of unknown environments is fundamentally a problem of decision
making under uncertainty where the agent must account for uncertainty in sensor …

Constrained hierarchical Monte Carlo belief-state planning

A Jamgochian, H Buurmeijer, KH Wray… - … on Robotics and …, 2024 - ieeexplore.ieee.org
Optimal plans in Constrained Partially Observable Markov Decision Processes (CPOMDPs)
maximize reward objectives while satisfying hard cost constraints, generalizing safe …

Multilevel Monte-Carlo for solving POMDPs online

M Hoerger, H Kurniawati, A Elfes - The International Symposium of …, 2019 - Springer
Planning under partial observability is essential for autonomous robots. A principled way to
address such planning problems is the Partially Observable Markov Decision Process …
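
Multilevel Monte Carlo rests on a telescoping identity: an expensive fine-level estimate is rewritten as a cheap coarse estimate plus correction terms, each estimated with its own samples. In standard MLMC notation (background only, not quoted from the paper),

    \mathbb{E}[P_L] = \mathbb{E}[P_0] + \sum_{\ell=1}^{L} \mathbb{E}[P_\ell - P_{\ell-1}]

with most samples spent on the cheap low levels and comparatively few on the expensive correction terms.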

Towards sequential sensor placements on a wind farm to maximize lifetime energy and profit

A Yildiz, J Mern, MJ Kochenderfer, MF Howland - Renewable Energy, 2023 - Elsevier
The optimal design of a wind farm which maximizes energy production depends on the
spatially variable wind flow field. However, due to the complexity associated with modeling …