Autonomous driving system: A comprehensive survey

J Zhao, W Zhao, B Deng, Z Wang, F Zhang… - Expert Systems with Applications, 2024 - Elsevier
Automation is increasingly at the forefront of transportation research, with the potential to
bring fully autonomous vehicles to our roads in the coming years. This comprehensive …

Decision-making under uncertainty: beyond probabilities: Challenges and perspectives

T Badings, TD Simão, M Suilen, N Jansen - International Journal on Software Tools for Technology Transfer, 2023 - Springer
This position paper reflects on the state-of-the-art in decision-making under uncertainty. A
classical assumption is that probabilities can sufficiently capture all uncertainty in a system …
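
To make the contrast concrete, the sketch below (a toy example, not taken from the paper; all numbers are invented) compares a point-estimate transition probability with an interval estimate and reports the resulting best- and worst-case expected rewards.

```python
# Illustrative sketch (not from the paper): a single action whose success
# probability is uncertain. A point estimate commits to one number; an
# interval captures epistemic uncertainty and yields a range of values.

def expected_reward(p_success: float, r_success: float = 10.0, r_fail: float = -1.0) -> float:
    """One-step expected reward for a given success probability."""
    return p_success * r_success + (1.0 - p_success) * r_fail

# Classical assumption: the probability is known exactly.
nominal = expected_reward(0.8)

# Beyond probabilities: only the interval [0.6, 0.9] is known. Since the
# expectation is linear in p, the extremes occur at the endpoints.
worst = min(expected_reward(p) for p in (0.6, 0.9))
best = max(expected_reward(p) for p in (0.6, 0.9))

print(f"nominal: {nominal:.1f}, robust range: [{worst:.1f}, {best:.1f}]")
```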

Learning in observable POMDPs, without computationally intractable oracles

N Golowich, A Moitra, D Rohatgi - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
Much of reinforcement learning theory is built on top of oracles that are computationally hard
to implement. Specifically for learning near-optimal policies in Partially Observable Markov …

Robust anytime learning of Markov decision processes

M Suilen, TD Simão, D Parker… - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
Markov decision processes (MDPs) are formal models commonly used in sequential
decision-making. MDPs capture the stochasticity that may arise, for instance, from imprecise …
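
As a minimal illustration of the model class (a toy MDP of my own, not from the paper), the sketch below encodes transition distributions as a table and runs value iteration to a fixed point.

```python
# Minimal MDP sketch: a map from (state, action) to a distribution over
# (probability, next_state, reward) triples, solved by value iteration.

P = {
    ("s0", "a"): [(0.9, "s1", 1.0), (0.1, "s0", 0.0)],
    ("s0", "b"): [(1.0, "s0", 0.1)],
    ("s1", "a"): [(1.0, "s1", 0.0)],
    ("s1", "b"): [(0.5, "s0", 2.0), (0.5, "s1", 0.0)],
}
states, actions, gamma = ["s0", "s1"], ["a", "b"], 0.95

V = {s: 0.0 for s in states}
for _ in range(1000):  # iterate the Bellman optimality operator to a fixed point
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in P[(s, a)])
            for a in actions
        )
        for s in states
    }

print({s: round(v, 2) for s, v in V.items()})
```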

Safe reinforcement learning via shielding under partial observability

S Carr, N Jansen, S Junges, U Topcu - Proceedings of the AAAI Conference on Artificial Intelligence, 2023 - ojs.aaai.org
Safe exploration is a common problem in reinforcement learning (RL) that aims to prevent
agents from making disastrous decisions while exploring their environment. A family of …
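
The core idea of shielding can be sketched in a few lines: a shield sits between the learning agent and the environment and overrides any proposed action known to be unsafe. The example below is a deliberately simplified, fully observable sketch with hypothetical state and action names; the paper's contribution is constructing such shields under partial observability.

```python
import random

# Hypothetical set of unsafe (state, action) pairs the shield must block.
UNSAFE = {("near_cliff", "forward")}

def shield(state: str, proposed: str, safe_fallback: str = "stay") -> str:
    """Pass the proposed action through unless it is flagged unsafe."""
    return safe_fallback if (state, proposed) in UNSAFE else proposed

def exploring_agent(state: str) -> str:
    """A naive exploring agent that may propose disastrous actions."""
    return random.choice(["forward", "back", "stay"])

state = "near_cliff"
for _ in range(5):
    action = shield(state, exploring_agent(state))
    assert (state, action) not in UNSAFE  # the shield enforces safety
    print(action)
```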

Safe policy improvement for POMDPs via finite-state controllers

TD Simão, M Suilen, N Jansen - Proceedings of the AAAI Conference on Artificial Intelligence, 2023 - ojs.aaai.org
We study safe policy improvement (SPI) for partially observable Markov decision processes
(POMDPs). SPI is an offline reinforcement learning (RL) problem that assumes access to (1) …

Inductive synthesis of finite-state controllers for POMDPs

R Andriushchenko, M Češka… - Uncertainty in Artificial Intelligence, 2022 - proceedings.mlr.press
We present a novel learning framework to obtain finite-state controllers (FSCs) for partially
observable Markov decision processes and illustrate its applicability for indefinite-horizon …
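
A finite-state controller is a small automaton whose nodes emit actions and whose transitions are driven by observations. The sketch below (a hypothetical two-node controller in a tiger-style toy domain, not one synthesized by the paper's framework) shows how executing an FSC reduces to a table lookup per step.

```python
# Toy FSC: each memory node emits an action; observations drive
# deterministic transitions between nodes.

ACTION = {"n0": "listen", "n1": "open_left"}  # node -> emitted action
NEXT = {
    ("n0", "hear_left"): "n1",   # enough evidence: commit next step
    ("n0", "hear_right"): "n0",  # keep listening
    ("n1", "hear_left"): "n0",
    ("n1", "hear_right"): "n0",
}

def run_fsc(observations: list[str], start: str = "n0") -> list[str]:
    """Execute the FSC on an observation sequence, returning chosen actions."""
    node, chosen = start, []
    for obs in observations:
        chosen.append(ACTION[node])
        node = NEXT[(node, obs)]
    return chosen

print(run_fsc(["hear_right", "hear_left", "hear_right"]))
```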

Search and explore: symbiotic policy synthesis in POMDPs

R Andriushchenko, A Bork, M Češka, S Junges… - International Conference on Computer Aided Verification, 2023 - Springer
This paper marries two state-of-the-art controller synthesis methods for partially observable
Markov decision processes (POMDPs), a prominent model in sequential decision making …

Parameter Synthesis for Markov Models: Covering the Parameter Space

S Junges, E Ábrahám, C Hensel, N Jansen… - arXiv preprint, 2019 - arxiv.org
Markov chain analysis is a key technique in formal verification. A practical obstacle is that all
probabilities in Markov models need to be known. However, system quantities such as …
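
The sketch below illustrates the setting (a toy parametric chain of my own, not the paper's algorithm; it assumes sympy is available): with an unknown transition probability p, the reachability probability becomes a rational function of p that can be instantiated or inverted over the parameter space.

```python
import sympy as sp

# Toy parametric Markov chain: from s0, reach the target with probability
# p, fail with (1-p)/2, and stay in s0 with (1-p)/2. The reachability
# probability x then satisfies the linear equation x = p + (1-p)/2 * x.

p = sp.symbols("p", positive=True)
x = sp.symbols("x")

reach = sp.simplify(sp.solve(sp.Eq(x, p + (1 - p) / 2 * x), x)[0])

print(reach)                                    # rational function: 2*p/(p + 1)
print(reach.subs(p, sp.Rational(1, 2)))         # instantiate at p = 1/2 -> 2/3
print(sp.solve(reach - sp.Rational(9, 10), p))  # which p achieves Pr = 0.9?
```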

Task-aware verifiable RNN-based policies for partially observable Markov decision processes

S Carr, N Jansen, U Topcu - Journal of Artificial Intelligence Research, 2021 - jair.org
Partially observable Markov decision processes (POMDPs) are models for sequential
decision-making under uncertainty and incomplete information. Machine learning methods …
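
As a minimal sketch of what an RNN-based policy looks like (random weights standing in for trained parameters; this is not the paper's task-aware training or verification procedure), the recurrent hidden state below summarizes the observation history before an action is chosen, so decisions can depend on more than the latest observation.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, HID_DIM, NUM_ACTIONS = 3, 8, 2
W_in = rng.normal(size=(HID_DIM, OBS_DIM))
W_rec = rng.normal(size=(HID_DIM, HID_DIM)) * 0.1
W_out = rng.normal(size=(NUM_ACTIONS, HID_DIM))

def act(history: np.ndarray) -> int:
    """Roll the RNN over an observation history and pick the argmax action."""
    h = np.zeros(HID_DIM)
    for obs in history:
        h = np.tanh(W_in @ obs + W_rec @ h)  # recurrent state update
    return int(np.argmax(W_out @ h))

observations = rng.normal(size=(5, OBS_DIM))  # a length-5 observation history
print(act(observations))
```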