Deliberation for autonomous robots: A survey

F Ingrand, M Ghallab - Artificial Intelligence, 2017 - Elsevier
Autonomous robots facing a diversity of open environments and performing a variety of tasks
and interactions need explicit deliberation in order to fulfill their missions. Deliberation is …

Verification of Markov decision processes using learning algorithms

T Brázdil, K Chatterjee, M Chmelik, V Forejt… - … for Verification and …, 2014 - Springer
We present a general framework for applying machine-learning algorithms to the verification
of Markov decision processes (MDPs). The primary goal of these techniques is to improve …
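
As background for this entry, the quantity such learning-based verification techniques typically estimate is the maximum probability of reaching a target set T; a standard Bellman characterization (offered here as general background, not necessarily the exact objective treated in the paper) is:

$V(s) = 1$ for $s \in T$, and $V(s) = \max_{a \in A(s)} \sum_{s'} P(s' \mid s, a)\, V(s')$ otherwise, where the least fixed point of this equation yields $\Pr^{\max}_{s}(\Diamond T)$, the maximum probability of eventually reaching $T$ from state $s$.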

Goal probability analysis in probabilistic planning: Exploring and enhancing the state of the art

M Steinmetz, J Hoffmann, O Buffet - Journal of Artificial Intelligence …, 2016 - jair.org
Unavoidable dead-ends are common in many probabilistic planning problems, e.g. when
actions may fail or when operating under resource constraints. An important objective in …
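
The goal probability criterion named in the title can be sketched as follows (a standard MaxProb formulation, given as background rather than as the paper's exact definition):

$\max_{\pi} \ \Pr^{\pi}_{s_0}(\Diamond G)$, i.e. maximize, over policies $\pi$, the probability of eventually reaching a goal state in $G$ from the initial state $s_0$; in the presence of unavoidable dead-ends this optimum is strictly below 1.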

Simulated penetration testing: From "Dijkstra" to "Turing Test++"

J Hoffmann - Proceedings of the international conference on …, 2015 - ojs.aaai.org
Penetration testing (pentesting) is a well established method for identifying security
weaknesses, by conducting friendly attacks. Simulated pentesting automates this process …

Learning stochastic shortest path with linear function approximation

Y Min, J He, T Wang, Q Gu - International Conference on …, 2022 - proceedings.mlr.press
We study the stochastic shortest path (SSP) problem in reinforcement learning with linear
function approximation, where the transition kernel is represented as a linear mixture of …
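
In the linear mixture model referred to in the snippet, the unknown transition kernel is a linear combination of known basis kernels; a minimal sketch of that assumption (standard in this literature, with symbol names chosen for illustration) is:

$P(s' \mid s, a) = \langle \phi(s' \mid s, a), \theta^{*} \rangle = \sum_{i=1}^{d} \phi_i(s' \mid s, a)\, \theta^{*}_i$, where $\phi$ is a known feature map and $\theta^{*} \in \mathbb{R}^{d}$ is the unknown parameter to be learned.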

Stochastic shortest path: Minimax, parameter-free and towards horizon-free regret

J Tarbouriech, R Zhou, SS Du… - Advances in neural …, 2021 - proceedings.neurips.cc
We study the problem of learning in the stochastic shortest path (SSP) setting, where an
agent seeks to minimize the expected cost accumulated before reaching a goal state. We …
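
The SSP objective described in the snippet admits a standard Bellman characterization; the following is a generic sketch of the setting, not the paper's specific regret bound:

$V^{*}(g) = 0$ for the goal state $g$, and $V^{*}(s) = \min_{a} \big[\, c(s, a) + \sum_{s'} P(s' \mid s, a)\, V^{*}(s') \,\big]$ for $s \neq g$; the regret of a learning agent over $K$ episodes started from $s_0$ is then $\sum_{k=1}^{K} \big( C_k - V^{*}(s_0) \big)$, where $C_k$ is the cost accumulated in episode $k$.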

Inductive synthesis of finite-state controllers for POMDPs

R Andriushchenko, M Češka… - Uncertainty in …, 2022 - proceedings.mlr.press
We present a novel learning framework to obtain finite-state controllers (FSCs) for partially
observable Markov decision processes and illustrate its applicability for indefinite-horizon …
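
A finite-state controller in this setting is commonly formalized as a small automaton over observations; one common definition (given as background, with symbols chosen for illustration) is a tuple $(N, n_0, \gamma, \delta)$ with a finite set of memory nodes $N$, an initial node $n_0 \in N$, an action-selection function $\gamma : N \times Z \to A$, and a memory-update function $\delta : N \times Z \to N$, where $Z$ and $A$ are the observation and action sets of the POMDP.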

The 2019 Comparison of Tools for the Analysis of Quantitative Formal Models: (QComp 2019 Competition Report)

EM Hahn, A Hartmanns, C Hensel, M Klauck… - … Conference on Tools …, 2019 - Springer
Quantitative formal models capture probabilistic behaviour, real-time aspects, or general
continuous dynamics. A number of tools support their automatic analysis with respect to …

On correctness, precision, and performance in quantitative verification: QComp 2020 competition report

CE Budde, A Hartmanns, M Klauck, J Křetínský… - … applications of formal …, 2020 - Springer
Quantitative verification tools compute probabilities, expected rewards, or steady-state
values for formal models of stochastic and timed systems. Exact results often cannot be …

Optimal cost almost-sure reachability in POMDPs

K Chatterjee, M Chmelik, R Gupta, A Kanodia - Artificial Intelligence, 2016 - Elsevier
We consider partially observable Markov decision processes (POMDPs) with a set of target
states and an integer cost associated with every transition. The optimization objective we …
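
The optimization objective indicated by the title can be sketched as a constrained minimization (a standard reading of "optimal cost almost-sure reachability", not a quotation of the paper's definition): minimize $\mathbb{E}^{\sigma}\big[\sum_{t} c(s_t, a_t)\big]$ over observation-based strategies $\sigma$ subject to $\Pr^{\sigma}(\Diamond T) = 1$, i.e. among all strategies that reach the target set $T$ almost surely, find one of minimal expected total cost.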