Markov decision processes (MDPs), also called stochastic dynamic programming, were first studied in the 1960s. MDPs can be used to model and solve dynamic decision-making …
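The snippet above describes MDPs as stochastic dynamic programming for dynamic decision-making. A minimal value-iteration sketch illustrates the core idea; the 2-state, 2-action model, transition matrices, rewards, and discount factor below are all hypothetical, chosen only to make the Bellman backup concrete.

```python
# Minimal value-iteration sketch for a finite MDP.
# All numbers (P, R, gamma) are illustrative, not from any cited paper.
import numpy as np

# P[a][s, s'] = transition probability under action a; R[a][s] = expected reward.
P = {
    0: np.array([[0.9, 0.1], [0.2, 0.8]]),
    1: np.array([[0.5, 0.5], [0.6, 0.4]]),
}
R = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 2.0])}
gamma = 0.9  # discount factor

V = np.zeros(2)
for _ in range(1000):
    # Bellman optimality backup: V(s) = max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
    Q = np.array([R[a] + gamma * P[a] @ V for a in (0, 1)])
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=0)  # greedy policy w.r.t. the converged value function
```

Because the discount factor is strictly below 1, the backup is a contraction and the loop converges to the unique optimal value function, from which the greedy policy is read off.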
L Kallenberg - Lecture Notes. University of Leiden, 2011 - researchgate.net
Branching out from operations research roots of the 1950s, Markov decision processes (MDPs) have gained recognition in such diverse fields as economics, telecommunication …
CL Tomasevicz, S Asgarpoor - 2006 38th North American …, 2006 - ieeexplore.ieee.org
A method is presented to solve for the optimum maintenance policy of repairable power equipment. The approach uses a continuous-time semi-Markov process (SMP) to first find …
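The entry above applies a continuous-time semi-Markov process (SMP) to repairable power equipment. One standard first step with an SMP is to compute time-stationary state probabilities from the embedded jump chain weighted by mean holding times; the sketch below does this for a hypothetical 3-state working/degraded/failed model (all transition probabilities and holding times are invented for illustration, not taken from the cited paper).

```python
# Hedged sketch: steady-state probabilities of a small semi-Markov process
# via its embedded jump chain. The 3-state maintenance model (working ->
# degraded -> failed -> repaired) and all numbers are illustrative only.
import numpy as np

# Embedded jump-chain transition matrix (rows sum to 1).
P = np.array([
    [0.0, 1.0, 0.0],   # working  -> degraded
    [0.7, 0.0, 0.3],   # degraded -> maintained back to working, or fails
    [1.0, 0.0, 0.0],   # failed   -> repaired to working
])
tau = np.array([100.0, 40.0, 8.0])  # mean holding time in each state (hours)

# Stationary distribution of the embedded chain: pi = pi P.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

# Time-stationary probabilities weight jump frequencies by holding times:
# p_i proportional to pi_i * tau_i.
p = pi * tau
p = p / p.sum()
```

The long-run fraction of time in each state (here dominated by the working state, since its holding time is longest) is the quantity a maintenance-policy optimization would then trade off against repair costs.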
Z Xu, A Tewari - Advances in Neural Information Processing …, 2020 - proceedings.neurips.cc
We study reinforcement learning in non-episodic factored Markov decision processes (FMDPs). We propose two near-optimal and oracle-efficient algorithms for FMDPs …
This paper is a survey of recent results on continuous-time Markov decision processes (MDPs) with unbounded transition rates, and reward rates that may be unbounded from …
X Guo, X Song - The Annals of Applied Probability, 2011 - JSTOR
This paper is devoted to studying constrained continuous-time Markov decision processes (MDPs) in the class of randomized policies depending on state histories. The transition rates …