Suppose z ∈ Z is the state at time zero of a dynamical system defined on a Banach space Z, and the state at time t is z(t). If we assume that the dynamics which govern the evolution …
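For orientation only (a sketch, not taken from the cited paper): if one assumes, for illustration, that the dynamics are linear and generated by a C0-semigroup S(t) on Z with a control operator B, the controlled state is the mild solution

  z(t) = S(t) z + \int_0^t S(t - s) B u(s) \, ds,

and controllability asks which states z(T) can be reached by a suitable choice of the control u.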
N Saldi, S Yüksel - Probability Surveys, 2022 - projecteuclid.org
In many areas of applied mathematics, decentralization of information is a ubiquitous attribute affecting how to approach a stochastic optimization, decision and estimation, or …
The purpose of this paper is to compare the results which have been recently obtained in optimal stochastic control. Various maximum principles are shown to derive from a general …
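A schematic statement of a stochastic maximum principle, given only for orientation and with sign conventions that differ across the literature: for dynamics dX_t = b(t, X_t, u_t) dt + \sigma(t, X_t) dW_t and a cost J(u) = E[\int_0^T f(t, X_t, u_t) dt + g(X_T)] to be minimized, one defines the Hamiltonian

  H(t, x, u, p, q) = \langle p, b(t, x, u) \rangle + \mathrm{tr}(q^\top \sigma(t, x)) + f(t, x, u),

lets the adjoint pair (p, q) solve the backward SDE

  dp_t = -\partial_x H(t, X_t^*, u_t^*, p_t, q_t) dt + q_t dW_t,  p_T = \partial_x g(X_T^*),

and the optimal control then minimizes the Hamiltonian along the optimal trajectory: H(t, X_t^*, u_t^*, p_t, q_t) = \min_{u \in U} H(t, X_t^*, u, p_t, q_t) for a.e. t, almost surely.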
NI Mahmudov - Journal of Mathematical Analysis and Applications, 2001 - Elsevier
The classical theory of controllability for deterministic systems is extended to linear stochastic systems defined on infinite-dimensional Hilbert spaces. Three types of stochastic …
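Only as a sketch of the objects involved (not quoted from the paper): for a linear stochastic system dx(t) = [A x(t) + B u(t)] dt + \Sigma(t) dW(t) on a Hilbert space X, with A generating a C0-semigroup S(t), the controllability operator

  \Gamma_0^T = \int_0^T S(T - s) B B^* S^*(T - s) \, ds

is central: exact (complete) controllability on [0, T] corresponds to \Gamma_0^T having a bounded inverse, while approximate controllability is commonly characterized by a resolvent-type condition such as \lambda (\lambda I + \Gamma_0^T)^{-1} \to 0 strongly as \lambda \to 0^+.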
The commonly adopted route to control a dynamic system and make it follow the desired behavior consists of two steps. First, a model of the system is learned from input–output data …
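A minimal sketch of this two-step route, assuming a linear time-invariant model, full state measurements, and hypothetical data arrays X (state snapshots) and U (applied inputs); the identification step is ordinary least squares and the control design is discrete-time LQR, one common instantiation rather than the specific method of the cited work.

import numpy as np
from scipy.linalg import solve_discrete_are

# Step 1: fit x[k+1] ~ A x[k] + B u[k] by least squares from recorded data.
def identify_linear_model(X, U):
    X_now, X_next = X[:, :-1], X[:, 1:]
    Z = np.vstack([X_now, U])                       # stacked regressors [x; u]
    theta, *_ = np.linalg.lstsq(Z.T, X_next.T, rcond=None)
    AB = theta.T                                    # n x (n + m) parameter matrix
    n = X.shape[0]
    return AB[:, :n], AB[:, n:]                     # estimates of A and B

# Step 2: design a controller for the identified model (here, discrete LQR).
def lqr_gain(A, B, Q, R):
    P = solve_discrete_are(A, B, Q, R)              # solve the Riccati equation
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback u = -K x

# Toy usage: data generated from a known two-state system.
rng = np.random.default_rng(0)
A_true = np.array([[1.0, 0.1], [0.0, 0.9]])
B_true = np.array([[0.0], [0.1]])
N = 200
X = np.zeros((2, N + 1))
U = rng.normal(size=(1, N))
for k in range(N):
    X[:, k + 1] = A_true @ X[:, k] + B_true @ U[:, k] + 0.01 * rng.normal(size=2)
A_hat, B_hat = identify_linear_model(X, U)
K = lqr_gain(A_hat, B_hat, Q=np.eye(2), R=np.eye(1))
print("identified A:\n", A_hat)
print("LQR gain K:", K)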
NI Mahmudov - IEEE Transactions on Automatic Control, 2001 - ieeexplore.ieee.org
We discuss several concepts of controllability for partially observable stochastic systems: complete controllability, approximate controllability, and stochastic controllability. We show …
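In rough, schematic terms (the precise definitions are those of the paper): writing R_T(x_0) for the set of terminal states x(T; x_0, u) over all admissible controls, complete controllability asks that R_T(x_0) be all of L_2(\Omega, F_T, X), approximate controllability asks only that R_T(x_0) be dense in that space, and stochastic controllability is, roughly, a probabilistic weakening in which prescribed targets need only be reached with probability arbitrarily close to one.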
B Larssen - Stochastics: An International Journal of Probability and …, 2002 - Taylor & Francis
We consider optimal control problems for systems described by stochastic differential equations with delay (SDDE). We prove a version of Bellman's principle of optimality (the …
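Schematically, and only as a sketch of the setting: an SDDE with a single discrete delay r > 0 reads

  dX(t) = b(X(t), X(t - r), u(t)) dt + \sigma(X(t), X(t - r), u(t)) dW(t),

so the natural state is the segment X_t = {X(t + s) : s \in [-r, 0]}, and a dynamic programming principle takes the form

  V(t, X_t) = \sup_u E[ \int_t^{t+h} f(X(s), X(s - r), u(s)) ds + V(t + h, X_{t+h}) | F_t ],

with sup replaced by inf for minimization problems.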
In this paper we will give a general formulation of the certainty equivalence principle for stochastic optimal control problems. Special attention is paid to the question: "What do we …
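The standard linear-quadratic-Gaussian (LQG) case illustrates the idea, though it need not coincide with the paper's general formulation: for linear dynamics, quadratic cost and Gaussian noise, the optimal control is u*(t) = -K(t) \hat{x}(t), where K(t) is the feedback gain of the corresponding deterministic (noise-free) LQR problem and \hat{x}(t) = E[x(t) | observations up to time t] is the state estimate; randomness enters only through the estimate, which is the sense in which one may act as if the estimate were certain.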
SK Mitter - IEEE Control Systems Magazine, 1996 - ieeexplore.ieee.org
We attempt to give a historical account of the main ideas leading to the development of nonlinear filtering and stochastic control as we know it today. We present a development of …