DoS attacks on remote state estimation with asymmetric information

K. Ding, X. Ren, D. E. Quevedo, S. Dey, … - IEEE Transactions on Control of Network Systems, 2018 - ieeexplore.ieee.org
In this paper, we consider remote state estimation in an adversarial environment. A sensor forwards local state estimates to a remote estimator over a vulnerable network, which may be congested by an intelligent denial-of-service attacker. The acknowledgment information sent from the remote estimator to the sensor is assumed to be hidden from the attacker, which leads to asymmetric information between the sensor and the attacker. Considering the infinite-horizon goals of the two agents and their asymmetric information structure, we model the conflict between the sensor and the attacker as a stochastic Bayesian game. Solutions for this game are investigated under two structures of public information history: the open-loop structure, in which players cannot observe their opponents' play, and the closed-loop one, in which players can observe the play causally. For the open-loop case, the original game problem is transformed into a static Bayesian game; we derive the unique mixed-strategy equilibrium of this game explicitly and analyze the advantage the extra information confers on the sensor. In the closed-loop case, the dynamic nature of the history structure introduces additional difficulties in solving the original problem. To derive stationary optimal power schemes for each agent, we therefore convert the original game into a continuous-state stochastic game and discuss the existence of optimal transmission/jamming power strategies. Furthermore, an algorithm based on multiagent reinforcement learning is proposed to find such strategies, and numerical examples are provided to illustrate the developed results.
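To give a flavor of the open-loop result, a mixed-strategy equilibrium of a small two-action zero-sum game can be computed in closed form from the players' indifference conditions. The payoff matrix below is purely hypothetical and not taken from the paper (whose game additionally involves Bayesian types and multiple power levels); it is only a minimal sketch of how such an equilibrium is obtained.

```python
# Hypothetical 2x2 zero-sum game between the sensor (row player, maximizes
# estimation quality) and the attacker (column player, minimizes it).
# Payoffs are illustrative only, not from the paper.

def mixed_equilibrium_2x2(A):
    """Mixed equilibrium of a 2x2 zero-sum game without a saddle point.

    A[i][j] is the row player's payoff; the row player maximizes and the
    column player minimizes. Returns (p, q, v): the row player plays
    action 0 with probability p, the column player plays action 0 with
    probability q, and v is the value of the game. The formulas follow
    from making the opponent indifferent between their two actions.
    """
    a, b = A[0]
    c, d = A[1]
    denom = (a - b) + (d - c)
    p = (d - c) / denom          # row's probability of action 0
    q = (d - b) / denom          # column's probability of action 0
    v = (a * d - b * c) / denom  # value of the game
    return p, q, v

# Sensor actions: {high power, low power}; attacker actions: {jam, idle}.
# (high, jam)  = 3: packet gets through and the attacker wastes energy
# (high, idle) = 2: packet gets through, but high power was unnecessary
# (low,  jam)  = 0: packet is lost
# (low,  idle) = 4: cheap and successful transmission
A = [[3.0, 2.0],
     [0.0, 4.0]]
p, q, v = mixed_equilibrium_2x2(A)  # p = 0.8, q = 0.4, v = 2.4
```

With these illustrative payoffs neither player has a dominant action, so both must randomize; the sensor transmits at high power 80% of the time and the attacker jams 40% of the time at equilibrium.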
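For the closed-loop case, the flavor of a learning-based approach can be sketched with independent tabular Q-learning on a stateless repeated game. This is a deliberately simplified stand-in: the paper works with a continuous-state stochastic game and a more sophisticated multiagent scheme, and the payoff matrix here is again hypothetical.

```python
# A minimal sketch of independent Q-learning for the sensor/attacker
# interaction on a stateless repeated game with hypothetical payoffs.
# Each agent treats its opponent as part of the environment and learns
# action values from its own reward stream only.
import random

A = [[3.0, 2.0],   # sensor's payoff; the attacker receives the negative
     [0.0, 4.0]]

def eps_greedy(q, eps):
    """Pick a random action with probability eps, else the greedy one."""
    if random.random() < eps:
        return random.randrange(len(q))
    return max(range(len(q)), key=q.__getitem__)

def train(episodes=20000, alpha=0.05, eps=0.1, seed=0):
    random.seed(seed)
    q_sensor = [0.0, 0.0]    # Q-values over {high power, low power}
    q_attacker = [0.0, 0.0]  # Q-values over {jam, idle}
    for _ in range(episodes):
        i = eps_greedy(q_sensor, eps)
        j = eps_greedy(q_attacker, eps)
        r = A[i][j]
        # Stateless updates: move each Q-value toward the observed reward.
        q_sensor[i] += alpha * (r - q_sensor[i])
        q_attacker[j] += alpha * (-r - q_attacker[j])
    return q_sensor, q_attacker
```

Independent learners of this kind need not converge to the game's mixed equilibrium in zero-sum settings (their greedy policies can cycle), which is one reason the paper resorts to a dedicated multiagent reinforcement-learning algorithm rather than naive single-agent updates.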