Leader–follower output synchronization of linear heterogeneous systems with active leader using reinforcement learning

Y Yang, H Modares, DC Wunsch… - IEEE Transactions on Neural Networks and Learning Systems, 2018 - ieeexplore.ieee.org
This paper develops optimal control protocols for the distributed output synchronization problem of leader-follower multiagent systems with an active leader. Agents are assumed to be heterogeneous with different dynamics and dimensions. The desired trajectory is assumed to be preplanned and is generated by the leader. Other follower agents autonomously synchronize to the leader by interacting with each other using a communication network. The leader is assumed to be active in the sense that it has a nonzero control input so that it can act independently and update its control to keep the followers away from possible danger. A distributed observer is first designed to estimate the leader's state and generate the reference signal for each follower. Then, the output synchronization of leader-follower systems with an active leader is formulated as a distributed optimal tracking problem, and inhomogeneous algebraic Riccati equations (AREs) are derived to solve it. The resulting distributed optimal control protocols not only minimize the steady-state error but also optimize the transient response of the agents. An off-policy reinforcement learning algorithm is developed to solve the inhomogeneous AREs online in real time and without requiring any knowledge of the agents' dynamics. Finally, two simulation examples are conducted to illustrate the effectiveness of the proposed algorithm.
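As a rough illustration of the distributed-observer step described in the abstract, the sketch below simulates a standard consensus-type leader-state observer on a small pinned communication graph. Every concrete value here (the leader dynamics S, the leader input u0, the adjacency matrix Adj, the pinning gains g, and the coupling gain mu) is an assumption for illustration, not the paper's setup. Note that when the leader is active and its input is unknown to the followers, this simple observer only keeps the estimation error bounded rather than driving it to zero; handling that residual is part of what the paper's design addresses.

```python
import numpy as np

# Hypothetical 2-D leader (harmonic oscillator) with a small nonzero input,
# so it is "active" in the sense of the abstract: zeta_dot = S zeta + u0(t).
S = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def u0(t):
    # Leader's own bounded control input (assumed form, unknown to followers).
    return np.array([0.0, 0.3 * np.sin(0.5 * t)])

# Assumed communication graph among 3 followers: a chain 1 <- 2 <- 3,
# with only follower 1 pinned to the leader.
Adj = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])  # Adj[i, j] = 1 if follower i hears follower j
g = np.array([1.0, 0.0, 0.0])      # pinning gains to the leader
mu = 5.0                           # observer coupling gain (assumed)

N, n = 3, 2
dt, T = 1e-3, 20.0
rng = np.random.default_rng(0)

zeta = np.array([1.0, 0.0])        # leader state
eta = rng.standard_normal((N, n))  # each follower's estimate of the leader state

for k in range(int(T / dt)):
    t = k * dt
    # Distributed observer:
    # eta_i_dot = S eta_i + mu * ( sum_j Adj[i,j] (eta_j - eta_i)
    #                              + g_i (zeta - eta_i) )
    eta_dot = eta @ S.T
    for i in range(N):
        coupling = g[i] * (zeta - eta[i])
        for j in range(N):
            coupling += Adj[i, j] * (eta[j] - eta[i])
        eta_dot[i] += mu * coupling
    eta = eta + dt * eta_dot
    zeta = zeta + dt * (S @ zeta + u0(t))

# With a nonzero leader input the error stays bounded (small for large mu)
# instead of converging to zero.
print("observer error norms:", np.linalg.norm(eta - zeta, axis=1))
```

In the paper, each follower's observer output then serves as the reference signal for its local optimal tracking problem; the sketch above illustrates only the estimation step.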
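For context on the optimal-tracking step, the snippet below solves a standard homogeneous continuous-time ARE for one hypothetical follower using SciPy. The abstract's AREs are inhomogeneous and are solved online by off-policy reinforcement learning without any model knowledge, so this model-based computation is only a baseline sketch; the matrices A, B, Q, R are assumptions, not the paper's examples.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical follower model (the paper's agents are heterogeneous,
# with different dynamics and even different dimensions).
A = np.array([[0.0, 1.0],
              [-2.0, -1.0]])  # follower state matrix (assumed)
B = np.array([[0.0],
              [1.0]])         # follower input matrix (assumed)
Q = np.eye(2)                 # tracking-error weight (assumed)
R = np.array([[1.0]])         # control-effort weight (assumed)

# Model-based baseline: with (A, B) known, the homogeneous part of the
# tracking ARE is a standard ARE.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)  # feedback gain; u = -K x plus a feedforward term
print("P =\n", P)
print("K =", K)
```

The point of the paper's off-policy RL algorithm is precisely to obtain such gains online, in real time, when (A, B) is unavailable, while the inhomogeneous terms account for the observer-generated reference and the leader's nonzero input.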