J Ji, B Zhang, J Zhou, X Pan… - Advances in …, 2023 - proceedings.neurips.cc
Artificial intelligence (AI) systems possess significant potential to drive societal progress. However, their deployment often faces obstacles due to substantial safety concerns. Safe …
Reinforcement learning (RL) has achieved tremendous success in many complex decision-making tasks. When it comes to deploying RL in the real world, safety concerns are usually …
Due to the trial-and-error nature, it is typically challenging to apply RL algorithms to safety-critical real-world applications, such as autonomous driving, human-robot interaction, robot …
H Yu, W Xu, H Zhang - Advances in Neural Information …, 2022 - proceedings.neurips.cc
We consider the safe reinforcement learning (RL) problem of maximizing utility with extremely low constraint violation rates. Assuming no prior knowledge or pre-training of the …
Safety comes first in many real-world applications involving autonomous agents. Despite a large number of reinforcement learning (RL) methods focusing on safety-critical tasks, there …
Reinforcement Learning (RL) agents in the real world must satisfy safety constraints in addition to maximizing a reward objective. Model-based RL algorithms hold promise for …
W Huang, J Ji, B Zhang, C Xia, Y Yang - arXiv preprint arXiv:2307.07176, 2023 - arxiv.org
The widespread application of Reinforcement Learning (RL) in real-world situations has yet to come to fruition, largely because it fails to satisfy the essential safety demands of …
This paper presents a comprehensive benchmarking suite tailored to offline safe reinforcement learning (RL) challenges, aiming to foster progress in the development and …
H Ma, Y Guan, SE Li, X Zhang, S Zheng… - arXiv preprint arXiv …, 2021 - arxiv.org
The safety constraints commonly used by existing safe reinforcement learning (RL) methods are defined only in expectation over initial states, but may still allow individual states to be unsafe …