State-wise safe reinforcement learning: A survey

W Zhao, T He, R Chen, T Wei, C Liu - arXiv preprint arXiv:2302.03122, 2023 - arxiv.org
Despite the tremendous success of Reinforcement Learning (RL) algorithms in simulation
environments, applying RL to real-world applications still faces many challenges. A major …

Optimization-based non-equidistant toolpath planning for robotic additive manufacturing with non-underfill orientation

Y Wang, C Hu, Z Wang, S Lin, Z Zhao, W Zhao… - Robotics and Computer …, 2023 - Elsevier
Additive manufacturing (AM) technology has found widespread application in a great
number of fields, such as aerospace, medicine, and the military industry. As a significant factor …

Probabilistic safeguard for reinforcement learning using safety index guided Gaussian process models

W Zhao, T He, C Liu - Learning for Dynamics and Control …, 2023 - proceedings.mlr.press
Safety is one of the biggest concerns in applying reinforcement learning (RL) to the physical
world. At its core, it is challenging to ensure RL agents persistently satisfy a hard state …
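
The entry above centers on using a learned model to keep an RL agent inside a hard state constraint. As a rough illustration only (not the paper's algorithm), the sketch below monitors a hand-designed safety index with a small Gaussian process model and overrides the RL action when the uncertainty-inflated prediction of the next safety index would turn positive; all names here (GPSafetyModel, safeguard, kappa) are assumptions made for this sketch.

```python
# Illustrative sketch only (not the paper's algorithm): a safeguard that
# monitors a hand-designed safety index phi and falls back to a conservative
# backup action whenever a Gaussian-process prediction of the next safety
# index, inflated by an uncertainty margin, would become positive (unsafe).
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Squared-exponential kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

class GPSafetyModel:
    """GP regression from (state, action) features to the next safety index."""
    def __init__(self, X, y, noise=1e-3):
        self.X, self.y = X, y
        K = rbf_kernel(X, X) + noise * np.eye(len(X))
        self.K_inv = np.linalg.inv(K)

    def predict(self, x):
        k = rbf_kernel(self.X, x[None, :])          # shape (n, 1)
        mean = float(k.T @ self.K_inv @ self.y)     # predictive mean
        var = float(1.0 - k.T @ self.K_inv @ k)     # predictive variance
        return mean, max(var, 0.0)

def safeguard(gp, state, rl_action, backup_action, kappa=2.0):
    # Keep the RL action only if the kappa-sigma upper bound on the predicted
    # next safety index stays non-positive; otherwise use the backup action.
    mean, var = gp.predict(np.concatenate([state, rl_action]))
    return rl_action if mean + kappa * np.sqrt(var) <= 0.0 else backup_action

# Toy usage: 1-D state and action, safety index phi = |state| - 1,
# with the next state crudely modeled as state + action.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 2))                # (state, action) samples
y = np.abs(X[:, 0] + X[:, 1]) - 1.0                 # observed next phi values
gp = GPSafetyModel(X, y)
print(safeguard(gp, np.array([0.3]), np.array([0.9]), np.array([0.0])))
```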

GUARD: A safe reinforcement learning benchmark

W Zhao, R Chen, Y Sun, R Liu, T Wei, C Liu - arXiv preprint arXiv …, 2023 - arxiv.org
Due to the trial-and-error nature of RL, it is typically challenging to apply RL algorithms to
safety-critical real-world applications, such as autonomous driving, human-robot interaction, robot …

ManiCast: Collaborative manipulation with cost-aware human forecasting

K Kedia, P Dan, A Bhardwaj, S Choudhury - arXiv preprint arXiv …, 2023 - arxiv.org
Seamless human-robot manipulation in close proximity relies on accurate forecasts of
human motion. While there has been significant progress in learning forecast models at …

State-wise constrained policy optimization

W Zhao, R Chen, Y Sun, T Wei, C Liu - arXiv preprint arXiv:2306.12594, 2023 - arxiv.org
Reinforcement Learning (RL) algorithms have shown tremendous success in simulation
environments, but their application to real-world problems faces significant challenges, with …
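
To make the "state-wise" distinction concrete, one standard way to contrast it with the usual constrained MDP formulation is sketched below; the notation (r, c, d, w) is illustrative and not taken from the paper.

```latex
% Cumulative-cost CMDP vs. a state-wise constrained formulation
% (notation illustrative; r = reward, c = cost, d and w = thresholds).
\begin{align*}
  \text{CMDP:} \quad
    &\max_{\pi} \; \mathbb{E}_{\tau\sim\pi}\Big[\sum_{t} \gamma^{t} r(s_t,a_t)\Big]
    \quad \text{s.t.} \quad
    \mathbb{E}_{\tau\sim\pi}\Big[\sum_{t} \gamma^{t} c(s_t,a_t)\Big] \le d, \\[4pt]
  \text{State-wise:} \quad
    &\max_{\pi} \; \mathbb{E}_{\tau\sim\pi}\Big[\sum_{t} \gamma^{t} r(s_t,a_t)\Big]
    \quad \text{s.t.} \quad
    c(s_t,a_t) \le w \;\; \text{for every } t \text{ along } \tau\sim\pi .
\end{align*}
```

The second formulation bounds the cost at every visited state rather than in expectation over whole trajectories, which is the stricter notion of safety the title refers to.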

Hybrid task constrained planner for robot manipulator in confined environment

Y Sun, W Zhao, C Liu - arXiv preprint arXiv:2304.09260, 2023 - arxiv.org
Trajectory generation in confined environments is crucial for the wide adoption of intelligent robot
manipulators. In this paper, we propose a novel motion planning approach for redundant …

Learn with imagination: Safe set guided state-wise constrained policy optimization

W Zhao, Y Sun, F Li, R Chen, T Wei, C Liu - arXiv preprint arXiv …, 2023 - arxiv.org
Deep reinforcement learning (RL) excels in various control tasks, yet the absence of safety
guarantees hampers its real-world applicability. In particular, explorations during learning …

Reinforcement learning particle swarm optimization based trajectory planning of autonomous ground vehicle using 2D LiDAR point cloud

H Nagar, A Paul, R Machavaram, P Soni - Robotics and Autonomous …, 2024 - Elsevier
The advent of autonomous mobile robots has spurred research into efficient trajectory
planning methods, particularly in dynamic environments with varied obstacles. This study …

ModelVerification.jl: a Comprehensive Toolbox for Formally Verifying Deep Neural Networks

T Wei, L Marzari, KS Yun, H Hu, P Niu, X Luo… - arXiv preprint arXiv …, 2024 - arxiv.org
Deep Neural Networks (DNNs) are crucial for approximating nonlinear functions across
diverse applications, ranging from image classification to control. Verifying specific input …
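
As a rough illustration of what verifying an input-output property of a DNN means (this is not ModelVerification.jl's API, which is a Julia toolbox; the sketch below is standalone Python with assumed helper names), interval bound propagation soundly over-approximates the set of outputs a small ReLU network can produce over a box of inputs, so a "True" answer certifies the property while a "False" answer is inconclusive.

```python
# Illustrative sketch only (not the ModelVerification.jl API): interval bound
# propagation through a small ReLU network, checking whether every input in a
# box is guaranteed to satisfy output[out_index] >= threshold.
import numpy as np

def ibp_layer(lo, hi, W, b):
    # Propagate an axis-aligned box [lo, hi] through an affine layer Wx + b.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def verify_box(layers, x_lo, x_hi, out_index, threshold):
    """Sound but incomplete check: True means the property provably holds
    for all inputs in the box; False means the relaxation could not prove it."""
    lo, hi = x_lo, x_hi
    for i, (W, b) in enumerate(layers):
        lo, hi = ibp_layer(lo, hi, W, b)
        if i < len(layers) - 1:                     # ReLU on hidden layers
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return bool(lo[out_index] >= threshold)

# Tiny random 2-3-2 network and an example query over the box [-0.1, 0.1]^2.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((3, 2)), np.zeros(3)),
          (rng.standard_normal((2, 3)), np.zeros(2))]
print(verify_box(layers, np.array([-0.1, -0.1]), np.array([0.1, 0.1]), 0, -5.0))
```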