Centralized Training for Decentralized Execution, where agents are trained offline using centralized information but execute in a decentralized manner online, has gained popularity …
Y Xiao, W Tan, C Amato - Advances in Neural Information …, 2022 - proceedings.neurips.cc
Synchronizing decisions across multiple agents in realistic settings is problematic since it requires agents to wait for other agents to terminate and communicate about termination …
J Chen, J Chen, T Lan… - Advances in Neural …, 2022 - proceedings.neurips.cc
Covering option discovery has been developed to improve the exploration of RL in single- agent scenarios with sparse reward signals, through connecting the most distant states in …
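The "connecting the most distant states" idea can be illustrated with a small sketch. A common spectral approach (assumed here for illustration; the graph and variable names are hypothetical, not taken from the paper) is to take the Fiedler vector, i.e. the eigenvector of the graph Laplacian's second-smallest eigenvalue, and connect the states at its two extremes with an option:

```python
import numpy as np

# Illustrative sketch: find the two "most distant" states of a
# state-transition graph via the Fiedler vector of its Laplacian.
# The 5-state chain MDP below (0-1-2-3-4) is a made-up toy example.
A = np.array([
    [0, 1, 0, 0, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # unnormalized graph Laplacian

# Eigenvector of the second-smallest eigenvalue (the Fiedler vector);
# its extreme entries mark the least-connected pair of states.
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]

# A covering option would initiate at one extreme and terminate at the other.
endpoints = sorted((int(np.argmin(fiedler)), int(np.argmax(fiedler))))
print(endpoints)  # → [0, 4], the two ends of the chain
```

Adding an option between these endpoints shortens the graph's diameter, which is what improves exploration under sparse rewards.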
Routing delivery vehicles to serve customers in dynamic and uncertain environments like dense city centers is a challenging task that requires robustness and flexibility. Most existing …
Centralized Training for Decentralized Execution, where agents are trained offline in a centralized fashion and execute online in a decentralized manner, has become a …
When humans and autonomous systems operate together as what we refer to as a hybrid team, we naturally wish to ensure that the team operates successfully and effectively. We refer to …
In the context of humans operating with artificial or autonomous agents in a hybrid team, it is essential to accurately identify when to authorize those team members to perform actions …
Transfer Learning has shown great potential to enhance single-agent Reinforcement Learning (RL) efficiency. Similarly, Multiagent RL (MARL) can be accelerated if agents …
J Chen, J Chen, T Lan, V Aggarwal - IEEE Transactions on Artificial …, 2022 - par.nsf.gov
Covering skill (a.k.a. option) discovery has been developed to improve the exploration of reinforcement learning in single-agent scenarios, where only sparse reward signals are …