Authors
Qi Zhang, Satinder Singh, Edmund Durfee
Publication date
2017/6/5
Conference paper
Twenty-Seventh International Conference on Automated Planning and Scheduling
Abstract
In cooperative multiagent planning, it can often be beneficial for an agent to make commitments about aspects of its behavior to others, allowing them in turn to plan their own behaviors without taking the agent's detailed behavior into account. Extending previous work in the Bayesian setting, we consider instead a worst-case setting in which the agent has a set of possible environments (MDPs) it could be in, and develop a commitment semantics that allows for probabilistic guarantees on the agent's behavior in any of the environments it could end up facing. Crucially, an agent receives observations (of reward and state transitions) that allow it to potentially eliminate possible environments and thus obtain higher utility by adapting its policy to the history of observations. We develop algorithms and provide theory and some preliminary empirical results showing that they ensure an agent meets its commitments with history-dependent policies while minimizing maximum regret over the possible environments.
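The abstract's core idea can be illustrated with a minimal, hypothetical sketch: the agent keeps a set of candidate environments, prunes those inconsistent with its observations of rewards and transitions, and selects the action whose worst-case regret over the surviving environments is smallest. All names and the toy single-state model below are illustrative assumptions, not the paper's actual algorithm.

```python
def eliminate(candidates, observation):
    """Keep only environments consistent with the observed transition."""
    s, a, r, s2 = observation
    return [env for env in candidates
            if env["reward"].get((s, a)) == r
            and s2 in env["next"].get((s, a), ())]

def minimax_regret_action(candidates, state, actions):
    """Pick the action minimizing maximum regret across candidates."""
    best = None
    for a in actions:
        # Regret in one environment: the best achievable reward there
        # minus the reward this action obtains there.
        worst = max(
            max(env["reward"].get((state, b), 0.0) for b in actions)
            - env["reward"].get((state, a), 0.0)
            for env in candidates
        )
        if best is None or worst < best[1]:
            best = (a, worst)
    return best[0]

# Two toy single-state environments that disagree on action rewards.
envs = [
    {"reward": {("s", "x"): 1.0, ("s", "y"): 0.0},
     "next": {("s", "x"): {"s"}, ("s", "y"): {"s"}}},
    {"reward": {("s", "x"): 0.0, ("s", "y"): 1.0},
     "next": {("s", "x"): {"s"}, ("s", "y"): {"s"}}},
]
choice = minimax_regret_action(envs, "s", ["x", "y"])
# Observing the true reward for ("s", "x") eliminates the second environment,
# so later decisions can adapt to the reduced candidate set.
remaining = eliminate(envs, ("s", "x", 1.0, "s"))
```

The history-dependent aspect described in the abstract corresponds to repeating this eliminate-then-act loop: as observations accumulate, the candidate set shrinks and the minimax-regret policy can commit to higher-utility behavior.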
Total citations
Scholar articles
Q Zhang, S Singh, E Durfee - Proceedings of the International Conference on …, 2017