Autonomous agents that operate in the real world must often deal with partial observability, which is commonly modeled as partially observable Markov decision processes (POMDPs) …

In centralized multi-agent systems, often modeled as multi-agent partially observable Markov decision processes (MPOMDPs), the action and observation spaces grow …

In this paper, we present a controller framework that synthesizes control policies for Jump Markov Linear Systems subject to stochastic mode switches and imperfect mode estimation …

Pragmatic or goal-oriented communication can optimize communication decisions beyond the reliable transmission of data, instead aiming at directly affecting application performance …

Online planning under uncertainty remains a critical challenge in robotics and autonomous systems. While tree search techniques are commonly employed to construct partial future …

T Lemberg, V Indelman - arXiv preprint arXiv:2501.11202, 2025 - arxiv.org
Robots operating in complex and unknown environments frequently require geometric-semantic representations of the environment to safely perform their tasks. While inferring the …

Risk averse decision making under uncertainty in partially observable domains is a fundamental problem in AI and essential for reliable autonomous agents. In our case, the …

D Bramblett, S Srivastava - arXiv preprint arXiv:2405.15907, 2024 - arxiv.org
Planning in real-world settings often entails addressing partial observability while aligning with users' preferences. We present a novel framework for expressing users' preferences …

D Kong, V Indelman - arXiv preprint arXiv:2410.07630, 2024 - arxiv.org
Online planning under uncertainty in partially observable domains is an essential capability in robotics and AI. The partially observable Markov decision process (POMDP) is a …