Active observing in continuous-time control

S Holt, A Hüyük… - Advances in Neural …, 2024 - proceedings.neurips.cc
The control of continuous-time environments while actively deciding when to take costly
observations in time is a crucial yet unexplored problem, particularly relevant to real-world …

Deep reinforcement learning for cost-effective medical diagnosis

Z Yu, Y Li, J Kim, K Huang, Y Luo, M Wang - arXiv preprint arXiv …, 2023 - arxiv.org
Dynamic diagnosis is desirable when medical tests are costly or time-consuming. In this
work, we use reinforcement learning (RL) to find a dynamic policy that selects lab test panels …

Act-then-measure: reinforcement learning for partially observable environments with active measuring

M Krale, TD Simão, N Jansen - Proceedings of the International …, 2023 - ojs.aaai.org
We study Markov decision processes (MDPs), where agents control when and how they
gather information, as formalized by action-contingent noiselessly observable MDPs (ACNO …
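
As a rough illustration of the active-measuring setting shared by several of the works above (this is a generic sketch, not the ACNO-MDP algorithm of Krale et al.), the Python fragment below wraps a standard Gymnasium environment so that each control action is paired with a binary "measure" decision: measuring costs a fixed penalty and reveals the state, while skipping the measurement returns an uninformative observation. The environment name, cost value, and wrapper interface are illustrative assumptions.

import gymnasium as gym
import numpy as np

class ActiveMeasureWrapper(gym.Wrapper):
    # Generic pay-to-observe wrapper (illustrative assumption, not from the paper).
    # An action is a pair (control action, measure flag); measuring subtracts
    # obs_cost from the reward, not measuring returns an all-zero observation.
    def __init__(self, env, obs_cost=0.1):
        super().__init__(env)
        self.obs_cost = obs_cost
        self.action_space = gym.spaces.Tuple(
            (env.action_space, gym.spaces.Discrete(2)))

    def step(self, action):
        control_action, measure = action
        obs, reward, terminated, truncated, info = self.env.step(control_action)
        if measure == 1:
            reward -= self.obs_cost      # pay for the observation
        else:
            obs = np.zeros_like(obs)     # no information revealed this step
        return obs, reward, terminated, truncated, info

# Usage sketch: a random controller that measures on every other step.
env = ActiveMeasureWrapper(gym.make("CartPole-v1"), obs_cost=0.1)
obs, info = env.reset()
for t in range(10):
    action = (env.env.action_space.sample(), t % 2)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()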

Dynamic observation policies in observation cost-sensitive reinforcement learning

C Bellinger, M Crowley, I Tamblyn - arXiv preprint arXiv:2307.02620, 2023 - arxiv.org
Reinforcement learning (RL) has been shown to learn sophisticated control policies for
complex tasks including games, robotics, heating and cooling systems, and text generation …

Push- and Pull-based Effective Communication in Cyber-Physical Systems

P Talli, F Mason, F Chiariotti, A Zanella - arXiv preprint arXiv:2401.10921, 2024 - arxiv.org
In Cyber-Physical Systems (CPSs), two groups of actors interact toward the maximization of
system performance: the sensors, observing and disseminating the system state, and the …

Learning Computational Efficient Bots with Costly Features

A Kobanda, CA Valliappan, J Romoff… - 2023 IEEE Conference …, 2023 - ieeexplore.ieee.org
Deep reinforcement learning (DRL) techniques have become increasingly used in various
fields for decision-making processes. However, a challenge that often arises is the trade-off …

Remote Estimation of Markov Processes over Costly Channels: On the Benefits of Implicit Information

ED Santi, T Soleymani, D Gunduz - arXiv preprint arXiv:2401.17999, 2024 - arxiv.org
In this paper, we study the remote estimation problem of a Markov process over a channel
with a cost. We formulate this problem as an infinite horizon optimization problem with two …
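
The snippet above is cut off mid-formulation; purely as an illustration of the kind of trade-off such remote estimation problems pose (the notation below is a generic assumption, not the exact formulation of Santi et al.), an average-cost objective balancing estimation error against channel use can be written as

\min_{\pi}\; \limsup_{T\to\infty} \frac{1}{T}\, \mathbb{E}\!\left[\sum_{t=1}^{T} d\big(X_t, \hat{X}_t\big) + \lambda\, u_t\right],

where X_t is the source Markov process, \hat{X}_t is the receiver's estimate, u_t \in \{0,1\} indicates a transmission at time t, d(\cdot,\cdot) is a distortion measure, and \lambda > 0 prices each use of the costly channel.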

Learning Multi-Intersection Traffic Signal Control via Coevolutionary Multi-Agent Reinforcement Learning

W Chen, S Yang, W Li, Y Hu, X Liu… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Effective management of multi-intersection traffic signal control (MTSC) is vital for intelligent
transportation systems. Multi-agent reinforcement learning (MARL) has shown promise in …

Predicting Potential Risk: Cerebral Stroke via Regret Minimization

J Zhang, H Chen, C Jin, Q He, W He… - International Journal of …, 2023 - Wiley Online Library
Objective. The processing of medical test reports has always been an important task in the
biomedical informatics domain, especially the process of extracting effective …

Pragmatic Communication for Remote Control of Finite-State Markov Processes

P Talli, ED Santi, F Chiariotti, T Soleymani… - arXiv preprint arXiv …, 2024 - arxiv.org
Pragmatic or goal-oriented communication can optimize communication decisions beyond
the reliable transmission of data, instead aiming at directly affecting application performance …