Off-policy policy evaluation for sequential decisions under unobserved confounding

H Namkoong, R Keramati, S Yadlowsky, E Brunskill - Advances in Neural Information Processing Systems, 2020 - proceedings.neurips.cc
When observed decisions depend only on observed features, off-policy policy evaluation
(OPE) methods for sequential decision problems can estimate the performance of evaluation …
