Two-timescale Derivative Free Optimization for Performative Prediction with Markovian Data

H Liu, Q Li, HT Wai - arXiv preprint arXiv:2310.05792, 2023 - arxiv.org
This paper studies the performative prediction problem where a learner aims to minimize the expected loss with a decision-dependent data distribution. Such a setting is motivated when outcomes can be affected by the prediction model, e.g., in strategic classification. We consider a state-dependent setting where the data distribution evolves according to an underlying controlled Markov chain. We focus on stochastic derivative free optimization (DFO) where the learner is given access to a loss function evaluation oracle with the above Markovian data. We propose a two-timescale DFO(λ) algorithm that features (i) a sample accumulation mechanism that utilizes every observed sample to estimate the overall gradient of the performative risk, and (ii) a two-timescale diminishing step size that balances the rates of DFO updates and bias reduction. Under a general non-convex optimization setting, we show that DFO(λ) requires O(1/ε³) samples (up to a log factor) to attain a near-stationary solution with expected squared gradient norm less than ε. Numerical experiments verify our analysis.