Stochastic optimization for performative prediction

C Mendler-Dünner, J Perdomo… - Advances in Neural …, 2020 - proceedings.neurips.cc
Abstract
In performative prediction, the choice of a model influences the distribution of future data, typically through actions taken based on the model's predictions. We initiate the study of stochastic optimization for performative prediction. What sets this setting apart from traditional stochastic optimization is the difference between merely updating model parameters and deploying the new model. The latter triggers a shift in the distribution that affects future data, while the former keeps the distribution as is. Assuming smoothness and strong convexity, we prove rates of convergence for both greedily deploying models after each stochastic update (greedy deploy) as well as for taking several updates before redeploying (lazy deploy). In both cases, our bounds smoothly recover the optimal rate as the strength of performativity decreases. Furthermore, they illustrate how depending on the strength of performative effects, there exists a regime where either approach outperforms the other. We experimentally explore the trade-off on both synthetic data and a strategic classification simulator.
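The abstract contrasts two deployment schedules, greedy deploy and lazy deploy. The following is a minimal sketch of that distinction only, assuming a toy distribution map (`sample_from`), a squared-error loss, and illustrative step sizes; none of these choices come from the paper, and the sketch does not reproduce its analysis or experiments.

```python
import numpy as np

# Hypothetical toy setup: the data distribution reacts to whichever model is
# currently deployed, while stochastic updates alone leave it unchanged.

def sample_from(theta_deployed, rng):
    """Draw one data point z from the distribution induced by the deployed model
    (toy location shift proportional to the deployed parameters)."""
    return rng.normal(loc=0.5 * theta_deployed, scale=1.0)

def grad(theta, z):
    """Stochastic gradient of a smooth, strongly convex toy loss (squared error)."""
    return theta - z

def greedy_deploy(theta0, steps, rng, lr=0.1):
    """Deploy after every stochastic update: each sample comes from the
    distribution induced by the most recent iterate."""
    theta = theta0.copy()
    for t in range(1, steps + 1):
        z = sample_from(theta, rng)          # deployment: data reacts to theta
        theta -= (lr / t) * grad(theta, z)   # decaying step size
    return theta

def lazy_deploy(theta0, deployments, inner_steps, rng, lr=0.1):
    """Deploy only occasionally: take several SGD steps against the distribution
    induced by the last deployed model before redeploying."""
    deployed = theta0.copy()
    theta = theta0.copy()
    for k in range(1, deployments + 1):
        for t in range(1, inner_steps + 1):
            z = sample_from(deployed, rng)   # distribution fixed between deployments
            theta -= (lr / (k * t)) * grad(theta, z)
        deployed = theta.copy()              # redeploy the updated model
    return deployed

rng = np.random.default_rng(0)
theta0 = np.ones(3)
print(greedy_deploy(theta0, steps=1000, rng=rng))
print(lazy_deploy(theta0, deployments=10, inner_steps=100, rng=rng))
```

In this toy map the data's location shifts with the deployed parameters, so both schedules drift toward the same stable point; which schedule does so faster depends on how strongly the distribution reacts to deployment, mirroring the regime trade-off described in the abstract.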