Authors
Christian D. Hubbs, Can Li, Nikolaos V. Sahinidis, Ignacio E. Grossmann, John M. Wassick
Publication date
2020/10/4
Journal
Computers and Chemical Engineering
Volume
141
Description
This work examines applying deep reinforcement learning to a chemical production scheduling process to account for uncertainty and achieve online, dynamic scheduling, and benchmarks the results against a mixed-integer linear programming (MILP) model that schedules each time interval on a receding horizon basis. An industrial example is used as a case study for comparing the differing approaches. Results show that the reinforcement learning method outperforms the naive MILP approaches and is competitive with a shrinking horizon MILP approach in terms of profitability, inventory levels, and customer service. The speed and flexibility of the reinforcement learning system are promising for achieving real-time optimization of a scheduling system, but there is reason to pursue integration of data-driven deep reinforcement learning methods and model-based mathematical optimization approaches.
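The operational contrast described above, an RL policy that commits to each scheduling decision online versus an MILP that must be re-solved every interval, can be sketched with a toy single-line production environment. The sketch below is illustrative only: the product set, prices, demand model, and the greedy stand-in for a trained DRL policy are assumptions for demonstration, not the paper's industrial case study or its trained agent.

```python
import random

# Hypothetical single-reactor scheduling environment (all names and
# parameters are illustrative assumptions, not from the paper): each
# interval the scheduler picks one product grade to produce; stochastic
# demand then arrives, unmet demand is lost, and held inventory incurs
# a holding cost.

PRODUCTS = ["A", "B", "C"]
PRICE = {"A": 3.0, "B": 4.0, "C": 5.0}       # revenue per unit sold
MEAN_DEMAND = {"A": 8, "B": 5, "C": 3}       # expected demand per interval
BATCH_SIZE = 10                              # units produced per interval
HOLDING_COST = 0.1                           # cost per unit held per interval

def greedy_policy(inventory):
    """Stand-in for a trained DRL policy: a policy maps the observed
    state (here, inventory levels) directly to the next action, with no
    optimization model re-solved at decision time."""
    return max(PRODUCTS,
               key=lambda p: PRICE[p] * max(MEAN_DEMAND[p] - inventory[p], 0))

def simulate(policy, horizon=50, seed=0):
    rng = random.Random(seed)
    inventory = {p: 0 for p in PRODUCTS}
    profit = 0.0
    for _ in range(horizon):
        inventory[policy(inventory)] += BATCH_SIZE  # online decision
        for p in PRODUCTS:                          # demand realization
            sold = min(inventory[p], rng.randint(0, 2 * MEAN_DEMAND[p]))
            inventory[p] -= sold
            profit += PRICE[p] * sold
        profit -= HOLDING_COST * sum(inventory.values())
    return profit

print(f"profit over horizon: {simulate(greedy_policy):.1f}")
```

In the paper's benchmark setting, the receding horizon MILP would replace `greedy_policy` with a fresh optimization solve at each interval, which is where the speed advantage of an already-trained policy comes from.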
Total citations
[Citations-per-year chart, 2019–2024]
Scholar articles
CD Hubbs, C Li, NV Sahinidis, IE Grossmann… - Computers & Chemical Engineering, 2020