[Oral Presentation] Off-policy reinforcement learning for input-constrained optimal control of dual-rate industrial processes
No.: 28
Manuscript ID: 251 · Access: attendees only
Updated: 2024-05-20 09:56:41

Presentation start: 2024-05-30 15:40 (Asia/Shanghai)
Duration: 20 min
Session: [S4] Intelligent Equipment Technology > [S4-2] Afternoon of May 30th-2
No files available yet
Abstract
Real industrial systems are never fully free of unmodeled dynamics, and industrial processes typically operate on multiple time scales, both of which complicate operational optimization. To address these difficulties, a composite compensated controller is designed that integrates reinforcement learning (RL) techniques with singular perturbation (SP) theory to solve the input-constrained optimal operational control (OOC) problem on dual time scales. Within this control framework, a self-learning compensatory control method is proposed that drives the operational metrics of a dual-time-scale industrial system with uncertain dynamics to their desired values. Finally, the effectiveness of the method is verified on an industrial mixed separation thickening process (MSTP) example.
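The abstract only summarizes the approach, so the paper's composite compensator and its exact input-constraint handling cannot be reconstructed from it. The sketch below is a hedged illustration of two ingredients the abstract names: a singular-perturbation (quasi-steady-state) reduction of a hypothetical two-time-scale linear model, followed by off-policy least-squares policy iteration on the reduced slow subsystem, reusing a single batch of data gathered under a separate exploratory behavior policy. All matrices, costs, horizons, and the clipping-based input limit are illustrative assumptions, not the authors' model or algorithm.

# Minimal sketch, not the paper's algorithm: singular-perturbation reduction of a
# hypothetical two-time-scale linear model, then off-policy least-squares policy
# iteration (LSPI) on the reduced slow subsystem from reused exploratory data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-time-scale model (slow state x_s in R^2, fast state x_f in R^1):
#   x_s[k+1] = A11 x_s + A12 x_f + B1 u
#   x_f[k+1] = A21 x_s + A22 x_f + B2 u   (fast dynamics: A22 well inside the unit circle)
A11 = np.array([[0.95, 0.05], [0.00, 0.90]])
A12 = np.array([[0.10], [0.05]])
A21 = np.array([[0.02, 0.01]])
A22 = np.array([[0.40]])
B1 = np.array([[0.00], [0.10]])
B2 = np.array([[0.20]])

# Quasi-steady state of the fast subsystem: x_f = (I - A22)^{-1} (A21 x_s + B2 u),
# which gives the reduced slow model x_s[k+1] = A0 x_s + B0 u.
M = np.linalg.inv(np.eye(1) - A22)
A0 = A11 + A12 @ M @ A21
B0 = B1 + A12 @ M @ B2

Qc = np.eye(2)           # state cost weight
Rc = np.array([[1.0]])   # input cost weight
u_max = 0.5              # stand-in input limit; the paper's constraint handling differs

def cost(x, u):
    return float(x @ Qc @ x + u @ Rc @ u)

def phi(x, u):
    """Quadratic features so that phi(x, u) @ theta == z^T H z for symmetric H."""
    z = np.concatenate([x, u])
    return np.array([z[i] * z[j] * (1.0 if i == j else 2.0)
                     for i in range(z.size) for j in range(i, z.size)])

def unpack_H(theta, n):
    H = np.zeros((n, n))
    idx = 0
    for i in range(n):
        for j in range(i, n):
            H[i, j] = H[j, i] = theta[idx]
            idx += 1
    return H

# Collect exploratory data once under a behavior policy (clipped random inputs);
# the same batch is reused for every policy evaluation, which makes the scheme off-policy.
data = []
x = np.array([1.0, -1.0])
for _ in range(400):
    u = np.clip(rng.uniform(-1.0, 1.0, size=1), -u_max, u_max)
    x_next = A0 @ x + B0 @ u
    data.append((x, u, x_next))
    x = x_next if np.linalg.norm(x_next) < 10 else rng.normal(size=2)

K = np.zeros((1, 2))  # initial (stabilizing) target policy u = -K x
for it in range(6):
    # Policy evaluation: least-squares solution of the Bellman equation for Q_K.
    A_rows = np.array([phi(x, u) - phi(xn, -K @ xn) for x, u, xn in data])
    b = np.array([cost(x, u) for x, u, _ in data])
    theta, *_ = np.linalg.lstsq(A_rows, b, rcond=None)
    H = unpack_H(theta, 3)
    # Policy improvement from the Q-function blocks: K <- H_uu^{-1} H_ux.
    K = np.linalg.solve(H[2:, 2:], H[2:, :2])
    print(f"iteration {it}: K = {K.ravel()}")

Running the sketch, the gain K settles after a few iterations, which is the usual behavior of LSPI on a noiseless linear-quadratic problem; the input clipping above is only a placeholder for the abstract's input constraints, and the compensating term for the uncertain dynamics is not modeled here.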
Keywords
Reinforcement Learning, Dual Time Scales, Optimal Operational Control, Singular Perturbation Theory
Presenter
