ANSWERS
COVER: AI FOR CONTROL SYSTEMS Kence Anderson, Microsoft; Winston Jenks, Wood; Prabu Parthasarathy, PhD, Wood
Evolution of control systems with artificial intelligence Can artificial intelligence (AI) prove to be the next evolution of control systems? See three AI controller characteristics and three applications.
Control systems have evolved continuously over decades, and artificial intelligence (AI) technologies are helping advance the next generation of some control systems. The proportional-integral-derivative (PID) controller can be interpreted as a layering of capabilities: the proportional term reacts to the present error, the integral term drives the process to the setpoint and the derivative term helps minimize overshoot.

Although the controls ecosystem may present a complex web of interrelated technologies, it can also be simplified by viewing it as the ever-evolving branches of a family tree. Each control system technology offers characteristics not available in prior technologies. For example, feed forward improves PID control by predicting controller output and then using those predictions to separate disturbance errors from noise. Model predictive control (MPC) adds further capabilities by layering in predictions of the results of future control actions and by controlling multiple correlated inputs and outputs.

The latest evolution of control strategies is the adoption of AI technologies to develop industrial controls. One of the newest advancements in this area is the application of reinforcement learning-based control.
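The PID layering described above can be made concrete with a short sketch. This is a minimal textbook-style PID loop, not code from the article; the gains, setpoint and the crude first-order plant model are illustrative assumptions.

```python
# Minimal PID controller sketch: proportional term reacts to the present
# error, integral term accumulates error to reach the setpoint, derivative
# term damps the rate of change to limit overshoot. Gains and the plant
# model below are made-up values for illustration.

class PID:
    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.dt = dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement):
        error = self.setpoint - measurement           # proportional: present error
        self.integral += error * self.dt              # integral: accumulated error
        if self.prev_error is None:
            derivative = 0.0                          # no history on first call
        else:
            derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a crude first-order process from 20.0 toward a setpoint of 50.0.
pid = PID(kp=0.8, ki=0.3, kd=0.05, setpoint=50.0, dt=0.1)
value = 20.0
for _ in range(200):
    output = pid.update(value)
    value += output * pid.dt   # simple stand-in for the plant's response
```

After 200 steps the process value settles near the setpoint; feed forward and MPC would extend a loop like this with predictions rather than reacting to error alone.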
Three characteristics of AI-based controllers
AI-based controllers (that is, deep reinforcement learning-based, or DRL-based, controllers) offer unique and appealing characteristics, such as:

1. Learning: DRL-based controllers learn by methodically and continuously practicing, an approach known as machine teaching. These controllers can therefore discover nuances and exceptions that are not easily captured in expert systems and that may be difficult to handle with fixed-gain controllers. A simulator can expose the DRL engine to a wide variety of process states. Many of these states would never be encountered in the real world, because the AI engine (brain) may try to operate the plant too close to, or beyond, the operational limits of a physical facility. In this case, these
excursions (which would likely cause a process trip) become experiences that teach the brain which behaviors to avoid. Repeated often enough, the brain learns what not to do. In addition, the DRL engine can learn from many simulations at once. Instead of being fed data from one plant, the brain can learn from hundreds of simulations, each running faster than real time, to provide training experience conducive to optimal learning.

2. Delayed gratification: DRL-based controllers can learn to accept sub-optimal behavior in the short term, which enables larger gains in the long term. From Aristotle in the fourth century B.C. to Sigmund Freud, thinkers have described what humans know as "delayed gratification." When AI behaves this way, it can push past tricky local minima to reach more optimal solutions.

3. Non-traditional input data: DRL-based controllers can take in and evaluate sensor information that traditional automated systems cannot. As an example, an AI-based controller can consider visual information about product quality or an equipment's status. It also takes into consideration
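The "delayed gratification" characteristic above maps directly onto the discount factor in reinforcement learning: a far-sighted agent weighs future rewards almost as heavily as immediate ones. This sketch is not from the article; the reward sequences are made-up numbers chosen to show the trade-off.

```python
# "Delayed gratification" in RL is captured by the discounted return
#     G = r_0 + gamma*r_1 + gamma^2*r_2 + ...
# A high discount factor (gamma near 1) values long-term payoff; a low one
# is myopic. Reward sequences below are illustrative, not real plant data.

def discounted_return(rewards, gamma):
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

greedy = [5, 1, 1, 1, 1]     # grab reward now, little afterward
patient = [0, 0, 0, 0, 12]   # forgo short-term reward for a larger payoff

# Far-sighted agent (gamma = 0.99): the patient policy wins.
print(discounted_return(greedy, 0.99))   # ≈ 8.9
print(discounted_return(patient, 0.99))  # ≈ 11.5

# Myopic agent (gamma = 0.5): the greedy policy looks better.
print(discounted_return(greedy, 0.5))    # ≈ 5.9
print(discounted_return(patient, 0.5))   # = 0.75
```

Pushing past a "tricky local minimum" is exactly the patient case: with a high enough discount factor, the controller tolerates short-term sub-optimal rewards because the discounted return of the long-term strategy is larger.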
COVER: Applications for the Microsoft Bonsai Brain include dynamic and highly variable systems, competing optimization goals or strategies and unknown starting or system conditions, among others. Images courtesy: Wood
February 2021