Productivity PLC CPUs start at $237
Whether you are a machine builder, systems integrator, or anyone looking for an advanced, low-cost controls solution, the Productivity family of controllers has what you need. Built to go above and beyond, these controllers offer multiple networking solutions and easy device integration, plus some impressive “WOW!” factors like analog data and CPU status displays. On top of that, the Productivity PLC line also offers:
• CPUs with expansive 50MB memory
• Unmatched built-in communications capabilities, including local and remote I/O ports, EtherNet/IP, MQTTS, custom protocols, and more
• Modular rack-based or stackable footprint with many discrete and analog I/O option modules, scalable up to 59K+ I/O
• FREE advanced tag name programming environment with a convenient project simulator
• 32GB of microSD data logging
• Plus much, much more...
Affordable and reliable Productivity2000 hardware with a powerful CODESYS engine
• Full IEC 61131-3 compliance
• Modbus RTU/TCP, EtherNet/IP Scanner/Adapter
• WebVisu license included
• IIoT library included with Web Client (http, https), MQTT Client w/ TLS, AWS IoT Core Client, Azure IoT Hub Client, and more
(Starting at $298.00)
NEW! ELT Series Submersible Level Sensors
AchieVe ELT series general-purpose submersible level sensors are ideal for water applications where small size, weight, and low cost are required.
• Full scale ranges from 11.5 to 115 feet of water
• 4 to 20 mA output
• 316L stainless steel construction
• IP68 protection rating
NEW! NFLT Series Submersible Level Sensors
(Starting at $598.00)
ProSense NFLT series non-fouling, submersible level sensors feature a rugged construction and a Kynar sensing membrane with superior abrasion and puncture resistance for challenging wastewater applications.
• Full scale ranges from 11.5 to 69.2 feet of water
• 4 to 20 mA output
• 316L stainless steel construction
• Built-in lightning protection
• Hazardous location rated
• IP68 protection rating
GPLT Series Submersible Level Sensors
(Starting at $393.00)
ProSense GPLT series general-purpose submersible level sensors are designed for water applications and offer a slim housing diameter, several sensing ranges and cable lengths, integral lightning protection, and ratings for hazardous locations.
• Full scale ranges from 11.5 to 115 feet of water
• 4 to 20 mA output
• 316L stainless steel construction
• IP68 protection rating
MicroPilot FMR10 Pulsed Radar Level Sensors (Starting at $793.00)
Endress+Hauser Micropilot FMR10 series pulsed radar liquid level sensors provide accurate, reliable non-contact liquid level measurement and offer one of the best price-to-performance ratios on the market.
Also Available
MISSION CRITICAL SOFTWARE FOR SYSTEMS OF EVERY SIZE
by Trihedral
Critical Systems Come in All Shapes and Sizes
Large automation systems can afford to dedicate enormous resources to eliminating downtime.
When it comes to keeping the lights on, or televisions broadcasting, we all agree, failure is not an option.
Yet scores of modestly-sized applications are no less critical. Small town utilities with limited budgets are responsible for providing critical services, safeguarding public health, and preventing environmental disasters.
CRITICALITY TRANSCENDS SIZE
SCADA developments
Think Again: Open controls: Why make it fit now?
Open process automation (OPA) is ready. Are you? At the 2024 ARC Industry Leadership Forum, Harry Forbes, ARC Advisory Group (not shown) discusses edge computing and OPA with (left to right) Robert Tulalian, IT-OT convergence global program manager, Global Shell Solutions; Don Bartusiak, president of Collaborative Systems Integration; Brad Mozisek, program manager, Wood OPA center of excellence; David Campain, global manager for process control systems for FLSmidth; Renato Pacheco Silva, CEO, Aimirim; and Dave DeBari, ExxonMobil leader of the open process automation program. Cover images courtesy: COPA, ExxonMobil, Aimirim, Mark T. Hoske, Control Engineering
INNOVATIONS
53 | New Products for Engineers
Safety switches, Coriolis flow meters, Embedded PCs, Moisture sensors, Combined PLC and HMI, Flexible power supplies, Intelligent edge automation, Ultra-mini reed switches, Three-phase monitoring relay. See more products online: www.controleng.com/products
55 | Back to Basics: Digital twin technology benefits. Control engineers working in industrial environments should seek digital twin technology to model their processes. Work through four barriers.
NEWSLETTERS ONLINE
IIoT Sensing, Connectivity and Analytics Newsletter
• Enhanced technologies help address manufacturing challenges.
System Integration Newsletter
• Learn from system integrators and system integration projects.
CE eBooks Newsletter
• Get your cybersecurity “Throwback Attacks” eBook today! Stay ahead. Subscribe! www.controleng.com/newsletters
• Global System Integrator Report
Did you see profiles about companies receiving the System Integrator of the Year award, case studies and more? www.controleng.com/GSIR
• Control Engineering eBook series, now available: Spring Edition
Motors & Drives
Featured motors and drives articles include how to optimize industrial motor communications, Part 4, cybersecurity and new electrostatic motor design: 90% less copper, no magnets, ultra-efficiency.
Learn more at: www.controleng.com/ebooks
• AI & ML
Artificial intelligence (AI) and machine learning articles in this eBook include 6 AI motion control applications to improve OEE, how generative AI works and helps engineers and more in this 43-page eBook!
More topics at: www.controleng.com/ebooks
• Control Engineering digital edition
Useful links to more info, photos: In the digital edition, click on headlines to see online version with more text and often more images and graphics. Download a PDF version.
www.controleng.com/magazine
Online Highlights
INSIGHTS
• “Control Systems: HMI, SCADA & PLCs” is a new research report from Control Engineering, CFE Media and Technology. See related webcast planned for April 22. (A) www.controleng.com/research and www.controleng.com/webcasts
• WEBCAST: Control system integration: Helping communications advance intelligent data sharing, analytics; with EtherCAT Technology Group, PI North America. https://www.controleng.com/webcasts/control-system-integration-helping-communications-advance-intelligent-data-sharing-analytics
• Users look for open industrial software to help in three ways; Stone Shi, Control Engineering China, at the Aveva China User Conference (B) www.controleng.com/articles/users-look-for-open-industrial-software-to-help-in-three-ways
• Bridging the Gap Podcast: Feb. 28: Ep. 9: Xavier Mesrobian on connecting OT and IT (C) www.controleng.com/articles/podcast/ep-9-xavier-mesrobian-on-connecting-ot-and-it
• Video: Expert Interview Series: Brianna Jackson, Interact Analysis on AC Drive Market www.controleng.com/video/expert-interview-series-brianna-jackson-interact-analysis-on-ac-drive-market
• 2D magnetic materials harnessed for energy-efficient computing; Adam Zewe, MIT News Office www.controleng.com/articles/2d-magnetic-materials-harnessed-for-energy-efficient-computing
ANSWERS NEWS
• PID spotlight, part 2: Know these 13 terms, interactions; Ed Bullerdiek, process control engineer, retired www.controleng.com/articles/pid-spotlight-part-2-know-these-13-terms-interactions
• What makes an HMI excel on the plant floor; Aaron Block is marketing content writer, Inductive Automation www.controleng.com/articles/what-makes-an-hmi-excel-on-the-plant-floor
• PLC hardware speed, I/O, communications, redundancy, Part 2: Future of the PLC; David Ubert, senior automation specialist at Black and Veatch, and Eelco van der Wal, managing director at PLCopen www.controleng.com/articles/plc-hardware-speed-i-o-communications-redundancy-part-2-future-of-the-plc
• IT/OT cybersecurity, part 1: Security challenges, trends and methods that don’t work; John Clemons, solutions consultant, LifecycleIQ Services; Tim Gellner, system integration consultant; Vicky Bruce, global capability manager for network and cybersecurity services; Rockwell Automation www.controleng.com/articles/it-ot-cybersecurity-part-1-security-challenges-trends-and-methods-that-dont-work
• How unified namespace drives efficiency, quality in manufacturing; Kudzai Manditereza, developer advocate for HiveMQ (D) www.controleng.com/articles/how-unified-namespace-drives-efficiency-quality-in-manufacturing
• How cables and wires play a key role in robotics; questions and answers with seven experts in robotics (E) Live on April 9 at www.controleng.com
How to augment sustainability with automation
Advanced process automation helps traditional industries meet green transformation goals.
Joachim Braun, division president of ABB Process Industries, told Control Engineering China that ABB intends to continue to help customers with sustainability solutions, especially in production of green hydrogen, production of car batteries, electrification of mines, industrial decarbonization, finding alternative solutions for the pulp and paper industry and other areas. Courtesy: ABB
Companies in process industries, notably mining, pulp and paper, metals and cement, face unprecedented challenges with intensifying global labor and skills shortages, supply chain uncertainty and growing demand for sustainability and decarbonization. Process automation can help. Process industry company leaders and operational teams use digital and intelligent automation technologies to develop smarter and more agile models for greater production efficiencies. ABB has products for discrete manufacturing, such as electric drives, motion control and control systems, as well as automation and digitalization software, products and services for process and hybrid industries. Control Engineering China interviewed Joachim Braun, division president of ABB Process Industries, about ABB’s business and technology progress and strategy. Excerpts follow with more online.
Industrial automation, software, services
Process automation is complex, with a variety of interconnected products, including electrical system cables, I/O setups, hardware and software systems. Braun said ABB Process Industries is dominated by two topics: Sustainability and decarbonization and digitalization.
Online: controleng.com
KEYWORDS: Process automation, sustainability
ONLINE: www.controleng.com/international and www.controleng.com/control-systems/dcs-scada-controllers
See more examples of digitalization, industrial sustainability here: www.controleng.com/articles/how-to-augmentprocess-industry-
• Sustainability and decarbonization: Many ABB customer industries are heavy emitters of carbon dioxide (CO2, a greenhouse gas), and these companies have driven emission reduction and sustainable development in recent years.
• Digitalization: Industries need to embrace digitalization for higher-quality products, higher productivity, more resilience and competitiveness.
Braun said general-purpose ABB products, such as switchgear, frequency converters, motors, distributed control systems (DCS) and instrumentation, can be seamlessly integrated to automate and digitalize mining, steel and non-ferrous metals, paper and cement operations, as well as food and beverage, battery manufacturing, data centers and other expanding areas. Domain expertise helps. Every mine hoist, for instance, differs depending on shaft depth, diameter, load and the geological conditions in the mine.
Supporting China with localization
As global digital transformation accelerates, China’s manufacturing is aiming for high-quality development driven by digitalization and innovation, evolving from the “world’s factory” to “intelligent manufacturing in China.” Braun said a similar vision from other countries creates exciting opportunities for more manufacturing-sector investments, often with large government subsidies to help. Traditional industries are eager to meet low-carbon goals and high-quality sustainable development during the energy transition process.
Localization has been an ABB strength, Braun suggested. More than 90% of ABB sales in China, the company’s second-largest market worldwide, come from locally manufactured products, solutions and services. In line with its long-term commitment of “in China, for China,” ABB has been further optimizing its business footprint by localizing its value chain. The ABB Hoist Manufacturing Center in Lingang, Shanghai, begun in 2011, has provided Chinese users with mine hoist systems, hoist brake systems and training services. ABB has a high-power rectifier (HPR) manufacturing base and local engineering center in China and Switzerland. In the future, it will be possible to take advantage of additive manufacturing, collaborative robots and augmented reality/virtual reality (AR/VR) for training and field service delivery.
As generative artificial intelligence (AI), such as OpenAI’s ChatGPT, has become available, AI and other emerging technologies for industrial use are expanding automation applications. Braun said many of ABB’s digital solutions rely on AI or machine learning (ML) for sifting data, reading trends and identifying operating conditions. Braun said ABB seeks growth around sustainability solutions, such as production of green hydrogen, production of car batteries, electrification of mines, industrial decarbonization, finding alternative solutions for the pulp and paper industry and other areas, contributing to a better world in the future. ce
Stone Shi is executive editor-in-chief, Control Engineering China; Edited by Mark T. Hoske, content manager, Control Engineering, CFE Media and Technology, mhoske@cfemedia.com.
AutomationDirect: much more than just a “.com”
AutomationDirect is a non-traditional industrial controls company using the best ideas from the consumer world to serve your automation needs. We deliver quality products fast for much less than traditional suppliers, and support you every step of the way. See below . . .
Our campus is located about 45 minutes north of Atlanta, GA, USA. We’re all here - our sales and technical support teams, purchasing, accounting, and of course our huge warehouses and speedy logistics team.
You want complete product information to make the right purchase decision.
Whether you’re deciding on purchasing our products or learning our products after you buy, why jump through hoops or even pay for the information you need?
We have exhaustive documentation all freely available online, including overviews, technical specifications, manuals and 2D and 3D CAD drawings.
We have over 1,500 videos online to get you up to speed quickly. We even provide FREE online PLC training to anyone interested in learning about industrial controls.
You want great prices.
For over 25 years, we’ve been offering a better value on industrial controls by running our direct business efficiently and passing the savings on to you. No complex pricing structures or penalty for small orders, just low everyday prices on everything from fuses to motors.
Programming software for controller products can be costly, so we help you out by offering FREE downloadable software for all our latest PLC families and C-more HMIs. No license or upgrade fees to deal with!
http://go2adc.com/why
You don’t want to wait for your order.
We have fast shipping, and it’s FREE if your order is over $49.*
At AutomationDirect, we strive to have what you need, when you need it. We’ve invested heavily into infrastructure, inventory, and warehouse automation so that we can continue to provide you with quality products, at great prices, extremely fast!
*Order over $49, and get free shipping with delivery in 2 business days (or less) within the U.S. (Certain delivery time or shipping cost exceptions may apply; see Terms and Conditions online for complete details: http://go2adc.com/terms)
You insist on getting better service and you want it FREE.
Our technical support team provides superior assistance and has consistently received high ratings from satisfied customers. And it won’t cost you a cent!
Before, during, and after any sale, contact us with questions and we’re glad to help: http://go2adc.com/support-ss
Our primary focus has always been customer service: practical products, great prices, fast delivery, and helpful assistance. But the intangible value of customer service is something that cannot be faked, automated or glossed over.
Our team members here at AutomationDirect approach every day with one goal in mind - serve the customer. If the answer to any decision is “Yes, this is good for our customers”, then we do it, whether personally or via self-serve features on our site.
You want to be confident in our products and our commitment to you.
We stand behind our products and guarantee your satisfaction. We want you to be pleased with every order. That’s why we offer a 45-day money-back guarantee on almost every stock product we sell. (See Terms and Conditions online for exclusions.)
The best values in the world.
We’ve shopped around to bring you the most practical industrial control products at the best prices!
BRIAN in CLEVELAND, TN:
“I love having a single website that I can use for selecting and purchasing everything I need for our industrial automation projects. With fast shipping times, an active support forum, and competitive prices, there’s nothing else that compares. Thanks!”
CODY in MOORESVILLE, NC:
“Best phone support I’ve gotten from a supplier yet, answers the phone quickly and has a very knowledgeable staff.”
DAVE IN MITCHELL, SD:
“In 10 years of using the company products and services I’ve only had a few returns. The products are reliable and the customer service has always been second to none. I plan to continue using the company into the future.”
JOHNNY IN NEW CANEY, TX:
“Excellent company. Use it regularly and highly recommend it to others.”
BEN IN PAHRUMP, NV:
“We use AutomationDirect as a standard supplier for many of our UL508 control panel components, and have never had a single issue! Automation Direct has an incredible stock of inventory and product selections, and the two day shipping gets us the parts we need right on time!”
Mark in AYLMER, ON:
“We have been buying from Automation Direct for quite a few years now and we have been really happy with the service and range of products available and the prices are usually as good or better than elsewhere.”
Daniel in ALLIANCE, OH:
“They are the best game in town for industrial PLCs”
Automation mergers noted in February 2024
BUNDY GROUP, an investment bank and advisory firm, said the automation market continues to experience a tremendous amount of mergers and acquisitions (M&A) and capital markets activity with nine transactions highlighted in February. Drivers of this activity include the growth-oriented nature of the automation market, the consolidation opportunities within the industry and the strength of many of the companies operating within it. The automation market has attracted a critical mass of strategic buyers and financial sponsors. Bundy Group’s current engagements and owner relationships include control system integration, robotics, automated material handling, automation distribution, artificial intelligence and cybersecurity.
February transactions
The February report includes companies in:
• Material handling automation systems with integrated controls and conveyors for integration into assembly systems and production lines
• IoT technology for remote monitoring solutions
• Material handling and industrial productivity equipment
• Industrial controls, monitoring and protection products
• Automation engineering services and training firm
• Automation technologies
• Instrumentation, controls and electrical solutions
• Assembly tools and material handling solutions for automotive and automation integration
• Robotics manufacturing and integration.
For more details, see: www.controleng.com/articles/automation-mergers-acquisitions-capital-markets-analysis-february-2024/ ce
Clint Bundy is managing director, Bundy Group, which helps with mergers, acquisitions and raising capital. Bundy Group is a Control Engineering content partner. Edited by CFE Media and Technology.
Low-voltage ac drives
THE LOW-VOLTAGE ac drive market is undergoing a profound shift. Drives have traditionally been treated and marketed as standalone components, but current demands from machine builders and end users have necessitated viewing drives as part of a system. Demand for more modular, compact machinery has increased innovation, with development of space-saving and integrated solutions. The concept of multi-axis has been a defining characteristic of servo drives and not commonly associated with low-voltage motor drives. www.controleng.com/articles/low-voltage-ac-drives-evolving-toward-compact-multi-axis-integrated-solutions. ce
Brianna Jackson is a research analyst at Interact Analysis for the Industrial Automation team. Interact Analysis is a CFE Media and Technology content partner. Edited by Control Engineering.
Machine vision market, 3D cameras outlook: 13% growth in five years
3D CAMERAS are forecast to drive the global machine vision market over the next five years, fueled by strong growth within mobile robots and robotic picking. The predicted compound annual growth rate (CAGR) for 3D cameras of 13% to 2028 is anticipated to be much higher than the single-digit CAGR of 6.4% anticipated for the global machine vision market as a whole. Revenue for 3D machine vision cameras is expected to grow from $767 million in 2022 to almost $1.6 billion in 2028, with particularly strong growth projected for time-of-flight and stereo-vision cameras.
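As a quick arithmetic check of the figures above (an editorial sanity check, not part of the Interact Analysis report), the revenue endpoints imply roughly the quoted growth rate:

```python
# Sanity check of the 3D-camera CAGR cited above (illustrative, not from the report).
start_revenue = 767e6   # 2022 revenue, USD
end_revenue = 1.6e9     # 2028 revenue, USD (approximate)
years = 2028 - 2022

cagr = (end_revenue / start_revenue) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # ~13.0%, consistent with the forecast
```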
The report predicts a steady growth rate of 6.4% over the forecast period, with modest growth of 1.4% expected in 2024.
Key factors generating such high growth for 3D cameras, particularly over the longer term, include anticipated price declines for all 3D camera types. This enables upgrades. ce
Jonathan Sparkes is a research analyst with Interact Analysis.
Learn more at www.controleng.com/articles/why-3d-cameras-are-driving-machine-vision-market-growth.
Every machine vision application in Interact Analysis research will see significant growth from 2022 to 2028, especially bin picking and inspection. Courtesy: Interact Analysis
New PICMG InterEdge standard helps open, modular process control systems
PICMG, the consortium for open hardware specifications, announced the release of InterEdge, a modular architecture for process control systems (PCS). The InterEdge specification, said to be compatible with IEC 61499 and IEC 61131, “promises to revolutionize the industry with an interoperable, multi-vendor alternative to proprietary industrial PCs (IPCs), programmable logic controllers (PLCs) and distributed control systems (DCSs),” the organization said in a Feb. 26 announcement, as explained to Control Engineering at the ARC Industry Forum in an open process automation discussion.
PICMG InterEdge standard architecture helps open process automation, said PICMG and Open Process Automation Forum. InterEdge 0 R1 supports single- and multi-channel I/O implementations; a forthcoming specification will be optimized for single-channel I/O. See www.picmg.org/openstandards/interedge. Courtesy: PICMG
InterEdge defines a vendor-neutral, open standard for edge computing and I/O module hardware, PICMG said. It segments hardware into compute modules, switch modules and I/O modules. All of these modules are connected via a common backplane, enabling easy customization and expansion of industrial automation functions. InterEdge 0 R1 supports single- and multi-channel I/O implementations, and a forthcoming specification will be optimized for single-channel I/O.
“Business needs evolve at an ever-increasing rate,” said Francisco Garcia, Americas regional instrument lead at ExxonMobil Technology and Engineering Co. and member of the InterEdge technical working group. “InterEdge delivers an interchangeable base hardware standard for industrial manufacturers looking to adapt to changing business needs. As a result, providers can deploy and scale dedicated physical assets and focus on value-added software and services.”
PICMG said with the modular approach of InterEdge, it can flexibly incorporate the
functions of disparate automation systems into one platform. This common platform can be deployed across automation, chemical refining, oil and gas, pharmaceuticals, metals and mining, pulp and paper, food and beverage and other process industries.
By replacing proprietary edge devices, InterEdge eliminates vendor lock-in, the organization said, simplifying integration and maintenance and enabling online upgrades, for significant cost savings when using open process automation (up to 52% in initial hardware and software costs, according to system integrators involved).
In the past, edge components remained in place for decades with static functional capabilities due to the difficulties of upgrades, PICMG explained. In contrast, the hot-swappable interoperability of InterEdge allows industrial organizations to quickly adapt to changing market demands and technological advancements, PICMG said. Now manufacturers more easily can improve competitiveness through emerging trends in artificial intelligence (AI), industrial internet of things (IIoT) and Industry 4.0 initiatives.
Matt Burns, global director of technical marketing at Samtec and chair of the InterEdge Technical Working Group, said, “InterEdge allows industrial manufacturers to transition from proprietary hardware to an open architecture where they can choose fit-for-purpose components, replace obsolete hardware, add computational resource and upgrade hardware security in a running plant at virtually zero switching costs.”
“InterEdge does for industrial control systems what the Open Compute Project did for data centers,” Burns added.
Widespread industry support
InterEdge originated as part of the O-PAS (Open Process Automation) Standard from The Open Group Open Process Automation Forum (OPAF), a consortium of more than 110 leaders in process automation including system suppliers, engineering firms, governmental bodies, research institutions and end customers. The PICMG InterEdge standard joins other PICMG multi-vendor hardware standards, including CompactPCI, COM Express and PCI-ISA, among those familiar to automation. PICMG and OPAF have committed to working together to push for the same widespread adoption of InterEdge. ce
Edited by Mark T. Hoske, content manager, Control Engineering, mhoske@cfemedia.com, with information from PICMG. See more at www.controleng.com/articles/new-picmg-interedge-standard-helps-open-modular-process-control-systems
ARC Industry Leadership Forum 2024
In addition to the posts below, news from the ARC Industry Leadership Forum 2024 ranked in top-read article tallies for Feb. 12-18, Feb. 19-25, Feb. 26-March 3, for the month of February, March 4-10, March 11-17 and for the month of March. As of late March, Control Engineering related coverage included the following.
• How to protect and safeguard critical OT infrastructure - Operational technology (OT) is at greater risk from cybersecurity attacks than ever and requires a plan, education and attention. www.controleng.com/articles/how-to-protect-and-safeguard-critical-ot-infrastructure
• How to lower industrial cybersecurity risk: Help from CISA, INL, ARC Advisory Group - Industrial cybersecurity advice aims to help. www.controleng.com/articles/how-to-lower-industrial-cybersecurity-risk-help-from-cisa-inl-arc-advisory-group
• ARC Industry Forum 2024: Industrial AI potential is here, now - Industrial artificial intelligence (AI) was a dominant theme; many companies and users are only beginning to realize benefits; three things industrial AI applications should do; video. www.controleng.com/articles/arc-industry-forum-2024-industrial-ai-potential-is-here-now
• How to excel in digital transformation with strategy, software - Hexagon announced a software and business strategy to help digital transformation. www.controleng.com/articles/how-to-excel-in-digital-transformation-with-strategy-software
• New products for next-generation, open automation infrastructure controller - Schneider Electric delivers next-generation, open automation infrastructure, distributed control node (DCN) with independent software-defined controller; worked with Intel, Red Hat. www.controleng.com/articles/new-products-for-next-generation-open-automation-infrastructure-controller
• DCS of the future, cybersecurity services, integrated asset management - ABB discussed future process control capabilities, cybersecurity services and integrated asset management to help users interact with automation more easily. www.controleng.com/articles/dcs-of-the-future-cybersecurity-services-integrated-asset-management
• Save time in energy, automation projects: Digital twins - Case study, Shell Deepwater: Detailed digital twin model used with major industrial automation and operations projects in energy management can cut time to find information by half: Bentley Systems. www.controleng.com/articles/save-time-in-energy-automation-projects-digital-twins
• Get ready to upgrade process controls: 3 ways to save - Honeywell says users will find more process control system advantages in its R530 Experion PKS update; savings include up to half the controllers and system cabinets needed and up to 90% of fiber-optic budget. www.controleng.com/articles/get-ready-to-upgrade-process-controls-3-ways-to-save
• New cost analysis: Open process automation saves 52% versus DCS - Part 1 series: While COPA projects 52% hardware and software savings from open process automation, O-PAS interoperable OPAF-aligned products are in field trials, devices are being tested, an adoption guide is underway and system integrators are preparing use of open controls. Lifecycle costs are measured at half of a distributed control system over 25 years. www.controleng.com/articles/new-cost-analysis-open-process-automation-saves-52-versus-dcs
• How to use open process automation today; see video - Part 2 series on OPAF progress provides key updates in Open Process Automation Standards (O-PAS), field trials, guidance, technologies, strategies and system integrator platforms.
• Get components, software, help for open process automation now - Part 3 series on OPAF implementations: Strategies and tools are enabling devices and platforms using technologies aligned with Open Process Automation Standards. See O-PAS-aligned components list and some project summaries shared.
• Five ways IT can accelerate digital transformation, OPAF - Part 4 covers how Rockwell Automation hardware and software support open process automation and how expanded IT capabilities can influence business outcomes and digital transformation.
• Eight ways to optimize operations with industrial AI - A digital business platform can help industrial customers accelerate through challenges across industries. The Siemens Industrial Copilot for operations helps resolve production challenges eight ways. www.controleng.com/articles/eight-ways-to-optimize-operations-with-industrial-ai ce
FIGURE: Harry Forbes, research director, ARC Advisory Group, said traditional industrial automation companies (and others) are embracing and introducing virtualized industrial controllers, including Emerson, Honeywell, Rockwell Automation, Schneider Electric, Siemens and Siemens Energy. These can help enable open process automation, as explained at the 2024 ARC Industry Forum. Courtesy: ARC Advisory Group
COVER: An ExxonMobil demonstration showed how a function block from Aimirim, using model-predictive control (MPC, feedforward for disturbance rejection) instead of PID control, resulted in a significant decrease in standard deviation for pressure control and flow control, discussed at the 2024 ARC Industry Forum. Courtesy: ARC Advisory Group, ExxonMobil, Aimirim
COVER: Compared to DCS 1, Wood calculated 47% savings for a 25-year cost of ownership using the OPA system, as explained at the 2024 ARC Industry Forum. Courtesy: ARC Advisory Group, Coalition for Open Process Automation (COPA)
FIGURE: In the engineering, procurement and construction (EPC) firm's comparison of a COPA system versus a DCS, initial cost savings were 52% for system hardware and software, though about 10% total project savings including system integration. Those involved said system integration costs are expected to decrease significantly over time with application reuse and portability, noted at the 2024 ARC Industry Forum. Courtesy: ARC Advisory Group, Coalition for Open Process Automation (COPA)
FIGURE: A customer comparison, using DCS 1 as the benchmark (=1), showed DCS 2 and DCS 3 at between 7/10 and 8/10 the cost of DCS 1, and the OPA system at about half the cost of DCS 1. Including the cost of plant downtime caused by control system maintenance, customer-calculated savings amount to 60% to 70%, Bartusiak said at the 2024 ARC Industry Forum. Courtesy: ARC Advisory Group, Coalition for Open Process Automation (COPA)
Digital edition? Click on headlines for more details. See news daily at www.controleng.com
AI, ML and future-proof facilities
MATERIAL HANDLING facilities are getting more sophisticated than ever, but the right data is needed to ensure their future growth. Artificial intelligence (AI) and machine learning (ML) can help in that evolution, said Christopher Connelly, business development manager at O’Neal LLC, and Billy Few, instrumentation and controls department head at Bridge Automation LLC, in their presentation “Designing Futureproof Material Handling Facilities” at Modex in Atlanta. AI can do some tasks that typically require human intelligence, and ML allows software applications to become more accurate in predicting outcomes without explicit programming. ML models can identify patterns and grow as the system receives more information. This can help predictive maintenance systems improve visual monitoring (an AI process) and ML models that learn from tabular data collected from sensors and machines. If in the thinking or planning phase, start with a five-step Industry 4.0 audit:
1. Start with problems or deficiencies you’re experiencing
2. Identify specific goals you have in mind
3. Conduct a facility audit to investigate
4. Complete a full study with a tailored Industry 4.0 roadmap
5. Identify and prioritize high-ROI projects.
Connelly said future-proofing a material handling facility starts with identifying where a vector database might help: material flow optimization, improved storage algorithms, inventory levels and minimum reorder points, demand forecasts, optimized traffic management and optimized package sizes. ce
Chris Vavra is web content manager for CFE Media and Technology.
Automation creates a better world
Advancements in supervisory control and data acquisition (SCADA) software include cost-effective licensing, all-in-one installation, redundancy and load distribution across multiple servers, built-in alarms, trending, analysis, reports, libraries and thin-client connectivity with unlimited concurrent users, as explained at the 2024 VTScada conference keynote session March 21, during the week-long Orlando ScadaFest event.
Glenn Wadden, president, VTScada by Trihedral, a Delta Group company, provided perspective for today’s software advances, saying, “Automation is a catalyst for a civilized life, improving health, safety, efficiency to make the world a better place, resolving challenges. We focus on ease of use to leverage people’s time. People need to get their jobs done.”
Alan Hudson, U.S. sales manager,
VTScada by Trihedral, touted application sessions at this year’s conference and encouraged VTScada users to get help in the developer room training sessions and by seeking feedback in sessions.
Barry Baker, vice president, VTScada by Trihedral, and president of U.S. Trihedral, said benefits of advanced VTScada skills include avoiding unwarranted code and support, leveraging features to reduce errors, and more security and reliability. Wadden said VTScada software has nearly 100 new features since the last ScadaFest. Report Studio capabilities embedded in the current update of VTScada have drag-and-drop configuration, practical for ad hoc reporting, with related features to be enhanced over time. ce
Mark T. Hoske is content manager, Control Engineering, mhoske@cfemedia.com.
Glenn Wadden, president (left), VTScada by Trihedral, a Delta Group company, said supervisory control and data acquisition (SCADA) software needs to demonstrate ease of use, reliability and code quality, avoiding the technical debt created when errors add painful code fixes, rather than starting with correct coding. Barry Baker, vice president, VTScada by Trihedral, and president of U.S. Trihedral, praised 2024’s ScadaFest as the largest Trihedral event to date, noting that VTScada expertise is particularly valuable in mission-critical industries, such as aviation. VIDEO: Wadden and Baker talk to Control Engineering about SCADA trends. Courtesy: Mark T. Hoske, Control Engineering
PO Box 471, Downers Grove, IL 60515
630-571-4070, Fax 630-214-4504
Content Specialists/Editorial
Mark T. Hoske, Content Manager 630-571-4070, x2227, MHoske@CFEMedia.com
David Bishop, chairman and a founder, Matrix Technologies, www.matrixti.com
Daniel E. Capano, senior project manager, Gannett Fleming Engineers and Architects, www.gannettfleming.com
Frank Lamb, founder and owner, Automation Consulting LLC, www.automationllc.com
Joe Martin, president and founder, Martin Control Systems, www.martincsi.com
Rick Pierro, president and co-founder, Superior Controls, www.superiorcontrols.com
Eric J. Silverman, PE, PMP, CDT, vice president, senior automation engineer, CDM Smith, www.cdmsmith.com
Mark Voigtmann, partner, automation practice lead, Faegre Baker Daniels, www.FaegreBD.com
CFE Media and Technology Contributor Guidelines Overview
Content For Engineers. That’s what CFE Media stands for, and what CFE Media is all about – engineers sharing with their peers. We welcome content submissions for all interested parties in engineering. We will use those materials online, on our website, in print and in newsletters to keep engineers informed about the products, solutions and industry trends. www.controleng.com/contribute explains how to submit press releases, products, images, feature articles, case studies, white papers, and other media.
* Content should focus on helping engineers solve problems. Articles that are commercial or are critical of other products or organizations will be rejected. (Technology discussions and comparative tables may be accepted if non-promotional and if contributor corroborates information with sources cited.)
* If the content meets criteria noted in guidelines, expect to see it first on our Websites. Content for our e-newsletters comes from content already available on our Websites. All content for print also will be online. All content that appears in our print magazines will appear as space permits, and we will indicate in print if more content from that article is available online.
* Deadlines for feature articles for the print magazines are at least two months in advance of the publication date. It is best to discuss all feature articles with the appropriate content manager prior to submission.
Learn more at: www.controleng.com/contribute
Mark T. Hoske, Control Engineering
Open controls: Why make it fit now?
Open process automation promises 52% hardware/software savings, 47% lifecycle savings, more capable and frequent software upgrades, along with interoperability and interchangeability. But it requires more integration at first; likely less, later.
PAY ATTENTION: Open process automation represents a market shift; some may be left behind.
Open process automation translates into 52% hardware/software initial savings (though more system integration costs, at least initially), but a 10% start-up savings compared to a distributed control system (DCS). It also means 47% lifecycle savings over 25 years. For many, this could mean hundreds of millions of dollars in savings, per site, as explained at the 2024 ARC Industry Forum.
Lower cost, more capable, hardware-independent controls
The hardware-independent control software is more capable and can be applied independent of hardware brands. A hardware standard is making hardware interchangeable among participating vendors. (See PICMG standard in news.)
Automation users have looked at savings from information technology (IT)-based standards in server farms and wondered why vendors in the automation, controls and instrumentation markets haven't offered similar benefits.
In operational technology (OT) applications of automation, it has been comparatively difficult to integrate competing technologies, even when following “standards” (networking, programming and other standards didn't allow interoperability). Finally, end users, seeing productivity gains in IT, wanted interoperability and interchangeability and are getting it by creating standards and voting with budgets.
Open controls means more innovation, incremental improvements, adaptability
This transition allows IT companies to make market-share gains in automation, and it challenges automation companies to prove to end users and system integrators the usefulness of their technologies for adapting to sustainability, artificial intelligence and machine learning (AI/ML), labor shortages, IT/OT convergence, smart manufacturing, digitalization and Industry 4.0.
IT and OT suppliers need to prove to customers that they can serve this transitioning market. Think again about what this means for OT automation and controls. For IT suppliers, it means new market opportunities. For OT suppliers, it means a transition to open systems. For end users and system integrators, it means savings, more capabilities and more opportunities. ce
Mark T. Hoske, content manager, Control Engineering, CFE Media and Technology, mhoske@cfemedia.com.
Learn more in this issue and at https://www.controleng.com/articles/new-cost-analysis-open-process-automation-saves-52-versus-dcs, first of a four-part series on open process automation. https://www.opengroup.org/forum/open-process-automation-forum
ANSWERS
Ed Bullerdiek, Process Control Engineer, Retired
PID spotlight, part 3: How to select process responses
Process type determines the rules and methodology used to tune the PID controller.
An infinite variety of process responses exist, so it would seem impossible to use an algorithm with just three tuning parameters to control all possible process responses. Unfortunately, this is true. However, a proportional-integral-derivative (PID) controller can handle the vast majority of the processes in most applications. How a PID controller is tuned depends on the type of process response; each type of process requires different tuning rules and procedures. Correctly identifying and classifying process responses is required to take the most effective approach to tuning the PID controller.
Definitions: Four process responses
Process responses fall into four general categories: Self-limiting, integrating, exponential and complex.
Each type of response is categorized based on the number and types of internal feedbacks it has. The first three have, respectively, a negative internal feedback, no internal feedback, and a positive internal feedback. All three of these can usually be controlled with a PID controller. The complex category includes any process that has multiple internal feedbacks. It may or may not be possible to control a complex process with a PID controller. Following is a discussion of each of the first three process response types, including any relevant subcategories. It’s necessary to identify each process response because the procedure used to tune a PID controller differs for each type.
LEARNING OBJECTIVES
Know that control loop tuning rules and processes depend on process type.
Identify process type based on internal process feedbacks.
Understand which types of processes require automatic control.
CONSIDER THIS
First things first with PID: What’s the process and process type?
ONLINE
More on PID and APC: www.controleng.com/control-systems/pid-apc
FIGURE 1: A self-limiting process response to a controller output change eventually lines out at a new steady-state after the controller output is changed. This behavior is found in any process that has an internal negative feedback. All figures courtesy: Ed Bullerdiek, retired control engineer
A self-limiting process response (Figure 1) is one which, like the name says, eventually lines out at a new steady-state after the controller output is changed. This behavior is found in any process that has an internal negative feedback. Examples would include flow (pressure drop increases with flow) and most temperatures (heat loss increases with rising temperature). There are many more self-limiting processes. Within the world of self-limiting responses there are three subcategories: Lag dominant, “moderate” and deadtime dominant.
Lag dominant process response
First, some definitions before discussing the lag dominant process response:
• Deadtime (Dt): Time before the process starts to respond to a change in the controller output.
• Lag (T1): The time for the process to get to 63.2% of the final (steady-state) response. When characterizing process responses, most will have multiple lags; however, for simplicity, generally all the lags are lumped together into one lag and the process is treated as first order plus deadtime (FOPDT).
FIGURE 2: A lag dominant self-limiting process has a deadtime that is less than ¼ the lag time constant.
The Figure 2 process response is a lag dominant response, which is defined as a response where the lag time is more than four times the deadtime:
T1 > 4 × Dt
These definitions are according to Greg McMillan, Tuning and Control Loop Performance, 4th Edition.
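To make the FOPDT picture concrete, here is a minimal simulation sketch (illustrative gain, deadtime and lag values only, not from the article) of a lag dominant process responding to a controller output step; the 63.2% crossing recovers the lag time constant T1:

```python
def fopdt_step_response(Kp, T1, Dt, du, t_end, dt=0.1):
    """Simulate a first-order-plus-deadtime (FOPDT) process responding to a
    controller-output step of size du applied at t = 0. Returns (times, pv)."""
    times, pv = [], []
    y, t = 0.0, 0.0
    while t <= t_end:
        times.append(t)
        pv.append(y)
        u = du if t >= Dt else 0.0        # deadtime: no response until t >= Dt
        y += (Kp * u - y) * dt / T1       # first-order lag toward Kp * u
        t += dt
    return times, pv

# Illustrative numbers only: a lag dominant process (T1 > 4 * Dt).
Kp, T1, Dt = 2.0, 10.0, 1.0
times, pv = fopdt_step_response(Kp, T1, Dt, du=1.0, t_end=60.0)

final = Kp * 1.0
t63 = next(t for t, y in zip(times, pv) if y >= 0.632 * final)
print(f"63.2% of final value reached at t = {t63:.1f}; lag estimate = {t63 - Dt:.1f} (true T1 = {T1})")
```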
Deadtime dominant process response
This is a deadtime dominant response (Figure 3), defined as a response where the lag is less than ¼th the deadtime:
T1 < Dt / 4
‘Moderate’ self-limiting process response
A “moderate” self-limiting process is by definition neither lag dominant nor deadtime dominant; that is:
Dt / 4 < T1 < 4 × Dt
It’s helpful to take a peek ahead, to understand that:
• Moderate self-limiting processes can benefit from the use of derivative action in the PID controller. Controller gain will be roughly the inverse of the process gain.
• Lag-dominant processes can allow for a large controller gain, often several multiples of the inverse of the process gain. Derivative action is not recommended as it is usually detrimental to controller response.
• Deadtime dominant processes require the controller gain to be less than the inverse of the process gain. Derivative action is not recommended as it is always detrimental to controller response.
FIGURE 3: A deadtime dominant process has a deadtime that is more than 4 times the lag time constant.
‘Automatic control of a self-limiting process can be considered “optional.” Automatic control of an integrating process is mandatory.’
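Those ratio thresholds reduce to a simple decision rule. The sketch below is only an illustration of the classification and the rules of thumb above, assuming deadtime Dt and lag T1 have already been estimated from a step test; it is not a substitute for the tuning procedures discussed in this series:

```python
def classify_self_limiting(Dt, T1):
    """Classify a self-limiting (FOPDT) response using the ratio rules above.

    Returns (category, guidance); the guidance strings paraphrase the rules of
    thumb in the article, and actual gains still come from a tuning procedure."""
    if T1 > 4 * Dt:
        return ("lag dominant",
                "Large gain possible (several multiples of 1/process gain); avoid derivative.")
    if T1 < Dt / 4:
        return ("deadtime dominant",
                "Gain less than 1/process gain; derivative is always detrimental.")
    return ("moderate",
            "Gain roughly 1/process gain; derivative action can help.")

# Illustrative values only.
for Dt, T1 in [(1.0, 10.0), (10.0, 1.0), (2.0, 3.0)]:
    category, guidance = classify_self_limiting(Dt, T1)
    print(f"Dt={Dt}, T1={T1}: {category} -> {guidance}")
```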
Integrating process response
An integrating response is characteristic of mass or energy balance processes. If what’s going out doesn’t match what’s going in, the vessel level will fill or empty until it overflows or runs dry. Physically there is no internal negative feedback that will provide stability. This means that an integrating process cannot be left uncontrolled. Automatic control of a self-limiting process can be considered “optional.” Automatic control of an integrating process is mandatory.
FIGURE 4: A “moderate” self-limiting process has a lag and deadtime that are about equal (ratio between 0.25 and 4).
Figure 5 shows an integrator response (level) to changes in the output flow. When the output flow is increased, the level falls continuously until the output flow is lowered. The level then rises continuously until the output flow is raised to its initial value (to match the input flow). The process shown does not have deadtime or any process lags; however, these may be present and must be accounted for during controller tuning. Finally, lag dominant self-regulating processes that are sufficiently slow may be treated as a “near integrator.” Use the tuning process for an integrator to tune near integrators, which allows a greatly reduced time spent tuning (because there’s no need to wait for a slow process to come to steady state).
FIGURE 5: An integrating process response to a controller output change: When the output flow is increased the level falls continuously until the output flow is lowered. The level then rises continuously until the output flow is raised to its initial value (to match the input flow). The process here does not have deadtime or any process lags, however these may be present and must be accounted for during controller tuning.
ONLINE: controleng.com
Part 1: Three reasons to tune control loops: Safety, profit, energy efficiency
PID spotlight, part 2: Know these 13 terms, interactions: www.controleng.com/articles/pid-spotlight-part-2-know-these-13-terms-interactions
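The Figure 5 behavior follows directly from a mass balance, as the short simulation below illustrates (tank area, flows and timing are assumed values, not the article’s data):

```python
def simulate_level(inflow, outflow_profile, area=2.0, level0=1.0, dt=0.1, t_end=30.0):
    """Integrating process: level integrates (inflow - outflow) / area over time.
    outflow_profile is a function of time t returning the manipulated outflow."""
    t, level = 0.0, level0
    history = []
    while t <= t_end:
        level += (inflow - outflow_profile(t)) / area * dt   # no internal feedback: pure integration
        history.append((t, level))
        t += dt
    return history

# Step the outflow up at t=5, below the inflow at t=15, back to match at t=25 (illustrative).
inflow = 1.0
def outflow(t):
    if 5.0 <= t < 15.0:
        return 1.2      # outflow > inflow: level falls continuously
    if 15.0 <= t < 25.0:
        return 0.8      # outflow < inflow: level rises continuously
    return 1.0          # matched flows: level holds, but nothing restores it on its own

for t, level in simulate_level(inflow, outflow)[::50]:
    print(f"t={t:5.1f}  level={level:.3f}")
```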
Exponential process response
Processes that exhibit exponential response have a positive internal feedback process. The process we normally associate a positive internal feedback with is an exothermic reaction. As temperature climbs, heat generation increases, which leads to higher temperatures, which leads to more heat generation, and so on. Failure to positively control these processes will result in an unpleasant outcome, putting people and/or property at risk. Automatic control of an exponential process is mandatory.
Figure 6 shows the inherent danger of positive feedback. The cooling to the (imaginary) exothermic reaction is reduced 1% from the 3-to-5-minute mark, after which it is restored to its original value. Initially the rate of climb is very small and may be missed by an operator. But eventually the positive feedback causes the reaction to run away exponentially.
FIGURE 6: An exponential process response to a brief controller output change shows the inherent danger of positive feedback.
Complex process response
I’m not going to show a process response for a complex process because the shape of the response could be almost anything. It may be possible to control a complex process with a PID controller (and tune the controller) if its response roughly acts like any of the other types of processes. Complex process responses occur when the overall process has internal mass or energy recycles. Distillation towers, feed/product heat exchange networks and reactant recycle streams will create complex process responses in temperature, pressure, level and composition controls. If a complex process cannot be controlled by an ordinary PID controller it often can be controlled through the use of advanced PID features and/or the addition of feedforward or decouplers to manage interaction between parts of the process.
Pick the process, identify type
The methodology used to tune a PID controller depends on the type of process the PID is controlling. Before you can pick the tuning methodology, you must be able to identify the type of process. Processes are categorized into four types.
Self-limiting processes are by far the most common. These processes, when disturbed, will settle at a new steady-state value because they have internal negative feedback, which provides stability. Flows and most temperatures are examples of self-limiting processes. Automatic control of these processes may be optional.
Integrating processes are the second most common. These processes will, when disturbed, trend in a new direction until they meet a constraint. There is no internal feedback to moderate the response. Levels are the most common example. Automatic control of these processes is mandatory.
Exponential processes are unusual. These processes, when disturbed, will, due to positive internal feedback, proceed with increasing velocity (exponentially) toward a constraint. Exothermic reactors are the most common example. Automatic control of these processes is mandatory.
Complex processes occur in processes that have internal recycles, either mass or energy. If these processes act sufficiently like any of the above, they can be controlled using a PID controller (and can be tuned as if they are one of the above types). If not, then control may require more sophisticated techniques. ce
Ed Bullerdiek is a retired control engineer with 37 years of process control experience in petroleum refining and oil production. Edited by Mark T. Hoske, content manager, Control Engineering, mhoske@cfemedia.com.
Getting More from OPC A&E
This whitepaper discusses leveraging OPC A&E Classic (OPC Alarms and Events) data to optimize operations and maintenance for industrial systems. While OPC A&E offers valuable insights, accessing and integrating this data can be challenging due to its decentralized nature and lack of networking tools typically used for OPC DA. A good A&E connectivity solution should facilitate secure access, networking, aggregation, protocol conversion, and redundancy for OPC A&E data.
Secure Access and Protocol Conversion
For secure access, a tunnel/mirror approach can send OPC A&E data over networks without relying on DCOM, providing a secure and reliable connection. Combined with protocol conversion, this approach allows A&E data to be translated into various formats for reporting packages, SCADA, and other systems. In the process, such a tool should be able to aggregate A&E data from multiple sources to simplify analysis and reduce network traffic.
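As a purely conceptual sketch of the aggregation step described above (this is not code from the whitepaper and uses no real OPC toolkit; the record fields and source names are hypothetical), merging events from several servers into one time-ordered, severity-filtered stream might look like this:

```python
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class AlarmEvent:
    """Minimal alarm/event record; real OPC A&E attributes are richer than this."""
    source: str        # e.g., server or plant-area name
    timestamp: float   # seconds since epoch
    severity: int      # 1 (low) to 1000 (high), per the OPC A&E convention
    message: str

def aggregate(streams: Iterable[Iterable[AlarmEvent]], min_severity: int = 0) -> List[AlarmEvent]:
    """Combine events from multiple sources into one time-ordered list,
    filtering out low-severity chatter to reduce downstream traffic."""
    merged = [e for stream in streams for e in stream if e.severity >= min_severity]
    return sorted(merged, key=lambda e: e.timestamp)

# Hypothetical sources standing in for separate OPC A&E servers.
station_a = [AlarmEvent("StationA", 100.0, 700, "High discharge pressure")]
station_b = [AlarmEvent("StationB", 95.0, 300, "Filter differential rising"),
             AlarmEvent("StationB", 105.0, 900, "Compressor trip")]

for event in aggregate([station_a, station_b], min_severity=400):
    print(f"{event.timestamp:>7.1f}  {event.source:<9} sev={event.severity:<4} {event.message}")
```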
Redundancy and Logging
Redundancy is essential for ensuring the consistency of networked OPC A&E data. An integrated redundancy broker would ensure seamless switchovers between redundant connections in case of connection failure. The white paper highlights a real-world example of the Trans-Anatolian Natural Gas Pipeline project for both redundancy and logging OPC A&E data into databases, facilitating easy access for analysis and reporting.
Realizing the Vision
With the rise of Industrie 4.0 and Industrial IoT, there’s a growing demand for broader utilization of OPC A&E data beyond daily operations, attracting attention from management for large-scale planning and efficiency initiatives. Realizing this vision means offering comprehensive solutions for aggregating, networking, and converting OPC A&E data. A quick read of “Getting More from OPC A&E” is a good way to get started.
How industries can take advantage of APC like the refining sector
While common in the refining sector, many other industries have yet to take advantage of advanced process control technologies.
KEYWORDS: advanced process control, APC
LEARNING OBJECTIVES
Learn about the benefits of advanced process control (APC)
Explore what might hinder APC deployment in industries.
Learn how to optimize an APC system and maximize return on investment (ROI).
ONLINE
See additional stories on PID and APC at www.controleng.com/control-systems/pid-apc/
CONSIDER THIS
What benefits have you gained from applying APC in your facility?
Few industries are so commoditized that it is regular business practice to routinely sell a competitor’s product as your own. However, this is the norm for the refining industry, where the local BP station may be selling gasoline from the nearby refinery or vice versa. In that sort of market, you use every tool at your disposal to increase margins, and advanced process control (APC) has been relied upon by refining companies for decades. Many companies in other industries also can use APC in their facilities to increase efficiency and profit.
APC is a multi-variable control algorithm and engineers also may hear the related term model predictive control (MPC). These systems use matrix math and linear algebra techniques to build statistical relationships between variables, establishing cause and effect relationships between many variables simultaneously. APC is a control layer that sits on top of the traditional single loop proportional-integral-derivative (PID) control algorithms found in distributed control system (DCS) and programmable logic controller (PLC) systems.
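To make the matrix-math idea concrete, here is a deliberately simplified, steady-state sketch of the predict-and-optimize step (the gain matrix, variable names and numbers are assumed for illustration; real APC/MPC packages add dynamic models, constraints and move suppression):

```python
import numpy as np

# Assumed steady-state gain matrix: rows = controlled variables (CVs),
# columns = manipulated variables (MVs). Values are illustrative only.
#   CV1: column top temperature, CV2: product purity
#   MV1: reflux setpoint,        MV2: reboiler duty setpoint
G = np.array([[0.8, -0.3],
              [0.2,  0.6]])

cv_now    = np.array([151.0, 97.2])   # current measurements
cv_target = np.array([150.0, 98.0])   # desired operating point, closer to limits

# Predict-and-optimize in its simplest steady-state form: find the MV moves that
# minimize the predicted CV error, i.e., the least-squares solution of G @ du = error.
error = cv_target - cv_now
du, *_ = np.linalg.lstsq(G, error, rcond=None)

mv_now = np.array([42.0, 5.5])        # current PID setpoints (illustrative units)
mv_new = mv_now + du
print("MV setpoint moves:", np.round(du, 3))
print("New setpoints for the PID layer:", np.round(mv_new, 3))
```

The computed targets would then be written as cascaded setpoints to the underlying PID loops, which is the layering described above.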
The refining sector is rapidly evolving. Water, air, and pipeline transport permits are reviewed with increasing scrutiny by governments and local communities. With the increased sales of electric vehicles (EVs), institutional investors are questioning their longtime holdings in traditional petroleum companies. While there is still much debate over the practicality of EVs and what any transition timeline may look like, there is no question change is coming to this industry.
Even before these headwinds, refineries have dealt with a market where they have little control over fluctuating feedstock prices (oil being a globally priced commodity), as well as public and political pushback on increases in pump prices.
Since refiners deal with large volumes, they have high capital investment costs for their equipment. Any technique they can find to get more production volume out of their existing equipment can save tens or hundreds of millions of dollars.
There are generally three levers to increase total production volume out of a unit: Increase volumes, improve quality/conversion and increase uptime. APC can help with the first two levers by tightening process variability and shifting control setpoints to more desirable limits. It can also help the third lever by increasing unit uptime by keeping a process out of operating regions that can cause fouling, excessive equipment wear or other issues that result in unexpected downtime or increased maintenance.
The role APC plays in manufacturing
With these advantages, refineries began the first APC rollouts in the 1970s. Widespread APC adoption throughout the refining sector occurred in the 1990s. Other industries took notice, and bulk commodity chemical producers began using APC applications, but the technology never took off in other industries like it did in refining. APC has now been readily available for nearly forty years and far predates most other “advanced” technologies commonly grouped under the Industry 4.0 umbrella, yet like many of those technologies, it has yet to be fully adopted.
These systems have historically been standalone software applications that send cascaded setpoints to PID loops in the DCS/PLC system through an external data connection. Some newer DCS platforms have APC/MPC functionality built in, but it’s still much more limited than the standalone systems available.
The role of APC systems is to look at the “bigger picture,” look at process dynamics by monitoring many process variables, and determine which variables to manipulate to achieve improved process control. The APC system then shifts the process to a more cost-efficient set of operating conditions. The goal is tighter control closer to operating limits and saving money.
How APC was successfully used in refineries
So, if APC advantages are so obvious and the cost benefits are so tangible, why doesn’t everyone use APC? First let’s look at why APC was successfully deployed in refining. Refineries have economies of scale. Their size and throughput ensure they have an economic driver to try new things, and small improvements can add up to big dollars. Their facilities are generally larger, resulting in increased staff and headcounts that offer more manpower to experiment with new tools and maintain the successful tools long-term.
‘ Advanced process control (APC) can tighten process variability, shift control setpoints to more desirable limits and help increase unit uptime by keeping a process out of operating regions that can cause fouling, excessive equipment wear, unexpected downtime or increased maintenance.’
Most refineries are part of larger companies that also have corporate resources and budgets. Refineries are generally well-instrumented, well-maintained, and operate largely under automatic control. Their processes also lend themselves to APC applications as it’s easily applied to steady-state continuous operations.
Refineries are the epitome of this type of operation, often running for a year or more continuously. Finally, most refineries operate similar processes, and these processes are quite well studied and understood. For example, the knowledge gained with an application developed at one refinery can often be applied to other refineries, either by a third party or as personnel change jobs.
FIGURE: Even a modern, well-implemented distributed control system (DCS) can achieve improved results with the help of an APC platform. Courtesy: Hargrove Controls & Automation
‘ Don't be afraid to seek outside assistance for initial APC deployment and ongoing maintenance. ’
APC has struggled for wider adoption in other industries due to the inverse of many of the situations already noted. Many companies don’t have economies of scale that refineries do. They tend to run lean organizations and don’t have the staff to try new things or maintain niche software packages. APC applications aren’t rocket science, but they do require knowledge, skill and time to deploy and maintain.
Advanced process control (APC) insights
• In the refining sector, where competitors may sell each other's products, the use of advanced process control (APC) becomes crucial for increasing efficiency and profit margins.
• Refineries face evolving challenges, including environmental scrutiny, electric vehicle adoption, and fluctuating feedstock prices. APC proves valuable by optimizing production volume, quality, and uptime, mitigating these challenges.
• Despite APC's success in refining, wider adoption faces barriers in industries lacking economies of scale. Limited manpower, poor instrumentation and nonlinear processes hinder APC, emphasizing the importance of stable basic process control.
Some industries still operate with many manual gauges or poor instrumentation. If their processes are not reliably run by the control system in automatic mode, a cascaded layer of APC control will do little to no good. Finally, not all processes are well-suited to APC.
APC works best with steady-state processes with linear relationships. Although APC packages have gotten better at handling non-linear processes, they still have limited use in batch or semi-continuous processes.
Many manufacturers also make specialty products with specialty processes. Each one of these unique processes requires extra investigation and experimentation to develop quality APC models.
Recognizing how APC can benefit a facility
Despite these obstacles, APC may still be the right choice for a facility. Considering cost pressures, the need to compete, and the implementation of software packages, APC holds several benefits to reduce costs and increase efficiency.
However, there are still a few things that haven’t changed. APC will never be successful if basic process control in a DCS/PLC is not stable or reliable. Whether companies choose to use APC or not, there are numerous financial and safety reasons to
get a process under stable automatic DCS/PLC control. This should be a minimum standard to operate a modern, safe, and efficient plant today. The first step to operating efficiently is to install or fix necessary instrumentation and valves and then focus on making DCS/PLC control improvements.
Next, look at the process type. Batch processes don’t generally lend themselves to APC applications, but that isn’t a strict rule. Ask questions about the process such as:
• Are there long cycle times?
• Are there steady state periods during the batch that need better optimization?
• Are there support operations like distillation columns outside the batch process that operate continuously in steady state?
If so, the next step includes looking at the operating margins. How much potential profit is being wasted with suboptimal raw material or energy use or with poor conversion rates and quality issues? Based on those savings, how much could a company spend on an APC application that would pay for itself in a year or less? MPC/APC functionality built into DCS packages can have major benefits for some users. Even though these packages have their limits, they can be a great proof of concept and leverage an existing control system engineer’s skillset.
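As a back-of-envelope illustration of the payback question raised above, the snippet below works through one hypothetical case; every figure in it is made up and should be replaced with a site's own margin and cost estimates.

```python
# Illustrative APC payback estimate; all figures are hypothetical.
annual_throughput_value = 40_000_000   # $/yr of product through the unit
recoverable_margin_pct = 0.5 / 100     # margin recovered by tighter control
annual_benefit = annual_throughput_value * recoverable_margin_pct  # $200,000/yr

project_cost = 150_000                 # APC software, modeling and commissioning
payback_years = project_cost / annual_benefit
print(f"Estimated payback: {payback_years:.2f} years")   # ~0.75 years
```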
Companies shouldn’t be afraid to seek outside assistance for both initial APC deployment and ongoing maintenance. Third-party firms can often help get the system deployed and generating revenue faster than companies could on their own. Some companies have even found success with contingency fee arrangements where upfront costs are minimized in exchange for a share of operational savings.
Given all that, now may be the right time to take a page out of the refining playbook and look at APC adoption. ce
Heath Stephens, PE, is the digitalization leader for Hargrove Controls & Automation. Edited by Chris Vavra, web content manager, Control Engineering and CFE Media and Technology, cvavra@cfemedia.com.
Sean Saul, Emerson
See the future: Four benefits of software-defined industrial controls
Modern automation solutions are untethering the control system from hardware constraints, enabling operations teams to drive more flexible projects and operations.
Today’s process and hybrid manufacturers face a different set of challenges than they did five or ten years ago, and software-defined control systems are helping. Companies are making commitments to increase sustainability, even going so far as to set net zero goals to be reached within the next few years.
Simultaneously, a dynamic global marketplace has forced manufacturers to be more flexible to meet the ever-shifting needs of their customers.
To accomplish these goals, companies need easy access to data, a journey they are undertaking by
evolving toward a vision of boundless automation, where data moves seamlessly from the intelligent field, through the edge and into the cloud. Teams need easy access to data to drive the productivity and operational agility that will help them meet their goals, but legacy control technologies are more likely to silo data than to free data use. This disparity is prompting many operations teams to consider new projects to replace their aging infrastructure, but traditional expansion and modernization projects can be complex and expensive. Software-defined control technology is a model that will make projects easier to execute and more cost-effective by eliminating built-for-purpose control hardware for more flexible, scalable, future-proofed operations. Understanding software-defined control will be essential to navigating the future of automation.
Today, when an operations team wants to expand or modernize its automation system, it typically must purchase and install a variety of new hardware components. Often the team needs to purchase specific industrial hardware controllers, and then house them in built-for-purpose cabinets. They also purchase the new I/O necessary to run operations, and they need a network layer to accommodate all the new equipment.
FIGURE 1: Hyperconverged infrastructure (HCI) is a building block of the software-defined platform, combining traditional and real-time workloads. Emerson’s Boundless Automation vision is software-defined control technology, a model designed to make projects easier to execute and more cost-effective by eliminating built-for-purpose control hardware for more flexible, scalable, future-proofed operations. Images courtesy: Emerson
Software-defined control will eliminate the hardware complexity of existing automation systems. Instead of built-for-purpose hardware, teams should be able to execute containerized control workloads on many different hardware platforms. One likely deployment environment for software-defined workloads is hyperconverged infrastructure (HCI), with the control system operating as a redundant service on an HCI environment (Figure 1).
Software-defined control: Four benefits
Software-defined control has four key benefits:
1. Resiliency: The high-availability redundant control today’s operations demand will be further enhanced. With multiple controllers running simultaneously on HCI, teams will be able to add additional automation workloads without facility shutdowns. Extending or adding automation will simply require deploying lightweight containerized control functions in the virtualized environment, developing the input-output (I/O) subsystem in the expansion area, and then auto sensing the field devices in the automation system. And if one containerized control function fails, HCI’s rapid failover features will empower the team to keep operating continuously and seamlessly. The flexibility provided by HCI allows for fault tolerance beyond the traditional 1:1 primary/backup architecture, including the potential to fail over to virtualized hosts in another physical location.
2. Scalability: The adaptability of control systems will fundamentally change when operations can add capacity without a complex hardware system redesign. Gone are the days of designing and setting up more cabinets for additional control capacity. With HCI, adding additional computing power is as simple as connecting another blade in a server cabinet.
3. Flexibility: Software-defined control simplifies project execution and ongoing operations. Instead of needing to calculate every device signal tag (DST) when expanding or modernizing operations, teams can safely estimate their requirements and adjust as needed. If the project team needs to make changes at any time that impact the capacity or type of I/O
points, scaling the DSTs up or down simply requires a change in the software configuration. Teams are no longer limited by the decisions made in project execution, instead growing automation with their needs.
4. Extensibility: Automation technology changes frequently. While teams would often like to take advantage of the new features available in control system updates, it can be difficult to do so when updates require production outages. Software-defined control will simplify the process. As new features are released by automation suppliers, operations teams using software-defined systems can update and change control software and strategies without interrupting production, updating one control function instance while its primary continues operations.
Why software-defined control now?
The technology building blocks to support software-defined control have been available for years, but automation users and suppliers are just beginning the journey to adapt their environments to use that technology. This shift has come just as new technologies in the field, such as Ethernet Advanced Physical Layer (Ethernet APL), are unlocking new data sources that will push current control hardware to (or beyond) its limits. Massive amounts of data will soon be coming into the control system from the intelligent field, including video, acoustic monitoring, and other areas. The most innovative companies are identifying ways to integrate that data into control strategies.
Every company will need a future-proofed architecture to ingest the exponential increase of data that will come from the intelligent field. Moving toward that future today will help teams ensure they are on a path to an automation platform that enables industry-leading operational performance (Figure 2). The most advanced automation providers are already delivering on the early stages of software-defined control. ce
Sean Saul is the vice president of the DeltaV Platform at Emerson. Edited by Mark T. Hoske, content manager, Control Engineering, CFE Media and Technology, mhoske@cfemedia.com.
FIGURE 2: As new, more complex, and more bandwidth-intensive data sources are added from inside and outside the enterprise, organizations will need an infrastructure to support the next-gen intelligent field, along with external sources of data, such as commodity prices.
controleng.com
KEYWORDS: Software-defined control, next-generation control system
LEARNING OBJECTIVES
Understand software-defined control and the four benefits of software-defined control systems.
Review how less hardware creates more control over resiliency, scalability, flexibility and extensibility.
CONSIDER THIS
What will your next-generation control system do?
ONLINE
Learn more from Emerson about artificial intelligence for industrial use: How gen AI can help amplify industrial efforts
Lucas Paruch, Yaskawa America Inc.
How to choose a VFD for medium-voltage motors
Understand the advantages of multi-level output drive topology in medium voltage (MV) motor applications. Motor insulation reliability and reflected waves: What to consider when selecting a variable frequency drive (VFD).
Variable frequency drives (VFDs) are common in most industrial motor applications, including pumping, compressing, blowing, conveying, extruding and mixing. When motors are started across the line on 60Hz utility power, efficient motor operation is limited to a very narrow window around the rated motor operating speed and torque values. Drives allow motors to operate at their optimal efficiency
FIGURE 1: Most common low voltage (LV: less than 1000V) drives are comprised of three sections. Power flows from left (utility supply) to right (motor). The three-level drive topology has one DC bus and six insulated-gate bipolar transistors (IGBTs). Images courtesy: Yaskawa America Inc.
over a wide range of speeds, satisfying a wide range of varying torque requirements, while reducing motor stress and starting inrush current.
Fundamentals of variable frequency drives
For every motor, the optimal supply voltage and frequency change as the speed and torque requirements of the application change. When started across the line, a 460V 60Hz motor can only operate at the utility-supplied voltage and frequency. Drives overcome this limitation by continuously adjusting the output voltage and frequency to match the optimal operating conditions for the application load.
Most common low voltage (LV: less than 1000V) drives are comprised of three sections. Power flows from left (utility supply) to right (motor) in Figure 1.
The diode bridge converts three-phase utility supply power from alternating current (AC) to direct current (DC).
The DC bus acts as a battery. The bus stores the energy it receives from the diode bridge until that energy is needed by the inverter section.
The inverter IGBTs (IGBT stands for insulated-gate bipolar transistors) are switches that turn on and off at a very high rate of speed (thousands of times per second). A drive cannot create a true analog sine wave output to match the utility supply. However, by using pulse width modulation (PWM), the drive generates a series of short pulses and long pulses that, when averaged, are representative of a sine wave voltage waveform, as shown in Figure 2. When smoothed by the inductance of the motor windings, the resulting motor current is approximately sinusoidal.
Example: This is similar in principle to using a dimmer switch with an incandescent light bulb. The dimmer switch doesn’t actually reduce the peak voltage to the light bulb; it just switches it on and off quickly enough that the pulses are not perceived, and the average illumination is reduced.
FIGURE 2: By using pulse width modulation (PWM), the drive generates a series of short pulses and long pulses that, when averaged, are representative of a sine wave voltage waveform in this three-level drive output.
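For readers who want to see the averaging effect numerically, the following Python sketch (using NumPy) generates a simple two-level sine-triangle PWM pulse train and applies a moving average as a stand-in for the smoothing effect of motor inductance. The carrier frequency, sample rate and smoothing window are illustrative assumptions, not values from any particular drive.

```python
import numpy as np

# One fundamental cycle (60 Hz) sampled finely; the carrier is a few kHz.
f_out, f_carrier, fs = 60.0, 3000.0, 1_000_000
t = np.arange(0, 1 / f_out, 1 / fs)

reference = np.sin(2 * np.pi * f_out * t)                # desired sine wave
# Triangular carrier compared against the reference (sine-triangle PWM).
carrier = 4 * np.abs((t * f_carrier) % 1 - 0.5) - 1      # -1 .. +1 triangle
pwm = np.where(reference >= carrier, 1.0, -1.0)          # two-level pulses

# A moving average over one carrier period stands in for the smoothing
# effect of the motor winding inductance.
window = int(fs / f_carrier)
smoothed = np.convolve(pwm, np.ones(window) / window, mode="same")

# The averaged pulse train follows the reference sine.
print("max tracking error (per-unit):", np.max(np.abs(smoothed - reference)))
```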
Motor insulation reliability considerations: Reflected wave
When the drive can be installed in close proximity to the motor (within 50m), no further consideration is typically required. The inductive and capacitive properties of the motor cables are dependent on length. When the cable length is short (<50m), the cable inductance and cable capacitance is generally small enough to have negligible impact on the system.
In some applications, it is not possible to install the drive near the motor. As the cable length increases, the inductive and capacitive properties of the cable become significant. When the high frequency PWM pulses travelling on the motor cable are reflected by the dissimilar impedance of the motor windings, the resulting voltage reflections will sum with the incoming pulses. The magnitude of this voltage reflection at the motor may be as much as twice the peak voltage value at the drive output. Without additional precautions, the high voltage stress created by reflected wave phenomena can exceed the cable or motor insulation system ratings and result in insulation breakdown and subsequent motor or cable failure.
NEMA MG-1 Section IV Part 31 addresses the risk of voltage spikes by requiring that motors intended for use with VFDs, or “inverter duty” motors, be designed with insulation systems capable of withstanding twice the rated peak (Vpeak = √2*VRMS) value of the supply (plus a 10% buffer). When applying VFDs, it is important to ensure the motor insulation system specified is appropriate for use with a drive and not intended only for use on utility line power.
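A quick calculation following the formula quoted above shows what that requirement implies for a 460V motor; this is a sketch of the arithmetic only, not a substitute for checking the motor's actual insulation rating.

```python
import math

v_rms = 460.0                                 # rated motor voltage (V RMS)
v_peak = math.sqrt(2) * v_rms                 # rated peak of the supply
withstand = 2 * v_peak * 1.10                 # twice peak, plus 10% buffer
print(f"Required insulation withstand: {withstand:.0f} V")   # ~1431 V
```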
Another common solution is to use load reactors, dV/dt filters, or sine wave filters at the output
of the drive. Adding inductance at the drive output increases the rise time of each pulse, which results in a smoother waveform, and reduces the magnitude of wave reflections at the motor. While effective for reducing voltage spikes, the addition of output filters increases the overall cost, weight and footprint of a drive system, introduces a voltage drop, generates additional heat, and reduces overall system efficiency.
Multi-level cascaded output medium-voltage drives
For motors under 250HP using low voltage drives, increasing insulation ratings and applying output filters are effective strategies to mitigate the risk of voltage reflections on motor insulation systems, especially in applications with long motor cable requirements.
The same mitigation strategies can also be applied to larger drive applications. However, for applications above 250HP, it becomes increasingly economically feasible to consider using a multi-level medium voltage drive topology. With a multi-level drive output, it is possible for the drive to create a nearly sinusoidal output waveform, eliminating the risk of reflected voltage stress at its source.
Most multi-level drives are constructed with the same basic building blocks as a typical low voltage drive (diode bridge, capacitor bus, and output IGBTs). Instead of switching a single DC bus potential on and off, multi-level drives use a cascaded topology in which the potentials from multiple capacitor DC busses are summed in a series of smaller steps. Just as a cascaded waterfall flows over a series of small steps, a cascaded drive topology allows the output voltage to make smaller, gradual steps instead of switching from full ON to full OFF (as shown in Figure 2).
The 3-level output drive topology shown in Figure 1 is comprised of one DC bus to store energy and six switching IGBTs to create the output three-phase waveform.
FIGURE 3: The 17-level output drive topology has 12 independent DC busses to store energy and 48 cascaded insulated-gate bipolar transistors (IGBTs). Instead of switching the full output voltage, each of the cascaded IGBTs switches only a small fraction of the full output voltage.
FIGURE 4: A 17-level drive output waveform is smooth and nearly sinusoidal. A smooth cascaded output waveform resolves the reflected wave voltage stress challenges faced by most LV-drive topologies. Courtesy: Yaskawa America Inc.
Online
controleng.com
KEYWORDS: Medium-voltage drives, medium voltage motor maintenance, VFD tutorial
LEARNING OBJECTIVES
Review the fundamentals of variable frequency drives and motor insulation reliability considerations, including reflected wave.
Learn about resonance and motor insulation reliability considerations and the need to reduce risk of insulation breakdown.
CONSIDER THIS
Have you considered the latest medium voltage drive technologies?
ONLINE
Software tools can help when working with VFDs www.controleng.com/articles/more-answers-on-software-tools-you-need-when-working-with-ac-drives-vfds
The 17-level output drive topology shown in Figure 3 is comprised of 12 independent DC busses to store energy, and a total of 48 cascaded IGBTs. Instead of switching the full output voltage, each of the cascaded IGBTs switches a small fraction of the full output voltage. The resultant output voltage waveform seen in Figure 4 is smooth and nearly sinusoidal.
A smooth cascaded output waveform inherently resolves the reflected wave voltage stress challenges faced by most LV drive topologies. Eliminating high amplitude switching pulses at the source reduces the need to add costly filters to protect against reflected wave phenomena at the output.
A smooth 17-level output waveform reduces voltage stresses, increasing the service life of cable and motor insulation systems.
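The benefit of more, smaller steps can be shown numerically. The sketch below quantizes one sine cycle onto 3 and 17 equally spaced output levels and compares the step size and worst-case deviation; it is a simplified illustration, not a model of the actual modulation scheme used in these drives.

```python
import numpy as np

def staircase(levels, points=2000):
    """Quantize one sine cycle onto `levels` equally spaced output voltages."""
    t = np.linspace(0, 2 * np.pi, points)
    ref = np.sin(t)
    steps = np.linspace(-1, 1, levels)                   # available output levels
    idx = np.abs(ref[:, None] - steps[None, :]).argmin(axis=1)
    return ref, steps[idx]

for levels in (3, 17):
    ref, out = staircase(levels)
    print(f"{levels:>2}-level output: "
          f"step size = {2 / (levels - 1):.3f} p.u., "
          f"worst-case deviation = {np.max(np.abs(out - ref)):.3f} p.u.")
```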
Motor insulation reliability considerations: Resonance
It is important to note that reflected wave phenomena is not the only potential source of harmful voltage stress in drive systems.
Resonance occurs when oscillatory forces are synchronized with the natural frequency of a system.
Example: When a child randomly swings their legs on a swing set, the swing oscillates at a very small amplitude. When the frequency of the small leg “pumping” forces is synchronized with the frequency of the swing oscillation, each small force adds to the energy of the system, incrementally increasing the amplitude of each swing. If the swinger continued to pump after maximum amplitude is achieved (when the chains go slack), the resonance of the pumping action would cause the system to become unstable.
In applications with very long motor cable lengths (typically greater than 300m), electrical resonance of the cable system also must be considered. Modern voltage source drives modulate the output voltage by switching IGBTs on and off thousands of times per second (“pumping”). This carrier frequency is typically expressed in kilohertz (for example, 4kHz = 4,000 cycles per second). The combination of the inductive and capacitive properties of any cable has a unique resonant frequency. When cables are less than 300m, the resonant frequency is typically much greater than the drive carrier frequency and poses little risk of excitation. As the cable length increases, the cable resonant frequency decreases. When the cable resonant frequency and the switching frequency are equal, damaging resonant voltages of up to five times the peak voltage amplitude may be induced.
For applications with very long cables (over 300m), a study of the drive and cable characteristics should be completed to evaluate potential risk and identify an appropriate sine wave output filter to prevent resonance.
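To see why cable length matters, the following back-of-envelope sketch estimates a lumped-parameter resonant frequency, f = 1/(2π√(LC)), for a few cable lengths. The per-meter inductance and capacitance values are assumed, representative numbers only; actual cable data and a proper study, as recommended above, should always be used.

```python
import math

# Representative (assumed) per-meter parameters for a shielded motor cable.
L_per_m = 0.4e-6    # H/m  (~0.4 µH/m)
C_per_m = 150e-12   # F/m  (~150 pF/m)

def resonant_frequency(length_m):
    """Lumped-parameter estimate: f = 1 / (2*pi*sqrt(L*C))."""
    L, C = L_per_m * length_m, C_per_m * length_m
    return 1.0 / (2 * math.pi * math.sqrt(L * C))

# Longer cables pull the resonance down toward the carrier frequency
# and its switching harmonics.
for length in (50, 300, 1000):
    print(f"{length:>5} m cable: ~{resonant_frequency(length) / 1e3:.0f} kHz")
```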
Reducing risks of insulation breakdown
For applications that require long motor leads, multi-level cascaded output medium voltage drives provide all the benefits common to all VFDs, while reducing the risks of insulation breakdown from reflected wave voltage spikes.
For legacy applications using motors with standard insulation systems (designed only for operation on 60Hz line power), multi-level cascaded output drives provide a reliable option to retrofit systems to variable frequency control without introducing additional voltage stress. ce
Lucas Paruch is a product manager of medium voltage drives at Yaskawa America Inc. Edited by Mark T. Hoske, content manager, Control Engineering, CFE Media and Technology, mhoske@cfemedia.com.
INDUSTRIAL DRIVES, POWER QUALITY
Rob Fenton and Austen Scudder, Eaton
Get harmonics answers now: Explore 18-pulse, AFE drives
To mitigate harmonics, 18-pulse and active-front-end (AFE) drive technologies can be used; know their strengths and weaknesses for various applications.
This article explores two ways to mitigate harmonics and enhance control. Rob Fenton, Eaton’s customer excellence senior manager, and Austen Scudder, product line manager, drives, soft starters, and assemblies for Eaton’s Industrial Control Division, answer questions.
Question: What are 18-pulse drive and active-frontend (AFE) drive technologies?
Answer: Both industrial drive solutions aim to mitigate harmonics, but they do it differently. An 18-pulse drive uses a phase shifting transformer to create nine phases that cancel out harmonics, achieving a low harmonic rating. An AFE consists of an inductor-capacitor-inductor (LCL) filter to reduce switching noise and two inverters that actively manage harmonics by switching power back and forth between the line and the direct-current (DC) bus.
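One way to see why the nine-phase arrangement helps: an ideal p-pulse rectifier front end draws characteristic harmonic currents only at orders h = k·p ± 1. The short sketch below lists those orders for 6-, 12- and 18-pulse front ends; it is an idealized textbook view, and real installations also see non-characteristic harmonics from supply imbalance and component tolerances.

```python
def characteristic_harmonics(pulse_number, max_order=50):
    """Harmonic orders h = k*p ± 1 drawn by an ideal p-pulse rectifier front end."""
    orders = set()
    for k in range(1, max_order // pulse_number + 1):
        for h in (k * pulse_number - 1, k * pulse_number + 1):
            if h <= max_order:
                orders.add(h)
    return sorted(orders)

for pulses in (6, 12, 18):
    print(f"{pulses:>2}-pulse: {characteristic_harmonics(pulses)}")
# 6-pulse : 5, 7, 11, 13, 17, 19, ...
# 18-pulse: 17, 19, 35, 37 -- the troublesome 5th, 7th, 11th and 13th cancel.
```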
Q: Where are 18-pulse drives used?
Answer: The 18-pulse drives are a common solution for mitigating harmonics in applications. We commonly see 18-pulse drives in the heating, ventilation and air conditioning (HVAC) market where there’s a lot of harmonic accumulation. In water and wastewater applications, high power and energy consumption can lead to harmonic accumulation and potential system issues. While there are other industrial drive applications, these are the two most frequent areas and where the benefits tend to be strongest.
Q: Where are AFE drives most used?
Answer: AFE drives find applications in various industries beyond harmonic mitigation, such as industrial areas, energy recovery systems and environments with unbalanced power conditions. AFE drives are versatile and can be used in HVAC and water and wastewater applications.
Q: Can you compare them?
Answer: Several factors influence the choice between 18-pulse drives and AFE drives. AFE drives have a smaller footprint in lower power ranges, making them suitable for applications up to around 200 horsepower. In terms of power conditioning, AFEs can compensate for unbalanced power systems. On the other hand, 18-pulse drives excel in harsh conditions and provide better isolation from surges and line distortions; 18-pulse drives also tend to have a longer lifespan due to lower voltage ripple on the DC bus.
Q: What are misconceptions?
Answer: One common misconception is that AFE drives outperform 18-pulse drives at lower load conditions due to percentage-based comparisons. However, the total harmonic current matters more than the percentage, and AFE drives can still exceed limits at lower loads. It’s important to understand the actual harmonic content in different scenarios. Modern 18-pulse transformers are efficient auto transformers, much more efficient than isolation transformers. The 18-pulse drives can be more energy efficient than AFE drives because of the number of inverter-based power conversions. An AFE typically has two inverter-based power conversion stages and an LCL filter. An 18-pulse drive has an 18-pulse auto transformer and an inverter-based power conversion stage. ce
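The percentage-versus-amps point can be shown with simple arithmetic. In the sketch below, the load, limit and THD figures are hypothetical; the intent is only to show that a drive can exceed an absolute harmonic-current allowance at light load even though the comparison is usually framed in percentages.

```python
# Hypothetical comparison at light load; all numbers are illustrative only.
full_load_current = 200.0                 # A, drive input current at 100% load
limit_amps = 0.05 * full_load_current     # assumed site allowance: 5% of full load

load_fraction = 0.25
fundamental = full_load_current * load_fraction      # 50 A at 25% load
thd_pct_at_light_load = 35.0                         # percentage balloons at light load

harmonic_amps = fundamental * thd_pct_at_light_load / 100.0
print(f"Harmonic current: {harmonic_amps:.1f} A vs. allowance {limit_amps:.1f} A")
# 17.5 A exceeds the 10.0 A allowance even though the drive is lightly loaded:
# the absolute amps, not the percentage by itself, are what the supply sees.
```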
Rob Fenton is Eaton’s customer excellence senior manager; Austen Scudder is product line manager, drives, soft starters, and assemblies for Eaton’s Industrial Control Division. Edited by Mark T. Hoske, content manager, Control Engineering, mhoske@cfemedia.com.
FIGURE: Eaton’s PowerXL EGP VFD 18-pulse drives use advanced 18-pulse technology that significantly reduces line harmonics at the drive input terminals, resulting in one of the purest sinusoidal waveforms available.
With this article online, see the answer to this question: " What’s the future for 18-pulse drives and AFE drives?" https://www.controleng.com/ motors-drives/power-quality
Michael Wrinch, Hedgehog Technologies
Important advancements in VFD and motor control
Advancements in variable frequency drives enhance safety and improve energy efficiency. See three VFD advanced control schemes.
As technologies progress, modern variable frequency drives (VFD) enhance safety, energy efficiency and controls. Selecting the right VFD for industrial applications requires considering potential harmonic issues, manufacturer reputation and cost effectiveness.
Recent enhancements to safety features in VFDs are driven by more emphasis on safety protocols. A notable development is integrated network safety protocols, such as PROFIsafe (PI North America) and CIP Safety (ODVA). This integration involves incorporating VFDs into safety networks and protocols.
The key distinction is in the VFD’s capacity to communicate with other safety systems, facilitating a coordinated response to hazardous conditions through real-time safety monitoring and control. This
ensures the reliable and integral transmission of safety-related data, including emergency stop signals and light curtain status. To achieve this, techniques like unique identification, sequence counters and cyclic redundancy checks (CRC) are employed, guaranteeing that safety data is not lost, altered or delayed during transmission. Another recent advancement in safety features is the implementation of safe torque off, commonly known as “STO.” While STO is not a new concept, its application in modern VFDs has become more sophisticated.
Older STO-like features involved a hard stop of the process through direct inhibition of the scan cycle or the use of a contactor with a safety relay to open the circuit and disable the motor. The scan inhibitor was subject to noise and errors and the external contactor added parts. The contemporary STO features are integrated directly with the drive’s control system to offer a higher level of safety using fewer parts. This ensures an immediate removal of power to prevent unintended motor startup or movement, providing a more efficient and reliable safety mechanism.
The latest VFDs employ sophisticated methods for detecting and responding to overcurrent, ground faults and thermal overload conditions. Such methods use advanced algorithms and real-time line monitoring, enabling faster and more accurate protection compared to older systems. These modern additions to VFD safety features reflect a trend toward more integrated, proactive and data-driven approaches to safety. The focus is not just on responding to hazards but on anticipating and preventing them, which aligns with broader trends in industrial safety and automation.
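To make the data-integrity techniques mentioned above more tangible, here is a toy Python sketch of a telegram protected by a unique device ID, a sequence counter and a CRC. It is not PROFIsafe or CIP Safety, and the frame layout is invented; it only illustrates how these checks let a receiver detect lost, altered or delayed safety data and drop to the safe state.

```python
import struct
import zlib

DEVICE_ID = 0x0042        # unique identification of the safety device (assumed)

def build_safety_frame(sequence, payload: bytes) -> bytes:
    """Toy safety telegram: device ID + sequence counter + payload + CRC32."""
    body = struct.pack(">HI", DEVICE_ID, sequence) + payload
    crc = zlib.crc32(body)
    return body + struct.pack(">I", crc)

def check_safety_frame(frame: bytes, expected_sequence: int) -> bytes:
    """Verify integrity and ordering; any doubt triggers the safe state."""
    body, (crc,) = frame[:-4], struct.unpack(">I", frame[-4:])
    if zlib.crc32(body) != crc:
        raise ValueError("CRC mismatch: data altered -> go to safe state")
    device_id, sequence = struct.unpack(">HI", body[:6])
    if device_id != DEVICE_ID or sequence != expected_sequence:
        raise ValueError("Wrong source or lost/delayed frame -> go to safe state")
    return body[6:]

frame = build_safety_frame(sequence=7, payload=b"\x01")   # e.g. e-stop released
print(check_safety_frame(frame, expected_sequence=7))     # b'\x01'
```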
Energy, efficiency, soft starters
Modern VFDs exhibit significantly improved energy efficiency when compared to their predecessors. These enhancements are attributed to advancements in transistor technology, such as low-loss surface-mount gallium nitride (GaN) and silicon carbide (SiC) power transistors. These technologies contribute to faster switching, higher bus voltage and lower gate voltage drop, resulting in overall higher efficiency.
FIGURE 1: Michael Wrinch is commissioning motor drive control panels for a natural gas plant in Australia. Images courtesy: Hedgehog Technologies
In contrast to older VFDs with efficiencies below 85%, some manufacturers now boast upper efficiencies of up to 98% at full load. Limits of older transistor technologies, which required a trade-off between low voltage fast switching and higher voltage slow and inefficient switching, do not apply with GaN or SiC; small form factors can handle higher currents.
With the advancements in transistor technology, many VFDs have been miniaturized, resulting in a more compact design. Downsizing makes them suitable for applications that previously were not feasible. Where a soft starter was once the sole option, a VFD can often substitute for it, offering advanced benefits such as controlled starts, stops and speed adjustments.
Soft starters were traditionally considered a cost-effective alternative to harsh on-off motor starting. With the increasing compactness and versatility of modern VFDs, they are becoming more competitive and viable options in various applications. This shift reflects the broader advantages of VFDs and their ability to handle diverse motor control needs.
Contemporary VFDs are equipped with sophisticated algorithms that optimize energy use and harmonic generation based on the load requirements. These algorithms dynamically adjust operating parameters, such as voltage and frequency, to align with the load, resulting in reduced energy consumption. This is a shift from older VFDs that operated at fixed or less adaptable parameters.
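As a simple illustration of matching voltage to frequency, the sketch below implements a plain constant volts-per-hertz profile with a small low-speed boost. The numbers are assumed tuning values, and modern drives layer far more sophisticated, load-adaptive optimization on top of this basic relationship.

```python
RATED_VOLTAGE = 460.0   # V
RATED_FREQ = 60.0       # Hz
BOOST = 8.0             # V of low-speed torque boost (assumed tuning value)

def vf_output(freq_cmd_hz):
    """Constant volts-per-hertz profile with a small low-speed boost."""
    freq = max(0.0, min(freq_cmd_hz, RATED_FREQ))
    volts = BOOST + (RATED_VOLTAGE - BOOST) * freq / RATED_FREQ
    return freq, volts

for f in (6, 30, 45, 60):
    freq, volts = vf_output(f)
    print(f"{f:>2} Hz command -> {freq:.0f} Hz, {volts:.0f} V")
```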
It is not uncommon for VFDs to include advanced regenerative braking capabilities, which allow them to capture and reuse energy that would otherwise be wasted during deceleration or braking. This feature proves especially effective in applications with frequent stop-start cycles. VFDs incorporate enhanced designs and components to mitigate electrical harmonics, which can affect power quality and lead to inefficiencies.
Three VFD advanced control schemes
Current VFDs provide enhanced control over motor speed, torque and overall performance, coupled with advanced diagnostics and improved connectivity features. Among state-of-the-art control schemes used today are vector control, direct torque control and sensorless vector control, which are explained below.
Vector control (field-oriented control - FOC): Vector control, or field-oriented control (FOC),
is one of the most advanced control methods. This approach decouples the motor’s torque and flux components, enabling independent control over each. It’s particularly effective in applications demanding precise speed and torque control, such as in robotics and CNC machines.
Direct torque control (DTC): DTC is a method that directly regulates motor torque and flux, providing a rapid and dynamic response. Unlike vector control, DTC operates without the need for position sensors and offers a straightforward implementation with robust performance, making it advantageous in applications with swift and frequent load changes.
Sensorless vector control: This control scheme delivers performance comparable to FOC but without the need for a rotor position sensor. It estimates the motor's magnetic flux using mathematical models, which enables precise control even at low speeds. This method is widely used in applications where installing sensors is impractical or excessively costly.
Advanced control schemes boost VFD capabilities, facilitating precise control, improved efficiency and adaptability to a wide range of industrial and commercial applications. Future integration of artificial intelligence and machine learning will provide greater performance and efficiency. ce
Michael Wrinch is the founder of Hedgehog Technologies. Edited by Tyler Wall, associate editor, Control Engineering, CFE Media and Technology, twall@cfemedia.com.
controleng.com
KEYWORDS: VFD, motor controls, control schemes
LEARNING OBJECTIVES
Understand the evolving safety features in modern VFDs
Evaluate energy efficiency upgrades in contemporary VFDs
Explore three advanced control schemes in current VFD technology.
CONSIDER THIS
Where do VFD advancements fit into your facility?
ONLINE
If reading from the digital edition, click on the headline for more resources.
www.controleng.com/ motors-drives
FIGURE 2: Michael Wrinch, with engineer Alia Gola, holds the Allen-Bradley PowerFlex AC Motor Drive from Rockwell Automation, to be installed in the panel behind.
Remote wireless devices connected to the Industrial Internet of Things (IIoT) run on Tadiran bobbin-type LiSOCl2 batteries.
Our batteries offer a winning combination: a patented hybrid layer capacitor (HLC) that delivers the high pulses required for two-way wireless communications; the widest temperature range of all; and the lowest self-discharge rate (0.7% per year), enabling our cells to last up to 4 times longer than the competition.
Looking to have your remote wireless device complete a 40-year marathon? Then team up with Tadiran batteries that last a lifetime.
Sona Dadhania, IDTechEx
Road to 6G: low-loss materials
Stakeholders are working on 6G next steps as 5G expands into industry.
As the world awaits the full take-off of the next generation of telecommunication technologies, 5G, important stakeholders are preparing for the future of telecommunications – 6G. This may seem premature, given that deployment of 5G infrastructure and base stations is not yet at its peak. IDTechEx forecasts that the high-frequency, high-performance bands of mmWave 5G will only take off in several years. For 6G technologies to be deployed globally in a decade, key research and development activities by stakeholders across the supply chain are underway. This includes R&D for low-loss materials, which IDTechEx explores in its report, “Low-Loss Materials for 5G and 6G 2024-2034: Markets, Trends, Forecasts.”
A look into 6G and its current status
It is important to understand the 5G frequency bands to understand why the 6G frequency bands seem so promising. 5G’s frequency bands include the sub-6GHz band (from 3.5 – 6 GHz) and the millimeter wave (mmWave) band (from 24 – 40 GHz). While these 5G bands can offer faster data rates, low latency and enhanced reliability for end-users, 6G can go further. 6G will likely include frequency bands extending into the THz (terahertz) range (from 0.3 to 10 THz), which will be able to offer Tbps (terabits per second) data rates, microsecond latency and extensive network dependability. Compared to 5G, 6G is expected to have a 50x higher data rate and 100x faster speeds.
Research on 6G technologies has been accelerating since 2019. The first major milestone occurred in 2017 when Huawei began its 6G research. Since then, key governmental authorities like the U.S. Federal Communications Commission (FCC) have opened up THz frequencies for research, while the Chinese government began its research
activities for 6G. Partnerships and consortiums are shaping up to be important hubs of innovation for future 6G technologies. The AI-RAN Alliance was launched with the goal of effectively combining artificial intelligence (AI) with wireless communication technologies. Founding members include Samsung Electronics, Arm, Ericsson, Microsoft, Nokia, NVIDIA, SoftBank and Northeastern University.
Technical challenges of 6G
The two biggest challenges that will need to be addressed for 6G technologies are: very short signal propagation range and signal loss due to line-of-sight obstacles such as buildings and trees. Minimizing transmission loss will require different technical advancements, including innovations in materials for 6G. For THz communications, low-loss materials help minimize signal loss and are critical to enabling new 6G applications. ce
Sona Dadhania is a technology analyst, IDTechEx; Edited by Tyler Wall, associate editor, Control Engineering, twall@cfemedia.com.
KEYWORDS: Industrial wireless
LEARNING OBJECTIVES
6G research and development activities are aiming for deployment within a decade, with a focus on high-frequency bands and low-loss materials.
Understand that 6G is expected to offer significantly enhanced performance over 5G, including data rates up to 50x higher and speeds 100x faster.
CONSIDER THIS
Are you planning for the advanced capabilities of 6G wireless?
ONLINE
This article online has more on low-loss materials for 6G. www.controleng.com/ industrial-networking/wireless
Sunil Doddi, CAP, CFSE, CFS
How to create comprehensive automation safety for process industries
Proper industrial control system (ICS) safety requires attention to functional safety and cybersecurity. Know the definitions and industry standards that can help.
ISA (International Society of Automation) defines automation as the creation and application of technology to monitor and control the production and delivery of products and services.
For the process industries, another useful though general term is IACS (Industrial Automation and Control System); according to IEC 62443-1-1, an IACS is a collection of processes, personnel, hardware and software that can affect or influence the safe, secure and reliable operation of an industrial process. Though the term IACS is gaining momentum, ICS (industrial control system) is still widely used when referring to the same systems and is generally acceptable.
ICS safety can be comprehensively categorized as functional safety and cybersecurity. Looking into each separately can help with understanding how they overlap.
IEC 61508 (the international safety standard for the design of safe hardware and software systems) has a definition for safety, and IEC 62443 (the international series of standards addressing cybersecurity for operational technology in automation and control systems) defines security. I believe the primary purpose of any security is safety, especially in the process industry, where public and personnel health may be compromised. Below, security and safety are comprehensively referred to as ICS safety.
Functional safety definition
What is functional safety? It is the part of the overall safety of a system or piece of equipment that depends on automatic protection operating correctly in response to its inputs, or failing in a predictable manner (fail-safe).
Cybersecurity definition differs in ICS environments
What is cybersecurity? Cybersecurity is broadly categorized as cyber safety and physical security. Though the word cybersecurity implies that the intention is to look at only the “Internet” connection, this is not true regarding ICS environments.
Until a few years back, functional and cyber safety were considered separate and treated separately. That cannot be the case anymore, as process industry safety standards require cyber assessment. Cyber risk assessment is required per ANSI/ISA 61511/IEC 61511 to meet the standard.
Before further understanding ICS safety, failures and threats need to be understood.
The ICS should be designed to address functional safety adequately, and the failures may come from hardware failures, human errors, systematic errors and operational and environmental stress.
ICS points of hardware failure
Hardware failure comes from field equipment, sensors and instruments in the ICS. Hardware failures, also known as random failures, are common. These failures happen for various reasons: the failure of a subset of components in the equipment, operational or environmental stress, or improper maintenance.
Systematic errors are design faults that also could arise from documentation errors. Some hardware failures can be traced back to systematic errors; however, the two categories should be kept separate and not treated as one.
Operational or environmental stress depends on where the ICS is located, whether in a controlled environment or a classified area.
Types of cyber threats
Cyber threats can be external or internal and categorized as deliberate or accidental. Typical external threats are hackers (professional, amateur or so-called script kiddies), rival business competitors and rival organizations or nation states. Typical internal threats are erroneous actions, inappropriate behavior and insider threats. (Environmental threats are excluded in this discussion.)
The figure shows that proper ICS safety means both functional safety and cybersecurity must be met and must be integrated. Proper ICS safety can be achieved when both areas are adequately addressed.
It is a misconception that having a safety instrumented system (SIS) is enough and that cyber safety is optional. Attacks can happen on the SIS itself, compromising safety. Conversely, not having an SIS does not mean that cyber safety is not required, as independent protection layers in the basic process control system (BPCS) can be compromised and thus compromise safety.
Three processes of safety and security lifecycles, standards to help
Both safety and security life cycles depend on three processes: Analysis, implementation and maintenance.
Governments and industry organizations are developing safety and security guidelines and recommendations to support this.
IEC 61508 and ANSI/ISA 61511/IEC 61511 cover functional safety. IEC 61508 is considered the primary or “umbrella” standard; ANSI/ISA 61511/IEC 61511 is the sector-specific standard for the process industries. In the process industries, IEC 61508 is primarily applicable to vendor-specific components. Therefore, ICS safety and reliability analysis should be performed within the framework of these two standards.
ANSI/ISA 61511/IEC 61511 has three parts: PART 1: Framework, definitions, system, hardware, and application programming requirements.
PART 2: Guidelines for the application of Part 1
PART 3: Guidance for the determination of safety integrity levels (SIL).
Meeting the 61511 standards may or may not result in an SIS, depending on the inherent design of the process and the available instrumentation and controls implementation. Also, it is not mandatory to use a safety PLC, as an SIS also can be achieved through hardwired design. Hardwired designs, however, usually bring complex wiring and maintenance issues. The standards do not necessarily recommend a safety PLC, but a safety PLC has many advantages, such as simplifying complex wiring, ease of configuration and availability of field diagnostics.
Using smart instruments and a safety PLC enables advantages like predictive and preventive maintenance through data collection and increases plant reliability.
Cybersecurity help for process industries, general and specific
For cybersecurity, the ISA/IEC62443 series of standards are available and are divided into four parts: part 1 for general, part 2 for policies and procedures, part 3 for system, and part 4 for component level.
Guidance also is available through industry- and sector-specific standards, including:
The figure shows that proper industrial control system (ICS) safety means both functional safety and cybersecurity must be met and must be integrated. Proper ICS safety can be achieved when both areas are adequately addressed. Courtesy: Sunil Doddi
‘ Operational technology (OT) personnel should play a critical role in developing recovery plans. ’
• API 1164 Standard by the American Petroleum Institute
• ChemITC – Chemical Sector Cyber Security Program from the American Chemistry Council
Standards and guidelines are only as good as implemented. Standards typically are not prescriptive as addressing every process plant design is impossible. Although standards are not necessarily laws, they carry a certain level of certainty; hence, the responsibility of meeting the standards with proper design falls on the users.
End users are responsible for meeting the standards and have higher stakes than vendors.
Functional safety can be achieved by meeting and maintaining SIL target levels 1-4. SIL measures system performance regarding the probability of failure on demand (PFD). In the process industries, PFDavg is widely used. PFH (probability of failure per hour) is rarely used in process industries.
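For a low-demand safety instrumented function, a commonly used simplification for a single-channel (1oo1) design is PFDavg ≈ λDU × TI / 2, where λDU is the dangerous undetected failure rate and TI is the proof-test interval. The sketch below applies that formula and maps the result onto the SIL bands; the failure rate and test interval are example values only, and real verification must follow the full IEC 61511 calculation requirements.

```python
def pfd_avg_1oo1(lambda_du_per_hr, proof_test_interval_hr):
    """Simplified average probability of failure on demand for a 1oo1 SIF."""
    return lambda_du_per_hr * proof_test_interval_hr / 2.0

def sil_band(pfd_avg):
    """Map PFDavg to the SIL bands used for low-demand functions."""
    for sil, upper in ((4, 1e-4), (3, 1e-3), (2, 1e-2), (1, 1e-1)):
        if pfd_avg < upper:
            return sil
    return 0   # does not meet SIL 1

# Example: dangerous undetected failure rate of 2e-6 /hr, annual proof test.
pfd = pfd_avg_1oo1(2e-6, 8760)
print(f"PFDavg = {pfd:.2e} -> SIL {sil_band(pfd)}")   # ~8.8e-3 -> SIL 2
```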
ANSI/ISA 61511/IEC 61511 requires vendors to have their functional safety management (FSM) plan if any vendor claims functional safety for their equipment. The end-user organization should have its own FSM.
Per ANSI/ISA 61511/IEC 61511, FSM personnel working on SIS design must be competent. The competence can be achieved either by external or internal training.
Three SIL target parameters
Three parameters are crucial to achieving any SIL target: architectural constraints, systematic capability, and probability of failure.
For SIL 3 targets, partial stroke testing can be an option, but this will result in complex design changes, like new bypass lines during the testing and complex partial stroke testing equipment. Hence, it is better to address other protection layers before considering this option.
For cybersecurity, standards set best practices and provide a way to assess security performance. IEC62443 assigns security assurance level (SAL) 0-4, much like the SIL target levels. SAL depends on seven factors, which are called foundational requirements. Seven factors for SAL are: Access control, use control, data integrity, data confidentiality, restricted data flow, timely response to an event and resource availability.
SIL is quantifiable, but SAL is not (yet). It may become possible to quantify SAL when enough data is available across different industries and stakeholders agree on proper modeling methods. However, cyber threats and intentions keep changing constantly, and quantifying them anytime soon may not be possible.
For the SAL qualitative approach, a risk graph is a good tool. Companies can use any existing process safety risk graph or develop a new one for cyber.
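As an illustration only, a cyber risk graph can be as simple as a lookup from consequence and likelihood to a target SAL, as in the sketch below; the categories and the resulting levels are hypothetical and must come from an organization's own risk criteria.

```python
# Hypothetical cyber risk graph; the actual mapping must come from the
# organization's own risk criteria, as suggested above.
RISK_GRAPH = {
    # (consequence, likelihood) -> target security assurance level (SAL)
    ("minor",   "unlikely"): 1,
    ("minor",   "likely"):   2,
    ("serious", "unlikely"): 2,
    ("serious", "likely"):   3,
    ("severe",  "unlikely"): 3,
    ("severe",  "likely"):   4,
}

def target_sal(consequence, likelihood):
    """Return the target SAL for a scenario, or 0 if the pair is not defined."""
    return RISK_GRAPH.get((consequence, likelihood), 0)

print(target_sal("serious", "likely"))   # -> 3
```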
Timely response to an event: It is advised to develop and keep an emergency response plan in the control room so that operator personnel have immediate access.
Resource availability: This is much like mean time to repair (MTTR) in functional safety and needs to be adequately maintained. Keep system backups and equipment inventory as part of the incident response plan.
Recovery plan: Having a proper recovery plan is advised. Operational technology (OT) personnel should play a critical role in developing recovery plans, as information technology (IT) personnel typically do not have functional knowledge of ICS installations. An SME (subject matter expert) in OT can play this role.
Owners must maintain proper test records and maintenance procedures, as ICS safety is a life-cycle effort that lasts until the project’s decommissioning. It is impossible to achieve 100% safety and security, but we can try. ce
Sunil Doddi is a Certified Automation Professional, Certified Functional Safety Expert and Cybersecurity Fundamental Specialist. Edited by Mark T. Hoske, content manager, Control Engineering, CFE Media and Technology, mhoske@cfemedia.com.
• Concurrent analytics and real-time control
• Data gateway between OT and IT
• Local processing, low latency
• Data aggregation for Cloud memory optimization
• Linux operating system
Eric Reiner, Beckhoff Automation
A phased approach to a cabinet-free enclosure design
The benefits of cabinet-free machines are attainable in a three-level phased approach.
Designing toward a cabinet-free machine requires a phased approach. New machine control technologies are helping leading-edge systems begin to ditch the enclosures previously needed to protect power supplies, input/output (I/O), networking, human-machine interface (HMI), control and other equipment from harsh production environments. Users who want to reap the benefits of eliminating electrical cabinets can start by taking smaller steps first.
FIGURE 1: The C7015 Industrial PC from Beckhoff is an IP65/67 device with EtherCAT P ports, so it supports a range of applications by adding IoT functionality to complete machine control. Images courtesy: Beckhoff Automation
controleng.com
KEYWORDS: I/O systems, machine design
LEARNING OBJECTIVES
Understand the advantages of cabinet-free machine design.
Explore three levels of implementation, from simply adding functionality to complete cabinet-free control.
Recognize how to select a future-proof, scalable automation platform that will preserve your intellectual property (IP) in the transition to cabinet-free machine design.
ONLINE
For more information:
CONSIDER THIS
What considerations do you have for a cabinet-free enclosure?
There are many advantages for machine builder original equipment manufacturers (OEMs) and equipment end users. A complete machine-mounted control system can reduce overall costs. Beyond getting rid of the enclosure itself, the machine no longer needs fans, filtration systems, cooling equipment and the associated power to run it all. With standard connectors on all components, users reduce wiring time, eliminate wiring errors and cut documentation. Manufacturers can reduce footprint requirements to pack more machinery into the same square footage. And emerging modular options for cabinet-free machine control help simplify everything from wiring and installation to support and spare parts inventories.
To see how to get from a cabinet-dependent design to a cabinet-free one, it’s important to understand the steps needed to get there.
LEVEL 1: Distributed I/O
First steps would be adding functionality without expanding the control cabinet. The place to start is adding remote I/O and IP65/67-rated machine controllers that don’t require the protection of an enclosure. While ruggedized I/O blocks in IP67 or even IP69K are not exactly new, the strategic application
of these technologies has offered significant advances in many applications. These devices gather more data in more places, whether that’s for communication, acquiring signals from sensors, integrated functional safety and more. By distributing the I/O closer to the end devices, users will cut back cabling runs, as well. Industrial PCs (IPCs) that can be installed in the field offer new possibilities. Leveraging these technologies doesn’t necessarily mean using the control hardware to run the machine. The IPC could handle control logic for a specific machine module that can be unplugged and moved around the factory floor.
In brownfield applications, one of these IPCs could be added as an IoT gateway or edge computing device. Flexible mounting options make it easy to physically attach the controller where it makes the most sense. This eliminates the need to find space in the cabinet and worrying about the additional heat source, along with the need to run multiple cables all the way back to the cabinet. With a port for EtherCAT P, which combines power and communication in a standard 4-wire cable, the controller can support the addition of I/O blocks for data acquisition or special functions.
FIGURE 2: The AMP8000 distributed servo drive system shrinks control cabinet requirements by integrating the servo amplifier into the motor. Courtesy: Beckhoff Automation
Equipped with an analytics software package, the IPC could perform some preprocessing of data, then send important metrics to the enterprise or cloud level using a cellular or Wi-Fi transmitter. This is a great option if older equipment still meets throughput and quality requirements, but it needs to provide data to measure machine health or energy efficiency.
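As a rough, hypothetical illustration of the kind of preprocessing such a gateway IPC might perform, the sketch below reduces a batch of raw sensor samples to a compact metrics payload before handing it to whatever cellular or Wi-Fi uplink is available. The asset name, topic string and publish_to_cloud() stub are assumptions for the example, not part of any vendor's API.

```python
import json
import statistics
import time

def summarize(samples: list[float]) -> dict:
    """Reduce raw samples to a few health metrics worth sending upstream."""
    return {
        "count": len(samples),
        "mean": round(statistics.fmean(samples), 3),
        "min": min(samples),
        "max": max(samples),
        "stdev": round(statistics.pstdev(samples), 3),
    }

def publish_to_cloud(topic: str, payload: str) -> None:
    # Placeholder: in practice this would hand the payload to an MQTT,
    # HTTPS or OPC UA client over the cellular or Wi-Fi uplink.
    print(f"{topic}: {payload}")

# Hypothetical raw data gathered from machine-mounted I/O blocks.
spindle_vibration_mm_s = [2.1, 2.3, 2.2, 5.9, 2.4, 2.2]

metrics = {
    "asset": "legacy-press-07",   # assumed asset name for the example
    "timestamp": time.time(),
    "vibration": summarize(spindle_vibration_mm_s),
}
publish_to_cloud("plant/press07/health", json.dumps(metrics))
```

The design point is that only a small summary leaves the machine, which keeps the uplink traffic modest even when the underlying signals are sampled quickly.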
LEVEL 2: Machine-mountable controls
The next level is shrinking control cabinet requirements for new machines. Technologies to build cabinet-free machines are continuing to evolve. As a result, many engineers are taking a wait-and-see approach. Numerous machine-mountable devices are available to minimize the need for large, protective enclosures. Employing a dual- or quad-core cabinetless IPC as the main machine controller and using servomotors with integrated drives makes this possible. Some electronics – power supplies, fuses, contactors, etc. – may need to remain in an enclosure, but with these cabinet-free options, the necessary footprint shrinks dramatically.
We’ve seen numerous systems take this route. Typically, they use an IP65/67 IPC or a fully enclosed panel PC, which combines the machine control CPU with rugged HMI hardware. Building on the remote I/O solutions and EtherCAT P communication, a distributed servo drive system allows the machine builder to incorporate dynamic motion control without needing to reserve a large section of the electrical cabinet for separate drives. These distributed servo drive solutions integrate the amplifier onto the back of the motor. An IP67 distribution module streamlines daisy-chaining for systems with advanced motion requirements. The distribution module can connect via EtherCAT P to other sensors, actuators and more, enabling a complete machine with reduced requirements for electrical enclosures.
In addition to stationary machines in packaging, intralogistics, assembly and other industries, these
types of solutions are ideal for mobile robotics. Automated guided vehicles (AGVs) and autonomous mobile robots (AMRs) have very compact enclosures. Including a controller on the exterior or at least in a space that’s partially exposed to harsh production environments offers a significant level of flexibility.
In either case, the PC-based controller could consolidate several functions that usually require separate hardware, for example, programmable logic controller (PLC), safety, machine vision and navigation. This level of cabinet-free control could be a stopping point — or it could be a step forward into the next challenge. The cabinet-free machine controller, I/O and servo solutions should build on proven technologies.
LEVEL 3: Complete cabinet-free control
The final level is ditching the cabinet entirely. The technologies required to make this happen remain a work in progress, since not every type of device needed for control and automation is available in IP65 or higher. Technologies are entering the market that allow for a modular design of cabinet-free control. With scalable baseplates, these systems have IPCs, drives, I/O, functional safety and more as pluggable building blocks. They can be attached and secured with set screws quickly, without requiring hours of a cabinet building specialist’s time.
Estimates show control system installation processes that would regularly require 24 hours or more could now be completed in just one hour using pluggable automation components in a machine-mounted platform.
Space savings grow with this completely cabinet-free approach. Distributing the control components across the machine, rather than concentrating them
FIGURE 3: In this example, a fully enclosed, IP67-rated Panel PC from Beckhoff provides machine control logic. The AMP8000 distributed servo drive components spread motion control across the machine, reducing footprint requirements.
FIGURE 4: The MX-System from Beckhoff provides a modular, machine-mounted system for control and automation, incorporating all components into pluggable modules.
Insights
Enclosure insights
• Transitioning to cabinet-free machine design mirrors a runner's journey, starting small with distributed I/O, then advancing to machine-mountable controls before aiming for complete cabinet-free solutions.
• Cabinet-free systems reduce costs, streamline wiring, and minimize footprint requirements, offering modular options that enhance installation, support, and maintenance processes for manufacturers and end users.
• While cabinet-free design presents significant advantages, users should strategically assess their needs, embracing scalable automation platforms and open communication protocols to future-proof their systems amidst ongoing technological advancements.
in a single enclosure with numerous wires snaking back, can lead to significant footprint reductions.
Using a modular system built on scalable baseplates will offer a range of benefits including:
• Reducing overall machine footprint by up to 70%.
• Slashing the number of control components required by a factor of 10.
• Consolidating documentation by as much as 80%.
• Eliminating costly wiring errors through pluggable devices.
• Removing numerous points of failure.
• Accelerating time to market through rapid commissioning.
The fully machine-mounted control platform’s footprint and complexity reductions facilitate better collaboration between departments. For example, engineers will no longer have to design special installation spaces and brackets for the system. More modular, pluggable designs will also reduce the need to disassemble completed machines before transporting them to end user facilities.
For equipment end users, the advantages will include improved operating reliability, reduced routine maintenance requirements and simplified troubleshooting and repair, if needed. For example, some will feature diagnostics viewable on smartphones via Bluetooth, unique serial numbers in the form of DataMatrix codes on each module and the usual status LEDs. Technicians can scan these codes using a smartphone app to retrieve diagnostic data from the controller and corresponding function module.
EtherCAT communication can help provide comprehensive, system-wide diagnostics. The
FIGURE 5: Cabinet-free machine designs using the MX-System can reduce overall machine footprint by up to 70% and slash the number of control components required by a factor of 10.
industrial Ethernet system supports hot swap capabilities, which makes replacing modules, if needed, as straightforward and quick as possible.
Get in the running for cabinet-free machine design
It’s no surprise that engineers are engaged at all levels; cabinet-free design requires some reconsideration of how electrical and mechanical systems work together. Some find that level two with machine-mountable control offers all the advantages they need, and their end user customers aren’t asking for a more intensive redesign at this time. Whatever level is right, it’s important to consider the options carefully. A flexible, scalable automation platform will allow users to leverage their existing software platform and design, whether it’s adding an IP65/67 IPC or switching to a cabinet-free machine control design.
Users will want to make sure the leading-edge systems support open communication protocols. It’s highly likely they’ll need to communicate with legacy equipment or incorporate small, remote enclosures for components that aren’t yet available in cabinet-free design. In these cases, users may find that some vendors have approached cabinet-free design as a marathon, not a sprint. Machine-mountable technologies have continued to build upon each other to get closer to the finish line. Machine builders and system integrators can begin deploying more extensive cabinet-free solutions (standalone IPCs, I/O and drive technology) to claim a head start when cabinet-free automation systems emerge as the main event. ce
Eric Reiner, IPC and MX-System product manager, Beckhoff Automation LLC. Edited by Chris Vavra, web content manager, CFE Media and Technology, cvavra@cfemedia.com.
Using cybersecurity insights to manage site-level risk and compliance for OT facilities
Organizations can use timely and actionable operational technology (OT) cybersecurity insights to help identify and respond to the latest threats.
In the wake of high-profile ransomware attacks disrupting gasoline distribution at Colonial Pipeline and meat production at JBS Foods, many organizations have increased their focus on building more comprehensive cybersecurity programs to improve their ability to protect their operational technology (OT) assets. One of the key capabilities in defending against growing threats is developing advanced and actionable cybersecurity insights.
What cybersecurity insights do
Cybersecurity insights are similar to early warning radar for an air force trying to defend its airspace. A country trying to defend its airspace is blind without detection abilities and contextual insights into what could be attacking it.
Insights go deeper than simple threat detection. They provide critical analysis of cybersecurity threats and actionable data into how to best address the intrusion. By combining detection with context, insights can provide cybersecurity managers with a critical understanding about potential intrusions. Organizations can then use these insights to decide on how to respond. This could include distinct actions depending on characteristics of the threat. They might include new firewall rules, system quarantine and/or bringing assets offline to install patches as countermeasures to address a particular intrusion.
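To make the idea of context-driven response concrete, here is a minimal, hypothetical sketch mapping an insight record to one of the countermeasures mentioned above. The field names and decision thresholds are illustrative assumptions, not any product's schema.

```python
def choose_response(insight: dict) -> str:
    """Map a contextualized threat insight to a candidate countermeasure.

    The keys (severity, asset_criticality, lateral_movement) are assumed
    fields of an insight record; real products define their own schemas.
    """
    if insight["severity"] == "critical" and insight["asset_criticality"] == "high":
        return "quarantine system and bring asset offline to patch"
    if insight.get("lateral_movement"):
        return "add firewall rule to block the offending source"
    return "log and monitor; schedule patching during the next maintenance window"

example = {"severity": "critical", "asset_criticality": "high", "lateral_movement": True}
print(choose_response(example))
```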
Quite a few organizations have threat detection capabilities, but many lack OT-specific contextual insights, which leaves them vulnerable to threats. Insights can bring vital information to help determine the scope and severity of an intrusion. They
can also better inform decision making on how to respond to threats. These insights must be able to provide the organization with near real-time data about identified threats and vulnerabilities.
Cybersecurity in the OT environment
Hackers are becoming more sophisticated, particularly with cyber-physical attacks, by refining intrusions from generic designs to specific targets. They also continue to introduce new tactics and techniques and improve upon their capabilities to manipulate industrial protocols.
Increased sophistication and new intrusion methods are not the only concerns.
FIGURE 1: The diagram lists other potential consequences of poor visibility for manufacturers. Images courtesy: Honeywell
FIGURE 2: The diagram summarizes how Honeywell’s Cyber Insights is designed to provide visibility to site vulnerabilities, detect threats and help reduce risks at the site-level.
The industrial sector faces a sustained risk and higher volumes of malware infiltrating and disrupting operations. Even worse, an increasing volume of malware can establish control of operational assets.
Some of the key findings from the 2023 USB Threat Report included:
• More than 53% of all detected malware is designed to penetrate industrial systems via a USB and establish command and control (C2), which is an increase from 9% in 2019.
LEARNING OBJECTIVES
Understand the importance of operational technology (OT) cybersecurity insights.
Understand the different types of cybersecurity insights along with the capabilities they can provide users.
Learn how to implement insights into the overall cybersecurity posture.
CONSIDER THIS
What cybersecurity insights do you consider most important?
• Malware capable of disrupting crucial industrial operations is up to 82%, from only 26% in 2019.
Without clear insights into cyber-physical threats, the potential for disruption increases. Cybercriminals can use malware to access industrial systems, establish remote control and install malicious payloads.
Insights can provide cybersecurity managers with a better understanding of threats and their characteristics so they may be able to prevent some of these serious risks to their operations. Cybersecurity insights can differ. There are two distinct types organizations may use in their cybersecurity management. Cybersecurity vendors offer software providing site-level and enterprise insights. Site-level insights include data on vulnerabilities, threats and compliance issues detected at an individual site. These site insights can be used by cybersecurity managers located on site or at headquarters to better understand cybersecurity threats so they can isolate and neutralize them.
Enterprise insights are slightly different. In addition to offering a view of site-level data and threats, they include a portfolio-wide view of anomalous behavior and threats. A chief cybersecurity officer, CISO or senior corporate leader often has access to enterprise insights. They can help cybersecurity leaders understand OT risks by providing visibility through enterprise dashboards using near real-time data across all sites. Both types of insights bring their own levels of visibility and value. Many large companies will need both types of insights. An organization with one or two locations might only need site-level insights. Since insights are a key enabler to respond to threats, organizations should determine which types fit best with their organizational structure to quickly respond to cybersecurity risks.
Cyber experience, compliance
Cybersecurity service providers offer customized software solutions to help organizations with threat detection and visibility. They can help organizations meet challenges related to the scarcity of in-house OT cybersecurity experience and improve an organization’s ability to achieve critical compliance outcomes. There continues to be a shortage of cybersecurity professionals with skills in operational technology. ISC2 said there is a cybersecurity workforce gap of more than 3.4 million people. This makes insights even more critical; OT risk visibility and response may be limited when OT
risks are not fully understood. Companies may have a portfolio of OT assets that are very complex. Risk compliance is as critical in OT as in the information technology (IT) space. Cybersecurity managers must comply with regulations and standards that apply to distinct assets and operating environments. Software can help organizations in meeting and recording important compliance requirements. This capability becomes more important in an OT environment that generates more security data.
A cybersecurity program
All organizations need to assess the cybersecurity posture of their OT assets and risks. This includes evaluating systems, networks, policies, procedures and OT assets. An OT cybersecurity professional can conduct a cybersecurity assessment designed to map out OT assets, analyze their functions, document their network connections, examine risks associated with the connected assets, and classify identified vulnerabilities by severity. The assessment is designed to help examine how effective the existing security procedures and controls are in responding to cyber threats.
Once the assessment is complete, organizations can use the assessment findings to help them begin to understand insight gaps in how they understand risk. Leaders may decide insights are not sufficient to provide adequate detection and response to OT cyber risks. At this point they should consider cybersecurity vendors who offer software solutions capable of providing better insights for responding to OT risks at the site and enterprise level. These insights can help an organization address these vulnerabilities and help the organization to improve availability, reliability and safety of their industrial control systems and operations. For organizations that need an enterprise-wide solution, some vendors offer software to enable insights across multiple sites. They should choose a solution that provides needed visibility and actionable insights to better understand and act against threats.
Building OT cybersecurity resilience requires timely and actionable insights. A complex operating environment with a vast inventory of OT assets, growing connectivity, OT skill shortages and more sophisticated cyber threats make insights more important to improve operational security. Organizations must develop insights to improve visibility into cybersecurity to quickly respond to identified OT threats in operations. Implementing cybersecurity insights helps companies improve ability to identify and mitigate incidents across the enterprise. Insights can help an organization prevent incidents that threaten production, safety and reputation. ce
Nav Sharma is senior cybersecurity product management lead at Honeywell. Edited by Chris Vavra, web content manager, CFE Media and Technology, cvavra@cfemedia.com.
FIGURE 3: The diagram summarizes how Honeywell’s Cyber Watch is designed to leverage Cyber Insights capability to provide visibility across multiple sites to aggregate data and manage risks.
Insights
OT cybersecurity insights
• In response to escalating cyber threats, organizations are prioritizing cybersecurity insights, crucial for defending against potential attacks, enhancing operational resilience and threat prevention.
• Cybersecurity professionals face challenges in the industrial sector, including a shortage of OT-specific skills.
• Tailored software solutions that provide site-level and enterprise insights are vital for organizations to assess, manage risk, and ensure compliance in the evolving OT environment.
John Butler, Tech B2B Marketing
How digital transformation is adapting
Digital transformation aims to enhance business decision-making, improve efficiencies and lower costs over the long term for manufacturers.
Looking around the manufacturing industry today, one cannot help but take note of the increased presence of technology. Everything from material handling to cutting, shaping, inspecting, packaging and palletizing can be done with some kind of automation and digital capability. Statistics related to sales and deployment of robotics, vision systems, and other automation equipment continue to achieve new highs. Yet despite the rapid introduction of digital solutions to aid manufacturers, plenty of manual and low-tech methods remain, particularly among more conventional sectors.
Advanced digital solutions help make an organization more flexible and agile—able to respond more quickly to changes in market demand, to optimize supply chains with real-time visibility and predictability, and to improve their customer service through faster response times and more accurate order tracking. On the manufacturing floor, the benefits extend further, enabling automation, process and quality data tracking, and improved resource allocation, all with reduced downtime. With digital transformation—coupled with the Internet of Things (IoT), cloud services, machine learning, and artificial intelligence—manufacturers can see deep into their businesses, predict and prevent failures, and ensure a sustained level of operational resilience.
Digital transformation approach
KEYWORDS: digital transformation, manufacturing
LEARNING OBJECTIVES
Understand how digital transformation is changing manufacturing processes and the effects it’s having.
Learn how to overcome potential obstacles.
CONSIDER THIS
What is your company doing to embrace digital transformation?
ONLINE
See related topics online.
In response to increasing costs for labor and materials as well as consumer and market pressures for higher production quantities, shorter manufacturing times, and improved quality, many manufacturers are starting to integrate advanced digital technologies. Known more commonly as digital transformation, the process aims to enhance business decision-making, improve efficiencies and lower costs over the long term.
Why shift to digitalization?
In the age of Industry 4.0, the market landscape is in a perpetual state of shift and disruption, making it difficult for organizations that rely primarily on manual and conventional processes to keep up. The consequence is that inefficiency increases, productivity declines, waste and costs trend upward, and competitiveness suffers. To remain competitive, many manufacturers have started to leverage digital transformation. The shift to digital technologies is part of their strategy for long-term success.
If an organization is beginning its journey into adopting digitization, the task can at first appear daunting. To avoid feeling overwhelmed and to maximize the chance of achieving successful outcomes, it is important to take small steps. Jonathan Weiss, CRO at Eigen Innovations, suggests that having clearly defined objectives and outcomes is key.
“Starting a project without a clear understanding of success or quantifiable business impact will almost guarantee failure when it comes to adoption," he said. Once the desired outcome has been defined, other considerations should center around ease of use, integration with existing technologies, and scalability.
With persistent labor shortages, costs for skilled labor continue to climb. So digital systems that are easy to use are attractive and advantageous as the barriers for operators come down, making these systems usable by a broader range of workers. Sophie Ducharme, marketing manager for Vention, sees ease of use as a major factor. She noted Vention’s manufacturing automation platform (MAP) focuses on “simple,
easy-to-use tools and solutions that enable those with a basic knowledge of implementing automation and manufacturing processes to design, automate, deploy, and operate digital systems.”
Overcoming challenges and obstacles
Digitizing manufacturing brings with it several barriers that must be successfully navigated. While many organizations can quickly grasp the benefits of adopting digital transformation technologies, elements such as large capital expenses, implementation costs, and complexity require consideration. The challenges don’t end there. At the personnel level, the availability of qualified workers, general resistance to change, and potential disruption to the status quo must also be factored in. Thirdly, when adding digitization to existing infrastructure, compatibility with and interoperability of existing or legacy systems may rank high among concerns. Davide Pascucci, founder and CEO of Bright IIOT, believes that upgrading a control system should usually be done in a way that preserves the same functionality as before. He says, “Much of the systems integration cannot be done in a vacuum, as interfacing the manufacturing floor with IT systems is not always a seamless process and requires coordination across the organization.”
Learning from others
The best insights and advice often come from those with experience, and their successes and challenges are great teachers. The importance of careful planning and consultation with all necessary stakeholders is top of mind for both Weiss and Pascucci. The planning process helps to identify potential trouble spots and provides an ability to anticipate and mitigate them as part of the overall business strategy for adopting digitization.
With digital transformation and automation being ongoing processes with an ever-changing landscape, it’s important that manufacturers remain agile and flexible so they can react quickly to unexpected changes. Ducharme sees manufacturing floors as living entities. Experience has taught her that “manufacturing systems are no longer designed for decade-long life spans.” Rather, “it is now common to see assembly lines and equipment upgraded, repurposed, or replaced every two to three years.”
With the evolution of manufacturing technology, there's been a fast-paced convergence of different technologies on and around the manufacturing floor, driving continuous progression and improvement. Machine vision, machine learning (ML) and artificial intelligence (AI) are further accelerating the transition into the digital era, enhancing data collection to further advance Industry 4.0 sophistication. Automated inspection has been used for many years to detect defects and failures, but Weiss explains that at Eigen Innovations, they pair machine vision systems that inspect products with process data. This shows not only where defects have occurred but why they occurred and, most importantly, how to prevent them in the future.
Bright IIOT sees vision systems as crucial to obtaining and providing inspection information and to identifying and eliminating product variances. Furthermore, vision systems can contribute to resource optimization as well. Where inspection can be reliably performed by machine, humans can be freed up for tasks that are higher value and more critical.
Ducharme sees machine vision, machine learning and AI as key to establishing smart factories. She echoes her peers’ thoughts on the value of combined data: “The combination of quality and process data enhances real-time quality control, predictive maintenance, data-driven decision-making, and overall efficiency.”
Whether a manufacturer is small, medium-size, or large, advanced and innovative digital technologies are becoming necessary to keep up with an ever-competitive global manufacturing environment. ce
John Butler is contributing editor for Tech B2B Marketing. This originally appeared on the Association for Advancing Automation's (A3) website. A3 is a CFE Media and Technology content partner. Edited by Chris Vavra, web content manager, CFE Media and Technology, cvavra@cfemedia.com.
FIGURE: Implementing digitalization can add to Industry 4.0 benefits in key ways. Courtesy: Control Engineering with information from Tech B2B Marketing.
Insights
Digital transformation insights
• The manufacturing industry is witnessing a surge in digital transformation, driven by Industry 4.0 demands. Integrating advanced technologies enhances flexibility, agility, and operational resilience, ensuring long-term competitiveness.
• Despite the benefits, organizations face challenges in adopting digitization, including high costs, skilled labor shortages, and resistance to change. Successful transition requires careful planning, consultation, and a focus on user-friendly, scalable solutions.
Bruce Slusser, Actemium Avanceon
Leveraging edge computing’s power in Industry 4.0
Transforming raw data into valuable, actionable insights in real-time is a complex task that necessitates advanced technologies; edge computing helps.
Industry 4.0 represents a significant shift from the Third Industrial Revolution, focusing on optimizing entire systems and production lines rather than automating single machines and processes, taking what was data in Industry 3.0 and producing information with Industry 4.0. The advent of Industry 4.0 has brought about increased connectivity and data sharing, leading towards improved efficiency, productivity and performance in the industrial landscape.
KEYWORDS: Edge computing, Industry 4.0
LEARNING OBJECTIVES
Understand how edge computing can help Industry 4.0 applications.
Learn about the benefits and challenges that come with using edge computing.
Learn what it takes to develop the right edge computing architecture for an application.
ONLINE
See additional edge computing and Industry 4.0 stories at https://www.controleng.com/edge-cloud-computing/
CONSIDER THIS
How can edge computing improve your Industry 4.0 operations?
The amount of data generated by a smart factory can produce upwards of 5 petabytes of data each week. To put that into perspective, that's about nine and a half times as much data as YouTube's entire video database where users upload an average of 35 hours of new video every minute. Managing and leveraging this much data is a significant challenge, requiring systems for data collection, storage and analysis. Furthermore, transforming this raw data into valuable, actionable insights in real-time is a complex task that necessitates advanced technologies.
Edge computing’s role in Industry 4.0
Cloud services have become a critical component of digital transformation due to their scalability and ability to store, process and analyze data in a central location. Edge products complement these services by providing structure and contextualization to complex or disparate plant floor assets, sensors, and historians. Unlike data at the manufacturing execution system (MES) and enterprise resource planning (ERP) levels, plant floor equipment and sensors can collect data in fractions of a second, necessitating edge products to consume, contextualize, and publish this data in manageable payloads.
Edge applications, which are often embedded within or located near industrial machines and sensors, collect raw data from these sources. This data could be anything from temperature readings and vibration levels to energy consumption and production rates. Instead of only sending this raw data to a centralized server or cloud service for processing, edge applications have the capability to locally analyze this data. This involves filtering out irrelevant data, aggregating relevant data, and applying advanced analytics and machine learning (ML) algorithms to the data. Through this local processing and analysis, edge computing devices can transform raw data (values) into meaningful information. For example, patterns and trends can be identified, anomalies can be detected and predictions can be made. This information is much more valuable to industrial processes as it provides actionable insights that can be used to improve efficiency, quality and sustainability.
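One simple way to picture turning raw values into information at the edge is a rolling statistical check that flags only unusual readings for escalation. The sketch below is an illustration only; the z-score threshold, window size and example values are assumptions, and real edge analytics would be tuned to the process.

```python
from collections import deque
import statistics

class RollingAnomalyDetector:
    """Flag values that deviate strongly from a rolling window of recent data."""

    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.history: deque[float] = deque(maxlen=window)
        self.z_threshold = z_threshold

    def update(self, value: float) -> bool:
        """Return True if the new value looks anomalous compared with recent history."""
        anomalous = False
        if len(self.history) >= 10:  # need some history before judging
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            anomalous = abs(value - mean) / stdev > self.z_threshold
        self.history.append(value)
        return anomalous

detector = RollingAnomalyDetector()
for reading in [71.2, 71.4, 71.1, 71.3, 71.2, 71.5, 71.3, 71.2, 71.4, 71.3, 78.9]:
    if detector.update(reading):
        print(f"anomaly detected: {reading}")  # only this event is escalated upstream
```

In this pattern the bulk of the readings never leave the device; only the exception, with its context, becomes information worth transmitting.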
Edge computing benefits for manufacturers
The decentralized nature of analysis performed on an edge device has a benefit over the big data analytics approach in that it is faster and closer to real time. Edge nodes are deployed near the devices they are consuming information from and can scale to meet growing needs to alleviate bottlenecks in an infrastructure. While the ability to perform analytics in near-real time leveraging an edge device is a benefit, another major benefit for edge applications is the ability to implement high-speed decision making, allowing for semi-autonomous models to provide feedback to operators and managers leveraging the insights derived by the ML models implemented. Edge devices play a crucial role in the creation of a unified namespace, which is a common data model that represents all the data sources and destinations in an industrial system.
By processing and analyzing data at the source, edge devices can provide structure and context to the data, making it easier to integrate and communicate across different devices, platforms, and protocols. This results in a single point of truth for data, improving data quality, consistency and reliability. A unified namespace also can reduce data duplication and complexity, making the data more manageable and useful. Therefore, edge devices not only facilitate the creation of a unified namespace but also enhance the overall data integrity in an Industry 4.0 environment.
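A unified namespace is easier to picture with a concrete naming scheme. The sketch below builds a topic path from enterprise down to tag and wraps each value with its context so every consumer sees the same structure; the hierarchy levels and field names are assumptions for illustration, not a standard.

```python
import json
import time

def uns_topic(enterprise: str, site: str, area: str, line: str, tag: str) -> str:
    """Build one consistent topic path for a data point in the namespace."""
    return f"{enterprise}/{site}/{area}/{line}/{tag}"

def uns_payload(value: float, unit: str, quality: str = "good") -> str:
    """Wrap a raw value with the context downstream consumers expect."""
    return json.dumps({
        "value": value,
        "unit": unit,
        "quality": quality,
        "timestamp": time.time(),
    })

# Hypothetical enterprise/site/area/line names for the example.
topic = uns_topic("acme", "plant1", "packaging", "line3", "filler_speed")
print(topic, uns_payload(412.0, "bottles/min"))
```

Because every producer publishes into the same hierarchy with the same payload shape, consumers do not need device-specific parsing, which is the "single point of truth" the text describes.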
Developing the architecture
Edge computing requires a robust combination of hardware and software infrastructure to function effectively. Edge computing relies on a distributed computing architecture that brings data processing closer to the source of data generation, reducing latency and enhancing real-time decision-making. The hardware infrastructure for edge computing often involves a network of Edge Nodes, which can include devices such as sensors, IoT devices, gateways and edge servers. These nodes are strategically placed at the edge of a network, allowing them to process and analyze data locally before transmitting relevant information to centralized cloud servers. The hardware should be capable of handling diverse workloads, ranging from simple data filtering and aggregation to more complex analytics. Edge devices also may need to be energy-efficient, rugged and capable of operating in harsh environments.
On the software side, edge computing relies on a robust and flexible software infrastructure. This includes edge computing frameworks that enable developers to deploy and manage applications at the edge. These frameworks facilitate the orchestration of computing tasks across diverse edge nodes, ensuring seamless integration and coordination.
Edge computing software also involves edge analytics tools for processing data locally, reducing the need for extensive data transfers to centralized servers. Security is a critical consideration, and software solutions should include encryption, authentication, and other measures to protect data at the edge. Edge computing platforms also leverage containerization and virtualization technologies to enhance scalability and manageability, allowing for the deployment of a variety of applications on edge
nodes. A well-integrated hardware and software infrastructure is essential for the success of edge computing, addressing the unique challenges posed by decentralized data processing.
Edge computing challenges for users
Edge computing, while offering advantages in reduced latency and improved efficiency, presents challenges such as limited resources on edge devices, variable network connectivity, and security concerns due to the distributed nature of these devices. Managing data at the edge becomes complex, requiring effective governance and storage solutions to prevent inconsistency and duplication. Scaling edge deployments and ensuring interoperability among diverse devices and platforms pose additional hurdles.
The complexity of developing applications for distributed computing, coupled with lifecycle management difficulties for remote devices, further complicates edge computing adoption. Compliance with data privacy regulations and the consideration of costs associated with maintaining distributed infrastructure are also critical factors that demand attention. Addressing these challenges necessitates a comprehensive approach, integrating advancements in hardware, software, and network technologies alongside the establishment of standards and best practices for effective edge computing deployment and management. Challenges are not insurmountable.
Implementing edge computing frameworks that prioritize resource-efficient application design, such as containerization and microservices architecture, helps overcome limited resources on edge devices. This allows applications to be broken into smaller, manageable components, optimizing resource use and facilitating efficient deployment on devices with constrained capabilities. Using edge-to-cloud communication protocols that can adapt to varying network conditions helps address connectivity challenges. Technologies such as edge caching, where frequently accessed data is stored locally, reduce dependence on constant network connectivity. Implementing edge gateways that aggregate and preprocess data before transmitting it to centralized systems minimizes the impact of intermittent or low-bandwidth connections. ce
Bruce Slusser is digital transformation practice director for Actemium Avanceon, a CFE Media and Technology content partner. Edited by Chris Vavra, web content manager, CFE Media and Technology, cvavra@cfemedia.com.
‘ Edge computing platforms leverage containerization and virtualization technologies to enhance scalability and manageability. ’
Insights
Edge computing insights
• Industry 4.0 emphasizes optimizing entire systems, leveraging connectivity, and contextualizing data for enhanced efficiency and productivity in industries.
• Edge computing complements cloud services, enabling real-time data analysis, faster decision-making, and unified data management, crucial for Industry 4.0’s success.
Dan White, Opto 22
Edge to cloud: Understanding new industrial architectures
Modern edge devices put emphasis on cybersecurity, data democratization and programmability to enable advanced cloud capabilities.
In an era where digital transformation dictates the pace of business innovation, understanding the synergy between edge devices and cloud technology is more than a necessity. It is a strategic imperative. The landscape of data management and processing is undergoing a radical change, marked by the emergence of sophisticated edge devices and seemingly infinite storage and computing power in the cloud.
Industrial journey from edge to cloud
However, when it comes to moving critical production and infrastructure data to the cloud, a number of concerns arise; cybersecurity is paramount among them. Add the recurring costs of cloud storage and software tools to fears surrounding network reliability, and it’s easy to understand the hesitancy to select a cloud-based architecture for mission-critical operations. Even putting those considerations to the side, there’s still the issue of contextualizing the enormous swaths of data that cloud servers are capable of storing.
Fear not. New technologies embedded in modern edge devices are transforming the industrial internet of things (IIoT) landscape by enabling secure and seamless data transmission to the cloud. This advancement allows for more than just data collection; it ensures that data is transmitted safely and efficiently, ready for analysis in cloud-based systems. These capabilities are vital in an increasingly connected world, where the quick and secure handling of data is essential for timely and informed decisionmaking across various industries.
Industrial cybersecurity: A continuous coordination
Traditionally, edge devices like input/output (I/O) systems and programmable logic
FIGURE 1: Edge architecture bridges operational technology and information technology (OT and IT) for secure integration. Images courtesy: Opto 22
controllers (PLCs) were the most vulnerable links in network security. These devices, critical in industrial settings, lacked advanced cybersecurity features, making them easy targets for cyber threats. In stark contrast, modern edge devices are designed with a strong emphasis on cybersecurity. They come equipped with a range of protective features like firewalls, secure sockets layer/transport layer security (SSL/TLS) encryption, virtual private network (VPN) clients, secure authentication, network zoning capabilities and regular updates to guard against evolving threats. This shift marks a significant advancement in securing industrial networks, transforming edge devices from weak points into fortified gateways in the digital ecosystem.
Alongside the existing cybersecurity features of modern edge devices, the implementation of outbound communication protocols like message queuing telemetry transport (MQTT) adds another layer of security. These protocols facilitate device-originated communication, which inherently reduces vulnerability to external threats. By allowing edge devices to securely initiate and control data exchange, MQTT minimizes the need for open inbound network ports, thus significantly decreasing the risk of cyber-attacks. This proactive approach in data communication reinforces the security framework of industrial networks, further transforming edge devices into robust, secure gateways in the digital infrastructure.
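The security point here, that the edge device initiates an encrypted outbound connection so no inbound ports need to be opened, can be sketched with nothing more than the Python standard library. The broker hostname below is a placeholder, and a real MQTT client (or any TLS-capable client) would ride on top of a connection established the same way; this is an illustration, not any vendor's implementation.

```python
import socket
import ssl

BROKER_HOST = "broker.example.com"  # placeholder hostname for the example
BROKER_PORT = 8883                  # conventional MQTT-over-TLS port

# The device dials out; nothing on the device listens for inbound connections.
context = ssl.create_default_context()                      # verifies the broker certificate
# context.load_cert_chain("device.crt", "device.key")      # optional mutual TLS, if required

with socket.create_connection((BROKER_HOST, BROKER_PORT), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=BROKER_HOST) as tls_sock:
        print("outbound TLS session established:", tls_sock.version())
        # An MQTT client would now perform its CONNECT/PUBLISH exchange over this socket.
```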
Costs contrasted: Industrial cloud capabilities versus on-premise provisioning
On-premise industrial server solutions come with significant initial capital costs. This includes the expense of purchasing server racks, servers, cooling equipment, software packages and IT administration tools. Beyond these upfront investments, there are also substantial ongoing maintenance costs, encompassing hardware repairs, software updates and energy consumption for operation and cooling systems.
In contrast, cloud-based storage and computing services typically operate on an annual subscription model. While this might seem costly at first glance, it often proves more economical in the long run. The cloud service provider handles much of the management and maintenance, from server upkeep to software updates. This reduces the direct costs associated with physical infrastructure and shifts the burden of ongoing maintenance away from the
user. This shift can lead to significant savings in time and resources, allowing businesses to focus on core activities rather than IT management.
An additional advantage of cloud-based storage is its scalability. Starting with a small virtual setup at a minimal cost, scaling up as needed is remarkably easier and cost-effective compared to on-premise solutions. With cloud services, increasing storage or computational power doesn’t require physical hardware additions, but a simple adjustment in the service plan, offering flexibility and efficiency in resource management. This scalability feature makes cloud solutions not only economical but also adaptable to evolving business needs.
Context clarity: Coordinating industrial cloud and edge data
Transferring data from edge devices to the cloud can initially seem chaotic, but modern data modeling tools like user-defined types (UDTs) and advanced data models provide a solution. These tools ensure that by the time data reaches the cloud, it’s already contextualized. This means data is not just raw numbers; it’s processed and tagged with relevant context (like location, device type and operational status), making it immediately useful and comprehensible. This preprocessing at the
KEYWORDS: Edge computing, industrial cloud, edge to cloud architectures
LEARNING OBJECTIVES
Evaluate technologies and requirements for industrial edge and cloud capabilities, including the continuous coordination of industrial cybersecurity.
Contrast costs of industrial cloud capabilities versus on-premise provisioning and consider clarity of industrial cloud and edge data.
Examine industrial edge device resilience amid cloud reliability concerns with refined integration for edge and cloud technologies.
CONSIDER THIS
Have you considered new edge and cloud technologies in light of cybersecurity and data requirements?
ONLINE
See three more images with this article online. www.controleng.com/edge-cloud-computing
FIGURE 2: Virtual private network (VPN) and port redirect provide secure access to isolated network zones.
FIGURE 3: Using a secure message queuing telemetry transport (MQTT) architecture allows multiple clients with one trusted broker.
edge simplifies cloud data management and analysis, turning potential confusion into clear, actionable insights.
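As a minimal sketch, here is what a contextualized record of this kind might look like before it leaves the edge device; the fields shown (site, device type, device ID, operational status) are assumptions for illustration rather than a defined UDT.

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class PumpReading:
    """One contextualized data point: the value plus the context that makes it useful."""
    site: str
    device_type: str
    device_id: str
    operational_status: str
    flow_lpm: float
    timestamp: float

reading = PumpReading(
    site="north-plant",            # assumed location tag
    device_type="pump",
    device_id="P-104",             # hypothetical asset identifier
    operational_status="running",
    flow_lpm=182.5,
    timestamp=time.time(),
)
print(json.dumps(asdict(reading)))  # already structured when it reaches the cloud
```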
Industrial edge architecture and cybersecurity insights
• Technologies and requirements for industrial edge and cloud capabilities include the continuous coordination of industrial cybersecurity.
• When deciding between cloud or on-premise industrial architectures, contrast costs of industrial cloud capabilities versus on-premise provisioning and consider clarity of industrial cloud and edge data.
• Refined integration for edge and cloud technologies can help with industrial edge device resilience and cloud reliability concerns.
Reliability concerns, especially during network outages, often shadow cloud computing. Yet, the advanced capabilities of edge devices provide a robust solution. With multi-core processors, varied control and programming options and built-in human-machine interfaces (HMIs), these devices ensure local operations continue smoothly, even when cloud connectivity falters.
Edge devices stand out for their autonomous operation. They handle critical functions and data processing independently, including local data storage and buffering, proving essential when cloud services are interrupted. This autonomy is key to maintaining uninterrupted operations, ensuring that essential systems remain operational and efficient.
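Local buffering of this kind is often implemented as a simple store-and-forward queue. The sketch below is one hedged way to express the idea, with the send() stub standing in for whatever uplink the device actually uses; the buffer size and record fields are assumptions for the example.

```python
from collections import deque

class StoreAndForward:
    """Buffer outbound records locally and flush them once the uplink returns."""

    def __init__(self, maxlen: int = 10_000):
        self.buffer: deque[dict] = deque(maxlen=maxlen)  # oldest records drop first if full

    def send(self, record: dict) -> bool:
        # Placeholder for the real uplink (MQTT publish, HTTPS POST, ...).
        # Returning False here simulates the cloud being unreachable.
        return False

    def push(self, record: dict) -> None:
        """Try to send immediately; keep the record locally if that fails."""
        if not self.send(record):
            self.buffer.append(record)

    def flush(self) -> int:
        """Drain the buffer oldest-first once connectivity is restored."""
        sent = 0
        while self.buffer:
            if not self.send(self.buffer[0]):
                break          # still offline; try again later
            self.buffer.popleft()
            sent += 1
        return sent

q = StoreAndForward()
q.push({"tag": "line3/temp", "value": 71.4})
print(len(q.buffer), "record(s) buffered while the cloud is unreachable")
```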
While edge devices offer local resilience, cloud computing elevates capabilities with advanced analytics, artificial intelligence (AI), machine learning (ML), anomaly detection (AD), and large language models (LLMs). This synergy between edge
autonomy and cloud-based intelligence creates a balanced digital infrastructure, effectively addressing cloud reliability challenges while harnessing its advanced analytical potential.
Refined integration of edge and cloud technologies
In the exploration of edge-to-cloud architectures, there’s an intricate balance between the localized robustness of edge devices and the expansive capabilities of cloud computing. This synergy is shaping a more efficient and secure future for data management and processing in various industries.
Edge devices have evolved significantly, now embodying secure and autonomous units capable of sophisticated local processing. Their enhanced cybersecurity features and ability to operate independently are pivotal in maintaining operational integrity, even in the absence of cloud connectivity. This local control is crucial in industrial settings, where even brief downtimes can have significant impacts.
On the other hand, the cloud offers advanced computational power and storage capacity, making it an invaluable resource for large-scale data analysis. With capabilities like AI, ML, AD and LLMs, the cloud extends beyond mere data storage, providing deep insights and analytics that are transforming decision-making processes in businesses.
The convergence of edge and cloud technologies represents not just an advancement in individual capabilities, but a collaborative force. This integration allows for a more resilient and adaptable digital infrastructure, where local and cloud systems complement each other, ensuring immediate operational efficiency and long-term analytical depth.
As industrial customers progress, this balanced interplay between edge and cloud computing is poised to become a fundamental element in the digital transformation narrative of numerous industries. Grasping and harnessing this synergy is crucial for unlocking unprecedented levels of innovation, security, and operational efficiency in the rapidly changing digital world. ce
Dan White is director of technical marketing at Opto 22. Edited by Mark T. Hoske, content manager, Control Engineering, CFE Media and Technology, mhoske@cfemedia.com.
Rudy De Anda, Stratus
Accelerating digital transformation in oil and gas for competitive edge
Digital-first operations, in highly dispersed, asset-intensive oil and gas operations, offer opportunities to reduce costs while improving productivity.
Oil and gas companies need to take meaningful steps to find ways to bridge the digital gap, the key to which is edge computing. By leveraging edge computing, oil and gas companies can begin to transform business operations, seamlessly integrating digital tools and solutions, such as remote monitoring of advanced process control systems that will help them keep pace with demands of the rapidly evolving energy market.
Edge computing cuts costs, adds productivity
Up- and mid-stream companies have remote locations filled with a growing sea of data, all of which hold potentially crucial insights that could shed light on impending problems, opportunities for optimization and areas of waste that could be eliminated. When data like this exists in so many varying locations, visibility and control integration are key. The first step here is to anticipate problems and opportunities, then act upon them with real-time data-driven decision making.
Anticipating problems is critical to avoiding costly downtime. As the Colonial Pipeline cyberattack in 2021 showed, downtime for an oil and gas company can mean massive swaths of customers are unable to access critical utilities for days at a time, putting a company’s reputation at risk. Ensuring the continuous availability of critical applications and the integrity of data is essential to businesses. In some cases, recovering from a failure or downtime is not an option due not just to financial concerns but to safety-related ones. In addition, data transfer can become costly if volumes are high when uploading and downloading data from the cloud, but edge computing drastically reduces this cost by capturing and processing data locally.
Automation in the oil and gas industry enables
employees to work more efficiently and enhances productivity and safety by automating mundane, repetitive tasks that are prone to human error. With that, employees have more time to focus on more strategic and business-critical initiatives such as data analysis and skill development that drives continued business and operational performance gains. Combined with remote capabilities and streamlined operations, leaders can create a smarter, more efficient workforce.
Tap into automation to boost efficiency
As in other industries, embracing automation can pay dividends for businesses in the energy industry. Automation ensures operations will run smoothly, and automating key processes can help spot potential issues before they happen, helping avoid costly downtime. A Siemens predictive maintenance report said oil and gas companies have seen the cost of an hour’s downtime more than double in just two years, to almost $500,000, and the total losses to downtime are also rising sharply. With automation practices set in place as a much-needed boost in support to staff, particularly in the face of a global skills shortage in the industry, companies can better manage and maintain information technology (IT) systems in operational technology (OT) environments. Monitoring system health is an important task for IT teams, but in a company as geographically vast as an oil and gas provider, monitoring those systems is a major time-consuming and cost-intensive task. But with an edge computing platform helping handle the flow of data, oil and gas companies can now easily
FIGURE 1: Stratus ztC Endurance Computing Platform provides reliable, easy-to-implement edge-computing resources with rugged design. All images courtesy: Stratus Technologies
KEYWORDS: Edge and cloud computing, digital tools
LEARNING OBJECTIVES
Learn how to gain visibility and control integration of disparate assets and systems.
Understand how the workforce can focus on more strategic and business-critical initiatives.
Learn how HMI/SCADA, edge computing, data, analytics and artificial intelligence can help.
ONLINE
See the Control Engineering edge and cloud computing page. www.controleng.com/edge-cloud-computing
CONSIDER THIS
How can edge computing accelerate digitalization efforts?
prioritize integrating automation into their monitoring workflows and better institute industry best practices. Creating this autonomous monitoring environment means companies can more easily take inventory of assets, determine the health and status of all systems and operating assets and identify and release software patches and updates as needed.
Bringing in edge computing and edge analytics better supports IT and OT teams at oil and gas companies because these solutions enable teams to gather data and analyze it at the source of origination, enabling more efficient, real-time decision-making. On the OT side, edge computing allows teams to have access to modern local control with full local support. From an IT perspective, edge computing incorporates the latest industry-standard components, making systems easier to manage, service and protect. With the ability to have real-time monitoring across the organization, oil and gas companies can more readily spot if there’s a problem and ensure they have the right resources dedicated to solving it. With this new level
of oversight, companies can ensure their staffs are free to tackle the most pressing challenges, without wasting time checking on remote locations.
Uncover new revenue opportunities
Beyond strictly modernizing technology infrastructure, leveraging digital tools is an important way for an oil and gas business to increase revenue and generate new value-producing opportunities.
Taking advantage of technologies like human-machine interface (HMI), supervisory control and data acquisition (SCADA) software, edge computing, data, analytics and artificial intelligence helps oil and gas companies turn the sea of data into actionable insights. This combination of technologies empowers leaders to uncover where the next big opportunity lies and most importantly, gives them the agility to capitalize on those opportunities. Digital transformation adds benefits that extend far beyond the IT teams. A strong foundation of these digital tools will make it easier to provide business partners with a better experience and consistent, reliable service. When systems are supported by edge computing and digital technology that operate with minimal downtime or disruption, this results in happy customers that are confident in the service they receive.
Regardless of what industry a business operates in, modernizing and embracing digital transformation is no longer optional, and failing to do so can hamper long-term success; energy companies are no exception. Businesses face serious challenges when it comes to managing infrastructure, systems health and handling systems in remote locations, and the complexity will only continue to grow. By leveraging edge computing and other digital technologies, oil and gas businesses have a powerful means to tear down data silos, effectively analyze data in remote locations at the source and create revenue-generating sources. With edge computing and the right tools in place, oil and gas companies can make the most of their operations and more importantly, won’t find themselves being left behind in an increasingly digital-first environment. ce
Rudy De Anda is the head of strategic alliances for Stratus Technologies. Edited by Tyler Wall, associate editor, Control Engineering, CFE Media and Technology, twall@cfemedia.com.
FIGURE 3: Identifying key opportunity areas for applying edge computing across critical business operations to help guide overall edge solution design.
FIGURE 2: Edge computing is a distributed computing model in which computing takes place at the edge of operations. An edge computing platform collects critical data from sensors and equipment in a manufacturing environment.
Innovations
Safety switch series is simple, modular; SIL3 rated
AutomationDirect added modular Z-Range safety switch system components to its safety products. The safety switch system offers simplicity and modularity, allowing up to 30 Z-Range safety devices to be connected to one safety relay while maintaining PLe performance rating. These devices are an excellent choice for modular machines and are rated for SIL3 as well as PLe and feature built-in diagnostics. Devices include non-contact safety switches, solenoid locking RFID tongue interlock safety switches, tongue interlock safety switches, hinge interlock safety switches, cable-pull safety switches, emergency stop control stations and accessories. Wiring is simplified with premade wires, T-cables and junction blocks. AutomationDirect, www.automationdirect.com
Coriolis flow meters measure resin
AW-Lake’s Tricor Coriolis Flow Meters can verify that accurate amounts of fiberglass resin are dispensed during the fabrication of panels molded for doors and windows of motor homes and RV trailers. During the manufacturing process, fixed-ratio pumps are used to dispense a specified ratio of resin to catalyst to ensure accurate panel curing. Without a verification method, production lines were forced to stop when pumps were not on ratio, resulting in downtime and lost yields. A PLC-based monitoring system uses the pulse output from the Coriolis flow meters to calculate real-time flow for monitoring the resin-to-catalyst ratios. AW-Lake, https://aw-lake.com
Embedded PCs support PLC, HMI, motion control applications
The CX5600 Embedded PC series from Beckhoff expands machine-control capabilities with AMD Ryzen processors, additional interfaces and operating system options. With the same form factor as previous generations, these DIN rail-mounted embedded PCs support PLC, HMI, motion control and other applications. Two AMD Ryzen processor options are: CX5620 (1.2 GHz) and CX5630 (2.0 GHz). With low-power consumption, the fanless embedded PCs can expand on the left side with two additional 1 Gbit Ethernet ports or fieldbus interfaces as needed. Beckhoff Automation, www.beckhoff.com
Moisture sensors for the food and beverage industry
MoistTech Corp.’s IR3000-F series moisture sensors use near-infrared (NIR) measurement for accurate and reliable moisture analysis of virtually any product or raw material. The instant, continuous moisture measurement enhances quality, productivity and energy efficiency in the food industry, with repeatable results year after year with little or no maintenance. Pre-calibrated in the factory with customer samples, the sensor is guaranteed to never drift over time or need recalibration. MoistTech Corp., www.moisttech.com
Flow meter delivers reliable, accurate measurement
The Emerson Rosemount 9195 Wedge Flow Meter pairs a wedge primary sensor element and supporting components with a selectable Rosemount pressure transmitter. It can be very difficult to measure volumetric flow accurately and reliably when the process liquid is highly viscous, extremely abrasive, prone to plugging, at high temperature or some combination of these. The meter's flexible design is suited to process fluids with a wide range of demanding characteristics in heavy industries including metals and mining, oil and gas, renewable fuels, chemicals and petrochemicals, pulp and paper and others. Emerson, www.emerson.com
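As a rough illustration of the differential-pressure principle a wedge element relies on, the sketch below estimates volumetric flow from the pressure drop across the restriction; the meter coefficient is a made-up number, not a Rosemount 9195 parameter, and real meters are factory-characterized.

# Minimal sketch of the differential-pressure principle behind a wedge meter:
# volumetric flow varies with the square root of the pressure drop across the
# wedge restriction. The coefficient below is an illustrative assumption.
from math import sqrt

METER_COEFFICIENT = 0.85   # assumed lumped coefficient, m^3/h per sqrt(kPa)

def volumetric_flow(dp_kpa: float) -> float:
    """Estimate flow (m^3/h) from differential pressure (kPa)."""
    if dp_kpa <= 0.0:
        return 0.0
    return METER_COEFFICIENT * sqrt(dp_kpa)

print(volumetric_flow(25.0))   # ~4.25 m^3/h at a 25 kPa differential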
Combined PLC and HMI has larger display size, I/O options
Idec Corp. expanded its SmartAxis touch family with the new FT2J Series combined PLC+HMI. The compact all-in-one form factor combines full-function controller capabilities, onboard and expandable I/O and an advanced 7-inch touchscreen display, providing advantages across many industries. The compact design uses less panel space than separate units. Because the programmable logic controller (PLC) and human-machine interface (HMI) are internally connected, require only one power supply and share the same network connection, installation is simplified. An intuitive, easy-to-use integrated development environment for PLC and HMI functions cuts configuration and programming time. Idec Corp., www.idec.com
Flexible power system has advanced controller features
OmniOn Power, formerly ABB Power Conversion, expanded the BPS power system with its new BPS-Flex, providing advanced controller features in a compact, cost-efficient footprint. Power system options vary by the number of shelves and rectifiers used and whether battery backup is required, delivering 2 to 18 kilowatts (kW). The configurable -48V power system can include up to two 1RU rectifier shelves and a primary distribution panel; supplemental distribution panels can be added. The primary distribution panel can be configured with an optional low-voltage battery disconnect. Other options include pluggable DC breakers, GMT fuses and up to 30A small-form breakers. Rectifiers deliver high power densities of 45.85 watts/in³ and peak efficiencies exceeding 96%. Ethernet facilitates remote network management and provides monitoring and control for the rectifiers, batteries and distribution.
OmniOn Power, www.omnionpower.com
Intelligent edge automation platform can control, help with industrial data
Red Lion's FlexEdge Intelligent Edge Automation Platform, powered by Crimson, gives industrial organizations complete, scalable access to industrial data embedded in operations. The latest enhancements add strain gauge modules and J1939 and CAN protocol sleds. The new strain gauge modules are easy to install and configure, come with SSR output and relay output options, and accept signals from load cell, pressure and torque bridge transducers. Both module options offer single-loop PID capabilities to monitor, measure and control equipment, with a rugged design. Red Lion, www.redlion.net
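To show the kind of single-loop control such modules perform, here is a generic, textbook-style PID sketch; it is not Red Lion's implementation or Crimson API, and the gains and loop time are arbitrary assumptions.

# Generic single-loop PID (positional form), for illustration only.
class PID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        """Return the controller output for one loop cycle."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: hold a load-cell reading at 500 units with a 100 ms loop time.
loop = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
output = loop.update(setpoint=500.0, measurement=480.0)
print(output)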
Ultra-miniature overmolded reed switch fits in small spaces
Littelfuse Inc. introduced the 59177 Series ultra-miniature overmolded reed switch, offering designers unparalleled flexibility for space-constrained applications. With a compact design and low power consumption, the reed switch provides a reliable solution for various high-speed switching applications. It measures 9.0 mm x 2.5 mm x 2.4 mm (0.354 x 0.098 x 0.094 inches). It handles up to 170 VDC or 0.25 A at up to 10 W, ensuring optimal performance in demanding applications. It operates without consuming power, saving energy and adding efficiency for battery-powered devices. Overmolded design resists mechanical shock and vibration for use in challenging applications. Littelfuse Inc., www.littelfuse.com
Monitoring relay protects three-phase equipment from phase anomalies, undervoltage
Teledyne Relays' new three-phase sequence/loss/undervoltage monitoring relay series is a state-of-the-art relay designed to address critical power quality issues such as incorrect phase sequence, total and partial phase loss and undervoltage. It is ideal for protecting motors and other 3-phase powered machinery from the damaging and dangerous effects of phase anomalies and undervoltage. Teledyne Relays, www.teledynerelays.com
Back to Basics
Digital twin technology benefits
Control engineers working in industrial environments should consider digital twin technology to model their processes, and work through four implementation barriers.
The digital twin combines the digital and real worlds, allowing data-driven decisions to be made throughout the lifecycle of an asset, plant or process across all functions and levels. It enables real-time monitoring, simulation, and analysis of data, which assists in understanding and predicting the performance of processes.
“Digital twins enable control engineers to simulate and emulate virtual models of machinery in the early stages of development. This includes evaluating the design in combination with application data to ensure that the correct solution has been proposed with the most energy efficient components,” said Josh Roberts, a Festo product manager.
The resulting virtual model can replicate the physical capabilities of the solution in combination with software development and evaluate the efficiency/output of the physical machinery. Simulation within the virtual model can ensure that any errors are identified and corrected before a physical model is created.
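As a simple example of the checks a virtual model makes possible before hardware exists, the sketch below tests whether an axis with assumed velocity and acceleration limits can finish a move within a hypothetical cycle-time budget; all numbers are illustrative.

# Hypothetical design check: can an axis with assumed limits complete its
# move inside the machine's cycle time? Uses a symmetric trapezoidal profile.
from math import sqrt

def move_time(distance_mm: float, v_max: float, a_max: float) -> float:
    """Time (s) for a symmetric trapezoidal (or triangular) move profile."""
    d_accel = v_max ** 2 / a_max          # distance used to accelerate and decelerate
    if distance_mm < d_accel:             # triangular profile: never reaches v_max
        return 2.0 * sqrt(distance_mm / a_max)
    t_accel = 2.0 * v_max / a_max         # accel plus decel time
    t_const = (distance_mm - d_accel) / v_max
    return t_accel + t_const

CYCLE_BUDGET_S = 0.8                      # assumed cycle-time requirement
t = move_time(distance_mm=300.0, v_max=500.0, a_max=2500.0)
print(round(t, 3), "s;", "OK" if t <= CYCLE_BUDGET_S else "redesign needed")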
Roberts said digital twins can offer benefits throughout the machine lifecycle. “For example, each sub-component within a machine can have a multitude of information, which can all be compiled within the digital twin. This removes the need to store information in different formats across the business; instead, it brings all this data into a central location,” he said.
To enable this, the standardized asset administration shell (AAS) format for supplier information ensures that all the information is captured and available in a structured format, reducing the time spent on documentation creation and opening the potential for algorithms to be applied to this data in the future (see the sketch after the list below). Roberts identified four main barriers to the implementation of digital twins:
1. Performance: Reliability and robustness of the emulation; ensuring that components modeled in the digital twin can achieve the application parameters. Sizing tools from suppliers provide data into the model, and parameters identify what would occur if the application data changes.
2. Organization: Specialists and expertise in the area within the business; be willing to upskill staff and restructure workforces and project approaches.
3. Development: Standardization of data across multiple platforms is key; the standardized AAS format provides the framework, but all suppliers need to adhere to this standard.
4. Security: Cybersecurity is a core focus for many end users; a solution needs to be found to apply AI in value-added services for preventive and predictive maintenance in this area.
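To make the AAS idea concrete, the sketch referenced above shows one way supplier data could be captured in an AAS-style structure of typed submodels; the field names and values are illustrative and do not follow the formal Asset Administration Shell metamodel.

# Minimal sketch of supplier data held in an AAS-style structure: one asset
# with typed submodels in a consistent, machine-readable form. Illustrative only.
axis_asset = {
    "asset_id": "urn:example:axis-1",          # hypothetical identifier
    "submodels": {
        "nameplate": {
            "manufacturer": "ExampleCo",
            "order_code": "DRV-123",
        },
        "technical_data": {
            "rated_torque_nm": 3.2,
            "max_speed_rpm": 3000,
        },
        "documentation": {
            "manual_url": "https://example.com/manuals/drv-123.pdf",
        },
    },
}

def find_property(asset: dict, submodel: str, prop: str):
    """Look up one property from a submodel; returns None if absent."""
    return asset.get("submodels", {}).get(submodel, {}).get(prop)

print(find_property(axis_asset, "technical_data", "rated_torque_nm"))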
Wide-ranging digital twin applications
Arianna Locatelli, EMEA digital engineering specialist at Rockwell Automation, said many control engineers are already using digital twins across a wide range of applications, ranging from validating initial design concepts through to conducting controls testing.
“The primary motivation behind these use cases is risk mitigation,” she said. “To reduce on-site commissioning time, engineers simulate intricate production processes to assess performance and validation of control systems against their digital counterparts.”
Green or brownfield? Yes.
Jan Rougoor, head of product management process industries software at Siemens AG, said digital twin technology can be applied to greenfield and brownfield applications. “For greenfield projects in the process industry, the digital twin can be created automatically during the engineering phase. By using simulations for the initial process modeling, it is possible to start to optimize the design of the process,” he said. In brownfield projects, “one might start by using AI-based tools to consolidate often unknown or unused information into a cloud-based digital twin portal.” ce
- This originally appeared on Control Engineering Europe. Edited by Chris Vavra, web content manager, CFE Media and Technology, cvavra@cfemedia.com.
KEYWORDS: digital twins, digitalization
LEARNING OBJECTIVES
Understand what digital twins can do for control engineers.
Learn about the four limitations that are keeping companies from implementing digital twins.
CONSIDER THIS
How can digital twins help you as a control engineer?
ONLINE
www.controleng.com/digital-transformation/digital-twins/
Ad index (partial)
STRONGARM
TADIRAN BATTERIES, tadiranbat.com
Trihedral, www.VTScada.com/time
WAGO Corp., www.wago.us
Yaskawa America, Inc., www.yaskawa.com
MEDIA SHOWCASE FOR ENGINEERS
Sales
MWaddell@CFEMedia.com, 312-961-6840
BGross@CFEMedia.com, 847-946-3668
RGroth@CFEMedia.com, 774-277-7266
DHoughton@cfemedia.com, 508-298-9021
RLevinger@CFETechnology.com, 516-209-8587
DMorris@cfemedia.com, 513-205-9975
JPinsel@cfemedia.com, 847-624-8418
MWorley@CFEMedia.com, 331-277-4733
Publication Services
Jim Langhenry, President, Co-Founder, CFE Media JLanghenry@CFEMedia.com
Steve Rourke, Co-Founder, CFE Media SRourke@CFEMedia.com
Patrick Lynch, CEO, CFE Media 847-452-1191, PLynch@cfetechnology.com
Courtney Murphy, Marketing and Events Manager CMurphy@cfemedia.com
Paul Brouch, Director of Operations 708-743-5278, PBrouch@CFEMedia.com
Rick Ellis, Audience Management Director 303-246-1250, REllis@CFEMedia.com
Michael Smith, Creative Director 630-779-8910, MSmith@CFEMedia.com
Michael Rotz, Print Production Manager 717-422-3622, mike.rotz@frycomm.com
Custom reprints, print/electronic: Paul Brouch, PBrouch@CFEMedia.com
Jeff Mungo, List Rental Account Director, DataAxle 402-836-6278, Jeff.Mungo@data-axle.com
Information: For a Media Kit or Editorial Calendar, go to www.controleng.com/mediainfo.
Marketing consultants: See ad index
Letters to the editor: Please e-mail us your opinions to MHoske@CFEMedia.com or fax 630-214-4504. Letters should include name, company, and address, and may be edited.
DataHub is the only broker that parses and manages MQTT data intelligently. Data coming from multiple devices can be filtered, aggregated, processed, monitored and secured while ensuring data consistency from the field to the dashboard.
If you’re using MQTT for IoT, be smart! Get the DataHub Smart MQTT Broker.
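As a concept-level illustration of that kind of broker-side processing, the plain-Python sketch below filters noisy readings with a deadband and keeps a rolling average per topic; it is not DataHub's API, and the topic names, payload format and deadband are assumptions.

# Generic sketch: filter MQTT-style readings with a deadband and aggregate
# per topic before forwarding to a dashboard. Illustrative only.
import json
from collections import defaultdict

DEADBAND = 0.5                      # ignore changes smaller than this (assumed)
last_value = {}                     # last forwarded value per topic
history = defaultdict(list)         # retained values per topic

def handle_message(topic: str, payload: bytes):
    """Return a value to forward, or None if it is filtered out."""
    value = float(json.loads(payload)["value"])
    previous = last_value.get(topic)
    if previous is not None and abs(value - previous) < DEADBAND:
        return None                 # filtered: change is within the deadband
    last_value[topic] = value
    history[topic].append(value)
    return value

def rolling_average(topic: str, window: int = 10) -> float:
    values = history[topic][-window:]
    return sum(values) / len(values) if values else 0.0

# Example: two readings from the same (hypothetical) sensor topic.
handle_message("plant/line1/temp", b'{"value": 72.0}')
print(handle_message("plant/line1/temp", b'{"value": 72.2}'))   # None (filtered)
print(rolling_average("plant/line1/temp"))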
MOVIKIT
Pre-configured software modules for motion control
Faster automation startup, less programming
Would you rather enter parameters, or code? MOVIKIT ready-to-use automation modules are pre-configured software elements for many common motion control tasks ranging from simple speed control and positioning to complex multi-axis sequences. These intuitive, user-friendly modules are hardware independent, and can save commissioning time and costs. Simply enter parameters, and GO!