Internet of Things Handbook 2022
April 2022 • A supplement to Design World

On the cover:
LoRaWAN IoT for temperature-sensitive applications
The quest to optimize
IoT module energy use: protocols matter





INTERNET OF THINGS HANDBOOK

How to turn off a smart meter the hard way

Potential cyber attacks have a lot of people worried thanks to the recent conflict in Ukraine. So it might be appropriate to review what happened when cybersecurity firm FireEye's Mandiant team demonstrated how to infiltrate the network of a North American utility. During this exercise, Mandiant hacked into the utility's industrial control systems and switched off one of its smart meters.

A point to note is that most large industrial firms wall off their industrial networks from their ordinary IT networks somehow. And the utility that Mandiant stress-tested thought it had protected its network this way. These measures slowed Mandiant down but didn't stop its researchers from eventually owning the industrial network.

In the first phase of the attack, the Mandiant team adopted techniques used by Iranian hackers to breach an industrial network in an attack on a Saudi petrochemical plant. The usual approach, says Mandiant, is to first break into the company IT network, rather than the industrial network, to collect information about security operations.

The way Mandiant hacked into the network during its exercise was almost embarrassingly simple: It embedded a link for a malicious file in an email attachment, a Microsoft Office document containing auto-executable macro code. This got the white-hat hackers to a point where they could execute code on a single user workstation connected to the IT side of the network. Then they used a set of publicly available offensive security tools to make it look as though their code had the privileges of a domain administrator.

It is interesting to review some of the tools they employed, all of which are publicly available. One called ldapsearch retrieves information from LDAP servers


(which often store usernames and passwords). Another called PowerSploit is a collection of programs written in the PowerShell scripting language used to manage IT resources. Typical PowerSploit tasks include listing installed security packages, impersonating logon tokens, and creating logons without triggering suspicious event warnings. To get from the initial compromised workstation out to other equipment installed on the network, the Mandiant hackers used a program called WMImplant, also written in PowerShell, to access remote servers and run programs or issue commands on them. Then a program called Mimikatz extracted credentials for local user and domain administrator accounts.

Once they had free run of the IT network, Mandiant's team determined targets of interest (people, processes, or technology) and looked for avenues from the IT to the industrial network. There turned out to be several ways of getting control of the industrial side. Perhaps most obvious was to get someone to copy a malicious file onto a USB stick which then got plugged into the industrial network. Mandiant also found that some applications on the industrial network accessed data and services on the compromised IT side; similarly, some applications on the compromised IT side could get to the industrial server.

Perhaps the biggest security screwup was that the utility used a single centralized admin tool that handled resources on both the IT and industrial networks. This software resided on the IT network. So once Mandiant got control of the IT network, it pretty much had admin status on everything. That made it easy for researchers to steal login credentials for the meter control infrastructure and issue a command to disconnect the smart meter.

For a bit of irony, consider that back in 2015 a popular TV series called Mr. Robot depicted a hack of a climate


control system. The show was praised at the time because experts claimed its hacking approach was realistic. The hack hinged on issuing bogus commands from a rogue controller spliced onto the industrial network, which could be accessed via an ordinary internet connection. Today, sophisticated firewalls between IT and industrial networks, VPNs, and similar measures are supposed to thwart such antics. But clearly even companies that should know better are still susceptible to the Mr. Robots of the world.

leland teschler • Executive editor





DESIGN WORLD FOLLOW THE WHOLE TEAM

EDITORIAL VP, Editorial Director Paul J. Heney pheney@wtwhmedia.com @wtwh_paulheney Senior Contributing Editor Leslie Langnau llangnau@wtwhmedia.com @dw_3Dprinting Executive Editor Leland Teschler lteschler@wtwhmedia.com @dw_LeeTeschler Senior Editor Aimee Kalnoskas akalnoskas@wtwhmedia.com @eeworld_aimee Editor Martin Rowe mrowe@wtwhmedia.com @measurementblue Executive Editor Lisa Eitel leitel@wtwhmedia.com @dw_LisaEitel Senior Editor Miles Budimir mbudimir@wtwhmedia.com @dw_Motion Senior Editor Mary Gannon mgannon@wtwhmedia.com @dw_MaryGannon

CREATIVE SERVICES & PRINT PRODUCTION VP, Creative Services Mark Rook mrook@wtwhmedia.com @wtwh_graphics Art Director Matthew Claney mclaney@wtwhmedia.com @wtwh_designer Senior Graphic Designer Allison Washko awashko@wtwhmedia.com @wtwh_allison

Graphic Designer Mariel Evans mevans@wtwhmedia.com @wtwh_mariel Director, Audience Development Bruce Sprague bsprague@wtwhmedia.com

IN-PERSON EVENTS Events Manager Jen Osborne jkolasky@wtwhmedia.com @wtwh_Jen

ON TWITTER

MARKETING VP, Digital Marketing Virginia Goulding vgoulding@wtwhmedia.com @wtwh_virginia Digital Marketing Specialist Francesca Barrett fbarrett@wtwhmedia.com @Francesca_WTWH Digital Production/ Marketing Designer Samantha King sking@wtwhmedia.com

@DESIGNWORLD

ONLINE DEVELOPMENT & PRODUCTION Web Development Manager B. David Miyares dmiyares@wtwhmedia.com @wtwh_WebDave Senior Digital Media Manager Patrick Curran pcurran@wtwhmedia.com @wtwhseopatrick Front End Developer Melissa Annand mannand@wtwhmedia.com

Marketing Graphic Designer Hannah Bragg hbragg@wtwhmedia.com

Software Engineer David Bozentka dbozentka@wtwhmedia.com

Webinar Manager Matt Boblett mboblett@wtwhmedia.com

Digital Production Manager Reggie Hall rhall@wtwhmedia.com

Webinar Coordinator Halle Kirsh hkirsh@wtwhmedia.com

Digital Production Specialist Nicole Lender nlender@wtwhmedia.com

Webinar Coordinator Kim Dorsey kdorsey@wtwhmedia.com

Digital Production Specialist Elise Ondak eondak@wtwhmedia.com Digital Production Specialist Nicole Johnson njohnson@wtwhmedia.com

Event Marketing Specialist Olivia Zemanek ozemanek@wtwhmedia.com

VP, Strategic Initiatives Jay Hopper jhopper@wtwhmedia.com

Event Coordinator Alexis Ferenczy aferenczy@wtwhmedia.com

VIDEOGRAPHY SERVICES Video Manager Bradley Voyten bvoyten@wtwhmedia.com @bv10wtwh Videographer Garrett McCafferty gmccafferty@wtwhmedia.com

PRODUCTION SERVICES Customer Service Manager Stephanie Hulett shulett@wtwhmedia.com Customer Service Representative Tracy Powers tpowers@wtwhmedia.com Customer Service Representative JoAnn Martin jmartin@wtwhmedia.com Customer Service Representative Renee Massey-Linston renee@wtwhmedia.com

FINANCE Controller Brian Korsberg bkorsberg@wtwhmedia.com Accounts Receivable Specialist Jamila Milton jmilton@wtwhmedia.com

Associate Editor Mike Santora msantora@wtwhmedia.com @dw_MikeSantora


WTWH Media, LLC 1111 Superior Ave., Suite 2600 Cleveland, OH 44114 Ph: 888.543.2447 FAX: 888.543.2447


DESIGN WORLD does not pass judgment on subjects of controversy nor enter into dispute with or between any individuals or organizations. DESIGN WORLD is also an independent forum for the expression of opinions relevant to industry issues. Letters to the editor and by-lined articles express the views of the author and not necessarily of the publisher or the publication. Every effort is made to provide accurate information; however, publisher assumes no responsibility for accuracy of submitted advertising and editorial information. Non-commissioned articles and news releases cannot be acknowledged. Unsolicited materials cannot be returned nor will this organization assume responsibility for their care. DESIGN WORLD does not endorse any products, programs or services of advertisers or editorial contributors. Copyright© 2022 by WTWH Media, LLC. No part of this publication may be reproduced in any form or by any means, electronic or mechanical, or by recording, or by any information storage or retrieval system, without written permission from the publisher. SUBSCRIPTION RATES: Free and controlled circulation to qualified subscribers. Non-qualified persons may subscribe at the following rates: U.S. and possessions: 1 year: $125; 2 years: $200; 3 years: $275; Canadian and foreign, 1 year: $195; only US funds are accepted. Single copies $15 each. Subscriptions are prepaid, and check or money orders only. SUBSCRIBER SERVICES: To order a subscription or change your address, please email: designworld@omeda.com, or visit our web site at www.designworldonline.com POSTMASTER: Send address changes to: Design World, 1111 Superior Ave., Suite 2600, Cleveland, OH 44114



CONTENTS
INTERNET OF THINGS HANDBOOK • APRIL 2022

How to turn off a smart meter the hard way

How to choose the right edge computer for the Artificial Intelligence of Things (AIoT)

LoRaWAN IoT for temperature-sensitive applications
The need to chill vaccines reliably during their trip cross country has highlighted the value of LoRaWAN-enabled sensors and networking.

How to design better communication systems for machine vision
Industry 4.0 offers a reliable communications infrastructure. Ensuring that devices connected to this infrastructure operate reliably requires a good understanding of the underlying technology options.

Smart building technologies: zoning systems, Part 1
An emerging set of smart building technologies called zoning systems could make offices more comfortable while simultaneously reducing their energy footprint.

Smart building technologies: zoning systems, Part 2
There are significant benefits to moving some decisions in smart HVAC systems close to the application and away from the cloud.

Designing voice recognition into IoT devices
More and more user interfaces now consist of a conversation. New development hardware makes it easier to 'talk' to devices without physically touching anything.

Paving the way to IIoT with single-pair Ethernet
A new universal connectivity standard will help bring single-pair Ethernet to the industrial Internet of Things.

The quest to optimize
With IoT systems, engineers can reimagine production and manufacturing.

Practicalities of specifying 5G antennas for the IoT
Equipment in the 5G band can't afford to use antennas that are afterthoughts in the design process.

From design to certification: connecting your LTE NB-IoT or M1 module to an antenna
Integrating a wireless module and antenna into a system requires thought, making design tradeoffs, achieving certification, and testing in production.

Managing smart street lighting design
Wi-SUN may well be the linchpin that turns street lighting into a scalable IoT umbrella offering a multitude of solutions beyond illumination.

IoT module energy use: protocols matter
Time equals power when transmitting data to and from NB-IoT and LTE-M devices. How you use communication protocols can affect power use.

On the drawing boards: direct chip-based power control
The next generation of IoT power control electronics may sit on a PCB that's smaller than a U.S. quarter.



INTERNET OF THINGS HANDBOOK

How to choose the right edge computer for the Artificial Intelligence of Things (AIoT)

By 2025, forecasters predict that there will be 41.6 billion devices(1) connected to the Internet. Consider the amount of data each device could potentially produce. Now, consider manually analyzing all the data generated by all the sensors on a manufacturing assembly line; it could take a lifetime(2), notes a recent report from Moxa. Certain devices produce a lot of data; IP cameras, for example, can produce exabytes of video data daily. Analysts claim that 10% or less of this data is ever analyzed. It should not be surprising that less than "half of an organization's structured data is actively used in making decisions— and less than 1% of its unstructured data is analyzed or used at all"(3). Is it any wonder that businesses are looking into artificial intelligence and machine learning as a way to grapple with all the information they can now collect? Thus, many forecasters and corporate executives believe that the Artificial Intelligence of Things (AIoT) offers a path to reduced labor costs, reduced human error,

along with optimized preventive maintenance. (4) But what do these groups mean by AI and how does it fit into the IIoT? Artificial intelligence (AI) can be defined as a field of science that studies how to construct intelligent programs and machines to solve problems traditionally performed through human intelligence. This view includes “machine learning” (ML), which enables systems to automatically learn and improve through experience without being programmed to do so. The learning is done through various algorithms and neural networks. A related term, deep learning (DL), is a subset of machine learning in which multilayered neural networks learn from vast amounts of data. AI is a broad discipline. We’ll look at one example of AIoT using computer vision or AI-powered video analytics. Computer vision and video analytics are used in a range of applications, including remote monitoring and preventive maintenance, identifying vehicles(5) for controlling traffic signals(6) in intelligent transportation systems, agricultural drones and outdoor patrol robots, and automatic optical inspection (AOI) of tiny defects in various products.

GOING TO THE EDGE
With all the massive amounts of data nearly any process can generate, moving that data to a public cloud or private server for storage or processing will require considerable bandwidth, availability, and power consumption(7). In many industrial applications constantly sending large amounts of data to a central server is not possible. Even with sufficient bandwidth and infrastructure, there would be a latency in data transmission and analysis. And this does not factor in the cost it would take to maintain such an infrastructure.

For some processes, a delay is not a concern. However, for mission-critical industrial processes, delays introduce a number of negative consequences ranging from defective parts and excess scrap to slower time to market to loss in profits. Thus, to reduce latency, a number of IIoT applications are moving to "the edge," closer to the devices that gather data and that can process and analyze the data there. The data created and processed at far-edge and near-edge sites is expected to increase from 10% to 75% by 2025(8), and the overall edge AI hardware market is expected to see a CAGR of 20.64% from 2019 to 2024(9). Fortunately, advances in edge computing processing power will meet this challenge.

TIPS ON CHOOSING AN EDGE COMPUTER
Edge computing devices, whether they use AI or not, process raw data at the point of gathering, rather than waiting until the data reach a cloud device. When you factor in AI, industrial AIoT applications function best with reliable hardware at the edge. Thus, implementors should consider the processing needs at the edge as well as:

1. Processing requirements for different phases of AI implementation
2. Edge computing levels
3. Development tools
4. Environmental concerns

DATA COLLECTION

While a goal of data collection is to gather large amounts of data, it is also there to train the AI model. Raw, unprocessed data



could contain duplications, errors, and outliers. One reason for using edge devices is to preprocess the collected data to identify patterns and missing information, and to correct errors and biases. Potential computing platforms used in data collection include Arm Cortex® or Intel Atom/Core processors.
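To make the preprocessing step concrete, here is a minimal sketch of the kind of cleanup an edge node might do before forwarding data. The file names, column names, and thresholds are illustrative assumptions, not part of the Moxa paper.

```python
import pandas as pd

# Hypothetical CSV of raw sensor samples: timestamp, sensor_id, value
raw = pd.read_csv("vibration_samples.csv", parse_dates=["timestamp"])

# Drop exact duplicates that can occur when a gateway retransmits a batch
clean = raw.drop_duplicates(subset=["timestamp", "sensor_id"])

# Fill small gaps by interpolation; larger gaps stay empty for later review
clean = clean.sort_values("timestamp")
clean["value"] = clean["value"].interpolate(limit=3)

# Discard gross outliers (here: anything beyond 4 standard deviations)
mu, sigma = clean["value"].mean(), clean["value"].std()
clean = clean[(clean["value"] - mu).abs() <= 4 * sigma]

clean.to_csv("vibration_clean.csv", index=False)  # forward only the cleaned set
```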

TRAINING THE AI

Many AI models must be trained with neural networks and resource-hungry machine learning algorithms that rely on parallel computing to analyze large amounts of collected and preprocessed training data. This training involves selecting a machine learning model and using it on collected and preprocessed data. Be sure to evaluate and tune the parameters to ensure accuracy. A number of training models and tools are available to choose from, including PyTorch, TensorFlow, and Caffe. To make things easier, training can be done on designated AI training machines or cloud computing services, such as AWS Deep Learning AMIs, Amazon SageMaker Autopilot, Google Cloud AI, or Azure Machine Learning, instead of in the field.
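As a rough illustration of the training phase, the sketch below trains a tiny classifier with PyTorch on stand-in data. The model, data, and hyperparameters are placeholders chosen only to show the workflow: select a model, fit it to preprocessed data, and save the result for the inferencing step.

```python
import torch
from torch import nn

# Hypothetical task: classify 64-sample sensor windows as normal/abnormal.
# A real project would load the preprocessed data gathered at the edge.
X = torch.randn(1000, 64)              # stand-in for preprocessed windows
y = torch.randint(0, 2, (1000,))       # stand-in labels

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):                # tune epochs/lr against a validation set
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

torch.save(model.state_dict(), "anomaly_net.pt")   # hand off to inferencing
```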

INFERENCING

After training, the next step is to install the trained AI on the edge computer so that it can make inferences and predictions based on the collected and preprocessed data. At this point, you may not need as many compute resources, so a CPU or lightweight accelerator may be sufficient for the AIoT application. However, you may need a conversion tool, such as Intel OpenVINO or NVIDIA CUDA, to convert the trained model to run on specialized edge processors/accelerators. Inferencing includes several different edge computing levels and requirements, so it's important to determine the proper level of edge computing you need when selecting the appropriate processor. The levels are Low, Medium, and High edge computing.

• LOW EDGE COMPUTING LEVEL

Transferring data between the edge and the cloud can be expensive and time consuming, resulting in latency. Low edge computing addresses this by sending a small amount of useful data to the cloud. This action reduces lag time, bandwidth, data transmission fees, power consumption, and hardware costs. An Arm-based platform without accelerators, for example, can be used on IIoT devices to collect and analyze data to make quick inferences or decisions.

• MEDIUM EDGE COMPUTING LEVEL

At this level, edge devices can handle the amount of data that comes from computer vision, video, facial recognition, and similar data-intense applications. Edge processor choices include a high-performance CPU, entry-level GPU, or VPU. For instance, the Intel Core i7 Series CPUs can handle computer vision with an OpenVINO toolkit and software-based AI/ML accelerators for inferencing at the edge.

• HIGH EDGE COMPUTING LEVEL

This level is for heavier loads of data that will use AI expert systems for applications involving complex pattern recognition, such as behavior analysis for video surveillance in public security systems. Frequently, accelerators are used, such as high-end GPUs, VPUs, TPUs, or FPGAs that consume 200 W or more of power. At this power level, there may be a need to manage thermal issues.

DEVELOPMENT TOOLS
More vendors in the IIoT space are offering tools to help with application development. These tools include deep learning frameworks, which are interfaces, libraries, or tools for creating deep learning models. They deliver a clear and concise way to define models using pre-built components, making it easier to work with the algorithms. Popular options include:

• PYTORCH

PyTorch is an open-source machine learning library based on the Torch library and has been mainly developed by Facebook’s AI Research Lab. Applications include computer vision and natural language processing. It is a free and open-source software released under the Modified BSD license. https://pytorch.org/
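One hedged example of how a PyTorch model might be prepared for the edge is TorchScript export, which packages the trained network so an edge runtime can load it without the original training code present. The model and file names below are hypothetical and carry over from the training sketch above.

```python
import torch
from torch import nn

# Hypothetical trained network (see the training sketch earlier)
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
model.load_state_dict(torch.load("anomaly_net.pt"))
model.eval()

# Trace to TorchScript so the edge runtime does not need the Python class
example = torch.randn(1, 64)
scripted = torch.jit.trace(model, example)
scripted.save("anomaly_net_ts.pt")

# On the edge computer, load and run without the training code installed
edge_model = torch.jit.load("anomaly_net_ts.pt")
with torch.no_grad():
    print(edge_model(example).argmax(dim=1))
```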




• TENSORFLOW

This tool's Keras-based APIs speed the development of prototypes and help in research and production. The APIs define and train neural networks. https://www.tensorflow.org/
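For comparison, below is a minimal sketch of the same sort of small classifier expressed with TensorFlow's Keras API; the layer sizes and the commented-out fit() call are illustrative assumptions only.

```python
import tensorflow as tf

# Hypothetical: the window-classification task again, this time in Keras
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x_train / y_train would come from the collected and preprocessed data
# model.fit(x_train, y_train, epochs=20, validation_split=0.2)
model.summary()
```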

• CAFFE

Caffe lets users define and configure models and optimizations without hard-coding. Users can train a model on a GPU machine, and then deploy to commodity clusters or mobile devices. https://caffe.berkeleyvision.org/

Hardware vendors offer AI accelerator toolkits, which can help accelerate artificial intelligence applications like machine learning. Selections include:

• INTEL OPENVINO

The Open Visual Inference and Neural Network Optimization (OpenVINO) toolkit from Intel helps developers build computer vision applications on Intel platforms. This toolkit can enable faster inference for deep learning models. https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html

• NVIDIA CUDA

This toolkit helps users develop high-performance parallel computing for GPU-accelerated applications on embedded systems, data centers, cloud platforms, and supercomputers built on the Compute Unified Device Architecture (CUDA) from NVIDIA. https://developer.nvidia.com/cuda-toolkit

ENVIRONMENTAL CONSIDERATIONS
Users need to consider the physical location of the application. Industrial applications installed outdoors or in harsh environments — such as oil and gas, mining, power, or outdoor patrol robots — will need an appropriate temperature range and heat dissipation mechanisms for reliable operation. Some applications will need industry-specific certifications or approvals as well, such as fanless design, explosion-proof construction, vibration resistance, and so on. Also consider size for situations with limited space, or with specific communication needs, such as cellular or Wi-Fi connection.

DATA PROCESSING
Different algorithms are used when building an AI edge computing application for data collection, AI model training, and AI inferencing. These algorithms perform different tasks and thus have different processing requirements. For example, depending on data complexity, computing platforms for data collection are usually based on Arm Cortex or Intel Atom/Core processors. AI model training needs advanced neural networks and resource-hungry machine learning or deep learning algorithms, so cloud-based services and tools are used. Trained models should be deployed on edge computers with a conversion tool such as Intel OpenVINO or NVIDIA CUDA so they can run on specialized edge processors/accelerators. AI inferencing has a number of different edge computing levels, each with its own set of requirements. Processing needs for AIoT applications usually depend on how much computing power is needed, the algorithms needed for the various tasks, and whether all of that requires a CPU or accelerator.

One option is the Moxa UC-2100 Series Arm-based computing platform for embedded data acquisition and processing applications. It comes with up to two software-selectable RS-232/422/485 full-signal serial ports and single or dual Ethernet ports. A variety of models are available for a range of interface requirements, such as single or dual serial and Ethernet ports, Gigabit Ethernet, and wireless connections. These communication capabilities let users adapt the UC-2100 to a variety of complex solutions. The Cortex-A8 Arm-based processor provides a reliable and secure gateway for data acquisition and processing at field sites. Models are available for various temperature ranges and for use in extreme environments such as those found in the oil and gas industry. All models use the Moxa Industrial Linux platform.

This article is excerpted from a paper, Edge Computing for Industrial AIoT Applications, from Moxa.

Notes:
1 IDC (2019). "The Growth in Connected IoT Devices Is Expected to Generate 79.4ZB of Data in 2025, According to a New IDC Forecast." https://www.idc.com/getdoc.jsp?containerId=prUS45213219
2 Stack, Tim (2018). Cisco. "Internet of Things (IoT) Data Continues to Explode Exponentially. Who Is Using That Data and How?" https://blogs.cisco.com/datacenter/internet-of-things-iot-data-continues-to-explode-exponentially-who-isusing-that-data-and-how
3 Dallemulle, L. & Davenport, T.H. (2017). "What's Your Data Strategy?" Harvard Business Review, May–June 2017, 112-121. https://hbr.org/2017/05/whats-your-data-strategy
4 TechTarget (2019). "Artificial Intelligence of Things (AIoT)." https://internetofthingsagenda.techtarget.com/definition/Artificial-Intelligence-of-Things-AIoT
5 Frost, A. (2019). Traffic Technology Today. "AI-based traffic monitoring system developed by researchers." https://www.traffictechnologytoday.com/news/congestion-reduction/ai-based-traffic-monitoring-system-developedby-researchers.html
6 Pozanco, A., Fernandex, S., & Borrajo, D. (2016). "Urban Traffic Control Assisted by AI Planning."
7 Reese, B. (2019). GigaOm. "AI at the Edge: A GigaOm Research Byte." Knowingly, Inc. http://files.iccmedia.com/pdf/190206arm.pdf
8 Van der Muelen, R. (2018). Gartner. "What Edge Computing Means for Infrastructure and Operations Leaders." https://www.gartner.com/smarterwithgartner/what-edge-computing-means-for-infrastructure-and-operations-leaders/
9 Markets and Markets (2019). "Edge AI Hardware Market by Device (Smartphones, Cameras, Robots, Automobile, Smart Speakers, Wearables, and Smart Mirror), Processor (CPU, GPU, ASIC and Others), Power Consumption, Process, End User Industry, and Region Global Forecast to 2024." https://www.marketsandmarkets.com/Market-Reports/edge-ai-hardware-market-158498281.html



LONG-RANGE IoT NETWORKS

LoRaWAN IoT for temperature-sensitive applications

The need to chill vaccines reliably during their trip cross country has highlighted the value of LoRaWAN-enabled sensors and networking.

Chris Boorman, Laird Connectivity

Ordinarily, cold supply chain wouldn’t be a dinner table conversation. COVID-19 vaccines, of course, have changed the situation. Specialized refrigerated containers, refrigerated vehicles, and ultra-cold storage warehouses are the foundation for the last-mile delivery channels as long as this pandemic lasts.

An example of a LoRaWAN sensor. Sentrius RS1xx temperature and humidity sensors enable battery-powered, local and wide-area sensor applications using LoRaWAN and Bluetooth 4.2. The RS1xx are small, rugged, and easily configurable sensors. They can be configured to transmit infrequently and last for years on the same set of two replaceable AA batteries.


Cold supply chain technology historically has involved manual processes prone to human error and full of gaps in visibility throughout a product's journey. Previously, employees would manually take temperature readings during the distribution process and jot them down – a process that has always been notoriously error-prone. And the readings also only represented snapshots of temperature at certain moments. Thus there were major time gaps where products could have experienced temperatures outside the required ranges.

Today, wireless sensors are allowing continuous temperature and humidity monitoring that is far more accurate, efficient and effective than intermittent, manual processes. However, some wireless technologies commonly used with these sensors can have problems performing in harsh cold supply chain environments. Here, insulated metal walls, ultra-low temperatures, and other factors can hamper technologies like Bluetooth and Wi-Fi. But a type of Low-Power, Wide-Area Network (LPWAN) called LoRaWAN can handle those challenging environments and so has become preferred for use in the cold supply chain.

LoRaWAN is designed to optimize LPWANs for battery lifetime, capacity, range, and cost. LoRaWAN is often used in long-range applications because it can cover 10 miles or more on a single hop (the LoRa is for long range). That range can be quite valuable for some aspects of cold supply chain. But the biggest reason LoRaWAN is preferred for the cold supply chain is its ability to maintain connectivity in the facilities, vehicles and equipment that challenge other wireless protocols.

The LoRa physical layer translates data into RF signals that can be sent and received over the air using chirp spread spectrum communications. In a spread-spectrum system, signals are spread across a wide bandwidth. Consequently, narrowband signals appearing in the wide transmission bandwidth generally don't interfere with communication. Chirp spread spectrum techniques use wideband linear frequency-modulated chirp pulses to encode information. Use of chirp modulation has the effect of increasing signal/noise ratios and reducing power requirements. The result is excellent signal propagation, penetration and resilience against interference in environments where other technologies struggle. As an example, LoRaWAN sensors can sit inside thick insulated fridges and produce signals that will make it through to gateways for delivery to the cloud.

LoRaWAN has several other advantages for cold supply chain uses. Its extraordinary energy efficiency gives years of battery life. It's also highly scalable, highly interoperable, and compatible with both public and private networks for the data backhaul and bidirectional communications. With a data rate ranging from 290 bps to 50 kbps, throughput is a primary limitation with LoRaWAN IoT applications. Nevertheless, the protocol works extremely well for sensor networks that generate small packets of periodic or event-driven data beamed back to a central location. This type of data transfer characterizes cold supply chain distribution. So data throughput generally isn't a concern.
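The following sketch, in Python and for illustration only, generates a linear frequency-modulated chirp of the general kind described above and adds a narrowband interferer to it. The bandwidth, symbol time, and sample rate are assumed values, and the code is not a LoRa implementation.

```python
import numpy as np

bw = 125e3          # sweep bandwidth, Hz (125 kHz is a common LoRa channel width)
t_sym = 1e-3        # symbol duration, s (illustrative)
fs = 1e6            # sample rate, Hz

t = np.arange(0, t_sym, 1 / fs)
k = bw / t_sym                                   # chirp rate, Hz per second
phase = 2 * np.pi * (-bw / 2 * t + 0.5 * k * t**2)
chirp = np.cos(phase)                            # frequency sweeps from -bw/2 to +bw/2

# A narrowband interferer occupies only a sliver of the swept bandwidth,
# which is why de-chirping at the receiver largely rejects it.
interferer = 0.5 * np.cos(2 * np.pi * 10e3 * t)
received = chirp + interferer
print(received.shape)
```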

The Sentrius RG1xx LoRaWAN-enabled gateway makes it possible to gather data from as far as 10 miles away via LoRaWAN and sync to the cloud via Wi-Fi/Ethernet or add LTE in the U.S. IP67 enclosures are available. The gateways run a full onboard Linux OS and are configurable via web GUI.

BEST PRACTICES
There are some important best practices and design considerations when it comes to selecting and applying temperature sensors in LoRaWAN schemes.

RTDs are more precise than other types of sensors – Three types of temperature sensors dominate cold supply chain projects: thermocouples, thermistors, and RTDs (resistive temperature detectors). RTDs are far more precise than thermocouples—their typical accuracy is 0.1°C compared to 1°C for most thermocouples. RTDs also typically contain a platinum, nickel, or copper resistance element that is stable and resists corrosion and oxidation. There are also cryo versions that measure temperatures down to -200°C. RTD sensors change resistance when their metal sensing material changes temperature. That relationship between temperature and resistance is generally linear and repeatable. Thus an RTD can be accurate over time and over many heating/cooling cycles. Where accuracy is critical and temperatures are at extremes, RTD sensors deliver precision and reliability that other technologies cannot.
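As a worked illustration of the near-linear RTD behavior just described, the sketch below converts a PT100 resistance reading to temperature using the common linear approximation R(T) = R0(1 + alpha*T) with the IEC 60751 coefficient alpha = 0.00385. Real designs would use the sensor's full Callendar-Van Dusen coefficients over wide or sub-zero ranges; the values here are only for illustration.

```python
R0 = 100.0          # PT100: 100 ohms at 0 deg C
ALPHA = 0.00385     # ohm/ohm/deg C, IEC 60751 standard platinum coefficient

def rtd_temperature(resistance_ohms: float) -> float:
    """Convert a measured PT100 resistance to degrees Celsius (linear model)."""
    return (resistance_ohms / R0 - 1.0) / ALPHA

for r in (92.30, 100.0, 107.70):        # roughly -20, 0, +20 deg C
    print(f"{r:7.2f} ohm -> {rtd_temperature(r):6.2f} deg C")
```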

Remember that hot is as important as cold – Besides keeping things frozen, cold supply chain teams working on IoT projects often must focus on precise high-temperature management. Restaurants provide a perfect example: The same sensor systems that ensure that food stays cold or frozen often must be flexible enough to also ensure hot foods do not cool to unsafe temperatures. The same is true for industrial settings where sensors may be asked to both measure the temperatures of super-cooled gases and of machines that generate a lot of heat. For this reason, it is important for engineering teams to have a clear idea of the overall temperature ranges to ensure sensors are versatile enough for the job.

Decide early whether mobile controls will be important – Discussions about the user interface often happen late in an IoT planning process, after key decisions have been made about the technologies to be deployed. This approach can add complexity and delay because the engineering team must build cloud connectivity and mobile interfaces into the project plan. Efforts directed at both can be arduous, slow and costly. Thus to prevent headaches on the back end, it is advisable to select sensors that operate on a platform having cloud connectivity and mobility baked in.

Sensors must be rugged in numerous ways – "Rugged" can mean different things. For some situations, it implies weatherproofing and resistance to the outdoor elements. For others, rugged can mean sturdy enough to survive vibrations that arise in industrial settings. For cold supply chain uses, rugged covers several factors. The outer casing must be tough because the chances of impact are high in bustling warehouses, restaurant kitchens, and so on. But the casing must also withstand constant cleaning; devices in food-related environments may get sprayed and wiped down repeatedly or see high-temperature sanitizing during CIP (clean-in-place) procedures. Rugged also means adaptable to dramatic temperature swings. A typical scenario involves food or other temperature-controlled products being moved into and out of cold storage and into much warmer staging areas or food prep areas.

LoRaWAN also has a robust ecosystem of companies able to support rapid, large-scale IoT deployments. There is an abundance of LoRaWAN network ecosystem partners and LoRaWAN Network Server (LNS) providers (e.g. ChirpStack, Senet, The Things Network) and application servers to support and accelerate implementations. In recent months LoRaWAN has also been embraced by Amazon Web Services (AWS) through its new AWS IoT Core for LoRaWAN platform. This service enables users to connect their LoRaWAN networks directly into AWS IoT Core via AWS-qualified LoRaWAN gateways.

Thus, the cold supply chain deserves attention not only because of its critical role in distributing COVID-19 vaccines but because modern life wouldn't be possible without it. We now enjoy many foods year-round only because we can safely transport them using technologies such as LoRaWAN, making the phrase "It's out of season" a thing of the past.

References Laird Connectivity, www.lairdconnect.com





INTERNET OF THINGS HANDBOOK

How to design better communication systems for machine vision

Industry 4.0 offers a reliable communications infrastructure. Ensuring that devices connected to this infrastructure operate reliably requires a good understanding of the underlying technology options.

This article was based on a whitepaper by Richard Anslow, System Applications Engineer, and Neil Quinn, Product Applications Engineer

As more companies implement Industry 4.0 communications among their factory floor equipment, the need for more bandwidth increases. Along with the need for bandwidth, Industry 4.0 systems must also handle faster communication interfaces while providing safety against environmental hazards and maintaining electromagnetic compatibility (EMC). Let's look at some of these needs in relation to specific factory equipment.

ROBOTICS AND MACHINE VISION
Robots equipped with vision offer greater flexibility to handle more tasks. Vision enables a robot to, for example, scan a conveyor belt for defective parts and remove them. In a hazardous EMC environment, the reliability and effectiveness of the vision/robot interface depend on the chosen wired link technology.

There are several ways to implement the machine vision camera interface, including USB 2.0, USB 3.0, Camera Link, or gigabit Ethernet. Table 1 compares the USB, Ethernet, and Camera Link standards using several key metrics. Industrial Ethernet offers a long cable reach: up to 100 meters for 2-pair 100BASE-TX and 4-pair 1000BASE-T, and up to 1 km with the new 10BASE-T1L standard on a single twisted-pair cable with high EMC performance. Cable reach using USB 2.0 or USB 3.0 is 5 meters or less without specialized active USB cables, and EMC performance needs to be boosted using protection diodes and filtering. The ubiquity of USB ports on industrial controllers and bandwidth up to 5 Gbps, though, can offer some advantages. The Camera Link standard, which was introduced in late 2000, needs dedicated frame grabber hardware on

the industrial controller. USB and Ethernet do not require an additional frame grabber card in the industrial controller. USB and Ethernet-based machine vision cameras are widely used today, but Camera Link and frame grabbers are still used for applications that require preprocessing for multiple cameras to reduce the main CPU load. Over a short distance, Camera Link systems can push twice as much data out as gigabit Ethernet. The Camera Link physical layer is low-voltage differential signaling (LVDS) based, with inherent EMC robustness because common-mode noise coupling to each wire is effectively canceled out at the receiver. EMC robustness on the LVDS physical layer can be boosted using magnetic isolation.

Synchronizing the time between the camera and robot movement is typically best done using Ethernet on both camera and robot links, and an industrial controller that uses an IEEE 802.1 Time-Sensitive Networking (TSN) switch. TSN defines the first IEEE standard for time-controlled data routing in switched Ethernet networks.

HUMAN MACHINE INTERFACE
A human machine interface (HMI) typically displays data from a programmable logic controller (PLC) for operators. A standard HMI can track production time, monitor various key performance indicators (KPIs), and track machine output. An operator uses the HMI to monitor and even manage the manufacturing process. HMIs are available in a range of configurations. Those with external display options, such as a High-Definition Multimedia Interface (HDMI), can be easily placed on control racks using standard DIN rails, which are also used to mount the monitored PLC.

TABLE 1. COMMUNICATION INTERFACE STANDARDS FOR MACHINE VISION CAMERAS

Bandwidth
  USB 2.0: 1.5 Mbps (low speed), 12 Mbps (full speed), 480 Mbps (high speed)
  USB 3.0: 5 Gbps (SuperSpeed)
  Industrial Ethernet: 10 Mbps, 100 Mbps, 1 Gbps
  Camera Link: 2.04 Gbps (base), 4.08 Gbps (full), 6.8 Gbps (deca)

Cable length
  USB 2.0: 5 m
  USB 3.0: 3 m
  Industrial Ethernet: 10 Mbps up to 1 km, 100 Mbps/1 Gbps up to 100 m
  Camera Link: 10 m

Power and data on one cable?
  USB 2.0: Yes
  USB 3.0: Yes
  Industrial Ethernet: Yes, power over Ethernet (PoE) or power over data lines (PoDL)
  Camera Link: Yes

Frame grabber required?
  USB 2.0: No
  USB 3.0: No
  Industrial Ethernet: No
  Camera Link: Yes

Cable costs
  USB 2.0: Low
  USB 3.0: Low
  Industrial Ethernet: Low
  Camera Link: High

EMC performance
  USB 2.0: Low, requires EMC protection, filters, and signal/power isolation
  USB 3.0: Low, requires EMC protection, filters, and signal/power isolation
  Industrial Ethernet: High (transformer magnetics are part of the Ethernet specification)
  Camera Link: Medium (LVDS), requires signal/power isolation for best performance
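A quick back-of-the-envelope check, sketched below with the nominal rates from Table 1, shows how an implementer might test whether a candidate interface can carry an uncompressed camera stream. The camera resolution, frame rate, and 20% protocol overhead are assumptions chosen only for illustration.

```python
def stream_mbps(width, height, bits_per_pixel, fps, overhead=1.2):
    """Raw video bit rate in Mbps, padded ~20% for protocol overhead."""
    return width * height * bits_per_pixel * fps * overhead / 1e6

links_mbps = {                 # nominal figures from Table 1
    "USB 2.0 (high speed)": 480,
    "USB 3.0 (SuperSpeed)": 5000,
    "Gigabit Ethernet": 1000,
    "Camera Link (full)": 4080,
}

need = stream_mbps(1920, 1080, bits_per_pixel=8, fps=60)   # hypothetical camera
print(f"Required: {need:.0f} Mbps")
for name, capacity in links_mbps.items():
    print(f"{name:24s} {'OK' if capacity > need else 'too slow'}")
```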




Analog Devices offers a complete suite of Ethernet technologies, including physical layer transceivers and TSN switches, as well as system-level solutions, software, and security capabilities.

Users can run up to a 15-meter cable with HDMI displays. Because these cables are susceptible to electromagnetic interference, longer cable lengths are not recommended. If motors and pumps are connected to a DIN rail-mounted PLC, a stray indirect transient overvoltage could affect the HMI. A careful selection of communications technologies will help make your system robust.

Industrial Ethernet is a common choice. Communication organizations claim that more than 61 million RS-485 (PROFIBUS) nodes are installed worldwide. PROFIBUS process automation (PA) is growing at about 7% year on year. The PROFINET (an Industrial Ethernet implementation) install base is at 26 million nodes. Use of Ethernet-based technologies can deliver high EMC performance as magnetics are written into the IEEE 802.3 Ethernet standard and must be used at every node. RS-485 devices can include magnetic isolation to increase noise immunity, and protection diodes can be integrated on-chip, or placed on the communication PCB, to increase robustness to electrostatic discharges and transient overvoltages.

HMI devices are commonly protected against electrostatic discharges. ESD protection diodes help ensure signal integrity. Integrated reinforced isolation can protect operators from electrical hazards. Video links are isolated using fiber optics capable of gigabit transmission speeds.

ISOLATING HDMI USING CIRCUIT NOTE CN-0422
Adding safety isolation to a video interface can be a challenge due to the video protocol itself. Implementors need to find a way to isolate each of the video, control, and power signals, which can be quite a task. One approach is to use drop-in reference design solutions, which reduce system development time. One such solution targets HDMI, a de facto standard used frequently in commercial high-definition televisions and displays since its release in late 2002.

Video data in the HDMI 1.3a protocol are carried across four TMDS lanes: three data lanes and one clock lane. Each lane must be individually isolated. Traditional digital isolators are not suitable because they do not support the high bandwidth or the differential nature of TMDS. Because TMDS is only slightly different from LVDS, many TMDS installations can use simple passive components to interface with LVDS-compatible devices. For example, the passive components can be used in conjunction with two dual-channel gigabit ADN4654 isolated LVDS transceivers to isolate all four TMDS lanes. This arrangement can achieve a pixel clock frequency of up to 110 MHz, supporting 720p resolution at a frame rate of 60 Hz.

The HDMI protocol contains other low-speed signals used for control purposes: the display data channel (DDC), consumer electronics control (CEC), and hot plug detect (HPD). The DDC lets the source read the display's EDID data from EEPROM and exchange relevant formatting information. The CEC signals allow shared functions between multiple connected source and sink devices. HPD is asserted by the sink device when it has detected an attached source, signaling a connected device. Control signals are isolated with isolators that can deliver bidirectional isolation of signals as needed.
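A small sanity check on the 110 MHz figure quoted above: using the standard published timing for 1280x720 at 60 Hz (1650 by 750 total pixels including blanking), the required pixel clock comes out near 74 MHz, comfortably inside the reference design's capability. The sketch below just does that arithmetic.

```python
# Published CEA-861 timing for 720p60: 1650 x 750 total pixels including blanking
h_total, v_total, frame_rate = 1650, 750, 60.0
pixel_clock_mhz = h_total * v_total * frame_rate / 1e6

print(f"720p60 pixel clock ~= {pixel_clock_mhz:.2f} MHz")   # ~74.25 MHz
print("Within the 110 MHz capability" if pixel_clock_mhz <= 110 else "Exceeds the limit")
```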

INDUSTRIAL ETHERNET
Machine vision applications can be easily connected using multiprotocol Ethernet switches, Ethernet physical layer transceivers, and full platform solutions. Industrial Ethernet embedded switches let communications designers choose the type of processor that fits the application rather than attempting a force-fit of a specific protocol stack. A REM (real-time Ethernet, multiprotocol) embedded switch attaches to the memory bus of a processor and looks like any other peripheral out on that bus. With some switches, the memory cycle goes down to 32 ns (125 Mbps with a 32-bit bus) to support the 12.5 µs cycle time for EtherCAT and the 31.25 μs cycle time for PROFINET IRT. Data may be transferred to

and from the switches using Priority Channel queues to transfer real-time data, interrupting non-real-time data transfers to minimize delay. Switch drivers and interfaces to the protocol stack manage such data queues for efficient data transfers. Such an arrangement eliminates the need for application software to manage the switch or keep track of time management processes. The Priority Channel of Industrial Ethernet embedded switches is immune to network loading effects, ensuring applications are running all the time. REM switches will filter packets to keep unwanted traffic from the processor, manage low-priority traffic based on the loading of the processor, and guarantee timely delivery of high-priority packets, regardless of overall packet load.

Examples of Industrial Ethernet physical layer devices include the ADIN1100, ADIN1200, and ADIN1300 line from Analog Devices. These devices support data rates from 10 Mbps to 1 Gbps, multiple MAC interfaces, and operation in hazardous areas.
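The Priority Channel queuing described above can be pictured as a strict two-level queue in which real-time frames are always drained before best-effort traffic. The sketch below is a conceptual model only, not ADI's implementation; the class and frame names are invented for illustration.

```python
import heapq

# Conceptual model: high-priority (real-time) frames are always forwarded
# before best-effort frames, no matter how much bulk traffic is queued.
REALTIME, BULK = 0, 1

class PriorityChannel:
    def __init__(self):
        self._q = []
        self._seq = 0                      # preserves order within a class

    def push(self, priority, frame):
        heapq.heappush(self._q, (priority, self._seq, frame))
        self._seq += 1

    def pop(self):
        return heapq.heappop(self._q)[2] if self._q else None

ch = PriorityChannel()
ch.push(BULK, "diagnostics blob")
ch.push(REALTIME, "PROFINET IRT cyclic frame")
ch.push(BULK, "firmware chunk")
print(ch.pop())     # the real-time frame is forwarded first
```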

References
Analog Devices, analog.com
Dave Wilson. "Next-Generation Image Sensors." NovusLight. November 28, 2016.
HDMI 1.3a specification.

Analog Devices’ fido5100/fido5200 REM switch family includes two, 2-port Industrial Ethernet embedded switches that interface to any processor, including any Arm CPU, and ADI’s fido1100 communication controller.



INTERNET OF THINGS HANDBOOK

Smart Building Technologies: Zoning Systems

An emerging set of smart building technologies called zoning systems could make offices more comfortable while simultaneously reducing their energy footprint. Asem Elshimi, Silicon Labs

PART 1

We've probably all experienced an office environment that always felt too cold or hot no matter the weather. And we've all experienced the fruitlessness of asking the building facilities team for help. The problem: Making the environment a bit more comfortable for you would make the temperature in the rest of the office uncomfortably warm or cold. An emerging set of smart building technologies called zoning systems could consign this dilemma to the dustbin of history.

Zoning systems are a set of smart building technologies that handle a lot more than just room temperature. They also accommodate humidity, CO2, ventilation, purification, and more. And they adjust the environment in different areas of the same building based on occupant or operator preference. By using multiple thermostats, active vents and active ductwork dampers, zoning systems allow users to create granular temperature zones and control HVAC settings room by room. These systems enhance occupant comfort and can also unlock tremendous energy savings.

While an office that is slightly too cold or too warm may seem like a minor inconvenience, studies have shown that room temperature significantly affects the productivity of office workers. In 2006 the Berkeley National Lab studied the effects of temperature on office work performance. Researchers found that performance rates rise when temperatures are between 21 and 22°C and drop when temperatures are above the 23 to 24°C range, with the highest productivity at 22°C. Further, the study showed that at a temperature of 30°C, worker performance dropped by 8.9%. Another recent study found that offices that are too cold can halve employee output; at offices that are too warm, attendance drops by 18% and project turnaround times rise by 13%.

Temperature stability across a building is only one dimension of zoning systems. Other HVAC conditions like CO2 levels correlate directly to the well-being of building occupants. According to a 2016 study by the Harvard School of Public Health, high CO2 levels in buildings can degrade thinking and decision making, thus reducing worker performance and harming a company's bottom line.

Zoning systems use hundreds of HVAC sensors that collect granular data about the conditions in every room in a smart building. Wireless SoCs and backend gateways then push this data into the cloud, where cloud AI algorithms make dynamic and granular decisions per the collected data. These decisions are fed back to smart HVAC controls that fine-tune temperature, humidity, and air flow in various zones. The result is optimized comfort and reduced energy consumption through elimination of over-cooling, over-heating and over-ventilating. A system that maintains optimal temperature and humidity levels will always consume less energy.

The ordinary optimization cycle for room environments can be extremely slow and energy demanding. But datasets on occupancy can also improve smart HVAC system energy efficiency and help mitigate the problem.

[Thermostat Wars infographic, 75F: The right office temperature keeps productivity up, employees healthier, HVAC costs down, and a building's carbon footprint small. Yet most facility managers struggle to maintain the optimal temperature and keep their employees comfortable. A 2015 study states that office temperatures affect not just employee comfort but a company's bottom line directly.]

It is worth emphasizing that these zoning systems depend on wireless technology. Unlike conventional HVAC systems that use only a limited number of sensors (mostly a few thermostats per floor), zoning systems have hundreds of sensors monitoring HVAC conditions and collecting granular, detailed and dynamic data sets. The considerable number of sensors required makes it impractical to run wires for every single sensor. Also, a considerable amount of trial-and-error goes into determining the best location for each sensor node. Wireless sensors tend to be cheaper and offer the installer flexibility. Additionally, wireless sensors can be moved to optimize conditions for the best experience, important when office spaces are designed to be adjustable without static walls. The dynamic nature of partitioned office spaces means that the boundaries between zones also should remain flexible. The best way to maintain flexible boundaries is by using battery-based wireless sensors.
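To picture the sensor-to-cloud-to-controls loop described above, here is a toy decision step for a single zone. The reading fields, setpoints, and thresholds are hypothetical, and a production system would replace this logic with the cloud AI the article describes.

```python
from dataclasses import dataclass

@dataclass
class ZoneReading:
    temp_c: float
    humidity_pct: float
    co2_ppm: float
    occupied: bool

def zone_command(r: ZoneReading, setpoint_c: float = 22.0):
    """Return a simple HVAC command for one zone (illustrative thresholds)."""
    if not r.occupied:
        return {"mode": "standby", "damper_pct": 10}   # minimal airflow
    damper = 30
    if r.co2_ppm > 1000:                               # ventilate harder
        damper = 80
    if r.temp_c > setpoint_c + 0.5:
        mode = "cool"
    elif r.temp_c < setpoint_c - 0.5:
        mode = "heat"
    else:
        mode = "hold"
    return {"mode": mode, "damper_pct": damper}

print(zone_command(ZoneReading(24.1, 48.0, 1150, True)))
```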

ZONING + OCCUPANCY
The downside of the optimization cycle is that it is extremely slow and energy demanding; it is common knowledge how long it takes and how expensive it is for an air conditioner to cool a room by a few degrees. But datasets on occupancy can also improve smart HVAC system energy efficiency and help mitigate the problem. For example, a crowded conference room can get warm quickly, just as an open office area with high ceilings can rapidly get chilly (because warm air rises and people are closer to the floor). Occupancy and people-flow information enables cloud-based algorithms to more quickly adjust the HVAC controls. Thus, when a meeting room grows crowded, the algorithm can immediately trigger faster cooling and ventilation before things heat up, leading to a constant 21°C temperature across the building.

Information about the dynamic changes in room occupancy and people flow speeds HVAC response time, greatly reducing the energy consumed for cooling and ventilation. Considering that buildings consume close to 40% of global energy, this energy savings directly enhances environmental sustainability.

The dramatic effect of temperature on productivity becomes clear in the Thermostat Wars infographic by 75F, a Midwestern startup focusing on accelerating the adoption of zoning systems. Nearly 80% of employees complain about temperatures at their workplace, and approximately 29% have confessed to taking longer breaks just to escape uncomfortable temperatures.

Factors that have an impact on comfort (from the 75F infographic):

Heat loads: These need to be accurately computed at the outset while calculating the tonnage of air conditioning. They include radiant heat from the sun (also orientation, time of day, etc.), heat generated by occupants, and heat generated by office equipment such as computers, printers, and copiers.

Set point of air temperature: The desired temperature of the air surrounding the occupants. Generally 23-25°C is recommended.

Air flow: The speed and direction of airflow. Generally 100-150 cfm per occupant is recommended.

Humidity: The amount of moisture in the air surrounding the occupants. Generally 45-55% is recommended.

Air quality: This can include numerous parameters like SPM, VOCs, NOx, SOx, CO2, etc. There are WHO-specified standards for all of these.

The infographic's pitch: Say goodbye to hot and cold spots with a system that learns your building's unique needs and adjusts itself automatically. Using an Internet of Things (IoT) design, the 75F system packs the computing power of the cloud into smart HVAC devices that make everyone more comfortable and can save up to 40% on energy bills.

References
Silicon Laboratories, www.silabs.com
75F, www.75f.io/
Berkeley National Lab study, effect of office temperature on productivity, https://indoor.lbl.gov/publications/effect-temperature-task-performance
Harvard School of Public Health CO2 study, https://ehp.niehs.nih.gov/doi/pdf/10.1289/ehp.1510037
Infographic sources: LBNL Study, "Effect of Temperature on Task Performance in Office Environment," 2015; Software Advice



INTERNET OF THINGS HANDBOOK

Smart Building Technologies: Zoning Systems

There are significant benefits to moving some decisions in smart HVAC systems close to the application and away from the cloud.

Asem Elshimi, Silicon Labs

PART 2 With the COVID-19 pandemic fostering the rise of hybrid work policies, occupancy-enabled zoning systems are becoming particularly useful for office environments. With desks and conference rooms empty as much as 50 to 60% of the time, energy for heating and cooling is needlessly being used for people who are not there. But occupancy-enabled zoning systems can eliminate such waste. Sensors are strategically placed around buildings to detect motion and adjust room temperature with the help of the HVAC control system. When occupants leave an area, the system goes to a stand-by state with minimal cooling/heating/ventilation, ensuring energy is not spent on cooling unoccupied spaces.

While maintaining a stable, comfortable temperature is a great convenience, keeping the air that we breathe clean and healthy has become mandatory since the COVID-19 outbreak. The global pandemic has hit the HVAC market from both the demand and the supply sides. An influx of regulations dictating cleaner air inside buildings has led to expedited upgrade cycles to improve the purification and ventilation that HVAC systems provide. Many manufacturers view these mandates as an opportunity to upgrade their HVAC system technologies with smart features. This is one reason the global HVAC systems market is expected to grow at a compound annual growth rate of 5.9% from 2021 to 2028 (though another reason is mushrooming energy costs, making it easier to justify expenditures for energy-efficient technologies). On the other hand, the pandemic-stirred supply shortage has slowed new installations. Strategically viewed, the supply shortage is an opportunity for manufacturers and vendors to spend time optimizing their technological offerings.

ON THE EDGE
Look up edge computing on Wikipedia and you’ll find it defined as a distributed computing paradigm that brings computation and data storage close to the sources of data, the point being to improve response times and save bandwidth. The Wikipedia entry goes on to say there are common misconceptions about what constitutes the “edge.” One is that edge and IoT are synonymous. In fact, edge computing is a topology- and location-sensitive form of distributed computing, while IoT is a specific instance of edge computing. As the leader in wireless SoCs, Silicon Labs says the network’s edge is in the processing at the sensor level; the edge lies alongside the sensed phenomena. However, this is merely one characterization.

There are significant benefits to moving AI computations even further from the cloud. Similarly, zoning systems are good

candidates for edge intelligence because they are close to sensed environments. They can make use of edge-intelligent wireless MCUs: wireless SoCs capable of running AI models with optimized power consumption and at a reasonable speed. These SoCs have two major benefits: faster decentralized decision making and reduced data traffic by eliminating static data flow. To understand these benefits, we need to look at the improvement an edge AI can offer versus a regular MCU. Generally, an AI accelerator can use data to help make decisions about applications quickly and with high confidence while using much less power. The key word here is confidence. It means an AI application can detect an anomaly or perturbation in an environment and classify it into a preset category with confidence. With an AI accelerator-equipped SoC, AI applications can make more reliable and efficient decisions at the edge without the need to relay any information to the cloud. In zoning systems, this means that AI-powered applications running on wireless MCUs can make spontaneous decisions based on disturbances in HVAC conditions and changes in occupancy. Making these decisions locally significantly reduces the optimization cycle response time. In addition, it allows the system to run first-order optimizations without sending data to the cloud, enhancing data privacy and security. And this localized decision-making minimizes problems resulting from the building backend going offline. Furthermore, because AI-powered applications can make reliable judgments locally, they can filter out static data transmission to the cloud. To appreciate this

Edge-intelligent wireless MCUs reduce the amount of HVAC data sent to the cloud by filtering out data from events that don’t involve cloud-based AI algorithms.




benefit, we need to understand that in the absence of edge AI, cloud-based HVAC systems forward all sensed data, including that collected during stand-by, to the cloud for decision-making. Sensors transmit to the cloud even when nothing happens. These constant transmissions are tolerable with conventional HVAC systems using a handful of sensors. But each zoning system can put hundreds of sensors online; the flow of data from millions of sensors to the cloud every few milliseconds would swiftly overload the network. AI accelerators can reduce this data traffic problem. They have the computing horsepower to filter out static data that sensors generate and reliably make a judgment call on whether there’s been a change, and then only send event-driven data to the cloud. Such filtering reduces network traffic and the use of cloud resources. Building operators particularly benefit from this strategy because it minimizes charges they pay to cloud operators. Why run AI applications on the wireless SoC? One reason is that it minimizes the cost of upgrading to wireless and intelligent technologies. Processors in HVAC systems are busy running HVAC equipment applications under heavy safety constraints. Hence, a wireless SoC powered by an AI accelerator can add zoning without putting a strain on the existing HVAC computing facilities. Zoning technologies are set to be major life-easing IoT applications. Within a decade or so, it’s likely we will become as familiar with smart vents as we are with smart lights.
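To make the event-driven filtering described above concrete, here is a minimal Python sketch, not drawn from any vendor’s SDK, of a zoning node that only uplinks a reading when it changes by more than a deadband or when a periodic heartbeat is due; the thresholds and the read_temperature()/send_to_cloud() stubs are illustrative assumptions.

```python
# Sketch: event-driven reporting for a zoning sensor node. The deadband, heartbeat
# period, and the read_temperature()/send_to_cloud() stubs are illustrative
# assumptions, not part of any vendor's API.
import random
import time

DEADBAND_C = 0.5        # ignore changes smaller than this
HEARTBEAT_S = 60.0      # still report occasionally so the backend knows the node is alive

def read_temperature() -> float:
    return 23.0 + random.uniform(-1.0, 1.0)     # stand-in for the real sensor driver

def send_to_cloud(payload: dict) -> None:
    print("uplink:", payload)                   # stand-in for the real network stack

def run() -> None:
    last_value = None
    last_sent = float("-inf")
    while True:
        t = read_temperature()
        stale = (time.monotonic() - last_sent) > HEARTBEAT_S
        changed = last_value is None or abs(t - last_value) >= DEADBAND_C
        if changed or stale:
            send_to_cloud({"temp_c": round(t, 2), "event": "change" if changed else "heartbeat"})
            last_value, last_sent = t, time.monotonic()
        time.sleep(1.0)      # sampling period; static samples never leave the node

if __name__ == "__main__":
    run()
```

Everything below the deadband stays on the node, which is exactly the static data an AI-capable SoC can filter out before it ever reaches the radio.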


Machine learning (ML) accelerators speed inferencing operations associated with AI applications while reducing power consumption by up to a factor of six, depending on the model.

References: Silicon Laboratories, www.silabs.com



INTERNET OF THINGS HANDBOOK

Standardization around an IEC 63171-7 interface is expected to simplify the integration of systems that must communicate sensor data to the cloud and back, particularly for high-power applications like robotics and dc servo drives used in all manner of manufacturing applications.

Paving the way to IIoT with Single-Pair Ethernet A new universal connectivity standard will help bring Single-Pair Ethernet to the Industrial Internet of Things. Eric Leijtens, TE Connectivity

The rise of the Industrial Internet of Things (IIoT) places new demands on industrial networks. With the increasing number of IIoT nodes, there’s a need for a simplified, economical communication infrastructure at the field level. That’s why Single Pair Ethernet (SPE) is getting a lot of attention for IIoT. SPE uses two wires for power and data rather than four or eight and is a good candidate for open and scalable ethernet-based networks within automation systems. For several years, SPE has demonstrated its value in the automotive industry. There, the growing number of communication nodes in vehicle networks has brought both a mushrooming of data traffic and a need to minimize the weight and bulk of the data cabling. Industrial networks now face similar challenges, and SPE can address them. SPE can be deployed in factories and warehouses to help reach the next level of automation.


With a single twisted copper pair, SPE enables data transmission up to 1 Gbps to the sensor and actuator levels of industrial automation. This communication channel provides a simple, economical way to communicate from the cloud all the way down to an individual sensor. It is a game-changing alternative to the difficult and costly task of connecting equipment to the cloud with analog and fieldbus technologies that do not speak the same “language” as high-speed ethernet. SPE infrastructure also helps address other key challenges in today’s smart factories, including the trend toward miniaturization and growing needs for speed and precision. Thanks to its simple wiring, SPE enables the design of more compact industrial machinery with greater movement freedom – developments that are shaping the future of IIoT implementations and manufacturing overall. One factor that will help promote the use of SPE is the new International Electrotechnical Commission (IEC) 63171-7 standard. It creates broad




industry consensus on a power-and-data hybrid format for SPE that easily integrates machines into a network and improves power distribution. Most importantly, the IEC 63171-7 standard has the support of more than 80 companies and multiple trade groups. Its development was driven by active committee work of the SPE Industrial Partner Network, of which TE Connectivity (TE) is a founding member. The network brings together leading industrial companies in an effort to advance SPE technology and create uniform, global standards for its use in industrial applications. The new IEC 63171-7 standard helps address one of the key challenges design engineers are facing today: How to deal with networks that have grown increasingly complex with the introduction of more automation. Modern industrial networks typically include numerous communication protocols including ethernet, industrial ethernet, fieldbus and analog communication. The resulting communication infrastructure is not fully transparent. The growing list of interface types and power delivery mediums in use today creates compatibility and translation issues. One reason SPE simplifies IIoT communication is because it integrates with existing industrial ethernet networks. SPE technology enables the network infrastructure to be IP-based and essentially fully transparent, reducing the need for expensive gateways that complicate the network and delay communication.

Moreover, the new IEC 63171-7 standard specifies the use of the same M12 connector used in factory automation applications for actuators, sensors, and Fieldbus. The M12 is a circular connector with a 12-mm locking thread that makes it easy to connect with existing ethernet infrastructure and replace traditional fieldbus and analog solutions. M12 is a tried-and-true format that integrates up to five power contacts plus the SPE contact pair. Consequently, it can be used to distribute power across the network instead of the point-to-point connections required in power-over-data lines. Benefits of this format include higher data and power supply levels (power transmission up to 11 kW/16 A and data transmission up to 1 Gbps/600 MHz in one cable) and reduced electromagnetic interference thanks to separate data and power contacts. The M12 format is one of the most common connector sizes found in field-level applications. Its use not only simplifies system design but also allows companies to invest in SPE with greater certainty that the technology will be supported well into the future. The M12 format specified by IEC 63171-7 is called a hybrid format because it allows for both power and data lines in the same connection. This format not only simplifies things for the end user, it also creates new possibilities for manufacturers developing SPE equipment. The development of hybrid interfaces allows SPE to more easily integrate with existing infrastructure.

The industry collaboration that led to the hybrid format does not end with the development of the IEC 63171-7 standard. As a next step, we will continue to identify solutions that will make widespread implementation of SPE possible. The SPE Industrial Partner Network is one important source of collaboration, and additional efforts are underway to support the natural progression toward hybrid interfaces. I am excited about what these developments mean for the future success of IIoT, and I look forward to the ways they will move our industry forward.

References: TE Connectivity, www.te.com; Single Pair Ethernet Industrial Partner Network, www.single-pairethernet.com/en

M12 connectors are widely used in industrial settings. That’s one reason more than 80 companies and multiple trade groups are supporting the IEC 63171-7 standard for SPE that brings both power and data lines to IIoT nodes via M12 connections.





VOICE RECOGNITION

Designing voice recognition into IoT devices

Vikram Shrivastava, Knowles Corp.

More and more user interfaces now consist of a conversation. New development hardware makes it easier to ‘talk’ to devices without physically touching anything.

A LOT OF IoT devices these days will understand what you say when you talk to them. Voice is still in the early phases of adoption, however, and is just beginning to expand beyond mobile devices and speakers. Eventually voice will become the standard way users interact with their IoT devices. But to date, voice interactions are generally limited by inconsistent sound quality in the presence of noise and other distractors. This can degrade the end-user experience and slow the adoption of voice technology. The traditional approach to voice applications has been to record the voice commands and send them to the cloud for processing. But to take advantage of maturing voice integration, processing technologies are moving to the edge, away from the cloud. The approach lowers latency and reduces costs, both in dollars and bandwidth. But the current challenges with employing voice still include power consumption, latency, integration, and

privacy. There are design approaches and technical guidance that can facilitate audio edge processing when designing IoT voice applications. As more applications employ voice calling, control, and interactions, the field becomes more fragmented. The fragmentation diminishes the chance of guaranteeing a successful voice implementation by following a few basic rules-of-thumb. Voice control is integrated differently into each application—be it Bluetooth speakers, home appliances, headphones, wearables, or elevators. It might be simple to add a voice wake-trigger. Designing voice into an enterprise-grade Bluetooth speaker and headset will probably be complex—even more so if the speaker includes true wireless stereo (TWS) integration. Additionally, different hardware platforms require different voice integration approaches. For instance, voice integration on most smart TVs entails work in a Linux ecosystem. But adding voice on a home appliance will require working in a microcontroller (MCU) ecosystem. There is

How industry insiders expect true wireless stereo (TWS) features to evolve.




The market analysis firm Canalys predicts a 6.7% compound annual growth rate for wearables and about a 20% growth rate for true wireless stereo through 2024.

a common, recommended method for all of these integrations, but there are always variations that complicate the task.

THE VOICE USER INTERFACE It is expected that always-on Voice User Interface (VUI) will become commonplace in numerous consumer products. But inconsistent sound quality in the presence of noise and other distractors can make voice interactions annoying. Whether or not a device can intelligently manage sound can ultimately make or break a user’s ability to communicate effectively. Integration of audio edge processors into devices is one way of adding reliable always-on VUI capabilities.


Dedicated audio edge processors containing machine-learning-optimized cores are the key to devising high-quality audio communication devices. These processors can deliver enough compute power to process audio using traditional and ML algorithms while dissipating much less energy than a generic processor. Because the processing happens directly on the device, the process is significantly faster than sending that information to the cloud and back. Use of audio processors can also help add capabilities like voice-wake. While the cloud may offer some great benefits, edge processing allows users to use their device without a high-bandwidth internet connection. For example,


edge audio processors enable low-latency audio processing with contextual data while also keeping the contextual data local and secure. Nevertheless, there are some common challenges with implementing VUIs for IoT always-on and listening devices:

Power consumption—A VUI device must be always-on/always-listening to receive commands. In a voice-command system, at least one microphone must always be active, and the processor tasked with recognizing the wake word must also be active. However, audio edge processors designed with proprietary architectures, hardware accelerators, and special instruction sets can run audio and ML algorithms optimized to reduce power consumption.

Latency—There is no tolerance for latency with voice-activated devices. A perceived delay exceeding just 200 msec will cause people to talk over each other on voice calls or repeat their commands to the voice assistant. Engineers and product designers must optimize the audio chain to limit latency. Low-latency processing in edge processors is thus critical for ensuring high-quality voice communication.

Integration—There are many options for hardware and software for different VUI implementations. Optimal hardware architectures for implementing a VUI system depend

on the device usage, application, and ecosystem. Each VUI device will include either a single microphone or a microphone array, connected to an audio processor. And there are numerous operating systems and drivers to choose from. Ideally the audio processor will come with firmware and a set of drivers configured to connect with the host processor. The operating system, such as Android or Linux, usually runs on the host. Driver software components that run in the kernel space interact with the firmware over the control interface, and audio data from the audio edge processor can be read in the user space via a standard Advanced Linux Sound Architecture (ALSA) interface. The process of connecting the audio processor driver provided in the software release package into the kernel image can be complicated. It involves copying the driver source code into the kernel source tree, updating some kernel configuration files, and adding device tree entries according to the relevant hardware configuration. An alternative would be to use pre-integrated standard reference designs with exact or similar configurations. Ideally, the audio edge processor would provide streamlined software stacks for integration and come with pre-integrated and verified algorithms to further simplify the process.
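As a rough, user-space-only illustration of that capture path, the sketch below pulls raw 16-kHz mono PCM from an ALSA device using the standard arecord utility and only invokes a wake-word routine when a frame’s energy crosses a threshold. The device name, frame size, energy threshold, and detect_wake_word() stub are assumptions for illustration; a real integration would use the vendor’s driver and firmware as described above.

```python
# Sketch: read PCM frames from an ALSA capture device in user space and gate a
# (stubbed) wake-word detector on frame energy. Device name, thresholds, and
# detect_wake_word() are illustrative assumptions, not any vendor's API.
import struct
import subprocess

RATE = 16000
FRAME_SAMPLES = 320            # 20 ms of audio at 16 kHz
ENERGY_THRESHOLD = 500         # tune for the microphone, gain, and room

def detect_wake_word(frame: bytes) -> bool:
    return False               # placeholder for the model running on the audio edge processor

def listen(device: str = "default") -> None:
    cmd = ["arecord", "-q", "-D", device, "-t", "raw",
           "-f", "S16_LE", "-r", str(RATE), "-c", "1"]
    with subprocess.Popen(cmd, stdout=subprocess.PIPE) as proc:
        while True:
            frame = proc.stdout.read(FRAME_SAMPLES * 2)   # 16-bit samples
            if len(frame) < FRAME_SAMPLES * 2:
                break
            samples = struct.unpack("<%dh" % FRAME_SAMPLES, frame)
            energy = sum(abs(s) for s in samples) / FRAME_SAMPLES
            if energy < ENERGY_THRESHOLD:
                continue        # quiet frame: stay in the low-power path
            if detect_wake_word(frame):
                print("wake word detected: hand off to the host application")

if __name__ == "__main__":
    listen()
```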




ALGORITHM INTEGRATION There are typically multiple algorithms employed for any given use case. Even for a relatively simple Voice Wake, multi-mic beamformers, an edge voice wake engine, and cloud-based verification algorithms are all necessary. So at least three algorithms must work together synchronously to optimize performance. Any device integrating Alexa or Google Home keywords must employ multiple algorithms, often coming from different vendors, that must be optimized to work together in one device. To simplify things, it might be possible to choose an audio edge processor that comes pre-integrated with verified algorithms and is developed and tested independent of the host system.

Form Factor Integration—Modern devices can take on numerous form factors, each having its own configuration of multiple microphones. The distance and placement of microphones and speakers both play a big role in performance. Performance tuning and optimization depend on the final form factor and target use. Aspects of mechanical engineering—such as microphone sealing, acoustic treatments on the device, vibration dampening, and more—may also impact performance.

Privacy—Many audio processors detect the wake word and then immediately beam the information to the cloud for

interpretation. But once the audio data is in the cloud, the user loses control over the information it contains and is at risk of a privacy breach. To avoid the problem, an edge AI processor can interpret commands and respond locally, “at the edge.” The voice user interface (VUI) implementation not only becomes private but also speeds up, making user interactions much more natural.

Balanced Armature (BA) speaker drivers produce high fidelity sound through the use of a reed balanced in the static magnetic field between two magnets within the driver’s outer shell. A stationary coil driven with the audio source magnetizes the reed, causing it and the attached diaphragm to move in step with the sound. Sometimes BAs are combined with dynamic drivers in what is commonly called a hybrid speaker driver, as shown here. The BA driver functions as a tweeter to deliver high frequencies while the dynamic driver functions as a woofer. The hybrid combination helps deliver superior sound quality combined with active noise cancellation (ANC). Hybrid drivers are becoming common in high-to-mid end TWS devices.

DESIGNING VUIS The complexity of design requirements for VUI implementations can make it challenging to quickly develop such products. OEMs and system integrators can drastically reduce risk by working with a standard development kit. This approach lets designers develop their product on a platform that eliminates the challenges that arise in VUI projects. Designers should look for development kits having pre-integrated and verified algorithms, pre-configured microphones, and drivers that are compatible with the host processor and operating systems. Audio edge processors incorporating open architectures and development environments accelerate innovation by giving developers the tools and support to create new devices and applications. Future audio devices will be a collaborative effort.

Development kits can speed up voice integration efforts. An example of one such kit is the AISonic Audio Edge Processor IA8201 board. Among its features is integration support for any Bluetooth chipset, a system firmware release configured to support sensors, pre-integrated mics from Knowles, and a wake-word engine that serves as the voice trigger for always-on, battery operated edge devices. It also includes algorithms suitable for use with many voice-assistants or cloud-based automatic speech recognition APIs, support for location-based voice capture, acoustic-echo cancellation to sort voice commands from background noise, and gesture support.



INTERNET OF THINGS HANDBOOK

The quest to optimize

With IoT systems, engineers can reimagine production and manufacturing. Leslie Langnau, Contributing Editor

Why connect everything to everything? For some, the goal behind the Internet of Things (IoT) is more of a quest to optimize systems. Notes Harald Remmert, Senior Director of Technology, Digi International, “The IoT is an evolution and we’re just at the start. The ability of IoT to help optimize nearly any system or function enables design engineers to reimagine everything that we do today.” For example, fast forward roughly 50 years. Visualize what it could be like to drive in a city optimized with systems connected to a communication network to handle varying traffic loads. “Today you have traffic signs at every intersection. With the IoT, how could that evolve over the years? We will likely have more electric vehicles on the road, potentially fully autonomous. Will we still need traffic lights? Or is that traffic control function moved into the car? At that time, when you approach an intersection, maybe


the car takes over to guide you through it safely. Then you wouldn’t need traffic lights.” Traffic will become more efficient because cars will communicate their direction and negotiate order accordingly. Thus, a segment of social operation will be optimized, resulting in lower costs; no need for traffic lights, and less use of electricity for that function. Is it possible to optimize manufacturing to the point of reaching lights-out manufacturing, one of the other goals of IoT for some? Remmert thinks not. “I don’t see that you would have a complete lights-out solution. I think you will still want to have people around to supervise. And with machines, at least today, they can’t easily fix themselves. So, we’re not to the point where a machine has an issue and a robot drives by and fixes it. We will still need people on the plant floor for a while.” But more optimization is possible now. Maybe not with all equipment, and that’s where the IoT can help. An important first step is to determine whether a



component, system, or equipment can be improved. For that, a designer needs to develop a digital twin that will be used to determine whether optimization is warranted.

MACHINE LEARNING Many IoT systems will begin with machine learning to help optimize a process. Machine learning involves the use of digital twins of the machines and processes engineers seek to optimize. “Machine learning is the initial connection to collect and measure data,” says Remmert. “Collect it and then analyze it. That’s mainly what machine learning does. It analyzes the data. Once you build a model of what is and is not normal, then you can predict performance. Within Digi, for example, we have our smart sense cold chain monitoring solutions where we monitor medicine and food. We have temperature sensors, for example, that sit in our refrigeration units. Because we know the brand and model of the units, and we know the cycles and temperature settings and have modeled that, then we can detect if there is an anomaly. And we can let the store know that this particular freezer or refrigerator will

start to have problems based on the data we have gathered. “But you can do this with any component. You can take a motor, for example, turn it on and gather a range of operational information,” continues Remmert. “Then you can feed that information into your model. Over time, you can teach that model what is normal behavior for the motor. If the current goes up, or the vibration changes, maybe the motor is working a little harder because it’s not well lubricated or there’s something going on with whatever’s attached to the motor.”
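A toy version of that learn-what-normal-looks-like approach is sketched below in Python; a rolling mean and standard deviation stand in for the trained model Remmert describes, and the window size, warm-up count, and z-score threshold are illustrative assumptions rather than anything Digi ships.

```python
# Sketch: flag readings that deviate from a learned baseline.
# A rolling mean/std stands in for the trained model described in the article.
from collections import deque
from statistics import mean, pstdev

class BaselineMonitor:
    def __init__(self, window: int = 288, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)   # e.g. one day of 5-minute samples
        self.z_threshold = z_threshold

    def update(self, value: float) -> bool:
        """Return True if the new reading looks anomalous versus the baseline."""
        anomalous = False
        if len(self.history) >= 30:           # wait until the baseline is meaningful
            mu, sigma = mean(self.history), pstdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

# Example: feed freezer temperatures and alert on a sudden warming excursion.
monitor = BaselineMonitor()
for reading in [-18.2, -18.0, -18.1, -17.9, -18.3] * 10 + [-12.5]:
    if monitor.update(reading):
        print(f"anomaly: {reading} degC")
```

The same structure applies to the motor example: feed current or vibration readings instead of temperatures and the baseline captures what "normal" looks like for that particular machine.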

THE RIGHT DATA When connecting various equipment to an IoT network, it’s important to know that equipment and what data points to collect, how frequently to collect the data, and then, where to process that data.




SEEING BETTER WITH AI? The idea of using AI along with machine learning in an IoT system is gaining traction. The belief is that AI will find errors better than a person. Notes Remmert, “One example involves detecting cancer in patients via CAT scans. An oncologist looks at a picture to determine if cancer is present. But this takes time. What AI can now do is examine a large number of CAT scans of different patients, and using a machine learning algorithm, predict with a percentage which image is benign and which is cancerous. The oncologist still reviews the scans, but the process is less time consuming. This is where I think the machine learning and AI can come in and really improve an outcome. The oncologist can do additional tests to confirm what the machine saw.”

IOT TOOLS

More tools are coming that ease the process of developing an IoT system. At the basic level are sensors. Many are easy to integrate such as this connect sensor from Digi International.

“A designer will need to make choices about the kind of sensors used to monitor the machine, for example,” says Remmert. “Video sensing is actually powerful. So, video analytics is a possibility.” Another consideration is whether to analyze data at the machine or in the cloud. Machine-based data analysis will probably require a powerful edge system. Cloud data processing will involve connectivity choices as well as procedure choices: how much data will be sent and selecting a medium to send and return data. However, one of the challenges of IoT installations is the complexity involved. Some vendors are making efforts to solve it, and are looking at AI as one tool for the IoT toolbox. “What we see at Digi is a lot of interest in making this easy. What kind of tools are out there, or systems that have some of this already solved,” says Remmert. Machine learning says, here’s what works well, and here’s where there might be a deviation. “However, without using a tool like artificial intelligence, someone still needs to make that decision. A machine can make some decisions depending on the training it has received. And even with AI, an operator may also need to make certain decisions.”


More tools are coming that ease the process of developing an IoT system. At the very basic level are sensors. Many are easy to integrate. They measure voltage, power, temperature, humidity, and so on. And you can get them at the chip level or at the device level, depending on the need. Also available are development kits that make it easy to integrate various parts, such as at the edge. The kits can include interfaces needed to connect to the sensor. “In another example, Digi places its embedded systems with its XBee modules. We have the compute, we took the hard part, made that easy, and provided development environments around that, including things like MicroPython.” Notes Remmert, “Python is used by a lot of the machine learning languages and libraries. We also have tools that then can, at the connectivity part upstream, go to the cloud or to a server, either wired or wireless or cellular, to get the data to the cloud and then have a device or storage in the cloud. If you run your machine learning algorithm on the edge, then it’s analyzed data and you’ll get a summary. Or, if you want to stream the data to the cloud and do the heavy lifting there, then you can push it down back to the device. “We have the ability to run MicroPython and Python at the edge.” Training modules constitute another IoT tool. Many of these tools are still under development. But there are application-specific models that a designer can train. For


example, if you have a camera that should differentiate between apples and oranges, that model is available. “But when you get more specific, especially when you have sensory inputs for particular motors and so on, models don’t exist yet. Ideally you start with a template and then you tweak it, but there’s always work to do. These models can always be improved. That’s why, when you think about self-driving or autonomous vehicles, they’re still constantly learning; they’re not fully autonomous yet,” says Remmert.

THE WEIGHT OF DATA Of course, the IoT generates lots of data that must be managed, whether it’s at the edge or in the cloud. “We like to say that data has a weight and so the further you have to carry it, the more expensive it gets,” says Remmert. “If you can process lots of data on site, or even near the machine, then it’s inexpensive. But if you have to take that data and transmit it to a server on a local network, there is a cost there. For example, there’s wiring to buy, switches, computers, data storage, and so on. If you need to take that data and push it up to the cloud over a fiber link or cellular link, there’s a cost there. The further the data travels, the heavier it gets and the more expensive it will be to transport. So, moving the intelligence closer to the edge ideally will help with processing data and taking the result of that processed data, compressing it down, deduplicating it, and then sending only a small portion of that upstream. One idea is to optimize that process and trim the data down to the essence, which will live in the cloud to be consumed by others in the future. It’s a very interesting concept to think about it that way.”
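To put a rough number on that "weight," the sketch below collapses a window of raw readings into a small summary record before uplink; the field names and five-minute window are assumptions chosen only to show how much lighter the summarized payload is than the raw stream.

```python
# Sketch: summarize a window of raw readings into a compact record before uplink.
# Field names and the five-minute window are illustrative assumptions.
import json

def summarize(window_readings, window_start_s, window_len_s=300):
    return {
        "t0": window_start_s,
        "dt": window_len_s,
        "n": len(window_readings),
        "min": min(window_readings),
        "max": max(window_readings),
        "avg": round(sum(window_readings) / len(window_readings), 2),
    }

raw = [21.0 + 0.01 * i for i in range(300)]      # one reading per second for 5 minutes
summary = summarize(raw, window_start_s=0)

raw_bytes = len(json.dumps(raw).encode())
summary_bytes = len(json.dumps(summary).encode())
print(summary)
print(f"{raw_bytes} B raw vs {summary_bytes} B summarized "
      f"({raw_bytes / summary_bytes:.0f}x less to carry upstream)")
```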

THE FUTURE OF IoT For some, the IoT is advancing at a slow pace. Remmert, however, is optimistic about the IoT. He sees more connected and much smarter systems coming. “All the things we’ve learned from the data we have collected and analyzed, there is going to be more wireless connectivity versus wired. Factories will change also in the concept of Industry 4.0 where you have that machine interaction and a more flexible setup in the factory versus just one line that sits there forever. So, factories, flexibility, lower volume per SKU, but more customization per SKU, these are the benefits coming with the IoT. And, hopefully more optimization, which will help cut down costs and streamline the whole value chain.”



INTERNET OF THINGS HANDBOOK

Practicalities of specifying 5G antennas for the IoT Equipment in the 5G band can’t afford to use antennas that are afterthoughts in the design process. Josh Mickolio, Digi-Key Electronics

There are a lot of assumptions, confusion


and questions about how 5G will impact the non-handset world. There are several high-level goals for 5G. One is to offer technology that can better support faster speeds with enhanced Mobile Broadband (eMBB). Another is a continued offering of the LPWA technologies like Cat-M and NB-IoT with massive Machine Type Communications (mMTC). A third is to provide a better service level for critical infrastructure applications with Ultra-Reliable Low-Latency Communications (URLLC). Any one of these would be a big leap forward. The reality is that 5G will mean different things to different users. Consider the first specification from the 3GPP organization (3gpp.org) that will utilize frequencies in the millimeter-wave (mmWave) spectrum. The bandwidth available at these frequencies (a few gigahertz) is massive compared to that available in the lower bands used with 4G LTE (typically tens of megahertz per band). The use of mmWave spectrum dominated 5G discussion from the start due to the impressive performance gains that have been demonstrated relating to network speed. Millimeter-wave is only a small part of the 5G story. Though millimeter-wave may dominate the discussion, the sub-6-GHz bands are where most of the devices deployed will reside. The term for the mmWave bands is FR2 (frequency range 2). The sub-6-GHz bands are known as FR1. Within FR1 will be the existing 4G LTE spectrum as well as a couple more areas known as C-Band and CBRS, where the frequencies are around 3.5 to 4 GHz. Antennas will be quite physically different across these ranges of spectrum used by 5G. First, the full wavelength of the mmWave signal is below 1 cm, whereas at 600 MHz the full wavelength is almost half a meter. Thus there is a vast difference in antenna sizes. Antennas are typically quarter-wave or half-wave in length;


at mmWaves the antenna can be tiny given that a half-wave antenna is 5-mm long at 30 GHz. A half-wave antenna at 2.4 GHz is a little over 6 cm long or 12 times bigger. Another consideration concerns the signal properties and particularly the signal propagation properties. At mmWave frequencies, the signal is easily blocked or absorbed by obstacles like construction materials and water though lower frequencies can pass through these materials more easily. A key number in any wireless link is the link budget, which is a sum of transmitted and received power gains and losses in the system. The link budget can be a useful tool in estimating usable range at a given frequency.
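A quick numerical check of the wavelength arithmetic above, together with the free-space path-loss and link-budget bookkeeping this article relies on, is sketched below; the formulas are the standard ones, but the example transmit power, antenna gains, and cable loss are illustrative assumptions, not values from any datasheet.

```python
# Quick check of the antenna-length arithmetic (lambda = c / f, half-wave = lambda / 2)
# plus the free-space path loss and link-budget bookkeeping discussed in this article.
# Example transmit power, antenna gains, and losses are illustrative only.
import math

C = 3.0e8  # speed of light, m/s

def half_wave_mm(freq_hz: float) -> float:
    return (C / freq_hz) / 2 * 1000

def fspl_db(distance_m: float, freq_hz: float) -> float:
    # Standard free-space path loss: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
    return (20 * math.log10(distance_m) + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / C))

def rx_power_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, other_losses_db, distance_m, freq_hz):
    # Power Received = Power Transmitted + Gains - Losses (all in dB terms)
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - other_losses_db - fspl_db(distance_m, freq_hz)

for f in (600e6, 2.4e9, 30e9):
    print(f"{f/1e9:5.1f} GHz: half-wave element ~{half_wave_mm(f):6.1f} mm, "
          f"FSPL over 100 m ~{fspl_db(100, f):5.1f} dB")

# Example budget: 23 dBm transmit, 2 dBi antennas at each end, 3 dB cable/implementation loss
print(f"Received power at 30 GHz over 100 m: {rx_power_dbm(23, 2, 2, 3, 100, 30e9):.0f} dBm")
```

Running it reproduces the roughly 5-mm half-wave element at 30 GHz and the 68 dB versus 102 dB path-loss figures quoted in this article.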


The cellular and mmWave frequencies and wavelengths as depicted by the Tech Target market research firm. (www.techtarget.com/ searchnetworking/definition/ millimeter-wave-MM-wave)

5G features as represented by the European standards organization ETSI. (www.etsi.org/technologies/5G)




The general equation for the link budget is just Power Received = Power Transmitted + Gains – Losses. The units are typically in dB. Typical gains and losses include antenna gain (Tx and Rx), receiver sensitivity, path loss, cable/connector losses, and noise figure. Path loss is how much the signal diminishes or attenuates over a given distance. It can be affected by the environment, reflections of the signal and many other factors. Path loss is often the largest loss in the link budget. It rises significantly as the frequency increases. For example, the free-space path loss at 600 MHz is 68 dB over 100 m; at 30 GHz it is 102 dB. It’s useful to consider the impact of antenna dimensions on planned 5G and existing 4G installations. The 5G NR FR1 (New Radio, Frequency Range 1) utilizes many of the same frequencies as 4G. Fortunately, this new band won’t affect existing 4G equipment. 5G and 4G will operate in a dynamic spectrum-sharing model, meaning the same antenna will work for both 5G and 4G in those bands. This spectrum-sharing model also may mean that the life cycle of a 4G LTE device should be extended more than previous generations like 3G. The performance of 4G and 5G in the lower bands in FR1, at least for

now, are not significantly different. More good news is that the cellular LPWA (Low-Power Wide-Area) technologies, Cat-M and NB-IoT, which were developed several years ago to operate on existing 4G LTE bands, meet the requirements for mMTC in 5G. This means that these devices can already be considered 5G devices, and again no changes are needed to the antenna. One factor receiving a lot of attention is the new bandwidth available for operator use in the millimeter-wave (or close to it) spectrum. These bands (the most common bands are n257, n258 and n261) are around 24 GHz up to 40 GHz. They offer a major potential throughput advantage over the other bands and will work well for certain use cases. In the real world, however, high path losses at these frequencies complicate the deployment of mmWave 5G across a wide area. Consequently, the limitations of mmWave make it impractical for most applications that must span long distances. The antennas used at lower frequencies (FR1) are fairly simple, often taking the form of a simple length of metal. The radiation pattern is somewhat isotropic or close to it in the intended directions. These antennas are also likely passive, with no active components to boost the signal or filter out noise.

An example of an antenna pattern for a 3.55 GHz whip antenna from antenna maker Taoglas. (www.taoglas.com/datasheets/TG.59.0113.pdf)


In contrast, mmWave antennas are often more complex. To boost the range to usable levels at mmWave frequencies we must improve the link budget. There are a few common approaches available: increase the transmit power, increase the antenna gain or directivity on the transmit and receive signal path, or lower the path losses. There is a limit to how much power can be transmitted, especially in smaller devices the size of a mobile phone. Path losses are difficult to reduce when you have no control over the path between the transmitter and receiver and thus can’t remove obstacles, raise the device off the ground for better line-of-sight conditions, and so forth. An antenna with a higher gain can increase the range, and highly directional (beamforming) steerable antennas are frequently used to get the most out of a wireless link at these frequencies. A phased-array antenna can concentrate signals in a specific direction to enhance gain. The array consists of a matrix of patch antenna elements each fed the same signal that is delayed by fixed amounts via a phase shifter. By adjusting the phase of the signal as it goes to each element, constructive and destructive interference is used to steer the radiation pattern, essentially aiming the beam where desired. At mmWave frequencies phased-array antenna elements are much smaller than those operating at lower frequencies, making larger matrices feasible. The more antenna elements used, the narrower and more directional the RF energy. Still, it’s difficult to fit phased-array antennas into most embedded applications due to size, cost and complexity, even at mmWave frequencies. There may be future use cases that can utilize lower-cost antennas, but it is unlikely that mmWaves will make an impact in most embedded applications anytime soon. There is another approach to 5G that can be viewed as a compromise between the advantages of mmWave bandwidth and the longer ranges available at the lower frequencies. It involves the new spectrum available to network operators, C-band and CBRS. C-Band has recently been turned on with some U.S. network operators, but the 5G band n78 has been used in Europe and Asia for a while. The band n77 will be used in the U.S. The
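The steering idea can be sketched in a few lines: for a uniform linear array with element spacing d, pointing the beam at angle theta0 requires a per-element phase increment of -2*pi*d*sin(theta0)/lambda, and summing the element signals gives the array factor. The 16-element, half-wavelength-spaced array below is an illustrative assumption.

```python
# Sketch of how a phased array "aims" its beam. Array size, spacing, and steering
# angle are illustrative assumptions; the array-factor math is the standard form.
import cmath
import math

def array_factor_db(theta_deg, n_elements=16, spacing_wavelengths=0.5, steer_deg=30.0):
    k_d = 2 * math.pi * spacing_wavelengths                      # k * d in radians
    steer_phase = -k_d * math.sin(math.radians(steer_deg))       # phase applied per element
    total = sum(cmath.exp(1j * n * (k_d * math.sin(math.radians(theta_deg)) + steer_phase))
                for n in range(n_elements))
    return 20 * math.log10(max(abs(total) / n_elements, 1e-6))   # floor deep nulls at -120 dB

for theta in (0, 15, 30, 45):
    print(f"theta = {theta:2d} deg -> relative gain {array_factor_db(theta):7.1f} dB")
# The element signals add in phase only near the 30-degree steering angle, which is
# the constructive/destructive interference described above.
```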


importance of C-band when it comes to 5G can’t be overstated. It provides a significant increase in available bandwidth, anywhere from 40 to 200 MHz, with more to come. It also sits at a frequency that is low enough to get usable range. These bands will likely not be utilized for low-power IoT devices and won’t impact this area. But we do see devices like gateways and routers becoming available that take advantage of the added bandwidth. Antennas in use today, especially multi-in-one external antennas, may already support C-Band. One metric for antenna performance is the efficiency at any given frequency. Some antennas will operate at higher efficiency at certain bands and lower efficiency at others. To get the best performance, it’s important to pay attention to the bands that will be used most and the antenna efficiency in those bands. Antennas can be an afterthought in many designs, leading to sub-optimal performance that is sometimes blamed incorrectly on the device itself. With the deployment of 5G, the need to get the most out of the antenna will only become more important.

References
Understanding the Basics of the 3GPP, www.qualcomm.com/news/onq/2017/08/02/understanding-3gpp-starting-basics
mmWave Properties, www.transition.fcc.gov/Bureaus/Engineering_Technology/Documents/bulletins/oet70/oet70a.pdf
Calculating Link Budgets, www.internetsociety.org/wp-content/uploads/2017/10/Link-Budget-Calculation.pdf
mmWave Applications and Antennas, iopscience.iop.org/article/10.1088/1742-6596/1804/1/012205/pdf


A simple illustration of a wavefront striking four antenna elements from two different directions, as depicted by Analog Devices Inc. A time delay is applied in the receive path after each antenna element, and then all four signals are summed together. In the figure, that time delay matches the time difference of the wavefront striking each element. And in this case, that applied delay causes the four signals to arrive in phase at the point of combination. This coherent combining results in a larger signal at the output of the combiner. (www.analog.com/media/en/analog-dialogue/volume-54/number-2/phased-array-antenna-patterns-part-1-linear-array-beam-characteristics-and-array-factor.pdf)


COMMUNICATION SYSTEMS

From design to certification: Connecting your LTE NB-IoT or M1 module to an antenna Integrating a wireless module and antenna into a system requires thought, making design tradeoffs, achieving certification, and testing in production. Grégory Fron and Jeremy Gosteau, Sequans Communications

When you embark on an IoT device design that uses wireless modules, you’ll face decisions that include which bands to use, the type of antenna (make or buy), and the tests needed for both qualification and manufacturing. Your device’s intended use case — be it medical, industrial, asset tracking, or sensing, to name a few — factors into design decisions. In addition to use cases, product specifications, expected performance, and selected carriers will greatly influence your design and technical choices. For example, the number of bands and the carrier requirements your product will support define 90% of the antenna selection and the work needed to integrate the antenna into the product. The number of bands and the frequencies they occupy have a direct effect on the antenna design, antenna tuner, and matching circuit needed to get optimal performance. The carrier requirements relate to the antenna performance. You may ask “What must I do to get my product certified?” Because carriers have different requirements, you first need to decide which carrier your product will support — this will impact some of your antenna design choices. That decision relates directly to the certifications you must obtain. The number of antennas (Cat M1 or NB-IoT require one antenna, whereas LTE Cat 4 requires 2 antennas), correlation factor, and antenna tuner needs are all technical choices that are driven by carrier specifications. Carriers have their own requirements for specifications such as total radiated


power (TRP) and total isotropic sensitivity (TIS), key parameters for a wireless device. Figure 1 shows a TRP simulation plot. Antennas and modules are inevitably linked. While all modules must comply with 3GPP, having margins for RF sensitivity and transmit output power will ease meeting the over-the-air requirements. That’s why modules need to have better performance than 3GPP requires. Once you settle on design specs, you’re ready to choose between a custom antenna design or a COTS antenna (Figure 2). Here, you have tradeoffs. For small quantities, you may choose a COTS antenna. Antenna manufacturers provide application notes that define the size of the ground plane you need. You’ll have strict requirements to meet to get optimum performance.
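For a feel of what the TRP number represents, the short sketch below averages a sampled EIRP pattern over the sphere, weighting each sample by sin(theta); the synthetic pattern and 15-degree grid are illustrative assumptions, and a formal CTIA-style measurement uses a prescribed grid and combines both polarizations.

```python
# Sketch: estimate TRP by averaging EIRP samples over a sphere.
# TRP = (1/4pi) * integral of EIRP(theta, phi) * sin(theta) dtheta dphi,
# approximated here with a sin(theta)-weighted mean on a uniform grid.
import math

def trp_dbm(eirp_dbm_fn, step_deg=15):
    num = den = 0.0
    for t in range(step_deg, 180, step_deg):          # skip the poles (sin = 0 there anyway)
        for p in range(0, 360, step_deg):
            w = math.sin(math.radians(t))
            eirp_mw = 10 ** (eirp_dbm_fn(t, p) / 10)  # average in linear power, not dB
            num += w * eirp_mw
            den += w
    return 10 * math.log10(num / den)

def pattern(theta_deg: float, phi_deg: float) -> float:
    # Synthetic example: 20 dBm EIRP at the equator, falling off toward the poles.
    return 20 + 10 * math.log10(max(math.sin(math.radians(theta_deg)) ** 2, 1e-3))

print(f"Estimated TRP: {trp_dbm(pattern):.1f} dBm")
```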

NOT JUST ELECTRICAL You also have mechanical constraints. “Do I have the space to use a COTS antenna, or will I need a custom design?” Product specs plus module performance influence the antenna design. You’ll have to account for TRP and TIS. The design spec plus the module performance help you define the antenna requirements. Custom antenna design requires specific expertise and simulation tools to achieve correct performance in the required form factor and to keep antenna cost reasonable. Antenna characteristics over frequency include efficiency, return loss, gain, and polarization (circular for GNSS and vertical for cellular). The choice of COTS versus custom antenna often depends on manufacturing volume. Large quantities usually get a custom antenna design to match the enclosure. For shorter time to market and lower design cost, you may consider using a COTS antenna.

Figure 1. A TRP measurement indicates if your devices transmit with sufficient power.


Figure 2. COTS antennas save you the pain of designing your own antenna, but they may not be cost effective in high volumes. Image: Ignion.

HARDWARE INTEGRATION Module suppliers often provide design guidelines based on reference designs. Evaluation kits (Figure 3) contain the module, COTS antenna, antenna-tuning circuit, antenna matching, and RF connector on a board. Module manufacturers should provide design files, schematics, layouts, and characterization of the board’s performance. The kit lets you evaluate the performance of the module in a design. You should expect a development board to meet CTIA requirements for all the applicable bands. While development kits make for a good start, they don’t emulate a user environment. That’s in part because development kits have more digital interfaces than you’ll use in your design. A typical development kit might include interfaces such as GPIO, UART, USB, SPI, JTAG, and so on. Your design may, for example, need to expose only two GPIOs and a UART. Getting optimal performance in the development kit is more complex than in most end applications – that’s one of the reasons why development kits are usually larger than final products. When you use a digital interface, you add the risk of signal spurs and degradation that can reduce a receiver’s sensitivity. Module manufacturers can help by performing design reviews that include schematic and layout reviews with an eye towards RF performance. A careful design review will ensure that hardware integration of the module on the host PCB is optimal for both conducted and over-the-air (OTA) performance. Module vendors usually provide you with hardware design guidelines. Pay special attention to grounding, shielding, and isolation of any potential noise sources such


as oscillators, high-speed digital interfaces, and DC/DC converters. During the design preparation, you should anticipate manufacturing and qualification needs by including test points, RF connectors for conducted testing, and adapting the antenna tuner to your product requirements. Designs may also include other radios such as GPS, Wi-Fi, or Bluetooth. When such radios coexist, we recommend antenna isolation or digital mitigation. As an example, a GPS receiver can coexist with other radios using a GPS blanking feature, based on GPIOs, indicating when non-GPS transmission is taking place. Although the module you choose must be certified to meet 3GPP requirements, you still must fully qualify and certify the final product. Industry and regulatory bodies will accredit the end device in its final form factor and will make sure that it is safe for operation on networks.

FROM QUALIFICATION TO CERTIFICATION Now that you have your hardware design ready and first prototypes available, the next step is to qualify the RF performance. Indeed, how do you know if the overall design is good enough? What constraints do you have that might affect qualification? What is the recommended qualification flow that you must follow prior to bringing your product to certification?

During the hardware design cycle, the need for antenna matching, an RF connector for conducted tests, test points, and an antenna tuner has already been anticipated. Follow these steps in the qualification process.

1. Ensure compliance with product specifications in conducted mode.
2. Verify the antenna performance in passive mode.
3. Verify the product performance (i.e., the combination of module, RF matching, tuner, and antenna) in active mode.

Coming back to step 1, you must verify key RF performance metrics such as sensitivity, output power, and signal integrity. You can measure these key RF parameters with an LTE tester or go to a third-party lab. Module vendors can also offer such services to verify your design in conducted mode. For step 2 you’ll need tests performed in a specialized over-the-air (OTA) chamber. Assume that you’re using a COTS antenna such as the one shown in Fig. 2. Some suppliers will offer services where you can send your prototype and they will perform a passive OTA test. We call these “passive measurements” because your product antenna is connected to a signal generator (Figure 4), placed inside an anechoic chamber, in which the intrinsic performance of the antenna will be measured (allowing

Figure 3. Design kits for wireless modules provide a platform for evaluating the device.




Figure 4. A signal generator lets you perform passive tests on your prototype. Image: Keysight Technologies.

you to measure the antenna’s efficiency in all intended bands and its return loss). That tells you if the antenna and tuning circuits are meeting requirements. During the final qualification (step 3), you will test the module, RF matching, tuner, and integrated antenna in active mode. That’s where you measure TIS and TRP. Mobile network operators usually check these two key criteria on final products and have strict requirements. You should run these qualification tests at an early stage in the development to lock the hardware design. Keep in mind that conducted tests only get you so far. Assume you have a DC-DC converter powering your MCU and module, and the converter radiates spurs in your targeted frequency range. If you don’t go through this qualification, you may not see the problem until you get to the certification lab. A late discovery will yield a costly hardware redesign and will impact your ability to reach the market on the required timeline. Even if your design passes certification requirements, a device with degraded RF performance draws excessive battery power. It may have poor performance in the field. You must go through these steps to make sure that your product is certifiable and can be used on commercial cellular networks. If you’re developing a device such as a router that uses an external antenna where the enclosure can be shielded, then the risk is much lower than for a device with an internal antenna. Once you’re ready with your hardware qualification, you must pass regulatory certification tests for individual regions, such as those for Japan (JATE, TELEC), the FCC in the U.S., RED in Europe, or UKCA in the UK. After that, your selected mobile network operator may ask you to perform dedicated product certification. At this stage, you have to perform, for example, TIS and TRP testing. Your


operator may also ask you to confirm your compliance with industry certification standards, such as GCF or PTCRB.

MANUFACTURING TEST While certification is important, don’t forget about manufacturing constraints. There are usually three options to consider.

• No test during end-device manufacturing: because the cellular module is tested before assembly, and when the manufacturing process is proven, the device manufacturer may consider that there is no need for a final test in the manufacturing line. In such a case, the final functional test is performed by the distributor or by the end customer, at first activation in the field.
• Test in non-signaling mode: this option involves a simple tester (RF signal generator, spectrum analyzer, or RF sensor). By doing this, the device manufacturer performs a sanity check, ensuring no major flaw in the full device assembly.
• Test in signaling mode: this option involves an LTE callbox (an eNB emulator). By doing this, the device manufacturer performs a functional E2E test, mimicking what the device will face once in the field. Note that a test SIM card is used in such case.

Generally, for the latter two options, it’s mandatory to use a controlled RF OTA environment, such as a shielded box.



INTERNET OF THINGS HANDBOOK

Managing smart street lighting design Wi-SUN may well be the linchpin that turns street lighting into a scalable IoT umbrella offering a multitude of solutions beyond illumination. Soumya Shyamasundar, Silicon Labs

According to the World Bank, legacy street lighting systems based on traditional technologies can account for as much as 65% of a city’s electricity consumption. The energy consumption, of course, adds to the amount of CO2 emissions in the atmosphere, and the fiscal impact of this waste is considerable. Because LED lighting can yield up to 70% in energy savings, many governments are now replacing incandescent bulbs with LEDs in residential, public, and commercial spaces. LED lamps typically last up to 20 years. While it is understood that the ROI is substantial, their long lifespan is giving rise to new business models, including the augmentation of energy services through controls and sensors. The term for this process is smartification, the use of data to enhance previously purely physical products, turning them into smart service bundles (smart products). In a citywide network, the chief purpose behind smartification thus far has been to enable a single application – street light management. Smart street lighting systems make possible adjustable lighting strategies


and energy shaving—that is, leveling out peaks in energy use. Ditto for predictive maintenance through the provision of alerts for anomaly detection and malfunction. Wireless connectivity and two-way communication can make lighting a scalable IoT umbrella for a multitude of other smart city features beyond illumination. Examples include CO2 sensors, public Wi-Fi, smart parking, and even gunshot detection. The connection of street lighting to a centralized management system (CMS) makes it possible to access live information and enable data sharing across multiple city departments. Smart street lighting systems must operate in real-time and adapt quickly to shifting environmental conditions. To do so, they require connectivity that is scalable, secure, reliable, low-latency, long-range, high-throughput and two-way. The key to smart lighting is all-inclusive design: understanding the city-specific contextual lighting needs, the city’s long-term ambitions, and the visibility on usage and energy consumption. It is important to make sure the hardware is digitally enabled and ready to handle likely future requirements. The typical smartification of street lighting adds a wireless controller onto each luminaire. This controller communicates with a management application. Developers of end-to-end smart LED street lighting systems typically consider a few key elements within the IoT development platform: Interoperability and scalability: These factor into the choice of a network topology and protocol. One widely used methodology in smart street lighting is mesh networking. Here, street light controllers communicate with each other and with gateways over a self-configuring and self-healing network. The gateway routes the


Adding more devices creates multiple communication paths, ensuring there is no single point of failure and making the network more robust. Mesh networks contrast with point-to-point networks, where nodes communicate directly with the CMS via cellular links and arguably offer less redundancy.

A leading IPv6 sub-gigahertz mesh technology for smart city and smart utility applications is Wi-SUN, which stands for Wireless Smart Ubiquitous Network. Because of its advantages for large-scale outdoor IoT wireless communication networks, the City of London recently updated its lighting stock to incorporate Wi-SUN, citing the additional capability to enable new programs such as environmental monitoring. Once the streetlights are deployed, sensor node applications can work from any of the streetlights. This means it is possible to expand to any number of sensor applications depending on city needs and how the city may wish to deploy emerging applications. A future-proof, extensible option, Wi-SUN gives cities a way to expand for 30 to 40 years.

Flexibility: Many smart city installations are still in their infancy. Wi-SUN technology provides the flexibility to integrate new, third-party devices. An open, standards-based technology, Wi-SUN allows any vendor to build a solution and deploy it. It ensures that cities have enough flexibility to make modifications, if they choose, and still gives them options to buy hardware and software from different suppliers, avoiding vendor lock-in. Where cellular networks are typically subscription-based and operate in licensed frequency bands, Wi-SUN is a mesh network residing in the unlicensed ISM radio band. Once Wi-SUN has been deployed, it is up to the city to manage the whole network. And because it is a mesh network, newer Wi-SUN nodes can be added without redeploying or rearchitecting the entire network. Also worth considering is the limited lifespan of a cellular network.


Cellular operators typically end-of-life their products within 10 to 12 years, a short time frame for smart city infrastructure. For context, it is estimated that around 30 million endpoints in the U.S. were orphaned by the phasing out of 2G networks.

Sub-GHz or 2.4 GHz: Designers must choose between sub-GHz and 2.4 GHz chips when considering the communication protocol or platform for the lighting controller itself. There are many 2.4 GHz mesh options available, such as Zigbee and Bluetooth mesh; these are widely used in the home automation and commercial lighting sectors, respectively, because of their low-power mesh topology. Because the sub-gigahertz spectrum is less crowded than the 2.4 GHz band, emerging sub-gigahertz equipment offers longer range and lower RF noise. As smart cities are built out, options that scale to hundreds of thousands of nodes will become a critical requirement, making Wi-SUN, deployed in the sub-gigahertz spectrum, attractive.

Security: Today's chips are flexible in the sense that once an OEM builds a luminaire controller, the chip needn't change when the next version of the specification arrives. Chip firmware, software, and applications can be updated over the air (OTA) in wireless lighting infrastructure, so security issues that emerge can be corrected. Cyber protection is especially important for critical infrastructure; hacked city streetlights are potentially dangerous. Chips should have a secure element integrated within their hardware to enable the secure storage of keys and passwords. Built-in security features include true random number generators, crypto engines, secure debugging with lock/unlock, DPA countermeasures, and anti-tamper chip designs.

With the above factors in mind, consider that annual global energy consumption is estimated to top 580 million terajoules. According to the International Energy Agency, electricity demand will rise by 4.5% in 2021 (over 1,000 TWh). In a world that's shifting from product to augmented product to platform, smart, connected solutions are helping to minimize the rising use of energy. Cities of the not-so-distant future will be technologically smart, using software and wireless communication to optimize their use of energy and minimize consumption. Performance monitoring systems will deliver real-time insights into how public amenities are running, enabling better performance and less maintenance. And it all starts with smart LED lighting networks.


References
Silicon Labs, www.silabs.com




IoT module energy use: Protocols matter
Time equals power when transmitting data to and from NB-IoT and LTE-M devices. How you use communication protocols can affect power use.

Editor's note: This article is based on a conversation with Phil Ware and Sandro Sestan of u-blox.

When designing an IoT device using a cellular wireless module, you have many factors to consider in minimizing power consumption. Data sheets can provide guidance on power supply design, module placement, and connections. That will get you through the hardware design of your product, but it's not the whole story. How you use the module makes a difference as well.

"How you use the module" means that protocols make a difference in power consumption. Your goal is to keep the module operating at the lowest possible power level for the maximum amount of time while still achieving the desired performance. The protocols you use and how you implement them at the application level make a difference in power use. Some power consumption is inevitable regardless of module or protocol because all modules must first connect to the cellular network; thus, they all need to go through the cellular protocol stack. All modules have a boot time and must physically scan for a network in the bands of interest. That time is generally fixed. Once a module is registered on the network, you're in the application space, where your choices make a difference.

Suppose you've designed a module into a utility meter. You might use a "fire and forget" protocol that minimizes power consumption by simply sending a meter reading and shutting down the module. That's the lowest-power (and lowest-cost) way to send a message. If a reading gets lost, there's always tomorrow. To accomplish this exchange, simply send a User Datagram Protocol (UDP), Message Queue Telemetry Transport for Sensor Networks (MQTT-SN), or Constrained Application Protocol (CoAP) message. The application can then either shut down the module or use the 3GPP power-saving mode (PSM).

When the application requires an acknowledgement from a server, the module needs to wait for a downlink message. During this time, the radio is still connected, waiting for the ACK message. Some modules use NB-IoT because of its low power but then use "chatty" protocols that take time and consume unnecessary power.
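To make the fire-and-forget pattern concrete, here is a minimal host-side sketch. It assumes the application reaches the network through the module's IP stack; the server address, port, and payload format are invented for illustration, and on many modules the same exchange would instead be done with module-specific AT socket commands.

```python
# Illustrative "fire and forget" meter reading over UDP.
# Server address, port, and payload format are assumptions for this sketch.
import json
import socket

SERVER = ("meters.example.com", 9999)            # hypothetical collection server

reading = {"meter_id": "A1234", "kwh": 1534.2}   # example payload
payload = json.dumps(reading).encode("utf-8")    # keep it small: one datagram

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, SERVER)   # one uplink datagram, no acknowledgement expected
sock.close()
# The application can now power the module down or let it enter 3GPP PSM.
```

Because nothing waits for a downlink, the module can be powered down (or left to enter PSM) as soon as the datagram leaves; a lost reading simply waits for the next day's transmission.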

[Figure: Client/server handshake sequence (Client Hello; Hello Verify Request; Client Hello; Server Hello, ServerKeyExchange, Server Hello Done; ClientKeyExchange, [ChangeCipherSpec], Finished; [ChangeCipherSpec], Finished). A CoAP protocol transaction needs relatively few handshakes.]

EXPERIMENTAL RESULTS

As an experiment, Ware tried using Cat-M1 and NB-IoT modules to send messages of 150 bytes. When sending that amount of data, the NB-IoT module used less power than the Cat-M1 module. Cat-M1 gets more efficient as the number of bytes increases; at 500 bytes and up, low-data-rate protocols mean the transmitter is on for a longer period of time.


Cat-M1 modules support ten times the data rate of NB-IoT and thus are on for less time for a given amount of data. With the NB-IoT protocol stack, when the module reattaches to the network, the attach message and the customer's message are part of the same transmission. Cat-M1 does not yet have these optimizations, and its application messages must travel over the user plane. For shorter messages, say less than 200 bytes, this is less efficient than sending messages over the control plane. The power-efficiency crossover seems to occur somewhere between 150 and 200 bytes.

If you look at Lightweight M2M (LwM2M), you'll see that its protocol is message heavy, so modules must negotiate lots of information with the host. It does, however, have lots of features. Even so, the day-to-day operation of an IoT device might not need LwM2M; it can use MQTT-SN or CoAP instead. The accompanying figure shows a typical CoAP transaction sequence. Module users can change protocols depending on the application and should take advantage of that option.

Keep that in mind when developing application software for your device. Protocol stack libraries are available for that purpose, and clients can be built into modules, so you don't need to develop your own client. There's no cost in switching protocols; it's just a matter of sending a few TCP or UDP packets.
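To get a feel for why such a crossover exists, a rough model helps: total transmit energy is approximately current times voltage times on-time, where on-time is a fixed signaling overhead plus payload size divided by the effective uplink rate. The currents, data rates, and overheads below are invented purely to illustrate the shape of the trade-off; they are not u-blox measurements.

```python
# Back-of-the-envelope uplink energy comparison; all numbers are illustrative.
def tx_energy_mj(payload_bytes, rate_kbps, overhead_s, i_tx_ma, v=3.6):
    """Energy in millijoules for one uplink: fixed overhead plus payload airtime."""
    airtime_s = overhead_s + (payload_bytes * 8) / (rate_kbps * 1000)
    return i_tx_ma * v * airtime_s          # mA * V * s = mJ

for size in (50, 150, 200, 500, 1000):
    nb = tx_energy_mj(size, rate_kbps=20, overhead_s=0.3, i_tx_ma=250)   # NB-IoT-like
    m1 = tx_energy_mj(size, rate_kbps=200, overhead_s=0.3, i_tx_ma=300)  # Cat-M1-like
    print(f"{size:5d} bytes  NB-IoT-like {nb:5.0f} mJ   Cat-M1-like {m1:5.0f} mJ")
```

With these made-up numbers the lower-current, lower-rate module wins for small payloads and loses above roughly 150 to 200 bytes, which is the same general shape the experiment showed.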

3GPP POWER SAVING

3GPP specifies power-saving modes. For example, with Extended Discontinuous Reception (eDRX), a module wakes into a listening mode for a specified amount of time before going back to sleep. A module's power-saving features depend on its chipset, although most implement 3GPP Release 13/Release 14 features. In listening mode, a chipset might consume only microamps of current. Some modules don't need to fully boot when they wake from sleep mode, which means less power consumption. Others need to boot, which can take four or five seconds during which the module is powered but not sending useful data.
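On most modules, PSM and eDRX are requested from the host using the standard 3GPP TS 27.007 AT commands AT+CPSMS and AT+CEDRXS. The sketch below shows one way to issue them from a host computer; the serial port, timer codings, and eDRX cycle value are assumptions, supported values vary by module, and the network may grant different values than requested.

```python
# Illustrative host-side configuration of 3GPP PSM and eDRX via TS 27.007
# AT commands. Port name and timer/eDRX values are assumptions; check the
# module's AT command manual and the network's supported settings.
import serial

def at(ser, cmd):
    """Send one AT command and return the raw response (simplified, no retries)."""
    ser.write((cmd + "\r\n").encode("ascii"))
    return ser.read(256).decode("ascii", errors="replace")

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)   # hypothetical port

# PSM: request a periodic TAU of 1 hour and an active time of 10 s
# (binary strings follow the GPRS Timer 3 / Timer 2 coding in TS 27.007).
print(at(ser, 'AT+CPSMS=1,,,"00100001","00000101"'))

# eDRX: enable on NB-IoT (AcT-type 5) with roughly a 20.48-s cycle ("0010").
print(at(ser, 'AT+CEDRXS=1,5,"0010"'))

ser.close()
```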

The difference in power consumption between transmit and receive may depend on distance. A short distance to a base station means less transmit power. A module's receive current could be less than 100 mA, but transmit current could be much higher, depending on the distance to the base station. The network defines the transmit power needed by the remote device. For the receiver, the network determines how often (duty cycle/C-DRX) the module listens to the network signaling. The graph below compares typical transmit and receive currents for a wireless module.

As you can see, the protocols you use can make a difference in your IoT device's power consumption. Choose your protocols based on data-transfer rates and sizes. The right choices result in lower power use and lower power costs for the final device user.

[Graph: current (mA, 0 to 400) vs. time (ms) across one LTE radio frame (10 ms), with 0.5-ms single-slot, single-resource-block transmissions marked. Current consumption depends on TX power and actual antenna load.]

A typical wireless module draws far more current when transmitting, but for a relatively short period of time. Source: u-blox.

Source: "Constrained Application Protocol for Internet of Things," Xi Chen, Washington University in St. Louis.



On the drawing boards: Direct chip-based power control
The next generation of IoT power control electronics may sit on a PCB that's smaller than a U.S. quarter.
Leland Teschler, Executive Editor

Here's the rub when it comes to modernizing the outlets, switches, and circuit breakers that characterize ac electrical systems: Any innovation must interoperate with an installed base of equipment that may have been in place for decades. Manufacturers say issues surrounding compatibility with installed equipment have slowed the rate of introduction for technologies such as the internet of things (IoT) and intelligent controls. But this situation may be changing thanks to advances in solid-state controls.

One example of the trend comes from Amber Solutions via an alliance with Infineon Technologies. The two have devised a system-on-a-chip (SoC) for power management small enough to be easily integrated into the form factors of existing electrical equipment, such as an ordinary ac outlet box. The SoC enables the design of endpoints that include any number of IoT communication components.


The resulting products are said to deliver fully connected smart-building intelligence that makes use of existing gang boxes and circuit breaker panels. This SoC, for example, is claimed to imbue light switches, outlets, and circuit breakers with enough processing power to handle up to 10 times the sensing capabilities and smart features of currently available smart switches and outlets. Amber says its technology today resides on a 13 x 28-mm PCB. It claims the next generation will fit in a 10 x 20-mm chip package.

The technology can be viewed basically as a combination ac-to-dc supply plus a smart switch. Output power ranges from 0.2 to 15 W, drawn from an input voltage that spans 50 to 480 V. The on-demand output voltage can be 3.3 V, 5 V, 12 V, or anything in between. Power density comes in at 5 W/in³, with built-in short-circuit, over-voltage, and thermal protection.




The technology includes a built-in power signature identifier that notices electrical anomalies such as arcing. That's because Amber chips essentially digitize the sine wave at kilohertz rates using proprietary software/firmware and algorithms.

This gives the technology awareness of the electrical sine wave's "normal" state at all times, letting it detect anomalies such as a forming arc fault the instant they occur. The technology then follows the anomaly downstream within a single cycle, or within multiple half cycles if needed.


If the anomaly repeats, even within one complete cycle or a cycle and a half, the system recognizes it as a true arc and trips. For anomalies that don't repeat after a single cycle, the system allows electricity to keep flowing, virtually eliminating nuisance trips. All in all, Amber's embedded intelligence operates quickly and with a deep enough understanding of electrical waveforms to virtually eliminate false and nuisance GFCI and AFCI trips. The switching is solid state and hence arc free, so it requires no special measures for switching capacitive or inductive loads. It is configurable for SPST or SPDT operation, and Amber also says it has a special version of its ac switch for driving LEDs.
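Amber's detection algorithms are proprietary, so the following is only a toy illustration of the trip logic described above: digitize the waveform at kilohertz rates, flag cycles that deviate from the expected sine, and trip only when the anomaly repeats in consecutive cycles. The sample rate, deviation metric, and threshold are all invented for the sketch.

```python
# Toy illustration of cycle-by-cycle anomaly detection with "repeat before trip"
# logic. This is NOT Amber's proprietary algorithm; all numbers are invented.
import math

F_LINE = 60.0            # line frequency, Hz
FS = 6000.0              # sample rate, Hz (kilohertz-rate digitization)
SAMPLES_PER_CYCLE = int(FS / F_LINE)
DEVIATION_LIMIT = 0.15   # arbitrary per-cycle RMS deviation threshold

def cycle_deviation(samples, amplitude=1.0):
    """RMS deviation of one cycle from an ideal, phase-aligned sine."""
    err = 0.0
    for n, v in enumerate(samples):
        ideal = amplitude * math.sin(2 * math.pi * n / SAMPLES_PER_CYCLE)
        err += (v - ideal) ** 2
    return math.sqrt(err / len(samples))

def should_trip(cycles):
    """Trip only if two consecutive cycles are anomalous; a one-off is ignored."""
    prev_bad = False
    for samples in cycles:
        bad = cycle_deviation(samples) > DEVIATION_LIMIT
        if bad and prev_bad:
            return True
        prev_bad = bad
    return False
```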



Amber Solutions’ AC Direct DC Enabler is now available as a demo kit for technical evaluation. It enables dc extraction directly from ac mains without the use of transformers, rectifiers, or filtering. The evaluation platform takes the form of a preproduction discrete board. Amber expects to offer the devices in a silicon chip format scheduled for release in early 2023 or sooner. Similarly, its AC Switch technology is also available as a demo kit. The AC Switch provides precise digital control of electricity regulation, control, and protection. It also includes Amber’s AC Direct DC Enabler technology.

The new SoC can be contrasted with conventional solid-state relays (SSRs). SSRs switch electrical power using optically coupled photovoltaic drivers and silicon-controlled rectifiers (SCRs) or triacs. The conventional SSR is inefficient: for every 1 A of current it handles, it dissipates 1.4 W, so a 10-A SSR dissipates 14 W. Conventional SSRs also turn on and off slowly and have a large mechanical footprint. Perhaps most important, the conventional SCRs and triacs used in SSRs don't control power to the load on a cycle-by-cycle basis. When modulating power, as in conventional light dimmers, they typically do so by turning on for some fraction of the ac cycle.

In contrast, the Amber solid-state switch does handle power cycle-by-cycle rather than blocking portions of the ac waveform within each cycle. It is integrated with what's called a dedicated ac-dc enabler, which extracts dc directly from the ac mains to power the SoC. Amber says its solid-state switch is designed to provide multiple control functions such as lighting control/dimming, over-voltage surge protection, over-current capacitive load protection, and over-voltage inductive load reaction. It also dissipates a relatively minuscule fraction of a milliwatt for every amp of current it handles.

The presence of integrated wireless communications allows an Amber-powered circuit breaker to provide immediate data on the specific endpoint that caused a trip. Such data can include the character of the trip (slow trip, dead short, etc.) and enable wireless shut-off or resetting. Wall outlets and light switches can incorporate Amber's switch to handle overload and thermal sensing. Moreover, Amber says its chip can also power common dc loads directly, such as security sensors, micro cameras, air quality detectors, pressure sensors, wireless transmitters and receivers, and so forth. Amber expects to be in production next year.

References
Amber Solutions, ambersi.com
Infineon Technologies, www.infineon.com






Ultra-long-life batteries harness the passivation effect
A little-known chemical reaction helps extend battery life.
Sol Jacobs, Tadiran Batteries

As remote wireless devices become increasingly essential to the Industrial Internet of Things (IIoT), there is a growing need to understand the principles behind extended battery life. Many IIoT nodes require a battery-driven energy source. These off-grid, low-power applications fall into two categories. The first is devices that draw average current measurable in microamps with high pulses measurable in the multi-amp range. Their annual energy consumption is low enough that an industrial-grade primary (non-rechargeable) battery is a practical power source. The second is devices that draw higher average amounts of energy (background current and pulses) measurable in milliamps. Here, the energy drain is enough to prematurely exhaust a primary cell, thus requiring an energy harvesting device combined with an industrial-grade rechargeable lithium-ion (Li-ion) battery to generate high pulses.

A point to note is that lithium batteries are not all alike. Lithium is the lightest non-gaseous metal. Lithium chemistries feature a high intrinsic negative potential that exceeds that of all other metals, giving them the highest specific energy (energy per unit weight) and energy density (energy per unit volume) of any chemistry.

Lithium cells operate within an open-circuit voltage (OCV) range of 2.7 to 3.6 V. Lithium chemistry is also non-aqueous, thus less likely to freeze in extreme environments. Within the lithium family there are numerous competing chemistries, including lithium iron disulfide (LiFeS2), lithium manganese dioxide (LiMnO2), lithium thionyl chloride (LiSOCl2), and lithium metal oxide, each offering unique advantages and disadvantages. Of all these chemistries, LiSOCl2 is overwhelmingly chosen for long-term deployment in extreme environments, including AMR/AMI metering, M2M, SCADA, RFID, tank-level monitoring, animal and asset tracking, environmental monitoring, and many other applications.

Bobbin-type LiSOCl2 chemistry delivers the highest capacity and highest energy density of all lithium cells, which promotes product miniaturization. LiSOCl2 chemistry also features incredibly low self-discharge (less than 1% per year for certain cells), allowing certain low-power devices to operate up to 40 years on the original battery. Bobbin-type LiSOCl2 batteries also feature an extremely wide temperature range (-80 to 125°C), making them good candidates for use in extreme environments.

Bobbin-type LiSOCl2 batteries can be specially modified for use in the cold chain, where consistent temperatures as low as -80°C must be maintained. Typical applications include continuous monitoring during the transport of frozen foods, pharmaceuticals, tissue samples, and transplant organs. Bobbin-type LiSOCl2 batteries can also be modified for extremely high temperatures, including RFID tracking devices used in hospitals that must withstand many autoclave sterilization cycles at 125°C.

Bobbin-type LiSOCl2 batteries deliver an operating life exceeding 40 years for certain low-power applications.



[Chart: Passivation effect. Capacity remaining after 10 and 20 years of self-discharge only (no load), comparing the XOL TL-49xx Series, the iXTRA TL-59xx Series and other manufacturers, and LiMnO2/alkaline cells. High annual self-discharge rates cut maximum battery life to well under 10 years.]

Left to right, volume left after 10 and 20 years of self-discharge only (no load) for the XOL TL-49xx Series, the IXTRA TL-59xx Series, and for other manufacturers. The IXTRA TL-59xx Series is generally not recommended for applications requiring 10+ year operating life as the average current drawn plus the annual self-discharge rate will cause cell capacity to quickly deplete, thereby reducing the operating life of the device.

Use of an extended-temperature battery saves time and money by eliminating the need to remove the battery prior to sterilization and also ensures 24/7 data continuity.

THE IMPORTANCE OF PASSIVATION

Self-discharge happens in all batteries as chemical reactions sap energy even while a battery is inactive or in storage. A battery's self-discharge rate is affected by numerous variables, including the cell's current discharge potential, the purity and quality of raw materials, and the cell's ability to harness the passivation effect.

Passivation occurs when a thin film of lithium chloride (LiCl) forms on the surface of the lithium anode to limit the chemical reaction. Whenever a load is placed on the cell, the passivation layer creates an initial high resistance, causing voltage to dip temporarily until the discharge reaction removes the layer; this repeats each time the load is removed. Passivation is affected by factors such as the current capacity of the cell, length of storage, storage temperature, discharge temperature, and prior discharge conditions; removing the load from a partially discharged cell can affect passivation relatively more than in a new cell.

Passivation is essential for reducing self-discharge, but too much of it can also restrict energy flow when it is needed most.

Reducing the level of passivation permits greater energy flow, but the trade-off is a higher self-discharge rate and shorter operating life. The effects of passivation on self-discharge and energy flow are analogous to fluid-filled bottles having different-sized openings:

• The volume of the glass/bottle is analogous to battery capacity
• Evaporation/self-discharge is analogous to capacity loss
• Flow volume is analogous to discharge/energy flow
• Low liquid/electrolyte quality can clog the bottle opening, causing a stoppage to flow (passivation)
• Low liquid/electrolyte quality can cause evaporation/self-discharge
• Large openings are good for fast flow/discharge but not for storing fluids for a long time
• Delivering a long life demands a smaller opening for low evaporation/self-discharge

[Figure: Visualizing battery properties. Low flow, low self-discharge: XOL TL-49xx Series. Medium flow, medium self-discharge: iXTRA TL-59xx Series and other manufacturers. High flow, high self-discharge: LiMnO2 and alkaline cells.]

Bobbin-type LiSOCl2 batteries have "small openings" (low flow rate with slower evaporation/self-discharge), while spiral-wound LiSOCl2, LiMnO2, and alkaline cells have "larger openings" (higher flow rates with faster evaporation/self-discharge). The larger opening causes excessive evaporation/self-discharge. Conversely, an opening that is too small can get clogged (excess passivation), a condition that can be exacerbated by chemical impurities.




TABLE 1. HOW LITHIUM BATTERY CHEMISTRIES STACK UP AGAINST EACH OTHER.

| Primary cell | LiSOCl2, bobbin-type with hybrid layer capacitor | LiSOCl2, bobbin-type | Li metal oxide, modified for high capacity | Li metal oxide, modified for high power | Alkaline | LiFeS2 (lithium iron disulfide) | LiMnO2 (lithium manganese dioxide) |
| Energy density (Wh/kg, AA-size) | 700 | 730 | 370 | 185 | 90 | 335 | 330 |
| Power | Very High | Low | Very High | Very High | Low | High | Moderate |
| Voltage | 3.6 to 3.9 V | 3.6 V | 4.1 V | 4.1 V | 1.5 V | 1.5 V | 3.0 V |
| Pulse amplitude | Excellent | Small | High | Very High | Low | Moderate | Moderate |
| Passivation | None | High | Very Low | None | N/A | Fair | Moderate |
| Performance at elevated temp. | Excellent | Fair | Excellent | Excellent | Low | Moderate | Fair |
| Performance at low temp. | Excellent | Fair | Moderate | Excellent | Low | Moderate | Poor |
| Operating life | Excellent | Excellent | Excellent | Excellent | Moderate | Moderate | Fair |
| Self-discharge rate | Very Low | Very Low | Very Low | Very Low | Very High | Moderate | High |
| Operating temp. | -55°C to 85°C (extendable to 105°C for a short time) | -80°C to 125°C | -45°C to 85°C | -45°C to 85°C | 0°C to 60°C | -20°C to 60°C | 0°C to 60°C |
Wireless devices increasingly require high pulses of short duration to power data queries and two-way wireless communications. Standard bobbin-type LiSOCl2 batteries deliver low-level background current during standby but cannot generate high pulses because they are designed for low discharge rates. This challenge can be overcome by adding a patented hybrid layer capacitor (HLC) that delivers periodic high pulses. The HLC also offers an added bonus in the form of a unique end-of-life voltage plateau that can be interpreted to transmit low-battery status alerts.

Supercapacitors perform a similar pulse-supplying function for consumer electronics, but they are generally ill suited to industrial applications due to multiple drawbacks.


These drawbacks include linear discharge qualities that prevent use of all the available energy, low capacity, low energy density, and high annual self-discharge rates (up to 60% per year). Supercapacitors linked in series also require cell-balancing circuits that add expense and bulk and consume energy, further accelerating self-discharge.
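To get a feel for the pulse problem itself, the voltage droop of an ideal capacitive storage element during a current pulse follows ΔV = I·Δt/C. The sketch below uses invented numbers, not values from any HLC, supercapacitor, or battery datasheet:

```python
# Illustrative pulse-support sizing: how much capacitance keeps the supply
# above a cutoff during a radio transmit burst. All numbers are assumptions.
PULSE_CURRENT_A = 0.25   # transmit burst the bobbin cell cannot deliver alone
PULSE_LENGTH_S = 0.5     # burst duration
V_START = 3.6            # cell open-circuit voltage
V_CUTOFF = 3.0           # minimum voltage the radio tolerates

allowed_droop = V_START - V_CUTOFF
needed_farads = PULSE_CURRENT_A * PULSE_LENGTH_S / allowed_droop   # C = I*dt/dV
print(f"Need at least {needed_farads:.3f} F to ride through the pulse")
# -> roughly 0.21 F; the self-discharge and balancing losses of whatever
#    storage element supplies it then determine whether decades-long
#    operation remains achievable.
```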

DIFFICULT TO DETECT

The cumulative impact of high annual self-discharge (largely due to high passivation) may not become apparent for years. Unfortunately, predictive models used to determine theoretical battery life tend to be highly unreliable because they underestimate the effects of passivation and long-term exposure to extreme temperatures. When ultra-long-life reliability is a concern, thorough due diligence is in order when comparing brands. All potential battery suppliers should provide fully documented test results, in-field performance data under similar conditions, and multiple customer references.


This due diligence is important because a superior-quality bobbin-type LiSOCl2 battery can feature a self-discharge rate as low as 0.7% per year, enabling certain low-power devices to operate maintenance-free for up to 40 years. By contrast, a lower-quality LiSOCl2 cell with a higher rate of passivation can exhaust up to 3% of its total energy each year, losing up to 30% of total capacity in 10 years and making 40-year battery life impossible. All in all, a superior-quality battery can reduce or eliminate the need for future battery replacements, significantly lowering the total cost of ownership.
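The arithmetic behind those figures is easy to check. A small sketch, assuming an illustrative 19-Ah cell and a 40-µA average device load (both invented for the example), shows how strongly the annual self-discharge rate dominates multi-decade operating life:

```python
# Illustrative effect of annual self-discharge on usable life.
# Cell capacity and average load current are assumptions for the sketch.
CAPACITY_AH = 19.0      # assumed bobbin-type LiSOCl2 cell capacity
AVG_LOAD_A = 40e-6      # 40 uA average device current (assumed)

def years_of_service(self_discharge_per_year, max_years=60):
    remaining = CAPACITY_AH
    for year in range(1, max_years + 1):
        remaining -= remaining * self_discharge_per_year   # self-discharge loss
        remaining -= AVG_LOAD_A * 8760                     # load drain, Ah per year
        if remaining <= 0:
            return year
    return max_years

for rate in (0.007, 0.03):   # 0.7 %/yr vs. 3 %/yr
    print(f"{rate:.1%}/yr self-discharge -> about {years_of_service(rate)} years")
```

In this toy model the 0.7%-per-year cell lasts well past 40 years, while the 3%-per-year cell falls short, which is the gap the article describes.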

References
Tadiran Batteries, www.tadiranbat.com





