IoT Design Guide 2019


SUMMER 2019 | VOLUME 6 WWW.EMBEDDED-COMPUTING.COM http://embedded-computing.com/designs/iot_dev_kits/

IOT INSIDER

PG 5 The truth about 5G and when it will be here

Development Kit Selector

2019

EXECUTIVE Q&A

K.C. Liu, Founder, Chairman, and CEO, on the present and future of Advantech and the Internet of Things

PG 18

Design Guide PG 38

congatec Server-on-modules bringing industrial computing power to the edge. PG 40

Crystal Group Rugged Systems Edge Computing for IoT PG 39

Avnet Avnet adds software capabilities to end-to-end IoT offering. PG 37


Advertiser Index

• ACCES I/O Products, Inc. – PCI Express mini card, mPCIe embedded I/O solutions
• Advantech – Enabling Deeper, Domain-Specific Integration on the Industrial IoT
• Aetina – Jetson Edge Computing Platform Provider
• Avnet – Avnet adds software capabilities to end-to-end IoT offering
• Avnet – Why you need a technology partner with hardware & software expertise
• Computex – Building global technology ecosystems
• congatec inc – Server-on-modules bringing industrial computing power to the edge
• Critical Link – MitySOM-A10S: Arria 10 system on module for industrial IoT
• Crystal Group, Inc. – Edge computing for IoT
• Digikey – Development Kit Selector
• Embedded Technologies Expo & Conference 2019
• Microchip Technology, Inc. – Simplify development of secure, connected devices
• PICMG – Join the PICMG IIoT specification effort
• Pixus Technologies – Superior Rugged Metal Claw
• Sintrones Tech Corp – Intelligent transportation systems
• Virtium LLC – Storage and memory built for extreme conditions
• WinSystems, Inc. – Robust IIoT solutions

EVENTS IoT World 2019 May 13-16, 2019 Santa Clara, CA http://tmt.knect.com/iot-world

EMBEDDED COMPUTING BRAND DIRECTOR Rich Nass rich.nass@opensysmedia.com
EDITOR-IN-CHIEF Brandon Lewis brandon.lewis@opensysmedia.com
TECHNOLOGY EDITOR Curt Schwaderer curt.schwaderer@opensysmedia.com
ASSOCIATE TECHNOLOGY EDITOR Laura Dolan laura.dolan@opensysmedia.com
ASSISTANT MANAGING EDITOR Lisa Daigle lisa.daigle@opensysmedia.com
CONTRIBUTING EDITORS Majeed Ahmed, Jeremy S. Cook, Nicholas Cravotta, Bill Schweber
DIRECTOR OF E-CAST LEAD GENERATION AND AUDIENCE ENGAGEMENT Joy Gilmore joy.gilmore@opensysmedia.com
ONLINE EVENTS SPECIALIST Sam Vukobratovich sam.vukobratovich@opensysmedia.com
CREATIVE DIRECTOR Stephanie Sweet stephanie.sweet@opensysmedia.com
SENIOR WEB DEVELOPER Aaron Ganschow aaron.ganschow@opensysmedia.com
WEB DEVELOPER Paul Nelson paul.nelson@opensysmedia.com
CONTRIBUTING DESIGNER Joann Toth joann.toth@opensysmedia.com
EMAIL MARKETING SPECIALIST Drew Kaufman drew.kaufman@opensysmedia.com

SALES/MARKETING
SALES MANAGER Tom Varcie tom.varcie@opensysmedia.com (586) 415-6500
MARKETING MANAGER Eric Henry eric.henry@opensysmedia.com (541) 760-5361
STRATEGIC ACCOUNT MANAGER Rebecca Barker rebecca.barker@opensysmedia.com (281) 724-8021
STRATEGIC ACCOUNT MANAGER Bill Barron bill.barron@opensysmedia.com (516) 376-9838
STRATEGIC ACCOUNT MANAGER Kathleen Wackowski kathleen.wackowski@opensysmedia.com (978) 888-7367
SOUTHERN CAL REGIONAL SALES MANAGER Len Pettek len.pettek@opensysmedia.com (805) 231-9582
SOUTHWEST REGIONAL SALES MANAGER Barbara Quinlan barbara.quinlan@opensysmedia.com (480) 236-8818
INSIDE SALES Amy Russell amy.russell@opensysmedia.com
ASIA-PACIFIC SALES ACCOUNT MANAGER Patty Wu patty.wu@opensysmedia.com
BUSINESS DEVELOPMENT EUROPE Rory Dear rory.dear@opensysmedia.com +44 (0)7921337498

WWW.OPENSYSMEDIA.COM
PRESIDENT Patrick Hopper patrick.hopper@opensysmedia.com
EXECUTIVE VICE PRESIDENT John McHale john.mchale@opensysmedia.com
EXECUTIVE VICE PRESIDENT Rich Nass rich.nass@opensysmedia.com
CHIEF FINANCIAL OFFICER Rosemary Kristoff rosemary.kristoff@opensysmedia.com
GROUP EDITORIAL DIRECTOR John McHale john.mchale@opensysmedia.com

Embedded Technologies Expo & Conference June 25-27, 2019 San Jose, CA www.embeddedtechconf.com

VITA EDITORIAL DIRECTOR Jerry Gipper jerry.gipper@opensysmedia.com
TECHNOLOGY EDITOR Mariana Iriarte mariana.iriarte@opensysmedia.com
SENIOR EDITOR Sally Cole sally.cole@opensysmedia.com
CREATIVE PROJECTS Chris Rassiccia chris.rassiccia@opensysmedia.com
PROJECT MANAGER Kristine Jennings kristine.jennings@opensysmedia.com
FINANCIAL ASSISTANT Emily Verhoeks emily.verhoeks@opensysmedia.com
SUBSCRIPTION MANAGER subscriptions@opensysmedia.com

Subscribe to a free digital edition of IoT Design magazine or the IoT Design Weekly e-mail newsletter at www.embedded-computing.com/iot


CORPORATE OFFICE 1505 N. Hayden Rd. #105 • Scottsdale, AZ 85257 • Tel: (480) 967-5581 REPRINTS WRIGHT’S MEDIA REPRINT COORDINATOR Wyndell Hamilton whamilton@wrightsmedia.com (281) 419-5725



ROBUST IIOT SOLUTIONS

Embed Success in Every Product WINSYSTEMS’ rugged, highly reliable embedded computer systems are designed to acquire and facilitate the flow of essential data at the heart of your application so you can design smarter solutions.

We understand the risks and challenges of bringing new products to market, which is why technology decision-makers choose WINSYSTEMS to help them select the optimal embedded computing solutions to enable their products. As a result, they have more time to focus on product feature design, with lower overall costs and faster time to market. Partner with WINSYSTEMS to embed success in every product and secure your reputation as a technology leader.

817-274-7553 | www.winsystems.com 715 Stadium Drive, Arlington, Texas 76011 ASK ABOUT OUR PRODUCT EVALUATION!

Single Board Computers | COM Express Solutions | Power Supplies | I/O Modules | Panel PCs

EBC-C413: EBX-compatible SBC with Intel® Atom™ E3800 series processor
EPX-C414: EPIC-compatible SBC with Intel® Atom™ E3800 series processor
PX1-C415: PC/104 form factor SBC with PCIe/104™ OneBank™ expansion and latest-generation Intel® Atom™ E3900 series processor


Summer 2019 | Volume 6

CONTENTS

opsy.st/IoTDesign | @iot_guide

FEATURES
6   The why, when, and how of IoT projects – By Jayna Locke, Digi International
10  Smart products need smart development – By Kim Rowe, RoweBots
16  Embedded system licensing: What, why, and how – By Tim Regas, KEYLOK
22  The edge is getting smarter, going smaller, and moving further out! – By Jaya Kathuria Bindra and Nidhin MS, Cypress Semiconductor
30  5G Primer Part 1: Basic Architecture – By Curt Schwaderer, Technology Editor
    Fail-safe data storage for IoT applications – By Nilesh Badodekar, Cypress Semiconductor
    How cellular IoT is taking over the world – By JC Elliott, Polte
    How to unlock IoT value now – By Sahir Sait, Ayla Networks – https://bit.ly/2Z39dvo
    Taking back control of our personal data – By Todd Mozer, Sensory
    The value of adding location for IoT designs – By Dunstan Power, Bytesnap Design – https://bit.ly/2uUVPv8

COLUMNS
5   IOT INSIDER – The truth about 5G and when it will be here – By Brandon Lewis, Editor-in-Chief
    Power and protocols – Four CEOs on the state of wide-bandgap tech – By Alix Paultre, Senior Technology Editor, Analog, Power & Europe

WEB EXTRAS
    Automotive embedded systems – lots of standards – By Colin Walls, Software Technologist at Mentor, a Siemens business – https://bit.ly/2IozkH5
    APEC wrapup/roundtable – By Alix Paultre – https://bit.ly/2U6sFn5
    MWC: 5G opportunity taking off for big and small ventures – Guest blog by Emmanuel Gresset, CEVA Wireless business unit – https://bit.ly/2Z0Zfuf

2019 DESIGN GUIDE (PG 38)
    Development Kits | AI & Edge Computing | Applications: Industrial | Software & Development Tools | LPWAN (including LoRa, NB-IoT, etc.) | Storage

COVER
The watchwords in IoT right now: Smart – edge – secure. Our IoT Design Guide covers all of those topics and also delves into the architecture of 5G, data storage, and product development. Discover 2019's most innovative IoT products and solutions in our product profile section, beginning on page 38.

Published by OpenSystems Media. © 2019 OpenSystems Media® / Embedded Computing Design. All registered brands and trademarks within Embedded Computing Design magazine are the property of their respective owners. iPad is a trademark of Apple Inc., registered in the U.S. and other countries. App Store is a service mark of Apple Inc. ISSN: Print 1542-6408, Online 1542-6459.


IoT INSIDER

The truth about 5G and when it will be here By Brandon Lewis, Editor-in-chief

5G is supposed to bring enhanced mobile broadband (eMBB) connectivity to consumers; ultra-reliable, low-latency communications (URLLC) for applications like augmented/virtual reality and autonomous cars; and massive machine-type communications (mMTC) from narrowband standards for ubiquitous IoT. All of this is still a work in progress, highlighted by the delayed "late drop" of Release 15. But given the 3GPP's track record, we can assume they will deliver. The real question is, "When?"

Muddled marketing suggests that many features required for eMBB, URLLC, and mMTC are supported in Release 15. When you dig a little deeper, however, you'll find slides like Figure 1 from the 5G Alliance for Connected Industries and Automation. As shown, the IoT- and industrial-centric features of 5G won't be finalized until the completion of 3GPP Release 16 and Release 17. Based on the 3GPP's adjusted timeline, that puts us into 2021 before many of these capabilities are even standardized.

That's not to say that 5G marketing has been completely misleading. Readers familiar with 5G will note that the narrowband standards are compatible with the 5G New Radio (5G-NR) architecture, and that devices in the field that were designed to them can support 5G through a firmware upgrade. The issue, however, is that the core network and base stations have a long way to go before they can deliver 5G capabilities to those endpoints.

According to Daniel Quant, VP of Strategic Development at MultiTech, big mobile network operators ensured that the eMBB aspects of 5G were accelerated to keep consumers upgrading their Internet services. That has had a domino effect.

brandon.lewis@opensysmedia.com

"Release 17 delivers the third leg of the stool: mMTC for IoT devices, or LTE-M and NB-IoT," Quant says. "Nothing really changes on the device side other than enhancements for faster data codecs, improved power and range, location services, etc. Lots of work is needed in the core and next-generation NodeB (gNB) to support them.

"The point here is that assets supporting LTE-M or NB-IoT will not connect to a pure-play 5G-NR-plus-5G Core network until the 2022 timeframe, once Release 17 is ratified and networks are deployed."

To wait or to LoRaWAN?
As Quant asserts, "2022 seems a ways off today and industry can't wait. It needs to get moving." One alternative is LoRaWAN.

I recently attended a LoRaWAN Live event in San Diego where LoRa Alliance members updated the crowd on some key protocol enhancements. One was additional support for firmware updates over the air (FUOTA) through specs that define application-layer clock sync, remote multicast setup, and transport for fragmented data blocks. The Alliance also has working groups focused on "Worldwide Wakeup," or the ability for a single device to operate in multiple geographies using LoRaWAN profiles that account for regional discrepancies in frequency, transmit power, etc. Still another is investigating the use of LoRaWAN – originally designed for unlicensed spectrum – in licensed bands.

Notable infrastructure is also emerging to help engineers develop, test, and deploy LoRaWAN devices. For example, the online Mbed Simulator developed by Jan Jongboom of Arm allows users to run the protocol in a virtual environment and measure the performance of functions like FUOTA.

Meanwhile, MultiTech's solution is to attach LoRaWAN to a cellular backhaul, whether that's 2G, 3G, 4G, or 5G-NR, on a public or private network. "It will be a long time before 5G has the range and link budget that cellular has today," Quant explains. "With LoRa, you're able to capture the best link budget and power performance but attach it back into the existing network. That reduces risk from 4G to 5G because the LoRa nodes you're attaching to help you scale out better in terms of bandwidth and efficiency."

So is it LoRaWAN or wait?

FIGURE 1: Adjusted 5G timeline.
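The fragmented-data-block transport mentioned above splits a firmware image into small pieces that can arrive out of order, and more than once, over multicast, with the device reassembling them locally. A minimal sketch of that bookkeeping in C, using hypothetical structure and function names and sizes rather than the LoRa Alliance API:

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define FRAG_SIZE 48   /* LoRaWAN payloads are small; 48 bytes is illustrative */
#define MAX_FRAGS 256

/* Hypothetical reassembly session for a FUOTA-style fragmented transfer. */
typedef struct {
    uint8_t  image[FRAG_SIZE * MAX_FRAGS];
    uint8_t  received[MAX_FRAGS / 8];   /* bitmap of fragments seen so far */
    uint16_t total_frags;               /* announced during session setup  */
    uint16_t frags_seen;
} fuota_session_t;

void fuota_init(fuota_session_t *s, uint16_t total_frags) {
    memset(s, 0, sizeof *s);
    s->total_frags = total_frags;
}

/* Accept one fragment; duplicates (common with multicast) are ignored. */
bool fuota_on_fragment(fuota_session_t *s, uint16_t index,
                       const uint8_t *data, uint16_t len) {
    if (index >= s->total_frags || len != FRAG_SIZE)
        return false;                    /* out of range or malformed */
    uint8_t mask = (uint8_t)(1u << (index & 7));
    if (s->received[index >> 3] & mask)
        return true;                     /* duplicate: already stored */
    memcpy(&s->image[(size_t)index * FRAG_SIZE], data, len);
    s->received[index >> 3] |= mask;
    s->frags_seen++;
    return true;
}

bool fuota_complete(const fuota_session_t *s) {
    return s->frags_seen == s->total_frags;
}
```

Real FUOTA sessions as specified also carry redundancy fragments (forward error correction) so a device that missed a few multicast packets can still complete the image without unicast retries; that layer is omitted here.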



THE WHY, WHEN, AND HOW OF IOT PROJECTS

The why, when, and how of IoT projects By Jayna Locke, Digi International

What do a cellular solution for an environmental-cleanup project, a water-optimization solution for farm irrigation, and a retrofit solution for smart city lighting have in common? If you guessed "Internet of Things technology," you are correct. You would also be correct if you guessed that these solutions were all developed in response to an expensive problem that needed to be solved. All of these IoT examples developed out of a strong business case around working smarter and improving efficiency.

In other words, the "why" behind IoT projects is not necessarily a quest to be a technology adopter. In fact, the terms "IoT" and "Internet of Things" may not even enter into early conversations. Instead, it comes down to business fundamentals: how to reduce waste, drive efficiency, increase visibility into complex or remote processes, and offload the burden of inefficient manual operations to wireless communications and automation. All of these goals can improve the bottom line by eliminating costs or reducing risks that can impact revenue.

Consider, for example, an IoT solution that senses when a refrigeration unit breaks down and immediately notifies an administrative professional via SMS message; the administrator quickly gets service personnel to the location, saving $25,000 in inventory that would otherwise have spoiled. The operations manager of the restaurant or warehouse whose inventory was saved may not have envisioned an IoT solution initially, but may instead have asked, "How can I reduce the risk of loss if one of my units fails?" Today, the answers to questions like these can often lead to a business case to invest in an Internet of Things solution.
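The refrigeration example boils down to a small piece of edge logic: a temperature threshold with debouncing, raising a one-shot alert that the connectivity layer turns into an SMS. A sketch in C, with illustrative (not product-specific) thresholds and names:

```c
#include <stdbool.h>

/* Illustrative rule: alert if the cold chamber stays above 5 degrees C
 * for 3 consecutive readings (debounce avoids alarms on door openings). */
#define TEMP_LIMIT_C    5.0f
#define CONSECUTIVE_BAD 3

typedef struct {
    int  bad_readings;   /* consecutive over-limit samples seen  */
    bool alert_sent;     /* send the SMS/notification only once  */
} fridge_monitor_t;

/* Returns true exactly once per failure episode, when an alert should
 * be dispatched (e.g., an SMS to the administrator via a cellular modem). */
bool fridge_check(fridge_monitor_t *m, float temp_c) {
    if (temp_c > TEMP_LIMIT_C) {
        m->bad_readings++;
    } else {
        m->bad_readings = 0;    /* back in range: reset the streak */
        m->alert_sent = false;  /* re-arm after recovery           */
    }
    if (m->bad_readings >= CONSECUTIVE_BAD && !m->alert_sent) {
        m->alert_sent = true;
        return true;
    }
    return false;
}
```

The one-shot flag matters in practice: without it, a failed unit sampled every minute would send the administrator hundreds of identical messages.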


The numbers underscoring the idea that it may be time to create that business case are impressive:

• Up to $11 trillion in annual savings and revenues are forecast for 2025.¹
• IoT is projected to boost corporate profits by 21 percent by 2022 and is expected to reach a tipping point of 18 to 20 percent adoption in 2019, as shown in the graphics below.² (Figure 1, Table 1.)
• Industrial IoT devices are expected to add $14 trillion to the global economy by 2030.³


Digi International | www.digi.com
Twitter: @digidotcom | Facebook: @digi.international | LinkedIn: www.linkedin.com/company/digi-international/ | YouTube: www.youtube.com/user/Digidotcom

The why of Internet of Things projects
While adoption of IoT is rapidly increasing across every imaginable business environment, from transportation and supply chain to financial services, smart cities, medical facilities, and industrial sites, those making purchasing and deployment decisions need to have solid answers to the questions around why, when, and how to deploy these solutions.

The "why" has certainly come into focus for many organizations. As we have discussed, it typically comes down to efficiency, cost savings, and even potential competitive advantage that would enable the organization to win new business. Nothing makes a business case more readily than significant dollars saved or an increase in revenue. Establishing the "why" involves defining the business need and then demonstrating how technology solves that problem. For example:

• Reduce redundancy and business complexity: Will the solution enable your city transit system to move to one full-featured router for all on-board data communications, including dispatch, GPS tracking, security monitoring, and customer Wi-Fi?
• Reduce cost: If you put a sensor-based system in place to notify personnel when remote systems must be serviced, can you reduce truck rolls to those sites by 50 percent?
• Increase efficiency: Can a cloud-based remote device monitoring and management system enable your team to remotely update the firmware of all devices at once?

FIGURE 1: IoT adoption to approach 100 percent over the next 10 years, reaching a tipping point of 18-20 percent in 2019. (Chart: rate of adoption for IoT, artificial intelligence, blockchain, and augmented reality, 2017-2030.) IoT is projected to boost corporate profits by 21 percent by 2022. Source: DBS Bank based on estimates by Gartner, United Nations, World Bank.

TABLE 1: IoT adoption is gaining momentum and will reach 176 percent adoption by 2030. Source: DBS Bank based on estimates by Gartner, United Nations, World Bank.

                                        2016     2017     2018     2030
IoT units installed base, total (m)     6,382    8,381    11,197   125,000
Consumer devices (m)                    3,963    5,244    7,036    75,000
Consumer devices as % of total devices  62%      63%      63%      60%
Connected devices per person            5        5        5        5
World population (m)                    7,400    7,600    7,700    8,500
IoT adoption rate                       11%      14%      18%      176%



A Cap Gemini report⁴ provides a handy list of "whys," including those below. To build your case, gather estimated percentages of improvement or reduction for your own use case:

• Cut down on the amount of waste in production
• Improve fuel efficiency
• Reduce the cost of occupational injuries
• Reduce product liability claims
• Grow aftermarket sales or service
• Improve productivity
• Eliminate the cost of follow-up visits

Some organizations can create metrics around potential new business, whether through better customer service, value added services that attract new customers, or features that make the business more competitive. For example, one smart lighting solutions provider developed an IoT solution that can save their customers up to 85 percent of their lighting energy costs, giving them an excellent data-backed story for winning new customers.

The when of Internet of Things projects
This question can be a bit more challenging, because there are many factors at play. For example, some IoT solutions (e.g., boxed products like routers) can be virtually drop-in deployments, whereas a solution that involves product design with radio modules, single-board computers, and antennas has a longer lead time.

The question of when to deploy is connected to the why. Consider an industrial site such as a water treatment facility using landlines for transferring data on flow rates and other measurements. The "why" may be the high cost of maintenance, or it may be the risk of equipment failure due to aging technology. The organization needs to evaluate the cost of upgrading to new infrastructure against those concerns. Additional factors in the project timeline will include choosing an IoT solutions vendor, sourcing materials, any design time that would be required, and performing a pilot.

The how of Internet of Things projects
Developing a solution based on Internet of Things technology can be a daunting undertaking. Behind the how of IoT is a rapidly developing wealth of tools, technologies, devices, networks, and applications – the intricate scaffolding of the Internet of Things. Defining an entry point into this complex environment is critical, as it is highly likely that multiple hardware and software components must come together in the final solution.

While each use case is somewhat unique, there are some key questions to ask:

• How will you source the hardware and components you will need to complete the design, and where will you procure those materials?
• How will you provide documentation or training to those who must install and implement the solution in the field?
• How will you ensure that the solution is operating optimally, performing as expected, and providing the desired data or results?
• How will you deliver data points to those responsible for monitoring and maintaining the solution?
• How will you analyze the data and make it actionable so that it becomes a value proposition for your solution?
• How will you scale the deployment to accommodate growth?

One of the most effective approaches to determining the how of an IoT project may be surprising: it involves partnering with a provider that can deliver value above and beyond selling IoT products. By finding a vendor that takes a holistic, solutions-based approach to problem solving, organizations can get critical support along the way to define and answer questions about the integration of multiple components, gather data points from devices across an IoT deployment, develop a successful solution design, complete the certification process, and meet time-to-market objectives.

A November 2018 Gartner report stated it well: "CIOs should ensure they have the necessary skills and partners to support key emerging IoT trends and technologies, as, by 2023, the average CIO will be responsible for more than three times as many endpoints as this year."⁵

Likewise, a Forbes report stated: "Many companies express a high sense of urgency in making their IoT programs operational. It is therefore no wonder that more than four out of five companies seeing the most success with IoT report using a vendor-sourced IoT platform as part of their initiatives."⁶

The journey of a thousand miles does indeed begin with a single step, as Lao Tzu is famously quoted as saying. This is certainly true of the decision to deploy an IoT solution. It all starts with an evaluation phase to clearly define the business need and the path forward.

References:
1. McKinsey & Company: Unlocking the Potential of the Internet of Things
2. Forbes Insights: The Internet of Things: From Theory to Reality
3. VXchange: 5 IoT Statistics You Need to Know in 2019
4. Cap Gemini: Thirty Measurable Business Cases for IoT with Connected Services
5. Gartner: Gartner Identifies Top 10 Strategic IoT Technologies and Trends
6. Forbes: 2018 Roundup of Internet of Things Forecasts and Market Estimates

Jayna Locke has an extensive background in content marketing and loves storytelling as a way to share innovations, industry insights, and thought leadership.


Edge Computing for IoT
RUGGED. SCALABLE. SECURE.

Crystal Group rugged networking systems bring reliable IoT and AI processing to the edge – where office-grade systems would fail. Offering NIAP-certified IP-security modules for network encryption, these rugged systems are designed for smart city, asset tracking, agriculture, smart grid, smart home, and transportation use cases, and provide scalable edge switch technology with enterprise-class functionality. Crystal Group's rugged server and switch, combined with a Ruckus wireless access point, collect and process IoT data in the field. This reduces latency, improves security, and eliminates cloud-based network throughput variations. All Crystal Group products are manufactured in NIST-compliant, U.S.-based facilities with end-to-end U.S. supply chain of custody.

SERVERS | DISPLAYS | STORAGE | NETWORKING | EMBEDDED | CARBON FIBER sales@crystalrugged.com | 800.378.1636 | crystalrugged.com



Smart products need smart development By Kim Rowe, RoweBots

This small wearable device is deceptively simple: it contains a microcontroller, a low-power radio, sensors, memory, and complex software that analyzes physical conditions. In addition, it is connected to a smartphone and has analytical support in the cloud. (Stock photo.)

The world is full of bright and creative ideas for innovative and useful products. Brilliant engineers and marketers are eager to bring these products to customers who will purchase them and appreciate the way they enhance their lives. Today, these products are increasingly based on microprocessors or microcontrollers (MCUs) with creative combinations of peripherals and unique software.

How is it, then, that so many innovative concepts fail to make it from an excited meeting room to an actual profit-making product? It turns out that there are reasons for this that can be identified and avoided to achieve success.

To begin with, it might seem straightforward to decide "what" to build based on market research and user input, especially for smaller companies that have a close relationship with their customers and a familiarity with specialized areas. But the "what" (a gizmo that does x) is often too general a concept when getting down to an actual device that interacts with the real world and costs actual dollars. Getting to the "what" does involve market research and input, but it also relies on an engineering team that knows what is involved technically, as well as hard-nosed financial analysis to see what that path of possibility will cost. And since all projects contain certain unknowns, management must be flexible enough to deal with changes in plans and "gotchas" along the way – and have the ability to find the best way through the maze of functions and components that are necessary for the device but not central to its application.

Smaller companies may be limited by their resources and willingness to take risks, and stick to areas they know when expanding their offerings – they can take advantage of less direct competition to support development and incremental improvement in their products. Larger companies, however, often opt for prototyping so that they can have a team look at a working early model to analyze it, understand it, make changes, and work out bugs. Both approaches can be successful, but both are also subject to failure due to inattention to certain obstacles and mistaken assumptions about implementation.

To start, it can be useful to filter ideas through a process called stage-gate, subjecting them to the considerations of market research, development, and financial constraints to emerge with the best contenders (Figure 1). Although it isn't used often, stage-gating can be useful for smaller companies that might struggle with a more rigorous approach.

Once such an idea has been selected, it is time to get started on real development and avoid the pitfalls that lie along the way.


RoweBots | www.rowebots.com
Twitter: @rowebots | LinkedIn: www.linkedin.com/company/rowebots | YouTube: www.youtube.com/user/rowebots

A fictional smart device and its development requirements
To demonstrate the issues many companies experience during product development, consider a hypothetical company working in an area where it already has some experience. On entering the product specification stage (where the "what" has been selected in terms of a product specification aimed at a specific set of end users), the "what" turns out to have two aspects. These could be described as specifications of "what" it is supposed to do and "what" it is going to actually be as a physical device – or, in other words, what to do and how to get there.

The company settles on a small wearable device that includes a set of sensors to monitor body and environmental conditions, provides information and/or alarms to the user, and stores data in the cloud (as in the lead image). It is billed as a first-of-its-kind device to be sold in volume and will use a Bluetooth 5 radio to connect to a smartphone containing one or more related apps. In turn, the smartphone communicates with cloud-based AI or data management apps, making the wearable an IoT device. The wearable spends a good deal of time disconnected from the cloud, sensing, preprocessing, storing data, and providing immediate responses. Its local processing performance is limited by constraints on power consumption and the need for security and data storage.

FIGURE 1: A stage-gate model may be used to refine product choice, minimizing expenditure on products that never reach the market. (Diagram: Discovery, Scoping, Development, Testing/Validation, Launch, and Closure stages separated by Gates 0-5, running from idea generation and screening through business case, development documentation, prototype build, testing and validation, launch preparation, and post-launch review.)

FIGURE 2: Sensor fusion takes the data from multiple sensors, processes it, and fuses the results together to realize some higher-level information or conclusion. In this case, temperature, barometric data, and acceleration data are combined to estimate altitude and velocity for a skydiving and hang-gliding sensor.


So far, so good. From this specification come a number of requirements, including:

• Hardware that can support both embedded processing and Bluetooth 5
• A suite of proven, reliable sensors
• Sensor fusion algorithms capable of combining multiple sensor outputs into a single result (Figure 2)
• Enough program memory for device software and firmware, as well as additional memory for storing data locally until it can be transmitted
• A USB interface and firmware-over-the-air (FOTA) update capability for diagnostics, maintenance, and service
• A battery that provides sufficient capacity and adequate charging capability within a size and weight envelope that can be accommodated by the final design

Filling these requirements warrants careful consideration, and it is here that failures often begin.

Potholes on the road to success
Continuing our hypothetical example, a developer might decide that Wi-Fi is a better option than the USB service interface – except for the fact that Wi-Fi draws so much power that the device would require a bigger battery and more hardware support, resulting in more size, more weight, and higher overall cost. It is far less costly and time-consuming to avoid this mistake altogether rather than start down the path and have to turn back.

This caution is required in the selection of other hardware, too, especially the processor. A low-power microcontroller that can handle the limited programming and complexity of the device is preferable to a microprocessor that would simply be overkill in terms of size and power consumption. Finding parts with integrated functions can also go a long way toward reducing cost, time to market, and dead ends in the development process; an MCU with integrated Bluetooth 5, for example, saves space and power versus using a discrete Bluetooth chip. And, by all means, select parts that are known, prequalified, and, when appropriate, come with reliable driver software (such as sensors).

While prototyping is of course an important part of the development process, try to select an evaluation board with your chosen processor rather than some embedded Linux board. The former can get you to a working, flexible design much more easily, and will also give you a better idea of how components will come together in the final packaging.
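The Wi-Fi-versus-USB decision above is ultimately a battery-budget calculation: average current is dominated by the radio's active draw multiplied by its duty cycle. A back-of-the-envelope helper, where all current and capacity figures used with it are illustrative assumptions rather than measured values:

```c
/* Back-of-the-envelope battery-life math for duty-cycled IoT devices. */

/* Average current of a device that is active (radio on, sampling) for
 * a fraction `duty` of the time and asleep for the remainder. */
double avg_current_ma(double active_ma, double sleep_ma, double duty) {
    return active_ma * duty + sleep_ma * (1.0 - duty);
}

/* Ideal runtime in hours; real designs derate for battery aging,
 * temperature, and regulator efficiency. */
double battery_life_hours(double capacity_mah, double avg_ma) {
    return capacity_mah / avg_ma;
}
```

With a 200 mAh cell active 1 percent of the time, a BLE-class radio drawing around 15 mA yields roughly 1,250 hours, while a Wi-Fi-class radio drawing around 200 mA collapses that to about 100 hours – the "bigger battery" problem described in the text.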

FIGURE 3

12

A super loop, or “a pile of spaghetti,” can arise from a simple control loop, which may initially seem attractive and easy to implement, but will soon become unmanageable as more devices are supported and modules get fragmented to deal with the scheduling issues created by the super loop. Finally, in the general case the super loop quickly produces a pile of spaghetti and all maintainability, product evolution, and flexibility is lost.


Are versions of the MCU available that can be put into a very small package? Where will the battery and interfaces be located on/in the wearable device? Considering these questions early in the process helps avoid roadblocks that kill time to market, add huge costs, and ultimately doom products.

Finding the safe road for software
One aspect of design that is rarely appreciated up front is the scope and volume of software required. There is, of course, the software that will run on the device (a topic unto itself), but also the applications (yes, both Android and iOS versions) that will run on the smartphone, and the cloud software needed to improve the usability of the device. Decisions will need to be made on the development approach, look, and feel to make all of this software as reliable and consistent as possible.

Super loops, spaghetti code, and RTOSs
Developing software for an embedded processor can be a hazardous path if not plotted carefully. The temptation to write all the code from scratch can be deadly. For one thing, it entails coding all the support for peripherals, security, a file system, a real-time clock, and more. That process alone costs huge amounts of time and guarantees errors and debugging. Even then, it is likely that the software will ship with bugs that remain undetected until after deployment.

A frequent corollary to "writing it all from scratch" is the "super loop" approach: a single control loop that continuously polls the different peripherals for data and processes it (Figure 3). Such a loop is easy to understand in its simplest form, but it can quickly turn into a pile of spaghetti and become impossible to comprehend or maintain as it grows in functionality. Starting down this path will only take you further along it, toward developmental and financial disaster. While platform-based development is almost a given for edge devices (i.e., the smartphone) and the cloud, it must be actively sought out, evaluated, and selected for embedded devices. From the start, this means using a real-time operating system (RTOS) that is specifically targeted at the MCU architecture used in the project (Figure 4). Using an RTOS delivers an instant platform that runs on the MCU.

FIGURE 4

An RTOS provides structure, scheduling support, and software reuse, all while eliminating the fragmentation of modules that happens in the super loop case. Time to market is critical; this is the optimal path to realizing a quality, secure product under budget and on time.
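In its simplest form, the super-loop pattern can be sketched in a few lines of C. This is an illustrative toy, not production firmware; the peripheral flags and handlers are hypothetical stand-ins for hardware status registers and driver calls:

```c
#include <stdbool.h>

/* Hypothetical peripheral-ready flags; in real firmware these would be
 * hardware status registers or flags set by interrupt service routines.
 * Globals here only so the sketch is self-contained and checkable. */
static bool uart_has_byte, sensor_has_sample, radio_has_packet;
static int uart_handled, sensor_handled, radio_handled;

static void handle_uart(void)   { uart_handled++;   uart_has_byte = false; }
static void handle_sensor(void) { sensor_handled++; sensor_has_sample = false; }
static void handle_radio(void)  { radio_handled++;  radio_has_packet = false; }

/* One pass of the super loop: poll every peripheral, service whatever is
 * ready. On a real device this body runs forever inside for (;;). */
void super_loop_iteration(void)
{
    if (uart_has_byte)     handle_uart();
    if (sensor_has_sample) handle_sensor();
    if (radio_has_packet)  handle_radio();
}
```

Every new peripheral adds another poll-and-branch to this one function, and any slow handler stalls all the others; that is exactly how the pattern degenerates into spaghetti as the product grows.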



THE WHY, WHEN, AND HOW OF IOT PROJECTS

Such an RTOS platform services interrupts from the peripherals as needed, offers ready-made I/O, and performs other basic functions. This alone supplies huge amounts of ready-made, tested, and often certified code onto which you can add your unique, value-adding functionality. It also shortens time to market and assures code correctness at that level: the platform code can be considered secure, tested, already integrated, complete, and efficient, overcoming all the limitations of the super loop approach. An RTOS also enables easier prototyping. In the sensor-fusion example mentioned earlier, prototyping will help the developers understand which sensors are needed, which algorithms work best at combining sensor outputs into a single quantity, and how well the chosen processor (presumably with fast multiply/accumulate [MAC] performance) is able to filter noise from the sensors. An RTOS with rich support for peripherals, together with prototyping boards based on a range of processors, makes a powerful platform for full-on prototyping and development. A well-chosen RTOS can offer pretested device drivers; security functions; a file system; and support for a wide range of sensors, cameras, and radios.

Frameworks for phones
When it comes to the edge devices (here, smartphones, but in other cases perhaps PCs), there are also well-known operating systems such as Android and iOS, as well as robust programming languages and tools for smartphone development. One possible pitfall is divergence between functionality and user experience (user interface and so on) across the two types of phones, and frameworks are available to help coordinate development. Still, the cost of developing phone applications is easily underestimated. Given that most products target both Android and iPhone, the decision of whether to use a framework can be a tough one. With separate development, an almost identical look and feel can be achieved, but development and maintenance costs will be greater. With a framework, both applications can be maintained together, likely reducing costs but allowing less-flexible user interfaces.

Cloud platforms abstract challenges, but not all of them
Fortunately, platforms such as IBM Watson IoT and Microsoft Azure offer a solid environment for cloud applications: they provide cloud/edge/device connectivity options, data-management tools, and AI frameworks that can analyze vast amounts of data and refine the applications running on devices and edge systems (Figure 5). Even so, developing cloud software, and the platform around it, can be expensive. A rich environment and advanced cloud processing may require significant development, especially if the application requires sophisticated algorithms.

FIGURE 5

Microsoft Azure is an example of a platform for cloud applications that includes connectivity and support for edge devices, as well as protocols for embedded devices to connect and exchange data with cloud analysis applications.

AI requires a lot of raw, real-world data, which must be gathered, preprocessed to clean out noise, processed again by deep-learning neural networks, and run either in the back end or at the edge. Such AI processing can improve your product greatly, but achieving it is more difficult than one might think if the team lacks experience in algorithm development or in using digital signal processing to clean the signals. All of this is often underestimated in terms of time, risk, and total effort. Start with simple analysis and grow your successes.
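To make the sensor-fusion prototyping mentioned above concrete: a complementary filter is one common, lightweight way to combine a drifting-but-fast gyroscope with a noisy-but-stable accelerometer into a single angle estimate, and it maps directly onto the multiply/accumulate operations an MCU does well. The coefficient and names here are illustrative, not from any particular RTOS or vendor library:

```c
/* Minimal complementary filter: trust the integrated gyro rate for
 * short-term changes, and let the accelerometer-derived angle slowly
 * correct long-term drift. ALPHA is an illustrative tuning value. */
#define ALPHA 0.98  /* 98% gyro integration, 2% accelerometer correction */

double fuse_angle(double prev_angle_deg, double gyro_rate_dps,
                  double accel_angle_deg, double dt_s)
{
    return ALPHA * (prev_angle_deg + gyro_rate_dps * dt_s)
         + (1.0 - ALPHA) * accel_angle_deg;
}
```

Called once per sample period, this is a handful of multiplies and adds, which is why even a modest MCU with good MAC performance can run it at kilohertz rates.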

Time saved is money earned
Without doubt, the most expensive aspect of product development is time – time taken re-engineering designs, testing components that could have been purchased pretested off the shelf, and fixing errors and bad decisions. Market share is worth 10 to 1,000 times more than total development cost, so time to market, quality, and reliability are more important than the comparatively little you save by engineering components in-house. One path to capturing market share as quickly as possible is selecting an RTOS with the peripheral support and flexibility you need, as well as a processor that can make your product a true success.

Kim Rowe is CEO of RoweBots.


ADVERTORIAL

EXECUTIVE SPEAKOUT

MITYSOM-A10S:

ARRIA 10 SYSTEM ON MODULE FOR INDUSTRIAL IOT

Critical Link's latest production-ready, industrial-performance SOM

Open Architecture for User-Programmability
Critical Link's MitySOM-A10S is an Intel/Altera Arria 10 SoC system on module developed exclusively for industrial applications. It is a production-ready, board-level solution that delivers industrial performance and comes in a range of configurations to fit your requirements. MitySOM-A10S features up to 480KLE of FPGA fabric, dual-core Cortex-A9 32-bit RISC processors with dual NEON SIMD coprocessors, and 12 high-speed transceiver pairs at up to 12.5 Gbps. The SOMs include on-board power supplies, two DDR4 RAM memory subsystems totaling up to 6 GB, a microSD card, a USB 2.0 on-the-go (OTG) port, and a temperature sensor. The Arm architecture supports several high-level operating systems, including embedded Linux out of the box. Customers using the MitySOM-A10S receive free, lifetime access to Critical Link's technical support site, as well as access to application engineering resources and other services. Critical Link will also provide developers the design files for its base boards, further accelerating design cycles and time to market.

Specifications
› Up to 480KLE FPGA fabric
› Dual-core Cortex-A9 processors
› 4GB DDR4 HPS shared memory
› 2GB DDR4 FPGA memory
› 12 high-speed transceiver pairs, up to 12.5Gbps
› Max 138 direct FPGA I/Os, 30 shared HPS/FPGA I/Os
› Supports several high-level operating systems, including Linux out of the box

Flexible, Off-the-Shelf Board-Level Solution for Industrial Applications
Leverage the SoC's dual-core Arm and user-programmable FPGA fabric to do more embedded processing with 40% less power. Twelve high-speed transceiver pairs combined with Critical Link's onboard memory subsystems make this SOM well-suited to the high-speed processing needs of the most cutting-edge industrial technology products. Example applications include:

› Industrial Automation and Control
› Industrial Instrumentation
› Test and Measurement
› Medical Instrumentation
› Embedded Imaging & Machine Vision
› Medical Imaging
› Broadcast
› Smart Cities / Smart Grid

Why choose a Critical Link SOM?
Critical Link's support is unmatched in the industry, including our application engineering and online technical resources. We provide production-ready, board-level solutions that deliver industrial performance and come in a range of configurations to fit your requirements. With Critical Link SOMs, it's about time: time to market, time to focus on company IP, and product lifetime.
› Built for long-term production, with 10-15+ year availability
› Proven track record for product performance in the field
› Base board design files and other resources available online at no cost
› Lifetime product maintenance and support
› 100% US-based development and assembly



Embedded system licensing: What, why, and how By Tim Regas, KEYLOK

WHAT

Embedded system licensing is the commingling of hardware and software in a single offering. A great way to convert hardware vendors into software vendors, it enables manufacturers to position products as "all-in-one" via software that enhances a machine's capabilities while giving consumers more "bang for the buck." With embedded system licensing, manufacturers can offer a base product and provide customers with access to desired features through upgrades. By reducing the entry-level product cost, manufacturers often increase demand – and give customers an opportunity to access more features as they grow or as needs evolve. This tends to generate more lucrative long-term relationships.

WHY

Let's look at the impressive set of benefits of embedded system licensing:

• Segmentation – Successful marketers have long employed segmentation strategies to appeal in laser-like ways to specific market segments. By unbundling product features into a set of products, you're able to potentially achieve wider market penetration at a cost-effective rate.
• Revenue – Here's an obvious one: by cost-effectively achieving wider market penetration, you're able to grow revenue – and your bottom line.
• Savings – With embedded licensing, you have the option of creating fewer actual hardware "products" while raising profits. Your offering becomes relevant to a greater number of target groups. You gain added differentiation. Unbundling features is a way to make a small manufacturer look big.



• Security – Embedded licensing protects against IP theft, tampering, reverse engineering, unauthorized use, and unauthorized distribution – including the gray market. It deters businesses from building hardware that illegally runs your software.
• Customization – By enabling customers to buy only what they need and pay on a per-feature or per-use basis – or by timeframe of use – your product appeals to more potential customers.
• Manageability – Embedded licensing eliminates the human capital needed to monitor and track use. Features automatically shut off as uses and times expire.
• Satisfaction – When customers save by getting only the features they need and nothing more, customer satisfaction rises – along with profits.



• Support – Customer support tends to improve when you're able to break down support teams into specialized groups (e.g., by feature or application). You diagnose and resolve customer issues more easily.
• Versatility – Another major advantage of embedded licensing is that manufacturers can more easily offer products spanning different operating systems and environments. As a result, you're able to capture more customers.

HOW

These three elements are key to a successful embedded system licensing strategy:

1. Pricing: It begins with the actual cost of hardware plus the cost of each feature or application. Embedded licensing means not having to price in the full cost of software; you can build a pricing strategy using the cost of hardware plus each feature. This often enables greater price competitiveness.
2. Packaging/delivery: This requires you to examine how each unit will be sold and delivered to customers and how additional features will be bought (e.g., cloud, email, other). All of this must be considered.
3. Product management: This includes identification of all uses of your product (with monitoring of usage); understanding data trends to expand into new markets; and discovering opportunities to create new products without creating a new physical device (e.g., offering new software features). It's about having the agility to adapt quickly to new customer wants and needs, as well as to changing demographics.

Once you have thought through these three elements, you can start to explore the options to enforce them, because without enforcement a licensing strategy might as well not exist. There are three main ways to enforce your strategy: cloud, software, and hardware. All options require you to plan beforehand how you want to license, what your strategy is, and whether you plan to make changes in the future. Each solution's vendor will have a different way to integrate with your system, but typically they use a software wrapper or code changes.

Enforcement methods

There are distinct advantages and disadvantages to each method of enforcement.

Cloud – Cloud licensing uses a centralized licensing server, usually in the cloud, to track and maintain the licensing parameters set up by the vendor. Because the license is stored in the cloud, the device must be connected to the internet to retrieve it, which can be difficult for machines in remote locations or without internet access. However, always being connected allows much greater visibility for the developer: many cloud-based solutions support usage tracking as well as other streams of real-time data from the machine.

Software – Software licensing uses secure licensing files placed on the system to carry out the wishes of the vendor. The license file can be shipped with the system or added at a later date. Since the license does not require internet access, this approach suits more remote locations or places without connectivity. There won't be as much visibility, if any, into usage or other data metrics. Updating software licenses usually requires transmitting new licensing files to replace the old ones.

Hardware – Hardware licensing uses external hardware dongles or chips to enforce the license. Like the software version, these can be shipped with the embedded system or separately, and used like keys to operate the machine. Since all licensing parameters are contained in the external hardware, no internet connection is necessary to use it. Updating the hardware with new licensing parameters can usually be done remotely, but it requires action from the end user, such as plugging the dongle into a computer with internet access or transferring a license file to the system. The physical nature of the license container makes this the most secure option for licensing, as well as an easily portable one: moving the license can be as easy as unplugging the dongle and plugging it into a new system.

Gone are the days of selling a device for a one-time fee and allowing the customer perpetual access. Now embedded systems are sold as offerings with the potential to generate recurring revenue. If you don't plan, implement, and execute a licensing strategy for your embedded system, you will leave revenue on the table, alienate potential customers, and lose business to savvier competitors.

Tim Regas is a product expert and general manager at KEYLOK. With over 35 years of experience, KEYLOK helps software developers license and secure software using hardware security dongles.
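As a rough illustration of what any of these enforcement methods ultimately gates on, here is a minimal license record and feature check in C. Real products add signing, encryption, and tamper resistance; every name and field below is hypothetical, not from any vendor's SDK:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical license record as it might be read from a dongle, a
 * license file, or a cloud server: a feature bitmask plus an expiry
 * timestamp. Only the gating logic is shown here. */
typedef struct {
    uint32_t features;    /* one bit per purchasable feature */
    int64_t  expires_at;  /* Unix time; 0 means perpetual */
} license_t;

enum {
    FEAT_BASE      = 1u << 0,
    FEAT_ANALYTICS = 1u << 1,
    FEAT_REMOTE    = 1u << 2
};

bool feature_enabled(const license_t *lic, uint32_t feature, int64_t now)
{
    if (lic->expires_at != 0 && now > lic->expires_at)
        return false;                      /* expired: everything shuts off */
    return (lic->features & feature) != 0; /* gate on the purchased bits */
}
```

This is the mechanism behind "features automatically shut off as uses and times expire": the application calls the check at each feature entry point, and upselling a feature is just delivering a record with another bit set.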



Enabling Deeper, Domain-Specific Integration on the Industrial IoT K.C. Liu on the present and future of Advantech and the Internet of Things

With more than 40 years of experience in the electronics industry, K.C. Liu – Founder, Chairman, and CEO of industrial computing manufacturer Advantech – has seen wave after wave of market trends like the IoT. Here, the electronics magnate provides insight on why the IoT market has been slower to develop than initially thought; how a dire need for solutions integrators is impacting the industry's growth; and how Advantech is enabling newcomers to the market through platforms like WISE-PaaS and events like the IoT Co-Creation Partner Conference.

K.C. Liu
Founder, Chairman, and CEO of Advantech

EXECUTIVE Q&A


IOTDG: The IoT has taken longer to reach critical mass than initially predicted. As the market continues to unfold, some organizations may be reluctant to go “all-in.” What do you view as the current barriers to success for ubiquitous IoT?

IOTDG: Advantech obviously provides hardware infrastructure, but the emergence of WISE-PaaS and SRPs address a much broader portion of the solution stack. What is your vision for Advantech moving forward, and how could this redistribute value in traditional electronics supply chains?

LIU: Three to five years ago people thought IoT was coming. But actually it's taken much longer. That's because, until now, the IoT supply and value chains were immature. To support the IoT you need a value chain, but the value chain still needed time to unfold. That's the reason IoT didn't move faster. Let me explain the value chain of IoT, what is already mature, and what is still growing. In the IoT value chain, infrastructure-as-a-service (IaaS) providers like AWS or Microsoft Azure are already mature and have been for some time. This is a foundational part of IoT, so that's step one. Then comes the industrial platform-as-a-service (PaaS) market. For example, Advantech's WISE-PaaS is an industrial PaaS, and, of course, there are other PaaS providers like PTC or GE Predix. I would say that this year, 2019, is when those industrial PaaS offerings will become mature. But what is not mature? I would say the biggest problem is solution integrators, or SIs. For industrial IoT, users who want to build a system need SIs. We call these companies domain-focused SIs. They need the IaaS and PaaS parts of the value chain to make a system. But in Asia, for example, there are not enough SIs and the ones that exist are too small. Without domain-focused SIs, IoT solutions cannot be developed and deployed quickly. Some bigger companies do use in-house SIs. For instance, a company with 100 IT people probably has enough to create these systems completely in house. However, those internal resources still need to learn how to use IaaS, use PaaS, and integrate the complete system. So that kind of learning is still ongoing. So I would say the problem is the SI sector. In the IoT value chain we've also seen a need for what we at Advantech call solution-ready platforms (SRPs). These are semi-deployable software modules dedicated to a specific application or function. Right now, SRPs still need some time to mature, which will happen this year or next year.
But on the whole, users are now ready for and aggressive about investment. If I talk to upper management at any company about the IoT, every customer wants what we call digital transformation. It’s not “all-in” investment, but it’s aggressive at every company.

LIU: WISE-PaaS is a general-purpose PaaS. Any industrial IoT solutions can work with it. It’s also transparent to IaaS platforms like AWS, IBM, Microsoft, and the Alibaba Cloud in China. People use WISE-PaaS to collect data from sensors, transmit it to a database, create management dashboards, and implement control functionality. WISE-PaaS gives them everything they need to create this infrastructure and makes it very easy. Usually a customer can build out all of this functionality in three months. But customers in every industry still need domain-focused, specialized SRPs to complete their system. When they need to do AI, for example, their system requirements will be much more involved because the AI will need to process data ingested by platforms like WISE-PaaS and make decisions. Right now, SRPs for AI are in high demand but not yet mature. This also requires domain-focused SIs, and is one reason we think this sector has high growth potential. Our customers, the domain-focused SIs, need to provide integration and customization on-site for their customers. So far, I haven’t seen any SIs grow into large organizations. Of course, big companies like Siemens, Schneider, and Rockwell will jump into the SI business for big projects, but most SIs are still small organizations. So while Advantech and certain SRP providers can be globally operated, most SIs need to focus on a single market in specific regions because they cannot be too far away from their customers. Our model for WISE-PaaS is a cost-center model, not a profit center. We want to make WISE-PaaS popular at a low cost. Our profit model is to sell hardware. Maybe it’s a little strange, but we think a popular software platform and reasonably-priced hardware will be a healthy business for Advantech. Our competitors – like PTC, GE Predix, and Siemens – may have more revenue potential today, but they are leveraging a high-priced model. 
So while their offerings may be good for very big companies, for medium-sized and general customers I feel that our model is a better fit. It’s more “everybody’s” solution. I expect the real winners from all of this to be domain-focused SIs that are fully, 100-percent, domain-focused. For example, an SI focused on intelligent hospitals will partner with us and adopt a model to become a cloud service provider that works


with one, two, or ten hospitals. The SI needs to be domain-focused because they need to grow to a stage where their domain cloud service becomes powerful. They must also be able to go deeper into the domain to provide things like AI and analytics functions for all of a hospital's departments. Over time, their core competency will become these cloud-based services that can grow to help hospitals in different regions and potentially all over the world. By that time they will have grown into a large, successful company with popular solutions. You can already see this happening today through cloud-based companies like Uber and Airbnb. But they need a platform provider like Advantech to provide the underlying hardware and software platforms. We want to use WISE-PaaS and SRPs to enable domain-focused SIs, and those SIs will add on domain-focused value. Although we have our own domain-focused product groups within Advantech, our position will be to remain a general-purpose platform provider. Because we address verticals like factory automation, transportation, medical, and retail, our coverage is too broad for us to be competitive in a specific domain.

IOTDG: Last fall, the Advantech IoT Co-Creation Summit brought together hundreds of current and potential technology partners. What did the event teach you about the needs of IoT ecosystems, and what is your strategy for continuing to seed the IoT market to foster growth?

LIU: At the Suzhou event we had a big announcement about IoT Co-Creation, which is our strategy to help domain-focused SIs better serve their customers using WISE-PaaS and SRPs. But at that time, many of the solutions were still proofs of concept. That was about four months ago, and our activity has since shifted from demonstrations and proofs of concept to solid SRPs and solid, collaborative relationships with domain-focused SIs. Most of the functions are becoming mature. The evidence of this is that last year we won about six new VIP customers per month; since the event we're averaging about 10. So the growth is significant. We've also initiated other activities to educate our partners and the market that the industrial IoT is a good investment venture. For instance, instead of one big IoT Co-Creation Summit like last year, this year we will hold 60 smaller events of about 100 attendees each for our partners worldwide. So far, every one of these IoT Co-Creation Partner Conferences has been a full house.

But we also have a long-term strategy based on minor investments in those SIs. We’re currently invested in several companies in Asia. We are 20 percent shareholders, and help them with their platform development, global marketing, and succeeding as domain-focused integrators. We are enabling them, but if they become successful we will share in that success.

Now, SIs have decided to jump into the arena, and of course there are a large number of startups. I recently attended a venture capital event, and almost every startup there has shifted its focus to industrial IoT and AI. Momentum is accelerating, and I think this year or next year will tip the scales for IoT.

For more information on the IoT Co-Creation Partner Conference, visit: iotsummit.advantech.com/en-us/ccpc

For more information on WISE-PaaS, visit: wise-paas.advantech.com/en-us

For more information on Solution-Ready Platforms, visit: www.advantech.com/products/webaccess-softwaresolutions/sub_5d88a401-7766-439c-868e-bc4128e5b079

K.C. Liu founded Advantech in 1983 and has been the chairman ever since. His passion and leadership have inspired the company to stay focused on its core competencies and ultimately led it to achieve its status as the leading light in the worldwide industrial computing arena. K.C. Liu, an avid reader, often purchases good management books to share with the management team, and the company periodically holds book-club meetings in its branch offices around the world. Especially high on his list of favorites are "Built to Last," "Good to Great," and "Blue Ocean Strategy," which he says have had a major impact on the way business is done at Advantech.

Advantech Co., Ltd. • www.advantech.com


embedded technologies expo & conference

2019

Pre-Conference: June 25, 2019 Conference & Expo: June 26-27, 2019 McEnery Convention Center San Jose, CA

North America’s Premier Event for Embedded Systems & Technologies Where Embedded Systems, IoT, and AI Meet

Learn from Experts

The 3-day conference will span 100+ hours, including pre-conference symposium, hands-on workshops and peer-to-peer training

Embedded Technologies Conference is the ONLY embedded event focused on what is most important to designers and implementers – education and training.

Connect with the industry

Join your peers and colleagues for new connections, ideas and partnerships

Embedded/IoT engineers, developers, and implementers will experience unparalleled education and training covering embedded systems, IoT connectivity, edge computing, AI, machine learning, and more!

Explore Technologies

Together with Sensors Expo, see hundreds of industry leading exhibitors

Three Full Days of Sessions Covering Tracks on:

AI & Machine Learning for Industrial Applications

Automotive Applications

Designing for Industrial IoT Applications

REGISTER TODAY!

New & Emerging Wireless Protocols

Secure Your Embedded System

Test Your Embedded/ Industrial/IoT System

Vision & Imaging for Industrial Applications

Use code OSM100 for $100 off Conference Passes or a FREE Expo Hall Pass!

Contact our sales team for more information on exhibiting today: SEAN RAMAN SRAMAN@QUESTEX.COM | 617.219.8369

Co-located with:

Conference Partners:

www.embeddedtechconf.com #EmbTech19


IOT EDGE COMPUTING

The edge is getting smarter, going smaller, and moving further out! By Jaya Kathuria Bindra and Nidhin MS, Cypress Semiconductor

IoT – the tiny eyes, ears, and brains that speak a common language to communicate with each other and with higher-capability systems – has been influencing our lives for the past several years. In this smart world, devices and environments will automatically and collaboratively serve people. Their numbers are orders of magnitude higher than the traditional computers and smartphones we churn out in large quantities each year. The IoT has not been around for very long but the term “IoT” is likely about 16 years old. Back then, the idea was often called “embedded internet” or “pervasive computing.” Now, the IoT has evolved into a complex system ranging from the internet to wireless communication and from micro-electromechanical systems (MEMS) to embedded systems. IoT consists of a gigantic network of internet “things,” from cellphones to smart buildings to jet engines. What has changed in recent years to make this all possible? There are several key factors, including low-power radios, advancements in MCU technologies, expansion of networking capabilities, the introduction of data-analytics tools, and the creation/ adoption of wireless standards that make it simpler for IoT hardware and software from different vendors to interact.

There are no wires on me

Moving from wired to wireless was a prerequisite for enabling the IoT. Although self-contained wireless communications have existed for a long time on technological timescales, significant advancements were required to make them small, cheap, and energy-efficient enough for IoT applications. Antenna size is directly related to the wavelength of the radio waves used: shorter wavelengths allow smaller antennas. This is why most IoT communications use frequencies on the order of several GHz, and with millimeter-wave technologies on the horizon, we'll see even smaller antennas. Stationary IoT nodes can trade size for range by using frequencies in the several hundreds of MHz.

Thanks to advancements in silicon scaling and RF front ends, each new generation of receivers is more sensitive. As a direct consequence, transmissions can now use very low power levels. Another important advancement that has reduced radio power levels is packet-based communication: By sending data in bursts of packets, the system is able to "sleep" between bursts. The shorter the burst, the less power consumed. To send the required information in less time over limited bandwidth, modulation techniques such as quadrature amplitude modulation (QAM) and differential quadrature phase shift keying (DQPSK) were developed to improve spectral efficiency.

The final piece of the puzzle was standardization. Wireless communications span a range of frequencies, modulation techniques, and packet structures. The proliferation of mobile phones, in addition to kick-starting cellular networks, helped Bluetooth point-to-point radio technology evolve along with it. Bluetooth also freed devices from the line-of-sight limitations of the IR technologies used previously. Bluetooth and its Low Energy (LE) variant have since held a leading position among short-range, low-to-medium-bit-rate communication technologies, even in the face of competitors. The age of the internet, and the desire to be rid of unsightly LAN cables, gave rise to Wi-Fi. Long-range communications are also on the rise for city-level



communication of smart installations, albeit using low-bit-rate technologies.

It is important to remember that even though connected IoT devices are growing in number at a significant pace, they are still in their infancy. As new applications arise, there will be demands for significant improvements in speed, power consumption, range, and capacity. The design challenge for engineers is to choose the appropriate communication protocol from the wide range available: the one that best meets an application's needs for high throughput, low power, size, secure communication, ease of development, ease of use, and seamless interoperability. (Table 1.)
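The power benefit of the bursty, duty-cycled operation described earlier can be sketched with a simple average-current estimate. The numbers below are illustrative examples, not figures from this article:

```python
# Illustrative duty-cycle estimate of why short packet bursts save power.
def average_current_ua(active_ma, sleep_ua, burst_ms, period_ms):
    """Average current (uA) of a radio that transmits for burst_ms out of
    every period_ms and sleeps the rest of the time."""
    duty = burst_ms / period_ms
    return duty * (active_ma * 1000.0) + (1.0 - duty) * sleep_ua

# A 5 ms burst at 10 mA once per second, with a 2 uA sleep current:
avg = average_current_ua(active_ma=10, sleep_ua=2, burst_ms=5, period_ms=1000)
# Halving the burst length nearly halves the average draw:
short = average_current_ua(active_ma=10, sleep_ua=2, burst_ms=2.5, period_ms=1000)
```

With these example values the average draw is about 52 µA, which a typical coin cell can sustain for months; shortening the burst drops it further, which is exactly why shorter bursts mean less power consumed.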

Wi-Fi: Wi-Fi is a popular wireless protocol that offers very high data throughput, medium to long range, and sophisticated security options. Its very high data rate makes Wi-Fi the protocol of choice for applications that require internet access. However, Wi-Fi comes at the cost of higher power consumption, making it a less-than-great choice for battery-operated applications. Its primary applications include notebook computers, desktop


computers, and servers. Wi-Fi's biggest advantage lies in its support of existing standard packet-switched LAN protocols; after all, it's effectively a wireless version of the Ethernet cable. The internet isn't far away when you have Wi-Fi – just connect to one of the many Wi-Fi "hotspots" available around populated areas.

ZigBee: ZigBee offers low power, a light stack, and good market mindshare. However, ZigBee is not widely used in consumer equipment such as PCs and smartphones, limiting the nodes that a device can connect with. Its primary applications include industrial automation, home automation, and smart metering.

Bluetooth Classic: Bluetooth is a legacy standard for personal area networks (PANs) made popular by audio streaming to cellphone headsets. The Bluetooth Classic protocol offers better power efficiency than Wi-Fi. Primary applications include mobile phones, computer mice, keyboards, and office and industrial automation devices.

Bluetooth Low Energy (BLE): BLE, also known as Bluetooth Smart, is optimized for short-range, low-power wireless applications that communicate state or control information (i.e., limited data). This protocol is best suited for battery-powered applications such as wearables, smart-home devices, smartwatches, mobile phones, and other portables.

Long Range (LoRa): LoRa provides a long-range, low-power wireless platform. LoRa IoT nodes are typically bulky and require relatively large external antennas, but these factors are rarely a restriction since the nodes are usually attached to large equipment, buildings, or vehicles. Typical applications include smart cities, smart homes and buildings, smart agriculture, smart metering, smart supply chain and logistics, and more.
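The qualitative trade-offs above can be encoded as a small selection helper. This is a hypothetical sketch of our own; the attribute values mirror Table 1, but the data structure and function are not from the article:

```python
# Hypothetical protocol-selection helper encoding Table 1's qualitative ratings.
PROTOCOLS = {
    "Wi-Fi":             {"bandwidth": "high",     "power": "high",     "range": "medium"},
    "ZigBee":            {"bandwidth": "low",      "power": "very low", "range": "short"},
    "Bluetooth Classic": {"bandwidth": "medium",   "power": "medium",   "range": "short"},
    "BLE":               {"bandwidth": "low",      "power": "very low", "range": "short"},
    "LoRa":              {"bandwidth": "very low", "power": "low",      "range": "long"},
}

def candidates(**required):
    """Return protocols matching every stated requirement, alphabetically."""
    return sorted(name for name, attrs in PROTOCOLS.items()
                  if all(attrs.get(key) == val for key, val in required.items()))

# A battery-powered wearable needs very low power at short range:
wearable_options = candidates(power="very low", range="short")
```

For the wearable query the helper narrows the field to BLE and ZigBee, matching the guidance in the text; a city-scale query such as `candidates(range="long")` leaves only LoRa.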

Radio and the computer

The emergence of programmable, wireless, sensor-based embedded controllers – that is, MCUs with integrated radios or simplified means of interfacing with radio ICs – enables greater flexibility, ease of management, and configurability. Provisioning sensor data will require these systems to use ultra-low-power wireless transceiver microcontrollers that not only have extremely low standby currents but also use power-conservation techniques to prolong operating life from a single coin cell.

MCU vendors offered a range of proprietary CPU cores for quite a long time. Today, however, a design is expected to be compatible with other devices in

WIRELESS CONNECTIVITY COMPARISON

Category              | BLE            | Bluetooth Classic | ZigBee         | Wi-Fi              | LoRa
IEEE Standard         | 802.15.1       | 802.15.1          | 802.15.4       | 802.11 (a,b,g,n)   | –
Frequency (GHz)       | 2.4            | 2.4               | 0.8, 0.9, 2.4  | 2.4 or 5           | Various, sub-GHz
Bandwidth             | Low            | Medium            | Low            | High               | Very low
Power consumption     | Very low       | Medium            | Very low       | High               | Low
Network type          | Point-to-point | Point-to-point    | Point-to-point | Access-point based | Point-to-point
Range                 | Short          | Short             | Short          | Medium             | Long
Primary applications  | Wearables, mobile phones | Mobile phones, mice, keyboards, office and industrial automation devices | Industrial automation, home automation, smart metering | Notebook computers, desktop computers, servers | Smart cities, smart homes and buildings, smart supply chain and logistics

TABLE 1: A connectivity link comparison for various wireless protocols.



the ecosystem, and your intellectual investment is expected to be reusable. This reality has pushed MCU vendors to choose standard processing architectures on which to build their controllers, and Arm became the de facto choice for most MCU manufacturers. Arm processors are at the heart of a computing and connectivity revolution, offering an extensive portfolio that can be used from sensors to smartphones to supercomputers.

Low-power, battery-operated IoT devices commonly use efficient, low-bandwidth radio technologies such as BLE to communicate with other nodes in short bursts of data. Entry-level microcontrollers such as the Cortex-M0+ and -M4 are optimal for these energy-efficient, low-cost IoT designs. The M0+ has the advantage of very low gate count, making the integration of an M0+ with a radio very cost-effective in terms of die area; the M0+ is sufficient to run a typical BLE stack along with a relatively light application. The M4, on the other hand, is highly energy-efficient in this power range and can handle relatively complex application code in addition to the radio stack.

For certain applications, designers prefer to use a combination of both M4 and M0+ cores, an approach popularly known as an asymmetric multicore architecture. Multicore MCUs integrate enough resources to allow the CPUs to handle intensive tasks in parallel and take advantage of multitasking efficiencies. They also allow developers to assign system events to a specific core so that the appropriate power and performance goals are met.

Wi-Fi applications, on the other hand, require a much more capable processor. For applications that need determinism and real-time performance, such as routers, the Cortex-R series is the de facto choice; however, a higher-clocked M4 or M7 might also be sufficient for entry-level Wi-Fi applications.
On the other end of the spectrum, an application (A-series) processor provides the necessary hardware and software capabilities for high throughput (for example, 802.11ac) and higher application complexity.

Connecting a device to a network introduces the possibility of being hacked, making the security of IoT devices non-negotiable, whether the device is a personal wearable or a connected car. Data protection is needed at all levels – storage, processing, and communications – to ensure system reliability. In addition, any software or firmware that handles data should also be secured. A robust IoT security portfolio allows developers to protect devices while deploying the security level that best matches their application's needs. Most MCU vendors provide built-in hardware encryption and tamper-protection capabilities. The latest Cortex-M series CPUs, such as the M23, M33, and M35P, also provide security features built into the CPU subsystem, providing a security upgrade path for existing IoT nodes that use the M0+, M4, and M7.

In addition to integrating the radio and CPU, several other peripherals are ideally integrated into the same chip to give it SoC-like capabilities. The functionality of multiple MCUs can then be provided by a single, highly integrated multicore MCU with integrated peripherals. Feature-rich MCUs that bring greater system integration in a single-chip architecture have evolved to meet the requirements of IoT applications. For example, MCUs for IoT also integrate analog front ends (AFEs) for sensor interfacing, computing engines (Arm CPUs) for sensor fusion, on-chip memories, connectivity (BLE), and capacitive touch interfaces (CapSense), enabling designers to build compact, small-form-factor designs for the next generation of portable IoT applications.
These feature integrations in a single chip not only reduce space requirements, but they also bring down system cost and power.
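The core-selection guidance above can be condensed into a simple lookup. This sketch is ours, not the article's; the pairings come from the text, but the structure and labels are illustrative:

```python
# A sketch condensing the article's core-selection guidance into a lookup.
# The pairings mirror the text above; keys and labels are illustrative.
CORE_GUIDE = {
    ("BLE", "light app"):         "Cortex-M0+",       # low gate count, cheap die area
    ("BLE", "complex app"):       "Cortex-M4",        # radio stack plus heavier code
    ("BLE", "dual-core"):         "Cortex-M4 + M0+",  # asymmetric multicore approach
    ("Wi-Fi", "entry-level"):     "Cortex-M4/M7",     # higher-clocked MCU may suffice
    ("Wi-Fi", "real-time"):       "Cortex-R",         # determinism, e.g. routers
    ("Wi-Fi", "high throughput"): "Cortex-A",         # e.g. 802.11ac workloads
}

def suggest_core(radio, workload):
    """Map a (radio, workload) pair to the core class suggested in the text."""
    return CORE_GUIDE.get((radio, workload), "no guidance in this article")
```

For example, `suggest_core("BLE", "light app")` returns the M0+ class, reflecting its low gate count and sufficiency for a BLE stack plus a light application.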


“Yes, but can it run Linux?”

Sensing, organizing, analyzing, presenting, and making decisions on information requires software. We have used various platforms to do this, at varying levels of capability, for decades. Now that we have extremely small devices running from single-cell batteries with the power of 80486-based personal computers of the 1990s, the next question is obvious: What's the software?

With bare metal, an RTOS, and true operating systems like Linux to choose from, we run into problems similar to those explored in the previous sections, but from a software point of view. Starting with a scalable software architecture is a must for any embedded application, with plenty of thought given to possible future enhancements before the programming architecture is finalized. Another important consideration is cost: As system capabilities increase, so does the need for faster processors, more code memory, and more RAM. The figure below shows a typical feature-versus-cost diagram for an embedded system, although the lines between tiers blur when it comes to possible hardware choices.

Bare-metal programming is preferred when the application is simple and implemented on low-end processors; when the application needs to extract every cycle of CPU power and the overhead introduced by an OS is unacceptable; when security and safety are closely tied to the hardware and the system must consistently function exactly as expected; and when there are constraints on hardware cost and a need for high efficiency.
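In practice, a bare-metal design often reduces to a fixed "superloop" that polls tasks in turn. Here's a minimal sketch (in Python for illustration; on real firmware this would be a `while (1)` loop in C, and the task names are purely our own):

```python
# Minimal bare-metal "superloop" sketch: tasks run to completion, in a fixed
# order, with no OS and no preemption. Task names are illustrative.
def superloop(tasks, iterations):
    """Run each task in turn for a bounded number of passes and record what ran.
    Real firmware would loop forever instead of taking an iteration count."""
    trace = []
    for _ in range(iterations):
        for task in tasks:
            trace.append(task())   # each task must return quickly
    return trace

tasks = [lambda: "read_sensor", lambda: "filter_data", lambda: "send_packet"]
run_log = superloop(tasks, 2)      # two full passes through the loop
```

Every task runs on every pass, in order; there is no scheduler to reorder, block, or prioritize them, which is exactly why this pattern stops scaling as task interdependencies grow.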

RTOS plus …

Many embedded applications are infinite loops that perform one task, then another, and so on, repeating the same functionality. Most of these tasks depend on one another. Bare-metal programming is not ideal for such situations, as the code should be predictable, understandable, and easy


to debug. Having a scheduler makes the life of an embedded engineer much simpler: each software module can be designed independently, then linked and scheduled with the others. An RTOS is therefore preferred as code complexity increases and the system requires a powerful microprocessor or microcontroller. An RTOS becomes a necessity as MCUs integrate more memory and peripherals: a complex IoT application may involve more interrupt sources, more functions, and more standard communications interfaces, mainly wireless.

An RTOS can make full use of feature-rich MCUs, especially when provided with middleware that can handle complex tasks that would otherwise require a true OS. There are, however, many non-overlapping areas of complexity and capability when it comes to software. An RTOS with added middleware can approach the capabilities of general-purpose OSs: middleware can add features such as file systems, networking, graphics, and complex input support, albeit with more development effort than a true OS that supports these features natively. Some RTOSs even support POSIX APIs, allowing some reuse of code from Linux/UNIX applications.

When application complexity increases beyond a certain limit, however, a general-purpose embedded OS comes into the picture. Because of such an OS's code size and primary-memory requirements, expensive SRAM and NOR memories become impractical: most embedded versions of general-purpose operating systems require at least 16 to 32 MB of primary memory and 64+ MB of code storage to run properly. Fortunately, application processors and general-purpose operating systems can deal with cheaper, slower memories such as DRAM and NAND flash.

You won't lose "real-time capabilities" when you move to an embedded general-purpose OS. These systems can run real-time applications at slightly higher latency, with varying levels of determinism ("soft real time"), and most applications do not require "hard real time" features. A well-validated application

running on an embedded OS can be as bulletproof and deterministic as a similar application running on an RTOS with the help of middleware. With application processors and memory dropping in cost every year thanks to continued silicon scaling, and with an abundance of engineers who are more comfortable with a proper OS, many applications that would otherwise use an RTOS now find the combination of an application processor and a proper OS cost-effective and quicker to market. IoT

Jaya Kathuria Bindra works as an applications manager at Cypress Semiconductor, where she manages the Embedded Applications Group and solutions development using the PSoC platform. She has 14+ years of experience in the semiconductor industry.

Nidhin MS works as a staff applications engineer at Cypress Semiconductor. He has seven years of technical experience with analog, power electronics, touch sensing, embedded computing, and connectivity.



IOT EDGE COMPUTING

Fail-safe data storage for IoT applications By Nilesh Badodekar, Cypress Semiconductor

The high endurance, ultra-low power consumption, and instant nonvolatility of ferroelectric RAM (FRAM) make it a compelling alternative memory for critical data logging in the connected world. Today, FRAM memories are available for specific markets, including automotive and industrial.

For decades, the basic architecture of a remote sensing node has consisted of a controller, sensors, local storage memory, a network connectivity interface, and a battery. This architecture has been replicated in all the systems that interact with real-world inputs. In an industrial automation system, controllers monitor several sensors at varying rates, store time-stamped sensor data in local or expansion memory, and transfer data via industry-standard buses such as Profibus. In an automotive advanced driver assistance system (ADAS) or event data recorder (EDR), several MCUs collect data and control the car's electronics for a better driving experience and fail-safe data logging. Similarly, a medical system requires life-critical sensor data to be recorded locally or uploaded to a central network.

All these systems are trying to solve the fundamental problem of collecting data, storing critical parts of it, and taking appropriate action based on data analytics. However, they all have different priorities. Industrial systems tend to capture massive amounts of data in short intervals from a wide variety of sensors and must maintain a detailed log both locally and remotely. An automotive system might generate data at a slower rate, but data retention is critical, and in some cases data loss can be life-threatening. Since most cars run for over a decade, long-term storage reliability is a critical criterion when selecting the appropriate memory. Portable medical systems,


on the other hand, tend to prioritize power consumption when selecting the optimal memory technology. Medical implants and hearing aids are highly optimized to store data accurately while consuming the lowest power possible, as these systems operate on a battery supply. Designing fail-safe data storage with long-term reliability and low power consumption is one of the critical challenges facing the designers of medical systems.

With the advent of the Internet of Things (IoT), every device in the field can begin communicating over the network. A conservative estimate predicts that over 10 billion devices will be connected by the year 2020. These include cars, industrial



• Which devices need to be connected to the cloud?
• How much information needs to be broadcast?
• How much processing can be done locally?
• Who pays for the cloud?

One approach is to upload everything to the cloud and handle processing remotely. While this may work for smaller, isolated systems, once the world becomes more connected and a plethora of systems are trying to upload information, we'll need to weigh the cost of the network against local storage and processing. For example, an autonomous car can generate several gigabytes of data per hour while driving. To anticipate future demand, now is the time to decide what to transfer and what to store locally for compressed transfer later. Industrial and medical system designers face the same problem: Industry 4.0 is already migrating from an "upload everything to the cloud" approach to a "process locally and upload smartly" approach. This makes choosing the optimal local data storage relevant for future systems. These systems will need reliable, low-power, fail-safe memories for storing critical data.

One option is to use available flash memory to log data. Flash technology is designed for efficient read operations and has therefore become ubiquitous for boot code and firmware storage. As flash is already available in the system, designers may make the easy choice of using it for data logging without understanding its technology limitations when it comes to performing write operations.

A flash cell can be "programmed" to contain new data only if the cell is erased beforehand. Programming a cell allows a change from the logic '1' state to logic '0'; if, during the next update, a cell needs to hold a logic '1,' it must first be erased. To optimize erase and program times, flash manufacturers have created different page, block, and sector architectures. A page is the smallest quantum of data that can be programmed into the flash at one time. Flash devices contain an internal page-size buffer that allows for temporary storage of data. Once the transfer from the external interface is complete, the device initiates a page program operation on a page that is already erased in the main array. If this page contains old data, then it must




be erased prior to a program operation. Every time an erase is performed, the flash cell deteriorates; this phenomenon is quantified as endurance in a flash datasheet. Typically, the best flash devices are rated for 100,000 erase-program cycles and are no longer guaranteed to reliably store data after reaching this limit. While this number appears large on paper, we will demonstrate that this endurance falls short quickly even in low-end data-logging systems. Some manufacturers implement byte programming and delayed programming from buffer to flash memory. While these features do simplify program operations, they do not address the underlying endurance limitation of flash. To compensate, the system designer is forced to implement a complex file system to handle wear leveling of flash cells (i.e., to spread wear evenly across the cells); the software overhead of a file system slows down the system.

Let us evaluate scenarios where designers may consider a flash-based memory for data logging. In industrial automation and asset-management systems, sensor nodes tend to capture data several times per second, periodically sampling several different kinds of sensors. The node then assembles the packets for a network upload. Typically, these data packets range from 16 bytes to 128 bytes. As there is always a risk of power failure, the packets are stored in a nonvolatile memory to avoid data loss. Vibration sensors or stepper-motor position sensors provide short bursts of data every few milliseconds, while sensors such as temperature or humidity provide data once every second, but the logged data packet comprises data from several sensors. The tables below provide a comparative analysis of packet size versus sampling rate and how logging wears down a flash memory. This example uses 8 Mbytes of flash with 10^5 endurance cycles. (Table 1.)
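The wear-out arithmetic behind these tables can be reproduced in a few lines. This is a sketch that assumes packets are logged sequentially with ideal wear leveling, using the article's 8-Mbyte, 10^5-cycle example:

```python
# Reproducing the wear-out arithmetic behind Table 1 (sequential logging with
# ideal wear leveling assumed; capacity and endurance are the article's example).
CAPACITY = 8 * 1024 * 1024     # 8-Mbyte flash array
ENDURANCE = 10**5              # rated erase-program cycles per cell

def days_in_field(packet_bytes, period_ms):
    """Days until every cell has seen its rated number of erase-program cycles."""
    writes_per_pass = CAPACITY / packet_bytes      # packets per full wear cycle
    seconds = writes_per_pass * ENDURANCE * (period_ms / 1000.0)
    return seconds / 86400.0

# 16-byte packets every 1 ms wear the array out in under two years:
life_days = days_in_field(16, 1)   # about 607 days, matching Table 1
```

The same arithmetic, applied to the FRAM figures cited later in this article (4-Mbit density, 10^14 cycles, 128-byte packets every 10 µs), yields a lifetime on the order of 10^5 years, consistent with the "over a thousand years" claim.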

Packet size (bytes) | 0.01 ms sampling     | 0.1 ms sampling     | 1 ms sampling        | 10 ms sampling
1                   | 97.09 d (0.27 y)     | 970.90 d (2.66 y)   | 9709.04 d (26.60 y)  | 97090.37 d (266.00 y)
2                   | 48.55 d (0.13 y)     | 485.45 d (1.33 y)   | 4854.52 d (13.30 y)  | 48545.19 d (133.00 y)
4                   | 24.27 d (0.07 y)     | 242.73 d (0.67 y)   | 2427.26 d (6.65 y)   | 24272.59 d (66.50 y)
8                   | 12.14 d (0.03 y)     | 121.36 d (0.33 y)   | 1213.63 d (3.33 y)   | 12136.30 d (33.25 y)
16                  | 6.07 d (0.02 y)      | 60.68 d (0.17 y)    | 606.81 d (1.66 y)    | 6068.15 d (16.63 y)
32                  | 3.03 d (0.01 y)      | 30.34 d (0.08 y)    | 303.41 d (0.83 y)    | 3034.07 d (8.31 y)
64                  | 1.52 d (0.00 y)      | 15.17 d (0.04 y)    | 151.70 d (0.42 y)    | 1517.04 d (4.16 y)
128                 | 0.76 d (0.00 y)      | 7.59 d (0.02 y)     | 75.85 d (0.21 y)     | 758.52 d (2.08 y)

TABLE 1: Packet size versus sampling rate – how it wears down a flash memory used for data logging. Entries show days (and years) in the field with no failure.

The graphs in Figure 1 interpret this data. We observe that for a low-end system logging 8-16 bytes of data every 1 ms, an 8-Mbyte flash wears out in less than five years, while an automotive or industrial system is expected to be in the field for over a decade. (Figure 1.)

A low-cost, high-risk option of simply adding more flash memory requires a complex file system to handle wear leveling in flash devices. If a file system is not implemented, then the system needs to handle the periodic chip erase cycles



once the whole memory is rolled over. This problem is only aggravated in today's IoT world, with its ever-increasing number of data-collecting terminals. Flash-based memories are well suited to boot code and firmware storage, where the number of write cycles doesn't exceed 1,000 over the lifetime of the product in the field.

An ideal approach to the data-logging problem is to use a high-endurance, instantly nonvolatile memory that does not put data at risk during program and erase delays. Ferroelectric RAM (FRAM) is suited to exactly these kinds of applications: FRAM offers endurance of 10^14 cycles, is instantly nonvolatile, and does not require program and erase operations. Any data that enters the device interface is instantly stored. To put this in context, a 4-Mbit FRAM can log 128-byte data packet streams every 10 µs and not wear out for over a thousand years.

FRAM memory cells consume power only when they are being written or read, so standby power consumption is on the order of a few microamperes. This makes it feasible to operate FRAM memories in devices that run on batteries. Hearing aids and high-end medical wearables

FIGURE 1: Endurance vs. packet size for different sampling rates. For a low-end system example, an 8-Mbyte flash wears out in less than five years, too short to serve in an automotive or industrial system, which can be in the field for more than 10 years.

designed to sample heartbeats are examples of power-sensitive applications where FRAM can provide the low-power, high-endurance performance required. In automotive systems, where data is continuously logged into memory, a flash-based system will fail to capture data during the "program" periods of the flash; FRAM-based logging, in contrast, offers high reliability for these systems.

The high endurance, ultra-low power consumption, and instant nonvolatility of FRAM make it a compelling alternative memory for critical data logging in the connected world. Today, FRAM memories are available for specific markets like automotive and industrial. FRAM also supports SPI, I2C, and parallel interfaces, with densities ranging from 4 Kbits to 4 Mbits. For more details on designing fail-safe data storage for IoT applications, see Interfacing FRAM Using SPI (https://bit.ly/2VOpB0d) and Designing an FRAM Data Logger (https://bit.ly/2TnEzgW). IoT

Nilesh Badodekar is an applications engineer at Cypress Semiconductor.



5G AND WIDE-AREA NETWORKING

5G Primer Part 1: Basic Architecture By Curt Schwaderer, Technology Editor

All of the major mobile operators are actively promoting their 5G advancements. If you believe the commercials, 5G is being rolled out and turned up, enabling higher-bandwidth applications and machine-to-machine interactions. While it's true that 5G rollout does appear to be underway, it's not an all-or-nothing proposition; like most innovations, rollout is a phased process. The advanced 5G features and capabilities for IoT, intelligent edge, and AI/IVR [artificial intelligence/interactive voice response] are very real, and attention should be paid to 5G network architecture in order to take advantage of them.

The organization of physical components in 5G is really no different from that of 4G/LTE. The 5G difference lies in the capabilities of the 5G RAN and the organization of the EPC toward a higher level of virtualization.

This is the first in a series of pieces that look at 5G from the ground up, starting with the physical architecture and working up through virtualization and network slicing. If you find this informative or have additional questions or comments, please feel free to reach out to me at curt.schwaderer@opensysmedia.com.

5G generalized physical architecture

A look at a generalized 5G architecture is shown in Figure 1. There are three physical components:

• Radio Access Network (RAN): This is the wireless part of the network that connects to mobile devices.
• Evolved Packet Core (EPC): This forms the core of the mobile network and serves as the bridge between the RAN and the internet or other IP-based services.
• IP Multimedia Subsystem (IMS): Most people think of this as the Voice over LTE (VoLTE) component, but by design it's more general than that. The purpose of the IMS is to provide IP application services within the mobile network infrastructure, be it voice over IP or some other IP-based communications service.

Radio access network

One of the key features introduced with 5G is that it defines three spectrum bands, each serving a specific purpose:

• Low-band (sub-1 GHz): This is also the spectrum used by LTE today. It's the best option for maximizing coverage, but maximum bandwidth tops out at about 100 Mbps.
• Mid-band (sub-6 GHz): Mid-band provides higher bandwidth, up to 1 Gbps, and lower latency, which is critical to many IoT or machine-to-machine applications. However, reception through buildings and objects is a problem. 5G addresses this by


using multiple input, multiple output (MIMO) macro cells to increase the number of simultaneous users. Beamforming, another new innovation in 5G, has the antenna send a directed, monitored signal to each connected device to increase signal quality.

• High-band (>6 GHz), or mmWave: This gets the majority of the press when it comes to 5G because it provides peak data rates up to 10 Gbps with extremely low latency. As you might expect, the price you pay is very limited range (less than one square mile) and poor object/building penetration. It does, however, fit a nice sweet spot between technologies like Wi-Fi and its low- and mid-band predecessors.

Another key feature of the RAN is the capability to incorporate edge routing. Routing traffic between devices and intelligent edge components, or directly between devices, dramatically reduces the latency involved in routing at the EPC.


Evolved packet core (EPC)

The 5G EPC also has some innovations, which I'll cover in more detail in my next piece. In general, the 5G EPC is being updated to better manage and integrate voice, data, and internet connectivity. The key innovations in the EPC are:

• Complete separation between control plane (connection setup/teardown) and user plane (the communications content): Some applications have few devices but very high bandwidth requirements. Others use a massive number of sensor devices, each requiring very low bandwidth. The separation of user and control planes allows the network to flexibly allocate resources to one or the other depending on the need.

• Virtualization: Defining virtualized network functions (VNFs) with the ability to run them on standard server platforms reduces cost and increases flexibility in contrast to the previous generations of fixed-function EPC hardware components.

• Network slicing: This is somewhat analogous to the virtualization of compute, memory, and storage components, only here the physical network resources are sliced into logical network functions. Each network slice consists of network resources dedicated to serving a specific customer or service. Network slices are also isolated and insulated from one another, so one poorly behaving system cannot disrupt service for others. Previous generations of mobile networks treated the pipes as bandwidth that everyone shared; in contrast, 5G breaks these resources into virtualized network slices that can be allocated, used, and isolated. This ability is especially important with the advent of IoT, where control of IoT devices and frameworks is not possible at this time.

FIGURE 1: The generalized 5G physical architecture has three components: RAN, EPC, and IMS.

The EPC is undergoing change, but it's more about "under-the-hood" improvements and efficiencies needed to carry the increased load generated by the improvements within the RAN.

IP multimedia subsystem (IMS)

I'll address the IP multimedia subsystem in more detail at a later date. The high-level view is that this component implements specific IP-based applications and services within the mobile network. The best-known use case for the IMS is Voice over LTE (VoLTE). Previous generations of mobile networks (and even 4G/LTE) implement mobile voice and short message service (SMS) as circuit-switched data – not IP. As the network evolves toward end-to-end IP, these two services are the first applications that need to be addressed. So the first defined IMS involves components that communicate with mobile phones using Voice over IP (VoIP) technology instead of the old circuit-switched methods. This finally merges the divergent paths of the old circuit-switched voice and SMS and the newer IP-based data services of previous mobile network generations. By moving to VoLTE, the entire circuit-switched part of the network can be eliminated, saving hardware and maintenance costs.

Where is 5G today?

At present, mobile operators are focused on deploying the new RAN spectrums. This work will let 5G-enabled devices use the network: they will connect over the 5G RAN, but they will run over the older 4G/LTE EPC. The industry calls this "5G Non-Standalone," or 5G NSA. This moves the bottleneck to the EPC until the 5G EPC is deployed. The industry uses the term "5G Standalone," or 5G SA, for a network where the RAN and EPC are both 5G. IoT



5G AND WIDE-AREA NETWORKING

Power and protocols
By Alix Paultre, Senior Technology Editor, Analog, Power & Europe

The world has been embroiled in aggressive technology progression since the creation of the steam engine; now, not only is the pace of change accelerating, the very nature of how we perceive technology is changing. Once driven by a hardware-oriented focus, large parts of the industry are moving toward a hardware-agnostic development philosophy. This is not to say that hardware isn't important; not only is hardware the foundation of applied technology, advanced hardware is in fact the enabler of the migration to systems-based engineering design. Advanced hardware provides the support that enables the functionality of the software involved.

Arthur C. Clarke once said: "Any sufficiently advanced technology is indistinguishable from magic." The modern twist on his statement is "Every advanced technology eventually becomes a toaster." Toasters are magic. You put bread in, push down the lever, and toast comes out. There could be little fire-breathing dinosaurs on the inside like in the Flintstones, but nobody cares. It makes toast. It does its job so well it is invisible. Once upon a time, toasters were relatively complex machines. Every technology eventually matures to a level where it becomes ubiquitous in society, or it goes away and gets replaced by something better.

A hardware-agnostic world

This migration of technology development affects not only the user but also the designer. Just as convergence and integration change the way the user experiences a product, they also change the role of that product's designer. More and better tools, and more and better components, mean the role of the electronic systems designer is migrating from a "roll-your-own" to a systems-integration paradigm.

This reality was brought very strongly to my attention at the recent Things Conference in Amsterdam. Built around a community dedicated to LoRa wireless cloud-based systems, The Things Industries, the organization behind the event, starkly demonstrated the new wave of systems-based designers. Largely hardware-agnostic, The Things Industries focuses on functionality and software, because the aforementioned advanced hardware is able to support it. The mainstream component baselines are now so high that unless you are designing for a special-case application, most of the hardware solutions from the leading manufacturers will support it. That's why everyone is migrating into increased design, service, and support business models, for the value-add to the customer that old-school tech differences once provided. In this paradigm, the application can "simply" be addressed by defining the required functionality, then integrating the available subsystems in the industry to serve them.

An example of this is one company at the Things Conference, myDevices, which creates application-specific "IoT in a Box" cloud-enabled kits for a single task. One of their turnkey IoT solutions is their box for temperature monitoring in a controlled environment (Figure 1). The kit contains sensors, boards, and a LoRa gateway so you can make a single-task gadget that does exactly what you need. No more, no less. Kevin Bromber of myDevices explained the "IoT-in-a-box" solutions at the Things Conference in Amsterdam: Using the myDevices (mydevices.com) "Cayenne"


development tool, designers can use the drag-and-drop IoT project builder to quickly and easily develop and deploy IoT solutions across a wide variety of verticals. Cayenne contains a catalog of certified IoT-ready devices and connectivity options and lets users add any device to the library via its MQTT API. All devices in Cayenne are interoperable, and the platform provides features like a rules engine, asset tracking, remote monitoring and control, and tools to visualize real-time and historical data.
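As an illustration, publishing a sensor reading into Cayenne over MQTT might look like the sketch below. The topic layout and the "type,unit=value" payload format follow Cayenne's published MQTT convention as best I recall it, and the broker host and all credentials are placeholders; check the current myDevices documentation before relying on any of these details:

```python
def cayenne_topic(username, client_id, channel):
    """Data topic for one sensor channel, per Cayenne's MQTT layout."""
    return f"v1/{username}/things/{client_id}/data/{channel}"

def cayenne_payload(sensor_type, unit, value):
    """Cayenne expects '<type>,<unit>=<value>' payloads, e.g. 'temp,c=25.4'."""
    return f"{sensor_type},{unit}={value}"

def publish_reading(username, password, client_id, channel, payload,
                    broker="mqtt.mydevices.com", port=1883):
    # paho-mqtt is imported lazily so the topic/payload helpers above
    # work even where the library is not installed.
    import paho.mqtt.client as mqtt
    client = mqtt.Client(client_id=client_id)
    client.username_pw_set(username, password)
    client.connect(broker, port, keepalive=60)
    client.publish(cayenne_topic(username, client_id, channel), payload)
    client.disconnect()

# Example: the topic a temperature reading on channel 1 would go to.
example_topic = cayenne_topic("mqtt-user", "device-01", 1)
```

A device would then call `publish_reading("mqtt-user", "mqtt-pass", "device-01", 1, cayenne_payload("temp", "c", 25.4))` with credentials taken from the Cayenne dashboard.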

FIGURE 1: Each of the myDevices "IoT-in-a-box" kits contains the hardware needed to address a specific application.

Power allocation The biggest questions in such a development paradigm are 1) what protocols to use, and 2) how much power the system will need. WiFi or LoRa? Bluetooth Low Energy or Zigbee? Any solution created today for an IoT environment must deal with a multispectral RF environment and a variety of open-source and proprietary communication protocols. The industry is migrating toward a multiprotocol environment in which devices can communicate over whatever wireless connection method is available, making this issue less pressing.

The other issue, power, is determined largely by the scale of the solution needed. A small and efficient sensor may be able to operate from harvested energy, either from the ambient environment (solar, vibration, etc.) or from the RF energy of an interrogating signal. Larger systems, or those that must also drive motors or another actuation mechanism, will obviously need more power.

The important thing to remember is that a hardware-agnostic development environment doesn't reduce the importance of good electronic design; it just shifts the focus from the components to the subsystems. It is still important to have an elegant and efficient design, as you can build a poor solution from good parts if they aren't well integrated. IoT
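The power side of that decision usually starts as simple duty-cycle arithmetic: average the active and sleep currents by the fraction of time spent in each state, then divide the battery capacity by the result. A small sketch, with purely illustrative numbers for a LoRa-class node:

```python
def battery_life_days(capacity_mah, sleep_ua, active_ma, active_s, period_s):
    """Battery life implied by a duty-cycled sensor node's average current."""
    duty = active_s / period_s                            # fraction of time awake
    avg_ma = active_ma * duty + (sleep_ua / 1000) * (1 - duty)
    hours = capacity_mah / avg_ma
    return hours / 24

# Hypothetical node: a 2 s, 40 mA transmit burst every 10 minutes,
# 5 uA sleep current, powered from a 2400 mAh cell.
days = battery_life_days(capacity_mah=2400, sleep_ua=5,
                         active_ma=40, active_s=2, period_s=600)
```

Running the numbers for this hypothetical node gives a life on the order of two years, which is why duty cycle, not peak current, usually dominates the power budget of small IoT sensors.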

EXECUTIVE SPEAKOUT

Simplify Development of Secure, Connected Devices
By Patrick Johnson, Microchip Senior Vice President

With the accelerated growth of the Internet of Things (IoT) enabling the deployment of Internet connectivity into virtually every industrial segment, security threats are escalating in quantity and scale. The impact of these threats on attacked organizations or companies can be huge. They can ruin a company's reputation, negatively impact its financials, and allow its intellectual property to be stolen or destroyed. While the rationale for using cryptography to secure these connected nodes is understood, many designers do not yet know how to implement this type of security in their applications. Hackers have become increasingly sophisticated, making it imperative that you apply sound security principles in the development of your product.

Developed and backed by industry experts from the Trusted Computing Group (TCG), the Device Identifier Composition Engine (DICE) security standard is a simple and reliable method that you can implement in the hardware of your product during manufacturing. The architecture breaks the boot process into layers and creates unique secrets along with a measure of integrity for each layer, automatically re-keying and protecting secrets to defeat malware attacks.

Minimizing development time and ease of use are top considerations when designing cloud-connected solutions. Microchip's CEC1702 IoT Development Kit (DM990013BNDL) for Microsoft Azure IoT provides everything you need to easily incorporate the DICE security standard in your product. It comes with the CEC1702 MCU and sample code to help you quickly develop a secure, cloud-connected solution. The kit is certified by Microsoft Azure, so you can be confident that the necessary components to connect your application to the Internet have been vetted and certified.
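The layered derivation DICE describes can be sketched in a few lines: each boot layer's secret binds the parent layer's secret to a measurement (hash) of the code that runs next, so patching any layer automatically re-keys every secret derived after it. This is a simplified software illustration of the idea, not the CEC1702's actual implementation (real DICE derivations happen in silicon, starting from a Unique Device Secret provisioned at manufacture); all names and values here are illustrative:

```python
import hashlib
import hmac

def derive_layer_secret(parent_secret: bytes, layer_code: bytes) -> bytes:
    """Bind the parent layer's secret to a measurement of the next
    layer's code -- the core idea behind DICE's layered derivation."""
    measurement = hashlib.sha256(layer_code).digest()
    return hmac.new(parent_secret, measurement, hashlib.sha256).digest()

uds = bytes(32)  # stand-in for the Unique Device Secret burned in at manufacture
layers = [b"first-stage bootloader", b"firmware", b"application"]

secret = uds
for code in layers:
    secret = derive_layer_secret(secret, code)
```

Because each secret depends on the exact code measured, malware that modifies a layer ends up with different keys than the legitimate image had, and it cannot recover the old ones.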



THE LAST WORD

Taking back control of our personal data
By Todd Mozer, Sensory

As more and more devices designed to watch and listen to us flood the market, there is rising concern about how the personal data they collect gets used. Facebook has admitted to monetizing our personal data and that of our friends. Google has been duplicitous in its tracking of users even when privacy settings are set to not track; more recently, it has admitted to placing microphones in products without informing consumers.

In no realm is the issue of privacy more relevant than today's voice assistants. A recent PC Magazine survey of more than 2,000 people found that privacy was the top concern for smart home devices (more important than cost!). The issue becomes more complex as personal assistant devices become increasingly better equipped with IP cameras and various other sensors to watch, track, and listen to us.

Admittedly, people get a lot in return for giving up their privacy. We've become numb to privacy policies. It is almost expected that we will tap "accept" on End User License Agreements (EULAs) without reading through terms that are intentionally designed to be difficult to read. We get free software, free services, music and audio feeds, discounted rates, and other valuable benefits. What we don't fully realize is what we are giving up, because big data companies don't openly share that with us. We fall into two main categories of consumers: those who trust the giants of industry to do what's right, and those who don't play (and lose out on the benefits).


For example, I love having smart speakers in my home, but many of my friends won't get them – precisely because of these very valid privacy concerns. Our legislators are trying to address privacy concerns, but the government needs to fully understand how data is used before writing legislation. They need help from the tech community on how to deal with these issues.

Sensory  www.sensory.com
Twitter: @Trulyhandsfree  Facebook: @Sensoryinc
LinkedIn: www.linkedin.com/company/sensory-inc-/  YouTube: www.youtube.com/user/sensoryFAE

Europe has adopted the new General Data Protection Regulation (GDPR) to address concerns related to businesses' handling and protection of user data. This is a step in the right direction, as it provides clear rules to companies that handle personal data and offers clearly defined monetary penalties for failure to protect the data of citizens. But is it enough to slap fines on these companies for failing to comply? What happens when they think the market value of the infringement will be greater than the penalty? Or when human errors are made, as happened when a European user requested their data and was accidentally given someone else's?

In the present day, many of our daily tasks require us to share personal data. Whether we are talking to our AI assistants, using our smartphone to navigate around traffic, or making a credit-card transaction at a local retailer, we're sharing data. There are benefits to sharing this data, and as time goes on people will see even more benefits from data-sharing, but there are many problems too.

AI personal assistants provide examples of how sharing data can benefit or hurt the end user. The more we use these devices, the more they learn about us (Figure 1). As they become capable of recognizing who we are by face or voice and get to know and memorize our histories, preferences, and needs, these systems will evolve from devices that answer simple questions into proactive, accurate helpers that offer useful recommendations, unprompted reminders, and improved home security – assistance we users find truly helpful. But they can also reveal private information that we don't want others to know, which is what happened when one girl's shopping habits triggered advertising for baby products before her parents knew she was pregnant. As digital assistant technologies get smarter, there will be more concern about what private data they collect, store, and share over the internet.

FIGURE 1: The more we talk to our smart speakers and personal assistants, the more they learn about us.

One solution to the privacy problem would be keeping whatever is learned by the device on the device, by moving the AI processing out of the cloud to the edge: our assistants would never take personal data to the cloud, so there would be no privacy risk. This will become a viable solution as embedded AI grows more powerful. The biggest disadvantage may be that every new device would have to relearn who we are.

Another possible solution is more collaboration among data companies. Some of the AI assistant companies are starting to work together; for example, Cortana and Alexa are playing nicely. This current approach, however, is about transferring the baton from one assistant to another so that multiple assistants can be accessed from a single device. It’s unlikely this approach will be widely adopted, and even if it were, it would result in inefficiency in the collection and use of our personal data, because each of the AI assistant providers would have to build their own profile of who we are based on the data they collect. However, because companies will want to eventually monetize the data they collect, sharing done right could actually benefit us.

Could more sharing of our data and preferences improve AI assistants?

Sharing data does create the best and most consistent user experience. It allows consumers to switch AI brands or buy new devices without losing any of the benefits in user experience and device knowledge of who we are. Each assistant keeping its own unique data and profile for its users skirts the privacy issue but misses the advantages of big data. That doesn't seem to be the right way for the industry to advance. Ideally, we would create a system where there is shared knowledge of who we are, but we, as individuals, need to control and manage that knowledge.

For example, Google knows that the most frequent search I do on Google Maps is for vegetarian restaurants. Google has probably figured out that I'm a vegetarian. Quite ironically, I found "meat" in my shopping cart at Amazon. Somehow, Alexa thought I asked it to add "meat" to my shopping cart. Perhaps if Alexa had the same knowledge of me as Google, it would have made a different and better decision. Likewise, Amazon knows a lot about my shopping details. It knows what pills I take, my shoe size, and a whole lot of other things that might be helpful to the Google Assistant. A shared knowledge base would benefit me, but I want to be able to control and oversee this shared database or profile.

Improved privacy without losing the advantages of shared data is achievable through a combination of private devices with embedded intelligence and cloud services restricted and governed by legislation and industry standards. Legislation should enable user-controlled personal-data sharing systems; the user can then choose their own mix of private devices with embedded intelligence plus cloud devices that offer more general and standard protections. Of course, the legislation is the hard part, since our government tends to work more for the industries that fund reelections and bills than for the people. In any event, here are some thoughts on how legislation for privacy could work:

• Companies can't retain an individual's private information. That's it! There is no good reason for them to have it. Period … but they can collect it and provide it to the owner.
• Every person has and controls their "Shared Profile," or SP. The SP contains any and all information collected by any company. It is an item list categorized by type (clothing sizes, restaurants, etc.).
• The SP is divided into a "nonconfidential" section (available to any company) and a "confidential" section (not publicly available). The SP owner has direct control over their data and decides the following:
  1. Which categories reside in the "nonconfidential" SP versus the "confidential" SP
  2. Which items are included in those public categories inside the "nonconfidential" SP
  3. Which companies can access information in the "confidential" SP, and what specific information they can access
• Companies can access the "nonconfidential" SP and utilize it for their purposes, but they can't disclose or sell anything in it to others. It's our data, not theirs!
• Companies can create new categories or items within categories on our SP; however, data added by companies is automatically placed in the "confidential" section. The user gets notified about new information and can monitor, screen, edit, and move such information into the "nonconfidential" SP, making it useful for other companies to access.
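As a data structure, the Shared Profile proposed above might look like the following sketch. The class, its rules, and every name in it are purely illustrative of the scheme, not any existing system:

```python
from dataclasses import dataclass, field

@dataclass
class SharedProfile:
    """Illustrative model of the owner-controlled 'Shared Profile' (SP)."""
    owner: str
    nonconfidential: dict = field(default_factory=dict)  # readable by any company
    confidential: dict = field(default_factory=dict)     # per-company grants only
    grants: dict = field(default_factory=dict)           # company -> allowed categories

    def company_add(self, category, items):
        # Data collected by a company always lands in the confidential
        # section until the owner reviews it.
        self.confidential.setdefault(category, []).extend(items)

    def owner_publish(self, category):
        # Only the owner may move a category into the nonconfidential section.
        self.nonconfidential[category] = self.confidential.pop(category)

    def company_read(self, company, category):
        if category in self.nonconfidential:
            return self.nonconfidential[category]
        if category in self.grants.get(company, set()):
            return self.confidential.get(category, [])
        raise PermissionError(f"{company} has no grant for {category!r}")

sp = SharedProfile(owner="todd")
sp.company_add("restaurants", ["vegetarian"])
sp.owner_publish("restaurants")   # owner decides this category is public
```

The key design point is that companies only ever write into `confidential`, and only the owner's explicit actions (`owner_publish`, editing `grants`) widen access.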

Ultimately, this concept is a shared format designed to give companies equal access to data while providing users full ownership and control of their personal data and a clear understanding of what is shared or made public. No longer would the user have to deal with individual companies and complex EULAs, and no longer would they need to fear that their devices are carrying out unscrupulous behavior because of the vagueness of our legal system and the less-than-transparent ways big data companies use our data. With a little creative thinking and a shift in regulation, we the people can change the future of data collection and take back ownership and control of our personal data. IoT

Todd Mozer is the CEO of Sensory. He holds over a dozen patents in speech technology and has been involved in previous startups that reached IPO or were acquired by public companies. Todd holds an MBA from Stanford University and has technical experience in machine learning, semiconductors, speech recognition, computer vision, and embedded software.



EXECUTIVE SPEAKOUT

ADVERTORIAL

WHY YOU NEED A TECHNOLOGY PARTNER WITH HARDWARE & SOFTWARE EXPERTISE
By Michael Lamp, Director of Business Development, Americas, IoT

The Internet of Things is more than just a buzzword; it's a multibillion-dollar industry opportunity – at least, for those brave enough to take on the complexity. Even before the proverbial ink is dry on your project's sign-off, the list of requirements and vendors you need to fulfill them is growing. Most companies say it takes up to 10 partners to even start an IoT project. How did a market built on connected technology get so fragmented? It comes down to the specialized expertise needed in three areas: sensing platforms, new business models, and an evolving maintenance landscape.

Sensing platforms, data gathering, and other emerging technologies add complexity

IoT starts with data. But setting up the right infrastructure to collect that data may require a myriad of specialists. Sensing systems add their own set of unique complications, many of which fall outside the realm of your typical IT skillset and are unique to your application. How do you secure or update a non-standard device? Avoid wireless interference? Regulate sensors to control connectivity fees? That leads right into the connection: it could be cellular, which involves data plans and extra taxes on your network, or it could be internet-connected, which means you'll need infrastructure to keep connections running 24/7. All that is just for collecting the data. Then you've got to aggregate, store, and analyze it. That's usually the job of the cloud. You'll need to engage in-house or hired programmers to customize the cloud for your specific use case. Your management software may need to access this data, or visualize it in a form that's applicable to your specific use case. (Haven't built the site or the app yet? You'll want to get on that, too.)

A wider group of business units at the table

All those different technologies mean one thing: people – and lots of them. Until now, IT groups could manage everyone's computers or mobile phones via standard operating systems and security patches. IoT is a whole different beast. It involves the CEO looking for efficiencies and innovation. The CIO or IT leader tasked with scoping it. The data analysts. The programmers. And the list goes on: project managers, system integrators, operations specialists, installers, and business stakeholders – all have a role to play in ensuring an IoT project's success from the technology as well as the business side. A bunch of stakeholders at the table means a whole host of requirements to satisfy. But since many of these people often have different business goals, their wish lists can complicate an already complex process.

Choose your partner wisely

An IoT project is a lot like building a house. If you know how to lay a foundation, wire the electrical system, lay plumbing, and connect your HVAC, all while staying within an operating budget, doing it yourself is easy. (Don't forget to pick the paint color with the most curb appeal and list it on the MLS with the right interior decoration to sell quickly, too.) That's a lot to ask of an in-house team, though, especially once you include the time it takes to handhold all the various stakeholders from across the business through a successful implementation. Here's where companies like Avnet, which has recently added software expertise to its hardware capabilities, can put you ahead of the competition. A vigilant partner, much like a general contractor, can ensure you get your IoT house built on time and on budget, and that it's ready to perform when it really counts.

Avnet • www.avnet.com/softweb


IoT Design Guide

2019 DESIGN GUIDE PROFILE INDEX

AI & EDGE COMPUTING
39  Crystal Group
40  congatec

APPLICATIONS: INDUSTRIAL
41  WinSystems

DEVELOPMENT KITS
42, 43  Lauterbach, Inc.
44  Technologic Systems

LPWAN (INCLUDING LoRa, NB-IOT, ETC.)
45  Microchip

SOFTWARE & DEVELOPMENT TOOLS
44  Cypherbridge

STORAGE
46  Virtium

OpenSystems Media works with industry leaders to develop and publish content that educates our readers.

18th-century technology for modern medical IoT
By Jeff Miller, Mentor – A Siemens Business
Researchers at MIT and Brigham and Women's Hospital were trying to figure out how to power an ingestible probe without using a battery; a battery's acid would pose health concerns if ingested, and a battery would lose power over time. Looking to a voltaic cell for inspiration, they ultimately built a prototype consisting of a custom IC, specialized sensors, a unique PCB, and software from Mentor to track this ultimate IoT device on the edge.
www.embedded-computing.com/white-paper-library/18th-century-technology-for-modern-medical-iot

Check out our white papers at www.embedded-computing.com/white-paper-library


Edge Computing for IoT

Crystal Group provides advanced connectivity kits for collecting and processing IoT data in the field, where office-grade systems would fail. Designed for smart city, asset tracking, agriculture, smart grid, smart home, and transportation use cases, the system employs Ruckus E510 outdoor wireless access points (APs) connected to a Crystal Group ruggedized RCS7450 switch. Behind this network is a Crystal Group FORCE™ RS2608 rugged, scalable Xeon-class server, which can host GPUs.

The Ruckus E510 is an 802.11ac Wave 2 wireless AP designed with a unique two-element enclosure that separates the RF components from the antenna module. This allows for flexible antenna placement when the AP must be placed inside a vehicle or other metal-shielded environment. The Crystal Group RCS7450 switch family supports 24 and 48 copper or fiber ports, uses open standard protocols, and offers advanced stacking capabilities. The system can be configured with an IPsec encryption module that enables end-to-end encryption tunnels from the edge to the core, allowing customers to control network access, support multi-tenant networks, and protect data in flight.

The Crystal Group FORCE™ RS2608 is a rugged 2U server-class platform designed for heavy industrial applications in uncontrolled environments. This high-reliability system supports a Skylake 24-core Xeon CPU, 1 TB of DDR4 RAM, and twelve 2.5" SSDs. The kit provides large-scale IoT data ingress, reliable network connectivity, and advanced AI processing at the edge of the network. While cloud access is also possible, computing locally reduces latency, improves security, and eliminates cloud-based network throughput variations.

FEATURES
• High-speed data ingress and processing
• Ruckus BeamFlex+™ adaptive antennas to improve signal integrity
• Copper or fiber switch options
• Support for Wi-Fi and emerging IoT protocols in a single access point
• DoD certified, with IPsec and AES-256 encryption for security
• GPU capable for DNN or CNN processing

www.crystalrugged.com/Edge-Computing-for-IoT/

Crystal Group, Inc.  www.crystalrugged.com
info@crystalrugged.com  800-378-1636
www.linkedin.com/company/crystal-group/  @CrystalGroup

IoT Design Guide

AI & Edge Computing



Server-on-Modules

With the COM Express Type 7 specification, PICMG has defined a highly flexible new module standard characterized by high-speed network connectivity, with up to four 10 GbE interfaces and up to 32 PCIe lanes for customization. This is perfect for bringing the embedded server-class Intel® Xeon® D SoC as well as the new Intel® Atom™ processors to the industrial fields. Developers with high-performance demands – storage and networking applications, edge and fog servers for IoT and Industry 4.0 – are best served by the conga-B7XD, based on the Intel Xeon D1500 processor family and available with ten different server processors soldered onto the module for highest robustness. For applications that are power-restricted, the new conga-B7AC modules with Intel® Atom™ C3000 processors raise the bar with a power consumption of only 11 to 31 W TDP; these new low-power multi-core Server-on-Modules feature up to 16 cores.

congatec

www.congatec.us

FEATURES
• High scalability, from 16-core Intel® Xeon® processor technology with 45 W TDP down to low-power quad-core Intel® Atom™ processors with a TDP as low as 11.5 W.
• All Server-on-Modules support the commercial temperature range (0 °C to 60 °C). Selected SKUs even support the industrial temperature range (-40 °C to +85 °C).
• conga-B7AC with Intel Atom technology offers 4x 10 Gigabit Ethernet ports; conga-B7XD with Intel Xeon technology supports 2x 10 GbE.
• Support for up to 48 GB of fast, energy-efficient DDR4-2400 (ECC or non-ECC).
• Up to 32 PCIe lanes for flexible server extensions such as NVMe flash storage and/or GPGPUs.
• Comprehensive set of standard interfaces with 2x SATA Gen 3 (6 Gb/s), 6x USB 3.0/2.0, LPC, SPI, I2C bus, and 2x legacy UART.

www.congatec.us

sales-us@congatec.com  858-457-2600
www.linkedin.com/company/congatec-ag  @congatecAG

AI & Edge Computing

conga-B7E3

The embedded computing market is demanding more computing power across application areas: Industry 4.0 applications require synchronization of multiple machines and systems; machine vision in collaborative and cooperative robotics requires processing of image and other environmental data. Many of the edge computing tasks arising around the development of 5G networks require server-class performance by default. The conga-B7E3 modules with AMD EPYC processors are highly flexible and an attractive migration platform for next-gen embedded server designs. They support up to 32 NVMe or SATA devices and up to 8 native 10 GbE channels. Support is also provided for legacy I/O such as field buses and discrete I/O interfaces, which is critical for industrial server technologies.

congatec  www.congatec.us

FEATURES
• Equipped with AMD EPYC Embedded 3000 processors with 4, 8, 12, or 16 high-performance cores, simultaneous multi-threading (SMT), and support for up to 96 GB of DDR4-2666 RAM.
• Measuring just 125 x 95 mm, the COM Express Basic Type 7 module supports up to 4x 10 GbE and up to 32 PCIe Gen 3 lanes.
• For storage, the module integrates an optional 1 TB NVMe SSD and offers 2x SATA Gen 3.0 ports for conventional drives.
• Further interfaces include 4x USB 3.1 Gen 1 and 4x USB 2.0, as well as 2x UART, GPIO, I2C, LPC, and SPI.
• Seamless support of dedicated high-end GPUs and improved floating-point performance, which is essential for the many emerging AI and HPC applications.

sales-us@congatec.com  858-457-2600
www.linkedin.com/company/congatec-ag  @congatecAG


NET-429 Industrial TSN Switch

WINSYSTEMS' NET-429 network switch is designed for the harsh environments of the factory floor and provides the performance needed for time-critical industrial networks. The switch has eight 10/100/1000 Mbps RJ45 Ethernet ports plus two 1000Base-X SGMII SFP ports, and redundant power inputs with Power-over-Ethernet (PoE) PD support. Enabled for the latest IEEE 802.1 standards for Quality of Service (QoS) and Time Sensitive Networking (TSN), it includes advanced prioritization and timing features to provide guaranteed delivery of time-sensitive data.

The NET-429 is based on the Marvell® Link Street® family, which provides advanced QoS features with eight egress queues. The high-performance switch fabric provides line-rate switching on all ports simultaneously while providing advanced switch functionality. It also supports the latest IEEE 802.1 Audio Video Bridging (AVB) and TSN standards. These new standards overcome the latency and bandwidth limitations of standard Ethernet to allow efficient transmission of real-time content for industrial applications. The AVB/TSN protocols enable timing-sensitive streams (such as digital video, audio, or industrial control traffic) to be sent over the Ethernet network with low latency and robust QoS guarantees.
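The mechanism behind those latency guarantees, in the 802.1Qbv flavor of TSN, is a repeating gate schedule: each egress queue's gate opens and closes on a fixed cycle, so a protected window exists in which only the time-critical queue may transmit. A rough, purely illustrative model (not the NET-429's actual configuration interface):

```python
from dataclasses import dataclass

@dataclass
class GateEntry:
    open_queues: set   # egress queues whose gates are open in this window
    duration_us: int

# One 802.1Qbv-style cycle: a protected window for the time-critical
# queue (7), then a shared window for everything else.
schedule = [
    GateEntry(open_queues={7}, duration_us=300),
    GateEntry(open_queues={0, 1, 2, 3, 4, 5, 6}, duration_us=700),
]

def gate_open(schedule, queue, t_us):
    """Is `queue`'s gate open at offset t_us into the repeating cycle?"""
    cycle = sum(e.duration_us for e in schedule)
    t = t_us % cycle
    for entry in schedule:
        if t < entry.duration_us:
            return queue in entry.open_queues
        t -= entry.duration_us
    return False
```

Because lower-priority queues are gated off during the protected window, a time-sensitive frame queued there never waits behind bulk traffic, which is what bounds its worst-case latency.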

FEATURES
• IEEE 802.1 Time Sensitive Networking (TSN)
• IEEE 1588v2 one-step Precision Time Protocol (PTP)
• 8x 10/100/1000 Mbps RJ45 Ethernet ports
• 2x 1000Base-X SGMII SFP ports
• 256-entry TCAM for deep packet inspection
• Supports 4096 802.1Q VLANs
• Redundant wide-range 9-36 V DC inputs
• Power over Ethernet (PoE) PD, 802.3at Type 1 device
• Fanless -40 °C to +85 °C operating temperature range
• Shock and vibration tested
• Long lifetime (10+ years availability)

Easing the burden of deployment, the NET-429 can be powered remotely as a PoE PD Type 1 device or through the two wide-range 9-36 V DC power inputs. All three power inputs are redundant for maximum uptime and include overload protection to prevent damage to other systems. WINSYSTEMS also combines design elements from its broad portfolio to create application-specific designs for OEM clients. Contact an Application Engineer to schedule a consultation on your product requirements.

www.winsystems.com

WINSYSTEMS, INC.


sales@winsystems.com https://www.linkedin.com/company/winsystems-inc-

 817-274-7553 http://twitter.com/winsystemsinc


Applications: Industrial

Development Kits

Lauterbach Debugger for Intel x86/x64 Skylake/Kabylake Lauterbach TRACE32 Debugger for Intel x86/x64: In January 2019, Lauterbach introduced the new CombiProbe Whisker MIPI60-Cv2. The TRACE32 CombiProbe and TRACE32 QuadProbe now offer the same debug features for the Converged Intel® MIPI60 connector: • Standard JTAG, Intel® debug hooks with Pmode, and I2C bus • Merged debug ports (two JTAG chains) • Intel® survivability features (threshold, slew rate, ...) However, these debug tools have different areas of application. The TRACE32 QuadProbe, expressly designed for server processors, is a dedicated debug tool that enables SMP debugging of hundreds of threads on targets with up to four debug connectors. The TRACE32 CombiProbe with the MIPI60-Cv2 Whisker, designed for client as well as mobile device processors, can capture and evaluate system trace data in addition to its enhanced debugging features. Trace capabilities include support of one 4-bit and one 8-bit trace port at nominal bandwidth. The TRACE32 CombiProbe with the DCI OOB Whisker is specially designed for debugging and tracing of form factor devices without debug connectors. If the chip contains a DCI Manager, the target and the debugger can exchange debug and trace messages directly via the USB3 interface. The DCI protocol used to exchange messages supports standard JTAG and Intel® debug hooks as well as trace messages for recording system trace information.

FEATURES
• CombiProbe MIPI60-Cv2 provides debug and system trace capability
• Support for standard JTAG, debug hooks, and I2C bus
• Support for merged debug ports (two JTAG chains per debug connector)
• Support for survivability features (threshold, slew rate, etc.)
• Support for system trace port with up to 8 trace data channels
• 128 MByte of trace memory
• SMP debugging (including hyperthreading)
• AMP debugging with other architectures
• BIOS/UEFI debugging with tailor-made GUI for all UEFI phases
• Linux- and Windows-aware debugging
• Hypervisor debugging

www.lauterbach.com

Lauterbach, Inc.


info_us@lauterbach.com  508-303-6812 www.lauterbach.com/pro/pro_core_alt1.php?chip=COREI7-7THGEN



TRACE32 Multi Core Debugger for TriCore AURIX Lauterbach TriCore debug support at a glance: For more than 15 years, Lauterbach has supported the latest TriCore microcontrollers. Our tool chain offers: • Single- and multi-core debugging for up to 6 TriCore cores • Debugging of all auxiliary controllers, such as GTM, SCR, HSM, and PCP • Multi-core tracing via the MCDS on-chip trace or via the high-speed serial AGBT interface The Lauterbach Debugger for TriCore provides high-speed access to the target application via the JTAG or DAP protocol. Debug features range from simple Step/Go/Break up to AUTOSAR OS-aware debugging. High-speed flash programming performance of up to 340 kB/sec on TriCore devices and intuitive access to all peripheral modules are included. Lauterbach's TRACE32 debugger allows concurrent debugging of all TriCore cores: • Cores can be started and stopped synchronously. • The state of all cores can be displayed side by side. • All cores can be controlled by a single script.

Lauterbach, Inc.

www.lauterbach.com

FEATURES
• Debugging of all auxiliary controllers: PCP, GTM, HSM, and SCR
• Debug access via JTAG and DAP
• AGBT high-speed serial trace for Emulation Devices
• On-chip trace for Emulation Devices
• Debug and trace through reset
• Multicore debugging and tracing
• Cache analysis

 info_us@lauterbach.com  508-303-6812 www.lauterbach.com/pro/pro_tc3xx_aurix_as_alt1.php?chip=TC399XE%20A-STEP

Development Kits

Lauterbach Debugger for RH850 Lauterbach RH850 debug support at a glance: The Lauterbach Debugger for RH850 provides high-speed access to the target processor via the JTAG/LPD4/LPD1 interface. Debugging features range from simple Step/Go/Break to multi core debugging. Customers value the performance of high speed flash programming and intuitive access to all of the peripheral modules. TRACE32 allows concurrent debugging of all RH850 cores. • The cores can be started and stopped synchronously. • The state of all cores can be displayed side by side. • All cores can be controlled by a single script. All RH850 emulation devices include a Nexus trace module which enables multi core tracing of program flow and data transactions. Depending on the device, trace data is routed to one of the following destinations: • An on-chip trace buffer (typically 32KB) • An off-chip parallel Nexus port for program flow and data tracing • A high bandwidth off-chip Aurora Nexus port for extensive data tracing The off-chip trace solutions can store up to 4GB of trace data and also provide the ability to stream the data to the host for long-term tracing, thus enabling effortless performance profiling and qualification (e.g. code coverage).

Lauterbach, Inc.

www.lauterbach.com

FEATURES
• AMP and SMP debugging for RH850, GTM, and ICU-M cores
• Multicore tracing
• On-chip and off-chip trace support
• Statistical performance analysis
• Non-intrusive trace-based performance analysis
• Full support for all on-chip breakpoints and trigger features
• AUTOSAR debugging

 info_us@lauterbach.com  508-303-6812 www.lauterbach.com/pro/pro_r7f701325_alt1.php?chip=R7F701334A

Development Kits

TS-7553-V2 The TS-7553-V2 embedded single board computer hits all the main points for a low-power, cost-effective, Internet of Things (IoT)-capable, ready-to-deploy OEM board with an emphasis on data integrity. The TS-7553-V2 can communicate seamlessly with several different networks simultaneously from a single device. Using the onboard peripherals, the system can connect to Ethernet, Wi-Fi, Bluetooth, USB, RS-232, RS-485, and CAN networks or devices. Built-in module interfaces such as the XBee/NimbeLink socket and internal USB ports allow expansion to other networks like cellular, DigiMesh, ZigBee, LoRa, and other proprietary or industry-specific networks. This ability to communicate over a wide variety of wired and wireless interfaces puts the TS-7553-V2 in an excellent position to be an IoT gateway. A nine-axis micro-electro-mechanical system (MEMS) motion tracking device containing a gyroscope, accelerometer, and compass is an on-board option for applications that require sensing motion or vibration in the environment. The TS-7553-V2 has an off-the-shelf, low-cost enclosure that can include a backlit monochrome 128 x 64 px LCD and a four-button keypad for HMI. Applications with strict low-power requirements will appreciate the work that's been done to reduce power consumption to less than 2 W in typical conditions and a 9 mW sleep mode. Power over Ethernet (PoE) is supported via a daughter card, if desired.

Technologic Systems

www.embeddedARM.com

FEATURES
• NXP i.MX6UL 696 MHz ARM Cortex-A7 CPU
• 4 GB MLC eMMC flash
• 512 MB DDR3 RAM
• 2x CAN bus, 1x RS-485, 4x RS-232
• XBee interface (can support NimbeLink cell modem and XBee)
• Wireless and Bluetooth module
• Cellular modem


sales@embeddedarm.com

 480-837-5200

@ts_embedded

Software & Development Tools

SDKPac™ IoT Device Solutions The intersection of IoT with vertical applications and industries such as Industrial Control, Manufacturing and Automation, Medical Device, Smart Building and Energy Management, Network Equipment, SCADA and machine control, and IoT Edge Nodes, is driving solutions that promise to deliver connectivity, reliability, scalability, trust and electronic data security. Cypherbridge supplies cloud connected Software Development Kits and Toolkits, targeting small to medium applications where memory, power and performance are carefully balanced. We offer a broad portfolio of robust device-level solutions including standards-based protocol stacks secured with root of trust, authentication, integrity and encryption. The Cypherbridge SDKPac solution framework includes SDKs & Toolkits to fit a wide range of embedded applications and vertical markets. Integrated with off-the-shelf development kits and toolchains, SDKPac delivers a tailored project ready solution. Partner with Cypherbridge to help accelerate your journey to the IoT cloud. Generate new business value by migrating your existing product lines. IoT solution development requires a broad range of skills including embedded system design, networking, cloud computing, and expert security practices. Cypherbridge offers comprehensive design services for end-to-end line of business applications, including IoT device platform integration, differentiating features, and full-stack cloud computing and data systems engineering.

Cypherbridge Systems www.cypherbridge.com


FEATURES
• IoT Cloud Device Kit with JSON, TLS-secured uMQTT 3.1.1, Platform Kit; compatible with AWS, Azure, Google, and Watson
• HTTP solutions include dHTTPS Web Server and TLS-secured client-to-cloud HTTP REST API
• HTTP/2 protocol stack option integrated with TLS-ECDHE cipher suite
• uSSH server, client, SCP, SFTP, and file system integration for FileZilla, WinSCP, PuTTY
• uSMTP TLS-secured SMTP client with binary file transfer attachment
• uMODBUS Toolkit: TCP and RTU slave interfaces to platform sensors and actuators
• AUTHN/AUTHZ solutions include uRADIUS client integrated with credentials server, and RFC 7519 JWT
• uLoadXL secure boot loader and installer, managed firmware update, WSLAM Software Lifecycle Management Station
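The uMODBUS stack itself is proprietary, but the CRC-16 that terminates every Modbus RTU frame is standardized (init 0xFFFF, reflected polynomial 0xA001, no final XOR) and easy to sketch:

```python
def modbus_crc16(frame: bytes) -> int:
    """CRC-16/MODBUS over a frame: init 0xFFFF, poly 0xA001, no final XOR."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

def append_crc(frame: bytes) -> bytes:
    """Append the CRC low byte first, as Modbus RTU requires."""
    crc = modbus_crc16(frame)
    return frame + bytes([crc & 0xFF, crc >> 8])

# Read 10 holding registers from slave 1, starting at address 0.
request = append_crc(bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x0A]))
# A receiver validates a frame by recomputing the CRC over the whole
# message, CRC bytes included: the residue is 0 for an intact frame.
assert modbus_crc16(request) == 0
```

The zero-residue check at the end is the standard way an RTU slave verifies frame integrity before acting on the request.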

sales@cypherbridge.com www.linkedin.com/company/cypherbridge-systems

 +1 760.814.1575 @cypherbridgesys



Low Power Meets Long Range with LoRa Technology Accelerate your next design with our new highly integrated LoRa System-in-Package (SiP) family of devices. These new devices feature an ultra-low-power 32-bit microcontroller (MCU), sub-GHz RF LoRa transceiver and software stack which makes them ideal for a broad array of long range, low-power IoT applications that require small form factor designs and multiple years of battery life. With this new family of devices you can get your next design up and running easily by combining your application code with our LoRaWAN™ stack and quickly prototyping with the ATSAMR34-XPRO Development Board, which is supported by the Atmel Studio 7 Software Development Kit (SDK). This development board is certified by the Federal Communications Commission (FCC), Industry Canada (IC) and Radio Equipment Directive (RED) and the SAM R34 SiP devices support worldwide LoRaWAN operation from 862 to 1020 MHz, allowing you to use this single-part variant across many geographies.

FEATURES
• Industry's lowest-power LoRa technology SiP
• Proven LoRaWAN software stack
• Small form factor

www.microchip.com/SAMR34

Microchip Technology Inc.


Contact Microchip: www.microchip.com/distributors/SalesHome.aspx  480-792-7200 www.linkedin.com/company/microchip-technology @MicrochipTech

LPWAN (Including LoRa, NB-IoT, etc.)

Storage


Solid State Storage and Memory

Solid State Storage and Memory for Industrial IoT Virtium manufactures solid state storage and memory for the world's top industrial embedded OEM customers. Our mission is to develop the most reliable storage and memory solutions with the greatest performance, consistency, and longest product availability. Industry solutions include: Communications, Networking, Energy, Transportation, Industrial Automation, Medical, Smart Cities, and Video/Signage. Features • Broad product portfolio from latest technology to legacy designs • Twenty years of refined U.S. production and 100% testing • A+ quality, backed by verified yield, on-time delivery, and field-defects-per-million reports • Extreme durability, iTemp -40 °C to +85 °C • Intelligent, secure IIoT edge storage solutions • Longest product life cycles with cross-reference support for end-of-life competitive products • Leading innovator in small-form-factor, high-capacity, high-density, high-reliability designs • Worldwide sales, FAE support, and industry distribution

SSD Storage Includes: M.2, 2.5", 1.8", Slim SATA, mSATA, CFast, eUSB, Key, PATA CF, and SD. Classes include: MLC (1X), iMLC (7X), and SLC (30X); new industrial 3D NAND SSDs are now available. All SSDs include Virtium's SSD Software, which features: vtView®: monitor/maintain SSDs, estimate SSD life, predict maintenance, over-the-air updates, make SSD quals faster and easier; includes an open source API. vtGuard®: power-loss protection, power management, and I-Temp support. vtSecure™: military-grade secure erase, optional TCG Opal 2.0 for SATA and PCIe, and optional keypad or other authentication for external USB devices. vtTools™: manage field updates; command-line-based test and data collection; create custom tests with only a few lines of code; open source and compatible with all SSDs. Memory Products Include: all DDR, DIMM, SODIMM, Mini-DIMM, Standard, and VLP/ULP. Features server-grade monolithic components, best-in-class designs, and conformal coating/under-filled heat sink options.

www.virtium.com

Virtium


sales@virtium.com www.linkedin.com/company/virtium

 949-888-2444 @virtium



Online int’l visitor registration is now open www.ComputexTaipei.com.tw


IOT EDGE COMPUTING

The value of adding location for IoT designs

By JC Elliott, Director, Polte

The Internet of Things is an all-encompassing universe that is intrinsically woven through many facets of our world already. This interweave will expand exponentially as the Fourth Industrial Revolution innovations continue to roll out. IoT has been around for a couple of decades, but as innovation increases, along with increased user mobility, enterprises are embracing IoT more swiftly and realizing the critical need for location awareness. Providing the “what” from a device is important, but the “where” is the other side of the IoT coin, enabling important context about the data that is reported.

A simple comparison of these technologies is not easily done, however, as far too many factors weigh into the equation to uncover the right technology for a specific use case.

Connectivity in IoT translates to far greater visibility of systems, processes, and people. Tasks that were once done manually are now automated such that detection, analysis, and action happen in mere seconds. Also, the intelligence that’s delivered as part of this automation means problems are resolved before they happen again, and enterprises can manage their productivity and output more efficiently.

Issues that must be considered for determining the best location platform include:

Location is the prime analytic

As seen in Figure 1, billions of IoT-connected devices are predicted, whether a pallet tracker, a fleet-management platform, or an emergency response button. Knowing the location of a device at a certain time provides a wealth of intelligence upon which action can be taken. As a result, a host of large enterprises, MNOs/MVNOs, device manufacturers, modem and module manufacturers, cloud services, and others have announced launches of location technologies and platforms. Google Maps launched in 2005 and has long enabled developers to include geospatial information in their applications. Many others have followed suit, such as Amazon with AWS maps and Microsoft with the recently launched Azure Maps, which offers a suite of geolocation services for developers to add geospatial capabilities to their applications. Mobile network operators are also offering location services for IoT devices, including AT&T with DataFlow, Verizon Wireless with ThingSpace, and T-Mobile with a service that includes location capabilities for devices on its network.

Location considerations

In the past, GPS was the de facto standard for locating something. The accuracy might not have been desirable for certain use cases, but GPS provided a good-enough estimate for the time. Since the introduction of GPS to the public in the 1980s, many other location technologies have come to market with varying degrees of success.


• Power/battery life: For use cases that require a long life, technologies that use the least amount of power are desirable.
• Security: While every use case needs some level of security, when location is part of the solution it's critical that the location platform is absolutely secure.
• Accuracy: Whether the use case needs millimeter accuracy or just the knowledge that an object is within 200-500 meters, for example, will help discern the ideal location platform – or a hybrid of two or more solutions.
• Coverage: If the use case depends on seamless indoor and outdoor coverage, the choices are a hybrid solution of two or more platforms or leveraging the cellular network.


Polte

www.Polte.com

• Size: For use cases that require a small form factor – which means fewer radios, batteries, and antennas – other considerations like accuracy or battery life may need to be compromised.
• Cost: When a very low-cost tracker is required, utilizing only one radio that consumes the least amount of power is ideal.
• Incremental improvement: As with the GPS network, upgrades made over time improve accuracy, but that means launching expensive new satellites. The ideal location solution is self-healing and uses machine learning to improve itself with every location lookup.
• Native location on device: IoT devices typically don't have location capabilities designed in. It's important to select a solution with embedded positioning technologies that can be easily activated on the device and updated over the air (OTA). This allows for immediate scalability.

Location technologies

Let's look at some of the currently used solutions and what they offer.

GPS

The Global Positioning System (GPS) was first developed by the U.S. Department of Defense in the early 1970s and used exclusively by the U.S. military. The satellite system was opened for civilian use in the 1980s and became fully operational in 1995. The accuracy of GPS signals broadcast from space varies depending on atmospheric conditions, satellite geometry (geometric dilution of precision), the number of satellites seen, and other factors. GPS receivers need a clear line of sight to the satellite network; due to multipath errors, which can occur in dense urban areas, heavily wooded areas, tunnels, or other covered structures, results may be less than desirable. The expected accuracy for a GPS fix is on the order of 10 m to 100 m, depending on the factors noted above.

FIGURE 1: IoT market size.

@PolteCorp (Twitter, Facebook) | www.linkedin.com/company/polte/

GPS does not work for indoor positioning, so its use is limited to outdoors, and it works best in less-dense areas. Power usage on GPS devices can be far greater than that of the other positioning technologies, because the search algorithm can continue to run when no satellites are visible; GPS should therefore be reserved for use cases that have larger battery reserves or a rechargeable format. Device-size constraints also need to be considered.
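Most GPS receivers report fixes as NMEA 0183 sentences; the GGA sentence carries the fix quality, satellite count, and horizontal dilution of precision that determine how good an estimate you are getting. A minimal parser sketch follows (checksum verification omitted; the sample sentence is a common textbook example, not real data):

```python
def parse_gga(sentence: str) -> dict:
    """Parse a minimal subset of an NMEA GGA sentence (checksum not verified)."""
    body = sentence.strip().lstrip("$").split("*")[0]
    f = body.split(",")
    assert f[0].endswith("GGA"), "not a GGA sentence"

    def to_degrees(value: str, hemi: str) -> float:
        # NMEA packs latitude as ddmm.mmmm and longitude as dddmm.mmmm.
        head, minutes = divmod(float(value), 100.0)
        deg = head + minutes / 60.0
        return -deg if hemi in ("S", "W") else deg

    return {
        "lat": to_degrees(f[2], f[3]),
        "lon": to_degrees(f[4], f[5]),
        "fix_quality": int(f[6]),   # 0 = no fix, 1 = GPS, 2 = DGPS
        "num_sats": int(f[7]),
        "hdop": float(f[8]),        # horizontal dilution of precision
    }

fix = parse_gga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
print(fix["num_sats"], fix["hdop"])  # 8 0.9
```

A low satellite count or a high HDOP value is exactly the degraded-geometry condition the paragraph above describes, and applications can filter fixes on those fields.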

Wi-Fi

Public Wi-Fi hot spots are forecast in some outlooks to grow to more than half a billion by 2022; Wi-Fi is readily available and is an inexpensive platform for indoor use. Wi-Fi location is a proximity-based solution that observes signal strength from Wi-Fi access points (APs) whose reported locations are stored by third parties such as Google. Wi-Fi power consumption is less than GPS, but the range is limited, and extra equipment may be necessary to provide adequate coverage for larger facilities. Obstructions as simple as walls or metal structures can interfere with signals. The expected accuracy for a Wi-Fi fix is on the order of 25 m to 300 m. If outdoor coverage is also required, Wi-Fi combined with GPS or another solution will provide adequate coverage for many uses. Using a hybrid solution does increase cost, size, and power consumption, however, and the device will need to reconnect as it moves from an indoor to an outdoor environment. This has been a common technique for providing good indoor and outdoor location coverage but is not possible for many very-low-power IoT solutions that require a battery life on the order of two to 10 years.
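Signal-strength ranging of this kind is typically modeled with the log-distance path-loss formula. The sketch below uses illustrative assumptions: the reference RSSI at 1 m and the path-loss exponent are environment-specific and must be calibrated in practice:

```python
def rssi_to_distance(rssi_dbm: float, ref_dbm: float = -40.0, n: float = 3.0) -> float:
    """Log-distance path-loss model: ref_dbm is the RSSI expected at 1 m,
    n the path-loss exponent (~2 in free space, 3-4 indoors). Both values
    here are assumptions, not calibrated measurements."""
    return 10 ** ((ref_dbm - rssi_dbm) / (10.0 * n))

# At the 1 m reference power the estimate is 1 m by construction...
print(round(rssi_to_distance(-40.0), 2))   # 1.0
# ...and every 10*n dB of additional loss multiplies the distance by 10,
# which is why walls and metal obstructions distort the estimate so badly.
print(round(rssi_to_distance(-70.0), 2))   # 10.0
```

The steep sensitivity to a few dB of fading is the underlying reason Wi-Fi fixes span the 25 m to 300 m range quoted above.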

Bluetooth

Introduced in 1996, Bluetooth was a collaboration of Intel, Ericsson, and Nokia to create a shared short-range wireless technology. In 2002, the IEEE approved the 802.15.1 specification. Fast-forward to 2019: Bluetooth recently introduced Bluetooth 5.0, with greater range and speed for connectivity between and among many devices. Bluetooth Low Energy (BLE) enables lower power consumption or, in higher-power applications, up to four times longer range. Bluetooth location is a proximity-range application that can provide good indoor accuracy but requires a hardware-intensive beacon network to provide accuracy over a large area: to get the quoted 2 m accuracy, a device needs to be within 6 m of a beacon.




RFID

Dating back to as early as World War II, the first radio-frequency identification (RFID) systems used by militaries across the globe were passive. The British then developed an active RFID solution, and the technology quickly evolved into the solutions we see today. RFID tags are very small and inexpensive, allowing them to be attached to many different assets. Since the range of RFID is very limited, it is an indoor-only location solution commonly used to track items passing through chokepoints, such as doors at a retail store or stations on an assembly line. RFID has made many supply chains more efficient, but adoption has not been as widespread as it could be, due in part to its limited range and the cost of reader hardware deployment. By one estimate, the RFID market is 99% untapped, due in part to security concerns.

Cellular

Location using a commercial cellular network can be accomplished in many ways and therefore offers varying degrees of accuracy. A common method is the cell ID method, in which the modem outputs radio information and signal strength for the serving cell and any neighbor cells. This data is fed into a third-party database that tracks tower locations (similar to the Wi-Fi location technique). An example of a cell-tower information object fed into a third-party API would include cell ID, location area code, mobile country code, mobile network code, signal strength, and timing advance for each tower detected. This type of location is expected to yield a rough location estimate of around 300 m to 3 km, depending on the density of the network. It is a very simple, inexpensive technology to implement for seamless indoor and outdoor coverage if a high degree of location accuracy is not critical.
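A cell-tower observation of this kind is usually serialized as JSON. The field names below follow the convention popularized by public geolocation APIs such as Google's; the endpoint, key, and all values are placeholders for illustration only:

```python
import json

# One observed cell, as a modem might report it. Values are illustrative.
observation = {
    "cellTowers": [
        {
            "cellId": 42,               # serving-cell ID
            "locationAreaCode": 415,
            "mobileCountryCode": 310,   # 310 = USA
            "mobileNetworkCode": 410,
            "signalStrength": -60,      # dBm
            "timingAdvance": 15,
        }
    ]
}

payload = json.dumps(observation)
# POST payload to a geolocation endpoint, e.g. (hypothetical):
#   https://example-locator.invalid/v1/geolocate?key=API_KEY
# and the service answers with an estimated position and accuracy radius:
#   {"location": {"lat": 37.4, "lng": -122.1}, "accuracy": 1200}
print(len(observation["cellTowers"]))  # 1
```

Reporting several neighbor cells instead of just the serving cell is what tightens the estimate toward the lower end of the 300 m to 3 km range.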

FIGURE 2: Positioning architecture in LTE Release 9/10. Courtesy of Ericsson.

FIGURE 3: Location technologies overview.

Another approach uses the carrier network technologies deployed by the carriers themselves. These are standards-based and involve complex architectural implementations, including technologies such as Observed Time Difference of Arrival (OTDOA), in use with full-band LTE networks. In the U.S., the Federal Communications Commission (FCC) has required that handset-assisted location technologies for E911 calls provide location within 50 m for 67% of all calls measured (Figure 2). This technology provides a good level of accuracy but is controlled by the carriers and is only available through them; today its use seems to be confined to locating handsets in emergency location systems. CAT-M systems, with lower bandwidth for IoT devices, have an OTDOA standard, but it is not yet implemented in networks and will not provide the same accuracy as the full-band solution described above.

A newer technology that brings more accurate cellular location to market for IoT, and does not require deployment by the network carriers themselves, is Cloud Location over Cellular (C-LoC). This advanced Time Difference of Arrival (TDOA) methodology is available with CAT-M and NB-IoT devices, allowing location accuracy on the order of 10 m to 200 m while requiring very little power beyond that already used by the IoT device for communication, since it is part of the modem firmware. It also allows a very secure connection, because the location information is never present on the device as it is with GPS. This makes C-LoC an ideal solution for very-low-power IoT devices that require accurate location indoors and outdoors.

Regardless of the technology used, location will become ever more important for providing context as IoT connectivity expands. Location is the prime analytic (Figure 3), enabling key data that improves customer experience, productivity, and bottom lines.
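The intuition behind TDOA can be shown with a toy 2D solver: each arrival-time difference constrains the device to a hyperbola, and a coarse grid search finds the point that best satisfies all of them. Production systems use far more refined solvers and must handle measurement noise and calibration; this sketch only demonstrates the geometry:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tdoa_locate(towers, tdoas, area=1000.0, step=5.0):
    """Brute-force 2D TDOA solver. towers[0] is the reference; tdoas[i]
    is the measured arrival-time difference between towers[i+1] and the
    reference. Returns the grid point whose predicted time differences
    best match the measurements (least squares)."""
    def predicted(x, y):
        d0 = math.hypot(x - towers[0][0], y - towers[0][1])
        return [(math.hypot(x - tx, y - ty) - d0) / C for tx, ty in towers[1:]]

    best, best_err = None, float("inf")
    steps = int(area / step) + 1
    for i in range(steps):
        for j in range(steps):
            x, y = i * step, j * step
            err = sum((p - m) ** 2 for p, m in zip(predicted(x, y), tdoas))
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Synthetic check: place a device, derive exact TDOAs, and recover it.
towers = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0), (1000.0, 1000.0)]
true_pos = (300.0, 700.0)
d0 = math.hypot(true_pos[0], true_pos[1])
tdoas = [(math.hypot(true_pos[0] - tx, true_pos[1] - ty) - d0) / C
         for tx, ty in towers[1:]]
print(tdoa_locate(towers, tdoas))  # (300.0, 700.0)
```

Because a few nanoseconds of timing error correspond to meters of position error, real TDOA accuracy hinges on tight network synchronization, which is why carrier-grade and cloud-assisted implementations differ so much in achievable precision.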


BY ENGINEERS, FOR ENGINEERS In the rapidly changing technology universe, embedded designers might be looking for an elusive component to eliminate noise, or they might want low-cost debugging tools to reduce the hours spent locating that last software bug. Embedded design is all about defining and controlling these details sufficiently to produce the desired result within budget and on schedule. Embedded Computing Design (ECD) is the go-to, trusted property for information regarding embedded design and development.

embedded-computing.com


Storage and Memory Built for Extreme Conditions

We design superior solid state storage and memory for industrial IoT ecosystems.

• 22 years of refined U.S. production and 100% testing
• A+ quality, backed by verified yield, on-time delivery, and field-defects-per-million reports
• Extreme durability, iTemp, longer life cycles, and intelligent, secure edge solutions

Our Intelligent Storage Platform provides power protection, security and wear visibility for the longest lasting SSDs and essential data protection. Visit our website to learn more.


Industrial 3D NAND SSDs now available!


Solid State Storage and Memory

Copyright 2019, Virtium LLC. Top image copyright: 123RF/epicstockmedia

www.virtium.com

