Embedded Computing Design with Embedded World Profiles Spring 2019


SPRING 2019 VOLUME 17 | 1 EMBEDDED-COMPUTING.COM

TRACKING TRENDS RISC-V: Too open to fail PG 5

MUSINGS OF A MAKERPRO

Challenges of building an omni-wheel robot PG 34

Development Kit Selector
www.embedded-computing.com/designs/iot_dev_kits

2019 Embedded Processor Report: The Evolving State of Signal Processing BEGINS ON PG 6

EMBEDDED WORLD PRODUCT PROFILES PG 32

THE RETURN TO ANALOG PG 10


Your e-code for free admission: 2ew19
embedded-world.de/voucher

Nürnberg, Germany
February 26 – 28, 2019

ACCELERATING THE EVOLUTION OF CRITICAL INFRASTRUCTURE FROM AUTOMATED TO AUTONOMOUS

For nearly 40 years, Wind River software has enabled digital transformation across critical infrastructure sectors. Learn how we’re helping a wide range of industries accelerate their evolution from automated to autonomous systems and ensuring the software-defined world of the future is a safe, secure reality.

www.windriver.com/automated-to-autonomous
www.windriver.com

TODAY, TOMORROW, AND BEYOND
Your one-stop resource for the entire spectrum of embedded systems: discover more than 1,000 companies and get inspired by the latest trends and product developments, by renowned speakers and exciting shows.

Keep up to date: embedded-world.de

Exhibition organizer: NürnbergMesse GmbH
T +49 9 11 86 06-49 12
F +49 9 11 86 06-49 13
visitorservice@nuernbergmesse.de

Conference organizer: WEKA FACHMEDIEN GmbH, Fachmedium der Automatisierungstechnik
T +49 89 2 55 56-13 49
F +49 89 2 55 56-03 49
info@embedded-world.eu

Media partners


AD LIST
PAGE  ADVERTISER

EMBEDDED COMPUTING BRAND DIRECTOR Rich Nass  rich.nass@opensysmedia.com
EDITOR-IN-CHIEF Brandon Lewis  brandon.lewis@opensysmedia.com
TECHNOLOGY EDITOR Curt Schwaderer  curt.schwaderer@opensysmedia.com

ASSOCIATE TECHNOLOGY EDITOR Laura Dolan laura.dolan@opensysmedia.com

ASSISTANT MANAGING EDITOR Lisa Daigle  lisa.daigle@opensysmedia.com

CONTRIBUTING EDITORS Majeed Ahmad, Jeremy S. Cook, John Koon

DIRECTOR OF E-CAST LEAD GENERATION AND AUDIENCE ENGAGEMENT Joy Gilmore  joy.gilmore@opensysmedia.com
ONLINE EVENTS SPECIALIST Sam Vukobratovich  sam.vukobratovich@opensysmedia.com

CREATIVE DIRECTOR Stephanie Sweet  stephanie.sweet@opensysmedia.com

SENIOR WEB DEVELOPER Aaron Ganschow  aaron.ganschow@opensysmedia.com

WEB DEVELOPER Paul Nelson  paul.nelson@opensysmedia.com
CONTRIBUTING DESIGNER Joann Toth  joann.toth@opensysmedia.com
EMAIL MARKETING SPECIALIST Drew Kaufman  drew.kaufman@opensysmedia.com

SALES/MARKETING

SALES MANAGER Tom Varcie  tom.varcie@opensysmedia.com (586) 415-6500

MARKETING MANAGER Eric Henry  eric.henry@opensysmedia.com (541) 760-5361
STRATEGIC ACCOUNT MANAGER Rebecca Barker  rebecca.barker@opensysmedia.com (281) 724-8021
STRATEGIC ACCOUNT MANAGER Bill Barron  bill.barron@opensysmedia.com (516) 376-9838
STRATEGIC ACCOUNT MANAGER Kathleen Wackowski  kathleen.wackowski@opensysmedia.com (978) 888-7367
SOUTHERN CAL REGIONAL SALES MANAGER Len Pettek  len.pettek@opensysmedia.com (805) 231-9582
SOUTHWEST REGIONAL SALES MANAGER Barbara Quinlan  barbara.quinlan@opensysmedia.com (480) 236-8818

INSIDE SALES Amy Russell  amy.russell@opensysmedia.com
ASIA-PACIFIC SALES ACCOUNT MANAGER Patty Wu  patty.wu@opensysmedia.com

BUSINESS DEVELOPMENT EUROPE Rory Dear  rory.dear@opensysmedia.com +44 (0)7921337498

WWW.OPENSYSMEDIA.COM

PRESIDENT Patrick Hopper  patrick.hopper@opensysmedia.com

EXECUTIVE VICE PRESIDENT John McHale  john.mchale@opensysmedia.com

EXECUTIVE VICE PRESIDENT Rich Nass  rich.nass@opensysmedia.com

CHIEF FINANCIAL OFFICER Rosemary Kristoff  rosemary.kristoff@opensysmedia.com
GROUP EDITORIAL DIRECTOR John McHale  john.mchale@opensysmedia.com
VITA EDITORIAL DIRECTOR Jerry Gipper  jerry.gipper@opensysmedia.com
TECHNOLOGY EDITOR Mariana Iriarte  mariana.iriarte@opensysmedia.com
CREATIVE PROJECTS Chris Rassiccia  chris.rassiccia@opensysmedia.com

PROJECT MANAGER Kristine Jennings  kristine.jennings@opensysmedia.com

SOCIAL


Facebook.com/Embedded.Computing.Design

@Embedded_comp

LinkedIn.com/in/EmbeddedComputing

Pinterest.com/Embedded_Design/

Instagram.com/Embedded Computing

youtube.com/user/VideoOpenSystems

SENIOR EDITOR Sally Cole  sally.cole@opensysmedia.com

14  ACCES I/O Products, Inc. – PCI Express Mini Card, mPCIe Embedded I/O solutions
17  Critical Link – Mitysom-A10S
1   Digikey – Development Kit Selector
35  Embedded World – Exhibition & Conference ... it's a smarter world
25  Lauterbach Inc – Trace 32
11  PEAK-System Technik GmbH – You CAN get it …
31  PICMG – Join the PICMG IIoT specification effort
7   Sintrones Tech Corp – Intelligent transportation systems
36  Virtium LLC – Balance is everything
18  Wind River Systems, Inc. – A catalyst for changing the embedded development paradigm
1   Wind River Systems, Inc. – Accelerating the evolution of critical infrastructure from automated to autonomous
9   WinSystems, Inc. – Embed success in every product

FINANCIAL ASSISTANT Emily Verhoeks  emily.verhoeks@opensysmedia.com
SUBSCRIPTION MANAGER subscriptions@opensysmedia.com
CORPORATE OFFICE 1505 N. Hayden Rd. #105 • Scottsdale, AZ 85257 • Tel: (480) 967-5581

REPRINTS WRIGHT’S MEDIA REPRINT COORDINATOR Wyndell Hamilton  whamilton@wrightsmedia.com (281) 419-5725

www.embedded-computing.com

EMBEDDED COMPUTING DESIGN ADVISORY BOARD
Ian Ferguson, ARM
Jack Ganssle, Ganssle Group
Bill Gatliff, Independent Consultant
Andrew Girson, Barr Group
David Kleidermacher, BlackBerry
Jean LaBrosse, Silicon Labs
Scot Morrison, Mentor Graphics
Rob Oshana, NXP
Kamran Shah, Silicon Labs

Embedded Computing Design | Spring 2019



CONTENTS

Spring 2019 | Volume 17 | Number 1

FEATURES
6   2019 Embedded Processor Report: The evolving state of signal processing
    By Brandon Lewis, Editor-in-Chief
10  The return to analog
    By Gene Frantz, Octavo Systems
12  Using a memory protection unit with an RTOS
    By Jean Labrosse, Silicon Labs
16  Embedded hypervisors aren’t new, but …
    By Colin Walls, Mentor Graphics, Embedded Division
20  A primer on the battery management system (BMS) for powertrain electrification
    By Majeed Ahmad, Contributing Editor
22  USB-C power delivery: Charging, conversion, and emerging applications
    Q&A with Andrew Cowell, Renesas Electronics Corporation
24  The “relativity” of high-Q capacitors
    By Jeff Elliott for Johanson Technology
28  Exploring embedded machine learning
    By Curt Schwaderer, Technology Editor
32  EMBEDDED WORLD PROFILES
    Crystal Group, Dolphin Interconnect Solutions, Wind River, Connect Tech, Technologic Systems

COLUMNS
5   TRACKING TRENDS: RISC-V: Too Open to Fail
    By Brandon Lewis, Editor-in-Chief
34  MUSINGS OF A MAKERPRO: Challenges of building an omni-wheel robot
    By Jeremy S. Cook, Contributing Editor

COVER
The 2019 Embedded World issue showcases embedded tools and solutions for those designing in the areas of industrial control, edge computing, autonomous machines, and more (profiles start on page 32). Also in this issue: RTOS for memory protection units, applications for USB-C, embedded machine learning, the evolution of signal processing, and the “return to analog.”

WEB EXTRAS
Blog series: Why You Didn't Get the RTOS Business & Other Software Stories
By Colin Walls of Mentor, a Siemens business
https://bit.ly/2R6A3NC

Embedded Insiders Podcast: CES 2019: Attendance Slightly Down; Innovation Way, Way Up
With Rich Nass and Brandon Lewis, ECD, with special guest Louis Parks
https://bit.ly/2CB7L90

EVENTS
APEC 2019 (Applied Power Electronics Conference)
March 17-21, 2019, Anaheim, CA
www.apec-conf.org

Embedded Technologies Expo & Conference/Sensors Expo & Conference 2019
June 25-27, 2019, San Jose, CA
www.sensorsexpo.com

opsy.st/ECDLinkedIn | @embedded_computing | @embedded_comp

Published by: OpenSystems Media

2019 OpenSystems Media® © 2019 Embedded Computing Design. All registered brands and trademarks within Embedded Computing Design magazine are the property of their respective owners. ISSN: Print 1542-6408, Online 1542-6459


TRACKING TRENDS

brandon.lewis@opensysmedia.com

RISC-V: Too Open to Fail

By Brandon Lewis, Editor-in-Chief

Open-source RISC initiatives have mostly failed to live up to expectations. OpenRISC, for example, introduced an ISA and architectural description for a family of 32- and 64-bit processors, but that resulted in only one implementation and limited adoption outside of academia. Another RISC-based open-source hardware project, OpenSPARC, has also had limited success. Therefore, it shouldn’t be surprising that optimism around the RISC-V ISA has been guarded. However, there are several cascading factors that make this particular open-source hardware technology too open to fail.

1. From licensing fees to licensing “frees”

With any free and open source technology, the first and most obvious reason RISC-V is attractive is cost. But what makes “free” particularly poignant in the case of RISC-V is the cost of alternatives. At this point it is common knowledge that licensing even the simplest Arm Cortex-M0 CPU cores for commercial use starts at around $40,000, with a 1-2 percent royalty charge added for every chip sold. Based on feedback from customers, Dan Ganousis of RISC-V IP provider SiFive and formerly of Codasip says “the total cost of an -M0 DesignStart license when royalties are included is a minimum of $370,000 up front” because Arm values royalties at a minimum of $330,000.

Of course, “free” anything is never really free at all; you have to account for the extra development work associated with the RISC-V ISA and any of the derivative cores that have been open sourced. Then again, smart people can make $370,000 go a long way.

2. No more Mr. Moore, more M&A, and Sand Hill Road stays dry

When considering the groundswell of support for RISC-V, it’s important to note that several related industry trends are contributing to its success. One that shares a direct link with high IP licensing costs is that Sand Hill Road just isn’t swiping the platinum VC card for semiconductor startups anymore.
How could you blame them? The cost of developing even the most rudimentary chip starts at around $1 million when you factor in hardware and software engineering, IP licensing, tools, tapeout, and testing. And that’s before you get to chip production, which requires silicon, packaging, assembly, another round of testing, and – of course – those royalties.

Meanwhile, Arm and x86 architectures have dominated major electronics markets for so long that there is little room left for innovation. In the embedded space, every major chip vendor offers basically the same portfolio of Arm-based processors with a specialized core here or there and different smatterings of peripherals. Moore’s Law, the foundation of Intel’s x86 business strategy, is over. If you’re looking for proof, look no further than the more than $250 billion in semiconductor M&A activity since 2015. Some of these have been multibillion-dollar deals designed to keep shareholders happy in the near- to mid-term, while many others have been fliers on startups in perceived growth areas. Neither is a sign of notable organic innovation.

3. The “nation-state” processor

Every advanced economy in the world today is rooted in electronics. Even if high tech isn’t one of the country’s leading industries, virtually every sector – from communications, finance, and healthcare to energy, manufacturing, and transportation – is based on capabilities enabled by electronic systems. It is therefore in the interest of governments to protect electronic capabilities, both now and in the future. One way of ensuring this is to reduce dependence on foreign technology and bring the electronics development onto domestic soil. In the event of extended trade wars or worse, electronic self-reliance can be the difference between a position of weakness and a position of strength.

For years, India’s government has funded the development of six 32- and 64-bit RISC-V CPUs at the Indian Institute of Technology Madras and Center for Development of Advanced Computing. In fact, Krste Asanović, chief architect and cofounder of SiFive and one of the original developers of the ISA at U.C. Berkeley, says the country has “decided on RISC-V as national ISA for India.”

Too open to fail

Given the number of solid architectures that have faded into obscurity, sometimes the stars have to align for a technology to be successful. For some, RISC-V is the answer to Arm that Arm was to Intel, or – in the software context – what Linux was to Windows; for others, it offers a new path towards innovation. For others still, it can provide insurance against turbulent political and economic times. Whatever the reason, RISC-V is too open to fail.



EMBEDDED PROCESSING

2019 Embedded Processor Report: The Evolving State of Signal Processing By Brandon Lewis, Editor-in-Chief

The DSP revolutionized the field of signal processing back in the 1980s by reducing noise, improving accuracy, and easing programming for engineers working with analog signals. Over the next few decades, DSPs advanced to provide greater performance, floating-point computation, and extreme optimization for specific types of workloads.

Then the IP licensing model took off and embedded developers quickly realized the versatility and cost benefits of off-the-shelf CPU cores. Just the CPU and some application-specific peripherals sufficed for many applications, while more highly specialized systems could integrate chips with soft cores for functions like FFTs, encoding, and decoding.

Fast forward, and many of today’s IoT and machine learning devices require a blend of old and new. Not only do these systems benefit from the low cost and flexibility enabled by general-purpose CPUs, they also need advanced signal processing capabilities to clean analog signals and perform highly precise and power-efficient fast MAC operations.


So where does that leave DSP solutions today? “It’s no secret that the market for discrete DSPs has not flourished like other, more heterogeneous processor types through the last several years,” says Dan Mandell, senior analyst for IoT and Embedded Technology at VDC Research. “The biggest players in embedded DSPs such as Analog Devices, NXP, and Texas Instruments have all placed a much greater focus on their offerings for microcontrollers, which often provide cost-effective mixed-signal processing and control at a low cost for high volumes.

“However, DSPs provide much greater performance at low power for emerging applications ranging from automotive radar and other imaging or sensing, V2X and general functional communications, machine vision, video surveillance, and more,” Mandell continues. “Automotive and industrial applications requiring mid- to high-performance fixed- and floating-point signal processing are looking more and more towards DSP solutions.

“We are seeing growing demand for DSPs again in a number of industries,” he adds.




A DSP architecture for every use case

One reason for the decline in discrete DSP solutions over the past two decades is that low-cost CPU cores have become increasingly proficient in handling signal processing tasks themselves. For instance, Arm’s Cortex-M4, Cortex-M7, Cortex-M33, and Cortex-M35P processors include DSP extensions and an optional floating-point unit (FPU). The cores also support 2 x 16-bit or 4 x 8-bit SIMD instructions, enabling parallelism in the computation of video, audio, speech and other signals.

“The performance level of today’s microprocessors can easily handle many signal processing workloads that have been, in the past, relegated to specialized DSPs,” says Rhonda Dirvin, senior director of marketing programs for the Embedded and Automotive Line of Business at Arm. “There is a growing need to have these processing capabilities on low-power, lower-cost microcontrollers.

“There are unique use cases out there today that play well to the microcontroller space,” she continues. “For example, the always-on keyword spotting of smart speakers is a very good use case. The microcontroller is running various echo- and noise-cancellation algorithms, and once the keyword is detected, it can use microphone beamforming to gather the rest of the audio more clearly. This scenario requires the low-power nature of a microcontroller in addition to the processing capabilities for signal processing.”

For moderate signal processing workloads, integrating specialized DSPs alongside CPU cores in a custom SoC is an increasingly popular option. Mike Demler, senior analyst at the Linley Group, notes that CEVA’s recently released CEVA-BX hybrid DSPs bring higher performance digital signal processing capabilities alongside CPU cores that are “equivalent or even superior to some Arm cores in [the EEMBC’s] CoreMark per MHz” rating. Demler also notes that Synopsys has continually enhanced the DSP capabilities of its ARC cores in recent years, most notably by adding DSP options to its ARC HS CPU family.

A good indicator of the trend toward heterogeneous compute architectures is at Texas Instruments, one of the original pioneers of digital signal processing. Today, SoCs like the Sitara AM57x bring a heterogeneous multicore design based on a combination of Arm Cortex-A15 application processors, C66x Series DSPs, real-time microcontrollers, GPUs, and machine learning accelerators that can all be tuned for various tasks.




“Heterogeneous architectures that have both Arm and DSP processors are becoming increasingly popular where high computation blocks can be offloaded to the DSP,” says Mark Nadeski, marketing manager for Catalog Processors at Texas Instruments. “Optimized performance to reduce cost or power are still key care-abouts in the embedded space. This can often be done by offloading work to a DSP.”

Solving the programming problem

One potential area of concern with heterogeneous architectures is a lack of programming familiarity within the embedded-engineering workforce. On one hand, engineers trained when discrete DSPs were the rage are beginning to age into retirement, which could result in a lack of experience among the general-purpose-educated developers who remain.

To supplement signal processing development on its general-purpose CPU platforms, Arm offers a suite of CMSIS-DSP libraries that include basic math functions like vector add/multiply; fast math functions like sine, cosine, and square root; transforms, such as FFT functions; matrix functions; filters; motor control functions; and so on. However, Dirvin realizes that “there is still a gap in the general developer community on knowing which algorithms to use where and how to apply those algorithms in their system.” In response, Arm and its ecosystem partners offer tools that run “what-if” scenarios, provide code generation, and build coefficient tables to help engineers work with the various algorithms. MathWorks’ MATLAB, for example, offers a graphical tool that can help developers ease into the intricacies of signal processing. Figure 1 shows an FIR filter running on an Arm Cortex-M device to filter two sine waves of different frequencies.

FIGURE 1  MathWorks’ MATLAB is one of many tools that helps ease the development of signal processing applications. Shown here in MATLAB is a FIR filter running on an Arm Cortex-M device.

“The good news is that there are a lot of software tools that make developing applications easier than ever, but there’s still no replacing an analog/signal-processing or neural-network expert for the tough problems,” Demler observes.

Programming discrete DSPs has also evolved over the years and become easier for designers. Texas Instruments’ C66x family of DSPs, for example, can be programmed entirely in C code or with C libraries, Nadeski says. “The C compiler is so efficient that programming at the assembly level isn’t necessary. For times when programming at the assembly level is necessary, designers can use optimized libraries that are callable through an open software architecture like OpenCL,” he explains.

Deep neural nets and next-gen signal processing

Precision inputs are pivotal to many of the advanced applications mentioned earlier and have enabled a renaissance of sorts in DSP technology, and currently no field of engineering requires higher-accuracy signals than artificial intelligence and machine learning. Because the market for neural network processing is still in its infancy, DSPs have an opportunity to provide stopgap functionality in the short term and the potential to carve out a larger niche in the future.

“Traditional AI and ML algorithms like decision trees, random forests, and so on require a lot of if/then-type processing, which fits well on general purpose processors,” Nadeski says. “However, neural networks are needed to perform deep learning – the next evolution of AI and ML – functions, and are heavily based on convolutions. These run much more efficiently on DSPs and dedicated accelerators like the Embedded Vision Engine (EVE) subsystems available in the Sitara AM57x processors than they do on general purpose processors.
“As neural networks work their way into more and more embedded products, architectures like DSPs that can perform efficient implementations of a neural network inference may become more popular,” Nadeski suggests.



ROBUST IIOT SOLUTIONS

Embed Success in Every Product WINSYSTEMS’ rugged, highly reliable embedded computer systems are

designed to acquire and facilitate the flow of essential data at the heart of your application so you can design smarter solutions. We understand the risk and challenges of bringing new products to market which is why technology decision makers choose WINSYSTEMS to help them select the optimal embedded computing solutions to enable their products. As a result, they have more time to focus on product feature design with lower overall costs and faster time to market. Partner with WINSYSTEMS to embed success in every product and secure your reputation as a technology leader.

817-274-7553 | www.winsystems.com 715 Stadium Drive, Arlington, Texas 76011 ASK ABOUT OUR PRODUCT EVALUATION!

Single Board Computers | COM Express Solutions | Power Supplies | I/O Modules | Panel PCs

EBC-C413: EBX-compatible SBC with Intel® Atom™ E3800 Series Processor
EPX-C414: EPIC-compatible SBC with Intel® Atom™ E3800 Series Processor
PX1-C415: PC/104 Form Factor SBC with PCIe/104™ OneBank™ expansion and latest generation Intel® Atom™ E3900 Series processor


EMBEDDED PROCESSING

The return to analog By Gene Frantz, Octavo Systems

Can today’s integrated circuit technology resolve the problems inherent to analog signal processing? If the answer is yes, perhaps we can usher in a new era of explosive growth in new applications enabled by signal processing.

The concept of processing interesting signals is not new. It has been around as long as we, as humans, have recognized we could manipulate the world around us to make life easier. If I quickly review the last century with respect to signal processing, I can split it into an era of analog signal processing superseded by the present era of digital signal processing. The event that caused this abrupt move from analog to digital signal processing? The invention of the microprocessor, followed by the invention of the digital signal processor.

The success of the digital era of signal processing – the era we’re presently in – has to do with:

›› Advancements made in the theory of signal processing
›› The creation and advancements of the digital signal processor
›› The applications of signal processing theory and the digital signal processing elements

We now seem to be at the end of the road. But the good news is that new applications begging for signal processing solutions have continued to expand. Unfortunately both the theory and processing capability have lagged the opportunities. Specifically, the raw performance of the signal processor has lagged and the body of signal processing theory has not seemed to have found a way around the hardware lag. The issue with the signal processor hardware is that the digital processor architectures do not provide enough raw performance to tackle the new opportunities. And when enough raw performance is assembled, the resulting power dissipation becomes problematic, not to mention carrying significant cost issues.


But these are exciting problems, motivating us as researchers and industrialists to explore new avenues. Or as I put it, we are asking ourselves new questions for which there are no answers – or at least no answers yet. One interesting area to explore requires us to look back rather than forward. That is, can we revisit the world of analog signal processing to make that next advancement? Aha, the first question which has no answer.

Now for a bit of history: Analog computing was a casualty of the digital computer, specifically, around 1970 when the microprocessor was invented. This was followed about a decade later with the invention of the digital signal processor (DSP). The digital revolution permanently eliminated the need for analog computers (note I have said “analog computers” rather than “analog signal processing”). We (yes, I was one of them) in the world of DSP convinced the industrial and academic worlds that there was no longer


Octavo Systems | www.octavosystems.com | Twitter: @octavosystems | LinkedIn: www.linkedin.com/company/octavo-systems-llc/

Analog Devices | www.analog.com | Twitter: @ADI_News | LinkedIn: www.linkedin.com/company/analog-devices/



a need for analog, as digital computational elements could solve all of the problems that existed in an analog processing system:

›› Noise
›› Accuracy
›› Dynamic range
›› Linearity
›› Ease of programming
›› Reliability

But it is time we revisit the world of analog computing. There have been amazing advancements made in signal processing theory and in integrated circuit technology. But it seems we are conducting ourselves similarly to attendees at a junior-high dance, where the signal processing people are on one side of the gym and the IC architects are on the other side of the gym. No one wants to walk across the gym and ask someone to dance. The amazing aspect of this situation is the rapidly expanding number of new applications that are begging the two groups to dance with each other. Sadly, no one is making the first move.


A few months ago I was on a panel discussion at the International Conference on Acoustics, Speech and Signal Processing (ICASSP), the premier IEEE conference on signal processing. My opening remarks were on just this subject. My goal was to get the signal processing research community away from their wall in the gym and begin to solve the issues related to an analog signal processing element. The obvious next move is to try to pry the IC architects away from their respective gym wall. With this mental picture in mind, let me put the technology in perspective. It is the multiply function which limits both the raw performance of signal processing and the battery life of the DSP. A cursory comparison of an analog multiplier to a digital multiplier suggests that the raw performance of an analog multiplier could be many orders of magnitude higher in raw performance while experiencing several orders of magnitude lower power dissipation. In addition, the purchase and operational cost could be significantly lower. It all seems too good to be true.


But we won’t return to analog processing concepts until we resolve the problems associated with analog signal processing that I mentioned previously. The challenge is – in light of new applications that need higher performance at lower power dissipation – can we, with today’s integrated circuit technology, resolve the problems inherent to analog signal processing? If so, we can be at the start of a new era of explosive growth of new applications that signal processing enables. My answer to the challenge is a resounding “Yes, we can!” if we can get the dance started. Gene Frantz is a cofounder and current chief technology officer of Octavo Systems. He is also a Professor in the Practice at Rice University in the Electrical and Computer Engineering Department. Previously, Gene was the Principal Technology Fellow at Texas Instruments where he built a career finding new opportunities and building new businesses to leverage TI’s DSP technology. Gene holds 48 patents, has written over 100 papers/articles, and presents at conferences around the globe. He has a BSEE from the University of Central Florida, a MSEE from Southern Methodist University, and an MBA from Texas Tech University. He is also a Fellow of the IEEE. www.embedded-computing.com





EMBEDDED SOFTWARE

Using a memory protection unit with an RTOS By Jean Labrosse, Silicon Labs

Memory-protection units (MPUs) have been available for years on processors such as the Cortex-M, and yet embedded developers shy away from using them. Is it because they aren’t useful? Is it because MPUs are complex devices? Do they add too much overhead?

Let’s start off with a brief description of what an RTOS is and then show how an MPU fits into the picture. An RTOS (or real-time kernel) is software that manages the time of a central processing unit (CPU) or a microprocessing unit (MPU) as efficiently as possible. Most RTOSs are written in C and require a small portion of code written in assembly language to adapt the RTOS to different CPU architectures.

When you design an application (your code) with an RTOS, you simply split the work into tasks, each responsible for a portion of the job. A task (also called a thread) is a simple program that thinks it has the CPU all to itself. Only one task can execute at any given time on a single CPU. Your application code also needs to assign a priority to each task based on the task importance and a stack (RAM) for each task. In fact, adding low-priority tasks will generally not affect a system’s responsiveness to higher-priority tasks. A task is also typically implemented as an infinite loop. The RTOS is responsible for the management of tasks. This is called multitasking.


Multitasking = scheduling Multitasking is the process of scheduling and switching the CPU between several sequential tasks. It provides the illusion of having multiple CPUs and maximizes the use of the CPU, as shown in Figure 1. Multitasking also helps in the creation of modular applications. With an RTOS, application programs are easier to design and maintain. Most commercial RTOSs are preemptive, which means that the RTOS always runs the most important task that is ready to run. Preemptive RTOSs are also event-driven, which means that tasks are designed to wait for events to occur to execute. For example, a task can wait for a packet to be received on an Ethernet controller; another task can wait for a timer to expire, and yet another can wait for a character to be received on a UART. When the event occurs, the task executes and performs its function, if it becomes the highest-priority task. If the event that the task is waiting for does not occur, the RTOS runs other tasks. High Priority

FIGURE 1: The RTOS decides which task the CPU will execute based on events (signals/messages from tasks or ISRs).

Embedded Computing Design | Spring 2019

www.embedded-computing.com


Silicon Labs | www.silabs.com | Twitter: @siliconlabs | Facebook: www.facebook.com/siliconlabs | LinkedIn: www.linkedin.com/company/siliconlabs/ | YouTube: www.youtube.com/user/ViralSilabs

EMBEDDED SOFTWARE

A task waiting for an event consumes zero CPU time; signaling and waiting for events are accomplished through RTOS API calls. RTOSs thus allow you to avoid polling loops, which would be a poor use of the CPU’s time. The example below shows how a typical task is implemented:
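A minimal sketch of such a task follows. The API name `os_event_wait` is hypothetical, not from any particular RTOS; on a real kernel the call would block the task (consuming no CPU time) until a task or ISR signals the event, and the loop would be a true `for (;;)`. Here the wait is stubbed to deliver a fixed number of simulated events so the sketch runs to completion.

```c
#include <stdbool.h>

/* Stub for a blocking RTOS wait call (hypothetical name). A real RTOS
   would suspend the task here until an ISR or another task signals
   the event; this demo just delivers a fixed number of events. */
static int pending_events;

static bool os_event_wait(void)
{
    if (pending_events == 0)
        return false;                 /* demo only: a real task never returns */
    pending_events--;
    return true;
}

/* A typical task body: a loop that waits for an event, then does its
   work. Returns the number of events processed (demo only). */
static int uart_task(void)
{
    int processed = 0;
    while (os_event_wait()) {         /* for (;;) { wait; work; } on a real RTOS */
        processed++;                  /* e.g., handle one received character */
    }
    return processed;
}
```

The key point is that all CPU time between events is available to other tasks, because the wait call suspends the task instead of spinning.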

The event that the task waits for can be triggered either from a task or a peripheral device interrupt handler through an RTOS API call. The API would typically run the RTOS scheduler, which would then decide to either switch to a new, more important task or simply resume the interrupted task (if the event was from an interrupt). An RTOS provides many useful services to a programmer, such as multitasking, interrupt management, intertask communication and signaling, resource management, time management, memory partition management and more. An RTOS can be used in simple applications where there are only a handful of tasks, but it’s a must-have tool in applications that require complex and time-consuming communication stacks, such as TCP/IP, USB (host and/or device), CAN, Bluetooth, Zigbee,

FIGURE 2: Shown is an RTOS and application code running with full privileges.

FIGURE 3: Separating an application into multiple processes.

and more. An RTOS is also highly recommended whenever an application needs a file system to store and retrieve data, as well as when a product is equipped with some sort of graphical display (black-and-white, grayscale, or color). Finally, an RTOS provides an application with valuable services that make designing a system easier.

For performance reasons, most RTOSs are designed to run application code in privileged (or supervisor) mode, thus allowing those applications full control of the CPU and its resources. This is illustrated in Figure 2, where all tasks and ISRs have unrestricted access to memory and peripherals. Unfortunately, this implies that application code can corrupt the stacks or variables of other tasks, either accidentally or purposely. In addition, allowing any task or ISR full access to all peripherals can have dire consequences.

What is an MPU?
An MPU is hardware that limits access to memory and peripheral devices to only the code that needs to access those resources. It enhances both the stability and safety of embedded applications and is thus often used in safety-critical applications such as medical devices, avionics, industrial control, and nuclear power plants. MPUs are now finding their place in the IoT because limiting access to memory and peripherals can also improve product security. Specifically, crypto keys can be hidden from application code, preventing access by an attacker. Isolating the flash memory controller with the MPU can also prevent an attacker from changing an application, thus allowing only trusted code to perform code updates.

With the help of an MPU, RTOS tasks are grouped into processes, as shown in Figure 3. Each process can consist of any number of tasks. Tasks within a process are allowed to access memory and peripherals that are allocated to that




process. However, as far as a task is concerned, it doesn’t know that it’s part of a process, except for the fact that it’s given access to the same memory and I/Os as the other tasks within that process. When you add an MPU, very little has to change from a task’s perspective, since your tasks should already be designed so that they don’t interfere with one another unless they have to.


Figure 3 shows that processes can communicate with one another through shared memory. In this case, the same region(s) would appear in the MPU table for both processes. An application can also contain system-level tasks as well as ISRs that have full privileges, thus allowing them access to any memory location, peripheral devices, and the CPU itself.


If a task attempts to access a memory location or a peripheral outside of its sandbox, then a CPU exception is triggered, and the exception handler can terminate the task or all tasks belonging to the process.

Exactly what happens when such a violation occurs depends greatly on the application and, to a certain extent, on which task causes the violation. For example, if the violation is caused by a graphical user interface (GUI), then terminating and restarting the GUI might be acceptable and might not affect the rest of the system. However, if the offending task is controlling an actuator, the exception handler might need to immediately stop the actuator before restarting the task. Ideally, access violations are caught and corrected during product development; otherwise, the system designer will need to assess all possible outcomes and decide what to do when a violation happens in the field. Recovering from an MPU violation can get quite complicated.

In RTOS-based applications, each task requires its own stack. Stack overflows are one of the most common issues facing developers of RTOS-based systems. Without hardware assistance, stack overflow detection is done by software but is unfortunately rarely caught in time, which potentially makes the product unstable, at best. The MPU can help protect against stack overflows but, unfortunately, it’s not ideal.
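The violation-handling policy sketched in the text (restart a GUI, stop an actuator first) could be captured in a simple decision function. Everything below is illustrative: the task classes, action names, and policy details are assumptions, not part of any particular RTOS.

```c
/* Hypothetical task classes and recovery actions, per the policy in the
   text: a GUI task may simply be restarted; a task driving an actuator
   must stop the actuator first; anything else terminates its process. */
typedef enum { TASK_GUI, TASK_ACTUATOR, TASK_OTHER } task_class_t;
typedef enum { RESTART_TASK, STOP_ACTUATOR_THEN_RESTART, TERMINATE_PROCESS } action_t;

/* Called from the MPU access-violation exception handler to decide
   how to recover, based on which task caused the fault. */
static action_t handle_mpu_violation(task_class_t offender)
{
    switch (offender) {
    case TASK_GUI:      return RESTART_TASK;
    case TASK_ACTUATOR: return STOP_ACTUATOR_THEN_RESTART;
    default:            return TERMINATE_PROCESS;
    }
}
```

In a real product this decision table would come from the system safety analysis, not from a hard-coded switch, but the structure is the same: the exception handler maps the offender to a recovery action.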



FIGURE 4: Pictured: The MPU region (Red Zone) used to detect stack overflows.

The addresses that a process can access are defined by a table that’s loaded into the MPU when the RTOS switches in a task. The table simply defines the memory (or I/O) ranges (called regions) that a task is allowed to access, as well as attributes associated with those regions. Attributes for a region may specify whether a task is allowed to read or write the region, whether it may only read it, or whether code may be executed from it (the eXecute Never attribute, i.e., XN, forbids execution). The eXecute Never attribute is highly useful because it can prevent code from executing out of RAM, reducing the ability of hackers to perform code-injection attacks. The number of entries in the table depends on the MPU.

As shown in Figure 4, an MPU region can be used to detect stack overflows. In this case, a small region is used to overlay the bottom of each task stack, and the MPU attributes are configured so that if any code attempts to write to that region, the MPU generates an exception. The size of the region determines how effective this technique is at catching a stack overflow: the larger the region, the better the chance of catching an overflow, but also the more RAM that is unavailable for your stack. In other words, the Red Zone in the figure is considered unusable memory, because it’s used only to detect illegal writes. A good starting point for the Red Zone size would be 32 bytes. If your task stack is 512 bytes, then 32 bytes represents only about 6 percent, leaving 480 bytes of usable stack space.

Because of the fairly limited number of regions available in an MPU, regions are generally set up to prevent access to data (in RAM) and not so much to prevent access to code (in flash). However, if your application doesn’t use all the regions, security can also be improved by limiting access to code.

FIGURE 5: The MPU configuration is updated on a context switch.

The process table is typically assigned to a task when the task is created. The RTOS simply keeps a pointer to this table in the task’s control block (TCB). As shown in Figure 5, a context switch now includes additional code to update the MPU with the process table of the task being switched in. Notice that the MPU configuration doesn’t need to be saved when a task is switched out, since the configuration for the task is always loaded from the table.

In summary, an MPU is hardware that limits access to memory and peripheral devices to only the code that needs those resources. Tasks are grouped into processes that are isolated from one another. If a task attempts to access a memory location or a peripheral device outside of its allotted space, a CPU exception is triggered and, depending on the application, the offending task or the whole process can be terminated. The MPU can be used to detect stack overflows, but each task then needs to give up a small portion of its own stack to serve as a Red Zone. Each process is defined by a process table; a pointer to the process table is saved in the task’s TCB when the task is created, which allows the RTOS to load the MPU with the task’s process table when the task is switched in. This operation obviously consumes extra CPU clock cycles, which adds to the context switch time.
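The process-table-per-TCB arrangement might look like the sketch below. All of the structures, field layouts, and the `mpu_load_region` function are hypothetical, loosely modeled on what the text describes (a region has a base address, a size, and access attributes such as read/write permission and the XN bit), not on any real MPU register map.

```c
#include <stdint.h>
#include <stddef.h>

#define MPU_MAX_REGIONS 8                 /* 8 regions is a common MPU size */

/* One MPU region: base address, length, and access attributes. */
typedef struct {
    uint32_t base;
    uint32_t size;
    uint32_t attr;                        /* e.g., read/write and XN bits */
} mpu_region_t;

/* The "process table": the regions the process's tasks may access. */
typedef struct {
    size_t              n_regions;
    const mpu_region_t *regions;
} process_t;

/* Task control block keeps a pointer to its process table. */
typedef struct {
    const process_t *process;
    /* ...saved stack pointer, priority, etc... */
} tcb_t;

/* Stub standing in for writes to the MPU's region registers. */
static mpu_region_t mpu_hw[MPU_MAX_REGIONS];
static void mpu_load_region(size_t slot, const mpu_region_t *r) { mpu_hw[slot] = *r; }

/* Called during a context switch: reload the MPU from the incoming
   task's process table. Nothing is saved for the outgoing task, since
   its configuration is always reloaded from its own table. */
static void mpu_switch_in(const tcb_t *next)
{
    for (size_t i = 0; i < next->process->n_regions && i < MPU_MAX_REGIONS; i++)
        mpu_load_region(i, &next->process->regions[i]);
}
```

This also makes the cost visible: the per-switch overhead is one register write per region, which is the extra context-switch time the article mentions.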
Generically speaking, extending an RTOS to use an MPU seems to be quite straightforward; in practice, however, there are quite a few issues to consider. That’s a subject for another article.

Jean Labrosse founded Micrium in 1999 and continues to maintain an active role in product development as a software architect at Silicon Labs, ensuring that the company adheres to the strict policies and standards that make its RTOS products strong. Jean is a frequent speaker at industry conferences, the author of three definitive books on embedded design, and the designer of the uC/OS series of RTOSs. He holds BSEE and MSEE degrees from the University of Sherbrooke, Quebec, Canada.




Embedded hypervisors aren’t new, but …
By Colin Walls, Mentor Graphics, Embedded Division

Some technologies, it seems to me, should not really exist. They do exist, however, because they address a specific need. Typically, such technologies stretch something to make it perform in a way that was not originally intended. An example would be the fax machine: Fax came about as a novel way to use phone lines to move documents around; as soon as email became widespread, though, fax disappeared almost overnight.

The technology that I have in mind today is hypervisors: a software layer that enables multiple operating systems (OSs) to run simultaneously on one hardware platform. Hypervisors aren’t really a new technology; the first recognizable products were introduced on mainframe computers nearly 50 years ago. The incentive at that time was to make the best use of costly resources: the expensive hardware needed to be used efficiently to be economic, and downtime was expensive. An interesting irony is that IBM’s early virtualization software was distributed in source form (initially with no support) and modified/enhanced by users, many years before the open-source concept was conceived. Now, hypervisors are increasingly relevant to embedded developers.

The first question to ask when looking at the capabilities of any technology is: Why? So let’s ask: What’s the benefit of running multiple OSs on one piece of hardware, bearing in mind that this introduces significant complexity? The most important answer is security. A hypervisor provides a strong layer of insulation and protection between the guest OSs, ensuring that there’s no possibility of one multithreaded application interfering with another. A secondary, but still significant, motivation to run multiple OSs is IP reuse. Imagine that there is some important software IP available for Linux that you want to use in your design. However, your device is real time, so an RTOS makes better sense. If multicore is not


an option for you, using a hypervisor is the way forward, so that you can run both Linux and your RTOS.

Hypervisors broadly come in two flavors, which are imaginatively named Type 1 and Type 2. Type 1 hypervisors run on bare metal; Type 2 hypervisors require an underlying host OS. Type 1 makes the most sense for the majority of embedded applications. One note of caution: I attended a session recently where the speaker referred to Types 0, 1, and 2. Clearly, care and standardization are needed with this terminology.

There are three broad application areas for embedded hypervisors:

Automotive: In this context, there’s the possibility for infotainment software, instrument-cluster control, and telematics to all be run on one multicore chip. As a mixture of OSs is likely to be needed, such as an RTOS for instrumentation and GPS and Linux for audio, a hypervisor makes sense.

Industrial: For industrial applications (factories, mines, power plants, etc.) there’s typically a need for real-time control and sophisticated networking (like what’s available in Linux). In addition, in recent years there’s been increasing concern about cyberattacks on, or other introduction of malware into, control systems. A hypervisor is a good way to separate systems and maintain security.

Medical: Medical systems introduce some new challenges. Typically, there’s a mixture of real-time (patient monitoring and treatment control) and non-real-time (data storage, networking, and user interface) functionality, so a hypervisor initially looks attractive. Patient-data confidentiality is critical, so the security side of a hypervisor becomes significant. Lastly, the ability to completely separate the parts of the system that require certification (normally the real-time parts) makes a hypervisor compelling.

I said that a hypervisor enables multiple OSs on one hardware platform, implying that this meant a single processor.
In fact, many hypervisor products support the use of multiple CPUs, with the hypervisor providing overall supervision and inter-OS communication. This is becoming the most important context in which hypervisors contribute to the design of complex, yet reliable, embedded software.

Colin Walls is an Embedded Software Technologist at Mentor Graphics’ Embedded Software Division.

Mentor Graphics | www.mentor.com


Twitter: @mentor_graphics | Facebook: www.facebook.com/MentorGraphicsCorp | YouTube: www.youtube.com/channel/UC6glMEaanKWD86NEjwbtgfg



ADVERTORIAL

EXECUTIVE SPEAKOUT

MITYSOM-A10S:

ARRIA 10 SYSTEM ON MODULE AND DEVELOPMENT KITS
CRITICAL LINK’S LATEST PRODUCTION-READY, INDUSTRIAL-PERFORMANCE SOM

Open Architecture for User-Programmability
Critical Link’s MitySOM-A10S is an Intel/Altera Arria 10 SoC SOM (system on module) developed exclusively for industrial applications. It is a production-ready, board-level solution that delivers industrial performance and includes a range of configurations to fit your requirements. MitySOM-A10S features up to 480KLE FPGA fabric, dual-core Cortex-A9 32-bit RISC processors with dual NEON SIMD coprocessors, and 12 high-speed transceiver pairs up to 12.5Gbps. The SOMs include on-board power supplies, two DDR4 RAM memory subsystems up to a combined 6GB, a micro SD card, a USB 2.0 on-the-go (OTG) port, and a temperature sensor. The ARM architecture supports several high-level operating systems, including Embedded Linux out of the box. The MitySOM-A10S has been designed to support several upgrade options, including various speed grades, memory configurations, and operating temperature specifications (including commercial and industrial temperature ranges). Customers using the MitySOM-A10S receive free, lifetime access to Critical Link’s technical support site, as well as access to application engineering resources and other services. Critical Link will also provide developers the design files for our base boards, further accelerating design cycles and time to market.

Specifications
› Up to 480KLE FPGA fabric
› Dual-core Cortex-A9 processors
› 4GB DDR4 HPS shared memory
› 2GB DDR4 FPGA memory
› 12 high-speed transceiver pairs, up to 12.5Gbps
› Max 138 direct FPGA I/Os, 30 shared HPS/FPGA I/Os
› Supports several high-level operating systems, including Linux out of the box
› Designed for long life in the field with 24/7 operation (not a reference design)

Flexible, Off-the-Shelf Board-Level Solution for Industrial Applications
Leverage the SoC’s dual-core ARM and user-programmable FPGA fabric to do more embedded processing with 40% less power. Twelve high-speed transceiver pairs combined with Critical Link’s on-board memory subsystems make this SOM well suited for the high-speed processing needs of the most cutting-edge industrial technology products. Example applications include:
› Test and Measurement
› Industrial Automation and Control
› Industrial Instrumentation
› Medical Instrumentation
› Embedded Imaging & Machine Vision
› Medical Imaging
› Broadcast
› Smart Cities / Smart Grid

Why choose a Critical Link SOM?
Critical Link’s support is unmatched in the industry, including our application engineering and online technical resources. We provide production-ready, board-level solutions that deliver industrial performance and include a range of configurations to fit your requirements. With Critical Link SOMs, it’s about time: time to market, time to focus on company IP, and product lifetime.
› Built for long-term production, with 10-15+ year availability
› Proven track record for product performance in the field
› Base board design files and other resources available online at no cost
› Lifetime product maintenance and support
› 100% US-based development and assembly

Visit us at Embedded World (Hall 4, Stand 4-180) for a chance to win a development kit or to discuss design support for your project. Not going to Embedded World? Email us at info@criticallink.com or visit www.criticallink.com.


ADVERTORIAL

EXECUTIVE SPEAKOUT

DIGITAL TRANSFORMATION: A CATALYST FOR CHANGING THE EMBEDDED DEVELOPMENT PARADIGM
By Gareth Noyes, Wind River Chief Strategy Officer

The Internet of Things is dead ... or so declared one of my colleagues recently. While I could dismiss the comment as flippant, it does point to an underlying cynicism about technology that has been nicely captured by Gartner’s eponymous Hype Cycle. As technologists, we often focus on the cool technologies themselves, and then get frustrated, or become dismissive, when the business rationale or required ecosystem lags in maturity, making deployment of these technologies unviable, too complex, or too slow.

Whether you believe that IoT is just hype, that Big Data stumbled out of the gate, that Industry 4.0 will never happen, or that AI is just a fad, the recurring theme that weaves each of these narratives together is the relentless drive toward digital transformation seen across industries. One could accuse digital transformation itself of being a hyped buzzword, though its longevity as a business theme points to some underlying needs that are yet to be fully realized. At its core, digital transformation represents a rethinking of how enterprises use (digital) technology to radically change performance. In recent conversations with customers, digital transformation isn’t usually addressed head-on under that banner. More often, the topic arises tangentially from some other problem we’re exploring, though there are three recurring themes that point to the need to change the way embedded systems are developed.

1. Fixed function to flexible systems
Many embedded or control systems are designed in a monolithic manner; custom hardware is outfitted with a tailored operating system, likely some complex middleware, and hosts one or more applications to perform a specific set of tasks.
The entire device is packaged and sold as a single unit, and an upgrade is performed by replacing the whole unit with a newer generation that has undergone a similar design cycle. Not only is this a cumbersome design approach, requiring re-development and re-testing of a number of non-differentiating components each design cycle, but it is also inflexible when it comes to deploying new features or fixing broken ones (including security updates). Contrast this with the modern approach to enterprise or cloud software development, where applications (or, increasingly, microservices) are developed independently of how or where they will be deployed, accelerating innovation and time-to-value.

2. Automated devices to autonomous systems
Many embedded systems are designed to automate specific tasks. In industrial systems, for example, a Programmable Logic

Controller (PLC) is used to automate manufacturing processes such as chemical reactions, assembly lines, or robotic devices. Generally, these devices perform with a high degree of accuracy, repeatability, and reliability, though they need to be individually programmed to do so and often have little scope for performing outside of their initial design parameters. However, in order to drive productivity increases and impact larger business outcomes, learning systems will increasingly be used, spanning a range of control devices at the cell, plant, or system level. Similar system-level approaches are emerging in autonomous driving applications, where information from multiple subsystems needs to be merged and processed in a central unit running machine learning algorithms for object classification, path-finding, and actuation.

Learning systems will also have a big impact on the type of computing workloads that need to run on edge devices. Traditionally, embedded system design has begun with custom hardware, possibly encompassing customized silicon processors, on which software is layered: a “bottom-up” approach. For machine learning implementations, the process is turned on its head; a defined problem statement determines the best type of learning algorithm to use (for example, an object classification problem may require a different approach than voice recognition), from which the best hardware platform is selected to run the learning framework most efficiently. This may involve selecting CPUs with specific instruction sets or accelerators, or using GPUs or FPGAs alongside traditional processors. In these environments, the software often defines the required hardware platform.

3. Software-defined everything
The advent of autonomous systems will require a shift in system design focus from individual, resource-constrained, bespoke devices to more flexible and programmable environments that can be changed or optimized more globally.
This shift will not only impact the engineering approach to architecting intelligent systems, but also the supply chain, which has long been established in various industries around the production of specific, functional “black boxes,” such as Electronic Control Units (ECUs) in automotive or Distributed Control Systems (DCSs) in industrial applications. Similarly, the skill set required to build these systems will evolve to encompass a much more software-centric aspect. Companies that may have defined their differentiation and captured their value by designing and selling hardware will likely find that they need to develop a rich software competency. This will involve defining a software blueprint, and possibly tools, APIs



and SDKs with which their ecosystem will deliver additional value-add components to an underlying computing platform. The responsibility for integrating middleware or applications from a number of suppliers could shift from the supply chain to the equipment manufacturers themselves, bringing with it a change in the support or liability models.

Modernizing the development paradigm: the IT journey
Enterprise IT systems have undergone a radical transformation in the last couple of decades. At the beginning of my career, I recall using not only mainframe computers, but a plethora of microcomputers, each with its own flavor of operating system. Look under the hood, and you’d find that these computers were powered by unique, and sometimes custom, processor architectures. As desktop PCs and servers emerged, Intel Architecture became the ubiquitous silicon architecture for enterprise IT systems, driving standardization of hardware, development tools, and a vibrant software ecosystem.


This decoupling has allowed developers to quickly develop, deploy, and update applications at enormous scale, without worrying about purchasing or managing any hardware at all. While IT developers can quickly build and deploy hyperscale applications, build upon the knowledge of others by using rich application frameworks, modern development languages, and tools, and use infrastructure that is managed by someone else, embedded developers mostly do not have this luxury. Instead, their development model leaves them struggling to stay apace of rapidly changing silicon architectures, unable to use many of the advances in software development and deployment methodology that their IT counterparts enjoy; as a result they struggle with rapid innovation, affordability of their systems, and product obsolescence. They are not enjoying the advances brought about by the IT journey.

Next, we saw the transformative power of virtualization, which led to consolidation of applications and a drive to higher hardware utilization rates, squeezing yet more efficiency into the IT landscape. While the motivation was initially driven by optimization of local computing resources, de-coupling software from the underlying hardware allowed centralization of computing resources and paved the way for cloud computing.

In order to change this, one must recognize that embedded systems are often very different from IT systems. Issues such as system performance and reliability, cost, resource and timing constraints, intolerance of failures or downtime, and safety needs all place very specific requirements on how systems are built and deployed. However, by recognizing and addressing these requirements, I believe we can start leveraging the advances seen in the IT domain and unlock more efficiency, innovation, and affordability in the way embedded systems are built.

Today, cloud computing has removed the dependency between hardware and software, and applications or individual functions can be written quickly and efficiently, while having great control of underlying computing, storage and networking resources.

Wind River | www.windriver.com | Twitter: @WindRiver | Facebook: @WindRiverSystems


POWER ELECTRONICS

A primer on the battery management system (BMS) for powertrain electrification
By Majeed Ahmad, Contributing Editor

The tipping point for powertrain electrification – electric vehicles (EVs), hybrid electric vehicles (HEVs), and plug-in hybrid vehicles (PHEVs) – is in sight, as modern lithium-ion batteries are able to store and use energy in vehicles at higher power density and lower cost. According to an internal study from automotive chipmaker NXP, 50 percent of vehicles sold across the globe will have some form of electric propulsion by 2030. At the same time, however, lithium battery cells have significant challenges that demand a sophisticated electronic control system. Enter the battery management system (BMS).

According to media reports, “range anxiety” has been the key reason why Volkswagen engineers underestimated the ambitious road to vehicle electrification for so long. While industry observers call it a costly mistake on Volkswagen’s part, the German carmaker claims it’s catching up on electric drive technology with a massive boost in R&D funding. The BMS electronics is a crucial part of powertrain electrification because it monitors and manages the condition of lithium-ion battery cells to ensure safe, reliable, and optimal battery operation. The role of the BMS becomes especially critical amid the harsh and unpredictable environments of automotive design and use.

The anatomy of a BMS
Let’s begin with the battery pack, an array of lithium-ion cells that must be carefully monitored and balanced. The battery cells – in hundreds or even thousands – constitute a battery that generates voltages up to hundreds of volts. The battery passes on the DC voltage to the inverter, which employs AC traction motors to provide acceleration for electric vehicles. (Figure 1.)


Here, on the battery side, a BMS solution carries out three primary functions in vehicle electrification: battery cell monitoring, state of charge (SOC) estimation, and battery cell equalization. Below is a sneak peek into these key building blocks of BMS solutions, which come in different battery pack and powertrain configurations.

›› Battery cell monitoring
The deployment of large batteries offering 400V to 800V systems inevitably demands accurate monitoring of cell voltage. Here, a BMS solution facilitates battery cell monitoring by providing information on current, voltage, temperature, etc. in real-time conditions. That plays a vital role in facilitating early failure detection in electric vehicle batteries. A battery monitoring chip is usually a microcontroller monitoring the voltage of a cell or a group of cells. Furthermore, it typically performs temperature measurement of the battery pack and possibly of the cells themselves. There are usually two major subsystems in the BMS electronics: the cell monitoring controller (CMC) reports voltage and temperature data to the battery monitoring controller (BMC), which passes a data summary to an electronic control unit (ECU) via the CAN bus. There are distributed and centralized system designs when it comes to BMS architectures comprising the CMC and BMC components. The fact that the BMS accurately monitors battery cells by closely tracking the degradation of system performance also allows it to report the state of charge of battery packs. And that brings us to the next BMS building block.

›› State of charge
Lithium-ion batteries are vulnerable to damage caused by both over- and undercharging of the battery cell. State of charge, one of the most important parameters in a BMS, represents the difference among individual battery cells.
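As a concrete illustration of SOC estimation, here is a minimal coulomb-counting sketch. The article doesn’t name a specific algorithm, so this is just one common approach; the function name, sign convention (positive current = charging), and the clamping policy are assumptions for illustration.

```c
/* Coulomb counting: integrate current over time to track state of
   charge as a fraction (0.0 to 1.0) of rated capacity. A production
   BMS would fuse this with voltage- or model-based corrections. */
double soc_update(double soc, double current_a, double dt_s, double capacity_ah)
{
    /* Charge moved this step, in amp-hours (positive = charging). */
    double delta_ah = current_a * dt_s / 3600.0;

    soc += delta_ah / capacity_ah;

    /* Clamp to the physical range; in practice the BMS keeps cells
       well inside this window to avoid degrading capacity. */
    if (soc > 1.0) soc = 1.0;
    if (soc < 0.0) soc = 0.0;
    return soc;
}
```

Plain coulomb counting drifts as current-sensor error accumulates, which is one reason real BMS implementations also use the predictive algorithms the article mentions to correct the estimate over time.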




FIGURE 1: A look at a BMS for electric vehicles. Image: Research Gate.
FIGURE 2 The overvoltage or overcharging at excessive currents can cause thermal runaways. Moreover, not every cell in a battery pack loses charge at the same rate even though cells are connected in series. That’s because the charge cycle of a cell relies on several factors, including temperature and location of a cell in the battery. The BMS electronics, which provide accurate predictions for vehicle range and battery life expectancy, implements predictive algorithms to estimate the battery cell performance accurately. Next, it ensures that a cell isn’t charged at 100 percent of its SOC or discharged at zero percent of SOC, as both will degrade the battery capacity. One way to maximize battery pack capacity and minimize degradation is by accurately controlling the SOC of each battery cell. As a result, the BMS electronics can ensure that the charge of battery cells remains within the recommended range. And that’s done via cell balancing. ›› Battery cell balancing The difference between cell voltages, which indicates an unbalanced cell at the system level, can affect both individual cells and battery pack. One of the main causes of cell failure is cell voltage imbalance caused by leakage current in individual cells. The BMS ensures that cell voltages don’t exceed the rated maximum voltage, and it does that by employing passive and active balancing techniques. But a high-value resistor used in a passive balancing design itself dissipates power and it doesn’t respond to the temperature variations common in automotive design environments. The two major active balancing techniques are based on op-amps for voltage balancing and current balancing using the MOSFETs, respectively (Figure 2). However, op-amps can cause a power penalty if there is a mismatch between the capacitance values of two cells. 
MOSFETs, on the other hand, enable natural cell balancing through complementary opposing current levels and add little or no leakage current of their own. The MOSFET is placed in parallel with a battery cell or a stack of cells connected in series.

›› The BMS value chain

If the state of charge of a battery pack is accurately monitored, the battery cells should not cause problems. If a problem does arise from overcharging or undercharging of battery cells, auto-balancing of the cells ensures a safe supply of voltage to the inverter in electric vehicles. The BMS electronics will continue to evolve as more EVs and HEVs arrive on the roads; today they already fulfill present requirements by efficiently monitoring and managing the cells in vehicle batteries.

FIGURE 2: This is how a MOSFET automatically balances current between battery cells. Image: ALD.

Majeed Ahmad has been a technology and trade journalist for more than 17 years. He is former editor-in-chief of EE Times Asia; in addition, while serving as editor-in-chief at Global Sources, a Hong Kong-based publishing house, he spearheaded magazines covering electronic components, consumer electronics, and computer, security, and telecom products. Majeed studied electronics and telecommunications at Eindhoven University of Technology, the Netherlands. He worked with blue-chip companies like AT&T, Motorola, and Nortel before heading to the publishing industry. Majeed has authored three books: “Smartphone,” “Business Untethered,” and “Nokia’s Smartphone Problem.”

Embedded Computing Design | Spring 2019

21


POWER ELECTRONICS

USB-C power delivery: Charging, conversion, and emerging applications
Q&A with Andrew Cowell, Renesas Electronics Corporation

The USB Type-C specification was published in 2014, promising a significant increase in bandwidth and power delivery capability. Five years later, with USB Type-C solutions gaining critical mass, Embedded Computing Design asks Andrew Cowell, Vice President of the Battery & Optical Systems Division at Renesas Electronics Corporation, what the USB-C power delivery (PD) features mean for embedded engineers. Edited excerpts follow.

ECD: 12-18 months ago, USB Type-C was all the rage. What’s the status of the technology, both from a technical and an adoption standpoint?

COWELL: Compared to the successful 20+ years of Type-A and Type-B connectors, USB Type-C offers a different class of features and performance to meet the future needs of connectivity. While the future of legacy USB technologies is uncertain, we are optimistic and continue to see healthy adoption and proliferation of USB Type-C connectors.

The adoption of the USB Type-C connection is definitely not dying down, and nothing is holding it back. The specifications supporting USB Type-C are mature, and the adoption rate is solid and on track. For consumer applications, OEMs are moving as fast as they can to implement USB Type-C as their power and data connection; the penetration rate of USB Type-C within their major product lines is unprecedented. Many computing and consumer end-equipment vendors have already adopted USB Type-C. For example, all of the new lines of Apple MacBook and iPad today use USB Type-C for power supply and data transfer, and many other notebook vendors use it for at least one port. Several new smartphones in the market now use USB-C, and the power bank is another consumer product adopting USB Type-C. Additionally, in-car charging of phones, tablets, and notebooks is driving USB-C into automotive applications, too. We expect to see widespread adoption of USB Type-C in the coming years.

ECD: Power Delivery (USB-PD), a key enhancement of USB-C, potentially opens up the technology to a much broader range of applications than before. Can you provide some insight into applications that could benefit from USB-C’s 100W power delivery capacity?

COWELL: USB-C high-power delivery capabilities are aimed mainly at the consumer applications where USB-C technology was incubated. However, we expect to see USB Type-C reach a broader set of applications. The USB-C standards and ports represent an evolutionary way to interconnect electronic devices, whether the end applications are consumer or non-consumer. As long as an application demands high-speed data transfer between electronic devices through a simple-to-use connector with certain power delivery requirements, USB-C is the right choice. A good example is the automotive industry’s fast adoption of USB-C ports.

Another perspective on the future of Power Delivery is the continuing explosion of battery-based devices. In addition to the previously mentioned consumer applications like phones, tablets, and laptops, users of all battery-based devices expect certain common features:

›› Small, simple, and safe connectors
›› Switchable power source/sink on a single connector
›› Faster charging
›› More efficient charging (less heat)

USB Type-C and Power Delivery are defined to meet those requirements and the USB Implementers Forum (USB-IF) has established robust compliance testing for the market to develop a safe ecosystem for 100W capabilities. Some medical devices, such as digital microscopes with image processing features, may adopt a single USB-C port to replace the USB A port + HDMI port for a simplified



Renesas Electronics Corp. • www.renesas.com
Twitter: @RenesasAmerica • LinkedIn: www.linkedin.com/company/renesas/
Facebook: www.facebook.com/RenesasElectronicsAmerica • YouTube: www.youtube.com/user/RenesasPresents


FIGURE 1: Matrix Vision BlueFox3 USB3 Vision Camera uses a USB Type-C port (12.4MP, 23.4 Hz, Monochrome, 12-Pin I/O).

FIGURE 2: The Renesas ISL9241 is the industry’s first USB-C ‘Combo’ buck-boost charger to support both narrow voltage direct charging (NVDC) and hybrid power buck-boost (HPBB).

design with better image transfer performance, even if the full 100W isn’t required. Other USB Type-C examples include machine vision cameras for industrial applications, such as those offered by companies like Matrix Vision (Figure 1) and Edmund Optics.

ECD: What challenges do the power-delivery faculties of USB-C present in terms of charging and voltage regulation for design engineers?

COWELL: While the USB-C port offers high-power capabilities, its wide voltage range of 5V to 20V brings challenges for the battery chargers and buck-boost converters connected to it. Because there is no fixed relationship between input and output voltage, neither a pure buck nor a pure boost converter topology suffices. Renesas addressed this challenge by adopting a buck-boost topology and developed the industry’s first USB-C buck-boost battery charging solution, the ISL9237, introduced in early 2015. Since then, Renesas has developed the ISL9238x product family and, recently, the ISL9241 (Figure 2), which is the industry’s first USB-C ‘Combo’ buck-boost charger to support both narrow voltage direct charging (NVDC) and hybrid power buck-boost (HPBB).
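To see why the 5V-to-20V input range forces a buck-boost topology, consider a simple mode-selection sketch: depending on whether the adapter voltage sits above or below the battery-stack voltage, the converter must step down, step up, or do both. The function, margin, and thresholds below are purely illustrative, not behavior from the Renesas parts:

```python
# Illustrative mode selection for a buck-boost charger: the USB-C input
# may land anywhere from 5V to 20V, above or below the battery stack.
# Margin and names are hypothetical, not from any datasheet.
BUCK_BOOST_MARGIN_V = 0.5  # band around Vbat where both stages modulate

def converter_mode(v_in, v_bat, margin=BUCK_BOOST_MARGIN_V):
    if v_in > v_bat + margin:
        return "buck"        # step the input down to the battery
    if v_in < v_bat - margin:
        return "boost"       # step the input up to the battery
    return "buck-boost"      # input close to battery: both stages switch

# A 2S Li-ion stack (~7.4V) charged from common USB-C contract voltages:
for v_in in (5.0, 9.0, 15.0, 20.0):
    print(v_in, converter_mode(v_in, 7.4))
```

The point is that a single fixed topology cannot cover all rows of that table; a buck-boost stage handles every case with no dead zone.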

Renesas designed its buck-boost battery charger products to support USB-C power applications across the full 5V-to-20V voltage range with no dead zone. These products offer multiple operating modes to efficiently utilize adapter and battery power and improve overall efficiency. For example, the ISL9241 can be configured in HPBB mode for higher-power applications to reduce power loss and improve efficiency. Its unique reverse turbo mode and supplemental power mode allow the system to run at its highest possible speed while prolonging battery run time. In addition to conventional protection features such as overvoltage (OVP), overcurrent (OCP), and overtemperature (OTP) protection, Renesas battery charger products continuously monitor charging and discharging and provide PROCHOT# signals to protect against battery or adapter overheating.

Another customer challenge for USB-C adoption is that designs need to be compliant with USB-C PD standards. Renesas helps customers by providing USB-C-compliant solutions, which in turn reduces their development time and cost.

ECD: What are the areas of improvement for the technology, or potential USB-C innovations on the horizon?

COWELL: Renesas will continue product innovation and provide total system solutions for USB-C applications. A good example is Renesas’ development of new solutions for devices with multiple USB-C ports. They enable more efficient and flexible power delivery between source and sink devices by organically integrating Renesas’ technologies in the USB-PD controller, buck-boost battery charger, and voltage regulator.

We also see tremendous potential for more battery and power applications to adopt USB-C and USB-PD and use its cryptographic authentication. USB Type-C authentication offers an extra level of safety and security by giving host systems a means to confirm the authenticity of a USB device or USB charger.
This mitigates risk and prevents counterfeit devices and power sources from attempting to exploit a USB-C connection. Renesas continues to develop best-in-class PD and power management solutions to support a robust, safe, and unified USB-C ecosystem, and to expand into different markets. In addition to providing power delivery, USB-C is also defined as a configurable high-speed data bus, which OEMs will continue to utilize and evolve into a high-performance interface that suits different markets.

Andrew Cowell is Vice President of the Battery & Optical Systems Division at Renesas Electronics Corporation. Prior to his current role, Cowell served as Senior Vice President of Intersil’s Mobile Power Products and Vice President of Analog Marketing at Micrel Semiconductor. He began his career as a design engineer at Advanced Power Supplies. He holds a First Class Honors degree in Electronics from Middlesex University in the U.K.



POWER ELECTRONICS

The “relativity” of high-Q capacitors
By Jeff Elliott for Johanson Technology

For many high-power RF applications, the “Q factor” of embedded capacitors is one of the most important characteristics in the design of circuits. This includes products such as cellular/telecom equipment, MRI coils, plasma generators, lasers, and other medical, military, and industrial electronics.

Often expressed as a mathematical formula, the Q factor represents the efficiency of a given capacitor in terms of its rate of energy loss. In theory, a “perfect” capacitor would exhibit no loss and deliver a full energy transfer, but in the real world capacitors always exhibit some finite amount of loss. Although many high-Q capacitors are available on the market, performance can vary widely depending on design and manufacturing quality. The higher this energy loss, the more heat is generated within the capacitor, heat that must be dissipated or cooled. For low-power applications this heat is insignificant; for higher-power applications it can be substantial. If the temperature rises significantly, it can damage nearby components and, in extreme cases, desolder parts from the circuit board.

Although many low-power applications do not require consideration of the capacitor’s Q factor, energy losses can increase significantly at higher frequencies, leading to other performance issues even in low-power circuits.


Reduced receiver sensitivity and link budget can sometimes be traced to higher-loss capacitors. For this reason, high-power RF applications typically require high-Q capacitors, which are characterized by ultralow equivalent series resistance (ESR). In addition to minimizing energy loss, high-Q capacitors reduce the thermal noise caused by ESR, helping maintain the desired signal-to-noise ratio.

Your performance may vary

Despite its critical role in RF electronics, not all high-Q capacitors are created equal. It turns out high-Q performance is actually relative, varying widely with design, manufacturing, quality control, and even the type of performance testing. Further muddying the water, manufacturers use numerous terms for their high-Q capacitors, including “high-Q,” “ultra-high-Q,” “low loss,” and “RF capacitors.”

“In many ways, ‘high-Q’ is a relative term,” says Scott Horton of Johanson Technology, a company that manufactures a variety of multilayer ceramic capacitors (MLCCs). “It may seem like every [capacitor] manufacturer has a high-Q product, but the performance of the parts in the circuit can be quite different.”

To distinguish between the choices, most MLCC manufacturers publish ESR performance values online. However, those performance claims should be viewed with some skepticism, says Horton. ESR tests are conducted in laboratory settings, most often using one of two methods: vector network analyzers (VNAs) or resonant lines. The accuracy of this data is limited by the setup and calibration of these systems. When measuring capacitor Q on a network analyzer, the configuration and calibration are critical to ensure meaningful data is collected. Not all VNA measurements are equally valid; in fact, poorly calibrated VNAs can yield wildly inaccurate results.



Johanson Technology • www.johansontechnology.com/high-q
Twitter: @JohansonTech • LinkedIn: www.linkedin.com/company/johanson-technology-inc./
Facebook: www.facebook.com/JohansonTechnology • YouTube: www.youtube.com/channel/UCMvuwf1NXO8mVeNZ9tcQpsw


A more reliable method of testing the Q of capacitors is the well-established “resonant line” system; the Boonton 34A resonant line has been the de facto industry standard for decades. Companies like Johanson Technology publish online ESR performance data measured on a Boonton 34A resonant line. Because this method depends only on the frequency accuracy of a signal generator and a very stable resonant line, measurements can be made with extreme precision that is repeatable over time.

“I can’t comment on how some capacitor manufacturers end up with the values they publish, but when I put the capacitor on a resonant line, which is in accordance with the mil standards, and test these parts in a side-by-side A/B comparison test, we see significant differences from their published data. I would believe those relative results,” says Horton.

Consistent manufacturing, layer counts

Another factor that can affect the ESR of a high-Q capacitor is the quality and consistency of the manufacturing process. By definition, MLCC capacitors consist of laminated layers of specially formulated ceramic dielectric materials interspersed with a metal electrode system. The layered formation is then fired at high temperature to produce a sintered, volumetrically efficient capacitance device. A conductive termination barrier system is integrated on the exposed ends of the chip to complete the connection.

In MLCCs, capacitance is primarily determined by three factors: the dielectric constant (k) of the ceramic material, the thickness of the dielectric layers, and the overlap area and number of electrodes. So a capacitor with a given dielectric constant can have more layers and wider spacing between electrodes, or fewer layers and closer spacing, to achieve the same capacitance.
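That trade-off follows directly from the parallel-plate approximation C ≈ k·ε0·A·n/d, where n is the number of active dielectric layers and d their thickness. A small sketch, with all component values invented for illustration:

```python
# Parallel-plate approximation for MLCC capacitance:
# C ~= k * e0 * A * n_active / d. All values below are illustrative.
E0 = 8.854e-12  # permittivity of free space, F/m

def mlcc_capacitance(k, area_m2, n_active_layers, thickness_m):
    return k * E0 * area_m2 * n_active_layers / thickness_m

# Two designs hitting the same capacitance with different layer counts:
a = mlcc_capacitance(k=30, area_m2=1e-6, n_active_layers=10, thickness_m=20e-6)
b = mlcc_capacitance(k=30, area_m2=1e-6, n_active_layers=17, thickness_m=34e-6)
print(abs(a - b) < 1e-18)  # True: 10 layers at 20 um == 17 layers at 34 um
```

Both designs read the same on a capacitance meter, yet their internal geometry, and therefore their high-frequency behavior, differs.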

Significantly changing the layer counts in MLC capacitors can change performance characteristics significantly. As such, the leading capacitor suppliers tightly control the layer counts of each part made. Unfortunately, this is not a given in the industry; some suppliers deliver products with the same part number but a variable number of layers. In short, the same part number can have significantly different designs, which can lead to undesirable impedance changes in the capacitor. These variations occur from supplier to supplier and can even be seen from a single source.

“If an MLCC manufacturer is not tightly controlling the layer count, they might be providing 10-layer parts in one batch and then later deliver 17-layer parts in a subsequent batch,” explains Horton. Those two parts will not perform the same at high frequencies.

Another cause of performance variation occurs when OEMs purchase through resellers who buy from multiple factories. Different factories have different designs with different high-frequency performance, so parts sourced this way can vary significantly from batch to batch, which results in system performance variation.

The series resonant frequency (SRF) is a key performance metric affected by varying layer counts, and this variation can degrade the performance of any LC RF filter using those capacitors. Bandpass filters, for example, often use the resonant frequencies of the capacitor to “shape” their response.
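The SRF in question is set by the capacitance and the part's parasitic series inductance, SRF = 1/(2π√(LC)); the article later notes the rule of thumb that the first parallel resonance (PRF) sits at roughly twice the SRF. A quick sketch, with the parasitic inductance value invented for illustration:

```python
import math

# SRF of a capacitor with parasitic series inductance L:
# SRF = 1 / (2*pi*sqrt(L*C)). The inductance below is illustrative,
# not a measured value for any real part.
def srf_hz(cap_f, esl_h):
    return 1.0 / (2.0 * math.pi * math.sqrt(esl_h * cap_f))

c, esl = 10e-12, 0.5e-9           # 10 pF part, ~0.5 nH parasitic inductance
srf = srf_hz(c, esl)              # series resonance, ~2.25 GHz here
prf_estimate = 2.0 * srf          # rule-of-thumb first parallel resonance
print(round(srf / 1e9, 2), round(prf_estimate / 1e9, 2))
```

A change in layer count shifts both L and the effective geometry, which is exactly why the SRF, and any filter tuned around it, moves with the design.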






In other words, when layer counts vary, filters may not perform as designed and may allow radiated emissions to exceed the FCC or ETSI requirements in the finished product. Lot-to-lot changes in capacitor performance can lead to costly product recalls. “If there is a shift in the series resonance frequency, your filter may no longer meet FCC emission requirements,” says Horton. “So, by tightly controlling the layer counts, manufacturers help ensure that LC filter performance remains consistent from lot to lot, day to day, month to month, year to year.”

High-loss capacitors can also affect battery life. For systems using RF amplifiers, it is inefficient to have power absorbed or dissipated by a capacitor; engineers must then drive amplifiers harder to make up for the losses caused by low-Q capacitors, which drains batteries faster in handheld devices. High-Q capacitors can also improve receiver sensitivity by reducing losses between the antenna and the transceiver.

Variance in capacitor design, construction

High-Q capacitors differ from standard capacitors in design. To achieve the lowest losses, leading companies use the lowest-loss dielectrics, inks, and electrode options. For example, most low-cost commodity capacitors use nickel electrodes; however, nickel is a poor conductor known for high loss at RF and microwave frequencies. Silver and copper electrodes perform better than nickel and are used for most high-Q applications. These electrodes have the added advantage of not creating a magnetic field the way nickel does, a factor that matters for applications such as MRI receiver coils, where strong magnetic fields are involved. For the highest-power RF applications, a number of leading manufacturers offer pure palladium electrodes. At higher frequencies, however, silver is a superior conductor compared to palladium. For this reason, Johanson Technology incorporates silver electrodes in its ultra-high-Q (lowest ESR loss) offering, the


FIGURE 1: Johanson Technology uses silver electrodes in its ultra-high-Q (lowest ESR loss) offering, the E-Series multilayer RF capacitors.

E-Series multilayer RF capacitors (Figure 1), as well as in its high-power standard 1111, 2525, and 3838 size capacitors.

Capacitors in vertical orientation

Even minor details like the orientation of the capacitor in the tape reels can have a direct impact on the performance of a circuit. Traditionally, high-Q capacitors mounted in tape and reel are available primarily in a horizontal electrode configuration; some manufacturers now offer MLCC capacitors in both horizontal and vertical electrode orientations. Mounting capacitors in a vertical configuration is an industry “trick” that effectively extends the usable frequency range of capacitors. In addition to the SRF (which is set by the physical size/construction and the capacitance value), capacitors also exhibit parallel resonant frequencies (PRFs). As a rule of thumb, the first PRF is approximately double the SRF. At the PRF the transmission impedance goes relatively high, and the capacitor is very lossy around this frequency. Mounting the capacitor in a vertical position instead eliminates the odd-numbered PRFs (the 1st, 3rd, 5th, etc.), pushing the first remaining PRF significantly higher in frequency and allowing the capacitor to be used at significantly higher frequencies.

High-Q relativity

If there is a lesson from this discussion of high-Q capacitors, it is that selecting the ideal MLC capacitor requires more than a voltage, a capacitance value, and a tolerance. This may also explain why a capacitance value from one supplier may not directly correspond with another supplier’s in critical matching circuits. The design and quality/consistency of manufacturing play just as big a role, as does the type of testing used to verify performance.

“Don’t assume that because the capacitor is labeled ‘high-Q’ it is going to deliver the required performance,” concludes Horton.
“These capacitors play a critical role in RF transmission and reception of military, medical, and industrial electronics, so they must perform as expected, optimized to minimize energy loss and variation from one batch to another. If not, these electronics may not perform as expected in the field.”

Jeff Elliott is a Torrance, California-based technical writer. He has researched and written about industrial technologies and issues for the past 20 years.





SPECIAL FEATURE

Exploring Embedded Machine Learning
By Curt Schwaderer, Technology Editor

In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts wrote a paper on neurons and how they work. A model was created using an electrical circuit, and the neural network came into being. Seventy years later, those forays have evolved into a number of large-scale projects by some of the top technology companies and technology communities around the globe: Google Brain, AlexNet, OpenAI, and Amazon Machine Learning Platform are examples of the most well-known initiatives relating to artificial intelligence (AI) and machine learning.

Enter IoT and its embedded emphasis. Add its monetization dependencies on (near) real-time analysis of sensor data and on taking action on that information. The leading initiatives above assume massive amounts of data can be fed seamlessly into the cloud, where analysis can be performed, directions distributed, and actions taken, all within the deadlines required by each application. Qeexo (pronounced “Keek-so”) CTO Chris Harrison believes machine learning belongs at the edge, and Qeexo is developing solutions to do just that.

Mobile sensors and AI Like many paradigm-shifting initiatives, this particular initiative started with a challenge – how can more sophisticated touch interaction for a mobile device be achieved? This question led to the exploration of fusing touchscreen data with accelerometer data to measure taps against a screen. The result was the ability to distinguish between finger, knuckle, nail, stylus tip, or eraser, which enables broader interaction between a user and the device.


“If we’re going to put in sophisticated multitouch, we need to do some smart things in order to resolve ambiguous user inputs,” Harrison stated. “The machine learning software behind our FingerSense product differentiates between finger, knuckle, and nail touches. These new methods of input allow for access to contextual menus. This brings right-click functionality, as opposed to a finger press-and-hold.”

Mobile device machine learning challenges

The power and latency budget for machine learning on a mobile device was



tiny. It took almost three years before the requirements were met.

“As a mobile application developer, you have two choices on a mobile device – you can do things fast at higher power, or slower at lower power. This led to a key capability we call Hybrid Fusion. The machine learning software needs to be very clever about access to and processing of the sensor data in order to fit within the power and latency budget,” Harrison said.

FingerSense machine learning became very good at edge- and device-optimized machine learning – something traditional machine learning cloud environments don’t have to consider. “Most companies are thinking about deep learning from a gigantic-servers-and-expensive-CPUs perspective. We took the opposite path. The IoT goal is a ‘tiny’ machine that can effectively operate with limited resources and maintain the near-real-time deadlines of the application. By cutting our teeth in the mobile industry, we gained the skills and technologies to apply machine learning to edge IoT and embedded,” Harrison stated.

One of the most exciting frontiers is bringing what Harrison calls “a sprinkle of machine learning” to IoT and small devices. For example, your light bulb doesn’t have to be able to do a web search for the weekly weather, but adding a touch of machine learning that allows it to sense movement and temperature to make on/off decisions has real-world value.

Embedded machine learning architecture

Using a device’s main CPU for embedded machine learning can be very power-consumptive. So, instead of hooking accelerometer and motion sensors to the main CPU, Qeexo employs a low-power microcontroller that acts as a “sensor hub” between the sensors and the main CPU. The sensor hub is specialized for the heavy lifting of sensor communication and is more power-efficient than the primary CPU. It can also execute a little logic, allowing the main CPU to stay off much longer. This tiered design optimizes power and latency budgets and makes the embedded machine learning environment possible on mobile devices and IoT sensors.

“Accelerometer data is a constant stream with no logic applied, so it needs to be continually sampled,” Chris said. “This is where the machine learning logic starts – and perhaps ends. There may be additional machine learning logic that can be done on the main CPU. You may decide that the sensor hub can filter out or pre-choose the data so less data goes to the main CPU.”

Consider what happens when bursts of traffic occur: if sensor information is idle and then generates a burst that moves into main memory or ties up the bus, things can go badly. Alternatively, the sensor hub coprocessor can provide a vector representation of the information to the main processor while still interpreting the sensor data itself, which streamlines system efficiency. The Qeexo machine learning environment is written in C/C++ and Arm assembly to optimize efficiency and operating system portability.
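The tiered filtering role described above can be sketched in a few lines: the hub samples continuously, reduces each window of raw samples to a small feature vector, and forwards anything to the main CPU only when activity crosses a threshold. This is a generic illustration of the pattern, not Qeexo’s implementation; every name and threshold is invented:

```python
# Generic sensor-hub pattern (not Qeexo's code): continuously sample,
# reduce each window to a tiny feature vector, and wake the main CPU
# only when activity crosses a threshold. Names/values are invented.
ACTIVITY_THRESHOLD = 0.2  # g; below this, the main CPU stays asleep

def features(window):
    """Reduce a window of accelerometer magnitudes to (mean, peak)."""
    mean = sum(window) / len(window)
    peak = max(abs(s - mean) for s in window)
    return mean, peak

def hub_step(window):
    """Return a feature vector for the main CPU, or None to let it sleep."""
    mean, peak = features(window)
    return (mean, peak) if peak > ACTIVITY_THRESHOLD else None

idle = [1.00, 1.01, 0.99, 1.00]   # device at rest (~1 g of gravity)
tap  = [1.00, 1.65, 0.40, 1.02]   # a burst worth waking the CPU for
print(hub_step(idle))  # None: nothing forwarded, main CPU stays off
print(hub_step(tap))   # a small feature vector instead of raw samples
```

The production version runs as firmware on the hub microcontroller, but the structure, reduce first, forward rarely, is the same.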
Most of the operation resides in a kernel driver component. The software also performs power management for battery-powered devices.

Summary

One must not assume perfect, high-bandwidth network connectivity and infinite machine learning resources on the way to a successful IoT system. Harrison warns against using the cloud environment as a crutch. “If you take the time to properly analyze, gather requirements, and design the IoT system, you can absolutely perform machine learning at the edge. This minimizes network requirements and provides a high level of near-real-time interaction.

“If we can get away from [leveraging cloud processing for everything], we should be able to achieve a far more secure, private, and efficient system. There is a time and place for cloud connections, but engineers need to stop jumping immediately to that resource.”

Given how fast these processors are improving, it certainly seems achievable. There is also a cost benefit: most smart devices are priced out of the mass market today; if we can sprinkle intelligence into these devices, bring down the costs, and provide real value, adoption will accelerate.



MUSINGS OF A MAKERPRO

www.youtube.com/c/jeremyscook

Challenges of building an omni-wheel robot
By Jeremy Cook, Engineering Consultant

Consider a wheel with rollers arranged at 90 degrees to its traditional axis of rotation. This setup allows the wheel to roll not just forward and backward, but side to side as well. While not a new concept, and one commonly employed in industrial conveyor systems, these wheels can also be used to create a robot that can travel forward and backward, turn left and right, and, uniquely, slide left and right (Figure 1).

While these wheels have some obvious advantages, you may wonder why they, along with their 45° Mecanum-wheel cousins, aren’t more commonly used in industry and high-end robotics. For one thing, they tend to be much more expensive than solid wheels, and, as I found out when I tried to build my own omni-bot, things are (as usual) more complicated than they initially appear.

Traction, traction, traction

Once you have a solid mechanical design, actually constructing an omni-bot isn’t too difficult. In theory, the radially aligned stepper motors, driven by Easy Driver boards and an Arduino Nano, should give it very precise control. In fact, I’d envisioned using this robot as a sort of advanced camera dolly, or even something that could perform CNC operations as it traversed and turned “effortlessly” in the X/Y plane. Unfortunately, the reality was much different, and my original choice of wheels with plastic rollers tended to slip on smooth

FIGURE 1: Movement and mechanical sketches. Image credit: Jeremy S. Cook.

FIGURE 2: Rubberized wheels provided more traction. Note the tight electronics space. Image credit: Jeremy S. Cook.


surfaces. I tried to correct this by coating the rollers with Plasti Dip to increase traction, and by moving to an “X” orientation in which all four wheels are powered for forward and side-to-side movement. I even designed spacers and shocks in an attempt to keep all four wheels in contact. Unfortunately, none of this worked well on a smooth surface, although the hard rollers did perform admirably on a yoga mat – for whatever that’s worth. Ultimately, I purchased new wheels with flexible rubber rollers. The ‘bot has performed much better since this replacement, but there are still intermittent issues when one wheel’s rollers sit in positions with minimal rolling contact (Figure 2).

Programming and electronics

Beyond that “small” traction detail, which took me an embarrassing amount of time to resolve (yet another reminder never to underestimate the scope of a “small” problem), I of course had to get the programming in order. The device nominally uses one pair of motors at a time to go forward and backward; its code, available on GitHub, uses a series of functions to handle high- and low-level control of the motors.

In the final stages of this project, I mistakenly overwrote my code with something else I was working on, but thankfully had an earlier version that I was able to go back and modify. This was inconvenient, but not as disastrous as it could have been. Moral of that story: always back your code up, preferably somewhere that can’t be overwritten in one step!

Power-wise, I originally used a USB supply but switched to a LiPo [lithium polymer] battery, adding eye-like warning lights to show when its voltage drops too low. I used a screw terminal shield with


the Arduino Nano, eliminating some soldering work, and allowing me to modify things without too much trouble as needed. Interestingly, I thought the 12-inch circumference of this robot would leave plenty of room for its wiring, but as usual, things were crammed into its body. This worked out without much issue, but it makes me appreciate, once again, why no one ever complains about having too much electrical cabinet space. [Figure 3.] More omni-bots? While an exciting concept, in my opinion this mechanical technology hasn’t really taken off because of its added complication and cost compared to normal wheels, along with – and perhaps foremost – its difficulty obtaining sufficient traction on different surfaces. One might note that my robot resembles (unintentionally) a Roomba vacuum cleaner, and I can’t help but think that engineers at iRobot likely considered this technology, eventually settling on a simpler, cheaper, and more “tractiony” two-wheeled design.

FIGURE 3 Early omni-bot electrical testing. Image credit: Jeremy S. Cook.

Join the PICMG IIoT Specification Effort Plug & Play Interoperability at the Sensor Domain

While not the proper solution for every problem, this type of wheel is worth considering in some situations. If you need to reduce friction along multiple axes, such as in a conveyor, these wheels can certainly be a solid choice. You can see the initial build process for this device on YouTube, and I plan to release a second video soon showing the extra work needed to "perfect" it!

References
https://en.wikipedia.org/wiki/Omni_wheel
https://en.wikipedia.org/wiki/Mecanum_wheel
https://www.schmalzhaus.com/EasyDriver/
https://github.com/JeremySCook/OMNI-Bot
https://www.youtube.com/watch?v=Z3M38egxzrE

Jeremy S. Cook is a freelance tech journalist and engineering consultant with over 10 years of factory-automation experience. An avid maker and experimenter, he can be followed on Twitter (https://twitter.com/JeremySCook), and his electromechanical exploits can be seen on the Jeremy S. Cook YouTube Channel (https://www.youtube.com/c/jeremyscook).

Be a part of the PICMG IIoT open specification effort to bring true plug-and-play interoperability to the “last foot” of the network. With our low-fee model, companies large and small can work with thought innovators on the leading edge of technology. Join PICMG today!

www.embedded-computing.com

www.picmg.org




2019 EMBEDDED WORLD The 2019 Embedded Computing Design Embedded World issue showcases embedded tools and solutions for those designing in the areas of industrial control, edge computing, autonomous machines, and more.

DEV TOOLS & OS
Wind River Systems, Inc. ... 32

IOT
Technologic Systems ... 34

HARDWARE
Connect Tech, Inc. ... 33
Dolphin Interconnect Solutions Inc. ... 33

PROCESSING: OTHER
Crystal Group, Inc. ... 34

Dev Tools and OS

WIND RIVER HELIX VIRTUALIZATION PLATFORM An Adaptive Workload Consolidation Platform Providing Virtualization Flexibility for Edge Computing Systems

Wind River Helix Virtualization Platform, derived from Wind River's market-leading VxWorks® real-time operating system (RTOS), is a commercial-off-the-shelf (COTS) product for delivering aerospace, automotive, defense, and industrial solutions, enabling the consolidation of workloads with different levels of safety criticality onto a single edge compute platform. This virtualization platform supports mixed-criticality OSes, providing you with the ability to run safety-critical and non-critical applications side by side. Whether you're most concerned with an RTOS, Linux, safety, security, latency, determinism, or a combination of these, Helix Platform gives you the flexibility of choice, allowing you to consolidate all types of workloads onto a single platform today and into the future.

FEATURES
› Meets stringent requirements of safety certification and affordability: Helix Platform has been designed to be certified and to simplify the certification of safety-critical applications according to the stringent requirements of the DO-178C, IEC 61508, and ISO 26262 safety standards.
› Supports industry standards conformance: Helix Platform supports an open, standards-based device virtualization framework that efficiently enables third-party operating systems without the overhead of emulation, by supporting the ARINC 653 APEX API, POSIX®, FACE™, independent build, link, and load (IBLL), C11 and C++14, and standards-based virtualization of common devices.
› Supports high portability with OS-agnostic virtualization: The Helix Platform hypervisor is OS-agnostic, providing the capability to run any operating system – such as VxWorks, Wind River Linux, and Microsoft® Windows® – unmodified inside a virtual machine.
› Enables mixed-criticality support: Because the Helix Platform multi-core scheduler uses hardware virtualization assist, the platform enables virtualization of mixed-criticality unmodified guest operating systems.
› Provides extensive multi-core hardware support and availability: Helix Platform supports 64-bit Arm® and Intel® architectures that enable both 32- and 64-bit guest operating systems.

Wind River
www.windriver.com

inquiries@windriver.com

 www.linkedin.com/company/wind-river/

Embedded Computing Design | Spring 2019

 800-545-WIND @WindRiver



NVIDIA® Jetson™ AGX Xavier™ Solutions

Connect Tech's Rogue is a full-featured carrier board for the NVIDIA® Jetson™ AGX Xavier™ module. Specifically designed for commercially deployable platforms, it has an extremely small footprint of 92 x 105 mm. The Mimic Adapter allows the NVIDIA Jetson AGX Xavier module to be installed onto an existing NVIDIA Jetson TX2/TX2i/TX1 carrier, so you can instantly compare performance metrics between existing TX2/TX2i/TX1 designs and the new Jetson AGX Xavier. Boasting 20 times the compute performance of the Jetson TX2, Jetson AGX Xavier enables a giant leap forward in capabilities for autonomous machines and edge devices. As the largest NVIDIA ecosystem partner for the Jetson TX2/TX2i/TX1, Connect Tech is proud to announce commercially deployable solutions for the new Jetson AGX Xavier platform. We are solving real-world applications for deep learning at the edge. Find Connect Tech and NVIDIA at Stand 1-430 at Embedded World 2019. connecttech.com

Connect Tech Inc.

https://bit.ly/2AGcdTI

FEATURES
› Rogue has 6x 2-lane or 4x 4-lane MIPI CSI camera inputs
› Rogue provides access to an impressive list of latest-generation interfaces on the Xavier while adding 3x USB 3.1, 2x GbE, 2x HDMI, and a locking Mini-Fit Jr. power input connector
› Through the Mimic, a wide range of Xavier interfaces is passed to the TX2/TX2i/TX1 carrier
› Ideal for machine vision and deep learning applications
› NVIDIA® Jetson™ AGX Xavier™ has an impressive 512-core Volta GPU and 64 Tensor cores with discrete dual Deep Learning Accelerator (DLA) NVDLA engines
› Jetson AGX Xavier has 20 times the compute performance of the Jetson TX2

sales@connecttech.com

 www.linkedin.com/company/connect-tech-inc

 1-800-426-8979 @ConnectTechInc

Hardware

MXS824 24-Port Gen3 PCIe Switch

The MXS824 is Dolphin's fourth-generation PCIe switch product, enabling users to create a scalable PCIe fabric using standard PCIe copper or fiber cables. PCIe solutions running over backplanes can now easily be enabled to run over external cables, and larger PCIe configurations can be realized by interconnecting multiple MXS824 switches. By loading the appropriate firmware, the MXS824 supports both transparent and non-transparent bridging (NTB) use cases for clustering and I/O expansion applications.

The MXS824 is the high-end switching component in Dolphin's new PCI Express Gen3 product family. This 24-port 1U cluster switch delivers 32 GT/s of non-blocking bandwidth per port at ultra-low latency. Up to four ports can be combined into a single x16 / 128 GT/s port if higher bandwidth is required. Each connection is fully compliant with the PCI Express Gen1, Gen2, and Gen3 I/O specifications.

Applications that do not need PCIe x16 speed can alternatively use the 8-port x16 PCIe Gen3 IXS600 switch.

FEATURES
› 24 PCI Express Gen3 x4 ports
› Gen3 8.0 GT/s per lane
› 32 GT/s per port
› NTB or transparent use
› SFF-8644 connectors
› PCIe 3.0 or MiniSAS-HD cables
› Copper and fiber-optic cables
› Hot-plug cabling support
› 19-inch 1U rack mountable
› Redundant fans
› Port status LEDs
› Ethernet-based management and monitoring
www.dolphinics.com/products/MXS824.html

Dolphin Interconnect Solutions
www.dolphinics.com/

paraison@dolphinics.com

 214-960-9066


IoT

TS-7180

The TS-7180 single board computer is developed for industrial applications in general, but is especially suited to industrial control automation and remote monitoring/management applications such as unmanned control rooms, automatic asset management, and asset tracking. The TS-7180 features the NXP i.MX 6 UltraLite processor and comes in configurations with up to 1 GB of RAM. For onboard data storage there is 4 GB of eMMC MLC flash, and a microSD socket is available for additional storage or removable-media requirements. For added system integrity, the TS-7180 comes standard with Cypress 16 Kb FRAM (FM25L16B). The TS-7180 has an impressive suite of data acquisition and control options, allowing you to focus on analyzing and interpreting data. It offers a host of standard industrial interfaces, with WiFi and Bluetooth radios onboard as standard. Cellular connectivity is also available with either NimbeLink or MultiTech modems via the cellular sockets, and Digi's many XBee radios are also supported. There are two 10/100 Ethernet ports, an RS-485 port, and 4 COM ports (3x RS-232, 1x TTL UART), as well as SPI, I2C, and CAN bus. The TS-7180 operates across a broad industrial temperature range of -40°C to 85°C for use in the harshest of environments.

Technologic Systems

www.embeddedARM.com

FEATURES
› NXP i.MX6UL 698 MHz ARM Cortex-A7 CPU
› 512 MB DDR3 RAM (1 GB DDR3 RAM option available)
› 4 GB MLC eMMC flash
› Ethernet, WiFi, Bluetooth, GPS, USB, RS-232, RS-485, and CAN
› MultiTech and XBee/NimbeLink sockets
www.embeddedARM.com

sales@embeddedarm.com

 480-837-5200

@ts_embedded

Processing: Other

RE1813 RUGGED EMBEDDED COMPUTER

The RE1813 is Crystal Group's next-generation rugged embedded console computer, designed for programs that have cyber security requirements and used in industrial and military applications. A military version of the RE1813 has been chosen to provide enhanced mission capability for an international Airborne Warning and Control System, equipping the aircraft with state-of-the-art computing capability. Common installations of the RE1813 include crew workstations in airborne applications where optimization of size, weight, and power is critical. This new embedded design is configurable for mobile applications requiring either AC or DC power supplies and where high performance is required with non-server functionality. To help guard sensitive data-at-rest, the RE1813 features removable solid-state drive bays with encrypted drives, an Intel processor and chipset, and a Trusted Platform Module. Operating system support includes Windows 10®, Red Hat® 6.5/6.6, Windows Server® 2016, and VMware®.

Crystal Group, Inc.

www.crystalrugged.com


FEATURES
› Lightweight aluminum construction – 10 lbs. (4.5 kg)
› Tray or wall mounted
› Large thermostatically controlled fans for quiet operation
› MS 3476L12 military circular power connector
› Up to three (3) 2.5" SSD removable drives
› Xeon D or Skylake CPU motherboard options
› Modular power supply for multiple input options
› One PCIe x16 expansion slot
› Ultra-rugged and compact for extreme ambient conditions
› SSD Sanitize (options available)


info@crystalrugged.com  800-378-1636 www.linkedin.com/company/crystal-group/

 @CrystalGroup





Balance is Everything

We make superior solid state storage and memory for industrial IoT ecosystems, with the optimum balance of quality, data integrity, and cost-efficiency.

• Twenty years of refined U.S. production and 100% testing – unlike offshore competition
• A+ quality: 98.8% yield, 99.7% on-time delivery, and 86 field-defects-per-million*
• Extreme durability, longer life-cycles, and intelligent, secure edge solutions

Visit our website to learn more and let's keep the balance – together.

Familiar Done Differently®
Solid State Storage and Memory

*QA marks averaged through entire year of 2017. Copyright 2018, Virtium LLC. Top image copyright: 123RF/Orla

www.virtium.com


OpenSystems Media
1505 N. HAYDEN RD. #105, SCOTTSDALE, AZ 85257

