Embedded Computing Design Winter 2017


WINTER 2017 VOLUME 15 | 5 EMBEDDED-COMPUTING.COM

IOT INSIDER: Voice rec is terrifying PG 5

TRACKING TRENDS: MIPS continues in 5G, IoT PG 6

Semiconductor Companies Bet Big on Automotive PG 10

Development Kit Selector

www.embedded-computing.com/designs/iot_dev_kits

Deception Networks:

Increased Security Through Alternate Realities PG 24


AD LIST

PAGE ADVERTISER
18 ACCES I/O Products, Inc. – PCI Express Mini Card and mPCIe Embedded I/O Solutions
32 American Portwell Technology – Boosting IoT Designs from Edge to Cloud
1 Digikey – Development Kit Selector
29 Embedded World – Immerse Yourself in the World of Embedded Systems and Discover Innovations for Your Success.
25 Micro Digital, Inc. – SMX RTOS: Ideal for Your Project
23 Qorvo – The Impact of the IoT Demystified
27 Toradex – Designed for Industrial IoT and Embedded Applications
3 WinSystems, Inc. – Rugged, Reliable, and Resilient Embedded Computing Solutions

EMBEDDED COMPUTING BRAND DIRECTOR Rich Nass  rnass@opensystemsmedia.com
EMBEDDED COMPUTING EDITORIAL DIRECTOR Curt Schwaderer  cschwaderer@opensystemsmedia.com
TECHNOLOGY EDITOR Brandon Lewis  blewis@opensystemsmedia.com
AUTOMOTIVE CONTRIBUTOR Majeed Ahmad
CONTRIBUTING EDITOR Jeremy S. Cook
CREATIVE DIRECTOR Steph Sweet  ssweet@opensystemsmedia.com
SENIOR WEB DEVELOPER Konrad Witte  kwitte@opensystemsmedia.com

SOCIAL

DIRECTOR OF E-CAST LEAD GENERATION AND AUDIENCE ENGAGEMENT Joy Gilmore  jgilmore@opensystemsmedia.com

WEB DEVELOPER Paul Nelson  pnelson@opensystemsmedia.com
CONTRIBUTING DESIGNER Joann Toth  jtoth@opensystemsmedia.com
EMAIL MARKETING SPECIALIST Drew Kaufman  dkaufman@opensystemsmedia.com

SALES/MARKETING

SALES MANAGER Tom Varcie  tvarcie@opensystemsmedia.com (586) 415-6500

MARKETING MANAGER Eric Henry  ehenry@opensystemsmedia.com (541) 760-5361
STRATEGIC ACCOUNT MANAGER Rebecca Barker  rbarker@opensystemsmedia.com (281) 724-8021

STRATEGIC ACCOUNT MANAGER Bill Barron   bbarron@opensystemsmedia.com (516) 376-9838

CHIEF FINANCIAL OFFICER Rosemary Kristoff  rkristoff@opensystemsmedia.com

CHIEF TECHNICAL OFFICER Wayne Kristoff

Facebook.com/Embedded.Computing.Design

@Embedded_comp

LinkedIn.com/in/EmbeddedComputing

STRATEGIC ACCOUNT MANAGER Kathleen Wackowski  kwackowski@opensystemsmedia.com (978) 888-7367
SOUTHERN CAL REGIONAL SALES MANAGER Len Pettek  lpettek@opensystemsmedia.com (805) 231-9582
SOUTHWEST REGIONAL SALES MANAGER Barbara Quinlan  bquinlan@opensystemsmedia.com (480) 236-8818
NORTHERN CAL STRATEGIC ACCOUNT MANAGER Sean Raman  sraman@opensystemsmedia.com (510) 378-8288

Instagram.com/EmbeddedComputing

youtube.com/user/VideoOpenSystems

WWW.OPENSYSTEMSMEDIA.COM

EMBEDDED COMPUTING DESIGN ADVISORY BOARD
Ian Ferguson, ARM
Jack Ganssle, Ganssle Group
Bill Gatliff, Independent Consultant
Andrew Girson, Barr Group
David Kleidermacher, BlackBerry
Jean LaBrosse, Silicon Labs
Scot Morrison, Mentor Graphics
Rob Oshana, NXP
Jim Ready, Independent Consultant
Kamran Shah, Silicon Labs

PRESIDENT Patrick Hopper  phopper@opensystemsmedia.com
EXECUTIVE VICE PRESIDENT John McHale  jmchale@opensystemsmedia.com
EXECUTIVE VICE PRESIDENT Rich Nass  rnass@opensystemsmedia.com

ASIA-PACIFIC SALES ACCOUNT MANAGER Helen Lai  helen@twoway-com.com

BUSINESS DEVELOPMENT EUROPE Rory Dear  rdear@opensystemsmedia.com +44 (0)7921337498

Pinterest.com/Embedded_Design/


CONTENT ASSISTANT Jamie Leland jleland@opensystemsmedia.com

GROUP EDITORIAL DIRECTOR John McHale  jmchale@opensystemsmedia.com
VITA EDITORIAL DIRECTOR Jerry Gipper  jgipper@opensystemsmedia.com
ASSISTANT MANAGING EDITOR Lisa Daigle  ldaigle@opensystemsmedia.com

SENIOR EDITOR Sally Cole  scole@opensystemsmedia.com

TECHNOLOGY EDITOR Mariana Iriarte  miriarte@opensystemsmedia.com

CREATIVE PROJECTS Chris Rassiccia  crassiccia@opensystemsmedia.com

FINANCIAL ASSISTANT Emily Verhoeks  everhoeks@opensystemsmedia.com

SUBSCRIPTION MANAGER subscriptions@opensystemsmedia.com
CORPORATE OFFICE 1505 N. Hayden Rd. #105 • Scottsdale, AZ 85257 • Tel: (480) 967-5581

REPRINTS WRIGHT’S MEDIA REPRINT COORDINATOR Wyndell Hamilton  whamilton@wrightsmedia.com (281) 419-5725



POWER TO PERFORM

Rugged, reliable and resilient embedded computing solutions WinSystems’ embedded single board computers are designed to support a broad range of industry applications in challenging operational environments. From energy and transportation management, to industrial IoT and automation—our systems enable the collection, processing and transmission of real-time data requirements at the heart of your overall system. From standard components to full custom solutions, WinSystems delivers world-class engineering, quality and unrivaled technical support. Our full line of embedded computers, I/O cards, and accessories help you design smarter projects offering faster time to market, improved reliability, durability and longer product life cycles. Embed success in every application with The Embedded Systems Authority!

SBC35-C398Q – Quad-Core Freescale i.MX 6Q Cortex A9 Industrial ARM® SBC
PX1-C415 – PC/104 Form Factor SBC with PCIe/104™ OneBank™ expansion and latest generation Intel® Atom™ E3900 Series processor

SCADA

ENERGY

IOT

AUTOMATION

TRANSPORTATION

Single Board Computers | COM Express Solutions | Power Supplies | I/O Modules | Panel PCs


IO60-M410 Data acquisition module for embedded systems with IO60 expansion and 24 GPIO

817-274-7553 | www.winsystems.com


ASK ABOUT OUR PRODUCT EVALUATION! 715 Stadium Drive, Arlington, Texas 76011


CONTENTS

Winter 2017 | Volume 15 | Number 5

opsy.st/ECDLinkedIn

FEATURES

10 Semiconductor suppliers betting big on automotive

COVER


First unveiled at CES 2017, the Toyota Concept-i contains many of the hallmarks envisioned for future vehicles: autonomous drive capabilities; artificial intelligence (AI); a personable virtual assistant; and immersive infotainment. Not surprisingly, these are just the applications semiconductor vendors are targeting.

By Brandon Lewis, Technology Editor

16 Revolutionizing the user experience with a sense of agency By Steve Cliffe, Ultrahaptics

20 Exploring the software-enabled transformation of car audio
By Anil Khanna, Mentor, a Siemens business

24 Deception networks: Reducing alert fatigue and increasing security through an alternate reality
By Brandon Lewis, Technology Editor

28 Mid-range FPGAs for design and data security: No excuses
By Ted Marena, Microsemi Corporation

WEB EXTRAS

Robotic exoskeletons: The key to human superpowers
By Rudy Ramos, Mouser Electronics
http://bit.ly/RoboticExoskeletons

Keep smiling with the iBrush
By Colin Walls, Mentor, a Siemens business
http://bit.ly/SmileWithiBrush

Why companies should consider joining Thread
By Cees Links, Qorvo
http://bit.ly/JoiningThread

EVENTS  CES 2018 Las Vegas, NV January 9-12, 2018 www.ces.tech

 embedded world 2018 Nuremberg, Germany February 27 – March 1, 2018 www.embedded-world.eu

COLUMNS

5 IOT INSIDER: Voice rec is terrifying
By Brandon Lewis, Technology Editor

6 TRACKING TRENDS: MIPS continues in 5G, IoT
By Curt Schwaderer, Editorial Director

8 MUSINGS OF A MAKERPRO: Raspberry Pi smart home solutions
By Jeremy S. Cook, Contributing Editor

9 AUTOMOTIVE ANALYSIS: SerDes eyed for fatter bandwidth pipes inside smart cars
By Majeed Ahmad, Automotive Contributor

31 EDITOR'S CHOICE
By Jamie Leland, Content Assistant

Published by: OpenSystems Media

2017 OpenSystems Media® © 2017 Embedded Computing Design. All registered brands and trademarks within Embedded Computing Design magazine are the property of their respective owners. ISSN: Print 1542-6408, Online 1542-6459



IOT INSIDER

blewis@opensystemsmedia.com

Voice rec is terrifying
By Brandon Lewis, Technology Editor

In a world obsessed with Internet privacy, it's surprising how little we talk about always-listening devices like the Amazon Echo. After all, a company that wants to learn intimate details about your life in order to sell you more stuff has a microphone permanently fired up in your kitchen. If you own an Echo and weren't aware of this feature, open up your Alexa app, select the "Settings" menu, and then select "History." Take a listen. Were all of those recordings intended for the Echo?

I guess privacy is the price of convenience in modern consumerism. And things are about to get a whole lot more convenient.

Cacophonies, cocktail parties, convenience, and Christmas

XMOS is a fabless semiconductor company that spun out of the University of Bristol to focus on voice and music processing ICs. Among those ICs, devices based on the 32-bit xCORE MCU architecture have had notable success in the voice recognition market, delivering 16 programmable cores with DSP functions integrated in the same chip.

XMOS recently parlayed the xCORE architecture into the VocalFusion 4-Mic Dev Kit for Amazon's Alexa Voice Service (AVS). The kit is designed around the VocalFusion XVF3000 integrated far-field voice processor and four high signal-to-noise-ratio (SNR) MEMS microphones from Infineon (Figure 1). XMOS claims the kit is the first far-field linear microphone array solution available on the market.

Range aside, far-field voice processing gets really interesting when combating the "cocktail party" problem: situations in which a platform needs to isolate the voice of a single speaker from a noisy environment. At distances of 5 m or more, the VocalFusion 4-Mic Dev Kit uses a combination of acoustic echo cancellation (AEC), adaptive beamforming, dynamic de-reverberation, and automatic gain control (AGC) to isolate and extract the voice signal of a primary speaker.

Beyond this is where things start to get spooky. Earlier this year, XMOS acquired Setem Technologies, Inc. of Boston, MA, which develops massive Fourier transforms for blind-source signal separation. These blind-source separation algorithms mathematically decompose elements of source signals from a set of signals and then reconstruct them, either individually or as groups. In voice recognition this can be applied to an individual speaker, or even a conversation. Now, in theory (and perhaps in practice), blind-source separation can be used to isolate the voice frequencies of multiple speakers in a room, and thereby establish a biometric identity for each. As you can imagine, the applications of such technology could be widespread, and not just in the sense that Amazon wants to know what every member of your family wants for Christmas. Surveillance, for instance, immediately comes to mind.

This takes us back to the VocalFusion 4-Mic Dev Kit's linear microphone array. While many platforms such as the Amazon Echo and Google Home use a circular array of omni-directional microphones to provide 360-degree room coverage, a linear array is designed for 180-degree arcs. This is of interest because leaders in the voice recognition space envision a future where the tower-based virtual assistants of today recede into everyday objects like TVs, refrigerators, sofas, walls – you name it.

This future is designed to be ultra-convenient, delivering service by the syllable. But be careful. You probably won't know who, or what, is listening.

FIGURE 1: The XMOS VocalFusion 4-Mic Dev Kit for Amazon's Alexa Voice Service (AVS) is based on the XVF3000 integrated far-field voice processor and a linear MEMS microphone array from Infineon.
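Adaptive beamforming builds on the classic delay-and-sum idea: delay each microphone's signal so that sound arriving from the look direction adds coherently while off-axis sound partially cancels. A minimal sketch is below; the array geometry, sample rate, and signals are invented for illustration, and this is in no way XMOS's implementation.

```python
import numpy as np

def delay_and_sum(mic_signals, mic_positions_m, steer_angle_rad, fs_hz, c=343.0):
    """Steer a linear mic array toward steer_angle_rad by delaying each
    channel so a far-field plane wave from that direction adds coherently."""
    # Per-mic time delays for a plane wave (mics along the x axis)
    delays_s = mic_positions_m * np.cos(steer_angle_rad) / c
    delays_smp = np.round(delays_s * fs_hz).astype(int)
    delays_smp -= delays_smp.min()              # make all delays non-negative
    n = mic_signals.shape[1] - delays_smp.max() # overlapping sample count
    # Shift each channel by its delay, then average the aligned channels
    aligned = [s[d:d + n] for s, d in zip(mic_signals, delays_smp)]
    return np.mean(aligned, axis=0)

# Toy example: 4 mics, 16 kHz, a 1 kHz tone arriving broadside (90 degrees),
# so every channel sees the same signal and the output equals the input tone
fs = 16000
t = np.arange(1024) / fs
tone = np.sin(2 * np.pi * 1000 * t)
mics = np.stack([tone] * 4)
out = delay_and_sum(mics, np.array([0.0, 0.033, 0.066, 0.099]), np.pi / 2, fs)
```

Real front ends like the XVF3000's combine this with adaptive filtering, AEC, and de-reverberation; the sketch only shows the geometric core of the technique.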



TRACKING TRENDS

cschwaderer@opensystemsmedia.com

MIPS continues in 5G, IoT
By Curt Schwaderer, Editorial Director

LTE brings with it an all-IP Evolved Packet Core (EPC) and an increasingly end-to-end IP packet data environment between mobile devices and the Internet. These advances toward an all-IP network are driving substantial increases in subscriber bandwidth use. Meanwhile, 5G standards aim to enable Internet of Things (IoT) use cases and applications. This, in turn, is driving requirements that need solutions:

›› Multiple radio access technologies (RATs) for LTE+ and 5G are needed for compatibility across generations, as well as fast context switching
›› Up to five component carriers for LTE and 5G can be aggregated to increase the downlink bandwidth needed to support LTE/5G applications
›› Voice over LTE (VoLTE) removes the circuit-switched mobile voice path and uses the Session Initiation Protocol/Session Description Protocol (SIP/SDP) for signaling and the Real-time Transport Protocol (RTP) over IP for communications content (but upgrading from Adaptive Multi-Rate (AMR) to the more complex Enhanced Voice Services (EVS) codecs needs higher clock speeds and more memory)
›› LTE Unlicensed Spectrum (LTE-U) augments capacity by aggregating unlicensed and licensed carriers, creating the multi-RAT problem and making coexistence with technologies like Wi-Fi important
›› Pre-standard 5G is targeted for the 2018 Winter Olympics, putting incredible time-to-market pressure on mobile operators and the ecosystem

MIPS and LTE/5G

The MIPS IP portfolio and enabling ecosystem make it a good choice for integrated SoC solutions that include a physical modem stack, RF stack, and crypto engine. Such solutions focus on physical layer control and the L2/L3 protocol stack, which requires a combination of performance to handle the total data bandwidth and fast context switching to support features like carrier aggregation. The MIPS multi-threaded, multicore CPU facilitates this.
The physical layer itself is typically on a DSP/hardware accelerator and is available from ecosystem partners, although some manufacturers integrate their own physical layer. The result is that a variety of basestations and user equipment use MIPS. Customers have deployed MIPS in products ranging from NarrowBand IoT (NB-IoT) to LTE and LTE-Advanced, with some working on 5G.


Fine-grained multithreading

One of the key features of MIPS cores used in this application area is "fine-grained multithreading." Traditional CPU multithreading involves a thread running until it's interrupted by an event that results in a longer latency stall. MIPS implements a hardware-based multithreading capability in which the CPU core checks every cycle whether the current thread is stalled. If it is, the hardware scheduler switches execution to another thread that is ready to run. Because this happens in hardware, even single-cycle stalls can be covered, resulting in higher overall instructions per cycle (IPC).

Fine-grained multithreading also provides extremely fast context switching: when implementing LTE products, using four threads for four component carriers enables the implementer to store the context of each component carrier within a thread. The hardware takes care of the context switching, which allows concurrent operation of all four carriers. Additional cores or threads remain available to run applications under a larger OS like Linux.

5G requirements

3GPP categorizes 5G into three parts:

›› 5G enhanced mobile broadband (eMBB) – Characterized by high data rates and carrier aggregation. 5G extends into mm-wave frequencies for increased performance and data rates. This is where the fine-grained multithreading and multicore support of the MIPS CPUs is important.
›› 5G ultra-reliable and low-latency communications (URLLC) – These applications are characterized by low-latency requirements with strong security and high reliability. They include autonomous vehicles, remote medical procedure control, and factory automation, where real-time behavior and low latency are critical.
›› 5G massive machine-type communications (mMTC) – This is basically the IoT. These applications are characterized by low cost, low power, and low complexity.
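The fine-grained multithreading policy described earlier — check the current thread for a stall every cycle and swap in a ready thread at zero cost — can be modeled with a toy software scheduler. Nothing here reflects actual MIPS hardware; the thread names, stall timing, and cycle counts are invented purely to mimic the scheduling behavior.

```python
class HwThread:
    def __init__(self, name):
        self.name = name
        self.stalled_until = 0   # cycle at which this thread becomes ready again

def run(threads, cycles):
    """Barrel-style fine-grained scheduler: on every cycle, if the current
    thread is stalled, switch to any thread that is ready to run."""
    trace, current = [], threads[0]
    for cycle in range(cycles):
        if current.stalled_until > cycle:       # stall detected this cycle
            ready = [t for t in threads if t.stalled_until <= cycle]
            if ready:
                current = ready[0]              # zero-cost context switch
        trace.append(current.name)
        if cycle == 2 and current.name == "carrier0":
            current.stalled_until = cycle + 3   # inject a simulated memory stall
    return trace

# Two threads standing in for two component-carrier contexts
threads = [HwThread("carrier0"), HwThread("carrier1")]
print(run(threads, 6))
# ['carrier0', 'carrier0', 'carrier0', 'carrier1', 'carrier1', 'carrier1']
```

The point of the model: no cycles are wasted while carrier0 waits out its stall, because carrier1's context is already resident and execution switches to it the very next cycle.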
MIPS multicore and multithreading capability, power, integration, and virtualization/security features can help each area in Figure 1 achieve its requirements.

Security

The Omnishield feature in MIPS provides security through separation of the memory and I/O spaces used by


FIGURE 1 Features of the MIPS architecture help address the needs of 5G networks, including enhanced mobile broadband (eMBB), massive machine-type communications (mMTC), and ultrareliable low-latency communications (URLLC).

each functional component. MIPS extends this notion by allowing up to 255 domains to be created within a system so that applications can be isolated from one another. This is accomplished using what appears "logically" as a second memory management unit (MMU), which is a characteristic of the MIPS Virtualization architecture. There are multiple scenarios where Omnishield comes into play:

›› Malicious software attempting to gain access to unauthorized information
›› Malicious or accidental software execution that could cause a system crash

A type 1 hypervisor that supports full virtualization of the CPU is needed to implement Omnishield. The hypervisor manages resource/privilege access to one or more of these domains, or "guests." A guest can be a single application, an entire OS, or multiple OSs.

Secure boot

Another important issue with 5G and IoT is how to perform software downloads and updates in a secure, reliable way. MIPS implements secure boot and a chain-of-trust capability to address this. A level-0 bootloader root of trust (RoT) is where it all starts. The level-0 bootloader authenticates the next bootloader, then loads and transfers control to that

bootloader. This level-0 bootloader is programmed at the OEM and can't be changed after manufacture. As a result, it can be trusted to authenticate and transfer control to follow-on agents that perform software updates.

The level-1 bootloader deals with authentication of the image. Authentication typically involves confirming that the payload (code, data, secrets, etc.) hasn't been tampered with. Once authenticated, decryption of the payload may also be performed (if necessary). The level-1 bootloader then gives way to a level-2 bootloader, which authenticates the hypervisor image to establish a chain of trust. Once the trusted hypervisor is launched, it can perform the initial configuration of the system, set up memory domains, and authenticate guests.

Summary

LTE and 5G define a rich feature set and a variety of applications that require a unique mix of low latency, high reliability, and security features. Virtualization for low-end IoT devices where security is important, coupled with the availability of DSP extensions, opens up a wider market for voice-controlled speakers, modems, VoLTE, and surveillance cameras, to name a few. The features and capabilities of the MIPS architecture and ecosystem partners provide an effective means for meeting these demanding requirements to realize new and emerging IoT applications.
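The chain of trust described above — each stage authenticating the next image before handing over control — can be sketched in a few lines. This is a hedged illustration only: a keyed HMAC stands in for the asymmetric signatures a real boot ROM would verify, and the stage names and key are invented.

```python
import hmac, hashlib

ROT_KEY = b"fused-at-the-OEM"   # stand-in for the immutable level-0 root of trust

def sign(image: bytes, key: bytes) -> bytes:
    """Produce an authentication tag over a boot image (HMAC as a stand-in)."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_and_launch(stages):
    """Authenticate each (name, image, tag) stage before 'transferring control';
    any mismatch halts the boot instead of running untrusted code."""
    launched = []
    for name, image, tag in stages:
        if not hmac.compare_digest(sign(image, ROT_KEY), tag):
            raise RuntimeError(f"authentication failed at {name}")
        launched.append(name)           # control transfers to this stage
    return launched

# Build a valid chain: level-1 and level-2 bootloaders, then the hypervisor
chain = [(n, img, sign(img, ROT_KEY)) for n, img in
         [("level-1 bootloader", b"bl1"),
          ("level-2 bootloader", b"bl2"),
          ("hypervisor", b"hyp")]]
print(verify_and_launch(chain))
# ['level-1 bootloader', 'level-2 bootloader', 'hypervisor']
```

Swapping any image for a tampered payload makes its tag check fail, so the boot stops before the compromised stage ever runs — which is the whole point of anchoring the chain in an unchangeable level-0 RoT.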



MUSINGS OF A MAKERPRO

www.youtube.com/c/jeremyscook

Raspberry Pi smart home solutions
By Jeremy Cook, Engineering Consultant

We've all heard of smart devices, and we can see this technology filtering into our homes bit by bit. While much of this technology is costly and/or proprietary, you can build up your own smart home to your exact specifications for a very reasonable price using the various flavors of Raspberry Pi, available for as little as five dollars. Here are a few examples of what can be done with this amazing system.

Smart calendar

Before things went digital, many families used a wall calendar to display upcoming events. With the right equipment, including a Raspberry Pi and a recycled laptop LCD screen, you can now make a useful digital equivalent (Figure 1). As outlined in the article "Raspberry Pi: Wall mounted calendar and notification center" on Instructables, the screen is set to show a custom homepage and can display a weather forecast, public transportation information, and events in the area. This general concept can be taken even further as a smart mirror. If you add listening capabilities via systems like Alexa or Google Home, controlling lighting or appliances, or even getting an answer to your question, is only a voice command away.

Media streaming

Remember many years ago when you had to tune in at a certain time to see your favorite show? This may still be the case for some content, but today's media consumption is more focused on on-demand watching, like binge-watching your favorite series or catching a bit of your favorite YouTube channel during a spare moment. There are many options for streaming out there, but for an entirely custom solution, the Raspberry Pi presents a great option. This tutorial will take you through installing Kodi, formerly known as XBMC or Xbox Media Center. While using a 'Pi in this way has many advantages (especially when working with your own personal media collection), for online streaming you'll need to use a paid service called PlayOn and run it on a Windows PC.

Security camera

While you'll likely want the most powerful Raspberry Pi available running as a media streaming device, what if you have an older Raspberry Pi or two in your parts bin? One idea is to use them as security cameras. As seen in a Maker Share Z-Wave writeup, this type of system can run on an original Raspberry Pi Model B, or even an A – no matter what 'Pi hardware you have from years ago, a custom security camera is well within reach. You'll need to add a Raspberry Pi camera, as well as a housing, and, with these older models, some way to connect to your network for remote monitoring. Of course, if you're purchasing everything new, the Raspberry Pi 3 and Raspberry Pi Zero W come with Wi-Fi built in, so you won't have to worry about adding connectivity hardware. Once you have this up and running, you can monitor your house or business from afar, or set it up to record video when something moves.

Z-Wave Smart Home Maker Challenge

If you think you could "smarten" your home in a way that hasn't been thought of yet, consider entering your idea in the Z-Wave Smart Home Maker Challenge, put on by Sigma Designs and Maker Media. If your idea is chosen, you'll be given a Raspberry Pi equipped with the Z-Wave Developer Kit, along with a Z-Wave-certified device of your choosing from Zwaveproducts.com. Even better, the grand prize is a trip to the Consumer Electronics Show in Las Vegas and a chance to show off your project in the Z-Wave Alliance Smart Home Pavilion!

FIGURE 1: Raspberry Pi wall-mounted calendar and notification center
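The "record video when something moves" idea reduces to motion detection, and the simplest workable scheme is frame differencing: compare consecutive frames and flag motion when enough pixels change. A hedged sketch follows — the synthetic frames stand in for camera captures, and a real build would pull frames from the Pi camera module instead; the thresholds are made up.

```python
import numpy as np

def motion_detected(prev, curr, pixel_thresh=25, frac_thresh=0.01):
    """Flag motion when more than frac_thresh of pixels change by more
    than pixel_thresh grey levels between consecutive frames."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    changed = np.count_nonzero(diff > pixel_thresh)
    return changed / diff.size > frac_thresh

# Synthetic 8-bit greyscale frames: static background, then a bright blob
bg = np.full((120, 160), 40, dtype=np.uint8)
moved = bg.copy()
moved[30:60, 50:90] = 200          # simulated "intruder" region

print(motion_detected(bg, bg))     # False
print(motion_detected(bg, moved))  # True
```

In a live setup you would loop over camera frames, start recording when the function returns True, and stop after a quiet interval — with the thresholds tuned to your lighting and scene.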



AUTOMOTIVE ANALYSIS

SerDes eyed for fatter bandwidth pipes inside smart cars
By Majeed Ahmad, Automotive Contributor

When it comes to advanced driver assistance systems (ADAS), much of the design limelight goes to the cameras and sensors that generate the video, audio, and control information enabling features like autonomous braking, pedestrian detection, parking assistance, and collision avoidance. But what about the underlying data transport technology that moves all that high-resolution content around the vehicle? A surround view system, for instance, streams 1280 x 800 pixel video at a rate of 30 frames per second (fps).

Whether it's ADAS, infotainment, or connected car technologies like vehicle-to-vehicle (V2V) and vehicle-to-everything (V2X), they all demand greater bandwidth, complex interconnects, and robust data integrity in order to facilitate real-time driver assist and safety features. Market research firm Strategy Analytics forecasts that bandwidth requirements in vehicles will grow 25 times by the year 2020.

The meteoric rise in the amount of aggregated sensor data will inevitably overwhelm the automotive bandwidth now mostly served by buses or networks such as CAN, LIN, MOST, FlexRay, LVDS, and Ethernet. In fact, the advent of megapixel-resolution imaging in vehicles seems to preclude these automotive links, except LVDS and Ethernet. And while an in-vehicle Ethernet backbone can transport data 100x faster than a CAN bus link, it still requires compression of video feeds and doesn't seem to scale to the frame rates of high-resolution video streams from multiple cameras. That's where high-speed serial links come into the picture.

High-speed serial links

Take, for example, Maxim Integrated's gigabit multimedia serial link (GMSL) technology, which provides a compression-free alternative to Ethernet for transferring 4K video and high-definition audio in the cars of the future (Figure 1). The serializer and deserializer (SerDes) chipsets in GMSL connectivity solutions ensure that shielded twisted pair (STP) or coax cables of up to 15 meters meet the most stringent electromagnetic compatibility (EMC) requirements of the automotive industry. The spread-spectrum technology built into the SerDes chips eliminates the need for an external spread-spectrum clock while boosting protection against electromagnetic interference (EMI).

The serializer IC offers error detection of video and control data via a crosspoint switch for multiple cameras, and it drives longer cables with programmable pre-/de-emphasis features. The deserializer IC, which recovers data from a spread-spectrum serial input, also provides adaptive equalization to improve error rates.

GMSL technology was recently adopted in the ADAS Surround View kit, a miniature automotive chassis offered by Renesas. Earlier, Maxim's GMSL interface was used to transport high-speed data in NVIDIA's DRIVE CX (cockpit) and PX (piloted driving) platforms. Here, GMSL transported data between the NVIDIA SoC and multiple camera inputs, and the deserializer chipset synchronized video streams from four cameras while simultaneously powering each camera over the same coax cable.

Maxim's SerDes technology is striving to fill bandwidth gaps in the rapidly changing automotive design landscape. Having automotive powerhouses like Renesas and NVIDIA on its side seems like a good start.

FIGURE 1: The MAX96708 is a 14-bit GMSL deserializer for megapixel cameras featuring a crosspoint switch that maps data to multiple outputs.
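It's worth putting a number on that single surround-view stream. Assuming uncompressed 24-bit color (an assumption — the article doesn't state the pixel format), the raw bandwidth works out as follows:

```python
# Raw bandwidth for one uncompressed 1280 x 800, 30 fps camera stream
width, height, fps, bits_per_pixel = 1280, 800, 30, 24
bits_per_s = width * height * fps * bits_per_pixel

print(f"{bits_per_s / 1e6:.0f} Mbit/s per camera")            # 737 Mbit/s
print(f"{4 * bits_per_s / 1e9:.2f} Gbit/s for four cameras")  # 2.95 Gbit/s
```

Roughly 737 Mbit/s for a single camera, and almost 3 Gbit/s for a four-camera surround view — numbers that make clear why CAN-class links are out of the question and why even Ethernet needs compression.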



SILICON: AUTOMOTIVE ICs

Semiconductor suppliers betting big on automotive By Brandon Lewis, Technology Editor

As Internet of Things (IoT) solutions begin to mature, semiconductor vendors are looking for the next big growth market. That increasingly appears to be the automotive sector, where autonomous drive, vehicle communications, and vehicle electrification systems present an enormous opportunity to sell chips in high volumes. The European arms of several global technology companies confirmed this trend at a recent press event in Munich. Not surprisingly, enabling technologies for advanced driver assistance systems (ADAS) and active safety took center stage.

Advanced image sensors take automotive vision beyond 20/20

ON Semiconductor's acquisition of Fairchild Semiconductor gave the company a broad portfolio of discrete power solutions for the automotive market. But it was the acquisition of Aptina Imaging Corporation in 2014 that helped drive the company's leadership in automotive vision systems: ON Semiconductor currently commands nearly 70 percent of the front-camera ADAS market and more than 50 percent of the total automotive image sensor market[1]. Aptina CMOS image sensor technology is at the heart of ON Semiconductor's recently released Hayabusa Image Sensor platform, which features 1 MP to 5 MP


variants with simultaneous 120 dB ultra-high dynamic range (UHDR) and LED flicker mitigation (LFM). Simultaneous UHDR and LFM is enabled by a 3.0 micron super-exposure backside-illuminated (BSI) pixel technology with 100,000 electrons of charge capacity. This technology allows more light to be captured before an image saturates, thus eliminating any low-light tradeoff.

Simultaneous UHDR and LFM

In imaging, dynamic range represents the disparity between the lightest and darkest parts of an image and, in turn, a camera's ability to reproduce that image. It is expressed in dB. Dynamic range in real-world scenery can be significant – at times in excess of 140 dB. As you can imagine, this presents challenges in object detection and recognition for safety-critical automotive vision systems. Figure 1 shows the difference between the images generated by an automotive backup camera with and without HDR technology. That example, now more than five years old, uses the (then) Aptina Imaging 1.2 MP AR0132AT CMOS image sensor to deliver HDR. However, that device was not equipped with LFM.
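Dynamic range in dB follows directly from the ratio of the largest to the smallest distinguishable signal: DR = 20·log10(max/min). As a rough illustration using the 100,000-electron figure above (the 1-electron noise floor here is an assumption for the arithmetic, not a published spec):

```python
import math

def dynamic_range_db(full_well_e, noise_floor_e):
    """DR in dB = 20 * log10(largest / smallest resolvable signal)."""
    return 20 * math.log10(full_well_e / noise_floor_e)

# 100,000 e- full well (from the article) against an assumed 1 e- noise floor
print(round(dynamic_range_db(100_000, 1)), "dB")
```

That single-exposure figure lands around 100 dB; the additional headroom toward the quoted 120 dB UHDR comes from the sensor's multi-exposure HDR scheme rather than from one exposure alone.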




A major factor in the high resolution of the AR0233 and other Hayabusa CMOS image sensors is on-chip companding, which facilitates lossless compression of 24-bit RAW HDR data into 12-bit outputs. These outputs are sent to an image signal processor through an LVDS serializer/deserializer (Figure 3). The smaller bitstream requires less bandwidth, and therefore less power, which reduces heat in the camera module that could otherwise affect image quality. In addition, the lower bandwidth means cheaper silicon and cabling solutions can be used.

System-wide safety

All variants within the Hayabusa product line share a common pixel design and architecture, so design teams can easily scale their efforts across multiple systems or vehicle designs.
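Companding of this sort is typically a piecewise-linear curve: fine steps for dark codes, where the eye and the algorithms need precision, and coarser steps toward the bright end. A toy 24-bit-to-12-bit compander is sketched below — the knee points are invented for illustration, ON Semiconductor's actual curve is not public, and "lossless" here means exactly invertible at the knee grid (values between steps are quantized).

```python
# Invented knee points: each segment spends 1024 of the 4096 output codes,
# with progressively coarser input steps toward the bright end.
SEGMENTS = [(1, 1024), (16, 1024), (256, 1024), (16384, 1024)]

def compand(x):
    """Map a 24-bit linear value onto 12 bits, piecewise-linearly."""
    out_base, in_base = 0, 0
    for step, codes in SEGMENTS:
        if x < in_base + step * codes:
            return out_base + (x - in_base) // step
        out_base += codes
        in_base += step * codes
    return 4095                      # clamp anything above the last knee

def expand(y):
    """ISP-side inverse: return the low end of code y's input bucket."""
    out_base, in_base = 0, 0
    for step, codes in SEGMENTS:
        if y < out_base + codes:
            return in_base + (y - out_base) * step
        out_base += codes
        in_base += step * codes
    raise ValueError("12-bit code out of range")

# Dark values keep full precision; bright values share coarser buckets
print(compand(100), compand(10_000_000))   # 100 3665
```

The ISP expands the 12-bit codes back onto the 24-bit scale before processing, which is why the link only has to carry half the bits per pixel.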

Although not visible to the human eye, LEDs such as those used in taillights and traffic signals pulsate, or "flicker." In low-light situations this flickering can cause blurriness that confuses image signal processing algorithms, an effect amplified by the longer exposure times image sensors require in dark environments to capture enough photons for a quality image. As a result, vision systems often struggle with tasks like reading traffic signs or identifying vehicle types (for example, a motorcycle versus a car's turn signal).

FIGURE 2: The 2.6 MP AR0233 "Hayabusa" CMOS image sensor from ON Semiconductor provides simultaneous UHDR and LFM capability.

In contrast with legacy products, the LFM capabilities of Hayabusa image sensors such as the 2.6 MP AR0233 reduce this phenomenon without sacrificing low-light performance (see BSI pixel technology). According to Bahman Hadji, a former Aptina Imaging employee and Senior Product Manager for Automotive Imaging Solutions at ON Semiconductor, the AR0233 delivers the highest resolution in terms of calibration and yield in the 2 MP CMOS image sensor segment today, thanks to LFM and the 120 dB UHDR that mirrors real-world environments (Figure 2).

FIGURE 1: Shown here is the difference in image quality for an automotive backup camera system with and without HDR capability. (Source: Aptina Imaging Corporation, now part of ON Semiconductor.)

FIGURE 3: Hayabusa image sensors compress 24-bit RAW HDR data into 12-bit outputs, which reduces bandwidth, power consumption, and system cost. (The diagram shows four AR0233 sensors, each feeding a serializer over a 12-bit/<1 Gbps link, into deserializers and an SVS SoC with ISP + DSP inside the ECU.)




The devices are also delivered as safety elements out of context (SEooC) with an ASIL B rating, as they are able to evaluate each frame for faults in real time. Detected faults are sent in the metadata of each frame, giving vision systems more time to react to potential safety issues. This also facilitates the creation of fault image libraries that can be used to verify algorithms and analyze overall system behavior.

ON Semiconductor acquired mm-wave technology from IBM's Haifa research team earlier this year, and is currently evaluating LiDAR investment opportunities to round out its automotive sensor portfolio, Hadji said. This will give the company a strong position in future ADAS and autonomous vehicle designs, especially considering its power supply solutions pair nicely with high-performance automotive sensor fusion processors from the likes of NVIDIA, Intel, Renesas, and others.

The push to process vehicle sensor data

And there are many others now in the race to process all of that vehicle sensor data. Among them, Toshiba has been evolving its Visconti line of image recognition

processors in parallel with increasingly demanding European New Car Assessment Programme (Euro NCAP) requirements. Starting in 2014, the Euro NCAP began rating vehicles based on active safety technologies such as lane departure warning (LDW), lane keep assist (LKA), and autonomous emergency braking (AEB). These requirements extended to daytime pedestrian AEB and speed assist systems (SAS) in 2016. In 2018 the requirements will expand further to include nighttime pedestrian AEB, as well as day and nighttime cyclist AEB. To meet the demand for vision systems that can accurately identify both mobile and stationary objects in daytime and nighttime settings, image recognition processors like Toshiba’s TMPV7608XBG Visconti4 processor incorporate a suite of compute technologies (Figure 4). In addition to CPUs and DSPs, eight hardware acceleration blocks allow the device to efficiently execute highly specialized automotive computer vision (CV) workloads, such as affine transformation (linear mapping), filtering, histograms, matching, and pyramid image generation. Two new hardware acceleration blocks on the TMPV7608XBG specifically address the challenges of nighttime and mobile/stationary object detection: the enhanced co-occurrence histogram of oriented gradients (CoHOG) and structure from motion (SfM) accelerators. For nighttime ADAS applications, the enhanced CoHOG accelerator goes beyond conventional pattern recognition by combining luminance- and color-based ­feature descriptors that offset the low contrast between objects and their surroundings. According to Toshiba, enhanced CoHOG accelerators not only reduce the time required for object recognition, but result in pedestrian detection that’s as reliable at night as it is during the day. Meanwhile, the SfM accelerator uses sequential images from a monocular camera to develop three-dimensional estimates of the height, width, and distance to an object. 
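Toshiba's enhanced CoHOG block is proprietary silicon, but its starting point – the histogram of oriented gradients – is easy to sketch. The NumPy function below computes a basic orientation histogram for one image patch; CoHOG extends this idea to co-occurring orientation pairs, and the enhanced version adds the color-based descriptors described above. The patch size and bin count are arbitrary choices for illustration.

```python
import numpy as np

def orientation_histogram(patch, bins=8):
    """Gradient-orientation histogram for one image patch (the HOG core)."""
    gy, gx = np.gradient(patch.astype(float))     # vertical, horizontal gradients
    mag = np.hypot(gx, gy)                        # gradient magnitude per pixel
    ang = np.mod(np.arctan2(gy, gx), np.pi)       # unsigned orientation in [0, pi)
    idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    hist = np.bincount(idx.ravel(), weights=mag.ravel(), minlength=bins)
    return hist / (hist.sum() + 1e-9)             # normalized descriptor

# A patch containing a vertical edge: all gradient energy lands in bin 0
# (horizontal gradient direction).
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0
h = orientation_histogram(patch)
```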
Stationary objects can therefore be detected without any learning curve, and motion

FIGURE 4

The Toshiba TMPV7608XBG Visconti4 image recognition processors leverage CPUs, image processing engines (DSPs), and image processing accelerators (hardware accelerators) to compute a range of automotive computer vision (CV) workloads. (Block diagram: four surround cameras – front, left, right, and back – feed the TMPV760 series device over serializer/deserializer bridges and video input interfaces; on-chip MPEs, accelerators, 32-bit RISC MeP cores, video switch/composition, DMA, CAN MCU interface, LPDDR2-800 SDRAM and SPI flash controllers, and a video output interface to the display complete the system.)

www.embedded-computing.com


analysis and pattern recognition can be applied to detect moving objects such as pedestrians or vehicles. Because the three-dimensional information reduces the region of interest within an image, ADAS systems are able to recognize and react to obstacles more quickly.

These accelerators operate in conjunction with eight media processing engines (MPEs) in the TMPV7608XBG's DSP subsystem, each of which is equipped with double-precision floating-point units (FPUs). As a result, the device can execute eight image recognition applications in parallel with a response time of 50 milliseconds (Figure 5). Running at clock frequencies of 266.7 MHz, this represents a 50 percent reduction in processing time compared to previous Visconti processors.

Toshiba's publicized design wins for Visconti4 image recognition processors include a front-camera-based active safety system from DENSO Corporation.

AI and autonomous drive's call for compute

But image processing is just one piece of the puzzle in today's automotive safety systems. Modern ADAS applications and semi-autonomous vehicles rely on inputs from radar, LiDAR, proximity sensors, GPS, vehicle-to-everything (V2X) connectivity, and other active components, in addition to cameras. Data from all of these inputs must be processed, analyzed, and fused in real time so that corrective action can be taken quickly in hazardous situations.

Artificial intelligence (AI) would appear to be the ideal technology to enforce driving policies and make real-time decisions in autonomous and semi-autonomous vehicle use cases. However, conventional cloud-based implementations of AI are unsuitable for automotive safety applications, largely because of the latency associated with data transmission, but also due to privacy, security, cost, and network coverage issues. As an alternative, supercomputer-class processors capable of running on-chip artificial or deep neural networks (ANNs/DNNs) are being designed into the electronic control units (ECUs) of automotive safety systems in cars like Teslas. For example, NVIDIA claims that variants of its Drive PX Pegasus platform (available in 2H 2018) will provide up to 320 trillion deep learning operations per second (TOPS), which is considered more than sufficient for level 5 autonomous vehicles.

Unfortunately, these processors come with challenges of their own. Aside from considerable power consumption and cost per unit, the die size of these chips is

FIGURE 5 The TMPV7608XBG Visconti4 processor pairs eight media processing engines with hardware acceleration blocks to enable eight simultaneous image recognition applications with 50 millisecond response times.




enormous (Table 1). Factoring in one such processor for each of the roughly 100 million cars produced every year, the automotive market would demand three times the silicon currently produced for smartphones. This far outstrips current silicon wafer manufacturing capacity.

Again, DSP IP blocks offer a solution that's more optimized for embedded automotive use cases, with the CEVA Deep Neural Network (CDNN) providing an example. CDNN includes a neural network generator, software framework, and hardware accelerator that are tailored to work with CEVA-XM imaging and vision DSP cores (Figure 6). The value proposition here is reduced power consumption, lower cost, and the ability to distribute intelligence throughout a system design.

Central to CDNN are DSPs like the CEVA-XM6, which includes vector and scalar processing units and a 3D data processing scheme. The vector and scalar processing units make the -XM6 well suited for sensor data fusion, while its 3D data processing scheme helps accelerate neural network performance. In the CDNN context, these DSPs are accompanied by one or more hardware accelerators that deliver 512 multiply-accumulate (MAC) operations per cycle in convolutional neural network (CNN) processing. All other neural network layers – of any type or number – are run by the DSP itself.

But what makes the CDNN toolkit unique is the CEVA Network Generator. The Network Generator converts pre-trained neural networks developed in frameworks such as Caffe and TensorFlow into real-time neural network models that can run in embedded systems. From there, the second-generation CDNN Software Framework can be used for application tuning. According to CEVA, Inc., the CDNN toolkit processes CNNs four times faster than GPU-based alternatives at 25x the power efficiency.
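The 512-MACs-per-cycle figure makes it straightforward to estimate how long a convolution layer occupies such an accelerator. The sketch below uses the standard MAC-count formula for a stride-1 convolution; the layer dimensions are hypothetical, chosen only to show the arithmetic.

```python
def conv_layer_macs(h, w, c_in, c_out, k):
    """Multiply-accumulate count for one stride-1 k x k convolution layer
    producing an h x w output with c_in input and c_out output channels."""
    return h * w * c_in * c_out * k * k

# Hypothetical 3x3 layer on a 1280 x 800 feature map, 16 -> 32 channels:
macs = conv_layer_macs(1280, 800, 16, 32, 3)   # ~4.7 billion MACs
cycles = macs / 512                            # accelerator does 512 MACs/cycle
```

At a 1 GHz accelerator clock (an assumed figure, not a CEVA specification), those cycles correspond to roughly 9 ms for this one layer, which shows why dedicated MAC hardware, rather than the DSP alone, handles the convolutional layers.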
Richard Kingston, the company's Vice President of Market Intelligence, Investor, and Public Relations, said that the technology currently has more than five design wins in the automotive sector, with notable partners


NVIDIA ADAS Products | Drive PX2 (AutoCruise) | Drive PX2 (AutoChauffeur) | Drive PX2 (Fully Autonomous Driving)
SoCs | 1x Tegra Parker | 2x Tegra Parker | 4x Tegra Parker
Discrete GPUs | N/A | 2x Pascal | 4x Pascal
Die size, all chips (mm²) | 200 mm² | 800 mm² | 1600 mm²
Deep learning TOPS | N/A | 24 DL TOPS | 48 DL TOPS
Integer processing power | N/A | 120 SpecINT | 240 SpecINT
Power (W) | 10 W | 80 W | 160 W
Key customers | Baidu | Tesla | N/A

TABLE 1

While high-performance processors provide the computational horsepower to run on-chip neural networks for fully autonomous driving, the die size required to produce them currently outstrips silicon wafer manufacturing capacity. Source: NVIDIA and Bernstein Research.

FIGURE 6

The CEVA Deep Neural Network (CDNN) is a toolkit for developing, generating, and deploying neural networks on embedded DSPs.

being ON Semiconductor, NEXTCHIP, and a tier one automotive OEM that is using it in a fully autonomous vehicle design.

Connectivity puts precision into automotive positioning

Although solutions like the CDNN toolkit allow neural networks to run locally on embedded processors, some level of cloud connectivity is required for the most effective AI systems. When connected to a data center, new or anomalous information captured by embedded devices during the inferencing process (the process during which targets apply the logic outlined by an AI model to make decisions) can be fed back into AI models to optimize them over time.

Aside from that, a main application for connectivity in ADAS and autonomous drive use cases is positioning. For instance, precision global navigation satellite system (GNSS) services are now available in Europe, Russia, and the United States that deliver accuracy down to the centimeter level for applications such as LDW and V2X communications.

This increased precision can be attributed to updated error modeling techniques, which account for error sources such as satellite orbital position errors, satellite clock errors, and ionospheric and tropospheric interference. To offset these errors in GNSS systems, private error correction services have historically used observation-space representation (OSR). In OSR, a two-way communications link is used to combine the observations of GNSS reference stations with distance-dependent errors based on the location of GNSS receivers in the field (Figure 7). The corrected errors are then delivered to the target platform as a lump sum, which limits their accuracy.



FIGURE 7 Observation-space representation (OSR) error modeling techniques require a GNSS reference station and two-way communications with a GNSS user in the field. Error corrections are delivered in a lump sum, reducing their accuracy.

GNSS satellites transmit signals in different bands, including GPS L1 (1575 MHz), GPS L2 (1227 MHz), and GPS L5 (1176 MHz). This is fortunate in the context of PPP, because more atmospheric distortion can be removed with each additional frequency used.
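The reason extra frequencies help is that first-order ionospheric delay scales with 1/f², so two pseudoranges measured on different carriers can be combined so the delay cancels – the classic ionosphere-free combination used in PPP. The sketch below uses the exact L1/L2 center frequencies (1575.42 and 1227.60 MHz; the article rounds them) and a synthetic 1/f² delay to show the cancellation.

```python
# Ionosphere-free dual-frequency combination: the first-order ionospheric
# delay is k/f^2, so a weighted difference of L1 and L2 pseudoranges
# removes it entirely.

F_L1 = 1575.42e6  # Hz
F_L2 = 1227.60e6  # Hz

def iono_free(p1, p2, f1=F_L1, f2=F_L2):
    """Ionosphere-free pseudorange (meters) from L1/L2 measurements."""
    g1, g2 = f1 * f1, f2 * f2
    return (g1 * p1 - g2 * p2) / (g1 - g2)

# Synthetic check: a 20,000 km true range plus a 1/f^2 ionospheric delay
# (k is an arbitrary illustrative constant giving a ~4 m delay on L1).
rho = 20_000_000.0
k = 1.0e19
p1 = rho + k / F_L1**2
p2 = rho + k / F_L2**2
```

The same algebra explains the figure-of-merit in Figure 9: a single-band L1 receiver is stuck with the full ionospheric error, while a dual-band receiver can remove it before SSR corrections are even applied.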


Dual-band receivers from u-blox will be available in 2018 that take advantage of this phenomenon. As shown in Figure 9, using this dual-band GNSS approach to eliminate multipath errors, in conjunction with SSR correction services, delivers the positioning accuracy required by safety-critical applications.

FIGURE 8

State-space representation (SSR) error modeling eliminates the GNSS reference station and relies only on the observations of a specific GNSS receiver. Error corrections are delivered as individual components to improve positioning accuracy.

By contrast, state-space representation (SSR) relies on a one-way broadcast from satellites directly to an individual GNSS receiver, with only that receiver's observations accounted for in error correction (Figure 8). Error corrections are also delivered as separate components in an SSR implementation, which combines with the single-source observation to yield the centimeter-level positioning mentioned previously.

Multi-band solves the multipath problem

But for all of their benefits, correction services do not account for multipath errors caused by the reflection or diffraction of satellite signals as they pass through an environment. To overcome this challenge in automotive safety and other applications that require precise point positioning (PPP), vendors like u-blox have adopted a multi-band approach.


Cutting error correction costs will connect cars

While high-precision GNSS is still somewhat of a niche technology, u-blox projections indicate that the market will mature by 2025. To realize mass-market adoption, however, costs must be removed from private GNSS error correction services. Sapcorda Services GmbH (SAfe And Precise CORrection DAta) is a joint venture between Bosch, Mitsubishi Electric, Geo++, and u-blox that will provide precision GNSS positioning services for mass-market automotive, industrial, and consumer applications. The real-time correction data service will be hardware-agnostic, and delivered in a public, open data format.

References:
1. Techno Systems Research (TSR). http://www.t-s-r.co.jp. "Automotive Camera Market Analysis 2016." February 2017.

FIGURE 9

Dual-band GNSS receivers and SSR error correction provide significantly better positioning accuracy than existing solutions. (Plot: north/east/down position error over 100 minutes on a ±5 m scale, comparing L1-only GNSS with L1/L2 GNSS plus SSR.)



SILICON: TOUCH INTERFACES

Revolutionizing the user experience with a sense of agency By Steve Cliffe, Ultrahaptics

What is a sense of agency?

A sense of agency is simply the feeling that our actions produce an obvious effect. More broadly speaking, it's a feeling of being in control of our environment – or, at least, clearly having an influence on it.

The sense of agency is usually present in the natural world. Whenever you are holding a physical object, your hand feels that object; you can intuitively sense its weight and texture, and easily move it around. Compare that familiar sensation to fumbling to pick up an object in a video game. Even with the best virtual reality (VR) headsets, the sense of agency in a virtual world is far weaker, leading to objects that seem unreal, with no presence or texture.


Virtual environments and user interfaces (UIs) superficially mimic the appearance of the natural world without offering the full range of sensory feedback. The result is unintuitive UIs and dissatisfied users – for example, an automatic door that unexpectedly doesn't open as we approach, or a power button on a phone or computer that doesn't produce an immediate response. Of course, no user will ever complain to a vendor, "your product lacks a sense of agency!" We just press the 'button' harder or press it repeatedly.

Why is it important?

Years of scientific research and testing have emphasized how the sense of agency is intrinsically linked with a good user experience. Users "strongly desire the sense that they are in charge of the system and that the system responds to their actions[1]." Because of the perceived satisfaction, simply providing a stronger sense of agency could in some cases be enough to make users ignore other problems with a UI. For example, research has shown that when we feel we're in control, we're less aware of delays in response. This phenomenon is known as the intentional binding effect (Figure 1).



Ultrahaptics

www.ultrahaptics.com

TWITTER

@ultrahaptics

LINKEDIN

www.linkedin.com/company/ultrahaptics

FACEBOOK

www.facebook.com/Ultrahaptics


For designers and engineers, a useful and practical lesson of the intentional binding effect is that people tend to feel that any UI that makes them feel in control is better than one where their sense of agency is less clear (Sidebar 1, page 19). In fact, most users feel that such products are simply better and more responsive. Faster, more powerful hardware, carefully redesigned software, and other conventional means can be used to design a more responsive interface. Or, the same perceived benefits can be achieved by strengthening a user’s sense of agency.

Touch: The missing element in a strong sense of agency

Building on work in intentional binding and time perception, recent research has shown how the human sense of agency becomes stronger or weaker depending on which of our senses are used in an action (Figure 2). Importantly, the findings show that haptic feedback provides a stronger sense of agency than visual feedback, with the perceived action-outcome time interval being shorter. This provides an opportunity for designers and developers to harness the sense of touch in order to achieve a stronger sense of agency.

It should not be a surprise that the sense of touch provides the greatest sense of agency. The skin is by far the largest sensory organ in the human body, with an average surface area of almost two square meters. It consists of about 5 million receptors, which are densely packed in the areas they are needed most – for example, there are approximately 3,000 touch receptors in each fingertip. Although commercial products have focused on sound and vision for decades, touch-based haptic feedback can, in fact, create a much stronger sense of agency. Practical experience

FIGURE 1

This diagram illustrates the difference in actual and perceived time by a user with different stimuli.

FIGURE 2

This diagram illustrates perceived and actual time delays between different feedback outcomes from the same stimuli.




with real-world products has shown that haptic feedback offers numerous benefits. These include:

›› Reinforcing the sense of agency and sense of reality
›› Allowing faster and more accurate control in the absence of real physical contact, or when physical contact provides a limited sensory input (such as a smooth touch screen)
›› Strengthening feedback to the user in cases when other sensory feedback is limited, weak, or confusing (such as in a noisy environment, situations in which vision is obscured, or when the user's attention is focused elsewhere)

What's wrong with haptics?

Haptic technologies are already being used in a variety of markets, from the ubiquitous tiny vibration motor built into mobile phones and tablets to more sophisticated entertainment setups that use a variety of haptic devices to provide a range of sensations.

Indeed, the ability of haptics to serve as an additional, intuitive sensory channel alongside sound and vision also makes it extremely valuable in professional applications. VR and augmented reality (AR) have numerous applications beyond entertainment, such as simulation and training. Adding a sense of touch can make simulations more realistic and more effective, and thereby shorten costly training sessions. The aerospace industry has decades of experience and knowledge in haptics and similar technologies – probably more than any other industry – for both simulation training and operational flight.

However, while the widespread applications of haptics could lead one to believe that the technology is mature and pervasive, there are serious drawbacks to current haptics technology that make it expensive and difficult (in many cases, impossible) to implement. This is evident when considering that haptic feedback is far less common than visual and audio outputs, and is often based on primitive components such as simple vibration motors where it is available.

The most obvious challenge with haptics is that it requires physical contact. To put it simply, in order to perceive the feeling of touch you need to be touching something. Generally speaking, in virtual environments the real-world interface will not closely match the virtual object in shape or texture. A video game controller is not a gun or a ball, and a joystick is not a scalpel. Haptic devices are also restricted by sometimes bulky and unwieldy form factors that may block a user's view of the display or environment they are trying to control.



FIGURE 3

Bosch has demonstrated a vehicle entertainment system with Ultrahaptics technology that uses virtual mid-air controls that are easy to feel and adjust without looking away from the road.

Contactless haptics

By utilizing an array of dozens of ultrasound transducers arranged in a flat, square, or rectangular grid to emit an inaudible ultrasound signal, Ultrahaptics provides invisible, contactless haptic feedback at a range of up to a meter. In addition to working at a distance, the technology produces a wider range of effects than vibration motors. These include the sensation of moving objects (like a ball held in the fingers), flowing water, a strong breeze, a virtual pushbutton or dial, or bubbles bursting against the skin. Even materials with a variety of textures can be simulated (Figure 3).


By varying the output of each transducer, a sensation of physical force can be created at points in the air where the ultrasonic sound waves intersect. The waves interfere with each other and – precisely at those points, a few millimeters wide – they produce a stronger and much lower frequency signal that stimulates the skin's tactile mechanoreceptors just as a physical object would.
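The focusing principle can be sketched numerically: each transducer is delayed so that every wavefront arrives at the chosen focal point at the same instant. The array geometry below (a 16 x 16 grid with 10 mm pitch, focusing 20 cm above the array) is an illustrative assumption, not Ultrahaptics' actual hardware configuration.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def focus_delays(positions, focal_point):
    """Per-transducer firing delays (seconds) so all wavefronts from a flat
    array arrive in phase at one focal point in mid-air."""
    d = np.linalg.norm(positions - focal_point, axis=1)  # path length per element
    return (d.max() - d) / C   # farthest element fires first (zero delay)

# 16 x 16 transducer grid, 10 mm pitch, centered on the origin in the z=0 plane
xs = (np.arange(16) - 7.5) * 0.01
px, py = np.meshgrid(xs, xs)
pos = np.column_stack([px.ravel(), py.ravel(), np.zeros(256)])

# Focus 20 cm above the array center
delays = focus_delays(pos, np.array([0.0, 0.0, 0.2]))
```

Modulating the focal point's strength or position at a low rate is what produces the perceivable low-frequency signal the text describes, since the skin cannot sense the ultrasonic carrier itself.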

Stronger agency, lower touch

Contactless haptic tech greatly strengthens the sense of agency, enhancing existing applications for haptic technology, increasing user satisfaction and safety, and enabling new applications and products. In the competitive consumer electronics markets filled with commodity products, companies can use exciting technologies like invisible contactless haptic feedback to make their products stand out.

Steve Cliffe is President and CEO of Ultrahaptics.

References:
1. Shneiderman's Eight Golden Rules of Interface Design. Accessed November 7, 2017. https://faculty.washington.edu/jtenenbg/courses/360/f04/sessions/schneidermanGoldenRules.html

THE EFFECT OF SENSE OF AGENCY ON PERCEIVED TIME

In 2002, researcher Patrick Haggard and others showed that the human perception of time appears to speed up when we feel we are in control of our actions. The shift in perceived time can be significant, adding or subtracting 30 to 50 percent of the total time in some cases. For the user, time actually seems to be compressed as their sense of agency becomes stronger.

This research was formalized as the intentional binding effect – a quantitative measure of how strongly we feel our intentional actions are connected to their outcomes. Intentional binding strength is based on two components – action binding and outcome binding – each of which measures how much our internal perception of time differs from objective reality.

The specialized scientific use of the word "binding" refers to how closely (in milliseconds) our perception of an event matches reality. Action binding is the perceived time difference for our action, and outcome binding is a similar measurement for the outcome of that action. For example, when pressing a button we generally perceive that the action occurred 10 to 30 milliseconds later than what objective measurement shows. The shift in time itself is the action binding.
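The arithmetic of the two binding components can be sketched directly. The trial numbers below are hypothetical but consistent with the ranges in the sidebar: the action is perceived 20 ms late (positive action binding) and the outcome 60 ms early (negative outcome binding), compressing a 250 ms actual interval by roughly 30 percent.

```python
def perceived_interval(actual_ms, action_binding_ms, outcome_binding_ms):
    """Perceived action-to-outcome gap: a late-perceived action (positive
    action binding) and an early-perceived outcome (negative outcome
    binding) both shrink the perceived interval."""
    return actual_ms - action_binding_ms + outcome_binding_ms

# Hypothetical trial: 250 ms actual gap, action +20 ms, outcome -60 ms
gap = perceived_interval(250.0, 20.0, -60.0)   # perceived gap of 170 ms
compression = (250.0 - gap) / 250.0            # ~32 percent compression
```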


SOFTWARE: SOFTWARE-ENABLED CAR AUDIO

Exploring the software-enabled transformation of car audio By Anil Khanna, Mentor, a Siemens business

There is no doubt that software continues to have a material impact on the features and capabilities of the automobile – be it enhanced safety, improved fuel efficiency, better-sounding entertainment, or autonomous driving. However, the match between promises and actual performance has historically been far from perfect.

The J.D. Power market research company conducts an annual survey of U.S. vehicle owners that addresses top complaints. The findings of the survey offer a window into where the gaps between expectations and performance exist. The 2014 survey ranked problems with voice recognition systems as, by far, the number one issue raised by U.S. car owners. Other common complaints related to difficulty with Bluetooth connectivity and excessive wind noise.

Moving ahead to the recent 2017 survey, one would have expected most of these issues to be resolved. Interestingly enough, complaints related to voice recognition systems, Bluetooth


connectivity, and noise from road and wind are still the most prominent! In fact, per the study, "the Audio/Communication/Entertainment/Navigation category continues to be the most problematic area, accounting for 22 percent of all problems reported – up from 20 percent last year[1]." Clearly, there is more work to be done.

It seems obvious that legacy approaches to many of these issues have come up short. For example, traditional methods of adding more passive dampening materials to minimize external wind/road noise and make the interior cabin quieter can only do so much. Moreover, in addition to adding cost, passive dampening adds incremental weight, which has a negative effect on the fuel economy of the vehicle. Carmakers are turning to technology, especially intelligent software-driven solutions, to tackle these problems. Let's look at the example of noise cancellation and how active, software-based methods offer an answer.

Transporting audio within the car

Although acoustics has long been a key automotive design consideration, interestingly enough most improvements to the in-cabin experience have centered on the overall in-vehicle infotainment (IVI) application. According to conventional wisdom,



Mentor, a Siemens business www.mentor.com anil_khanna@mentor.com

TWITTER

@mentor_graphics

LINKEDIN

www.linkedin.com/company/ mentor_graphics

FACEBOOK

www.facebook.com/ MentorGraphicsCorp

GOOGLE PLUS

https://plus.google.com/ +mentorgraphics

YOU TUBE

www.youtube.com/channel/ UC6glMEaanKWD86NEjwbtgfg


whatever worked for a better IVI experience was also deemed good enough for audio. This made sense, since audio and video usually went hand-in-hand. However, the emergence of acoustic-centric applications such as active noise cancellation (ANC), engine sound enhancement (ESE), acoustic vehicle alerting systems (AVAS), and other technologies has more recently led designers to take a much closer look at audio technology on its own.

But first things first: a fundamental component to enabling these new audio applications is the underlying bus technology. A shift from passive to software-based approaches requires an advanced bus infrastructure for the efficient transportation of audio data. The Automotive Audio Bus (A2B) developed by Analog Devices is a relative newcomer, and more importantly the only bus exclusively dedicated to audio.

A2B: Digital audio over lightweight cable

A2B gives carmakers a cost-effective way to deliver multi-channel digital audio, control data, and power, all over the same lightweight unshielded twisted pair (UTP) cable. Cable and assembly costs for A2B systems can be up to 75 percent lower than analog alternatives, and the lighter weight can also enable lower CO2 emissions. A2B also includes system-level diagnostics and compliance with automotive electromagnetic compatibility (EMC), electromagnetic interference (EMI), and electrostatic discharge (ESD) standards. And with deterministic, low-latency performance (50 microseconds) and 50 Mbps bandwidth, A2B is well suited for high-quality audio and other applications, including infotainment and noise cancellation.

A2B-based connectivity delivers many benefits, especially relative to the traditional analog-based networking still in use in the vast majority of vehicles on the road. Thanks to phantom power and a single master/multiple slave line topology that supports daisy chaining of nodes, A2B systems eliminate the need for local component power supplies and control processors to manage software overhead.
As a result, A2B provides an easy and efficient way to link a head unit to an array of speakers and amplifiers around the vehicle in a scalable daisy chain, which is vastly simpler than implementing a high-end sound system with many independent, point-to-point connections.

As is always the case, the popularity of an emerging technology is dependent on, and can also be measured by, the reception of early adopters and other ecosystem players. There are several examples of A2B industry adoption and ecosystem growth, most notably Ford's January 2016 announcement that it will use A2B as its primary infotainment network technology. A few months before that, German communications tech firm Peiker (since acquired by Valeo) announced new A2B digital microphones with transceivers built in, supporting both in-car communication and noise cancellation. And, from an ecosystem perspective, Mentor, a Siemens business, is among the first independent tool vendors providing critical A2B test support, including the A2B Analyzer System – the only third-party development platform engineered to help significantly reduce development time for A2B systems by speeding configuration and functional testing.

Leveraging active noise cancellation

Unlike passive noise cancellation, which uses physical noise dampening materials, active noise cancellation is implemented using DSP techniques. The idea behind active noise cancellation is quite simple – carefully-placed microphones pick up external noise, which is then processed

FIGURE 1 The Automotive Audio Bus (A2B) delivers multi-channel digital audio, control data, and power over a cost-effective, lightweight, unshielded twisted pair (UTP) cable.




and a 180-degree out-of-phase antinoise signal is generated to cancel the undesirable noise. Active noise cancellation is already available in some car models, though it has generally targeted periodic, low-frequency engine noise. Although every engine is unique, its noise behavior is predictable and typically dependent on the engine's performance (RPMs). As a result, engine noise can be modeled to a fair degree of certainty and then subsequently dealt with.

But engine noise is only a part of the overall picture. The second, trickier source of noise comes from the road – broadband in nature, unpredictable, and almost impossible to model. Unlike engine noise, road noise also varies with changes in road surface. Most noise cancellation solutions available today only deal with engine noise and are incapable of handling road noise.

The issue of tackling road noise has gained even more importance with the emergence of electric vehicles (EVs), which feature electric motors instead of gasoline engines. Although EVs produce no engine noise, road and wind noise must still be addressed.

Although fundamentally the premise of cancelling road noise is the same as that of cancelling engine noise, the complexity of the challenge requires additional components to reliably track and cancel noise on a real-time basis. A combination of carefully-placed accelerometers, microphones, and speakers works in conjunction to pick up road vibrations, process the resulting sounds, and then generate the required antinoise directed at the car's occupants. Proprietary, high-performance algorithms ensure fast convergence, resulting in rapid adaptation and responses to noise from changing road surfaces.

An even bigger challenge is how to deploy a cost-effective combination of microphones, accelerometers, hardware, and software to cancel the random broadband road noise. Until now, technical and cost challenges have prevented carmakers from offering broadband noise cancellation.
However, the combination of A2B networking technology, powerful DSPs, off-the-shelf A2B-based components (accelerometers, microphones), and


software IP is bringing the road noise cancellation solution significantly closer to reality. By offering deterministic latency, A2B is well suited to networking microphones, accelerometers, and other components.

Advanced active noise cancellation technologies such as Mentor's broadband XSe Active Noise Control (ANC) solution have been designed to tackle precisely this dual problem of cancelling engine noise and road noise. Using XSe ANC as an example, an advanced algorithm effectively cancels both engine and road noise to create an ambient environment within the car cabin. Quiet zones created around the driver and passengers cover steady-state, dynamic, and non-periodic components of engine, transmission, and road noise without interfering with the enjoyment of music, the utility of audio-based navigation systems, or the sirens of emergency vehicles. High-performance solutions such as XSe ANC enable advanced functionality with minimal hardware components.

“... THE COMBINATION OF A2B NETWORKING TECHNOLOGY, POWERFUL DSPS, OFF-THE-SHELF A2B-BASED COMPONENTS (ACCELEROMETERS, MICROPHONES), AND SOFTWARE IP IS BRINGING THE ROAD NOISE CANCELLATION SOLUTION SIGNIFICANTLY CLOSER TO REALITY.”

Somewhat paradoxically, other emerging audio applications involve sound enhancement for both driver enjoyment and safety. The controlled rumble of a high-end sports car engine, for example, is a big part of that car's signature appeal, which is why that engine sound can be digitally enhanced and piped into the cabin via advanced tools for analyzing the transmission of sound and vibration in a car's cabin. Other sound-generation applications include various acoustic alerts, ranging from chimes that play when the car is started or when parking to other safety alerts, all of which combine to form the car's personality, brand identity, and thus its relationship to drivers and passengers. It goes without saying that it is desirable, or even a requirement, for these multiple audio applications to coexist on the same vehicle. Without an innovative software-based solution, this would not be possible.

Unleashing the promise of software-enabled car audio
With the availability of software-based noise cancellation and enhancement solutions, engineers have one more tool in their arsenal to tackle noise-related problems. But, as referenced, automotive audio system design has until recently been the domain of engineers focused on infotainment and head unit systems, while the challenges of reducing extraneous noise and vibration in a car are traditionally handled by an OEM's noise, vibration, and harshness (NVH) team. Therefore, to gain maximum advantage from the move towards software-based acoustics solutions, carmakers and their suppliers will need to work across traditional organizational boundaries to unleash the true promise of software-enabled audio.
The automobile of the future is changing in profound ways, so it makes sense that audio is finally emerging from the shadows to take its rightful place as both a key enabler of new ideas and a differentiator for carmakers. As these software-enabled solutions start to roll out in production vehicles on a consistent basis, one can expect to see customer satisfaction reflected in better scores in future surveys.

Anil Khanna is Senior Manager at Mentor, a Siemens business, responsible for the automotive audio business line. Khanna is based in Wilsonville, OR.

Embedded Computing Design | Winter 2017

www.embedded-computing.com


EXECUTIVE SPEAKOUT

The Impact of the IoT Demystified
By Cees Links, GM of Qorvo Wireless Connectivity Business Unit; formerly Founder & CEO of GreenPeak Technologies

The Internet of Things (IoT) is a modern-day buzzword with lofty expectations to have a profound impact on society. But what is it, how will we use it, and what will that impact be?

The IoT breakthrough?
Why has such an old concept as the IoT been the center of so much hype in recent years? The first fundamental change was that the Internet became nearly ubiquitous. Initially connecting computers, the Internet now connects homes and buildings. And with the advent of wireless technology (Wi-Fi, LTE), access to the Internet changed from a technology into a commodity. The second fundamental change was essentially Moore's Law rolling along, with smaller, more powerful, and lower cost devices being developed to collect data. And finally, low-power communication technologies were developed that extended the battery life of these devices from days into years, connecting them permanently and maintenance-free to the Internet.

What is holding the IoT back today?
As with many technologies, after a few years of high expectations, the IoT is slowly entering the Valley of Disillusionment, that quiet phase where sobering reality starts kicking in. The IoT is suffering today from a lack of understanding of its true value proposition. At the same time, a plethora of proprietary and open communication standards inhibit interconnectivity, create confusion among consumers and product builders alike, keep product prices high, and delay market growth. On top of all that, large companies seem determined to seek the Holy Grail (and promote their own ecosystems). And that, really, is the crux of the IoT illusion. "Things" sounds so simple. But the IoT is more complex than we anticipated. More complex, but also more promising.

What is the core value of the IoT?
It's all about "making better decisions faster." This motivator drove computers into existence. Does anybody remember how to do bookkeeping without a computer? Or run a manufacturing plant? Making better decisions faster drove the Internet into existence. And making better decisions faster is driving the IoT into existence, too. It will make our personal lives more comfortable, safer, and more secure. The IoT will make the quality of our products better. We will be able to better monitor our environment, and our impact on it. The IoT is not a break from the past; it is a natural progression in making better decisions faster, and a continuing engine for our economic growth and wealth creation – driving out poverty altogether.

The impact of the IoT
So where are we with the IoT? The IoT is more than a smart meter. It is a completely new wave of automation that includes everything from omni-sensing to artificial intelligence, from smartphones to smart homes, and from smart industries to smart cities. It is all about being better informed, about being able to make faster and better-qualified decisions. It is about safety, security, privacy, and integrity. It is also about losing certain jobs and creating new ones. It is about economic growth and wealth creation based on better decision making. We still have a lot to learn (maybe less about technology and more about business models that maximize the value add), but we are in the middle of shaping a better world for the next generations. Maybe a new Golden Age, an enlightened world? We will see, because we can!

Meet Qorvo at CES 2018 to discover our smart home and IoT innovations!

Qorvo • www.qorvo.com


STRATEGIES: SECURITY

Deception networks: Reducing alert fatigue and increasing security through an alternate reality By Brandon Lewis, Technology Editor

The most concerning revelation to come out of the security industry over the past couple of years isn't the Mirai botnet, nor the hacks of Verizon, Yahoo! (before the acquisition), or the Democratic National Committee (DNC), or even the infamous Jeep hack. Instead, it came from security company FireEye's June 2016 Mandiant M-Trends Report, in which it was revealed that the average time between compromise and detection of a cyberattack is 146 days[1].

While this number is unnerving for enterprises of any kind, it's particularly disconcerting for industrial and Internet of Things (IoT) companies that deal in sensitive and/or safety-critical products. 146 days is nearly five months, or almost half a year that advanced persistent threats have to siphon sensitive IP or customer data, propagate into critical systems, and, potentially, do serious physical damage[2].

Assuming, for a moment, that all Internet-connected organizations are responsible enough to employ standard anti-virus (AV) software, firewalls, and other security

information and event management (SIEM) system measures, alert fatigue can be pointed to as a major contributor to the extended dwell time of cyber threats.

Alert fatigue is a phenomenon in which the person(s) responsible for managing an organization's security infrastructure is consistently bombarded with breach notifications to the point that they eventually disregard them. This usually occurs when the majority (or perceived majority) of notifications are actually false positives, or when no context is provided in the alerts from a range of different security tools. The more systems and security tools throughout an organization, the higher the likelihood of alert fatigue.

"Legacy security systems were designed to address complex security problems as they popped up in the wild," says Alton Kizziah, Vice President of Global Managed Services at Kudelski Security. "We created signature-based AV to fight virus attacks, we created intrusion detection systems (IDS) to help defend the perimeter, we created firewalls to control packets, we created the Open Web Application Security Project (OWASP) to protect application weaknesses, and on and on.




"These systems were designed to detect specific security problems, like 'I'm looking for a malware that's trying to reach out to a known-bad IP address,'" he continues. "That's just one step in a chain of many events that are part of a breach, and there might be multiple different steps happening at the same time as an attacker moves.

"What happens is you've got all of these legacy devices looking for different problems, creating alerts, and creating a lot of false-positive alerts, which results in alert fatigue with the guys who actually do the monitoring," Kizziah says. "They're unable to tell a coherent, relevant story that can give context to the administrators of the tools – is it really just this one particular event that's the problem, or is it connected to a greater, larger, more complex attack that's happening to our environment?

"The trouble is that long-lasting breaches are still outpacing all of these technologies that we designed to prevent them in the first place," he says. "None of them really address the needed outcome, which is, 'We don't want an attacker in our environment for months and months.'"

Artificial honey
One alternative to traditional network security measures that has become popular in recent years is the honeypot, a pseudo system with many of the trimmings of a real system that is actually deployed as bait. The idea behind a honeypot is that hackers, under the impression that the honeypot either contains valuable information or can be used to move laterally across a network of devices, will attempt to compromise a fake system that is actually isolated and heavily monitored. Once an attacker attempts to exploit the system, security professionals can take the necessary steps to expel them and protect the rest of the network.

The problem with honeypots, especially against advanced threats, is that they simply aren't real systems, which the savvy attacker will deduce in short order. For example, by its very nature a honeypot probably wouldn't contain much activity or log data in the case of an embedded device, or browsing history in the case of an enterprise PC. Many of the little things that make a real system "real" simply won't be there, raising warning flags for hackers who will just move on to another vector.

Although honeypots haven't proven sweet enough, what they do represent is a step towards pushing alert fatigue back on the hacker, who at least has to stop and assess the network landscape before proceeding with an attack. The security industry nevertheless requires a more sophisticated means of deceiving malicious actors, which it may have found in the form of deception networks.

Sweetening the pot with alternate realities
Like honeypots, deception networks are deployed as part of a real network. Unlike them, they are actually deployed on real devices. Deception networks take the honeypot concept to the extreme, creating fake administrator accounts, applications, and data that reside next to genuine components on the same machine. For those familiar with string theory, a deception network creates a sort of alternate network, an alternate reality that is interspersed with the real network so that hackers can't be sure whether they are attempting to compromise a real component or a deceptive one. At worst this severely limits the ability of an attacker to enter a network and propagate laterally; at best, an attacker attempts to use data from a component that doesn't truly exist to advance their attack, and the deception is triggered.
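To make the trigger mechanism concrete, here is a minimal sketch. All names and the alerting logic are hypothetical illustrations, not Illusive Networks' implementation: decoy shares sit alongside real ones, and because no legitimate workflow references a decoy, any touch of one is a high-confidence signal.

```python
# Hypothetical decoy-share sketch: real and deceptive network shares coexist;
# accessing a decoy yields nothing and raises an alert.
REAL_SHARES = {"\\\\fs01\\engineering", "\\\\fs01\\finance"}
DECOY_SHARES = {"\\\\fs01\\backups_old", "\\\\fs01\\admin_scripts"}

alerts = []  # high-confidence, low-noise alert stream

def access_share(user, share):
    """Route a share access; decoys trigger an alert instead of data."""
    if share in DECOY_SHARES:
        alerts.append((user, share))  # real-time forensic signal
        return None                   # there is nothing behind the decoy
    return f"contents of {share}" if share in REAL_SHARES else None

access_share("alice", "\\\\fs01\\engineering")       # normal work: no alert
access_share("attacker", "\\\\fs01\\admin_scripts")  # lateral probe: alert
print(len(alerts))  # 1
```

The asymmetry is the point: legitimate users never generate decoy alerts, so nearly every alert that does fire is worth investigating.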




"For instance, if you're on an end user's laptop, the real network shares sit right next to the deceptive ones. There are real administrator accounts right next to deceptive ones. So they are in the list and look real," says Kizziah. "Everything about the system is real, except the parts that aren't.

"It becomes a really frustrating endeavor for an attacker because even if they recognize that deception is deployed, it doesn't improve their attack – it actually slows them down. They will constantly second guess whether or not to try a pass-the-hash attack on a particular account because it might not be a real account. If they do and are caught, it's over. When the technology sees an attempted lateral move, that's a pretty clear indicator that something fishy is going on," he explains. "What happens is you have a lot fewer false-positive alerts. Nobody should have access to those accounts. They don't exist. So why are they trying to log into a network share?

"Everything moves in slow motion, and there's no obvious way to separate the reality from the alternate reality," Kizziah continues. "Sometimes I describe it as trying to find a needle in a stack of needles. What's really beautiful about that is that that needle in a stack of needles is the exact paradigm that threat analysts have been struggling with for years. We basically turn the alert fatigue back towards the attacker. Now they have to deal with too much information, where before we had to deal with too much."

To integrate deceptive technology with real systems on a real network, cybersecurity firms like Illusive Networks use machine learning to analyze network attack vectors and strategically place deceptions. Once the deceptions are in place, the technology integrates with endpoint protection and threat monitoring services such as those provided by Kudelski Security to provide real-time forensics and attack mitigation before hackers can move laterally (Figures 1A, 1B, and 1C).
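The reversal Kizziah describes rests on base-rate arithmetic. The illustrative numbers below are not from the article, but they show why even an accurate detector buries analysts in false alarms when genuine intrusions are rare, while a decoy-based alert stream does not have this problem:

```python
# Base-rate sketch of alert fatigue (hypothetical numbers): a 99%-sensitive
# detector with a 1% false-positive rate still produces mostly false alarms
# when malicious events are a tiny fraction of monitored events.
events_per_day = 1_000_000
true_intrusion_rate = 1e-5  # ~10 genuinely malicious events per day
tpr = 0.99                  # detector sensitivity (true-positive rate)
fpr = 0.01                  # detector false-positive rate

intrusions = events_per_day * true_intrusion_rate
true_alerts = intrusions * tpr
false_alerts = (events_per_day - intrusions) * fpr

precision = true_alerts / (true_alerts + false_alerts)
print(round(false_alerts))      # ~10,000 false alarms per day
print(round(precision, 4))      # well under 1% of alerts are real
```

With roughly ten real intrusions hidden among ten thousand daily alarms, disregarding alerts becomes rational behavior, which is exactly the fatigue deception technology aims to push back onto the attacker.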

FIGURES 1A, 1B, and 1C: A deception network, such as those offered by Illusive Networks, deploys artificial components throughout a network stack, including endpoints, applications, and data. Once an attacker attempts to move laterally into or from a deceptive component, or act upon its data, a deception is triggered that alerts security professionals in real time. Figure 1A (top) shows a standard network; Figure 1B shows a network outfitted with deception technology; Figure 1C traces how an attacker attempted to move laterally across a network outfitted with deceptive technology.

Penetrating minds will assume that, because deception technology essentially creates an additional network that resides in and on top of the real one, double the infrastructure and resources are required. In fact, the deceptions are rather breadcrumbs that are pushed out to endpoints from


a central deployment server, and can be scaled up or down based on system requirements. No new infrastructure is needed.

Defense in depth: An inconvenient truth
Taking things back up a level, one obvious shortcoming of the deceptive strategy is that its use means that a hacker must have gained access to a network in the first place. It's designed to prevent lateral breaches that can persist indefinitely and result in catastrophic data loss, theft, or system damage, and to provide security analysts with actionable information about how to respond to attacks. Traditional security measures still need to be applied to prevent attackers from infiltrating the network at all. If you're not deploying a layered defense strategy, Kizziah says, "you're making a mistake."

"I would never recommend that we could supplant AV with deception," he says. "There are too many variables in the corporate world around compliance, and reasons you have to have different technologies. Breaches are going to happen. They're going to be advanced. If you don't have a layered defense you're going to be impacted more by these breaches.

"What we've found is things like advanced endpoint protection capability, deception capabilities, and endpoint response are all very complementary as long as you have a comprehensive threat monitoring and response strategy to handle the output," Kizziah continues. "We look at this as layered defense for the endpoint. What you can prevent, you do. When you can deceive and slow down, you do. And [with deception technology], when a breach is detected you should be able to detect it earlier in the lifecycle than with legacy tools. You can respond faster because you're actually on the endpoints where the breach is happening, so you're responding faster with containment and forensic collection.

"All of those things together strengthen defenses at the actual point of attack," he adds.

A free trial of Illusive Networks' deception technology can be found at www.illusivenetworks.com/product#dem.

References:
1. "FireEye Releases First Mandiant M-Trends EMEA Report." FireEye. Accessed August 11, 2017. https://www.fireeye.com/company/press-releases/2016/fireeye-releases-first-mandiant-mtrends-emea-report.html.
2. "Enhanced Cybersecurity Services: Protecting Critical Infrastructure." Industrial Embedded Systems. October 29, 2013. Accessed August 11, 2017. http://industrial.embedded-computing.com/articles/enhanced-protecting-critical-infrastructure/.



STRATEGIES: SECURITY

Mid-range FPGAs for design and data security: No excuses By Ted Marena, Microsemi Corporation

Security in embedded designs has been a hot topic for a number of years, but security means different things to different people and organizations. In my experience, what one individual considers adequate security requirements can vary dramatically from what another does.

Despite all the awareness and discussion about embedded systems needing higher levels of security, few systems today have a mandated security specification. Unfortunately, security is an afterthought for the majority of design specifications. Many engineers and architects assume that software will secure the system, so they simply need to concern themselves with protecting the IP that goes into their processor or system. However, nothing could be further from the truth.

Design engineers and architects need to implement security features not only in software, but in hardware too. Hardware security can:

›› Protect products from being cloned or overbuilt
›› Protect IP that enables companies to keep differentiating
›› Protect data communications to prevent fraud and keep company brands from being tarnished

If you do not understand how to implement these features, you are not alone. Fortunately, there are options in the form of mid-range-density FPGAs that address modern security requirements. The following addresses the key functions required to make a design significantly more secure.

Breaking down security requirements
The first step is to break down the security requirements of a design into two broad categories: design security and data security.

›› Design security means taking steps to prevent IP from being extracted from silicon, which translates into a requirement for differential power analysis (DPA) countermeasures.
›› Data security involves securing cloud and/or machine-to-machine (M2M) communications, leveraging secure storage for physically unclonable function-based (PUF-based) key generation and a DPA-resistant encryption engine.

Let's look at each category in more detail and provide application examples of how mid-range FPGA architectures can address their requirements.





Design security
Most engineers know that it is important to protect the bitstream of an FPGA from being extracted. The usual method is to encrypt the bitstream with a key. Although keys are an obstacle for hackers, they are not adequate for today's requirements. Why? Because of a technique called DPA. When a device performs operations such as reading a key or decrypting a file based on a key, it gives off electromagnetic signals. These signals can be detected by an inexpensive electromagnetic probe. In a recent test conducted meters away from a design, DPA techniques were shown to be able to determine when a key was being read or accessed. From there, a probe and PC or logic analyzer only need a matter of minutes to find a pattern in the signals and determine what the key is. Once you have the key, you can decrypt the bitstream of the FPGA.

“IN A RECENT TEST CONDUCTED METERS AWAY FROM A DESIGN, DPA TECHNIQUES WERE SHOWN TO BE ABLE TO DETERMINE WHEN A KEY WAS BEING READ OR ACCESSED.”
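A rough sense of why key material leaks this way can be had from a simulated correlation-style analysis, a common DPA variant. Everything below – the random S-box, the Hamming-weight leakage model, the noise level – is a simplified stand-in for real silicon, not the test described above:

```python
# Simulated correlation power analysis (CPA): power/EM consumption is modeled
# as the Hamming weight of an S-box output plus noise; correlating key guesses
# against the "traces" recovers the key byte.
import random

random.seed(1)
SBOX = list(range(256))
random.shuffle(SBOX)  # stand-in for a cipher S-box

def hw(x):
    """Hamming weight: number of set bits (the classic leakage model)."""
    return bin(x).count("1")

SECRET_KEY = 0x3C
plaintexts = [random.randrange(256) for _ in range(500)]
# Simulated measurements: leakage proportional to HW of S-box output + noise
traces = [hw(SBOX[p ^ SECRET_KEY]) + random.gauss(0, 0.5) for p in plaintexts]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Try every key byte; the correct guess correlates best with the traces
best = max(range(256),
           key=lambda g: pearson([hw(SBOX[p ^ g]) for p in plaintexts], traces))
print(hex(best))  # expected to match the secret key, 0x3c
```

With only 500 noisy "measurements," no guess other than the true key produces hypothetical leakage that tracks the traces, which is why countermeasures must break the statistical link between key bits and consumption rather than merely keep the key encrypted.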

Let me repeat that: You can decrypt an encrypted bitstream simply using DPA with an inexpensive probe and a PC. Fortunately, there is a way to eliminate the DPA phenomenon. Designers need to look for FPGAs with DPA countermeasures built into the device.

To date, only a limited number of FPGAs have had this capability built in, and many of these devices were high-end and costly. Now, cost-optimized, mid-range-density FPGAs such as Microsemi's PolarFire family incorporate DPA countermeasures for use in a wide variety of applications. PolarFire FPGAs incorporate design features that address DPA leakage and prevent DPA attacks from compromising the bitstream.

Data security
A growing number of devices are connected to other devices or the cloud, which puts the data security of a system




at risk. Hardware architects need to take responsibility for solving this issue, as software solutions alone are not adequate.

To ensure data communications are secure, the information being sent must be encrypted and decrypted on the receiving side using a specific algorithm and keys. There are many common algorithms (including AES-256, SHA, and ECC), and they must be based on a key to be utilized.

For connection to the cloud, a dual-key strategy known as public key infrastructure (PKI) is required. PKI is based on both public keys and private keys. Every node on the network has a certified public key that has been approved (or signed) by a trusted third party. Every node on the network also has its own private key known only to that node. When sending secure communications, you use the certified public key of the node you are sending data to, as well as your own private key, to encrypt the data. Only the node with the correct public key and corresponding private key will be able to decrypt the data. This is a basic description of how data to the cloud is secured.
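The "your private key plus their public key" pattern described above is closest in spirit to a Diffie-Hellman-style key agreement, which can be sketched with toy parameters. The modulus and generator below are far too small for real use and chosen purely for illustration; production systems rely on vetted curves or moduli and authenticated ciphers:

```python
# Toy Diffie-Hellman sketch of the dual-key idea: each node combines its own
# private key with the peer's certified public key and both arrive at the
# same shared symmetric key. Insecure demo parameters only.
import random

P = 0xFFFFFFFB  # small prime modulus (2**32 - 5); real systems use far larger
G = 5           # public generator

random.seed(42)

def keypair():
    """Return (private key, public key) for one node."""
    priv = random.randrange(2, P - 1)
    return priv, pow(G, priv, P)

a_priv, a_pub = keypair()  # node A's key pair
b_priv, b_pub = keypair()  # node B's key pair

# Each node mixes its own private key with the peer's public key
secret_a = pow(b_pub, a_priv, P)
secret_b = pow(a_pub, b_priv, P)
print(secret_a == secret_b)  # True: both derive the same symmetric key
```

An eavesdropper who sees only the two public values cannot feasibly reproduce the shared secret, which is why protecting the private key (and its generation) in hardware matters so much.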

But there are two important issues that hardware engineers can solve:

1. Does the cryptographic engine (either FPGA logic or a cryptographic processor) have DPA countermeasures built in? If not, the private key can be determined, putting your data communications at risk.
2. How is the private key protected? There are numerous hardware components that can protect a key, but the most secure system is one in which a PUF is designed into the FPGA device. This leverages the unique attributes of each individual silicon die as a sort of biometric identifier of the device. By using a PUF and non-volatile memory (NVM), a key can be stored with the highest level of encryption.

When designs have data communications that must be secured, look to FPGAs that incorporate DPA countermeasures for the fabric and facilities for secure key generation and storage. PolarFire FPGAs are designed to protect private keys and perform completely secure data communications (Figure 1).

All the key building blocks are incorporated on the chip, including a DPA-safe cryptographic processor, PUF, key storage, and RNG. Designers simply need to program the processor to generate a key using the RNG and a specific encryption protocol (such as AES-256); the onboard PUF then provides secure key storage and the crypto processor implements the secure communication. Very little FPGA fabric resource is used to enable the secure communications.

Design and data security: No excuses
As securing devices becomes more challenging, designers need technology building blocks that make their systems inherently secure. Mid-range-density FPGAs are now available that address the emerging security challenges facing embedded engineers, including design security through DPA countermeasures and data security with cryptographic processors. These security solutions also deliver lower power in smaller form factors. No longer is it acceptable for designers to ignore security requirements.

Ted Marena is Director of SoC and FPGA Product Marketing at Microsemi Corporation.

FIGURE 1: Shown here is a functional block diagram of a mid-range-density PolarFire FPGA consisting of a data security processor, physically unclonable function (PUF), random number generator (RNG), and secure non-volatile memory (NVM).



EDITOR’S CHOICE

Editor’s Choice

Calculate cost for a custom IIoT IC (Hint: It’s cheaper than you think) S3 Semiconductors’ free online bill of materials (BOM) calculator provides an inside look into how creating a custom IC for your next Industrial Internet of Things (IIoT) application can be economically viable, even in low to medium volumes. The tool considers variables such as the number of units being manufactured; how long the product will be manufactured; processor characteristics; and whether analog, data converter, and RF components are present in your design. In addition to estimating the cost of developing a custom IC, the BOM calculator provides a guide to break-even volumes. The tool aims to disprove the myth that custom ICs are too expensive for low-volume designs, and demonstrate the benefits of component integration in IIoT systems. The BOM calculator can be found at www.s3semi.com/bom-calculator.
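The break-even logic such a calculator automates is simple to sketch. All figures below are hypothetical placeholders, not S3 Semiconductors' numbers:

```python
# Back-of-envelope custom-IC break-even: one-time NRE cost is recouped through
# a lower per-unit BOM. All dollar figures are hypothetical.
nre_cost = 500_000.0   # one-time custom IC development (NRE) cost, $
discrete_bom = 14.20   # per-unit BOM cost of the discrete-component design, $
custom_unit = 6.90     # per-unit cost of the integrated custom IC, $

saving_per_unit = discrete_bom - custom_unit
break_even_units = nre_cost / saving_per_unit
print(round(break_even_units), "units to recoup the NRE")
```

With these placeholder figures the crossover lands in the tens of thousands of units, which is why the "custom ICs need huge volumes" assumption often fails for IIoT products manufactured over multi-year lifetimes.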

S3 Semiconductors www.s3semi.com/bom-calculator www.embedded-computing.com/news/calculate-cost-for-a-custom-iiot-ic-hint-it-s-cheaper-than-you-think

Code analyzer automatically applies AUTOSAR Adaptive C++14 guidelines The PRQA AUTOSAR Compliance Module is the first commercially available static code analysis tool to automatically apply the AUTOSAR Adaptive Platform’s new C++14 Coding Guidelines. The AUTOSAR Compliance Module is delivered as an extension to the QA-C++ 4.2 static code analyzer. The AUTOSAR Adaptive Platform is aimed at high-performance electronic control units (ECUs) used in fail-operational systems, such as those required by autonomous vehicles. PRQA will present on the new standard at Automotive IQ’s ISO 26262 conference in Düsseldorf, Germany, March 13-16, 2018.

PRQA www.prqa.com www.embedded-computing.com/news/code-analyzer-automatically-applies-autosar-adaptive-c-14-guidelines

Thread Group expands test infrastructure to support influx of interoperable products The Thread Group has expanded its testing and certification infrastructure with the addition of four new labs across the United States, United Kingdom, Germany, and Taiwan. The organization has also designated TÜV Rheinland and 7layers as official authorized testing labs for Thread products, in addition to UL. The increased infrastructure will help ensure interoperability between a growing number of Thread components and products. In addition to the facilities expansion, the organization also released the “Thread Ready” designation for devices that meet the core requirements of the Thread specification but do not require the full scope of certification. The new Nest Detect and Nest Guard products, for example, have been validated as Thread Ready devices.

Thread Group www.threadgroup.org www.embedded-computing.com/news/thread-group-expands-test-infrastructure-to-support-influx-of-interoperable-products



