RTC Magazine


INTERVIEW OF DANNY SHAPIRO, SR. DIRECTOR OF AUTOMOTIVE, NVIDIA
THE GREAT MACHINE VISION IMPACT
DEVELOPER'S GUIDE FOR SELF-DRIVING CARS

Real World Connected Systems Magazine. Produced by Intelligent Systems Source

Vol 18 / No 7 / July 2017

Machines Can See and Think. They Will Take Over Your Driving. Are You Ready?

An RTC-Media Publication RTC MEDIA, LLC




CONTENTS

Real World Connected Systems Magazine. Produced by Intelligent Systems Source

DEPARTMENTS
05 EDITORIAL: The Race for Driverless Cars is Heating Up

1.0: TECHNOLOGIES DRIVING FUTURE CARS
06 1.1: Machine Vision Impacts Future Self-driving Cars, by John W. Koon
08 1.2: The Making of Self-Driving Cars: A Developer's Guide, by Árpád Takács, AImotive
14 1.3: How Neural Networks Are Changing the Advanced Driver Assistance Systems, by Gordon Cooper, Synopsys
22 1.4: Seamless, Multi-user Wi-Fi Connection For Modern Cars, by Richard Barrett, Cypress Semiconductor

2.0: THE GREAT MACHINE VISION IMPACT
28 2.1: The New Face of Machinery, by Peter Thorne, Cambashi
32 2.2: NBASE-T Brings New Bandwidth to Imaging System Design, by Ed Goffin, Pleora Technologies
38 2.3: Putting Eyes on Machines: Embedded Vision 101, by David Olsen and Georgi Stoykov, Renesas Electronics America Inc.

3.0: FUTURE TRANSPORTATION
42 3.1: Transportation in Smart Cities: What Needs to Happen?, by Bryce Johnstone, Imagination Technologies



RTC MAGAZINE

PUBLISHER President John Reardon, johnr@rtc-media.com Vice President Aaron Foellmi, aaronf@rtc-media.com

EDITORIAL Editor-In-Chief John Koon, johnk@rtc-media.com

ART/PRODUCTION Art Director Jim Bell, jimb@rtc-media.com Graphic Designer Hugo Ricardo, hugor@rtc-media.com

ADVERTISING/WEB ADVERTISING Western Regional Sales Manager John Reardon, johnr@rtc-media.com (949) 226-2000


Eastern U.S. and EMEA Sales Manager Ruby Brower, rubyb@rtc-media.com (949) 226-2004

BILLING Controller Cindy Muir, cindym@rtc-media.com (949) 226-2021

TO CONTACT RTC MAGAZINE: Home Office RTC-Media, 940 Calle Negocio, Suite 230, San Clemente, CA 92673 Phone: (949) 226-2000 Fax: (949) 226-2050 Web: www.rtc-media.com Published by RTC-Media Copyright 2017, RTC-Media. Printed in the United States. All rights reserved. All related graphics are trademarks of RTCMedia. All other brand and product names are the property of their holders.



EDITORIAL

The Race for Driverless Cars is Heating Up by John Koon, Editor-In-Chief

From a distance, the Roborace is just another event with fast cars on a race track. Take a closer look, and you will find there is no human driver inside. My heart pounded as I watched the video, worrying that the NVIDIA-based machine driver might malfunction and crash at a speed of 200 mph. The Roborace was introduced last year and is taking place around the world: Buenos Aires, Monaco, Paris, New York and Montreal. Race cars are programmed to compete without human drivers, a first in history. It is a race of algorithms and technology. Why do all the leading technology and cloud companies rush to join the race for driverless cars? They include Microsoft, Google, Intel, NVIDIA, Baidu (the largest search firm in China) and IBM. Future cars represent untapped revenue; some forecast the market to reach $77B by 2035. Technology companies are partnering with automakers and OEMs to get a head start. The road can be long and winding and full of challenges, but the rewards are great. What is the new formula for the future car market? It is autonomous + artificial intelligence + electric + machine vision. To illustrate the point, Intel acquired (August 2017) the Jerusalem-based Mobileye for $15 billion, the largest acquisition of its kind in Israeli history. Mobileye is a maker of cameras and sensors for driverless cars. Intel's partnership with BMW and Mobileye also includes Delphi (May 2017). Additionally, Intel has acquired other companies: Italy-based Yogitech, a chip designer specializing in driverless cars; Itseez, with a focus on machine vision; and Wind River, an IoT company, to achieve its cars-to-cloud strategy. Google and Uber have each already spent billions. Recently, Microsoft and Baidu announced cooperation in developing driverless car technology.

Intel’s partnership with BMW, Mobileye and Delphi targets the cars-to-cloud market. This is part of Intel’s migration strategy from PC.

Microsoft will support Baidu's platform outside China using its Azure cloud. Furthermore, Microsoft plans to provide artificial intelligence, deep neural network and machine learning technologies to support cloud computing. The largest software and silicon companies, Microsoft and Intel, are interested in cars because future cars will consume a lot of data (via 5G), and the cloud business will be the bread and butter not just for Microsoft and Intel but for all other technology companies as well. Microsoft is paying attention to Azure (growing in high double digits) and how it may impact the future of the company. Audi cars now have Intel inside. But NVIDIA is pushing hard. Not only is the NVIDIA GPU supercomputer driving the Roborace; NVIDIA will also be powering the new Audi A8 to achieve Level 3 automation (hands-off but not mind-off).

Its technology provides self-learning capability, so driverless cars will be able to improve their driving skills over time. Recently, IBM partnered with automaker Local Motors on Olli, a Watson-powered autonomous shuttle, to provide autonomous public transportation. Olli will replace human bus drivers and be able to carry on a conversation with its passengers. Mercedes, Clarion and General Motors are in full support of using AI in autonomous driving. The potential growth for future cars is limitless, and this is just the beginning.



1.1 TECHNOLOGIES DRIVING FUTURE CARS

This race car at the Paris Roborace was powered by NVIDIA computers. The Roborace events were introduced last year and are taking place around the world: Buenos Aires, Monaco, Paris, New York and Montreal. Self-driving race cars are programmed to compete without human drivers. This is a first in history, and it is a race of algorithms and technology.

Machine Vision Impacts Future Self-driving Cars – Interview of NVIDIA NVIDIA is the leader in deep learning and GPUs. For the past few years, it has been gaining market momentum and stock value (investment from SoftBank is another proof). Its deep learning technology not only drives cars; it helps self-driving cars improve their skills over time. RTC Magazine's Editor-in-Chief, John Koon, caught up with Danny Shapiro, Senior Director of Automotive, to gain his latest insights on future cars. by John Koon, Editor-In-Chief

1. What is your vision of future self-driving cars? How important is machine vision in making self-driving cars commercially feasible?
Self-driving cars will have an incredibly positive effect on society; access to transportation will be transformed. Autonomous cars will not only redefine the way people commute, giving them hours back each day, but will change how goods are transported. We believe machine vision plays a role in making cars commercially feasible, but it is not the sole answer to an autonomous future. Just like we use our five senses to navigate the world around us, we believe that for cars to better pilot themselves, they should also include other sensors such as radar, lidar, ultrasonic, and HD maps to further augment the execution of an aware self-driving car. Fundamentally, AI is essential to be able to take the data coming from these sensors and interpret it. There is no way that computer vision algorithms can be programmed to account for the near infinite number of scenarios that happen on our roads. But with deep learning, autonomous vehicles can be trained to drive better than humans.


2. What hurdles need to be overcome before fully autonomous vehicles can be achieved? Do you think the 2020 goals are achievable?
Currently the biggest hurdle autonomous technology companies are facing is legislative red tape. State and federal regulators are having a hard time keeping up with the cadence of these new technologies. However, just in the last year alone, there have been leaps and bounds in improvements in coming up with a streamlined plan for the rollout of self-driving cars. NVIDIA recently testified in front of the U.S. Senate Committee on Commerce, Science, and Transportation on the need to implement AI in self-driving cars, and provided guidance on rule making to ensure safe deployment of this vital technology on our roads. Is 2020 achievable? Yes, absolutely. NVIDIA is developing systems to bring fully autonomous cars by 2020 that will be able to operate in specific environments. OEMs such as Audi announced this year that by 2020 they will have Level 4-capable vehicles powered by NVIDIA ready for market deployment.

3. In your opinion, what technologies will be used in self-driving vehicles? Examples include radar, machine vision, deep learning/artificial intelligence, smart sensors, IoT and big data analytics. How does vehicle-to-vehicle technology fit in? What is missing?
Everything you mentioned will play a vital role in the rollout of autonomous vehicles. But we believe what plays one of the biggest roles is deep learning. Through deep learning, the entire suite of sensors will be able to have a much greater understanding of what is happening at any given moment. Deep learning also plays a major role in big data analytics. Information these vehicles are generating, along with smart city information, can be used to improve traffic flow. V2V technology is a nice-to-have capability in a car, but it is not essential. A vehicle must be able to navigate autonomously even before V2V communication is established. Similarly, connectivity to the cloud cannot be a requirement for self-driving. All processing for autonomy must take place on board the vehicle, hence the need for an energy-efficient supercomputer designed for sensor fusion and deep learning.

4. How important is infrastructure, such as smart freeways, to the success of self-driving cars?
Vehicle-to-infrastructure (V2I) will further augment the driving experience; however, given that there are no standard implementations or widespread adoption, this is not a useful solution in the short or even medium term. Self-driving cars need to be self-contained. With a programmable and updateable platform on board, software updates can leverage V2I and V2V data when they are available.

5. What contribution does your company make to the field of self-driving cars?
While sensors play a vital role in the operation of autonomous cars, a powerful computing platform needs to be able to make sense of the information these sensors are generating. The NVIDIA DRIVE PX car computing platform is designed to handle the entire driving pipeline, including sensing, localization and path planning. The platform is designed for deep learning inferencing and is capable of performing 30 trillion operations per second while only consuming 30 watts. In addition, NVIDIA also developed a complete, open software development stack for companies to use when developing their autonomous cars, shuttles, large trucks, and more. Over 225 OEMs, Tier 1s, start-ups, HD mapping companies, and research institutions are currently using our solutions for an autonomous future.

NVIDIA
Santa Clara, CA
(408) 486-2000
www.nvidia.com/drive

Danny Shapiro is Senior Director of Automotive at NVIDIA, focusing on artificial intelligence (AI) solutions for self-driving cars, trucks and shuttles. The NVIDIA automotive team is engaged with over 225 car and truck makers, Tier 1 suppliers, HD mapping companies, sensor companies and startups that are all using the company's DRIVE PX hardware and software platform for autonomous vehicle development and deployment. Danny serves on the advisory boards of the Los Angeles Auto Show, the Connected Car Council and the NVIDIA Foundation, which focuses on computational solutions for cancer research. He holds a Bachelor of Science in electrical engineering and computer science from Princeton University and an MBA from the Haas School of Business at UC Berkeley.



1.2 TECHNOLOGIES DRIVING FUTURE CARS

The Making of Self-Driving Cars: A Developer's Guide Ever since the dawn of automobiles, self-driving cars have been a hot topic for sci-fi fans, tech pioneers and sociologists. History has proven that one of the key contributors to general development and modern civilization is mobility, so it is no exaggeration that a global-scale autonomous transportation system will have an unprecedented impact on our society - changing the way we live, work and travel. by Árpád Takács, Outreach Scientist, AImotive

Human driving is a complex task. We can recognize and understand the environment, plan and re-plan, control and adapt in a fraction of a second. We can silently communicate with the environment while we are driving, follow written and unwritten rules, and heavily rely on our creativity. On the other hand, today's automation systems strongly follow an if–then basis of operation, which is hardly deployable in self-driving cars. We cannot account for every single traffic scenario, or store the look of every car model or pedestrian for better recognition. To bridge this gap between current technological availability and the demand from society and the market, many ideas and prototypes have been introduced over the past few years. Regardless of the technology details and deployability, there has always been a common tool: machine learning, and through that, Artificial Intelligence (AI). Figure 1. When ready for mass production, self-driving cars will be the very first demonstration of AI in safety-critical systems on a global scale. Although it might seem we are planning to trust our lives to AI completely, behind the wheel there will be a lot more than just a couple of bits and bytes learning from an instructor, taking classes on millions of virtual miles. A common approach to solving the problem of self-driving is to analyze human driving, collect tasks and sub-tasks into building blocks, and create a complete environment for self-driving car development, narrowed down to the three main components: algorithms, development tools, and processing hardware.

Algorithms: from raw information to a unified understanding

The first, and possibly the most important, component of self-driving development is the set of algorithms used in various building blocks for solving necessary tasks related to sensor handling, data processing, perception, localization and vehicle control. The ultimate goal at this level is the integration of these blocks into the central software that runs in the car, which poses several engineering challenges. There is a hierarchy among these tasks and subtasks, which can be broken down into three groups: recognition, localization and planning.

Figure 1 One of AImotive’s prototype vehicles circling the streets of Budapest. While testing AI-based algorithms is a complex task, testing licenses are now issued all over the world to self-driving companies.






Figure 2 Raw camera input, AI-based pixel-wise (semantic) segmentation of object classes, and monocular depth estimation. The role of the individual algorithms is to extract as much information as possible about the environment and pass it on to the fusion algorithms.

However, not all of these specifically require AI-based solutions. It is the developers' responsibility and choice to find the right balance between traditional and AI-based algorithms, and, if needed, use a combination of these for the very same problem, such as lane detection or motion planning. The choice of algorithms and their fusion largely depends on the number and types of sensors used on such a platform, which is the main differentiator among developers.

There are no identical prototype platforms among the developer communities, as these rely on information coming from platform-specific combinations such as various cameras, Light Detection and Ranging (LIDAR) units, radars and ultrasonic sensors, or other external or internal devices. Historically, relying on LIDARs as primary sensors has been a standard way to go, simultaneously solving recognition and localization tasks through point-cloud matching and analyzing. However, human drivers rely on their vision 99% of the time while driving; therefore a camera-first approach is growing more popular by the day. With today's algorithms and processing capabilities, we are able to extract not only the class, but the distance, size, orientation and speed of objects and landmarks using only cameras that are taking over the primary role of radars or still expensive LIDARs.

Once the raw information from the sensors is at hand, algorithms help us make sense of it all. On the recognition layer, low-level sensor fusion is needed for fusing raw information from various sources, then multiple detection and classification algorithms provide a basis for high-level sensor fusion - an association of object instances to each other over multiple time frames. The output of the recognition layer is an abstract environment model, containing all relevant information about the surroundings. Figure 2.

The next layer is responsible for the absolute localization of the vehicle globally, including routing, mapping, odometry and local positioning. Knowing the exact, absolute location of the car is an essential component for motion planning, where HD- and sparse feature maps provide useful information about the route the car is about to take.

While recognition and localization algorithms have reached a very mature state in most applications, planning and decision making are relatively new fields for developers. Naturally, as the planner modules can only rely on the available information, the quality and reliability of this layer largely depends on the input from the recognition and localization layers. That said, only an integrated, full-stack system development is feasible in the future deployment of self-driving cars, one that has a deep understanding of what each building block and layer requires, acquires and provides.

In this context, the planning layer is responsible for understanding the abstract scenario, object tracking and behavior prediction, local trajectory planning and actuator control. To give an example: this is the layer which understands that there is a slow vehicle in our lane, explores free space for an overtaking maneuver, decides if there is enough time before exiting the highway through localization, and calculates an optimal trajectory to be followed. While it all sounds quite simple through this example, it still poses one of the largest challenges in self-driving car development: the car has to carry out this maneuver in any driving scenario, with its decisions affecting the behavior of other participants of the traffic and vice versa, playing a multi-agent game, where every player needs to win.

Figure 3 A real-time, photorealistic simulation environment, modeling the famous Las Vegas Strip. Simulators provide a structured training environment and a reproducible testing platform for individual algorithms, as well as for the complete system testing.
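The recognition, localization and planning hierarchy described above maps naturally onto a modular software pipeline. The sketch below is a minimal, hypothetical illustration of that decomposition in Python; the class and function names are invented for clarity and do not correspond to AImotive's actual software stack.

```python
# Minimal sketch of the recognition -> localization -> planning hierarchy
# described in the article. All names are hypothetical placeholders; a real
# system replaces each stub with trained models and production code.
from dataclasses import dataclass
from typing import List


@dataclass
class DetectedObject:
    object_class: str      # e.g. "car", "pedestrian"
    distance_m: float
    speed_mps: float


@dataclass
class EnvironmentModel:
    objects: List[DetectedObject]   # output of the recognition layer


class RecognitionLayer:
    def fuse(self, camera_frames, radar_hits, lidar_points) -> EnvironmentModel:
        """Low-level sensor fusion plus detection/classification (stubbed)."""
        detections = []  # a trained CNN or classical detector would fill this
        return EnvironmentModel(objects=detections)


class LocalizationLayer:
    def locate(self, env: EnvironmentModel, hd_map) -> tuple:
        """Match landmarks against an HD map to get a global pose (stubbed)."""
        return (0.0, 0.0, 0.0)  # x, y, heading


class PlanningLayer:
    def plan(self, env: EnvironmentModel, pose, route):
        """Behavior prediction and local trajectory planning (stubbed)."""
        return {"steering": 0.0, "throttle": 0.1, "brake": 0.0}


def drive_step(sensors, hd_map, route):
    recognition, localization, planning = RecognitionLayer(), LocalizationLayer(), PlanningLayer()
    env = recognition.fuse(sensors["camera"], sensors["radar"], sensors["lidar"])
    pose = localization.locate(env, hd_map)
    return planning.plan(env, pose, route)
```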





Tools from the drawer: how to train your AI

Today, AI is a general tool for solving various self-driving related tasks; however, its everyday use has been narrowed down to just a couple of fields. Sophisticated AI-based image recognition techniques using deep convolutional networks (DCNs) proved themselves against traditional computer vision (CV) algorithms, while neural networks (NNs) also provide superior performance in decision making and planning through recurrent network structures. A combination of these structures into a vast black box network and letting the car learn how to drive in a virtual environment is today referred to as end-to-end learning.

Either way, one thing is common for all use cases: vast amounts of training data, both positive and negative examples, are needed. In order to let such a system enter the roads, evolving from a prototype of limited technological readiness to a verified fail-safe vehicle, a structured set of development tools should be provided to aid the algorithms and the software. These tools should account for data handling, including data collection, annotation (data labeling), augmented data generation, pre- and post-processing, and sensor calibration. There is also a need, however, for constant algorithm support by flexible training environments for the various AI algorithms, specialized frameworks for the optimization of neural network architectures and the previously mentioned high-level sensor fusion.

Once algorithms are trained, individual and complex component testing is required to meet automotive standards for safety and reliability, which requires objective measures of precision, recall or false rejection rates, setting a demand for benchmarking and verification tools for the very specific problem of self-driving. The complexity of the components and building blocks, and the vast variety of possible driving scenarios, does not allow developers to do thorough field testing. This gives rise to the market of complex, photorealistic simulation environments and open-sourced computer games. These platforms not only allow us to test functionalities and reproduce scenes, but also provide a training environment for motion planning modules on maneuvering and accident simulation. Figure 3.

Hardware: running in real-time

The downside of using AI-based deep learning (DL) algorithms is the relatively high computational capacity required. We are in the phase of hardware development where the utilized neural network architectures still need to be downsized and optimized for real-time inference (data processing), and as a trade-off, the precision and reliability of algorithms suffer.

Furthermore, the only widely commercially available technology for running such algorithms is provided by graphics processing units (GPUs), but these are general processing hardware not inherently optimized for self-driving specific networks. As limited processing capabilities are now posing a bottleneck in the productization of these vehicles, in answer to the demand a new era has started where chip providers are rethinking chip design to focus on the hardware acceleration of NN inference, which will ultimately lead to an increased performance density and allow automotive safety integrity level (ASIL) compliance. Figure 4.

The transparency of the self-driving ecosystem is crucial: the three components—algorithms, tools and hardware—cannot be separated. This requires the simultaneous development of these components, and the industry is striving for this structured approach. Otherwise, what remains is just building blocks without application, and an unsatisfied technological demand.

Figure 4 One of AImotive's FPGA evaluation kits, running AI inference on low-power computing hardware. Self-driving technology is in urgent need of customized, application-specific hardware for neural network inference, aiming for production level.

Author Bio:
Árpád Takács is an outreach scientist for AImotive. AImotive is the leader in AI-powered motion, and the first company that will bring Level 5 self-driving technology with a camera-first sensor approach to the global market. Árpád's fields of expertise include analytical mechanics, control engineering, surgical robotics and machine learning. Since 2013, Árpád has served as a research assistant at the Antal Bejczy Center for Intelligent Robotics at Óbuda University. In 2016, he joined the R&D team of the Austrian Center for Medical Innovation and Technology. Árpád received his mechatronics and mechanical engineering modeling degree from the Budapest University of Technology and Economics.
www.aimotive.com





1.3 TECHNOLOGIES DRIVING FUTURE CARS

How Neural Networks Are Changing the Advanced Driver Assistance Systems Embedded convolutional neural networks (CNNs) now provide the performance needed to enable real-time analysis of the streaming video from multiple cameras on a car, and to determine what to react to and what to ignore. It will change the future of the Advanced Driver Assistance Systems (ADAS). by Gordon Cooper, Embedded Vision Product Marketing Manager, Synopsys

With the increase in autonomous and semi-autonomous vehicles, the role of embedded vision has never been greater. Embedded vision gives an automobile a set of eyes, in the form of multiple cameras and image sensors, and the neural networks behind the vision are critical for the automobile to interpret content from those images and react accordingly. To accomplish this complex set of functions, embedded vision processors must have algorithms to run on them (based on neural networks), be hardware-optimized for performance while achieving low power and small area, and have robust tools to program the hardware efficiently. The significant automotive safety improvements of the past (e.g., shatter-resistant glass, three-point seatbelts, airbags) were passive safety measures designed to minimize damage during an accident. We now have technology that can actively help the driver avoid crashing in the first place. Advanced Driver Assistance Systems (ADAS) are behind the semi-autonomous features we see today, and will help autonomous vehicles become an everyday reality. Blind spot detection can alert a driver as he or she tries to move into an occupied lane. Lane departure warning and Lane Keep Aid alert the driver if the car is drifting outside its lane and actively steer the car back into its own lane. Pedestrian detection notifies the driver that pedestrians are in front of or behind the car, and Automatic Emergency Braking applies the brakes to avoid an accident or pedestrian injury. As ADAS features are combined, we get closer to autonomous vehicles—all enabled by convolutional neural networks (CNNs) and high-performance vision processing.

Figure 1 Cameras, enabled by high-performance vision processors, can "see" if objects are not in the expected place.



Auto manufacturers are including more cameras in their cars, as shown in Figure 1. A front-facing camera can detect pedestrians or other obstacles and, with the right algorithms, assist the driver in braking. A rear-facing camera – mandatory in the United States for most new vehicles starting in 2018 – can save lives by alerting the driver to objects behind the car, out of the driver's field of view. A camera in the car's cockpit facing the driver can identify and alert for distracted driving. And most recently, adding four to six additional cameras can provide a 360-degree view around the car.

Vision Processors + CNN for Object Detection

Since the driver is already facing forward, a front-facing camera may seem unnecessary. However, a front-facing camera that is consistently faster than the driver in detecting and alerting for obstacles is very valuable. While an ADAS system can physically react faster than a human driver, it needs embedded vision to provide real-time analysis of the streaming video and know what to react to. Vision processors are based on heterogeneous processing units. That means the programming tasks are divided into processing units with different strengths. Most of the code will be written using C or C++ for a traditional 32-bit scalar processor, which provides an easy-to-program processor. The vector DSP unit will perform most of the computations, because its very large instruction word can handle a lot of parallel computations for pixel processing of each incoming image. Detecting a pedestrian in front of a car is part of a broad class of "object detection." For each object to be detected, traditional computer vision algorithms were hand-crafted. Examples of algorithms used for detection include Viola-Jones and, more recently, Histogram of Oriented Gradients (HoG). The HoG algorithm looks at the edge directions within an image to try to describe objects. HoG was considered state-of-the-art for pedestrian detection as late as 2014.

Emergence of Neural Networks for Object Detection

CNNs are organized as a set of layers of artificial neurons, each of which undertakes a series of operations and communicates its results to adjacent layers. Each type of layer offers different functions, e.g., input layers which take in the image data, output layers which deliver the specified results (such as recognition of objects in an image), and one or more hidden layers between the input and output which help refine the network's answers. Although the concept of neural networks, which are computer systems modeled after the brain, has been around for a long time, only recently have semiconductors achieved the processor performance to make them a practical reality. In 2012, a CNN-based entry into the annual ImageNet competition showed a significant improvement in accuracy in the task of image classification over traditional computer vision algorithms. Because of the improved accuracy, the use of neural network-based techniques for image classification, detection and recognition has been gaining momentum ever since.

The important breakthrough of deep neural networks is that object detection no longer has to be a hand-crafted coding exercise. Deep neural networks allow features to be learned automatically from training examples. A neural network is considered to be "deep" if it has an input and output layer and at least one hidden middle layer. Each node is calculated from the weighted inputs from multiple nodes in the previous layer. CNNs are the current state-of-the-art for efficiently implementing deep neural networks for vision. CNNs are more efficient because they reuse a lot of weights across the image.





Figure 2 DesignWare EV6x Embedded Vision Processors include scalar, vector and CNN processing units for both pre- and post-processing

Figure 3 Components required for graph training. The process is for the machine to look at the initial graphs and learn. At the end of the process, the machine will have a set of graphs in its brain for future reference.

Early CNNs in the embedded space were performed using a GPU or using the vector DSP portion of a vision processor. However, it's helpful to look at the task performed in terms of three different heterogeneous processing units. Early implementations of CNNs in hardware had a limited number of multiply-accumulator (MAC) units. For example, Synopsys's EV5x, the industry's first programmable and configurable vision processor IP core, implemented a CNN engine with 64 MACs. Running at 500 MHz, the EV5x could produce 32 GMACs/s or 64 GOPs/s of performance (a multiply-accumulator performs two operations in one instruction). That was not enough performance to process an entire 1MP (1280 x 1024) frame or image. However, it was enough processing power to perform a CNN on a portion of the image (say, a 64x64 pixel patch). To process the entire image, a two-step process for pedestrian detection was needed. The vector DSP would perform a computationally intensive Region of Interest (ROI) algorithm on each incoming image of the video stream. ROI identifies candidates that could be a pedestrian using a sliding window approach (ruling out, for example, portions of the sky). Those "pedestrian" patches were then processed by the CNN to determine whether each was in fact a pedestrian. CNN-based pedestrian detection solutions have been shown to have better accuracy than algorithms like HoG and, perhaps more importantly, it is easier to retrain a CNN to look for a bicycle than it is to write a new hand-crafted algorithm to detect a bicycle instead of a pedestrian.
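The two-step flow described above (a cheap region-of-interest pass followed by CNN classification of candidate patches) can be sketched in a few lines. This is a simplified, hypothetical illustration only: the interest heuristic, window size and classifier stub are placeholders, not the actual vector-DSP/CNN partitioning used in an EV5x design.

```python
# Toy sketch of two-step pedestrian detection: a sliding-window ROI pass
# proposes candidate 64x64 patches, and a (stubbed) CNN classifies each one.
import numpy as np


def propose_rois(frame: np.ndarray, win: int = 64, stride: int = 32):
    """Slide a window over the frame and keep 'interesting' patches."""
    h, w = frame.shape[:2]
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch = frame[y:y + win, x:x + win]
            # Placeholder interest test: enough horizontal gradient energy
            # (rules out uniform regions such as sky or plain road surface).
            edges = np.abs(np.diff(patch.astype(np.float32), axis=1)).mean()
            if edges > 8.0:
                yield (x, y, patch)


def classify_patch(patch: np.ndarray) -> float:
    """Stand-in for the CNN inference step; returns P(pedestrian)."""
    return 0.0  # a trained network would run here


def detect_pedestrians(frame: np.ndarray, threshold: float = 0.5):
    return [(x, y) for x, y, patch in propose_rois(frame)
            if classify_patch(patch) >= threshold]


if __name__ == "__main__":
    dummy = np.random.randint(0, 255, (1024, 1280), dtype=np.uint8)  # 1MP grayscale
    print(len(detect_pedestrians(dummy)), "candidate detections")
```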




Larger CNNs for Whole Frame Object Detection

As embedded CNNs become more powerful, they no longer are restricted to processing patches of the incoming image. Synopsys’s latest vision processor, the EV6x, includes a CNN engine with 880 MACs – a significant performance leap compared to its predecessor. Running at 800MHz, this produces (880 x .8) = 704 GMACs/s or about 1400 GOPs/s. That performance is enough to process an entire 1MP image using CNN. The vector DSP is still valuable for pre-processing the images (e.g., reformatting and pyramiding) and performing post-processing tasks like non-maximum suppression (NMS). As shown in Figure 2, the EV6x still has scalar, vector and CNN units for heterogeneous processing. It was also designed with multicore features that allow it to easily scale to multiple vision cores. The benefit of processing the entire image frame is that CNN can be trained to detect multiple objects. Now, instead of just finding a pedestrian, the CNN graph can be trained to find a bicycle, other automobiles, trucks, etc. To do that with an algorithm like HoG would require hand-crafting the algorithm for each new object type.
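The throughput figures quoted here follow directly from MAC count times clock frequency, with each multiply-accumulate counted as two operations. A quick check of the arithmetic:

```python
# Peak-throughput arithmetic used in the text: MACs * clock = GMAC/s,
# and each multiply-accumulate counts as two operations (GOPS = 2 * GMAC/s).
def peak(macs, clock_ghz):
    gmacs = macs * clock_ghz
    return gmacs, 2 * gmacs

for name, macs, clk in [("EV5x", 64, 0.5), ("EV6x", 880, 0.8)]:
    gmacs, gops = peak(macs, clk)
    print(f"{name}: {gmacs:.0f} GMAC/s = {gops:.0f} GOPS")   # 32/64 and 704/1408
```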

Training and Deploying CNNs

As mentioned earlier, a CNN is not programmed; it is trained. A deep learning framework, like Caffe or TensorFlow, will use large data sets of images to train the CNN graph – refining coefficients over multiple iterations – to detect specific features in the image. Figure 3 shows the key components for CNN graph training, where the training phase uses banks of GPUs in the cloud for the significant amount of processing required. The deployment – or "inference" – phase is executed on the embedded system. Development tools, such as Synopsys's MetaWare EV Toolkit, take the 32-bit floating point weights, or coefficients, output from the training phase and scale them to a fixed-point format. The goal is to use the smallest bit resolution that still produces equivalent accuracy compared to the 32-bit floating point output. Fewer bits in a multiply-accumulator mean less power required to calculate the CNN and a smaller die area (leading to lower cost) for the embedded solution. Based on Synopsys calculations, a 10-bit or higher resolution is needed to assure the same accuracy as the 32-bit Caffe output without graph retraining. The MetaWare EV tools take the weights and the graph topology (the structure of the convolutional, non-linearity, pooling, and fully connected layers that exist in a CNN graph) and map them into the hardware for the dedicated CNN engine. Assuming there are no special graph layers, the CNN is now "programmed" to detect the objects that it's been trained to detect. To keep the size small, the CNN engine is optimized to execute key CNN features such as 3x3 and 5x5 matrix multiplies, but not so optimized that it becomes a hard-wired solution. It's important to be programmable to maintain flexibility. As CNNs continue to evolve – new layer techniques or pooling methods, for example – the vector DSP can play another important role in the vision processing. Since the vector DSP and CNN engine are closely coupled in the Synopsys EV6x, it is easy to dispatch tasks from the CNN to the vector DSP as needed. The OpenVX runtime, incorporated into the MetaWare EV tools, makes sure those tasks are scheduled with other vector DSP processing requirements. The vector DSP future-proofs the CNN engine. Figure 4 shows the inputs and outputs of an embedded vision processor. The streaming images from the car's camera are fed into the CNN engine that is preconfigured with the graph and weights. The output of the CNN is a classification of the contents of the image.
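The float-to-fixed-point step can be illustrated with a generic post-training quantization sketch. This is not the MetaWare EV Toolkit's algorithm; it simply shows, assuming symmetric linear scaling per tensor, how weights are mapped to an n-bit integer grid and why 10 bits or more can track the 32-bit floating-point values closely.

```python
# Generic post-training weight quantization sketch (symmetric, per-tensor).
import numpy as np


def quantize(weights, bits):
    """Scale float weights to signed n-bit integers plus a scale factor."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 511 for 10 bits
    max_abs = float(np.abs(weights).max())
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale


def dequantize(q, scale):
    return q.astype(np.float32) * scale


if __name__ == "__main__":
    w = np.random.randn(64, 64, 3, 3).astype(np.float32)  # mock conv-layer weights
    for bits in (8, 10, 12):
        q, s = quantize(w, bits)
        err = float(np.abs(dequantize(q, s) - w).max())
        print(f"{bits}-bit fixed point: max weight error = {err:.5f}")
```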

Figure 4 Inputs and outputs of an embedded vision processor with an embedded convolutional neural network (CNN), which includes a mapping tool, vector units and a CNN engine.





Scene Segmentation and Navigation

Up to now, we’ve discussed object classification of pedestrians (or bicycles or cars or trucks) that can be used for collision avoidance – an ADAS example. CNNs with high enough performance can also be used for scene segmentation – the identifying of all the pixels in an image. The goal for scene segmentation is less about identifying specific pixels than it is to identify the boundaries between types of objects in the scene. Knowing where the road is compared to other objects in the scene provides a great benefit to a car’s navigation and brings us one step closer to autonomous vehicles. One scene segmentation example, running on a DesignWare EV61’s CNN, segmented the streaming images using 11 categories of objects (road, sky, buildings, pedestrians, etc.). With five channels of 1920x1080 images as input, the CNN, running at 800MHz, achieved 18fps. Scene segmentation is difficult for CNNs that don’t have the horsepower to process multiple instances of the full images (frames).

Future Requirements for Vision Processors in Automotive Vision

Vision processing solutions will need to scale as future demands call for more processing performance. A 1MP image is a reasonable resolution for existing cameras in automobiles. However, more cameras are being added to the car, and the demand is growing from 1MP to 3MP or even 8MP cameras. The greater a camera's resolution, the farther away an object can be detected: there are simply more bits to analyze to determine if an object, such as a pedestrian, is ahead. The camera frame rate (FPS) is also important; the higher the frame rate, the lower the latency and the more stopping distance is available. For a 1MP RGB camera running at 15 FPS, that would be 1280x1024 pixels/frame times 15 frames/second times three colors, or about 59M bytes/second to process. An 8MP image at 30fps will require 3264x2448 pixels/frame times 30 frames/second times three colors, or about 720M bytes/second. This extra processing performance can't come with a disproportionate spike in power or die area. Automobiles are consumer items that face constant price pressures, so low power is very important. Vision processor architectures have to be optimized for power and yet still retain programmability.
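The bandwidth estimates above are simple arithmetic (pixels per frame times frame rate times three color bytes) and can be reproduced directly:

```python
# Reproduces the article's camera-bandwidth estimates:
# pixels/frame * frames/second * 3 color bytes = bytes/second to process.
def camera_bandwidth(width, height, fps, bytes_per_pixel=3):
    return width * height * fps * bytes_per_pixel

one_mp = camera_bandwidth(1280, 1024, 15)    # ~59 MB/s
eight_mp = camera_bandwidth(3264, 2448, 30)  # ~720 MB/s
print(f"1MP @ 15 fps : {one_mp / 1e6:.0f} MB/s")
print(f"8MP @ 30 fps : {eight_mp / 1e6:.0f} MB/s")
```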

Conclusion

As the requirements for ADAS in automotive applications continue to grow, embedded vision and deep learning technology will keep up. Object detection has evolved from small-scale identification to full scenes with every pixel accounted for, and flexibility will continue to be as important as performance, power and area. Synopsys' DesignWare EV6x Embedded Vision Processors are fully programmable to address new graphs as they are developed, and offer high performance in a small area with highly efficient power. Author Bio: Gordon Cooper is a Product Marketing Manager for Synopsys' Embedded Vision Processor family. Gordon brings more than 20 years of experience in digital design, field applications and marketing at Raytheon, Analog Devices, and NXP to the role. Gordon also served as a Commanding Officer in the US Army Reserve, including a tour in Kosovo. Gordon holds a Bachelor of Science degree in Electrical Engineering from Clarkson University. www.synopsys.com




1.4 TECHNOLOGIES DRIVING FUTURE CARS

Seamless, Multi-user Wi-Fi Connection For Modern Cars Smart over-the-air upgrades and display sharing represent two key emerging applications for Wi-Fi connectivity in cars. Automotive designers must plan for concurrent operation of multiple applications over the 2.4GHz and 5GHz radio frequencies. by Richard Barrett, Automotive Senior Product Marketing Engineer, Cypress Semiconductor

People today have become used to having Internet access every minute of every day. At home, at the workplace, or when out on foot or on public transport, the Internet is instantly accessible to anyone carrying a smartphone. Accessing the Internet in your car, however, is often challenging and cumbersome. Car manufacturers are still trying to find the best way to integrate Internet connectivity into the vehicle's user interfaces, such as the infotainment system. They also need to provide a fast, robust wireless Internet connection for the smart devices that the driver and passengers bring with them into the car. Many consumers value (and are willing to pay a premium for) the ability to use communication, productivity and entertainment applications within their vehicle. By integrating wireless Internet access into the vehicle's user interfaces, automotive OEMs can enable the driver and passengers to safely use many Internet applications and functions while the vehicle is in motion. However, this requires more than just providing reliable, high-speed Wi-Fi access. Car Wi-Fi needs to be able to service multiple users seamlessly and simultaneously. It also needs to robustly manage system functions that utilize Wi-Fi bandwidth. This article describes the challenges of managing concurrent wireless connections and applications, as well as outlines the essential features of hardware architectures that overcome these problems.

Applications for In-vehicle Wi-Fi Networks

Modern vehicles already offer Bluetooth wireless connectivity as a standard feature for hands-free calling as well as audio/music streaming from mobile phones. Many infotainment systems can also synchronize with phones to support data services such as phone book access, messaging, and vehicle information uploads.


Many applications, however, require wireless Internet connectivity between devices and a router or access point. For this, Wi-Fi is the universally accepted standard technology and so is the primary wireless technology under consideration for this role in cars. To provide Internet connectivity to multiple end-user devices, vehicle manufacturers are integrating LTE (mobile phone network) modems to provide a pipe to the Internet. A Wi-Fi access point then provides a connection to smart devices within the vehicle. On the highway, Internet traffic will be routed via the LTE modem. In some locations, such as shopping mall parking lots or in city centers, the in-vehicle Wi-Fi may be able to connect to the Internet via public Wi-Fi access points. Just like in mobile phones, offloading data from the LTE network via Wi-Fi can substantially reduce total Internet subscription costs as well as help manage cellular traffic. It is important to note that connecting user devices is not the only function that an in-vehicle Wi-Fi network will support. The high data transfer rates and standards-based connectivity that a Wi-Fi network provides enable a wide range of exciting new applications, including:

• Smart Over-The-Air (OTA) Upgrades: Just like any other complex system, vehicles can offer improved efficiency, security, and functionality through regular updates. The ability to update over-the-air eliminates the need for vehicle manufacturers to bring vehicles into a service center for updates. This substantially reduces the cost of critical updates. It also enables automotive OEMs to distribute such updates far more quickly than traditionally possible. OTA also opens the door to functional and informational updates that can improve the driving experience.

• Display Sharing: Services such as Apple's CarPlay, Android Auto, and MirrorLink mirror the driver's smartphone screen and functions to the vehicle's infotainment display (see Figure 1). This allows the driver to use a smartphone's applications and content while driving, accessing them through the touchscreen display or via voice commands. CarPlay, for instance, lets the driver use Apple's Siri voice recognition software to control smartphone applications such as text messaging so he or she can keep both hands on the steering wheel and eyes on the road. Today, display sharing is a feature provided in high-end cars, or as an expensive option in mid-range vehicles using USB. With Wi-Fi in the car, display sharing can be deployed at the low end of manufacturers' ranges as well. This can further reduce vehicle cost by enabling OEMs to implement functions such as navigation through the driver's smartphone. In some applications, the OEM will be able to avoid integrating a dedicated navigation system into the vehicle itself, saving cost and simplifying the car's design. Figure 1.

Applications built around the in-vehicle Wi-Fi network will exhibit important differences compared to typical home uses.

Figure 1 The 2018 Honda Odyssey’s display includes support for Android Auto, as well as Apple’s CarPlay.



For example, at home a Wi-Fi router primarily serves as a pipe to the Internet for multiple devices. An in-vehicle Wi-Fi router will have to simultaneously:
• Provide Internet access
• Serve as a high-bandwidth link between smartphones and the head unit for CarPlay, Android Auto or similar services
• Support concurrent applications such as in-car remotes, controls, cameras, and speaker systems
• Check the manufacturer's cloud servers for software updates
• Download updates when necessary

Maintaining concurrency and quality of service across applications poses a challenge for developers. Traditional Wi-Fi applications typically support a single application use case, and chips are designed and cost-optimized for this use case. Consider the challenge of supporting a live stream of an important football game on a tablet to a passenger in the rear seat. The viewing experience cannot tolerate periodic content buffering while the Wi-Fi network is busy mirroring mapping and navigation content from the driver's smartphone to the head unit. Figure 2.

Traditional Wi-Fi chips use a single MAC to switch across channels and bands. They become quite limited when multiple applications and multiple bands are required. The solution is to provide two separate Wi-Fi connections simultaneously in different frequency bands from the same Wi-Fi access point chip/router. For example, the CYW89359 radio system-on-chip for automotive systems from Cypress Semiconductor supports two separate MACs to maintain Wi-Fi flexibility and performance through two independent, dedicated networks, also known as Real Simultaneous Dual Band (RSDB). With a dual-band, dual-MAC radio, the system can support simultaneous operation at both 2.4GHz and 5GHz. Each radio has a separate Media Access Controller (MAC) and physical layer interface (PHY) to its own antenna (see Figure 2). This means that in a typical automotive implementation, a vehicle can run a display-sharing service on the 5GHz radio while providing an uninterrupted Internet connection for user devices on the 2.4GHz radio.
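To make the concurrency argument concrete, the toy model below assigns traffic streams to the two independent radios and checks each band's aggregate load. The stream list, capacities and assignment policy are illustrative assumptions only; they are not a description of the CYW89359's internal scheduling.

```python
# Toy model of dual-band, dual-MAC (RSDB) operation: latency-sensitive
# mirroring traffic goes to 5 GHz, bulk/Internet traffic to 2.4 GHz.
# Capacities and stream rates are illustrative numbers only.
BAND_CAPACITY_MBPS = {"2.4GHz": 40, "5GHz": 200}

STREAMS = [
    {"name": "CarPlay display mirroring", "mbps": 30, "latency_sensitive": True},
    {"name": "Rear-seat video stream",    "mbps": 8,  "latency_sensitive": False},
    {"name": "OTA update download",       "mbps": 20, "latency_sensitive": False},
    {"name": "Cloud telematics",          "mbps": 1,  "latency_sensitive": False},
]

def assign(streams):
    load = {band: 0 for band in BAND_CAPACITY_MBPS}
    plan = {}
    for s in streams:
        band = "5GHz" if s["latency_sensitive"] else "2.4GHz"
        plan[s["name"]] = band
        load[band] += s["mbps"]
    return plan, load

plan, load = assign(STREAMS)
for name, band in plan.items():
    print(f"{name:28s} -> {band}")
for band, used in load.items():
    status = "OK" if used <= BAND_CAPACITY_MBPS[band] else "OVERSUBSCRIBED"
    print(f"{band}: {used}/{BAND_CAPACITY_MBPS[band]} Mbps ({status})")
```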

Coexistence

This problem of concurrency extends beyond the provisioning of Wi-Fi connectivity to include 2.4 GHz Bluetooth connections as well. In the scenario outlined above, a second passenger may be conducting a voice call over a Bluetooth audio link. In point of fact, the Bluetooth radio is active even when no device is paired to it since a Bluetooth host is continually advertising itself by broadcasting an ‘I’m available’ message to all devices in range. Different manufacturers implement Bluetooth advertising in different ways. Reducing the advertising duty cycle saves power, but risks extending the time before a Bluetooth end user device is recognized by the host and paired. Many car manufacturers tend to maintain a 100% Bluetooth advertising cycle for the best user experience. This was appropriate when Bluetooth was the only 2.4 GHz wireless technology in the car. With Car Wi-Fi, the effect of collocating a 100% advertising cycle Bluetooth radio saturates the 2.4 GHz band. This in turn greatly limits the use of 2.4G Wi-Fi if shared on the same antenna. For this reason, the most robust implementation will provide the Bluetooth radio its own dedicated RF path and antenna. Thus the host’s advertising broadcasts can be transmitted via one antenna while the 2.4GHz and 5GHz Wi-Fi radios operate via separate antennas. This provides for concurrent Bluetooth and Wi-Fi operation without interruption.

Securing the Internet interface

By installing a Wi-Fi radio into cars, manufacturers provide a communications channel capable of carrying large over-the-air software updates. While this has tremendous value to both the manufacturer and the vehicle owner, it also exposes the vehicle to the risk of a malicious attack, in the same way that networked personal computers are vulnerable to hackers.

Figure 2 Radio SoCs like the Cypress CYW89359 integrate multiple Wi-Fi and Bluetooth radios to simplify providing seamless Wi-Fi to multiple devices and systems simultaneously.





The security of a connected car is a systems issue that includes all "pipes" that access the vehicle. These pipes include Wi-Fi, Bluetooth, LTE, and even hard-wired connections. Thus, securing the Wi-Fi channel is an important part of a complete security strategy for the car. Any Wi-Fi chip embedded in the vehicle must support the latest, proven standards for encryption and authentication. The CYW89359, for example, supports security technologies such as:
• WPA and WPA2 for authentication
• The Chinese Wireless Authentication and Privacy Infrastructure (WAPI) standard
• Advanced Encryption Standard (AES)
• Temporal Key Integrity Protocol (TKIP) for encryption

With the proper security capabilities, vehicles can be as secure as a user's home network. This article has described the most likely use cases for an in-vehicle Wi-Fi network. It is clear from this description that any implementation of Wi-Fi in a passenger car needs to provide for seamless, uninterrupted, and concurrent operation of multiple applications over the 2.4GHz and 5GHz radio frequencies. It must also support Internet connectivity for end-user devices, display sharing services, and Bluetooth applications.


To achieve this, automotive OEMs must implement wireless connectivity in a manner that integrates connectivity with the car as a whole and offers simultaneous operation of separate communications streams over the 2.4GHz and 5GHz Wi-Fi channels and over the Bluetooth 2.4GHz channel.

Author Bio: Richard Barrett is an industry veteran with over 30 years in technology. He has been driving WLAN & Bluetooth technology innovation into automotive and mobility markets for the past 16 years. He is part of the automotive wireless technology team at Cypress Semiconductor, coming over as part of Cypress' recent acquisition of the wireless IoT business of Broadcom, where he had worked since 2004. Currently, he is focused on bringing the latest wireless semiconductor technology to vehicle infotainment, telematics and V2X applications, as automobiles become an integral part of the Internet of Things. Barrett holds degrees in physics, mathematics and philosophy, as well as a Masters in Engineering Management from The George Washington University. www.cypress.com




2.1 THE GREAT MACHINE VISION IMPACT

The New Face of Machinery There are new ways of interacting with connected products. Why build instrumentation and controls into machines if every user will have a tablet or phone? Just run an app to see the displays and buttons, and operate the machine. Manufacturers will change their approach to development, operations and service. by Peter Thorne, Director, Cambashi

Smartphones as controllers I remember feeling mildly alarmed during a 2012 research interview with a medical equipment designer. The project was to estimate the potential cost savings of using the electronics and display of smart phones as part of the control system. The idea was for every user to dock their phone into the equipment. The design study was looking at user identification, login, and privacy. My instant reaction was hygiene since this is medical equipment. Also, would there be enough staff with phones to operate all the machines? Then the security gorilla reared its head - how could anyone be confident the phones were free of malware? Then also in 2012, I first became aware of Ecomove’s Qbeak electric vehicle design. At that time, it used a similar concept. The driver docks their phone into the car, and the phone becomes the instrument cluster, sat-nav, and the infotainment system. I don’t remember feeling alarmed by the Qbeak. This

was a few years ago, and the phone did not control the brakes or steering!

New IoT interaction with products

The growth of technologies around the Internet of Things has made this kind of idea just one part of a whole host of new ways of interacting with all kinds of products. Communication with a connected product can be both ways - in and out. The communication can be with the product itself, and/or with its digital twin, and other variations to try out ‘what-if’ scenarios. Cloud-connected products can be accessed from any Internet access point. The interaction can include any or all of the sensor readings and control settings. Data sources and systems external to the product can be fed into the interaction. For example:
• in a production machine, visibility of customer orders helps
• for agricultural machines, crop yield histories help farmers to optimize their fertilizer application
• product sensor readings and cloud-based analytics enable predictive maintenance - the technician arrives with the right spare part just before the problem results in unplanned downtime

So who needs those dials and switches?

If remote control is possible, then what’s the point in having a connected product with displays and instruments for local control? Why not remove these expensive components? The connectivity will allow any authorized user with the right app on their phone or tablet to stand beside the machine and use the app to check readings and adjust controls. And the software that provides this capability may offer more than you expect - for example, review of recent control inputs and sensor readings.

…and rewrite existing business models

This view is just the beginning. Taking the visible controls and displays away from a product triggers the question “… who is monitoring and controlling this machine?” This is where your engineering initiative can help develop your organization’s business model. The new control concept makes it easy to see that your own company, or a third party, could manage and control the product - for example, from a central service center. Your organization could use these possibilities to move from selling products, to selling the use - or even outcomes - of using these products.

Add a touch of augmented reality

Augmented reality (AR) technologies add information to a live video of a product. The video feed could come from:
• a camera built in to the machine
• a camera installed so that it has a view of several machines
• the camera on an operator’s phone or tablet
The value comes from breakthroughs like, for example, the ability to display an X-ray of the product, which can be used to highlight faulty components. In some use-cases, there’s not even any need for the product itself! Why should a distributor tie up capital in a showroom full of machines? Why not markers in place of the machines, and an AR application that provides a viewport for your customers to walk around and study a detailed product image from all angles? Since it’s AR, they could see alternative options and configurations, and call up specifications all at the touch of a button (or screen).

The need to change development, operations, and service

With barriers of distance and location eliminated, people, other machines, and external systems can observe a connected product (and its digital twin) and respond in new ways. Product developers for machinery have considered this – along with ever-present cost reduction – for product functionality and service. The whole product lifecycle needs to be considered now. What could your machine do to make itself easier to make, test, buy, configure, install, learn-to-use, and operate? You’ve probably run many initiatives focused on the design-to-manufacturing interface, from early days of developing the manufacturing concept, to creating the process, ramping up to volume, and managing the continuous change to handle manufacturing and field feedback. So the product development process is probably multi-disciplinary, bringing development, manufacturing (and perhaps even service engineers) together to improve decision-making by taking a broad view of the requirements. Of course, when you remove the switches and displays from your machine, you are making some of your manufacturing colleagues’ tasks simpler - fewer parts, fewer display, switch and button cut-outs in the exterior panels … so generally simpler production.

The scope gets bigger, again

Removing product switches and displays makes some things simpler, but not enough to turn the tide of growing complexity. Handling the transition to a smart product is tough because of the multiple technologies involved:
• Mechanical
• Electrical
• Electronic
• Software
Trade-off decisions are now even more complex, so much so that a systems-engineering discipline may be needed to avoid a committee vote for every decision! A smart connected product, sold with operation or service agreements, means a much stronger connection of the engineering team to the product in operation. Instead of being largely isolated in the old ‘development’ and ‘production’ parts of the organization, data streams from the product provide a high-fidelity view of the product in operation. This will help calibrate simulations. The new service team will be fiercer than any customer in feedback of any problems. Figure 1.

New life in the field

Product function and performance depend on all of its components (including the software), as well as the capabilities of the connected back-end systems. So, development engineers (and, of course, the sales and marketing teams) have a new method of providing new capabilities: update the software (and remember to update the as-maintained records).

Caught in the dataflow?

Imagine engineering teams getting caught out by the volume, frequency, scope and detail of these new dataflows - and we haven’t even mentioned software configuration and support for



resellers wanting to demonstrate the new capabilities, or coordinating a new software baseline with production and test. Fortunately, for most design and manufacturing organizations, this is familiar territory, given that engineering dataflow and processes have been getting more and more complicated for decades, for a range of reasons including:
• distributed development teams
• global supply chains
• gaining regulatory approvals
Software from the Product Lifecycle Management (PLM) stable has provided the tools needed to manage data and manage workflows. PLM can handle the new dataflow.

The new engineering software battlegrounds

The transition of smart connected products from the special case (NASA has been building smart connected products for decades) to more widespread adoption is a shift in the tectonic plates of the engineering software landscape. There are also loads of opportunities for competing engineering software vendors to gain an edge over their rivals. Think of:
Agile systems definition: Agile methods in software development are “just good engineering.” However, they often lack visibility and control for a complex supply chain. Configuration management, product line engineering and platform architectures all offer partial answers, but smart connected products will create demand for new agile systems definition tools to support concept and early-stage architecture development, capable of driving consistent use of the many early-stage simulations product architects will need.
ALM or PLM or both? In software development, Application Lifecycle Management (ALM) tools play the role that PLM plays for the physical parts of a product. So how can integrated

software/hardware teams manage their work? One way is to separate out ‘management’ of everything into a higher level function that supports access control, versions, workflows, baselines, variants, dependencies … everything excluding the content of the object being managed. Others compete with this concept by creating integrated environments - the Integrated Development Environment (IDE) used in software development is an example - in which authoring and test tools are included, so the result manages the content as well as the status of the managed objects. The BoM boundaries. When talking about product definition, the problem has always been “Which Bill?” As designed, as planned, as manufactured, as installed, as maintained - they all have a claim. PLM has been secure in control of the engineering parts list. ERP has managed BoM (bill of materials) for production scheduling. Similarly, PLM has control of development of the manufacturing process, and the manufacturing process plan for each product, sometimes called the ‘Bill-of-Process.” But ERP providers can get involved as this gets translated into shop floor documentation and electronic work instructions. Adding embedded software as a component of the product will disrupt this battle. Service and Over-the-Air update: Most service organizations will want to make sure that engineering has no more than read-only access to products in the field. Similarly, service organizations will want control over the applications that handle data (especially alarms) from in-service products. The service organization will want their process of escalation and adherence to service-level-agreements, to take priority over engineering’s desire to identify root causes. This is a new and interesting area, because PLM systems already contain all the

Figure 1 The new face of machinery dataflow: both the remote service team and product sensors will provide feedback to development, production and in-service teams. This will yield better quality overall.



configuration dependencies. Could PLM be extended so that these dependencies can drive service decisions in the field? Or do services need their own as-maintained BoM and configurator rules?
Test management: Some design methods start with ‘how can this capability be tested?’ It is also possible to parameterize tests, and link these parameters to product parameters - so the final choice of the product parameter in effect generates the test specification. Look at how these concepts can manage and automate test creation. Testing on the master version, along with tests once the software is loaded onto the smart product, also needs to be considered.
Simulation: Simulation technologies have grown to handle multi-physics and interconnected sub-systems; software is a new domain to handle, and is critical to smart product performance. The simulation battleground for engineering software vendors is active on many fronts, including:
• simulation data management
• the practicality of flexible ways of enabling hardware (and software) “-in-the-loop” as the various prototypes of electronics, sensors and actuators become available
• the feedback of actual test and product performance to calibrate and improve simulation models
• making simulation accessible to a wider range of engineers
In addition, as the role of the digital twin of a product becomes larger, there will be more demand for simulation to support product operation decisions.


Conclusion

Getting used to a product with no visible means of control is just the start. Security, Internet access, the likely need to replace controllers with new generations of electronics during the lifetime of a machine - these are just some of the new factors for product developers to think about. As with previous new technologies, engineering processes and dataflow will adapt. For PLM vendors with ALM capability, this is a time of opportunity - the information their technology holds about a product now has even more value in manufacturing, as well as for operation and maintenance. But ERP vendors will point out that their systems help match these processes to costs, and that is often the message budget holders want to hear.

Author Bio:
Peter Thorne is the Director for research analysis and consulting company Cambashi. He focuses on addressing the business needs of engineering and manufacturing organizations through information and communications technology. Peter has 30 years of experience, holding development, marketing and management positions for both user and vendor organizations. Prior to joining Cambashi, he spent seven years as head of the UK arm of a global IT vendor’s Engineering Systems Business Unit. He has a master’s degree in Natural Sciences and Computer Science from Cambridge University.

www.cambashi.com



2.2 THE GREAT MACHINE VISION IMPACT

NBASE-T Brings New Bandwidth to Imaging System Design When faced with a challenge, we often seek advice from those who have successfully overcome similar hurdles to find out what they discovered or changed. Technology is no different, with designers often looking to adjacent markets for solutions to help solve a shared problem. The vision industry’s GigE Vision standard has created innovative solutions. by Ed Goffin, Marketing Manager, Pleora Technologies

In the vision market, advances from the telecommunications and networking market are driving continuing innovation that makes imaging technology more accessible and easier to use across a widening range of applications. One of the most significant “game changers” for the vision industry was the adoption of Ethernet. The vision industry’s GigE Vision standard, first ratified in 2006, borrowed from established networking standards to regulate the real-time transfer of low latency video over an Ethernet interface. Previously, designers were limited by legacy interface standards, adapted ill-fitted consumer or broadcast technologies, or were burdened by the costs of developing their own proprietary solution. Migrating to an Ethernet-based interface solution, designers solved key challenges related to cable length, multicasting, and component costs; paving the way for wide-scale adoption of real-time video analysis in industrial automation and inspection applications. As these imaging systems have become more complex, and real-time video analysis is increasingly adopted for

Figure 1 With NBASE-T technology, system designers can leverage inexpensive, long-distance cabling to lower system costs and simplify installation and maintenance.


sophisticated medical, security, and transportation applications, designers face a significant new challenge related to bandwidth. These fully networked, high-resolution multi-image source vision applications outputting millions of pixels of data for real-time processing are quickly outpacing the capabilities of 1 Gbps infrastructure. The vision market is again looking to the telecom and networking industries to help solve this capacity crunch, with new NBASE-T™ technology solving the bandwidth challenge without requiring a rip-and-replace network overhaul.
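A quick back-of-the-envelope calculation shows why 1 Gbps infrastructure runs out of headroom so quickly for uncompressed video. The Python sketch below computes the raw link rate a camera needs and compares it with the twisted-pair rates discussed in this article; the camera parameters and the 10% protocol-overhead factor are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope link budget for an uncompressed video stream.
# Camera parameters and the ~10% overhead factor are illustrative assumptions.

LINK_RATES_GBPS = {"1000BASE-T": 1.0, "2.5GBASE-T": 2.5, "5GBASE-T": 5.0, "10GBASE-T": 10.0}

def required_gbps(width, height, bits_per_pixel, fps, overhead=1.1):
    """Raw payload rate plus roughly 10% protocol overhead, in Gbit/s."""
    return width * height * bits_per_pixel * fps * overhead / 1e9

if __name__ == "__main__":
    # Hypothetical 5-megapixel, 8-bit monochrome camera at 60 frames/s.
    need = required_gbps(2448, 2048, 8, 60)
    print(f"Stream needs about {need:.2f} Gbit/s")
    for name, rate in LINK_RATES_GBPS.items():
        verdict = "fits within" if need <= rate else "exceeds"
        print(f"  {name}: stream {verdict} the {rate} Gbit/s link")
```

With the parameters above the stream needs roughly 2.6 Gbit/s, comfortably inside a 5GBASE-T link but beyond both 1000BASE-T and 2.5GBASE-T.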

Introducing NBASE-T

The NBASE-T specification defines a new type of Ethernet signaling that boosts the speed of twisted-pair cabling well beyond its designed limit of 1 Gbps to support 2.5 and 5 Gbps speeds at distances up to 100 meters. The specification supports autonegotiation between the new NBASE-T rates, slower 1 Gbps rates, or – if the network infrastructure supports it – 10 Gbps. The specification is governed by the NBASE-T Alliance™, a consortium of over 40 companies representing all major facets of networking infrastructure, including the vision market. Alliance members are focused on encouraging the widespread use and deployment of 2.5G and 5G Ethernet through promotion of the IEEE 802.3bz standard and testing and compliance programs to facilitate the development and deployment of interoperable products. NBASE-T technology was initially developed to help existing campus networks meet new bandwidth demands created by data-hungry mobile devices, Internet of Things (IoT) applications, high-definition video streaming and teleconferencing, and quality of service demands for multiple users. With new 802.11ac wireless access points aggregating up to 5 Gbps of throughput, designers struggled to find a cost-effective solution to connect access points and local networks. By boosting the bandwidth capability of the large installed base of Cat5e and Cat6 cabling, NBASE-T solutions enable users to accelerate their networks in the most cost-effective, least disruptive manner. Beyond solving capacity challenges for wireless local area networks, the technology is also being used to connect client and desktop PCs to Ethernet switches, network attached storage devices to wired network infrastructure, and gateways for cable and telco triple-play voice, video, and data services.

NBASE-T and Vision Systems

NBASE-T technology offers a natural evolution for high-performance imaging thanks to its bandwidth support, low-cost cabling, and compatibility with the GigE Vision standard. With GigE Vision over NBASE-T and IEEE 802.3, designers can transmit uncompressed images at throughputs up to 5 Gbps over low-cost Cat 5e copper cabling. The extended-reach, flexible, and field-terminated cabling can be easily routed through systems to ease installation and maintenance. One of the benefits of the GigE Vision standard is that it is agnostic to the physical layer. This has enabled manufacturers to create 10 GigE and 802.11 wireless interface solutions that communicate using the GigE Vision standard. Similarly, designers can create NBASE-T imaging devices and vision systems that are natively compatible with GigE Vision compliant software. An NBASE-T network interface card (NIC) and a Gigabit Ethernet NIC are treated the same by Windows, Linux, and other operating systems. This means existing GigE Vision-compliant software and software development kits (SDKs) are compatible with NBASE-T without any modifications. System-level, PHY, and component products compatible with the NBASE-T specification are already shipping, with more products in development scheduled for release, including GigE Vision over NBASE-T embedded video interface solutions for X-ray panels and imaging devices.

New Bandwidth for Industrial Imaging Applications

NBASE-T technology promises to help imaging system manufacturers and designers meet increasing bandwidth requirements, while taking advantage of existing cabling in retrofit upgrades and less expensive, field-terminated cabling in new installations. For example, designers can upgrade quality inspection systems to increase throughputs while employing GigE Vision over NBASE-T solutions to transmit higher bandwidth video over installed copper cabling. Video can be multicast from multiple imaging sources to reduce computing and component costs in distributed and pipeline processing systems. Figure 1.

GigE Vision over NBASE-T and Medical Imaging

Figure 2 Embedded hardware solutions allow designers to easily integrate GigE Vision over NBASE-T connectivity into digital flat panel detectors.

Figure 3 Images from a digital FPD and lamp head camera are converted to GigE Vision over NBASE-T and multicast to an operating room dashboard and computing platforms used for image processing, storage, and monitoring in a control room.

While real-time video is driving beneficial changes in how healthcare is delivered, medical imaging systems represent a significant investment for hospitals, both in terms of initial capital costs and ongoing maintenance. As medical imaging applications multiply – from image-guided surgery to diagnostic systems – GigE Vision over NBASE-T allows equipment manufacturers to more economically and efficiently meet higher bandwidth video requirements over existing or low-cost infrastructure. Digital radiography, which uses digital X-ray sensors instead of traditional film, was one of the first markets to embrace imaging technologies developed for industrial vision system networking. Migrating to GigE Vision-enabled digital flat panel



detectors (FPDs), system designers can more easily transfer images between Ethernet networked devices and processing units used to enhance, analyze, and display images. Numerous manufacturers have integrated GigE Vision interface hardware into FPDs that fit into existing systems as a direct digital drop-in replacement for film-based panels. GigE Vision over NBASE-T provides a straightforward upgrade path for these manufacturers, enabling the design of next-generation higher bandwidth FPDs for networked and multi-panel radiography applications. Figure 2. By boosting the bandwidth capabilities of low-cost, extended-reach Cat 5e cabling for a fully networked medical imaging application, processing and analysis equipment can be located outside the sterile operating room. This reduces the cost of sterilizing equipment, lowers the risk of patient infection, and allows data to be easily shared across multiple departments. One of the key advantages of GigE-based distributed network architectures is the ability to integrate previously isolated image sources and patient data onto a common network and aggregate the information to a single dashboard. In an operating room, for example, the single-screen dashboard displays real-time patient data from different imaging devices and systems. The surgeon can easily switch between imaging sources, such as white light and fluoroscopic cameras and pre-operative and real-time images, without configuring hardware or software. The image feed from a lamp head camera can also be converted into the same GigE Vision-compliant image stream for easy networking with other imaging sources. Figure 3. At the transport layer, the imaging device sends only one copy of the data to a network switch. The Ethernet switch replicates the data for distribution to displays and processing platforms. This ensures video distribution doesn’t impact server performance. Leveraging Ethernet’s multicast capabilities, display and

processing functions can be distributed from a single device to multiple devices to help ensure reliability. Per-frame metadata, such as a precise timestamp of image acquisition and sensor settings, is transmitted with the images over the Ethernet link for easy integration with DICOM-compliant software and hardware. Advances in high-bandwidth imaging transport are also helping reduce radiation doses for patients. This is especially beneficial in fluoroscopy, which provides real-time X-ray images of a patient’s anatomy using radiation exposure over time. The process, however, resulted in a greater cumulative radiation exposure. Innovative fluoroscopy systems minimize a patient’s exposure by using multiple moving X-ray sources to irradiate tissue from numerous incremental angles in just seconds. Traditional interfaces would be uneconomical and too cumbersome for this application. Figure 4.
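The replication described above is standard IP multicast at work: the source sends one copy, the switch fans it out, and any interested endpoint joins the group. The sketch below is a generic multicast receiver using only Python's standard library; the group address and port are arbitrary examples, and a real GigE Vision (GVSP) stream carries its own packet format on top of UDP, normally handled by a vendor SDK rather than hand-written socket code.

```python
# Minimal IP multicast receiver sketch (standard library only).
# Group address and port are placeholders for illustration.
import socket
import struct

GROUP, PORT = "239.192.0.1", 5000

def open_multicast_receiver(group=GROUP, port=PORT):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # Join the group: the kernel issues an IGMP report so upstream
    # switches know to forward this stream to us.
    mreq = struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

if __name__ == "__main__":
    sock = open_multicast_receiver()
    data, sender = sock.recvfrom(65535)   # one datagram of the shared stream
    print(f"received {len(data)} bytes from {sender}")
```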

Continuing Evolution

NBASE-T technology joins a growing list of recent technology advances, including GigE, 10 GigE, USB 3.0, and wireless, that are now playing a key role in machine vision. For imaging system manufacturers, these new capabilities are helping simplify design, lower costs, and enhance performance for traditional machine vision applications, while supporting the migration of vision expertise into new markets.

Author Bio:
Ed Goffin is the marketing manager with Pleora Technologies, a leading provider of video interfaces for real-time medical, security & defense, and machine vision applications. Ed has worked in the technology industry for 20 years, and has managed marketing, corporate communications and investor relations for telecommunications, semiconductor, and video companies.

www.pleora.com

Figure 4 Higher bandwidth GigE Vision over NBASE-T interfaces enable the easier design of multi-panel X-ray systems.





ADVERTORIAL

“The company got half the performance at more than twice the price it would have paid with SkyScale.”

High Performance Cloud Computing: A Better Approach

The Deep Learning Future

Google CEO Sundar Pichai has called deep learning “a core, transformative way by which we’re rethinking how we’re doing everything.” Deep learning is here – and it is changing everything. Tomorrow’s disruptive technologies and applications are being driven by deep learning. These advancements have already revolutionized autonomous cars, genome-based personalized medicine, fraud prevention, energy conservation, earthquake detection, deep space exploration, and many more life-enhancing pursuits. All made possible by exponential speeds, machine learning algorithms at the center of AI, and really smart people.

Deep Learning Requirements

There are challenges. Deep learning and the associated massive data sets require high-performance computing and storage. NVIDIA Tesla P100 GPUs deliver up to 50X greater performance than traditional data center platforms, and GPU usage outside traditional graphics acceleration is projected to grow by 5X over the next five years. For some time, the industry has viewed Tesla P100s as the benchmark for high-performance GPU-accelerated computing. New offerings from AMD and IBM are being released, as well as the imminent availability of NVIDIA’s own Volta GPU. Custom FPGAs and ASICs are also getting attention among the deep learning crowd. Nevertheless, high performance computing comes at a price.

The Traditional Path to Deep Learning

This level of computer performance requires significant capital investment, with systems starting at $150,000 US for a simple 8 GPU node like the Volta GPU-enabled DGX-1. A small GPU cluster of several multi-GPU systems costs upwards of $1M US,


Table 1: A major cloud provider compared with SkyScale
Technology: two generations removed | newest versions & updates
Performance: 50% of latest tech | world’s fastest HPC cloud
Resources: shared / virtual | dedicated servers, GPUs, & SSD storage
Security: as secure as shared resources allow | world-class physical & cyber security
Pricing: elastic, changes with demand without notice | fixed, no surprises
Configuration: complicated | simple interface
Communication: online form or email | phone call with an engineer
Commitment: two-minute shut-down notice | customer has total control

and larger clusters can run several million dollars each. On-demand, virtualized cloud computing offered by Amazon, Google and Microsoft optimistically promises ubiquitous data crunching: access to vast, scalable resources for everyone. Yet, with the high cost of cutting-edge technology, the major cloud services aren’t offering the latest-generation multi-GPU solutions and clusters to all users, virtualized or not. It is understandable that companies feel held captive to capitalizing the equipment they need. Forecasts for deep learning software and hardware services alone are on a long-term 55-60% CAGR (compound annual growth rate). There is a better approach.

A Better Approach

There is a movement to address the shortage of on-demand, state-of-the-art deep learning cloud computing. SkyScale offers world-class cloud-based, dedicated multi-GPU hardware platforms and fast SSD storage for lease to customers desiring the fastest performance available. SkyScale builds, configures, and manages high-reliability, dedicated systems in secure and strategic facilities.

Why a Dedicated and Secure Environment is Critical

Virtualization is a key security and productivity concern. Virtualized environments allocate resources of single nodes to multiple users managed by the system. Servers, storage, even individual GPU modules all get shared. In this scenario, security and the integrity of a job that’s running can be compromised. Here’s a quick case study: Case Study (True Story) Company X initiated HPC service using one of the major cloud computing service providers. The HPC technology was the best they offered, but was not the latest generation. Configuration was complicated and under supported. Time was of the essence so Company X continued forward despite the deficiencies. Thirty days into the project they were hit with large unexpected bills due to the elastic and demand-based pricing structure of

the service. It was a virtualized service. When more users log into the system, the pricing rises without the user’s knowledge. Virtualization can also lead to users being locked out of the system. Company X was kicked off the service with minutes’ notice while their program was running, so available resources could be shared by all comers and revenue could be maximized by the cloud supplier. The end result: the company got half the performance at more than twice the price it would have paid with SkyScale. (See Table 1 for comparison.) SkyScale was founded by the leading manufacturer of super-reliable, ultra-fast multi-GPU accelerators and storage. They create no-holds-barred computers for government contractors, oil and gas exploration, financial analysis, product and software development, radar and sonar defense, genomics, drug development, medical imaging, and many more. Whether you intend to use an HPC environment online or are looking to purchase a complete solution, SkyScale provides a quick and economical path toward a solution.

How much more deep learning does it take to choose SkyScale?

Whether you’re a developer in a small, cash constrained company needing to get results quickly enough to keep up with the frenetic pace of your team, or a manager in a large company with its own data center struggling with maintenance and constant upgrades, SkyScale has created ultra-fast, secure solutions to address any situation, allowing you to focus on no-compromise results, while minimizing capital equipment investment.

www.SkyScale.com | (888) 236-5454 sales@SkyScale.com


2.3 THE GREAT MACHINE VISION IMPACT

Putting Eyes on Machines: Embedded Vision 101 With the right microprocessor under the hood, embedded vision capabilities can easily be added to industrial and consumer products today. Modern vision-based techniques for motion detection and feature identification add significant value for many customers. We have come a long way from the days of the “electric eye.” But how did we get here? by David Olsen, Senior Manager of Product Marketing, and Georgi Stoykov, Senior Firmware Engineer, Renesas Electronics America

It All Started with the “Electric Eye”

Embedded computer vision means something very different today than it did when the “electric eye” was the vanguard of vision-processing technology. Still, that’s where it all began. Going as far back as the 1930s, simple photodetectors were used to automate car counting on freeways (Figure 1). Today, optical trip wires are still used in banks and jewelry boutiques, and motion-sensitive floodlights line the shelves of most hardware stores. Nonetheless, it is modern software algorithms and computing power that sets contemporary embedded vision systems apart from their predecessors. Because of the valuable real-time “intelligence” that cameras provide, more and more manufacturers are integrating them into their products. Figure 1.

Figure 1 “Autos are Counted by Electric Eyes” (Source: Popular Science, January 1937)


Key Concepts in Embedded Vision

Key concepts in embedded vision include image acquisition, image processing, and software design frameworks. ARM® Cortex®-based microcontrollers (MCUs) and microprocessors (MPUs) are an excellent fit for this arena, and many semiconductor companies are hoping to grab hold of the market. There are several software frameworks available that facilitate the processing of video on graphics processing units (GPUs), central processing units (CPUs), and hardware accelerators, like the ARM NEON coprocessor engine. These include the open source computer vision library (OpenCV) and deep learning frameworks like Caffe and Google’s TensorFlow. Tasks like age and gender estimation, eye tracking, and facial expression prediction can all be performed on entry-level MPUs, while face and object identification are readily handled by higher-end devices.
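As a concrete illustration of the kind of task these frameworks make accessible, the sketch below runs OpenCV's bundled Haar cascade face detector over a still image. The image path is a placeholder, and the frame rates achievable on a given MCU or MPU will of course vary with resolution and clock speed.

```python
# Minimal face-detection sketch using OpenCV's bundled Haar cascade.
# "test.jpg" is a placeholder image path.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("test.jpg")
if img is None:
    raise SystemExit("could not read test.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Multi-scale detection: the classifier is swept over an image pyramid.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces.jpg", img)
print(f"detected {len(faces)} face(s)")
```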


Figure 2 Examples of applications for CMOS cameras suitable for Renesas RZ/G MPUs (Source: Renesas Electronics America Inc.)

Camera-Enhanced Industrial and Consumer Products

With the growth of the Internet of Things, CMOS camera-based embedded applications are becoming more commonplace. Many businesses use CMOS cameras to read QR codes, which let the user access company inventory data in real time or quickly complete a transaction. Likewise, cameras in vending machines can gather detailed consumer usage data to improve supply chain management. In security equipment, camera modules increase safety and reduce theft and fraud. With a single high-end MPU today, smart fridge manufacturers can integrate embedded vision functionality with other value-adding capabilities, like Wi-Fi connectivity and sophisticated graphical user interfaces (GUI), which dramatically elevate the end-user experience compared with traditional solutions. Figure 2.
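To make the QR-code use case concrete, here is a minimal sketch using OpenCV's built-in QR detector. The image path is a placeholder; an industrial reader would typically decode frames from a live camera stream instead of a file.

```python
# Minimal QR-code reading sketch with OpenCV's built-in detector.
# "label.png" is a placeholder path standing in for a camera frame.
import cv2

img = cv2.imread("label.png")
if img is None:
    raise SystemExit("could not read label.png")

detector = cv2.QRCodeDetector()
# Returns the decoded text, the corner points of the code, and a
# rectified ("straightened") image of the code itself.
text, points, straight = detector.detectAndDecode(img)
if text:
    print("decoded payload:", text)   # e.g. an inventory item ID
else:
    print("no QR code found")
```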

Motion Detection

There are a number of motion-detection techniques that are made possible through computer vision. For example, background subtraction compares incoming frames to known background images, temporal difference detection checks for differences between two successive video frames, and optical flow is an advanced temporal difference detection technique that interpolates or extrapolates motion using a displacement vector – which contains direction, gradient, and magnitude information – for every pixel in every frame. Background subtraction utilizes relatively simple mathematical computations and — depending on resolution — can run on ARM M-class CPUs. This technique is only effective in relatively stationary environments. Temporal difference detection is good for more dynamic environments and can also potentially run on MCUs, but higher-end MPUs can handle higher-resolution images at faster frame rates. Edge blurring is an example technique that leverages temporal difference detection to dampen accidental motion triggering from background noise or camera vibration. Optical flow processing imposes a higher load on a CPU, so it typically demands GHz-class ARM application-class processors.

Feature Detection

Feature detection is frequently performed using calculation-based techniques, template matching, or cascade classification. Edge detection is one well-known example of a calculation-based technique in which discontinuities in image brightness are used to isolate the boundaries of objects. This process facilitates image segmentation within a video frame and can reduce the amount of data that needs to be evaluated at each step of a large computation. Common edge detection algorithms include Sobel, Canny, Prewitt, Roberts, and fuzzy logic. Template matching, by contrast, matches small parts of an image to a pre-defined template by using a similarity criterion (or “metric”) to quantify correlation. It is commonly used in motion and object detection, manufacturing quality control, and in robot navigation algorithms like visual Simultaneous Localization and Mapping (vSLAM). Cascade classification is used for detecting more complex shapes and incorporates machine learning. Cascade functions are trained from many positive images, such as faces, as well as negative images like pictures without faces.
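The two simplest motion-detection techniques described above translate almost directly into OpenCV calls. The sketch below combines temporal (frame) differencing with OpenCV's MOG2 model, an adaptive take on the background-subtraction idea; camera index 0 and both thresholds are illustrative assumptions rather than values from the article.

```python
# Sketch of temporal differencing and background subtraction with OpenCV.
import cv2

cap = cv2.VideoCapture(0)                 # placeholder camera index
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

ok, prev = cap.read()
if not ok:
    raise SystemExit("no camera frame available")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

for _ in range(100):                      # process a bounded number of frames
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Temporal difference: pixels that changed since the previous frame.
    diff = cv2.absdiff(gray, prev_gray)
    _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # Background subtraction: pixels that differ from the learned background.
    fg_mask = subtractor.apply(frame)

    print("changed pixels:", cv2.countNonZero(motion_mask),
          "foreground pixels:", cv2.countNonZero(fg_mask))
    prev_gray = gray

cap.release()
```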

Feature Identification

Feature identification is more computationally demanding than feature detection because the algorithm must try to figure out the actual identity of an object or person. Techniques for feature identification include the constellation models, Artificial Neural Networks (ANN), and Convolutional Neural Networks (CNN). The constellation model is an advanced template-matching approach in which classes of objects are represented by a set of parts related by geometric constraints. There is no need to train a constellation model. Instead, model parameters are estimated using an unsupervised learning process. In so doing, an object class can be extracted from an unlabeled set of images to provide a flexible vision solution. A simple ANN, by contrast, requires training via supervised regressions before it can be deployed in the field. Neural networks, used for solving complex predictive and analytical problems, are a machine-learning framework that is loosely modeled after how the brain works. ANNs have exhibited success in solv-

ing complex problems like natural language interpretation, Big Data analysis, and embedded vision. The elements of the model – like the name “neural network”— draw their inspiration from biology. For example, the lines in Figure 3 are called “synapses” and the circles are called “neurons.” The synapses are simple weighting functions, which apply multiplier coefficients (W) to the input variables (X). The neurons sum the outputs of all the incoming synapses (Z) and apply an activation (or “transfer”) function (f) to compute a resulting “activation” for that node in the network. Data is passed forward through the neural network via matrices, which makes for fast and flexible computation. Additional inputs can be accommodated by simply adding more rows to the matrices. While network topology is static and depends on the number of inputs and the pre-defined arrangement of the “hidden units,” the weighting values are initially unknown. Therefore, neural networks must be trained to “learn” the most appropriate values for a particular application. This seemingly mysterious training process is accomplished by selecting some initial weight values with which to evaluate the network and then computing a “cost function” to see how good or bad that set of coefficients was at predicting the expected output. For example, did the algorithm identify a fruit as an orange or a nectarine? Through a supervised regression, the training algorithm iterates to a cost-function minima using a mathematical gradient descent process. In simple neural networks, each neuron is connected to every other neuron in the previous layer, and each connection has its own weight. In other words, it is a “fully connected” network. This general-purpose connection pattern makes no assumptions about the features in the data. As a result, it is expensive in terms of memory (weights) and computation (connections). CNNs are much more specialized and efficient. Each neuron is only connected to a few nearby (i.e., local) neurons from

Figure 3 Artificial Neural Network (ANN) example (Source: Welch Labs)



Figure 4 CNNs are used in real products today for functions such as pedestrian and road sign-detection (Source: Renesas Electronics America Inc.)

the previous layer. Moreover, the same set of weights and local connection layout is used for every neuron. A lower number of connections and weights makes convolutional layers relatively inexpensive in terms of memory and compute resources. The convolutional layer is but one of the building blocks in a CNN; others include pooling, ReLU, fully connected, and loss layers. CNNs are a popular deep learning framework in embedded vision today, and software running on CPUs with NEON acceleration achieves high-quality results with low latency without the need for GPU acceleration. There are various approaches to implementing feature identification: edge-heavy computing, cloud-only computing, and hybrid (or “fog”) computing. In the latter, some of the embedded vision intelligence is partitioned to run at the IoT endpoint, while the heavier lifting is done in the cloud. There are pros and cons to cloud and edge methodologies, but both can work together seamlessly in a distributed computing framework. Figure 4.
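To make the synapse/neuron description concrete, the following NumPy sketch performs one forward pass through a tiny fully connected network: the weight matrices play the role of the synapses, and each layer sums its weighted inputs (Z) and applies an activation f. The layer sizes and random weights are arbitrary, and the training step (cost function plus gradient descent) is deliberately omitted.

```python
# Tiny forward pass through a fully connected network, as described above.
import numpy as np

def sigmoid(z):
    """A common activation ("transfer") function f."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.random((4, 3))      # 4 samples, 3 input features
W1 = rng.random((3, 5))     # synapses: input layer -> 5 hidden units
W2 = rng.random((5, 1))     # synapses: hidden layer -> 1 output

Z1 = X @ W1                 # weighted sums at the hidden layer
A1 = sigmoid(Z1)            # hidden-layer activations
Z2 = A1 @ W2
y_hat = sigmoid(Z2)         # network prediction

# Training would compare y_hat with labels via a cost function and adjust
# W1 and W2 by gradient descent; that iteration is omitted in this sketch.
print(y_hat.shape)          # (4, 1)
```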

Future Outlook and Summary

Interest in machine vision is growing fast, dragging us into the realm of artificial intelligence (AI). Embedded vision functions like motion detection, feature identification, and gesture recognition can easily be added to industrial and consumer products today to add value for consumers. Vision algorithms

have a wide range of computational requirements and are suited for many different classes of MPUs. It is helpful to find a semiconductor vendor that can support all such embedded vision needs. What some do with AI can help make the world a better, safer, and more efficient place. Please use responsibly! Authors Bios: David Olsen: As a Senior Manager of Product Marketing at Renesas Electronics America, David is responsible for managing the Embedded Microprocessor Team in the Americas and for global Product Marketing Management of the RZ/G Series of MPUs. Mr. Olsen received an MBA from the University of California, Berkeley and an MS from the University of Alberta in Electrical and Computer Engineering. Georgi Stoykov: As a Senior Firmware Engineer at Renesas Electronics America, Georgi Stoykov is pivotal in exploring new technology in the areas of embedded software, motion control, and semiconductor tool automation. Before joining Renesas, Mr. Stoykov held several software engineering positions with both start-up companies and large corporations, including Genmark Automation and Elekta Impac Medical Systems. www.renesas.com



3.1 FUTURE TRANSPORTATION

Figure 1 Key initiatives to drive transportation in smart cities include autonomous and connected cars, electrification, parking space tracking and infrastructure.

Transportation in Smart Cities: What Needs to Happen? Smart Cities offer a huge range of opportunities for a brighter future, none more so than fully integrated multimodal transportation systems that address several of the major challenges facing us as a society. In our urban areas, a few of these challenges include traffic congestion, air pollution, and the increase in urban populations. by Bryce Johnstone, Director of Segment Marketing, Imagination Technologies

To address such a wide range of issues, there will be a raft of wired and wireless technologies brought to bear, along with a range of services that promise trillions of dollars in revenues for businesses. In this article we will examine the architectures that enable these services and what is required to enable semi- and fully autonomous driving in urban areas. Additionally, we will explore the impact of new wireless technologies, and the drivers and inhibitors of potential markets.

Key drivers for smarter transportation systems within urban centres

• Increasing Urbanisation. We are currently at a point where 60% of the world’s population are living in cities. With overall population growth heading towards nine


billion and above and increased numbers of people living in cities, this urbanisation will continue to increase over the next few decades. According to McKinsey, 60% of the world’s GDP will be generated by 600 cities, so ensuring more efficient transport systems will be key to high productivity. • Congestion. As urban populations increase and the size and aspirations of the global middle class continue to grow, there are more cars than ever on our roads. The amount of time spent in traffic jams in the US is now an average of 42 hours a year per motorist. In some large urban sprawls the average speed of traffic in rush hour is often below that of the days of the horse drawn carriage! One of the main contributors to congestion


is the aftermath of accidents in cities. With an eye on avoiding accidents and unclogging congested urban streets, we will soon see many cities place restrictions on the level of autonomy of cars allowed within city limits.

• Pollution. As a result of the reasons above, the levels of pollution are getting out of control in many conurbations. Cities like LA, Shanghai, Beijing and Paris are taking extreme measures to reduce the amount of smog and particulate matter in the air. The impact of pollution is huge; respiratory diseases are on the increase and the death toll purely due to high levels of pollution is increasing rapidly.

So what needs to be done? There are a range of proposals for transportation within Smart Cities that involve multiple technologies. Figure 1.

Key initiatives for transportation in smart cities

• Autonomous cars and ride share. By increasing the use of autonomous vehicles, we can potentially better control traffic speeds and flow, decrease parking time, and also change the level of ownership of cars, leading to a reduction in the overall number in cities. Today the economics of owning a car still means it largely makes sense in the absence of an efficient mass transit system. Once autonomous cars are widely deployed, services such as ride share (BlaBlaCar, Lyft, Uber) will be able to remove the cost of drivers and drive down the car cost per mile to a level which will largely mitigate the need for individual car ownership. With no insurance to pay, no need for maintenance, no road taxes and no vehicle depreciation, the travelling person could significantly cut costs. Also, the potential for reduction in accidents is massive and will enable better traffic flow through fewer traffic holdups and lower costs for emergency services.

• Electrification of vehicles. Many governments and car manufacturers are making noise in this area. France, for one, has come out to say that all new cars in France from 2040 will be electric. Mercedes have committed to making all models electric/hybrid electric by 2030, and Geely/Volvo have been even more aggressive, stating that they will have an electric part in all cars by 2019 (hybrid and pure EV). The Chinese government is also pushing local companies and technologies in order to hasten the move to a more electric future. Indeed, the only way to guarantee getting a number plate in certain cities in China is by buying electric.

Figure 2 ADAS: Levels of processing range from sensor to actuator. It starts from low- to mid- to high-level processing and ends up with control logic and actions.

• Parking space tracking. One of the issues of congestion in major cities is the number of journeys that are purely to find parking spaces. Today approximately 30% of journeys in towns and cities are taken up with this function. By utilising technologies such as surround view as cars are driving through cities, this information can be sent to the cloud and analysed to create a near real-time view of parking spaces available. This information can be sent to the drivers and particular slots allocated nearest to where the driver wants to go. Similarly, with autonomous self-driving cars, the driver/passenger could get dropped off at their destination and then the car could link up to the



city systems and be allocated a slot in which to park autonomously.

• Connected Car. In the smart cities scenario, the use of vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication, as well as V2Internet and telematics, means a range of possibilities become available. V2V would allow direct negotiation between cars at junctions, ultimately leading to the removal of traffic lights. Surround view camera information can be crowd-sourced to the cloud and analysed for free parking spaces. Services would then be delivered to cars by LTE/5G connections. Other infrastructure such as pay gates, gas station transaction links etc. would also lead to faster passage of traffic through urban environments. The connected car could have functions such as speed and geolocation controlled remotely by a smart city system in order to improve flow control; however, there are those who would argue that this has privacy implications. Figure 2.

• Smart City Infrastructure. This will take a long time to roll out. Firstly there is the cost of that infrastructure and who it is borne by. Will it be the city itself? Will it be the Wi-Fi operators? Will it be largely the mobile operators who deliver it? McKinsey have done a report stating that the savings to the US economy through widespread adoption of semi- and fully autonomous vehicles could be as much as $1trn, which is about as much as is being forecast for the upgrade of the roads and bridges network. Key technologies here are Wi-Fi (802.11p) and LTE/5G, as well as the sensors embedded in cars and

vehicles that effectively source real-time information as to the state of traffic, roads, incidents etc. 802.11p is being proposed for roadside infrastructure boxes that would be able to connect with cars travelling at up to 160 kph and relay road conditions, updated maps, and accident information in real time. LTE and 5G will be at the core of telematics, where car information (position, speed etc.) will be melded with other sourced information to create new sets of services which can then be sold back to the car owners.

The role of governments

To make autonomous cars and smart cities successful, governments will have to help manage the transition from a non-AV world to a fully autonomous world. Such legislation will cover the likes of liability, insurance, infrastructure support and funding, and many other legal aspects. One area is the addition of a software check in the annual car check (or MOT in the UK). If updates are not in place or haven't been applied within the last 20 or so days, the DVLA should be able to immediately take the car off the road. What happens in a situation where a car hadn't been updated with a patch that could have prevented an accident it was subsequently in? Who is at fault? What needs to be made mandatory? Who has ultimate liability if an autonomous car goes wrong? What geofencing requirements are going to be put onto cars? There is also the matter of government agencies such as NHTSA in the US and NCAP in Europe, who will increasingly mandate ADAS features to the point where there will be a non-autonomous car scrappage scheme to get rid of the older vehicles without any driver aids. Figure 3.

Figure 3 ADAS functions and requirements include night vision, pedestrian detect, lane departure warning, radar/LIDAR/IR sensors, deep learning, auto emergency braking and more.



IP at the heart of smart transportation systems

From an Imagination perspective as licensors of silicon IP, we are looking 3-5 years ahead to ensure that we are delivering the right type of IP optimised for the task. Key technologies such as Convolutional Neural Networks (CNNs) for ADAS functions are being addressed by increasingly powerful PowerVR GPUs (up to teraflops of FP16/32) and dedicated hardware accelerators, reducing area and power whilst increasing performance by many factors. For the sensor portion of the autonomous car, the image processing IP blocks will deliver sensor data to the sensor fusion portion of the processing. MIPS microprocessors are being successfully deployed in Mobileye EyeQ2/3/4/5 SoCs, which are at the heart of the majority of today's ADAS systems. For the next-generation EyeQ5, designed for fully autonomous vehicles, MIPS CPUs provide the central control of the real-time image processing. Led by key technologies, smart finance of both public and private money, sensible government legislation and key IP from the likes of Imagination Technologies, the smart city of the future heralds a more satisfying experience for the commuting motorist and city dweller.

Author Bio:
Bryce Johnstone is a Director responsible for promoting the company's ecosystem of third parties across all technologies including PowerVR, MIPS and ENSIGMA. This role involves identifying and engaging with key third parties to work collaboratively on Imagination's products, as well as general outreach to the whole developer community. Additional responsibilities include Director for Automotive, which encompasses working on relationships throughout the automotive value chain, including Tier 1s and car manufacturers, as well as key third parties to understand future requirements. Johnstone holds an Electronics and Electrical Engineering degree from the University of Edinburgh and an MBA from the Open University.

www.imgtec.com



ADVERTISER INDEX GET CONNECTED WITH INTELLIGENT SYSTEMS SOURCE AND PURCHASABLE SOLUTIONS NOW Intelligent Systems Source is a new resource that gives you the power to compare, review and even purchase embedded computing products intelligently. To help you research SBCs, SOMs, COMs, Systems, or I/O boards, the Intelligent Systems Source website provides products, articles, and whitepapers from industry leading manufacturers---and it's even connected to the top 5 distributors. Go to Intelligent Systems Source now so you can start to locate, compare, and purchase the correct product for your needs.

intelligentsystemssource.com

ADVERTISER INDEX

Company | Page | Website
Acrosser | 39 | www.acrosser.com
Critical I/O | 11 | www.criticalio.com
Dell | 17 | www.dell.com
Elma | 4 | www.elma.com
Green Hills Software | 2 | www.ghs.com
High Assurance Systems | 19 | www.highassure.com
IBM | 35 | www.ibm.com
Intelligent Systems Source | 26 | www.intelligentsystemssource.com
Micro Digital | 45 | www.smxrtos.com
NVIDIA | 13 | www.nvidia.com
One Stop Systems | 21 | www.onestopsystems.com
Pentek | 48 | www.pentek.com
PICMG | 31 | www.picmg.org
Pixus Technologies | 15 | www.pixustechnologies.com
Skyscale | 36-37 | www.skyscale.com
Supermicro | 27 | www.supermicro.com
Teledyne Dalsa | 25 | www.teledynedalsa.com
TQ | 47 | www.embeddedmodules.net
WinSystems | 9 | www.winsystems.com

RTC (Issn#1092-1524) magazine is published monthly at 940 Calle Negocio, Ste. 230, San Clemente, CA 92673. Periodical postage paid at San Clemente and at additional mailing offices. POSTMASTER: Send address changes to RTC-Media, 940 Calle Negocio, Ste. 230, San Clemente, CA 92673.



Experience Real Design Freedom

Only TQ allows you to choose between ARM®, Intel®, NXP and TI
• Off-the-shelf modules from Intel, NXP and TI
• Custom designs and manufacturing
• Rigorous testing
• Built for rugged environments: -40°C to +85°C
• Long-term availability
• Smallest form factors in the industry
• All processor functions available

For more information call 508 209 0294 www.embeddedmodules.net


Unfair Advantage. 2X HIGHER performance. 4X FASTER development.

Introducing Jade™ architecture and Navigator™ Design Suite, the next evolutionary standards in digital signal processing.


Pentek’s new Jade architecture, based on the latest generation Xilinx® Kintex® Ultrascale™ FPGA, doubles the performance levels of previous products. Plus, Pentek’s next-generation Navigator FPGA Design Kit and BSP tool suite unleashes these resources to speed IP development and optimize applications.

• Streamlined Jade architecture boosts performance, reduces power and lowers cost
• Superior analog and digital I/O handle multi-channel wideband signals with highest dynamic range
• Built-in IP functions for DDCs, DUCs, triggering, synchronization, DMA engines and more
• Board resources include PCIe Gen3 x8 interface, sample clock synthesizer and 5 GB DDR4 SDRAM
• Navigator Design Suite BSP and FPGA Design Kit (FDK) for Xilinx Vivado® IP Integrator expedite development
• Applications include wideband phased array systems, communications transceivers, radar transponders, SIGINT and ELINT monitoring and EW countermeasures

Jade Model 71861 XMC module, also available in VPX, PCIe, cPCI and AMC with rugged options.

Navigator FDK shown in IP Integrator.

See the Video!

www.pentek.com/go/rtcjade or call 201-818-5900 for more information

All this plus FREE lifetime applications support! Pentek, Inc., One Park Way, Upper Saddle River, NJ 07458 Phone: 201-818-5900 • Fax: 201-818-5904 • email: info@pentek.com • www.pentek.com Worldwide Distribution & Support, Copyright © 2016 Pentek, Inc. Pentek, Jade and Navigator are trademarks of Pentek, Inc. Other trademarks are properties of their respective owners.

