RTC Magazine

Page 1

Real World Connected Systems Magazine. Produced by Intelligent Systems Source

Vol 17 / No 4 / APRIL 2016

Machine Vision Performs Better Than Humans
Vision Cameras Achieving Gigabit Ethernet Speed
Data Storage and Management Solutions
Supercomputer in a Box Capable of 170 Teraflops
Corporate Profile: NVIDIA Corporation

An RTC Group Publication




CONTENTS

Real World Connected Systems Magazine. Produced by Intelligent Systems Source

DEPARTMENTS
05  EDITORIAL: With Deep Learning, Machine Vision Can Perform Wonders
    by John Koon, Editor-in-Chief

NEW SECURITY EMPOWERS MACHINE VISION FUNCTIONS
06  1.0: Mission-critical Machine Vision in an Insecure IoT World
    by Dr. Lars Asplund and Dr. Fredrik Bruhn, Unibap AB
10  1.1: Vision Cameras Break Through the Gigabit Ethernet Speed Ceiling
    by Eric Carey, Teledyne DALSA
12  1.2: Smart Camera Empowers Factory Automation
    by Fabio Perelli, Matrox Imaging
14  1.3: Video Interfaces Provide a Clear View Forward for Machine Vision
    by John Phillips, Pleora Technologies

NVIDIA COMPANY PROFILE: Machine Vision Performs Better Than Humans
18  2.0: NVIDIA Introduced the World's First Deep Learning Supercomputer
19  2.0: CEO Profile: Jen-Hsun Huang, Co-founder of NVIDIA
21  2.0: Vital Statistics: NVIDIA Company Snapshot
22  2.1: One-on-One With the Product Manager of Autonomous Machines (Interview with Jesse Clayton)
    by John Koon, Editor-in-Chief

GOOD DATA MANAGEMENT
24  3.0: Linux-based Baseboard Management Controllers Cost-effectively Enable Advanced Server Management Features
    by Mark Overgaard, Pentair Electronics Protection
30  3.1: Using Security Protocols is Not Enough to Protect the IoT's "Small Data"
    by David Brook, HCC Embedded

THE NEW USB 3.1 WILL CHANGE THE WORLD AGAIN
34  4.0: Implementing USB Type-C Data Support for Embedded Systems
    by Morten Christiansen, Synopsys


RTC MAGAZINE

PUBLISHER
President: John Reardon, johnr@rtcgroup.com
Vice President: Aaron Foellmi, aaronf@rtcgroup.com

EDITORIAL
Editor-In-Chief: John Koon, johnk@rtcgroup.com

ART/PRODUCTION
Art Director: Jim Bell, jimb@rtcgroup.com
Graphic Designer: Hugo Ricardo, hugor@rtcgroup.com

ADVERTISING/WEB ADVERTISING
Western Regional Sales Manager: John Reardon, johnr@rtcgroup.com, (949) 226-2000
Eastern U.S. and EMEA Sales Manager: Ruby Brower, rubyb@rtcgroup.com, (949) 226-2004

BILLING
Controller: Trudi Walde, trudiw@rtcgroup.com, (949) 226-2021

TO CONTACT RTC MAGAZINE:
Home Office: The RTC Group, 905 Calle Amanecer, Suite 150, San Clemente, CA 92673
Phone: (949) 226-2000  Fax: (949) 226-2050  Web: www.rtcgroup.com

Published by The RTC Group. Copyright 2016, The RTC Group. Printed in the United States. All rights reserved. All related graphics are trademarks of The RTC Group. All other brand and product names are the property of their holders.


EDITORIAL

With Deep Learning, Machine Vision Can Perform Wonders by John Koon, Editor-In-Chief

By now we are all familiar with the term M2M, or machine-to-machine. Generally, it refers to communication between machines or computing devices (computers, robots/automation or other devices with a processor in them). Why do I bring this up? I am about to tackle the topic of Machine Vision (MV). Everyone has a different opinion on what this field should be called; some of the terms used include Computer Vision, Embedded Visual Computing and Intelligent Vision. For now, let us use Machine Vision (MV), as it is close to the idea of M2M. MV is more than pointing an HD camera at a scene to capture images and comparing them with predetermined parameters (Figure 1). A design can be as simple as detecting a defective part on a conveyor belt, or as complicated as a demo I saw recently at the GPU Technology Conference (GTC) produced by NVIDIA. In this demo, a machine was trained to count how many cars pass through an intersection; not buses, not trucks, not bicycles. The machine was trained with 100,000 samples of what a car looks like, then set up at the corner of the intersection to start counting. Keep in mind that not every moving object is a car. The machine's job is to pick out only cars, not buses or trucks, even though they all have wheels and may look like a car; bicycles and motorcycles were also excluded. Additionally, the moving cars may not match the samples shown to the machine, so it has to be smart enough to figure out which objects are cars and count them. This involves Artificial Intelligence (AI) and Deep Learning, and it is a lot more complex than simply comparing the image of a good part with a defective one. A great deal more computing power is needed to achieve an AI and Deep Learning solution.

Smart cameras can do more than capture images. Image courtesy of Matrox.

After attending GTC and visiting NVIDIA, I learned that there is a great deal of science behind these two subjects. We will explore more in the future. In this edition, the following will be discussed:

• A special corporate profile focusing on NVIDIA and supercomputing

• Mission-critical machine vision in an insecure IoT world (Unibap AB)

• Vision cameras break through the gigabit Ethernet speed ceiling (Teledyne DALSA)

• Smart camera empowers factory automation (Matrox)

• Video interfaces provide a clear view forward for machine vision (Pleora)

• Linux-based Baseboard Management Controllers cost-effectively enable advanced server management features (Pentair Electronics Protection)

• Using security protocols is not enough to protect the IoT's "Small Data" (HCC Embedded)

• NVMe over Fabric technology enables new levels of storage efficiency in today's data centers (Xilinx)

• As machine vision involves Big and Small Data storage and management, three experts will share their experiences.

Let's dive in.



1.0 NEW SECURITY EMPOWERS MACHINE VISION FUNCTIONS

Mission-critical Machine Vision in an Insecure IoT World

We are on the threshold of the next industrial revolution, where machine vision will be the major game-changer, as intelligent vision can now even incorporate deep-learning algorithms. These enable cooperative work environments between humans and machines, or machine vision that is part of critical-control feedback loops. And these algorithms are most efficiently executed on heterogeneous system architectures.

by Dr. Lars Asplund and Dr. Fredrik Bruhn, Unibap AB

Machine vision moving to "sense-plan-act"

In early applications, machine vision was used with frame grabbers and Digital Signal Processors (DSPs). Today, with the development of reasonably priced high-performance sensors - one of three major enablers of the new robotics revolution - we can see examples of applications in which recognition is not simply a means of identifying well-known schematics in a 'sense-compare-decide' manner. Today, robotics - from simple stationary systems right up to autonomous vehicles - is transforming towards more sophisticated 'sense-plan-act' behavior. In this respect, a vision system is the most powerful eye of a robot, informing it of its position and its environment. And the computing power of Heterogeneous System Architecture-based embedded processors like the AMD G-Series SoC provides the brain that understands and interprets the environment. The second enabler is the processor, which delivers the required high performance with moderate power consumption. The final part of a smart robot is the act component. Acting robots require high power density in the batteries and high-efficiency motors, so state-of-the-art batteries and BLDC (brushless DC) motors are enabler number three. The combination of all three enablers, i.e., their enhanced technologies, makes vision systems and robotics so revolutionary today.


New intelligent vision systems

So let's take a closer look at the vision part of this industrial revolution. Human eyes are connected via nerves to the 'visual cortex' in our brain. Out of our five senses, the visual cortex accounts for the largest section of the brain. Machine vision systems, such as the IVS-70 (Figure 1) based on parallel computing offered by heterogeneous SoCs, are the enablers of an Artificial Visual Cortex for machine vision systems. Their eyes are lenses and optical sensors. Their optic nerves to the Artificial Visual Cortex are high-speed connections between the sensors and the compute units. These systems not only provide high speed and high resolution to compete with our human vision, they also provide accurate spatial information on where landmarks or objects are located. To achieve this, stereoscopic vision is the natural choice. Industrial applications for this type of stereoscopic vision system can be found, for example, in item-picking from unsorted bins. Mounted on a robot arm, a vision system can carry out 'visual servoing' at 50 fps and identify the most suitable item to pick at the same time the gripper of the robot arm is approaching the bin. This makes scanning - which can take a couple of seconds - and reprogramming the robot arm superfluous. Autonomous cars are another obvious application for vision technologies, as well as a whole range of domestic robot applications.

Figure 1 Unibap's mission-critical stereo Intelligent Vision System (IVS) with 70 mm baseline features advanced heterogeneous processing. Extensive error correction is enabled on the electronics, particularly on the integrated AMD G-Series SoC and Microsemi SmartFusion2 FPGA.

The artificial visual cortex

So how does this process work in detail? The first stages of information handling are strictly localized to each pixel, and are therefore executed in an FPGA. Common to all machine vision is the fact that color cameras think in RGB (the pixels are Red, Green and Blue), just like the human eye, but this representation is not well suited to accurately analyzing an image. Thus, RGB first has to be transformed into HSI (Hue, Saturation and Intensity). Rectifying the image to compensate for distortion in the lenses is the next necessary step. Following this, stereo matching can be performed between the two cameras. These steps are executed within an FPGA that assists the x86 processor. All the following calculations are application-specific and best executed on the integrated, highly flexible programmable x86 processor platform, which has to fulfill quite challenging tasks to understand and interpret the content of a picture. To understand how complex these tasks are, it helps to know that interpreting picture content is extremely hard for software programmers and that, until recently, the human visual cortex has been superior to computer technology. These days, however, technological advancements are, quite literally, changing the game: an excellent example of computer technology improvement is Google's AlphaGo computer, which managed to beat the world's best Go player (Figure 2). And this was achieved by executing neural network algorithms. Today such algorithms can be executed much faster than in the nineties. Recent methods also use more layers in building the neural networks, and today the term deep learning means a neural network with many more layers than were used previously. Plus, the heterogeneous system architecture of modern SoCs allows deep-learning algorithms to be used efficiently (e.g. with the Deep Learning Framework Caffe from Berkeley). x86 technology is also interesting for intelligent stereoscopic machine vision systems due to its optimized streaming and vector instructions, developed over a long period of time, and its very extensive and mature ecosystem of software, vision system algorithms and drivers. Plus, new initiatives like Shared Virtual Memory (SVM) and the Heterogeneous System Architecture (HSA) now offer an additional important companion technology to x86 systems by increasing the raw throughput capacities needed for intelligent machine vision.
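The pixel-level front end described above maps naturally onto widely available vision libraries. Below is a minimal, illustrative Python/OpenCV sketch of the same three steps (color transform, rectification, stereo matching); the calibration matrices are assumed to come from a prior stereo calibration, and HSV is used as a readily available stand-in for HSI:

```python
# Illustrative sketch of the per-pixel front end: color transform,
# rectification, then stereo matching. K, dist, R, P come from a prior
# stereo calibration (assumed available); HSV stands in for HSI here.
import cv2
import numpy as np

def preprocess(frame_bgr, K, dist, R, P, size):
    # 1. Color transform: RGB (BGR in OpenCV) to hue/saturation/value
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # 2. Rectification: compensate lens distortion, align epipolar lines
    map1, map2 = cv2.initUndistortRectifyMap(K, dist, R, P, size, cv2.CV_16SC2)
    return cv2.remap(hsv, map1, map2, cv2.INTER_LINEAR)

# 3. Stereo matching between the two rectified camera images
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
# disparity = stereo.compute(left_gray, right_gray)  # 8-bit grayscale inputs
```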

HSA enables efficient use of all resources

With the introduction of the latest generation of AMD SoCs, a hardware ecosystem is now in place which accelerates artificial intelligence algorithms in distributed, highly integrated sensor logic. Thus, software developers can now also take advantage of a powerful processing component that has been sitting on the sidelines, woefully underused: the graphics processor (Figure 3). In fact, the graphics processor can accomplish parallel compute-intensive processing tasks far more efficiently than the CPU, which is important for increased parallel computational loads. The key to all this is the availability of the Heterogeneous System Architecture, which in terms of x86 technology has mainly been driven by AMD but has also been joined by many industry leaders. HSA-supporting microarchitectures seamlessly combine the specialized capabilities of the CPU, GPU and various other processing elements onto a single chip: the Accelerated Processing Unit (APU). By harnessing the untapped potential of the GPU, HSA promises not only to boost performance but to deliver new levels of performance (and performance-per-watt) that will fundamentally transform the way we interact with our devices. With HSA, programming is also simplified, using open standards tools like MATLAB® or OpenCL/OpenCV libraries.
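As a small taste of that simplified programming model, OpenCV's transparent API will dispatch the same call to an OpenCL device when one is available. This hedged sketch assumes an OpenCL-capable GPU and uses a synthetic test image:

```python
# Sketch of OpenCV's transparent API (UMat): the same calls may be
# offloaded to the GPU via OpenCL when a device is available, which is
# the kind of CPU+GPU sharing HSA-style programming aims to make routine.
import cv2
import numpy as np

cv2.ocl.setUseOpenCL(True)                  # request OpenCL acceleration
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)  # test image
gpu_frame = cv2.UMat(frame)                 # data usable by CPU and GPU paths
gray = cv2.cvtColor(gpu_frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (9, 9), 0)  # may execute on the GPU
edges = cv2.Canny(blurred, 50, 150)
result = edges.get()                        # back to a NumPy array on the host
print("OpenCL active:", cv2.ocl.useOpenCL())
```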

Figure 2 Modern computer vision and machine learning systems using x86 processors can analyze each pixel.




Figure 3 HSA provides a unified view of fundamental computing elements, allowing a programmer to write applications that seamlessly integrate CPUs with GPUs while benefiting from the best attributes of each.

The AMD G-Series System-on-Chip (SoC) perfectly matches all the points discussed above. It offers HSA, combining the x86 architecture with a powerful GPU, PCIe and a wealth of I/Os. On top of this, AMD G-Series SoCs have an additional benefit which is not at all common but extremely important for the growing demands of application safety: extremely high radiation resistance for the highest data integrity.


Guaranteed data integrity is one of the most important preconditions for meeting the highest reliability and safety requirements; every single calculation and autonomous decision depends on it. So it is crucial that, for example, data stored in RAM is protected against corruption and that calculations in the CPU and GPU are carried out conforming to code. Errors, however, can happen due to so-called Single Events. These are caused by the background neutron radiation which is always present and originates when high-energy particles from the sun and deep space hit the earth's upper atmosphere and generate a flood of secondary isotropic neutrons all the way down to ground or sea level. The Single Event probability at sea level is between 10^-8 and 10^-2 upsets per device-hour for commonly used electronics. At the upper end of that range, one single event every 100 hours could potentially lead to unwanted, jeopardizing behavior. This is where the AMD embedded G-Series SoCs provide the highest level of radiation resistance and, therefore, safety. Tests performed by NASA Goddard Space Flight Center (Note 1) showed that the AMD G-Series SoCs can tolerate a total ionizing radiation dose of 17 Mrad (Si). This surpasses the requirements by far when compared to current maximum permissible values: for humans, 400 rad in a week is lethal; in standard space programs, components are usually required to withstand 300 krad; even a space mission to Jupiter would only require resistance against 1 Mrad. Additionally, AMD supports advanced error correction memory (ECC RAM), a further crucial feature that corrects data errors in memory caused by Single Events (Figure 4).

Note 1: Kenneth A. LaBel et al, "Advanced Micro Devices (AMD) Processor: Radiation Test Results", NASA Electronic Parts and Packaging Program Electronics Technology Workshop, MD, June 11-12, 2013
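As a quick sanity check of the upset-rate arithmetic above, the expected number of single events and the probability of seeing at least one can be worked out with a simple Poisson model (a minimal sketch; the rate is taken from the upper end of the quoted range):

```python
# Sanity check of the single-event arithmetic: at 1e-2 upsets per
# device-hour (upper end of the quoted range), one upset is expected
# every 100 hours; a Poisson model gives the probability of >= 1 upset.
import math

rate = 1e-2                 # upsets per device-hour (illustrative value)
hours = 100.0               # observation window

expected = rate * hours                      # = 1.0 upset expected
p_at_least_one = 1.0 - math.exp(-expected)   # ~63% chance of >= 1 upset
print(f"expected upsets: {expected:.1f}, P(>=1): {p_at_least_one:.2f}")
```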




Figure 4 Susceptibility of common electronics to background neutron radiation, expressed as the Single Event Rate (upsets per device-hour). In order to compare different technologies, the SER values have been normalized to a size of 1 GByte for each relevant technology.




1.1 NEW SECURITY EMPOWERS MACHINE VISION FUNCTIONS

Vision Cameras Break Through the Gigabit Ethernet Speed Ceiling

Patent-pending TurboDrive data encoding technology lets GigE Vision cameras go far beyond current bandwidth limitations, increasing throughput by as much as 150% while retaining 100% of image data.

by Eric Carey, Teledyne DALSA

Figure 1 Teledyne DALSA Linea GigE cameras deliver high speed and flexibility at low cost for inspection applications. Linea uses Teledyne DALSA’s own advanced CMOS line scan sensors with high QE and low noise for better image quality. The Linea cameras are available in resolutions from 2k to 16k.

Rethink GigE Vision throughput

Since its debut in 2006, GigE Vision has received widespread acceptance as a camera interface standard because of its convenience and cost effectiveness. Initially, this frame grabber-less interface provided sufficient bandwidth for transmitting images from the majority of sensors. A decade later, however, the gigabit Ethernet network has become a bottleneck as today's high-resolution and high-speed CMOS image sensors exceed the capabilities of the GigE Vision interface. Today, the challenge facing the machine vision industry is how to address this lack of throughput while still retaining GigE Vision's considerable benefits, which include low cost, ease of use, long cable lengths, and popularity across many industrial and consumer applications.

Smash the gigabit Ethernet speed ceiling

Technology now exists that allows cameras to transmit pixel information at a rate exceeding the constraints imposed by gigabit Ethernet. TurboDrive™ is a proprietary, patent-pending innovation from Teledyne DALSA that breaks through the gigabit Ethernet speed ceiling, letting a GigE Vision camera send pixel information at a rate in excess of 115 MB/s, and speeding up line and frame rates beyond the nominal link capacity. Depending upon the image, throughput can increase as much as 150% while 100% of image data is retained - the images transmitted to system memory are identical to the images acquired from the camera sensor.

Transmit 100% image data

TurboDrive uses advanced data encoding techniques that look at the redundancy in the data coming out of the sensor. It uses image entropy-based encoding to model pixel information with no loss of information - data integrity is always maintained.


This enables faster data transmission on the link, as each pixel is encoded in fewer bits. Traditionally, machine vision cameras have used absolute encoding over 8 to 16 bits to transmit image information. TurboDrive relies instead upon localized relative encoding, examining each pixel in its context before it is encoded. This permits more compact encoding of the pixel information and improves efficiency by packing the same information into fewer bits.
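TurboDrive's exact algorithm is proprietary, but the principle of relative versus absolute encoding is easy to illustrate. The toy Python delta encoder below stores each pixel as a difference from its left neighbor; because neighboring pixels are usually similar, the resulting small values can be entropy-coded into fewer bits, and decoding reconstructs the line losslessly:

```python
# Toy illustration of relative (delta) encoding: neighboring pixels are
# usually similar, so differences are small values that compress well.
# This is only the principle; it is not TurboDrive's actual algorithm.
import numpy as np

def delta_encode(line: np.ndarray) -> np.ndarray:
    # First pixel stored absolutely, the rest as left-neighbor differences
    deltas = np.diff(line.astype(np.int16))
    return np.concatenate(([line[0]], deltas))

def delta_decode(encoded: np.ndarray) -> np.ndarray:
    return np.cumsum(encoded).astype(np.uint8)   # lossless reconstruction

line = np.array([100, 101, 101, 103, 102, 102], dtype=np.uint8)
encoded = delta_encode(line)       # [100, 1, 0, 2, -1, 0] -> small values
assert np.array_equal(delta_decode(encoded), line)  # 100% of data retained
```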

Work with any reliable transmission link

Due to the nature of this data packing, TurboDrive requires a reliable transmission link—specifically one with error correction. This ensures that the decoding engine on the host always sees an error-free digital signal since errors in these protocols are handled at the transmission link layer. In addition to GigE Vision, USB3 Vision and Camera Link HS are candidates for TurboDrive. This technology is not suitable for Camera Link, however, since any errors that may occur with Camera Link are not corrected at the transmission protocol layer.

Support burst and cycling modes

TurboDrive works in conjunction with other features like burst mode, cycle mode or a combination of the two to accelerate the transmission of images and increase overall system throughput. Burst mode is the ability to buffer many images in internal camera memory during peak acquisition times and to transmit them over the GigE Vision network during slow periods. Cycle mode is the ability to acquire several images in close sequence, changing acquisition parameters - like exposure times and regions of interest - between each image acquisition. Once completed, the cycle restarts until the overall process is stopped. Cycle mode can be combined with burst mode to acquire images at faster-than-link speed, storing images in local memory while changing parameters between each acquired image.
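The burst-mode idea can be pictured as a simple buffering scheme. In this illustrative sketch (buffer size and frame source are invented; real cameras do this in firmware), frames acquired faster than the link can carry are queued in on-camera memory and drained during idle periods:

```python
# Conceptual sketch of burst mode: buffer frames during peak acquisition,
# then drain them over the link during slow periods. Values are invented.
from collections import deque

buffer = deque(maxlen=256)           # stand-in for internal camera memory

def acquire_burst(frames):
    for f in frames:                 # acquisition faster than link speed
        buffer.append(f)

def drain_to_link(send):
    while buffer:                    # transmit during slow periods
        send(buffer.popleft())

acquire_burst(range(10))
drain_to_link(lambda f: print("sent frame", f))
```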

Efficiently combine multiple image streams

TurboDrive is also used to aggregate information from multiple cameras onto a single physical link. For example, image streams coming from two cameras can go onto the same network interface card (NIC). In certain machine vision applications, this can be more cost-effective than using multiple NICs.

Seamless and transparent operation

Designed to work transparently with Teledyne DALSA products, TurboDrive does not require any hardware or software changes to a network or an application. The technology is available with select Teledyne DALSA cameras and requires the company's free Sapera LT 8.0 software, since it is activated as a feature within this SDK's GigE Vision driver. Because the degree of performance improvement depends heavily on the image content, Teledyne DALSA also offers a software simulator, the TurboDrive Performance Tool, which lets users test their images to predict increases in throughput. In spring 2015, TurboDrive became available for Teledyne DALSA's low-cost Linea GigE line scan cameras, with support for area cameras scheduled to follow in autumn 2015.

Conclusion

As the output from CMOS sensors pushes the boundaries of established camera interface standards, the machine vision industry recognizes that customers need to go beyond current bandwidth limitations - without affecting the integrity of their image data. New technologies like TurboDrive help customers respond to the ever-present need for their vision and inspection systems to become faster and more efficient.

Model                                   | Competitor's 4K Line Camera | Linea GigE 4K | Linea GigE 4K
TurboDrive capability                   | No                          | Off           | On
Max internal acquisition line scan rate | 26 kHz                      | 80 kHz        | 80 kHz
GigE link speed utilization             | 90%                         | 100%*         | 100%*
Actual line scan rate received at host  | 26 kHz                      | 28.7 kHz      | 71.8 kHz
Effective bandwidth received at host    | 104 MB/s                    | 115 MB/s      | 287 MB/s

Figure 2 Comparison of line scan rates with and without TurboDrive using a Linea GigE 4K camera. As the bottom rows show, the line scan rate received at the host is more than doubled, from 28.7 kHz to 71.8 kHz, with TurboDrive enabled.



1.2 NEW SECURITY EMPOWERS MACHINE VISION FUNCTIONS

Smart Camera Empowers Factory Automation

Compact, economical and easy to use, smart cameras combine image sensing, embedded processing and I/O capabilities in one device. They help system integrators, machine builders and OEMs develop automated visual inspection applications with decreased complexity. Several key design factors are discussed below.

by Fabio Perelli, Product Manager, Matrox Imaging

The small form factor of a smart camera belies all the components, features and functionality it comes packed with, and can be misleading about the complexity involved in designing one. The following are some design considerations for a smart camera that could help you make that important make-or-buy decision.

Processor – To start with, a good processor capable of handling most vision applications is needed. For example, Intel® embedded processors allow inspections to take place on fast-moving lines, or more inspections to be performed in the allotted time.

Housing – An IP67-rated housing is vital for factory-floor use. The housing and M12 connectors ensure that smart cameras are dust-proof, immersion-resistant and extremely rugged, essentially right at home in dirty industrial environments. Smart cameras can also be available without the housing, as a board set, for an even tighter integration into an existing machine.

Lens – A wide range of lenses is available to choose from depending on the application's needs. A sealed lens cap and an interface for focus adjustment are also required. A dedicated interface for controlling an auto-focus lens facilitates setup, use and maintenance by enabling focus-position adjustments via an application's user interface. The Varioptic Caspian C-mount auto-focus lens is an emerging liquid-lens technology that is being made available for smart cameras at a reasonable price. Varioptic lenses operate through the interaction of two liquids: unlike a traditional lens, where linear, mechanical focusing takes place, the focusing happens directly and almost instantaneously through the intermingling of the liquids.


Image sensors – The market offers a wide choice of monochrome and color image sensors with resolutions from VGA to 5 Mpixels. CMOS image sensors have the added benefit of high readout rates.

RTIO – Real-time I/O is a feature that manages the timing counts and positioning from rotary encoders for interaction with vision and automation devices. RTIO is handled better in hardware than in software: if timing counts are handled in software, there is associated variability that is not ideal for many applications. Some camera manufacturers support hardware-based RTIO that provides the means to tightly follow and interact with fast-moving production lines and equipment.

Communication protocols – A smart camera needs to communicate using mainstream industrial protocols to work with other automation devices like PLCs and HMIs. Getting the required certification, however, has cost and time implications. And a pure software implementation of PROFINET is not going to meet the tight timings required for real-time control; in this case a hardware-assisted implementation is necessary. A Gigabit Ethernet interface also allows the smart camera to efficiently output data, including images, over factory networks.

Intensity controller – Built-in circuitry to control lighting intensity is a plus. Some smart cameras have a dedicated LED intensity control interface that simplifies the setup and use of the overall machine vision system by allowing the illumination to be regulated via an application’s user interface.

Application development – Last but not least, a compatible and complete software development kit is necessary for developing the application with a smart camera. An SDK with a comprehensive set of programming functions for image capture, processing and analysis is useful; it takes considerable expertise and many man-years to put together an SDK with an extensive set of tools. Just as smart phones have eliminated the need to carry a watch, calendar, notebook, address book and camera, smart cameras, by combining all of the functions mentioned above, take away the need for a separate camera, PC, I/O board, light intensity controller, lens focus controller and industrial communication card. They thus minimize the components used for automation. By being compact, economical and easy to use, they prove invaluable to OEMs. "We've taken feedback and packed the new Matrox Iris GTR with the features that OEMs need to tackle demanding projects within tight budgets," said Fabio Perelli, Product Manager, Matrox Imaging. "At SVIA, our goal is to support manufacturers in their efforts to make themselves more competitive through automation using our standardized, flexible, user-friendly robotic solutions," said Matthias Grinnemo, Technical Manager, SVIA. "As a long-time user of Matrox smart cameras, we look forward to the new Iris GTR, which is sure to help us further our commitment to helping customers minimize the cost per part produced in their factories." (Figure 1)

Size – The size of smart cameras keeps getting smaller while the feature and performance set keeps expanding. It takes expertise to come up with a small form factor packed with robust, capable features. Some smart cameras measure just about the size of a person's palm (75 mm x 75 mm x 54 mm), allowing them to be mounted in tight spaces.

Development environments – A smart camera should be able to run different operating systems (such as Microsoft Windows Embedded or Linux), giving developers a choice among prominent environments for their vision application software.

Figure 1 Smart cameras are smaller, faster, cost-effective and ideal for factory automation applications



1.3 NEW SECURITY EMPOWERS MACHINE VISION FUNCTIONS

Video Interfaces Provide a Clear View Forward for Machine Vision

As machine vision migrates from the factory automation market into new medical, defense, and transportation applications, system manufacturers are under increasing pressure to simplify design, lower costs, and improve performance.

by John Phillips, Pleora Technologies

Figure 1 GigE Vision and USB3 Vision video interfaces bring unique advantages that simplify imaging system design and deliver performance benefits for end-users, particularly in comparison to proprietary and legacy products.

The feature attraction at a machine vision tradeshow is often a robot that wows the crowd by sorting nuts and bolts at superhuman speeds. Behind the blur of the mechanical arm, uncompressed high-bandwidth video moves from image sources to a processor at near-zero latency. The video interface may not have the same wow factor as a robot, but the hardware and software that captures and transmits data between imaging sources, a processing unit, and display panels is playing an important role in the evolution of machine vision and its migration into new markets. As imaging becomes more sophisticated, with networks of cameras, sensors, software, and processing platforms operating seamlessly in real time, choosing the right interface helps designers ensure imaging applications support more detailed analysis, while meeting increasing budget pressures and demand for intuitive, easier-to-use systems.

Making the Video Connection

When machine vision moved from research labs onto manufacturing floors in the 1980s, video interfaces were often based on proprietary designs that met performance demands but often posed cost and integration challenges. Even today, manufacturers often underestimate the time and knowledge required to develop an interface solution supporting real-time, high-resolution image transfer. Existing interfaces from the consumer, telecom, and broadcast markets were also adapted for machine vision. Camera Link, introduced in 2000, was the industry's first purpose-built interface standard. As imaging systems perform more complex tasks, and end-users focus on cost and ease of use, the limitations posed by these interfaces are apparent. Each of these interfaces requires a dedicated connection between the image source and endpoint. In multi-screen applications, cabling becomes costly and difficult to manage and scale. These interfaces also need a PCIe frame grabber at each endpoint to capture data. This limits the types of computers that can be used, drives up component costs, and increases complexity. In addition, expensive switching is required to support real-time video networking. Recognizing these limitations, the vision industry introduced new standards that regulate real-time video transmission over Ethernet and USB 3.0.

The Basics of GigE Vision

GigE Vision, launched in 2006, standardized video transfer and device control over Gigabit Ethernet (GigE). With GigE Vision interfaces, imaging data is transmitted directly to the Ethernet port on a computing platform. There is no need for a PCIe frame grabber, meaning any type of computer can be used, including laptops and tablets. Longer reach Ethernet cables – up to 100 meters over standard copper cabling – allow processing and image analysis equipment to be located away from harsh environments.

While GigE Vision was initially valued for its extended cabling, designers are now taking advantage of Ethernet’s inherent networking flexibility to build real-time switched video networks. GigE Vision brings a whole new dimension to imaging applications, allowing one camera to send video to multiple endpoints, multiple cameras to send video to one endpoint, or combinations of the two. Today GigE Vision is the most widely deployed video interface standard for industrial applications, and is gaining a strong foothold in defense and medical applications. Off-the-shelf external frame grabbers are available that convert existing imaging sources into GigE Vision-compliant devices, and embedded hardware eases the design of GigE connectivity for cameras, X-ray panels, and other imaging products. Since its introduction, the GigE Vision standard has been expanded to encompass 10 GigE and wireless Ethernet. With 10 GigE interfaces, multiple image sources can be transmitted simultaneously for 3D image generation. GigE Vision over an 802.11 wireless link enables untethered imaging in applications where cabling poses usability challenges, such as X-ray systems for immobile patients. More recently, GigE Vision over NBASE-T interface solutions are in development to address 2.5 Gb/s and 5 Gb/s application requirements.
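The one-to-many streaming described above relies on IP multicast, since GigE Vision's streaming protocol (GVSP) runs over UDP. The standard-library sketch below only shows a host joining a multicast group and reading raw datagrams; parsing actual GigE Vision traffic requires an SDK, and the group address and port here are invented:

```python
# Minimal sketch of the networking underneath one-camera-to-many-endpoints
# streaming: join an IP multicast group and receive raw UDP datagrams.
# Real GigE Vision (GVSP) payloads need an SDK to parse; values are made up.
import socket
import struct

GROUP, PORT = "239.192.0.1", 20202    # hypothetical multicast group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
# Ask the kernel to join the group on the default interface (ip_mreq struct)
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

packet, src = sock.recvfrom(9000)     # jumbo-frame-sized receive buffer
print(f"{len(packet)} bytes from {src}")
```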

GigE Vision on the Battlefield

Imaging is playing a growing role in defense applications, with networked cameras, sensors, and software providing detailed analysis to improve surveillance and safety. For ground vehicle local situational awareness (LSA) applications, GigE Vision helps designers deliver a higher-performance, lower-cost solution. In this application, real-time video from cameras and sensors is used to navigate the windowless vehicle and survey surroundings. Designers can retain expensive, specialized imaging equipment while upgrading to a more flexible Ethernet network by employing GigE Vision external frame grabbers.

Figure 2 In a local situational awareness military vetronics application, GigE Vision interfaces support size, weight, and power (SWaP) design objectives while allowing cameras, displays, and processing computers to be integrated seamlessly into a networked real-time vision system.



The external interfaces convert the images into a GigE Vision video stream that is transmitted directly to lower-cost, smaller form factor computing platforms. Video, control data, and power are transmitted over a single cable, lowering component costs, simplifying installation and maintenance, and reducing "cable clutter" in the vehicle. At display panels, external frame grabbers receive the GigE Vision video and output it in real time over an HDMI/DVI interface. With all devices connected to a common infrastructure and straightforward network switching, multiple video streams can be transmitted to mission computers and displays. Troops can decide "on the fly" which video streams they need to see, without changing cabling or software configurations, or use the on-board mission computer to combine images for use by others in the vehicle.

Embedded Processing Advantages of USB 3.0

While USB has been widely adopted in consumer and PC applications, the standard failed to deliver the bandwidth required for real-time imaging applications until the introduction of USB 3.0. Taking advantage of this raw speed and building on the concepts developed for GigE Vision, the machine vision industry standardized the transport of imaging and video data over USB 3.0 with the release of USB3 Vision in February 2013. With USB3 Vision, video and data are transmitted from cameras and sensors over a single cable directly to existing ports on a computer, laptop, tablet, or single-board system. The USB 3.0 bus delivers sustained throughput approaching 3 Gb/s, surpassing the performance of Camera Link Base configurations and rivalling the Medium configuration, but without requiring multiple cables or specialized PCIe frame grabbers to capture data. A fast-growing opportunity for USB 3.0 video interfaces is in service robots, where manufacturers are employing smaller form factor computing for processing and analysis. In this application, external frame grabbers convert images from existing cameras used for inspection and navigation into USB3 Vision video streams. Alternatively, camera manufacturers can integrate USB 3.0 video connectivity directly into their products with off-the-shelf embedded interface hardware. Video is transmitted over high-bandwidth, flexible, low-cost USB cables directly to ports on an integrated single-board computing platform. By eliminating PCIe frame grabbers within the robot, designers can reduce system complexity, component count, and costs. In addition, decreasing the weight and power consumption of the robot extends battery life, translating into longer operation between charges. Where a typical PC is designed with the flexibility to


support a range of functions for end-users, an embedded system is dedicated to one particular task, often with little or no user interface. In robotics, most tasks or processes that are automated and repeated, including image and video processing within the vision system, are good candidates to be handled by an embedded processor. The evolution of USB3 Vision and GigE Vision interfaces is helping support a growing shift from traditional PC computing architectures to embedded systems across a range of vision applications. This design approach allows processing intelligence to be located at different points in the network to enable faster decision making. In addition, power efficiencies help lower operating costs and reduce heat output, which prevents the premature failure of other electronic components and increases reliability. The end result is increased system design flexibility, an upward shift in intelligence at various points in the network, and performance and cost advantages.

What’s Next?

Figure 3 A fast-growing opportunity for USB3 Vision video interfaces is service robots, including medical telepresence systems, where the visual, auditory, and tactile technologies require instantaneous transfer of imaging data from a camera to a processor.

Machine vision initially liberated humans from repetitive, often dangerous manufacturing tasks. Today, image-guided surgical systems augment our capabilities with new levels of precision, accuracy, and intelligence in applications spanning the hospital operating room to the battlefield. It's in our living rooms, with sports broadcasters using 3D image reconstruction to put us in the middle of the action, and in our cars, as manufacturers integrate vision expertise perfected for industrial applications into advanced driver assistance systems. To help drive the continuing advancement of machine vision into new markets, manufacturers are under pressure to offer lower-cost solutions that deliver higher levels of performance. Choosing the right video interface is a key step in meeting these challenges.



2.0 COMPANY PROFILE

NVIDIA introduced the world's first Deep Learning Supercomputer capable of 170 teraflops

Figure 1 The "Five Miracles" announced by NVIDIA, which include the Pascal architecture, the new 16 nm FinFET process, CoWoS with HBM2, NVLink and the new AI algorithm, to make the supercomputer fast

At the recent GPU (Graphics Processing Unit) Technology Conference, also known as GTC, Jen-Hsun Huang, CEO of NVIDIA, in his iconic leather jacket, announced the "Five Miracles" (the Pascal architecture, 16 nm FinFET, CoWoS with HBM2, NVLink and the new AI algorithm) to an enthusiastic audience in Santa Clara, California (Figure 1). What is the significance, and why the enthusiasm? The new Pascal architecture is able to pack the equivalent of 250 servers into the DGX-1 (Figure 2), the world's first Deep Learning supercomputer, capable of 170 teraflops (a teraflop is a trillion floating point operations per second) with a node bandwidth of 768 GB/s. The new 16-nanometer FinFET fabrication technology almost doubles the transistor density of the previous generation, packing more than 15 billion transistors into a single chip (the quoted 150 billion includes the stacked HBM2 memory dies). Along with the NVIDIA NVLink™ high-speed bidirectional interconnect, the high-performance HBM2 memory design and the new AI algorithm, Pascal is able to deliver unprecedented performance. At peak FP32, the new Pascal architecture almost doubles the performance of the previous Maxwell architecture, while at peak FP64 it delivers an impressive 25 times the performance. It is designed from the ground up to be the perfect machine for deep learning (Figure 3).


NVIDIA outperforms Intel by a wide margin. Based on a benchmark of the combined performance of GPU and CPU, the DGX-1 is able to outperform Intel's dual Xeon many times over (Intel's solution delivers 3 teraflops and 76 GB/s). At $129,000, the DGX-1 is now a market price leader. To help the ecosystem move forward, NVIDIA offers a complete range of solutions: GPU accelerators (GTX 1080 for consumers and Tesla P100 for enterprises), modules (Jetson TX1) and supercomputers (DGX-1), depending on the application. Currently, Cisco and Dell have been offering high-end servers housing multiple NVIDIA GPUs. (I imagine they will offer solutions similar to the DGX-1.)

Figure 2 Supercomputer in a box: The new Pascal architecture is able to pack the equivalent of 250 servers into the DGX-1

NVIDIA Embedded solutions

In the past, machines were thought to be best at doing repetitive tasks. Not anymore: the new autonomous machines see, learn, and make decisions based on a set of parameters. These embedded applications need AI, deep-learning capability and computing power - a lot of computing power. To enable embedded visual computing, NVIDIA introduced the Jetson TX1 (Figure 4) last year and is now gearing up for volume production. This compact credit-card-sized module packs a one-teraflop 256-core Maxwell GPU, a 64-bit ARM A57 CPU, 4 GB of memory (25.6 GB/s) and 16 GB of storage into a single unit to save space in an OEM design. Additionally, it supports both Gigabit Ethernet and 802.11 wireless, yet consumes less than 10 watts. Compared with Intel's i7-6700K (Skylake), the Jetson TX1 has the advantage of 5 times the performance per watt. NVIDIA also provides an SDK, a full software library and development tools. The product is gaining traction among third-party suppliers and OEMs. For example, the Canada-based third-party supplier Connect Tech is offering a full line of NVIDIA Jetson TX1 carriers to help OEMs develop end products faster (Figure 5). At GTC, I saw quite a few demos based on the Jetson TX1. Here are two Jetson TX1-based applications.

CEO Profile Jen-Hsun Huang is the co-founder of NVIDIA. He is the chief executive officer, president and a board member. He has received the Dr. Morris Chang Exemplary Leadership Award from the Global Semiconductor Association for his exceptional contributions in the fabless semiconductor industry. Additional awards include the Daniel J. Epstein Engineering Management Award from the University of Southern California, the EB Lemon Distinguished Alumni Award and an honorary doctorate from Oregon State University. Additionally, Huang was recognized by Harvard Business Review as one of the world’s 100 best-performing CEOs over the lifetime of their tenure. He holds a BSEE degree from Oregon State University and an MSEE degree from Stanford University.

Figure 3 Tesla P100 compared to prior-generation Tesla products. It is able to pack over 15 billion transistors in a single piece of silicon, almost double the density of the prior offering.



Figure 5 Third-party supplier Connect Tech offers a full line of NVIDIA Jetson TX1 carriers to help OEMs develop end products faster. Shown here is the latest low-cost version.

Demo 1: The Jetson TX1-based ZED system provides depth maps in surveillance

Stereolabs offers a dual-camera product, ZED (Figure 6). To demo ZED, they configured a drone to include the Jetson TX1, ZED and their proprietary software. The drone flew around a man-made landscape to simulate a surveillance application. The Jetson TX1 was the main engine, processing in real time the two synchronized video streams from the left and right cameras to create a depth map. Additionally, their software supports any CUDA-enabled GPU. ZED isn't the first device to create real-time depth maps, but since it doesn't rely on projecting infrared light, it has the advantage of working outdoors while maintaining a range of up to 20 meters. While Stereolabs' own booth used it for 3D scanning, ZED could be found in other booths on top of autonomous robots and rovers, acting as the "eyes" for various sense-and-avoid solutions.

Figure 6 The ZED from Stereolabs is a dual camera housed in a single unit, able to capture a 3D map while the drone flies over the landscape to be surveyed, much like a pair of human eyes.

Demo 2: Remote-site monitoring with live video streaming

Percepto, based in Israel, offers a drone solution with autonomous capability for remote-site monitoring, and it can be managed remotely. Its core technology is the PerceptoCore™ module with proprietary application software. The module consists of the NVIDIA TK1 (an earlier version of the TX1) and FPGA designs. Here is how it works. Suppose you have a remote site requiring 24-hour monitoring. You can install this solution onsite. When a local alarm or sensor detects an intruder, it automatically alerts the drone, which flies around autonomously to collect, analyze and communicate the data to the control center at any time, day or night. Additionally, it streams video live, so security personnel can monitor the situation via a remote monitor. Usually, the unit will be in a standby, charged mode, ready to spring into action if needed. Additionally, Percepto provides cloud-based software for remote access and multiple-drone management. This is only one application idea; Percepto can provide consulting and design support to users who have a different drone application (Figure 7).

Transformation

Figure 4 This compact credit-card-sized module packs a one-teraflop 256-core Maxwell GPU, a 64-bit ARM A57 CPU, 4 GB of memory (25.6 GB/s) and 16 GB of storage into a single unit to save space in an OEM design


Founded in 1993 as a PC graphics chip company, NVIDIA has transformed itself into a GPU (graphics processing unit) supercomputing powerhouse. With visual computing as its main focus, NVIDIA targets four vertical market segments that need supercomputing power: Gaming, Professional Visualization, Datacenter and Auto. In 1999, NVIDIA invented the GPU. Today, with the combination of the computational power and parallel-processing capability of the GPU-based platform, NVIDIA has created accelerated computing power to solve very complex problems that were unsolvable before. What used to take weeks to solve can now be done in days. Some of these problems include the simulation of virus behavior in medical applications and weather forecasting. No wonder the scientific and AI (artificial intelligence) communities are so excited about NVIDIA's solutions. Its contribution to the medical field is also significant: the GE Revolution CT scanner can now produce a high-quality image with radiation dosage reduced by up to 82%.

Figure 7 The Percepto module includes a built-in Jetson TX1. With its proprietary software, the drone is capable of doing remote monitoring.

NVIDIA is expanding

NVIDIA is flying high these days. With new products, new visions and revenue reaching a new high of $5 billion, NVIDIA is expanding and is designing a new headquarters to be built at the corner of San Tomas Expressway and Walsh Avenue in Santa Clara, California. Debora Shoquist, executive vice president, is overseeing the $380 million project. The dazzling 500,000-square-foot multi-level building, with its unique triangular rooftop shapes, is a symbol of graphics and design. "When completed in late 2017, the building will accommodate 2,500 employees and provide two levels of underground parking. We'll have leveraged our own revolutionary technologies, like Iray, to craft a structure unlike any other in the world," according to Hector Marino, an NVIDIA spokesman (Figure 8). Deep learning and AI have come a long way. One study indicated that machine learning and vision outperformed humans: humans achieved 94.9% accuracy while the machine achieved 96.4%. Solutions using GPUs, such as those made by NVIDIA, have a great future. With its new product lineup and technology strength, NVIDIA is well positioned for the new wave of deep-learning and autonomous applications in the years to come.

Figure 8 The future NVIDIA headquarters will be a dazzling 500,000-square-foot multi-level building whose unique triangular rooftop shapes represent a symbol of graphics and design

Vital Statistics
Snapshot: Invented the GPU in 1999. Having begun by designing graphics processing units for the gaming market, NVIDIA is now a maker of the supercomputer in a box, focusing on Gaming, Visual Computing, Datacenter, Deep Learning, AI and Autonomous Cars. Product lines include GPU accelerators, modules and boxes.
Founders: Jen-Hsun Huang, Curtis Priem, Chris Malachowsky
CEO: Jen-Hsun Huang
Founded: 1993
Headquarters: Santa Clara, CA
Revenue: $5.01 billion in FY16
Employees: 9,200 worldwide
Patents: More than 7,000 (see more at http://www.nvidia.com/object/visual-computing.html)
Market Cap: $22.2 billion (source: Google); NASDAQ symbol NVDA
Stock Price: $42.19 (source: Google)
URL: http://www.nvidia.com



2.1 NEW SECURITY EMPOWERS MACHINE VISION FUNCTIONS

One-on-one with Jesse Clayton, Product Manager, Autonomous Machines at NVIDIA

1. From talking with a few NVIDIA customers at GTC, Jetson TX1 seems to be gaining traction. One of the main applications is embedded visual computing. For the benefit of our readers, can you elaborate on what embedded visual computing is?

Visual computing is the art and science of creating and understanding computer graphics and visual information. Because intense graphics and images are involved, supercomputing power is usually required.

2. What was embedded visual computing like before Jetson TX1?

Jetson is NVIDIA's platform for embedded visual computing and artificial intelligence. Prior to the release of Jetson, developers and researchers had few options when it came to developing advanced technology for autonomous machines. The options available for mobile autonomous machines before Jetson were time-consuming, expensive to develop on or too power-hungry for the performance required.

3. I understand you had an earlier generation before Jetson TX1. What improvements have you made?

Jetson TX1 adds a 2-3x performance improvement, a more comprehensive SDK for visual computing, and a module-based form factor. These changes make it easier for developers to bring more advanced capabilities to their customers in a shorter amount of time.

4. The credit-card-size Jetson TX1 module uses the Maxwell GPU and ARM A57. What prevents your competitors from coming along and doing the same thing?

Very simple. NVIDIA is the world leader in GPU technology. We invest billions of dollars annually in the development of new architectures, and because NVIDIA leverages GPU architectures across all product lines, all NVIDIA products, including Jetson, benefit from that multi-billion dollar investment. The same architecture that is in your top-of-the-line gaming system, in your world-class product design workflow, and in the Titan supercomputer at Oak Ridge National Laboratory is also in Jetson. Jetson is like a supercomputer that fits in the palm of your hand and consumes less than 10 W. I don't know anybody else who can do that.

5. During GTC, NVIDIA announced some impressive products like the Pascal architecture and Tesla, a supercomputer in a box. Are those mainly for server applications? For embedded applications, is Jetson TX1 the main product? What is the product life cycle you are committing to?

NVIDIA shares processor architectures across its entire business. Jetson TX1 is based on the Maxwell architecture, and the next generation of Jetson will be based on the recently launched Pascal architecture. NVIDIA is committed to the embedded space and will continue to invest in new products.

6. Finally, Jesse, what do you hope to achieve with the Jetson TX1 family in 5 years?

My goal is that we solve world-changing problems. There is no question in my mind that advances in robotics, medical diagnosis, smart homes and smart cities are going to change the way we live. I'd like to look back in 5 years and know that NVIDIA was pivotal in making those ideas a reality.

Jesse Clayton is the Product Manager for Autonomous Machines at NVIDIA. He has over 20 years of experience in technology spanning software, GPU computing, embedded systems, and aeronautics. His current focus is bringing advanced computer vision and deep learning solutions to autonomous machines and intelligent devices. He holds a B.S. in Electrical and Computer Engineering from the University of Colorado, Boulder.


BRING THE FUTURE OF DEEP LEARNING TO YOUR PROJECT. With unmatched performance at under 10W, NVIDIA® Jetson™ is the choice for deep learning in embedded systems. Bring deep learning and advanced computer vision to your project and take autonomy to the next level with the NVIDIA Jetson TX1 Developer Kit.

Ready to get started? Check out our special bundle pricing at www.nvidia.com/jetsonspecials Learn more at www.nvidia.com/embedded © 2016 NVIDIA Corporation. All rights reserved.


3.0 GOOD DATA MANAGEMENT REQUIRES GOOD CONTROLLERS AND SOFTWARE DESIGNS

Linux-based Baseboard Management Controllers Cost-effectively Enable Advanced Server Management Features

Baseboard management controllers (BMCs) are critical to achieving the manageability and serviceability requirements of scaled-out servers, where installations can include dozens, hundreds or even thousands of machines. Cost-effective remote management is crucial to such installations; Linux-based BMCs can deliver this capability and, built on highly integrated BMC silicon, can do so cost-effectively as well.

by Mark Overgaard, Pentair Electronics Protection

A BMC is defined in the Intelligent Platform Management Interface (IPMI) architecture as a separately powered controller that can manage the operation of the server CPU(s) on its baseboard, even when those CPUs are powered down. Management in this case includes monitoring key parameters such as temperatures and voltages, logging threshold excursions of such parameters for later investigation, providing access to inventory data, such as serial numbers or product names, powering on or off the main CPUs, alerting higher level management layers of issues on the baseboard and aiding in recovery when a server baseboard or operating system goes down. IPMI standardizes interfaces for these services so that higher level management can handle a heterogeneous mix of server hardware, which is likely to be present in a large Linux-based data center.

A supplementary specification, the Data Center Management Interface (DCMI), defines a variant of IPMI that is optimized for servers designed for large data centers. The AdvancedTCA architecture includes a rich IPMI-based hardware platform management layer that has successfully delivered these sorts of services for demanding telecom and other applications around the world for more than a decade. “Applying ATCA Hardware Platform Management to IoT Backend Systems,” in the April 2014 issue of RTC, shows how that management architecture can be applied to the powerful systems, most of them Linux-based, that implement the backend of the Internet of Things, whether those systems use the ATCA architecture or not.


Figure 1 Key communication interfaces of a BMC and the System CPU(s) it manages, along with the sideband interface that allows the BMC to share the Ethernet connection(s) that primarily serve the main CPU(s). Serial console interfaces of the main CPU(s) can be routed to the BMC, enabling remote access for them, as well.

Scaled-out server complexes that do not use the ATCA management architecture typically still equip each server with an IPMI-based BMC and use Ethernet for remote access to those BMCs.
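To make those monitoring and logging responsibilities concrete, the sketch below shows the general shape of a BMC firmware polling loop. It is an illustration only, not code from any real BMC stack: the sensor names, thresholds, and read functions are hypothetical stand-ins for real hardware access, and a production BMC would append binary records to the IPMI System Event Log (SEL) rather than printing text.

/* Minimal sketch of a BMC monitoring loop: poll sensors, compare
 * readings against thresholds, and log excursions SEL-style.
 * All names here are hypothetical, not a real BMC API. */
#include <stddef.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

struct sensor {
    const char *name;
    double lower_crit;          /* lower critical threshold */
    double upper_crit;          /* upper critical threshold */
    double (*read)(void);       /* hardware access, stubbed below */
};

static double read_cpu_temp(void) { return 48.5; }   /* stub */
static double read_12v_rail(void) { return 12.1; }   /* stub */

static void sel_log(const char *name, double value)
{
    /* A real BMC would append a binary SEL record; we just print. */
    printf("[%ld] SEL: %s excursion, value=%.2f\n",
           (long)time(NULL), name, value);
}

int main(void)
{
    struct sensor sensors[] = {
        { "CPU Temp",  5.0, 85.0, read_cpu_temp },
        { "+12V Rail", 11.4, 12.6, read_12v_rail },
    };

    for (;;) {
        for (size_t i = 0; i < sizeof sensors / sizeof sensors[0]; i++) {
            double v = sensors[i].read();
            if (v < sensors[i].lower_crit || v > sensors[i].upper_crit)
                sel_log(sensors[i].name, v);
        }
        sleep(1);               /* polling interval */
    }
}

Higher-level management would then retrieve such events over the network using standard IPMI commands, which is what makes a heterogeneous fleet manageable.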

How Can Remote BMC Access be Enabled?

Figure 1 shows a typical approach for integrating Ethernet access to the BMC with Ethernet access to the main CPU(s) of the server. It is usually highly preferable to share the Ethernet connection(s) needed by the main CPU(s) with IPMI-based remote management traffic, to avoid the cost and logistical challenges of maintaining a separate physical network for management. The most widely used sideband interface between the BMC and the network controller(s), or NC(s), is the Network Controller Sideband Interface (NC-SI), an open standard developed by the Distributed Management Task Force (DMTF). Most Ethernet NCs targeting server markets implement an NC-SI port and internal switching to allow IPMI traffic to share the NC with main CPU traffic.

As mentioned above, a key BMC benefit is management access when the main CPU(s) of a server are down, potentially allowing much quicker diagnosis of a failure.

Game Changing Performance for Data-Intensive/Latency-Sensitive Enterprise Applications

Accelerate application response times with the industry's fastest PCIe MX6300 SSDs from Middle Canyon and Mangstor. The innovative software-based host offload design utilizes a highly efficient 100-core processor located on the SSD:

• Delivers leading performance with low host CPU utilization
• Offloads flash management and application acceleration operations to the SSD, reducing system power while freeing host resources for application processing
• Includes high-performance algorithms and software that run on the flash controller
• Handles all data management at very high speed to and from the host CPU and SSDs

Performance: delivers industry-leading NVMe SSD IOPS and latency via a software-configurable flash controller
Manageability: provides a suite of management features specific to PCIe flash memory and the NVMe specification
Reliability: supports end-to-end data protection, optimizing the entire data path from network to flash in the event of data corruption and power loss
Interoperability: provides seamless integration with server operating systems using standard in-box drivers

your fast, flexible and responsive partner.
13469 Middle Canyon Rd., Carmel Valley, CA 93924 (408) 718-7854 • sales@middlecanyon.com • middlecanyon.com


Obviously, in the shared Ethernet architecture of Figure 1, such access is only possible if the NC(s) and the BMC are powered by separate management power. The extended management power domain in the figure highlights this aspect of the architecture. The hardware design for this subsystem needs to be done carefully to ensure that the extended management power domain can be powered while the connected System CPUs are not.

One popular way to implement a BMC is to find a specialist company that provides a reference hardware design and corresponding firmware, both adaptable to the needs of a particular server baseboard and customer requirements. Pentair is one source for such a solution, based in particular on specialized BMC silicon from ASPEED Technologies: the AST2500 or its subset variant, the AST2520. Any such solution should provide detailed coverage, at both the hardware and firmware levels, of topics like the extended management power domain.

Another key topic in many BMC applications is remote access to the serial ports on the System CPU(s). Especially in scaled-out configurations with massive server counts, but even in smaller configurations, it can be highly preferable to avoid connecting one or more physical serial cables to each server or each CPU within each server. IPMI defines a Serial over LAN (SoL) architecture that is useful in this context. System CPU serial ports can be connected to the BMC, and a remote network-connected client of the BMC can interact with those serial ports without needing any physical serial port connections. Such serial port access can be crucial, for instance, in diagnosing a malfunctioning server remotely. The remote client can see all serial traffic as the System CPU(s) boot, starting from the very first character, since the SoL session(s) with the BMC can be established before the System CPU(s) are even powered on.
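The sketch below captures the essence of SoL on the BMC side: shuttle bytes between a System CPU serial console and a network socket. It is deliberately simplified, assuming a fixed example serial device path and a single client on a raw TCP port; a real IPMI SoL implementation wraps this traffic in authenticated RMCP+ sessions, which remote clients then reach with standard tools such as ipmitool's "sol activate" command.

/* Sketch of the Serial over LAN idea: bridge a System CPU serial
 * console to a TCP socket so a remote client can interact with it.
 * Simplified: one client, no IPMI framing or authentication, which
 * real SoL (RMCP+) would add. */
#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <poll.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    /* Serial console from the managed System CPU (example path). */
    int uart = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
    if (uart < 0) { perror("open uart"); return 1; }

    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(2323);            /* example console port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    if (bind(srv, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(srv, 1) < 0) { perror("listen"); return 1; }

    int cli = accept(srv, NULL, NULL);      /* one remote console */
    if (cli < 0) { perror("accept"); return 1; }

    struct pollfd fds[2] = { { uart, POLLIN, 0 }, { cli, POLLIN, 0 } };
    char buf[512];
    for (;;) {
        if (poll(fds, 2, -1) < 0) break;
        if (fds[0].revents & POLLIN) {      /* serial -> network */
            ssize_t n = read(uart, buf, sizeof buf);
            if (n <= 0 || write(cli, buf, n) < 0) break;
        }
        if (fds[1].revents & POLLIN) {      /* network -> serial */
            ssize_t n = read(cli, buf, sizeof buf);
            if (n <= 0 || write(uart, buf, n) < 0) break;
        }
    }
    return 0;
}

Because a bridge like this can run while the managed CPUs are still held in reset, nothing is lost from the very first boot character.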

What Additional Advanced Features Can BMCs Deliver?

For some server architectures, key subsystems may rely on a richer human interface than simple serial ports. Remote access to these subsystems may require the ability to virtually attach a remote keyboard, video screen and mouse (KVM) to an arbitrary server in a large configuration. The remote system administrator in this case needs to be able to use his or her remote KVM facilities as if they were physically attached to that server, and then be able to transfer that virtual attachment instantly to some other server, perhaps in a completely different physical location. Figure 2 shows how an advanced BMC can address these needs by redirecting KVM-related connections over the network to a remote console. Media redirection, a related feature also covered in the figure, allows a remote installation image to function as if it were a drive physically attached to a server. The server can boot from the remote drive image as part of a diagnosis or recovery operation.

The key idea in the redirection facilities shown in Figure 2 is that the System CPU can interact with the redirected devices exactly as if they were physically attached, even though they are actually redirected to corresponding physical devices attached to the remote console. This capability is independent of the operating system running on the System CPU. Of course, the considerable compute power and good network performance of an AST2500-based BMC are necessary to make this practical. Furthermore, specialized video capture and compression hardware is critical to an effective implementation of video redirection. In this model, the System CPU would be configured to treat that hardware as if it were the system video card.

The powerful features of Linux running on the BMC (such as its rich I/O subsystem) are valuable in integrating these redirection facilities. This is especially true for media redirection, where Linux already includes support for the applicable USB and mass storage protocols. Furthermore, a Linux foundation potentially allows other applications, such as a web interface or other services, to co-exist with the main BMC application on the BMC CPU. Normal Linux inter-process protections help to avoid inter-application interference. Of course, the total resource requirements of the various co-resident applications must be planned carefully so there is always enough capacity for the critical applications.
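As one illustration of the leverage Linux provides here, the sketch below shows how BMC firmware might publish a disk image to the System CPU as a virtual USB drive through the kernel's configfs USB gadget interface and its mass_storage function. The gadget name, IDs, image path, and especially the UDC controller name are examples rather than values from any shipping BMC, and error handling is trimmed for brevity.

/* Sketch: expose a (possibly remote) disk image to the System CPU as
 * a virtual USB drive, using the Linux configfs mass_storage gadget.
 * Paths follow the kernel's documented gadget layout; the gadget
 * name, IDs, image path and UDC name below are examples only. */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

#define G "/sys/kernel/config/usb_gadget/bmc_media"

static void put(const char *attr, const char *val)
{
    char path[256];
    snprintf(path, sizeof path, G "/%s", attr);
    FILE *f = fopen(path, "w");
    if (f) { fputs(val, f); fclose(f); }
}

int main(void)
{
    mkdir(G, 0755);
    put("idVendor",  "0x1d6b");     /* example IDs */
    put("idProduct", "0x0104");
    mkdir(G "/strings/0x409", 0755);
    put("strings/0x409/product", "BMC Virtual Media");
    mkdir(G "/configs/c.1", 0755);
    mkdir(G "/functions/mass_storage.usb0", 0755);
    /* Backing store: e.g. an installation image fetched from the
     * remote console or served over the network. */
    put("functions/mass_storage.usb0/lun.0/file", "/tmp/install.img");
    symlink(G "/functions/mass_storage.usb0",
            G "/configs/c.1/mass_storage.usb0");
    /* Bind to the USB device controller wired to the System CPU;
     * the controller name is hypothetical and board-specific. */
    put("UDC", "example-udc.0");
    return 0;
}

Because the exported drive is just a file on the BMC side, swapping in a network-served image is trivial, which is all media redirection really asks for.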

Figure 2 Key communication interfaces supporting KVM and media redirection in an example BMC based on ASPEED’s AST2500. Two USB ports and a PCI Express (PCIe) port on a System CPU interact with the BMC as if the remote keyboard/mouse, installation drive and PCIe-accessed video were directly attached. In reality, all those devices are attached to a remote console that communicates with the BMC via Ethernet.



Figure 3 An example reference design for a Linux-based BMC, in this case using the AST2500 or (without the advanced redirection features) the AST2520. The need for additional chips beyond the AST device is minimized by the high integration level of the SoC.

Industry's First NVMe over Fabric Flash Array Dramatically Increases Application Performance

Middle Canyon and Mangstor deliver the industry's first flash storage array supporting NVMe over Ethernet or InfiniBand fabrics, packaged via an RDMA cluster scale-out architecture that delivers lower latency and higher IOPS performance than traditional SAN solutions. The NX Series provides the highest-performance storage tier for business analytics and HPC applications as well as a caching storage layer for Big Data applications. It also provides high concurrent read/write bandwidth for video storage and delivery. The NX Series flash arrays are based on Mangstor's MX6300 SSDs and its TITAN software stack. TITAN provides industry-leading performance and latency by tightly integrating NVMe SSDs with a high-performance, low-latency network and efficient use of x86 server capabilities. The array appears as local Direct Attached Storage (DAS) to any attached servers for seamless integration with existing applications and storage infrastructures, and has all of the management and serviceability benefits of centralized storage.

• Delivers up to 10x higher bandwidth and 10x lower latency versus iSCSI/FC flash arrays
• Accesses data at nearly identical latencies as accessing local PCIe-based SSDs
• User configurable into separate storage volumes and shareable across multiple hosts
• 2015 Best of Show winner at Flash Memory Summit

your fast, flexible and responsive partner.
13469 Middle Canyon Rd., Carmel Valley, CA 93924 (408) 718-7854 • sales@middlecanyon.com • middlecanyon.com



How Can These BMC Facilities be Delivered Cost-effectively?

One key to achieving cost-effectiveness is combining hardware support for key management features with traditional System on Chip (SoC) facilities in a single device. The AST2500 is one example of such BMC-optimized SoCs. Figure 3 shows a possible BMC reference design based on the AST2500. The design takes advantage of specialized hardware features, including the video controller and multiple USB ports for redirected KVM and media. Also included are a 16-input analog-to-digital converter (ADC) for monitoring electrical parameters and a 14-port I2C controller (with only a fraction of either of these resources used in the base reference design here). Another key to cost-effectiveness is using the power of Linux (running on an ARM11 processor at 600 MHz on these AST devices) and its rich open source ecosystem of development tools and device support as the foundation layer for management applications.
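The monitoring inputs benefit from the same leverage. Once the on-chip ADC is exposed through a kernel subsystem such as hwmon, reading an electrical parameter from application code reduces to reading a sysfs file. A minimal sketch, assuming a board-specific example path (hwmon voltage inputs report millivolts):

/* Sketch: read one ADC-monitored voltage via the Linux hwmon sysfs
 * interface. The path is an example and is board-specific. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/class/hwmon/hwmon0/in0_input";
    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return 1; }

    long millivolts = 0;
    if (fscanf(f, "%ld", &millivolts) == 1)
        printf("in0 = %.3f V\n", millivolts / 1000.0);
    fclose(f);
    return 0;
}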

High-Performance, High-Speed FPGA Computing
Acromag.com/FPGAs

Acromag's high-performance XMC & PMC FPGA modules feature a user-customizable Xilinx® FPGA. These modules allow you to develop and store your own instruction sets in the FPGA for a variety of adaptive computing applications. Select from several models with up to 410K logic cells optimized for logic, DSP, or PowerPC. The high-speed memory and fast bus interfaces rapidly move data. An FPGA engineering design kit and software utilities with examples simplify your programming and get you started quickly.

• 32M x 16-bit parallel flash memory for MicroBlaze® FPGA program code storage
• High-speed serial interface on rear P15 connector for PCIe Gen 1/2 (standard), Serial RapidIO, 10Gb Ethernet, Xilinx® Aurora
• High-speed interfaces on rear P16 connector for customer-installed soft cores
• DMA support provides data transfer between system memory and the on-board memory
• 34 SelectI/O or 17 LVDS pairs plus 2 global clock pairs direct to FPGA via rear P16 port
• Support for Xilinx® ChipScope™ Pro interface

Embedded Computing & I/O Solutions: VPX Carriers (Acromag.com/VPXcarriers) • I/O Modules (Acromag.com/EmbeddedIO) • VME SBCs (Acromag.com/Boards) • SFF Embedded Computers (Acromag.com/ARCX)

www.acromag.com | solutions@acromag.com | 877-295-7085

What’s the Best Way to Get Started on a BMC-equipped Server Baseboard?

The best approach is to partner with a specialist for a proven BMC reference design and firmware, to avoid the need to develop lots of technology from scratch. The relevant specifications that govern an interoperable BMC implementation encompass hundreds of pages. Look for a specialist company that is already intimately familiar with those specifications and includes a field-validated IPMI subsystem in its BMC offering.

One such possibility is the Schroff Pigeon Point BMR-AST-BMC reference design from Pentair. The IPMI subsystem is compliant with the most recent IPMI 2.0 specification and has been intensively field-validated over the last decade in tens of thousands of baseboards used in demanding telecommunications and other applications around the world. Key features from the DCMI specification are included as well. The Schroff reference design comes with a benchtop reference implementation that allows designers and software engineers to start immediate hands-on familiarization with IPMI-based management and the advanced features of this BMC solution. The solution also includes an easy-to-use configuration architecture that allows many server-specific adaptations to be implemented without extensive programming. However, full source code of the BMR management application is included as well. The source code may simply be used as a supplementary educational resource or, more aggressively, to do extensive customizations if needed.

A Linux-based BMC is a fine way to add cost-effective hardware management to a server baseboard, yielding a wide range of manageability and serviceability benefits. These benefits are particularly relevant to server baseboards targeting scaled-out data centers, which are, themselves, often based on Linux.


Embedded/IoT Solutions: Connecting the Intelligent World from Devices to the Cloud
Long Life Cycle • High-Efficiency • Compact Form Factor • High Performance • Global Services • IoT

• IoT Gateway Solutions: E100-8Q
• Compact Embedded Server Appliance: SYS-5028A-TN4
• Network, Security Appliances: SYS-5018A-FTN4 (Front I/O)
• High Performance / IPC Solution: SYS-6018R-TD (Rear I/O)
• Cold Storage: 4U Top-Loading 60-Bay Server and 90-Bay Dual Expander JBODs; SYS-5018A-AR12L (front and rear views), SC946ED (shown), SC846S

• Low Power Intel® Quark™, Intel® Core™ processor family, and High Performance Intel® Xeon® processors
• Standard Form Factor and High Performance Motherboards
• Optimized Short-Depth Industrial Rackmount Platforms
• Energy Efficient Titanium - Gold Level Power Supplies
• Fully Optimized SuperServers Ready to Deploy Solutions
• Remote Management by IPMI or Intel® AMT
• Worldwide Service with Extended Product Life Cycle Support
• Optimized for Embedded Applications

Learn more at www.supermicro.com/embedded
© Super Micro Computer, Inc. Specifications subject to change without notice. Intel, the Intel logo, Intel Core, Intel Quark, Xeon, and Xeon Inside are trademarks or registered trademarks of Intel Corporation in the U.S. and/or other countries. All other brands and names are the property of their respective owners.


3.1 GOOD DATA MANAGEMENT REQUIRES GOOD CONTROLLERS AND SOFTWARE DESIGNS

Figure 1 As embedded devices become increasingly networked, there is a growing risk that poor software quality could affect the quality of the final product as well as the security of customers’ data. HCC Embedded’s advanced process-driven file systems and communications solutions are designed to ensure the reliability and security of the IoT’s small data.

Using Security Protocols is not Enough to Protect the IoT's “Small Data”

Small data matters when it comes to the Internet of Things (IoT). Security is a major concern when it comes to the connected world. Even if IoT devices aren't considered mission- or safety-critical, the impact of their data being lost, stolen, or exposed to unauthorized sources is far from small. Embedded developers must do more than engage security protocols when developing IoT devices.

by David Brook, Director of Marketing, HCC Embedded

The Internet of Things (IoT) is dragging embedded developers into the network security debate, but with a twist. While IT is focused on cloud computing and big data, embedded engineers need to be concerned with small data: the data that is stored locally and communicated over the Internet by the billions of “things” that are now interconnected. Even if these IoT devices aren't considered mission- or safety-critical, the impact of their data being lost, stolen, or exposed to unauthorized sources is far from small. It could be devastating for the end user, could damage the reputation of the manufacturer, or could even, in the case of network security breaches, cause loss of confidence in an entire industry.


It's only a matter of time before customers or consumers take an IoT company to court to ask what steps it took to assess the risk of data theft, loss, or exposure. So where does this risk leave embedded software developers? While technologies like TLS, IPSec, and encryption are a logical first line of defense, the success of these algorithms requires a fundamentally different approach to software development.


TLS, IPSec, and Encryption Aren’t the Whole Answer

Security industry discussions tend to focus primarily on the integrity of encryption algorithms and protocols. Although these algorithms have evolved over time, there is little evidence that major security breaches can be attributed to breaking the algorithms themselves. In fact, most high-profile security breaches have come from three main sources: insiders divulging secrets, poor system management, and badly or inappropriately written software. The first two sources can only be dealt with by the organizations responsible for the security of information and, clearly, where humans are involved there are no easy solutions. But the third source, software quality and the application of formal development processes, is often neglected by security verification suites, which check only that the algorithm is implemented correctly.

As embedded devices become increasingly networked, there is a growing risk that poor software quality could affect the quality of the final product as well as the security of customers' data. For instance, a recent spate of high-profile network-security breaches of devices using OpenSSL software has highlighted serious risks to which IoT device manufacturers may be exposing themselves and their customers. Many of the defects discovered occurred as a consequence of a lack of rigor in the software development process. The Heartbleed Bug, which resulted from a defect in the OpenSSL cryptography library, is a case in point. The information publicly available states that the software did not check the scope of a protocol variable and then processed it blindly. The standard V development model (seen in Figure 2) includes unit testing and boundary case analysis/testing that would have instantly alerted developers to the issue, and the detection would have been reinforced by other requirements of the lifecycle process.
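The defect class is easy to see in miniature. The toy echo handler below is not OpenSSL's code, only an illustration of the reported pattern: a length field taken from the protocol is trusted without being checked against what was actually received. A boundary-case unit test that claims more payload than it sends, exactly what the V model's verification stages demand, exposes the broken version immediately.

/* Illustration of the Heartbleed defect class (not OpenSSL code):
 * an attacker-controlled length field is used without validation. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct request {
    uint16_t claimed_len;       /* length field from the protocol */
    uint8_t  payload[64];
    size_t   actual_len;        /* bytes really received */
};

/* Broken: trusts claimed_len, so it can read far past the payload. */
size_t echo_broken(const struct request *r, uint8_t *out)
{
    memcpy(out, r->payload, r->claimed_len);    /* may leak memory */
    return r->claimed_len;
}

/* Fixed: validate the protocol variable before using it. */
size_t echo_fixed(const struct request *r, uint8_t *out)
{
    if (r->claimed_len > r->actual_len)
        return 0;                               /* reject the request */
    memcpy(out, r->payload, r->claimed_len);
    return r->claimed_len;
}

int main(void)
{
    /* Boundary-case test: claimed length exceeds received length. */
    struct request r = { .claimed_len = 16384, .actual_len = 5 };
    memcpy(r.payload, "hello", 5);
    static uint8_t out[65536];
    printf("fixed handler returned %zu bytes\n", echo_fixed(&r, out));
    return 0;
}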

Of course, this risk isn't limited to open source software, and many organizations use open source software in applications that support millions of users' valuable data. Indeed, open source providers are transparent about the processes used to develop the software. The responsibility for security and quality, however, lies with the organization doing the developing. It must ask whether the software it proposes to use has been developed using an appropriate process, regardless of who developed it. The V model process would have picked up these kinds of issues even in professionally developed solutions. For example, a well-designed static analysis tool would have detected Apple's recent issue with its TLS software. Ultimately, while a variety of industry groups have been established to ensure that algorithms and protocols are robust, encryption is not a security panacea. (See Figure 2)
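Apple's defect is just as compact in reduced form. The sketch below mimics the widely published pattern rather than reproducing the original source: a duplicated goto executes unconditionally, turning every later check into dead code, which is precisely the unreachable-code finding a well-designed static analysis tool reports.

/* Reduced illustration of the "goto fail" defect class: a duplicated
 * goto skips the remaining verification steps (stubs used here). */
#include <stdio.h>

static int check_a(void) { return 0; }                /* 0 = success */
static int check_b(void) { return 0; }
static int final_signature_check(void) { return -1; } /* would fail */

static int verify(void)
{
    int err;
    if ((err = check_a()) != 0)
        goto fail;
        goto fail;              /* duplicated line: always executed */
    if ((err = check_b()) != 0) /* unreachable from here on */
        goto fail;
    err = final_signature_check();
fail:
    return err;                 /* returns 0: "verified" without the
                                 * final check ever running */
}

int main(void)
{
    printf("verify() = %d (0 means trusted!)\n", verify());
    return 0;
}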

“The Heartbleed Bug, which resulted from a defect in the OpenSSL cryptography library”

What Drives Quality Standards?

Software quality is generally implemented according to the needs of vertical industries such as medical, automotive, aerospace, industrial, and transportation that have standards for developing software in order to meet requirements for product safety and reliability. These industries use software processes that are defined by IEC 61508 and other similar standards that are based on the V development model.

Figure 2 The basic concepts of the standard V development model show that everything must be defined, and after implementation, all the tests have to be verified against the requirements. Research data shows that not only does this reduce defects significantly, but in many cases, it can reduce the cost of software management over its life cycle.



Research shows that not only does this approach reduce defects significantly, but in many cases it can also reduce the cost of software management over its lifecycle. While there are certainly costs involved in using this rigorous software development approach, the financial impact of fixing software problems after a breach is staggering, not to mention the impact these breaches have on a company's reputation.

Even though specific industries have control over software processes with a variety of proven standards, there is no particular software development standard that applies across the many vertical markets that make up the IoT. But in the absence of any body to set quality standards for network security software, there is no reason why standards, or relevant parts of them, can't be adopted as appropriate. For example, the standards used in some parts of the medical industry originated in the standards created for industrial control systems. So what standards should be expected of software that protects our personal data? Full safety processes, like those used in the aerospace industry, are probably not appropriate, although many aspects are relevant, including those in Table 1. The set of measures used would vary to meet different needs, but the general principles of creating high-quality software are similar. See Table 1.

Method: Requirements Specifications
Relevance: Ensures the product does what the customer expects

Method: Coding Standard
Relevance: Ensures code is written to best industry practices

Method: Static Code Analysis
Relevance: Applies metrics and tests to ensure code is of a quality that will foster long-term stability

Method: Dynamic Code Analysis
Relevance: Ensures all code is tested and executes correctly, and excludes all redundancy

Table 1 Minimum Suggested Methods to Ensure Software Quality and Security

Security problems such as Heartbleed, Apple's “goto fail,” and the GnuTLS flaw were caused by defects in software, not necessarily in the protocols or their design. In response, HCC offers a TLS/SSL suite designed for use in deeply embedded systems that was developed using a complete V development model with a high degree of design verification. This includes a full requirements specification mapped directly to the protocol specifications, from which a full UML design is derived, with full static and dynamic analysis together with a complete test suite to verify operation in place. The software was developed using an appropriate coding standard, MISRA C:2004, and is supplied with analysis reports and a comprehensive test suite. For a critical area such as security, we believe this should be a minimum objective.

Address the Big Picture of System Design

Creating high-quality software is clearly the first step in developing secure systems, but software can't be treated in isolation. The whole system design must be considered.


System Design Issue: Software Authentication
Mitigation: Ensures the hardware only operates with authenticated software and ensures that new releases of software will only work with authentic hardware

System Design Issue: Software Integrity
Mitigation: Ensures the software has not been modified by any external party

System Design Issue: Software Secrecy
Mitigation: Ensures the software cannot be read by a third party (this makes constructing attacks more difficult)

System Design Issue: System Complexity
Mitigation: Minimizes the risk of back doors and unforeseen consequences by having the security component do security, no more or less than it needs to, and minimizes the effort required to develop secure software to the required level

Table 2 Guiding Principles and Rationale for System Security

Even if it were possible to create a perfect TLS implementation, a defect located elsewhere in the target system, such as in the TCP/IP stack, could still expose memory. Certainly, it is less likely that this kind of fault would yield sensitive data, but such a system cannot be considered completely secure. At least one recent major failure occurred when a point of sale (POS) computer was reverse-engineered and used to access a central database. Regardless of the quality of the software deployed, this type of attack can only be protected against by a well-thought-out system design; in this case, the quality of the algorithm was not particularly relevant to the security breach experienced. Addressing this problem is further complicated by the variety of skill sets required. For example, a security assessment may have to be carried out on equipment that is complex in its own right and is not the natural domain of a security expert. Some of the guiding principles and rationale for system security are summarized in Table 2.

Secure Foundation Reduces the Possibility of a Breach

It is probably impossible to make a completely secure system that is usable, but having a strong foundation to a security plan reduces the possibility of a security breach. If the industry continues to stumble from one crisis to another without addressing quality and security, it undermines the trust individuals have in the system, which could have far-reaching consequences. Developers of IoT applications must adjust their approach to quality as soon as possible to avert a confidence crisis. The implementation of the software itself cannot introduce weaknesses, and to achieve that, developers must apply known, tried, and tested methods to construct high-quality software that reduces risks to corporations, their brand, and their clients.


The New Genie™ Nano. Better in every way that matters. Learn more about its TurboDrive™ for GigE, Trigger-to-Image Reliability, its uncommon build quality… and its surprisingly low price.*

*Taxes & shipping not included.

» GET MORE GENIE NANO DETAILS AND DOWNLOADS: www.teledynedalsa.com/genie-nano


4.0 THE NEW USB 3.1 WILL CHANGE THE WORLD AGAIN

Implementing USB Type-C Datapath Support for Embedded Systems

Understanding the pros and cons of using an external datapath switch or two USB ports for SoCs that don't natively support USB Type-C

by Morten Christiansen, Synopsys

The advantages of USB Type-C for embedded systems are not “hype-C.” USB Type-C connections are much more user friendly: there is no difference between the host and device side connector; one common cable works for all products; the connector can be flipped and inserted either way; and the receptacle is small and robust, making it ideal for front-panel, space-constrained embedded systems. The addition of DisplayPort Alternate Mode and HBR3 bit rates to the specification supports 4K/60Hz monitors simultaneously with USB 3.0 or USB 3.1 on the same connector.

Designers of embedded systems with USB Type-C must add support for two SuperSpeed datapaths, matching one or the other orientation of the USB Type-C connector. The USB Type-C connector is symmetrical and duplicates most of the signals to support flippability, as shown in Figure 1. This duplication requires datapath multiplexers for SuperSpeed USB products and datapath crossbar switches for Alternate Mode products. Designers have two implementation choices for the USB Type-C datapath if the system-on-chip (SoC) doesn't natively support USB Type-C: using an external datapath switch or using two USB ports.

External datapath switch

External datapath switches (or multiplexers, external to the SoC or USB chipset; see Figure 2) are commonly used in commercial products supporting USB Type-C. Existing high-frequency analog switches originally designed for PCIe, Ethernet, SATA, DisplayPort and other standards have been repurposed for USB Type-C. The main advantage of the external switch solution is faster time-to-market. The disadvantages are cost, PCB area and (depending on the implementation) reduced signal quality.

Signal loss in the datapath switch counts against the channel loss budget from the USB port to the USB Type-C connector. The channel loss budget for USB 3.0 Type-C is 6.5dB, minus the switch loss. The switch loss (including package losses) is typically 1.5dB. The remaining 5dB channel loss budget reduces the maximum length of PCB routing from USB port to connector to 6 to 7 inches, depending on the quality of the PCB and its layout. The channel budget for USB 3.1 (SuperSpeed USB 10 Gbps) is 8.5dB from die to connector for all connector types. The typical PCB routing distance is 4 inches, but adding an external switch can reduce the length of the route to 2 inches or less.
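The budget arithmetic is simple enough to sanity-check in a few lines. In the sketch below, the 6.5dB budget and 1.5dB switch loss come from the figures above, while the per-inch PCB loss is an assumed round number for ordinary board material at USB 3.0 signaling rates, chosen to reproduce the 6 to 7 inch guidance; a real design would substitute measured stack-up values.

/* Back-of-envelope check of the USB 3.0 Type-C channel budget above.
 * The dB-per-inch figure is an assumption, not a specified value. */
#include <stdio.h>

int main(void)
{
    double budget_db   = 6.5;   /* USB 3.0 Type-C channel budget  */
    double switch_db   = 1.5;   /* typical switch + package loss  */
    double db_per_inch = 0.75;  /* assumed PCB trace loss         */

    double remaining = budget_db - switch_db;
    printf("remaining budget: %.1f dB -> max trace ~%.1f inches\n",
           remaining, remaining / db_per_inch);   /* ~6.7 inches */
    return 0;
}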

Figure 1 USB Type-C pinout and rotational symmetry: The USB Type-C connector is symmetrical and duplicates most of the signals to support flippability.



Figure 2 Logical model for SuperSpeed datapath routing across USB Type-C-based ports. External datapath switches, or multiplexers external to the SoC or USB chipset, are commonly used in commercial products supporting USB Type-C.

It is important to remember that USB 3.0 with the standard-A connector was allowed a 10dB channel loss budget; direct conversion of an existing USB 3.0 standard-A embedded system design to USB Type-C might not be possible. External USB Type-C specific switches with built-in analog re-drivers can compensate for some switch and PCB routing loss. This is appropriate for USB 3.0; however, the specified re-driver for 10 Gbps operation is a complete re-timer (Figure 3). This re-timer consists of two complete PHYs plus some digital circuitry and will increase the cost and power consumption of SuperSpeed USB 10 Gbps solutions that require a compliant re-driver.

Two USB ports solution

Using two USB 3.0 or USB 3.1 ports is an alternate solution to the USB Type-C switch.

In this case, one port is active, depending on the USB Type-C connector orientation, while the other port is in a low power state. Since there is no datapath switch, there is no loss of signal quality. Multiple commercial products with multiport USB host capability take advantage of this solution; two existing ports for Standard-A can be used to create one USB Type-C port. Figure 4 shows a common 4-port USB 3.0 host controller chip used ‘as is’ in a 2-port USB Type-C host controller plug-in board. Two SuperSpeed USB RX and TX pairs are used for one USB Type-C connector. To preserve signal integrity, the SuperSpeed signals are routed directly from the controller to the USB Type-C connector. The routing is clearly visible on the visible side of the PCB. Two USB 2.0 signal pairs are also routed from the controller to the USB Type-C connector.

Figure 3 Re-timer architecture for SuperSpeed USB 10Gbps operation: Supporting USB 3.1 10 Gbps operation requires a re-timer consisting of two complete PHYs plus some digital circuitry.



The lower USB 2.0 data rate allows the PCB routing to be continued on the opposite side of the PCB, allowing optimal routing of the SuperSpeed USB PCB traces.

The USB Type-C specification requires that Vbus be enabled only when a device is attached. This prevents the 5V power supplies of two USB hosts from being shorted together, because the USB Type-C connector allows two hosts to be physically connected even if no operation is possible. In Figure 4, a load switch and large capacitor can be seen next to the USB Type-C connector. The host has one pull-up resistor Rp on each Configuration Channel (CC) pin. When a device is connected, the device pull-down resistor Rd reduces the voltage on one of the CC pins. This causes the load switch to enable Vbus. When a host is connected, the load switch is not enabled.

Normally for USB Type-C, CC pin detection is used to determine orientation. For the design in Figure 4, no orientation detection is required: both ports are active, but only one port will detect the device. The other port stays active but unused.
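For designs that do need orientation detection, the CC mechanism comes down to comparing two analog levels. The sketch below is schematic rather than production firmware: the ADC read is a stub and the single threshold is an example value, where a real implementation would use the Rp/Rd voltage windows defined in the USB Type-C specification.

/* Sketch of host-side CC handling: Rp pulls each CC pin up; an
 * attached device's Rd pulls one CC down, revealing both attachment
 * and orientation. ADC values and threshold here are examples. */
#include <stdbool.h>
#include <stdio.h>

static double adc_read_cc(int pin)      /* stub: volts on CC1/CC2 */
{
    return pin == 1 ? 0.9 : 1.7;        /* pretend Rd sits on CC1 */
}

int main(void)
{
    const double rd_threshold = 1.6;    /* example threshold */
    bool cc1 = adc_read_cc(1) < rd_threshold;
    bool cc2 = adc_read_cc(2) < rd_threshold;

    if (cc1 || cc2)
        printf("device attached, %s orientation: enable Vbus, "
               "route SuperSpeed lanes for CC%d\n",
               cc1 ? "unflipped" : "flipped", cc1 ? 1 : 2);
    else
        printf("nothing attached (or another host): keep Vbus off\n");
    return 0;
}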

Figure 4 Multi-port USB Type-C Host Controller PCB: Normally for USB Type-C, CC pin detection is used to determine orientation. For this design, no orientation detection is required. Both ports are active, but only one port will detect the device.


Single-port SuperSpeed USB hosts can use a USB 3.0 hub chip to create two internal SuperSpeed USB ports to connect, for instance, a USB-to-Ethernet controller and a memory card reader. The remaining two ports can implement an external USB Type-C connection, as described above.

Summary

Until SoCs with native USB Type-C support become available, embedded system designers can adapt their existing designs for USB Type-C using an external datapath switch or two USB ports. For most designs, the required Type-C Configuration Channel circuitry can easily be implemented with discrete components, and Type-C Port Manager (TCPM) hardware or Type-C software is normally not required. When sourcing or specifying the SoC for next-generation embedded systems, keep in mind that an optimized USB Type-C PHY or USB/DisplayPort Alternate Mode PHY IP, such as those available from Synopsys, eliminates the need for expensive external switches, or for multiple ports, add-on USB host controllers or hubs, to solve the SuperSpeed dual datapath challenge.




ADVERTISER INDEX

GET CONNECTED WITH INTELLIGENT SYSTEMS SOURCE AND PURCHASABLE SOLUTIONS NOW
Intelligent Systems Source is a new resource that gives you the power to compare, review and even purchase embedded computing products intelligently. To help you research SBCs, SOMs, COMs, systems, or I/O boards, the Intelligent Systems Source website provides products, articles, and whitepapers from industry-leading manufacturers, and it's even connected to the top 5 distributors. Go to Intelligent Systems Source now so you can start to locate, compare, and purchase the correct product for your needs.

intelligentsystemssource.com

Company / Page / Website
Acromag / 28 / www.acromag.com
congatec / 8 / www.congatec.us
Green Hills Software / 2 / www.ghs.com
Middle Canyon / 25, 27 / www.middlecanyon.com
Novasom Industries / 4 / www.novasomindustries.com
NVIDIA / 23 / www.nvidia.com
One Stop Systems / 16, 37 / www.onestopsystems.com
Pentek / 40 / www.pentek.com
Sunix / 17 / www.sunix.com
Supermicro / 29 / www.supermicro.com
Teledyne Dalsa / 33 / www.teledynedalsa.com
TQ / 39 / www.embeddedmodules.net
WinSystems / 9 / www.winsystems.com

RTC (Issn#1092-1524) magazine is published monthly at 905 Calle Amanecer, Ste. 150, San Clemente, CA 92673. Periodical postage paid at San Clemente and at additional mailing offices. POSTMASTER: Send address changes to The RTC Group, 905 Calle Amanecer, Ste. 150, San Clemente, CA 92673.



Experience Real Design Freedom

Only TQ allows you to choose between ARM®, Intel®, NXP and TI • Off-the-shelf modules from Intel, NXP and TI • Custom designs and manufacturing • Rigorous testing • Built for rugged environments: -40°C... +85°C • Long-term availability • Smallest form factors in the industry • All processor functions available

For more information call 508 209 0294 www.embeddedmodules.net


Capture. Record. Real-Time. Every Time.
Intelligently record wideband signals continuously…for hours

Capturing critical SIGINT, radar and communications signals requires hardware highly optimized for precision and performance. Our COTS Talon® recording systems deliver the industry's highest levels of performance, even in the harshest environments. You'll get extended operation, high dynamic range and exceptional recording speed every time!

• High-speed, real-time recording: sustained data capture rates to 8 GB/sec
• Extended capture periods: record real-time for hours or days with storage up to 100+ TB
• Exceptional signal quality: maintain the highest dynamic range for critical signals
• Flexible I/O: capture both analog and digital signals
• Operational in any environment: lab, rugged, flight-certified, portable and SFF systems designed for SWaP
• Out-of-the-box operation: SystemFlow® GUI, signal analyzer and API provide simple instrument interfaces
• Intelligent recording: Sentinel™ Intelligent Scan and Capture software automatically detects and records signals of interest

Eight-SSD QuickPac™ canister, removable in seconds!

Download the FREE High-Speed Recording Systems Handbook at: www.pentek.com/go/cotstalon or call 201-818-5900 for additional information.

Pentek, Inc., One Park Way, Upper Saddle River, NJ 07458 Phone: 201-818-5900 • Fax: 201-818-5904 • email: info@pentek.com • www.pentek.com Worldwide Distribution & Support. Copyright © 2016 Pentek, Inc. Pentek, Talon, SystemFlow, Sentinel and QuickPac are trademarks of Pentek, Inc. Other trademarks are properties of their respective owners.
