Industrial Artificial Intelligence with Resource Guide 2019

Page 1

EMBEDDED-COMPUTING.COM

EXPLORING EMBEDDED MACHINE LEARNING PG 8

THE PERPLEXITIES OF PREDICTIVE MAINTENANCE PG 22

2019 RESOURCE GUIDE PG 38

Development Kit Selector

http://embedded-computing.com/designs/iot_dev_kits/

SMARTER IoT ENDPOINTS ENABLED BY AI AT THE FARTHEST EDGE PG 14

APPLYING MACHINE LEARNING ON MOBILE DEVICES PG 32

ELECTRONIC SERVICE REQUESTED

AVNET INTEGRATED

PG 45

SimpleFlex – the new standard in embedded industrial computing has arrived

OpenSystems Media

PRST STD U.S. POSTAGE

1505 N. HAYDEN RD. #105, SCOTTSDALE, AZ 85257



Nürnberg, Germany

February 25 – 27, 2020

DISCOVER INNOVATIONS Over 1,000 companies and more than 30,000 visitors from 84 countries – this is where the embedded community comes together. Don’t miss out! Get your free ticket today!

Your e-code for free admission: 2ew20P

embedded-world.de/voucher @embedded_world

#ew20 #futurestartshere

Exhibition organizer: NürnbergMesse GmbH – T +49 9 11 86 06-49 12 – visitorservice@nuernbergmesse.de

Conference organizer: WEKA FACHMEDIEN GmbH (Fachmedium der Automatisierungstechnik – the trade medium for automation technology) – T +49 89 2 55 56-13 49 – info@embedded-world.eu


Take AI to the Edge with FORCE
RUGGED. SCALABLE. SECURE.

AI applications such as facial recognition, video analytics and geospatial modeling demand reliable, high-performance computing in the field – where traditional computers fail. Crystal Group FORCE™ rugged 3U server is designed to fuse massive amounts of data while delivering secure, real-time, continuous performance in the most extreme and unpredictable operating conditions. Along with impressive computing power, the system supports deep learning applications (CNN) with maximum efficiency and speed using Intel® OpenVINO™, and eliminates cloud-based throughput variations.

FORCE™ features:
• Rugged, lightweight chassis
• 2 Intel® Xeon® scalable processors
• Integrated 16-port layer 2+ switch
• NVMe storage
• FPGA graphics accelerator
• Chassis intrusion switch

In addition, multiple module options for cooling, audio, anti-tamper, networking and memory enable fast, flexible customization. Reliability matters. Crystal Group FORCE™ delivers.

SERVERS | DISPLAYS | STORAGE | NETWORKING | EMBEDDED | CARBON FIBER sales@crystalrugged.com | 800.378.1636 | crystalrugged.com


2019 | Volume 2 | Number 1

CONTENTS

FEATURES
10  Artificial intelligence and machine learning in a startup ecosystem
    By Jain Swanubhooti, Grand View Research
14  Smarter IoT endpoints enabled by AI at the farthest edge
    By Chris Shore, Arm
20  MLPerf: The new industry benchmark for AI inferencing at the edge
    By Brandon Lewis, Editor-in-Chief
22  The perplexities of predictive maintenance
    By Seth Deland, MathWorks
26  Developing high-performance DNN object detection/recognition applications for FPGA-based edge devices
    By Farhad Fallah, Aldec
30  Demystifying AI and machine learning for digital health applications
    By Arvind Ananthan, MathWorks
32  Applying machine learning on mobile devices
    By Igor Markov, Auriga Inc.
34  Low battery self-discharge: The key to long-life remote wireless sensors
    By Sol Jacobs, Tadiran Batteries
38  2019 RESOURCE GUIDE

COVER
Artificial intelligence and machine learning continue their foray into the realm of industry: Engineers faced with untold amounts of data are opting for smarter tools to handle tasks both during production and for use at the edge. The 2019 Industrial AI & Machine Learning Resource Guide also features hardware and software products of interest for the embedded design engineer.

WEB EXTRAS
Ą How AI & ML Are Being Used to Relieve Traffic Congestion
  By Raphaël Gindrat, Bestmile – https://bit.ly/2Q8ftzS
Ą QuickLogic, SensiML Deliver Edge AI Without a Data Scientist
  By Brandon Lewis, Embedded Computing Design – https://bit.ly/2DMdPzt

Published by:

2019 OpenSystems Media® © 2019 Embedded Computing Design © 2019 Industrial AI and Machine Learning All registered brands and trademarks within Embedded Computing Design and Industrial AI and Machine Learning magazines are the property of their respective owners.

COLUMNS
7   True AI Edge Compute Arrives in Highly Integrated Processors, IP
    By Brandon Lewis, Editor-in-Chief
8   Exploring Embedded Machine Learning
    By Curt Schwaderer, Technology Editor

To unsubscribe, email your name, address, and subscription number as it appears on the label to: subscriptions@opensysmedia.com www.embedded-computing.com/ai-machine-learning


ROBUST IIOT SOLUTIONS

Embed Success in Every Product
WINSYSTEMS' rugged, highly reliable embedded computer systems are designed to acquire and facilitate the flow of essential data at the heart of your application so you can design smarter solutions. We understand the risks and challenges of bringing new products to market, which is why technology decision makers choose WINSYSTEMS to help them select the optimal embedded computing solutions to enable their products. As a result, they have more time to focus on product feature design with lower overall costs and faster time to market. Partner with WINSYSTEMS to embed success in every product and secure your reputation as a technology leader.

817-274-7553 | www.winsystems.com 715 Stadium Drive, Arlington, Texas 76011 ASK ABOUT OUR PRODUCT EVALUATION!

Single Board Computers | COM Express Solutions | Power Supplies | I/O Modules | Panel PCs

EBC-C413 – EBX-compatible SBC with Intel® Atom™ E3800 Series processor
EPX-C414 – EPIC-compatible SBC with Intel® Atom™ E3800 Series processor
PX1-C415 – PC/104 form factor SBC with PCIe/104™ OneBank™ expansion and latest-generation Intel® Atom™ E3900 Series processor


AD LIST
PAGE  ADVERTISER
16    ACCES I/O Products, Inc. – PCI Express mini card/mPCIe embedded I/O solutions
1     Avnet Integrated – SimpleFlex – the new standard in embedded industrial computing has arrived
19    Critical Link – MitySOM-A10S: Arria 10 System on Module and development kits
3     Crystal Group – Take AI to the edge with FORCE
1     Digi-Key – Development Kit Selector
2     embedded world Exhibition & Conference – … it's a smarter world
11    Lauterbach – Trace32 debugger for RH850 from the automotive specialists
31    Lauterbach – Trace32: Trace-based code coverage
17    PEAK System – You CAN get it
9     Sintrones – Intelligent transportation systems
48    Tadiran Batteries – You probably already use Tadiran batteries, but just don't know it!
13    Vector – VME/VXS/cPCI chassis, backplanes & accessories
5     WinSystems – Robust IIoT solutions

SOCIAL

EMBEDDED COMPUTING BRAND DIRECTOR Rich Nass rich.nass@opensysmedia.com
EDITOR-IN-CHIEF Brandon Lewis brandon.lewis@opensysmedia.com
ASSOCIATE TECHNOLOGY EDITOR Laura Dolan laura.dolan@opensysmedia.com
ASSISTANT MANAGING EDITOR Lisa Daigle lisa.daigle@opensysmedia.com
SENIOR TECHNOLOGY EDITOR Alix Paultre alix.paultre@opensysmedia.com
TECHNOLOGY EDITOR Curt Schwaderer curt.schwaderer@opensysmedia.com
DIRECTOR OF E-CAST LEAD GENERATION AND AUDIENCE ENGAGEMENT Joy Gilmore joy.gilmore@opensysmedia.com
ONLINE EVENTS SPECIALIST Sam Vukobratovich sam.vukobratovich@opensysmedia.com
CREATIVE DIRECTOR Stephanie Sweet stephanie.sweet@opensysmedia.com
SENIOR WEB DEVELOPER Aaron Ganschow aaron.ganschow@opensysmedia.com
WEB DEVELOPER Paul Nelson paul.nelson@opensysmedia.com
CONTRIBUTING DESIGNER Joann Toth joann.toth@opensysmedia.com
EMAIL MARKETING SPECIALIST Drew Kaufman drew.kaufman@opensysmedia.com

SALES/MARKETING
SALES MANAGER Tom Varcie tom.varcie@opensysmedia.com (586) 415-6500
MARKETING MANAGER Eric Henry eric.henry@opensysmedia.com (541) 760-5361
STRATEGIC ACCOUNT MANAGER Rebecca Barker rebecca.barker@opensysmedia.com (281) 724-8021
STRATEGIC ACCOUNT MANAGER Bill Barron bill.barron@opensysmedia.com (516) 376-9838
STRATEGIC ACCOUNT MANAGER Kathleen Wackowski kathleen.wackowski@opensysmedia.com (978) 888-7367
SOUTHERN CAL REGIONAL SALES MANAGER Len Pettek len.pettek@opensysmedia.com (805) 231-9582

Facebook.com/Embedded.Computing.Design

@Industrial_ai

LinkedIn.com/in/EmbeddedComputing

SOUTHWEST REGIONAL SALES MANAGER Barbara Quinlan barbara.quinlan@opensysmedia.com (480) 236-8818
STRATEGIC ACCOUNT MANAGER Glen Sundin glen.sundin@opensysmedia.com (973) 723-9672
INSIDE SALES Amy Russell amy.russell@opensysmedia.com
ASIA-PACIFIC SALES ACCOUNT MANAGER Patty Wu patty.wu@opensysmedia.com
EUROPEAN MARKETING SPECIALIST Steven Jameson steven.jameson@opensysmedia.com +44 (0)7708976338
BUSINESS DEVELOPMENT EUROPE Rory Dear rory.dear@opensysmedia.com +44 (0)7921337498

WWW.OPENSYSMEDIA.COM
PRESIDENT Patrick Hopper patrick.hopper@opensysmedia.com

youtube.com/user/VideoOpenSystems

EVENTS
Ą embedded world Exhibition & Conference – February 25-27, 2020 – Nuremberg, Germany – www.embedded-world.de/en
Ą Embedded Technologies Expo & Conference – June 9-11, 2020 – San Jose, CA – www.embeddedtechconf.com


EXECUTIVE VICE PRESIDENT John McHale john.mchale@opensysmedia.com
EXECUTIVE VICE PRESIDENT Rich Nass rich.nass@opensysmedia.com
CHIEF FINANCIAL OFFICER Rosemary Kristoff rosemary.kristoff@opensysmedia.com
GROUP EDITORIAL DIRECTOR John McHale john.mchale@opensysmedia.com
VITA EDITORIAL DIRECTOR Jerry Gipper jerry.gipper@opensysmedia.com
ASSOCIATE EDITOR Emma Helfrich emma.helfrich@opensysmedia.com
SENIOR EDITOR Sally Cole sally.cole@opensysmedia.com
CREATIVE PROJECTS Chris Rassiccia chris.rassiccia@opensysmedia.com
PROJECT MANAGER Kristine Jennings kristine.jennings@opensysmedia.com
FINANCIAL ASSISTANT Emily Verhoeks emily.verhoeks@opensysmedia.com
SUBSCRIPTION MANAGER subscriptions@opensysmedia.com
CORPORATE OFFICE 1505 N. Hayden Rd. #105 • Scottsdale, AZ 85257 • Tel: (480) 967-5581
REPRINTS: WRIGHT'S MEDIA REPRINT COORDINATOR Wyndell Hamilton whamilton@wrightsmedia.com (281) 419-5725



True AI Edge Compute Arrives in Highly Integrated Processors, IP By Brandon Lewis, Editor-in-Chief

Brandon.Lewis@opensysmedia.com

The image processing subsystems of almost every high-end smartphone now integrate neural networks, and voice recognition and speech processing experts will argue that what we now call AI has been running at the edge for years. For the most part, however, these applications have leveraged SoCs and DSPs that were not designed for modern AI workloads. As AI technology and deployments have progressed, several new engineering challenges have presented themselves:

› The need for always-on, ultra-low-power systems that can run on battery power for extended periods and offer quick response times for inferencing
› The requirement for integrated security that protects machine learning graphs from tampering or theft
› The demand for flexible solutions that can adapt to rapid changes in AI models and algorithms

These trends have raised the stakes for IP and processor vendors looking to serve an embedded AI market that is now projected to be worth $4.6 billion by 2024. These companies are now delivering highly integrated, purpose-built compute solutions to capture a share of this business.

Dialing Down the Power
With AI being deployed in devices as constrained as hearing aids, power consumption has become a first-order consideration for inferencing platforms. Eta Compute has incorporated a patented dynamic voltage and frequency scaling (DVFS) technology into its multicore SoCs to serve these use cases. To conserve power, many conventional processors include a sleep function that wakes up a core when a load is present; most of these devices, however, run the core at its peak rate, which requires additional power. With DVFS, Eta Compute devices continuously adjust the supply voltage based on the current workload, delivering only the minimum power needed to execute tasks in a sufficient amount of time. The company's ECM3531, which is based on an Arm Cortex-M3 and an NXP CoolFlux DSP, is therefore able to deliver 200 kSps of 12-bit sampling while consuming just 1 µW.

Data Set Lockdown
On-chip training data sets that are referenced during inferencing operations have been found to be exploitable. These data sets represent highly valuable intellectual property for most AI companies and can potentially be stolen; worse, altering pixels in an image recognition data set can make an inferencing engine misidentify objects or fail to identify them at all. IP blocks like Synopsys' DesignWare EV7x processor include a vision engine, DNN accelerator, and tightly coupled memory to deliver up to 35 TOPS of energy-efficient performance. An understated feature of the EV7x processor, however, is an optional AES-XTS encryption engine that helps protect data passing from on-chip memory to the vision engine or DNN accelerator.
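The EV7x's AES-XTS engine is a dedicated hardware block, but the general idea – keeping graph and data-set contents encrypted except inside the compute engine – can be illustrated in software. The sketch below is a minimal illustration using the pyca/cryptography package; the key, tweak, and "model blob" are placeholders invented for the example and have nothing to do with the Synopsys implementation.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Placeholder standing in for a machine learning graph or data-set blob.
model_blob = os.urandom(4096)

# AES-XTS uses a double-length key (two 256-bit keys here) and a 16-byte
# tweak, typically derived from the address of the block being protected.
key = os.urandom(64)
tweak = (0).to_bytes(16, byteorder="little")

cipher = Cipher(algorithms.AES(key), modes.XTS(tweak))

encryptor = cipher.encryptor()
protected = encryptor.update(model_blob) + encryptor.finalize()

decryptor = cipher.decryptor()
restored = decryptor.update(protected) + decryptor.finalize()
assert restored == model_blob  # round trip only works with the right key/tweak

The point of the sketch is simply that the plaintext model never needs to sit in memory outside the moment it is consumed, which is the property the hardware engine provides at line rate.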


Flexibility for Future Models
From DNNs to RNNs to LSTMs, dozens of neural network types have emerged in just the last few years. While these represent exciting innovation in the world of AI software and algorithms, they also reveal a significant problem for compute devices that are optimized for specific types of workloads. An ASIC can take anywhere from six months to two years from design to tapeout, potentially accelerating obsolescence for highly specialized solutions.

FPGAs have gained significant traction in the AI engineering space for precisely this reason. Xilinx devices like the popular Zynq and MPSoC platforms are hardware- and software-reprogrammable, which means that logic blocks can be optimized for today's leading neural nets, then reconfigured months or years down the line when algorithms have evolved. In addition, a feature called Dynamic Function eXchange (DFX) permits a system to download partial bit files that modify logic blocks on the fly. This can occur while a device is deployed and operational, essentially adding, removing, or changing the functionality of a single Xilinx device.

Production-Ready AI at the Edge
The expectations for AI edge computing are now similar to what we projected for the IoT only a few years ago: Just as trillions of "things" will be connected, we assume the vast majority of them will be (artificially) intelligent. While previous-generation solutions laid the foundation, next-generation solutions require a new suite of features to ensure commercial success. Processor and IP vendors are responding by integrating more and more capability into AI edge compute devices.



Exploring Embedded Machine Learning By Curt Schwaderer, Technology Editor

Curt.Schwaderer@opensysmedia.com

In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts wrote a paper on neurons and how they work. Building on their work, a model was created using an electrical circuit, and the neural network came into being. More than 70 years on from McCulloch and Pitts, those beginnings have evolved into a number of large-scale projects by some of the top technology companies and technology communities around the globe – Google Brain, AlexNet, OpenAI, and the Amazon Machine Learning Platform are examples of the most well-known initiatives relating to AI and machine learning.

Enter the IoT, with its embedded emphasis and its monetization dependencies on (near) real-time analysis of sensor data and on taking action on that information. The leading initiatives above assume massive amounts of data can be fed seamlessly into a cloud environment where analysis can be performed, directions distributed, and actions taken, all within the time deadlines required for every application. Qeexo (pronounced "Keek-so") CTO Chris Harrison believes that machine learning belongs at the edge, and Qeexo is developing solutions to do just that.

Mobile sensors and AI
Like many paradigm-shifting initiatives, this one started with a challenge: How do you design more sophisticated touch interaction for a mobile device? That quest led to exploring the fusion of touch screen data with an accelerometer to measure taps against the screen. The result was the ability to distinguish between finger, knuckle, nail, and stylus tip and eraser, which broadens the interaction between the user and the device.

"If we're going to put in sophisticated multitouch, we need to do some smart things in order to resolve ambiguous user inputs," Chris mentioned. "The way to do this is with machine learning. The machine learning software behind our FingerSense product differentiates between finger, knuckle, and nail touches. These new methods of input allow for access to contextual menus. This brings a right-click functionality as opposed to touch-and-hold."

Mobile device machine learning challenges
The power and latency budget for machine learning on a mobile device was tiny, and it took almost three years before the requirements were met.

"As a mobile application developer, you have two choices on a mobile device: you can do things fast at higher power, or slower at lower power. This led to a key capability we call 'Hybrid Fusion.' The machine learning software needs to be very clever about access to and processing of the sensor data in order to fit within the power and latency budget," Chris said.

FingerSense became very good at edge- and device-optimized machine learning – something that traditional machine learning cloud environments don't have to consider.
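As a rough illustration of the kind of model FingerSense implies – a classifier that fuses touchscreen and accelerometer features to label a touch as finger, knuckle, or nail – the sketch below trains a small scikit-learn classifier on invented feature vectors. The feature names, values, and model choice are assumptions made for the example; Qeexo's actual features and on-device implementation are not described in this article.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each touch event is described by a few fused features (all hypothetical):
# [contact_area, peak_accelerometer_amplitude, impact_duration_ms]
X_train = np.array([
    [0.80, 0.20, 12.0],   # soft, wide contact -> finger pad
    [0.75, 0.25, 14.0],
    [0.30, 0.90,  4.0],   # sharp, short impact -> knuckle
    [0.35, 0.85,  5.0],
    [0.15, 0.60,  6.0],   # small, hard contact -> fingernail
    [0.18, 0.55,  7.0],
])
y_train = ["finger", "finger", "knuckle", "knuckle", "nail", "nail"]

clf = RandomForestClassifier(n_estimators=25, random_state=0)
clf.fit(X_train, y_train)

# Classify a new touch event from its fused features.
new_touch = np.array([[0.32, 0.88, 4.5]])
print(clf.predict(new_touch)[0])   # prints "knuckle" for this sample

On a phone the equivalent model would be compiled into the C/C++ runtime described below rather than run through Python, but the fusion-of-features idea is the same.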


"Most companies are thinking about deep learning from the perspective of gigantic servers and expensive CPUs. We took the opposite path. The IoT goal is 'tiny' machine learning that can effectively operate with limited resources and maintain the near-real-time deadlines of the application. Cutting our teeth in the mobile industry gave us the skills and technologies to apply machine learning to edge IoT and embedded devices."

One of the most exciting frontiers is bringing what Chris calls "a sprinkle of machine learning" to IoT and small devices. For example, your light bulb doesn't have to be able to do a web search for the weekly weather, but adding a touch of machine learning that allows it to sense movement and temperature to make on/off decisions has actual real-world value.

Embedded machine learning architecture
The machine learning environment is written in C/C++ and Arm assembly to optimize efficiency and operating system portability. Most of the operation is within a kernel driver component, and the software must deal with power management for battery-powered devices. Using the device's main CPU for embedded machine learning can consume a great deal of power, so instead of hooking accelerometer and motion sensors to the main CPU, a low-power microcontroller sits between the sensor and the main CPU, acting as a "sensor hub." The sensor hub is more power-efficient and is specialized for the heavy lifting of sensor communication. It can also execute a little bit of logic, allowing the main CPU to stay off for a much longer period of time. This tiered design optimizes the power and latency budgets, making the embedded machine learning environment possible on mobile devices and IoT sensors.
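The tiered sensor-hub design can be sketched at a conceptual level: a small amount of logic runs next to the sensor, reduces each window of raw samples to a compact feature vector, and wakes the main CPU only when something interesting happens. The sketch below is a generic Python illustration of that split – the window size, threshold, and wake criterion are invented for the example and are not Qeexo's implementation.

import math
from typing import List

WINDOW_SIZE = 64        # accelerometer samples per analysis window (assumed)
WAKE_THRESHOLD = 0.35   # feature level that justifies waking the main CPU (assumed)

def feature_vector(window: List[float]) -> List[float]:
    """Reduce a raw sample window to a few summary features on the sensor hub."""
    mean = sum(window) / len(window)
    energy = sum((s - mean) ** 2 for s in window) / len(window)
    peak = max(abs(s - mean) for s in window)
    return [mean, math.sqrt(energy), peak]

def sensor_hub_loop(sample_stream, wake_main_cpu):
    """Runs continuously on the low-power hub; the main CPU sleeps until called."""
    window: List[float] = []
    for sample in sample_stream:
        window.append(sample)
        if len(window) == WINDOW_SIZE:
            features = feature_vector(window)
            window.clear()
            # Forward only the compact vector, and only when it looks interesting.
            if features[1] > WAKE_THRESHOLD:
                wake_main_cpu(features)

In a real design this logic would be C code on the hub microcontroller; the point of the sketch is that raw samples stay local and only occasional, compact results cross over to the main CPU.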



“Accelerometer data is constant streams of data with no logic being applied, so this needs to be continually sampled,” Chris said. “This is where the machine learning logic starts (and perhaps ends). There may be additional machine learning logic that can be done on the main CPU. You may decide that the sensor hub can filter out or prechoose the data, so fewer amounts of data go to the main CPU.”

One example is when bursts of traffic occur: If sensor information is idle, but then generates a burst of information and this burst moves into main memory or ties up the bus, things can go badly. Alternatively, if the coprocessor provides a vector representation of the information to the main processor, this can streamline efficiency while still being able to interpret the information.

Summary
One must be careful not to assume perfect, high-bandwidth network connectivity and infinite machine learning resources on the way to a successful IoT system. Chris warns against the cloud environment being used as a crutch. "If you take the time to properly analyze, gather requirements, and design the IoT system, you can absolutely perform machine learning at the edge. This minimizes network requirements and provides a high level of near-real-time interaction."

Of course, security considerations are also at the forefront. Whenever possible, you want to reduce the attack surface. Some applications may be able to do machine learning and actions exclusively at the edge, eliminating the internet connection altogether. "At CMU [Carnegie Mellon University] we would occasionally get calls from law enforcement telling us our cameras were being used to send emails," Chris said. "And these attacks were happening with security experts running the network! When possible, don't connect your system to the internet. If we can get away from that trend [leveraging cloud processing for everything], we should be able to achieve a far more secure, private, and efficient system. There is a time and place for cloud connections, but engineers need to stop jumping immediately to that resource."

Given how fast these processors are improving, it certainly seems achievable. There is also a cost benefit: Most current smart devices are priced out of the mass market. If we can sprinkle intelligence into these devices, bring down the costs, and provide real value, adoption will accelerate.



GETTING STARTED WITH AI

Artificial intelligence and machine learning in a startup ecosystem By Jain Swanubhooti, Grand View Research

Artificial intelligence (AI) and machine learning (ML) technologies deliver intelligent functionality to pre-existing solutions across various verticals, including automotive, health care, and business analytics. AI and ML collectively offer solutions to existing problems and further impart self-learning logic so a system can instinctively address similar complications in the future. The statistical ability of machine learning to develop capabilities and enhance the performance of models has generated significant traction for the technology among startups across the world. Companies are highly attracted to the investment opportunities for integrating ML into their services and products across the AI landscape. These unprecedented levels of investment are emblematic of the hype surrounding AI and ML, but they also accentuate the value that technology leaders believe AI and ML can bring to the world. While recent developments are centered primarily on industries based on digital transformation, the growth of AI and ML should be considered a game-changing phenomenon.

Growth of AI in key sectors
Artificial intelligence and machine learning are used in several applications in various sectors including human resources, e-commerce, defense and security, sales and marketing, communication, legal/compliance and fraud detection, urban management, real estate, automotive, education, energy, entertainment, fintech, transportation, and health care/biotech. The following are some investments made by top companies in AI-based startups around the globe:

› In November 2018, Qualcomm Technologies, Inc. announced the launch of the Qualcomm Ventures AI Fund, which is aimed at investing up to $100 million in promising startups that are transforming AI. Moreover, the company is promoting startups that are developing new technology for robotics, autonomous cars, and ML platforms.
› In November 2018, Microsoft signed an agreement to acquire XOXCO, a Texas-based software product design and development studio known for its bot development capabilities and conversational AI bots. Microsoft had already acquired numerous other AI startups in 2018, including Lobe Artificial Intelligence, Bonsai AI, and Semantic Machines.

The recent surge in the advancement of AI and ML technologies has triggered exponential investment avenues across startups and aligned funding for AI research and development. Further, cybersecurity companies are progressively relying on ML to detect new malware and for performance optimization of complex tasks. These technologies are also used to generate content such as original ultrarealistic images and written or audio stories. Simply put, AI and ML can be used by startup companies to gain significant traction.

As AI and ML started making an impression on numerous fields, the beneficiaries of the AI and ML revolution attracted entrepreneurs to invest in startups using the same technology. Mentioned below are some key sectors wherein the adoption of AI and ML is considerably high:

Defense/military
Incorporation of AI and ML technologies that support warfare activities has accelerated associated startup establishments. AI-enabled startups are focusing on developing military and commercial robots, which are military autonomous systems that would enhance the productivity of defense systems. For instance, French armed forces minister Florence Parly recently announced an increase in the country's annual expenditure on AI to $112.84 million (U.S.) as part of a total public investment of $2.062 billion by 2022. This initiative is part of France's push to advance future weapon systems.

Manufacturing
Manufacturing platforms, with the advancement of automation and robotics using AI and ML, contribute to value creation in this field. Production-based startups in this arena are primarily focused on finding efficient solutions to the complex tasks of mathematical planning and scheduling.

Developments in Manufacturing
› Drishti is an Indian startup concentrated on bringing AI innovation to the manufacturing space. Manufacturers use Drishti to drive sweeping improvements in quality, traceability, productivity, and true digital transformation. The startup raised $10 million (U.S.) to cover factory workers using AI and action recognition.
› In January 2019, London-based startup Flexciton raised $2.8 million (U.S.) in funding to work on projects intended to modernize the manufacturing industry with AI-based production planning and scheduling.

Health care
In the field of health care, elementary implementation of AI technology for virtual assistance and chatbots delivers a positive growth outlook in the years to come. Investments by startups within AI and ML in the health care field are mainly focused on diagnosis, monitoring chronic conditions, robotic surgery, image analysis, drug discovery, and fitness wearables.

Developments in Health Care
› In October 2018, Philips announced the launch of its startup collaboration program, which involves the company's innovation hubs in the U.S., Eindhoven (Netherlands), Shanghai (China), and Bengaluru (India), and is primarily focused on the application of AI in the health care field.
› In June 2019, Kang Health, an AI startup, received $12.5 million in funding for its application "K," a HIPAA-compliant AI-powered health care app that compiles anonymous information from consumers about chronic conditions and their medical background.

Banking and financial services
In an era of digitalization, the introduction of AI and ML has carved a different scope for BFSI [banking, financial services, and insurance] in the global economy. A new

TRACE32® – Debugger for RH850 from the automotive specialists
DEBUGGING | NEXUS TRACING
RH850, ICU-M and GTM debugging · AUTOSAR / multicore debugging · Runtime measurement (performance counter) · Code coverage (ISO 26262) · Multicore tracing · Onchip, parallel and Aurora tracing
www.lauterbach.com/1701.html




breed of startup companies in this space is using AI to offer a range of operations for banking and financial services. Further, the rise of chatbots in the BFSI sector is rapidly changing the face of the banking industry. In the BFSI startup investment scenario, investors are keen on the development of cybersecurity and e-commerce; Citibank's product portfolio, for instance, includes strategic investments in startups driving continuous and rapid data evolution. This trend will further spur AI startups to capitalize on research and development.

IT and telecommunications
IT and telecommunication startup platforms are finding significant growth as they implement AI and ML to optimize, manage, and monitor network operations. In developing countries, governments are opting to prepare an ecosystem where connectivity is seamless, AI is conventional, and AI/ML is designed to improve aspects of e-governance. This movement will further nurture innovation in traditional industries, enabling new business models and growing startups.

Retail
The landscape of the retail sector is experiencing a seismic shift with the growing use of AI. Sales and CRM applications, customer recommendations, manufacturing, logistics and delivery, and payment services are only some of the functions in which AI and ML play a large part. IT startup companies are also using AI to provide solutions that help retailers improve inventory management.

Developments in retail
› Focal Systems is a San Francisco-based startup that helps retailers avoid running out of supplies, aids in inventory management, optimizes checkout lines, and enhances the shopper experience. Focal Systems' solutions are comprised of powerful devices in retail stores that leverage computer vision and ML technologies.


Several AI/ML startups illustrated
› Argo AI: Argo AI tackles the most challenging applications in robotics, computer science, and AI with self-driving vehicles. It develops and deploys the latest advancements in ML, computer vision, and AI to manufacture efficient and safe autonomous vehicles.
› Kreditech Holding SSL: Kreditech Holding SSL aims to provide improved credit services and convenience for digital banking services using AI.
› ACRON OakNorth: ACRON's platform supports automated process implementation that leverages ML.
› SoundHound Inc.: SoundHound Inc. is an artificial intelligence platform that business owners and developers can deploy anywhere, retaining control of their brand and users while innovating.

Geographical breakdown of AI/ML startups
With strong leadership in the AI field, the U.S. accounts for a prominent share of private and public investment. American tech startups are highly active in AI research and development. Further, prominent market players such as Google, Baidu, Apple, IBM, and Microsoft – among others – are involved in acquisitions of and mergers with startups using AI/ML.

Europe's AI market is highly fragmented, as there are many leaders and startup companies using AI to scale up their returns. The European Commission and the European Union announced an initiative for investing in startups using AI and ML, including an investment worth $22.56 billion (U.S.); the investment is promised by the European Union, the private sector, and member states. Moreover, there is additional AI/ML investment on the horizon in 2020 under the research and innovation program.

Future growth perspective for AI startups
AI and ML will be able to spark future growth for venture capitalists, investors, pioneering entrepreneurs, and forward-thinking corporations. AI is anticipated to provide significant innovations for both corporations and consumers. In the automotive world, AI startups are focusing on developing technology for driverless or self-driving cars. With the convergence of other transformative technologies such as big data analytics and the IoT, AI has the potential to generate a new basis for economic growth in all developed regions, with Silicon Valley among the global hubs for it.

AI developments are fostered by data availability, so data-driven sectors already have a head start. In addition, AI/ML startups will require larger investments than their peers in other fields due to the high fixed costs of AI/ML – the result of the cost of data collection and aggregation, human resources, and computing power. For instance, despite a general trend upward, Europe's AI startup funding remains limited. With investors demanding a rapid return on investment, corporations need a strong advocate to justify portfolio diversification and venture strategies. The proliferation of digital technologies, subsequent investment in the AI field, and rising demand for advanced solutions are expected to drive the growth of AI and ML in the coming years. IAI

Jain Swanubhooti has worked as a senior research associate in Information & Communications Technology (ICT) at Grand View Research since April 2019. She has experience working on many ICT projects, including artificial intelligence, video streaming, smart electricity meters, digital utilities, and others. Swanubhooti – a management consulting professional – holds a bachelor's degree in engineering from Medi-Caps University in Indore, India.

GRAND VIEW RESEARCH | www.grandviewresearch.com | Twitter: @GrandViewInc | LinkedIn: www.linkedin.com/company/grand-view-research/ | Facebook: @grandviewresearch



AI AT THE EDGE

Smarter IoT endpoints enabled by AI at the farthest edge
By Chris Shore, Arm

AI is already being used in various applications to identify patterns in complex scenes, such as people at a busy crossing.

The World Wide Web recently turned 30, a milestone that passed with surprisingly little fanfare. Perhaps that's because the internet as we knew it then is a bit like the Wright Brothers' first flight – the technology when it began is so very different from what we have now that it pales by comparison. What is comparable is the respective technologies' disruptive effect; in that respect, artificial intelligence (AI) is shaping up to be even more impactful than powered flight or democratized data. The advances underway in AI right now are redefining what we will consider possible for years to come.

The new internet is the Internet of Things (IoT), and it's all about data – generated and processed at a scale simply inconceivable before the IoT. Now, by applying AI to that data, we can achieve dramatically improved insights. AI can now identify leaks in London's water grid so engineers can target precise pipeline replacements. It can measure how people using Tokyo's Shibuya crossing at peak times impact traffic flow. And it can measure how New Yorkers react to that new advert in Times Square. Three examples, three industries – utilities, logistics, and marketing – all enhanced by AI.

The amount of data currently being collated by the IoT is already huge, but it's set to get far bigger and far more interesting. In February 2019, Gartner said that the adoption of artificial intelligence in organizations is tripling year on year. For engineers and engineering companies, increasing intelligence in the device network means we can start to realize the true potential of the IoT.

Where AI will be most useful in the IIoT
AI is quickly becoming a task that can be handled by mainstream computing resources; we already have AI, in the form of machine learning (ML) inference, running on single-sensor devices such as asthma inhalers. We can access AI-driven photo enhancement directly on our smartphones, and then there are the computer vision apps running in advanced vehicles. All of these are already improving lives, but we'll see the most


immediate commercial value in industrial applications. In an industrial environment, any technology that can increase productivity is valuable, and operational data is routinely used to deliver insights into machines and their current condition. The data generated by industrial sensors contains patterns which, through increasingly sophisticated analytics, can help predict when an asset will fail, allowing it to be repaired before that failure has a bigger overall impact on productivity. This branch of predictive and preventive analytics has previously been carried out in large servers and "the cloud," but developments in AI and ML mean it is now moving closer to the edge of the network. In fact, it's being put directly into the machines that make up the Industrial Internet of Things (IIoT).



ARM | www.arm.com | Twitter: @Arm | LinkedIn: www.linkedin.com/company/arm/about/ | Facebook: @Arm

FIGURE 1: The number of devices at the edge will number in the billions.

Machine learning at the edge
There are many reasons why ML processing is moving to the edge. The first is the simplest to accept: the edge is where the data is created. There are other – more critical – reasons though, most notably because data consumes resources, both in terms of bandwidth to move and instruction cycles to process. If all of the data being generated across the IoT were to be processed by servers, it would involve huge volumes of network traffic and an exponential increase in server power. It's exactly why the likes of Google are slimming down some of their algorithms – so they can run independently of the cloud, on edge AI-powered devices. (Figure 1.)

Just as embedding an HTML server in an edge device is now commonplace, it is just as feasible to execute ML in an endpoint, such as a sensor. But the way ML will be implemented at the edge is crucial, and it follows the concept of distributed processing. The processing resource required to train an AI algorithm is considerable, but it is effectively a nonrecurring expense. The resources needed to execute inference models are more modest, but in volume can consume just as much – if not more – processing resource as the training phase. How they differ is that, unlike training, each instance of inference can be packaged and executed in isolation from all others, which means it can easily be ported to smaller processing resources and replicated as many times as required. This distributed intelligence is the shape of the new internet, one that can operate in isolation once more if necessary, while remaining part of the whole. Edge processing removes the need to pass data across an increasingly congested network and consume ever-more valuable processing resources.

Architectures for ML
Once training is complete, AI frameworks provide the route to deployment. For resource-constrained devices being deployed at the edge, this includes the likes of TensorFlow Lite and Caffe2. These and other such platforms are typically open source and often come with a "get you started" introduction: models that are already trained to provide some form of inference. These models can also be retrained with custom data sets, a process called transfer learning, which can save many hours of processing time. In order to be portable across different processing architectures, the models typically run through an interpreter and are accessed by the host software using APIs. Because the models are optimized, the whole implementation can be made to fit into the low hundreds of kilobytes of memory.

There are numerous examples of how ML is running on, at, or near the edge of the network, and many of these will be running a Linux-based operating system. These CPU-based ML solutions use what are essentially general-purpose microprocessors, rather than the power-hungry and often large GPU-oriented devices that are common in desktop computers. GPUs have highly parallel execution blocks and make use of multiple MAC units, designed to carry out repetitive, math-oriented operations as fast as possible with little regard for the power consumed. They are often difficult to program, require high levels of power, and are in general not suitable for resource-constrained edge devices.

TensorFlow Lite was designed to run some TensorFlow models on smaller processors, with pre-trained models available that can provide various types of ML, including image classification, object detection, and segmentation. These three types of models work in slightly different ways: image classification works on the entire image, while object detection breaks the image up into rectangles, but segmentation goes further




to look at each individual pixel. To use trained TensorFlow models in a TensorFlow Lite deployment, the models need to be converted, which reduces the file size using optional optimizations. The converter can be used as a Python API; the code example below demonstrates how it is used.

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
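Once converted, the model is typically executed through the TensorFlow Lite interpreter mentioned above. The sketch below continues the conversion example under the assumption that the converted file is an image-classification model; the file name and the random input tensor are placeholders rather than anything specified in the article.

import numpy as np
import tensorflow as tf

# Load the converted model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Placeholder input: random data shaped to match the model's expected input.
input_shape = input_details[0]["shape"]
input_data = np.random.random_sample(input_shape).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()

# For a classification model, the output is a vector of per-class scores.
scores = interpreter.get_tensor(output_details[0]["index"])
print("Top category index:", int(np.argmax(scores)))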

Running ML on standard processors means developers can also take advantage of simple software solutions based on industry-standard languages such as Python.
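Quantization – which the article returns to later in the context of CMSIS-NN – is another step that is normally driven from Python. The sketch below shows post-training integer quantization with the same TensorFlow Lite converter; saved_model_dir refers to the same saved model as the earlier snippet, and the representative-data generator and 224 x 224 x 3 input shape are assumptions made for the example.

import numpy as np
import tensorflow as tf

def representative_data_gen():
    # Placeholder: in practice, yield a few hundred real input samples here.
    for _ in range(100):
        yield [np.random.random_sample((1, 224, 224, 3)).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Restrict the model to 8-bit integer kernels, the form MCU-class targets expect.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

quantized_model = converter.convert()
open("model_int8.tflite", "wb").write(quantized_model)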

These processors may feature DSP extensions in some cases, and these can be instrumental in accelerating parts of the data flow, but essentially general-purpose processors can handle the levels of processing required to run ML in smaller devices while still dealing with the general application code. CPU-led AI is already commonly used in smartphones, for identifying particular features in photos, for example. The same is true in industrial applications, where System on Chip (SoC) solutions based on multicore processors like the i.MX family from NXP are routinely being used to put ML into industrial processes. This includes machine vision systems that can identify specific products as they progress through a manufacturing process (Figure 2). These SoCs and others like them are perfect examples of how ML is being deployed today.

Moving beyond the horizon
While CPU- or MCU-led AI is commonplace now, we are already looking forward to the farthest edge of the device network, where size, power, and cost requirements are ultra-constrained. This is where the latest version of TensorFlow comes in: TensorFlow Lite Micro, or TF Lite Micro, is a version of the framework that has been designed to run on microcontrollers with perhaps no operating system, rather than on microprocessors running Linux. The code and model together need only 45 KB of flash and just 30 KB of RAM to run. This is inference at the farthest edge, in a device operating completely autonomously, without any assistance from any other software or, just as importantly, additional hardware.

The process of using TF Lite Micro is similar to using TensorFlow Lite, with the additional step of writing deeply embedded code to run the inference. As well as including the relevant .h files in the code, the main steps comprise: adding code to allow the model to write logs; instantiating the model; allocating memory for the input, output, and intermediate arrays; instantiating the interpreter; validating the input shape; and actually running the model



and obtaining the output. The code snippet below is an example of how to obtain the output.

TfLiteTensor* output = interpreter.output(0);
uint8_t top_category_score = 0;
int top_category_index;
for (int category_index = 0; category_index < kCategoryCount; ++category_index) {
  const uint8_t category_score = output->data.uint8[category_index];
  if (category_score > top_category_score) {
    top_category_score = category_score;
    top_category_index = category_index;
  }
}

FIGURE 2: A demonstration of how AI and machine vision are being used to identify types of pasta on a fast-moving conveyor belt.

You CAN get it... Hardware & software for CAN bus applications
PCAN-Router FD – Programmable router for CAN and CAN FD with 2 channels. Available in aluminum casing with D-Sub or Phoenix connectors.
PCAN-PCI Express FD – CAN FD interface for PCI Express slots with data transfer rates up to 12 Mbit/s. Delivery incl. software, APIs, and drivers for Windows® and Linux.
PCAN-Explorer 6 – Professional Windows® software for observation, control, and simulation of CAN FD and CAN 2.0 busses.
www.peak-system.com

In order to support ML on microcontrollers, Arm has developed the CMSIS-NN software library, part of the Cortex Microcontroller Software Interface Standard (CMSIS) that deals with neural networks. Through quantization, which reduces floating-point numbers down to integers (a process that has been proven to result in little or no loss of accuracy), CMSIS-NN helps developers map models to the limited resources of a microcontroller.

Super-efficient ML frameworks such as TF Lite Micro, along with CMSIS-NN, make it possible to run ML on an ultra-low-power microcontroller. This clearly has many possible applications, but one scenario that is very applicable to always-on systems is where the majority of the system remains in a deep sleep mode until a specific condition brings it to life, such as a wake word. We can think of this as a new kind of interrupt service routine, one that uses intelligence to decide when the rest of the chip/system needs to get involved. This clearly demonstrates the potential that ultra-low-power ML functionality has to make a huge impact at the edge.

Moving forward, technology developments focused on the needs of edge inference will enable highly responsive and extremely capable ML models to run at even lower power levels. As an example, Arm has developed new vector extensions to the Armv8-M architecture, called Helium. This is the latest development of the Arm Cortex-M processors, which gained the benefits of Arm TrustZone for security when the Armv8-M architecture was introduced in 2015. The development of the Helium vector extensions will combine NEON-like processing capability with the security of TrustZone. Helium




vector extensions will also deliver a significant performance boost to Cortex-M class microcontrollers, which will help enable many new applications with even more responsive and accurate ML at the edge. Helium will deliver up to a 15x improvement in ML on Cortex-M devices, and as much as a 5x improvement in signal processing (Figure 3). Just as importantly for developers, this means they will have access to ML in the same tool chain they use for other microcontroller-based developments. Integrating functions such as identifying unusual vibrations, unexpected noises, or alarming images will be implicit in the control code, streamlining the entire process of putting ML at the edge. The tool chain and models are already available for early evaluation, with first silicon expected to be available by 2021. Far from being "technology for technology's sake," the use of machine learning at the edge of the network is increasing

FIGURE 3: Helium will accelerate signal processing and machine learning algorithms.

due to demand for more responsive and robust control systems that aren't dependent on cloud services or an always-on connection to the IoT. Using inferencing at the edge to limit the amount of data transferred over increasingly congested networks will be essential if the IoT is going to scale to the trillions of devices we now realize will be needed to meet growing expectations. IAI

Chris Shore is director of embedded solutions, Automotive and IoT Line of Business, Arm.

OpenSystems Media works with industry leaders to develop and publish content that educates our readers.

The Insider's Guide to Excellent UX for Non-UX People
By The Qt Company
Excellent user experience (UX) can create legions of dedicated customers and has the power to elevate your company into a household brand. This white paper will look at all things UX as it relates to product development – what it is, why it matters, and what you can do about it, including some basic best practices for achieving an excellent UX with your products.

https://bit.ly/32ZSomh


Check out our white papers at www.embedded-computing.com/white-paper-library


ADVERTORIAL

EXECUTIVE SPEAKOUT

MITYSOM-A10S: ARRIA 10 SYSTEM ON MODULE AND DEVELOPMENT KITS
CRITICAL LINK'S LATEST PRODUCTION-READY, INDUSTRIAL PERFORMANCE SOM

Open Architecture for User-Programmability
Critical Link's MitySOM-A10S is an Intel/Altera Arria 10 SoC SOM (system on module) developed exclusively for industrial applications. It is a production-ready board-level solution that delivers industrial performance and includes a range of configurations to fit your requirements.

Why choose a Critical Link SOM? Critical Link’s support is unmatched in the industry, including our application engineering and online technical resources. We provide production-ready board-level solutions in a range of configurations. With Critical Link SOMs, it’s about time: Time to market, time to focus on company IP, and product lifetime.

The MitySOM-A10S has been designed to support several upgrade options including various speed grades, memory configurations, and operating temperature specifications (including commercial and industrial temperature ranges).

› Built for long term production, with 10-15+ year availability
› Proven track record for product performance in the field
› Base board design files and other resources available online at no cost
› Lifetime product maintenance and support

Customers using the MitySOM-A10S receive free, lifetime access to Critical Link's technical support site, as well as access to application engineering resources and other services. Critical Link will also provide developers the design files for our base boards, further accelerating design cycles and time to market.

Specifications
› Up to 480KLE FPGA fabric
› Dual-Core Cortex A9 processors
› 4GB DDR4 HPS shared memory
› 2GB DDR4 FPGA memory
› 12 high speed transceiver pairs, up to 12.5Gbps
› Max 138 Direct FPGA I/Os, 30 shared HPS/FPGA I/Os
› Supports several high-level operating systems, including Linux out of the box
› Designed for long life in the field with 24/7 operation (not a reference design)

Flexible, Off-the-Shelf Board Level Solution for Industrial Applications
Leverage the SoC's dual core ARM and user-programmable FPGA fabric to do more embedded processing with 40% less power. 12 high speed transceiver pairs combined with Critical Link's onboard memory subsystems make this SOM well-suited for the high-speed processing needs of the most cutting-edge industrial technology products. Example applications include:
› Test and Measurement
› Industrial Automation and Control
› Industrial Instrumentation
› Medical Instrumentation
› Embedded Imaging & Machine Vision
› Medical Imaging
› Broadcast
› Smart Cities / Smart Grid

Email us at info@criticallink.com or visit www.criticallink.com.


AI AT THE EDGE

MLPerf: The New Industry Benchmark for AI Inferencing at the Edge By Brandon Lewis, Editor-in-Chief

TOPS. FLOPS. GFLOPS. AI processor vendors calculate the maximum inferencing performance of their architectures in a variety of ways. Do these numbers even matter?

Most of them are produced in laboratory-type settings, where ideal conditions and workloads allow the system under test (SUT) to generate the highest scores possible for marketing purposes. Most engineers, on the other hand, couldn't care less about these theoretical possibilities. They are more concerned with how a technology impacts the accuracy, throughput, and/or latency of their inference device.

Industry-standard benchmarks that compare compute elements against specific workloads are far more useful. For example, an image-classification engineer could identify multiple options that meet their performance requirements, then whittle them down based on power consumption, cost, etc. Voice-recognition designers could use benchmark results to analyze various processor and memory combinations, then decide whether to synthesize speech locally or in the cloud.

But the rapid introduction of AI and ML models, development frameworks, and tools complicates such comparisons. As shown in Figure 1, a growing number of options in the AI technology stack also means an exponential increase in the permutations that can be used to judge inferencing performance. And that's before considering all the ways that models and algorithms can be optimized for a given system architecture. Needless to say, developing such a comprehensive benchmark is beyond the ability or desire of most companies. And even if one were capable of accomplishing this feat, would the engineering community really accept it as a "standard benchmark"?

MLPerf: Better benchmarks for AI inference
Industry and academia have developed several inferencing benchmarks over the past few years, but they tend to focus on more niche areas of the nascent AI market. Some examples include EEMBC's MLMark for embedded image classification and object detection, the AI Benchmark from ETH Zurich that targets computer vision on Android smartphones, and Harvard's Fathom benchmark that emphasizes the throughput but not the accuracy of various neural networks.

A more complete assessment of the AI inferencing landscape can be found in MLPerf's recently released Inference v0.5 benchmark. MLPerf Inference is a community-developed test suite that can be used to measure the inferencing performance of AI hardware, software, systems, and services. It is the result of a collaboration between more than 200 engineers from more than 30 companies.

As you would expect from any benchmark, MLPerf Inference defines a suite of standardized workloads organized into "tasks" for image classification, object detection, and machine translation use cases. Each task is comprised of AI models and data sets that are relevant to the function being performed, with the image classification task supporting ResNet-50 and MobileNet-v1 models, the object detection task leveraging SSD models with ResNet34 or MobileNet-v1 backbones, and the machine translation task using the GNMT model.

Beyond these tasks is where MLPerf Inference starts to deviate from the norm of traditional benchmarks. Because the importance of accuracy, latency, throughput, and cost are weighted differently for different use cases, MLPerf Inference accounts for tradeoffs by grading inferencing performance against quality targets in the four key application areas of mobile devices, autonomous vehicles, robotics, and cloud. To effectively grade tasks in a context that is as close as possible to a real-world system operating in these application areas, MLPerf Inference introduces a Load Generator tool that produces query traffic based on four different scenarios:

› Continuous single-stream queries with a sample size of one, common in mobile devices
› Continuous multistream queries with multiple samples per stream, as would be found in an autonomous vehicle where latency is critical
› Server queries where requests arrive at random, such as in web services where latency is also important
› Offline queries where batch processing is performed and throughput is a prominent consideration

The Load Generator delivers these scenarios in modes that test for both accuracy and throughput (performance): The SUT receives requests from the Load Generator, loads samples from a data



set into memory, runs the benchmark, and returns the results to the Load Generator. An accuracy script then verifies the results.
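To make the single-stream scenario concrete, the simplified sketch below plays the role of the query generator for that case: it issues one query at a time to a stand-in SUT and reports a percentile latency. This is a conceptual illustration only – the real benchmark drives the SUT through MLPerf's Load Generator and accuracy scripts, and the fake 4-8 ms "inference" here is an arbitrary placeholder.

import random
import time

def run_inference(sample_id):
    # Stand-in for the system under test: pretend to classify one sample.
    time.sleep(random.uniform(0.004, 0.008))   # arbitrary 4-8 ms fake inference
    return 0                                    # fake prediction

def single_stream(num_queries=1024):
    # Issue queries one at a time, as in the single-stream scenario.
    latencies = []
    for sample_id in range(num_queries):
        start = time.perf_counter()
        run_inference(sample_id)
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    p90 = latencies[int(0.9 * len(latencies)) - 1]
    print(f"{num_queries} queries, 90th-percentile latency: {p90 * 1000:.2f} ms")

single_stream()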

As part of the benchmark, each SUT must execute a minimum number of queries to ensure statistical confidence.

FIGURE 1: An increasing number of options in the AI development stack has complicated industry-standard benchmarking.

FIGURE 2: DSPs, FPGAs, CPUs, ASICs, and GPUs all successfully completed the MLPerf Inference closed division requirements.

MLPerf Inference is a semantic-level benchmark, which means that, while the benchmark presents a specific workload (or set of workloads) and general rules for executing it, the actual implementation is up to the company performing the benchmark. A company can optimize the provided reference models, use its desired toolchain, and run the benchmark on a hardware target of its choosing, so long as it stays within certain guidelines.

Furthering flexibility
As mentioned earlier, the variety of frameworks and tools being used in the AI technology marketplace is a key challenge for any inferencing benchmark. Another consideration mentioned previously is the tuning of models and algorithms to squeeze the highest accuracy, highest throughput, or lowest latency out of an AI inferencing system. In terms of the latter, techniques like quantization and image reshaping are now common practice.

It's important to note, however, that this does not mean that submitting companies can take any and all liberties with MLPerf models or data sets and still qualify for the primary benchmark. The MLPerf Inference benchmark is split into two divisions – closed and open – with the closed division having more strict requirements as to what types of optimization techniques can be used and which are prohibited.

To qualify for the closed division, submitters must use the provided models and data sets, though quantization is permitted. To ensure compatibility, entrants in the closed division cannot utilize retrained or pruned models, nor can they use caching or networks that have been tweaked to be benchmark- or data set-aware.

The open division, on the other hand, is intended to foster innovation in AI models and algorithms. Submissions to the open division are still required to perform the

same tasks, but can change the model type, retrain and prune their models, use caching, and so on. As restrictive as the closed division might sound, more than 150 entries successfully qualified for the MLPerf Inference v0.5 launch. Figures 2 demonstrates the diversity of AI technology stacks used by the entrants, which spanned almost every kind of processor architecture and software frameworks ranging from ONNX and PyTorch to TensorFlow, OpenVINO, and Arm NN. Take the guesswork out of evaluation While the initial release of MLPerf Inference contains a limited set of models and use cases, the benchmarking suite was architected in a modular, scalable fashion. This will allow MLPerf to expand tasks, models, and application areas as technology and the industry evolve, and the organization already plans to do so. The latest AI inferencing benchmark is obviously significant as the closest measure of real-world AI inferencing performance currently available. But as it matures and attracts more submissions, it will also serve as a barometer of technology stacks that are being deployed successfully and a proving ground for new implementations. Rather than crunching vendor-specific datasheet numbers, why not let the technology speak for itself? After all, less guesswork means more robust solutions and faster time to market. IAI For more information on MLPerf Inference, visit https://edge.seas.harvard.edu/files/ edge/files/mlperf_inference.pdf.



AI AT THE EDGE

Developing high-performance DNN object detection/recognition applications for FPGA-based edge devices By Farhad Fallah, Aldec

Machine learning (ML) is being revolutionized using neural network (NN) algorithms, which are digital models of the biological neurons found in our brains. These models contain layers which are connected like a brain’s neurons. Many applications benefit from ML, including image classification/recognition, big data pattern detection, advanced driver-assistance systems, fraud detection, food-quality assurance, and financial forecasting. Machine learning (ML) is the process of using algorithms to parse data, learn from it, and then make a decision or prediction. Instead of preparing program codes to accomplish a task, the machine is “trained” using large volumes of data and algorithms to perform the task on its own.

Designing a high-performance machine learning application requires network optimization, which is typically done using pruning and quantizing techniques, and computation acceleration, which is performed using ASICs or FPGAs.

As algorithms for machine learning, neural networks include a wide range of topologies and sizes consisting of the first layer (or input layer), middle layers (or hidden layers), and the last layer (or output layer). Hidden layers perform a variety of dedicated tasks on the input and pass it to the next layer until a prediction is generated at the output layer.

Design flow for developing a DNN application Designing a DNN application is a three-step process: Choosing the right network, training the network, and then applying new data to the trained model for prediction (inference). Figure 1, as an example, illustrates the steps for an application to recognize cats.

Some neural networks are relatively simple and have only two or three layers of neurons, while so-called deep neural networks (DNNs) might be made of up to 100 to 1,000 layers. Determining the right topology and the size of the NN for a specific task requires experimentation and comparison against similar networks.


In this article, we will discuss how DNNs work, why FPGAs are becoming popular for DNN inference, and consider the tools you need to start designing and implementing a deep learning-based application using FPGAs [1].

As mentioned, there are multiple layers in a DNN model, and each layer has a specific task. In deep learning, each layer is designed to extract features at different levels. For example, in an edge detection neural network, the first middle layer detects features such as edges and curves. The output of the first middle layer is then fed to the second layer, which is responsible for detecting higher-level features, such as semicircles or squares. The third middle layer assembles the output of the other layers to create familiar objects and the last layer detects the object. In another example, if we set out to recognize a stop sign, the trained system would include layers for detecting the octagonal shape, the color, and the letters S, T, O, and P in that order and in isolation. The output layer would be responsible for determining if it is a stop sign.
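As a toy illustration of that layer-by-layer flow, the following Python/NumPy sketch pushes an input vector through two hidden layers and an output layer. The layer sizes and the random, untrained weights are arbitrary assumptions; the point is only that each layer consumes the previous layer's output and produces a new representation.

```python
import numpy as np

rng = np.random.default_rng(42)

def dense_relu(x, n_out):
    """One fully connected layer followed by a ReLU activation (random, untrained weights)."""
    w = rng.normal(scale=0.1, size=(x.shape[0], n_out))
    return np.maximum(0.0, x @ w)

x = rng.normal(size=64)        # input layer, e.g., a flattened image patch
h1 = dense_relu(x, 32)         # first hidden layer: low-level features (edges, curves)
h2 = dense_relu(h1, 16)        # second hidden layer: combinations of those features
scores = h2 @ rng.normal(scale=0.1, size=(16, 4))   # output layer: one score per class
print("predicted class:", int(np.argmax(scores)))
```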



ALDEC

www.aldec.com

TWITTER

@AldecInc

LINKEDIN

www.linkedin.com/company/aldec

YOUTUBE

www.youtube.com/user/aldecinc

DNN learning models
There are four main learning models:

› Supervised: In this model, all the training data are labeled. The NN classifies the input data into different labels learned from the training dataset.
› Unsupervised: In unsupervised learning, a deep learning model is handed a dataset without explicit instructions. The training dataset is a collection of examples without a specific desired outcome or correct answer. The neural network then attempts to automatically find structure in the data by extracting useful features and analyzing its structure.
› Semi-supervised: A training dataset with both labeled and unlabeled data. This method is particularly useful when extracting relevant features from the data is difficult and labeling examples is a time-intensive task for experts.
› Reinforcement: The action of rewarding a network to get the results and improve the performance. It is an iterative process: the more rounds of feedback, the better the network becomes. This technique is especially useful for training robots, which make a series of decisions in tasks like steering an autonomous vehicle or managing inventory in a warehouse.

FIGURE 1: The three steps of recognizing a cat.

Training versus inference
In training, the untrained neural network model learns a new capability from the existing data. Once the trained model is ready, it is fed new data and the performance of the system is measured. Running new data through the trained model to produce a prediction is called inference; the ratio of images detected correctly measures the model's accuracy.

In the example given in Figure 1 (recognizing a cat), after inputting the training dataset, the DNN starts tuning the weights to find cats; here a weight is a measure of the strength of the connection between each neuron. If the result is wrong, the error will be propagated back to the network's layers to modify the weights. This process happens again and



again until it arrives at the correct weighting, which results in a correct answer every time.

Achieving a high-performance DNN application
Using a DNN for classification requires a big dataset, which increases the accuracy. However, a drawback is that it produces many parameters for the model, which increases the compute cost and requires high memory bandwidth. There are two main ways to optimize a DNN application. The first is network optimization through pruning redundant connections, quantizing the weights, and fusing the neural network to narrow down the network size.

Pruning is a form of DNN compression that reduces the number of synaptic connections to other neurons so that the overall amount of data is reduced. Typically, weights close to zero are removed. This can help eliminate redundant connections with minor accuracy drops for tasks such as classification [2].

Additionally, quantization is done to bring the neural network to a reasonable size while maintaining high accuracy. This is especially important for edge applications, where the memory size and number of computations are necessarily limited. In such applications, to get better performance the model parameters are held in local memory to avoid time-consuming transfers over PCIe or other interconnect interfaces. In this method, a neural network that uses floating-point numbers (FP32) is approximated by a neural network of low-bit-width numbers (INT8). This dramatically reduces both the memory requirement and the computational cost of using neural networks. By quantizing the model, we lose a bit of precision and accuracy; however, for most applications there is no need for 32-bit floating-point precision. (A short numerical sketch of both techniques appears later in this article, alongside the DNNDK tool flow.)

The second way to optimize the DNN is through computation acceleration using ASICs or FPGAs. Of these, the latter option has many benefits for machine learning applications. These include:

› Power efficiency: FPGAs provide a flexible and customizable architecture, which enables the use of only the compute resources we need. Having low-power systems for DNNs is critical in many applications such as advanced driver-assistance systems (ADAS).
› Reconfigurability: FPGAs are considered raw programmable hardware compared to ASICs. This feature makes them easy to use and reduces the time to market significantly. To keep up with rapidly evolving machine learning algorithms, the ability to reprogram the system is extremely beneficial compared with waiting for the long fabrication time of SoCs and ASICs.
› Low latency: Block RAMs inside the FPGA provide at least 50 times faster data transfer compared to the fastest off-chip memories. This is a game-changer for machine learning applications, for which low latency is essential.




› Performance portability: All of the benefits of the next generation of the FPGA devices without any code modification or regression testing. › Flexibility: FPGAs are raw hardware and can be configured for any architecture, with no fixed architecture or data paths to tie you down. This flexibility enables FPGAs to do massive parallel processing, since the data path could be reconfigured at any time. The flexibility also brings any-to-any I/O connection, which enables FPGAs to connect to any device, network, or storage devices without the need for a host CPU. › Functional safety: FPGAs users can implement any safety feature to the hardware. Depending on the application, encoding could be done with high efficiency. FPGAs are widely used in avionics, automation, and security; these applications prove the functional safety of these devices. › Cost-efficiency: FPGAs are reconfigurable and the time to market for an application is fairly low. ASICs are very costly, and the fabrication time takes 6 to 12 months, if no errors show up. This is an advantage for machine learning applications, since the cost is very important and NN algorithms are evolving daily. Modern FPGAs typically offer a rich set of DSP and BRAM resources within their fabric that can be used for processing NN. However, compared to the depth and layer size of DNNs, these resources are no longer enough for a full and direct mapping, certainly not in the way it was often done in previous generations of NN accelerators. Even using devices like the Zynq MPSoC, where even the largest device is limited to 2k DSP slices and a total BRAM size of less than 10 MB, a complete mapping with all neurons and weights directly onto the FPGA is impossible. So, how can we use the power efficiency, reprogrammability, low latency, and other features of FPGAs for deep learning? New NN algorithms and architectural modification are required to enable the inference of DNNs on platforms with limited


memory resources such as FPGAs. Modern DNNs divide applications into chunks for FPGA processing; since FPGAs' on-chip memory is insufficient for an entire network, only partial data is stored there, with the bulk loaded from external memory (could be a DDR memory). However, transferring data back and forth between the FPGA and memory is going to increase the latency up to 50 times. The first option would be to reduce the memory data. In addition to the network optimization discussed above (pruning and quantization), there are also the options of weight encoding and batch processing. In the FPGA, the encoding format can be chosen with no obligation. There might be some accuracy loss, but this would be negligible compared to the latency caused by data transferring and the complexity of its processing. Weight encoding created the Binary Neural Networks (BNN), where the weights are reduced to only one bit. This method shrinks the amount of data for transferring and storing, as well as the computation complexity. However, this method makes only a small reduction for the hardware multipliers with a fixed input width. In the batch processing method, we reuse the weights already on the chip for multiple inputs using the pipelining method. It also reduces the amount of data to be transferred from off-chip memory to the FPGA [5]. Design, implementation of DNN applications on FPGAs Let’s actually dive into implementing a DNN in FPGAs: It makes sense to take full advantage of the most appropriate commercially available solutions to fast-track the development of an application. For instance, Aldec has an embedded development board called the TySOM-3AZU19EG. Along with a wide range of peripherals, it carries the largest FPGA in the Xilinx Zynq UltraScale+ MPSoC family, a device which has over a million logic cells and includes a quad-core ARM Cortex-A53 platform running up to 1.5 GHz. Importantly, for our purposes, this mammoth MPSoC also supports Xilinx’s deep learning processing unit (DPU), which the company created for machine learning developers. The DPU is a programmable engine dedicated for convolutional neural networks. It is designed to accelerate the computing workloads of DNN algorithms used in computer vision applications, such as image/video classification and object tracking/detection. There is a specific instruction set for DPU which enables it to work efficiently for many convolutional neural networks. Like a regular processor, a DPU fetches, decodes, and executes instructions stored in DDR memory. This unit supports multiple CNNs such as VGG, ResNet, GoogLeNet, YOLO, SSD, MobileNet, and FPN [3]. The DPU IP can be integrated as a block in the programmable logic (PL) of the selected Zynq-7000 SoC and Zynq UltraScale+ MPSoC devices with direct connections to the processing system (PS). To create the instructions for DPU, Xilinx provides a Deep Neural Network Development Kit (DNNDK) tool kit. Xilinx states: “The DNNDK is designed as an integrated framework, which aims to simplify and accelerate deep learning application development and deployment on the Deep Learning Processor Unit (DPU). DNNDK is an optimizing inference engine, and it makes the computing power of DPU become easily accessible. It offers the best of simplicity and productivity to develop deep learning applications, covers the phases of neural network model compression, programming, compilation, and runtime enablement” [4]. 
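Before walking through the DPU tool flow, it helps to see what the network-optimization step amounts to numerically. The NumPy sketch below applies magnitude pruning and symmetric INT8 quantization to a single weight matrix. It illustrates the ideas described earlier (the kind of work the pruning/quantization stage of a tool chain automates); the threshold and the per-tensor scaling scheme are assumptions, not the Xilinx implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.2, size=(64, 128)).astype(np.float32)  # one FP32 layer

# --- Pruning: drop near-zero weights (assumed magnitude threshold of 0.05) ---
threshold = 0.05
mask = np.abs(weights) >= threshold
pruned = weights * mask
print(f"weights kept after pruning: {mask.mean():.1%}")

# --- Quantization: approximate FP32 weights with symmetric INT8 values ---
scale = np.abs(pruned).max() / 127.0          # one scale per tensor (assumed scheme)
q = np.clip(np.round(pruned / scale), -127, 127).astype(np.int8)
dequantized = q.astype(np.float32) * scale    # what the accelerator effectively computes

error = np.abs(dequantized - pruned).max()
print(f"memory per weight: 32 bits -> 8 bits, max reconstruction error: {error:.4f}")
```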
The DNNDK framework comprises the following units:

› DECENT: Performs pruning and quantization to satisfy the low latency and high throughput
› DNNC: It maps the neural network algorithm to the DPU instructions
› DNNAS: For assembling DPU instructions into ELF binary code



› N2Cube: It acts as the loader for the DNNDK applications and handles resource allocation and DPU scheduling. Its core components include DPU driver, DPU loader, tracer, and programming APIs for application development › Profiler: Consists of DPU tracer and DSight. D tracer gathers the raw profiling data while running NN on DPU. DSight uses this data to generate the visualized charts for performance analysis. › Dexplorer: Provides running mode configuration, status checking, and code signature checking for DPU. › DDump: It dumps the info inside DPU ELF or hybrid executable or DPU shared library. It accelerates the analyzing and debugging issues for the users. All of this would fit into a flow as shown in Figure 2. Using DNNDK makes the process of designing an FPGA-based machine learning project much easier for developers. In addition, platforms like the aforementioned Aldec TySOM-3A-ZU19EG board are also there to provide a kickstart. For instance, Aldec has prepared some examples – including gesture detection, pedestrian detection, segmentation, and traffic detection – that target the board, meaning developers are not starting with a blank sheet. Let us consider one such board that was showcased at ArmTechCon earlier this year. A traffic-detection demonstration was built using a TySOM-3A-ZU19EG and an

FIGURE 2: The Deep Neural Network Development Kit (DNNDK) framework makes designing an FPGA-based machine learning project much easier for developers.

FMC-ADAS daughtercard; which provides interfaces and peripherals for up to five HSD cameras, RADAR, LIDAR, and ultrasonic sensors – i.e., sensory inputs for most ADAS applications. Figure 3 shows the architecture of the demo: Two DPUs implemented to FPGA connected to the processing unit using AXI HP ports to perform deep inference tasks such as image classification, object detection, and semantic segmentation. The DPUs require instructions to implement a neural network which are prepared by DNNC and DNNAS tools. They also need access to memory locations for input videos as well as output data. An application is run on the Application Processing Unit (APU) to control the system by managing interrupts and performs data transfer between units. The connection between the DPU and the user application is by DPU API and Linux driver. There are functions to read a new image/video to DPU, run the processing, and send the output back to the user application. Developing and training the model is done using Convolutional Architecture for Fast Feature Embedding (Caffe) outside of the FPGA, whereas optimization and compilation is done using DECENT and DNNC units provided as a part of the DNNDK tool kit (as in Figure 2). In this design, the SSD object detection CNN is used for background, pedestrian, and vehicle detection. In terms of performance, 45 fps was achieved using four input channels, demonstrating the high-performance deep learning application using TySOM-3AZU19EG and the DNNDK tool kit. IAI References [1] Guo, Kaiyuan, et al. “A survey of FPGA-based neural network accelerator.” [2] FPGA-based Accelerators of Deep Learning Networks for Learning and Classification: A Review. [3] DPU for convolutional neural network. “Xilinx.com” [4] DNNDK user guide. “Xilinx.com” [5] Efficient deep neural network acceleration through FPGA-based batch processing.

FIGURE 3: The traffic-detection demo has 5x video input pipelines that are used for data packing, AXI4 to AXI Stream data transferring, color space conversion (YUV2RGB), and sending the videos to memory.


Farhad Fallah is an application engineer with Aldec.



AI DEVELOPMENT TOOLS & FRAMEWORKS

The perplexities of predictive maintenance By Seth Deland, MathWorks

Predictive maintenance enables companies to reduce machine downtime, eliminate unnecessary upkeep, and achieve many other business benefits. However, companies often face challenges around process and data when incorporating the technology into their operations. This article will explore three common obstacles engineers face when implementing predictive maintenance and discuss how to best avoid them. We begin with the fundamental lack of knowledge surrounding the anatomy of a predictive-maintenance workflow, move on to the task of synthesizing and sourcing adequate amounts of data, and finish with one of the most crucial – and frequently missed – components of predictive maintenance: workflow failures and knowing how to predict them. Benefit your business by understanding workflows Many engineers haven’t been properly educated on predictive-maintenance workflows and how to best leverage them. This could be because the company has yet to realize the value of such an investment, is unable to see past the risk of that investment or considers predictive maintenance too advanced for current business needs. Regardless of the reason, there are concrete steps you can take to minimize risk and get started with predictive-maintenance workflows as quickly as possible. The first step to getting started is to understand the five core development stages of predictive maintenance (Figure 1):

FIGURE 1: The basic predictive-maintenance workflow. Courtesy The MathWorks, Inc.


1. Access sensor data: Data – gathered from multiple sources such as databases, spreadsheets, or web archives – must be in the right format and organized correctly for proper analysis. It’s important to remember that large data sets may require out-of-memory processing techniques. 2. Preprocess data: Real-world data is rarely perfect; it has outliers and noise that must be removed to obtain a realistic picture of normal behavior. Additionally, because statistical and machine learning modeling techniques are used later in the process, the quality of those models will be dependent on the quality of the preprocessed data. 3. Extract features: Instead of feeding sensor data directly into machine learning models, it is common to extract features from the



MATHWORKS

www.mathworks.com

FACEBOOK

@MathWorks

TWITTER

@MathWorks

LINKEDIN

www.linkedin.com/company/the-mathworks_2/

YOUTUBE

www.youtube.com/user/MATLAB


sensor data. While the number of features that can be extracted from data is essentially unlimited, common techniques come from domains such as statistics, signal processing, and physics.
4. Train the model: Build models that classify equipment as healthy or faulty, can detect anomalies, or estimate remaining useful life for components. It's useful to try a variety of machine learning techniques in this step, as it's rarely clear beforehand which type of model is best for a given problem.
5. Deploy the model: Depending on the system requirements, predictive models may be deployed to embedded devices or integrated with enterprise IT applications. There are numerous tradeoffs to consider here, as embedded devices provide fast responses and reduce the need to transmit data over the internet, while a centralized IT approach makes it easier to update models in the future.

The Golden Rule: Go with what you know
Understanding the various development stages of a predictive-maintenance workflow is a vital first step toward implementation, yet the idea of fully understanding, developing, and integrating a workflow seems daunting to many. Engineers can quickly and efficiently incorporate predictive maintenance into their daily routine by leveraging existing tools and software. Tools such as MATLAB have predictive-maintenance capabilities that enable engineers to work in a familiar environment. They also provide reference examples, algorithms, and access to technical support, training, and consulting teams. The additional guidance can cement the basics so you and your team can be confident that you approach problems in the best way.

Lack of data: What happens now?
Let's now explore what happens when the challenge is an actual lack of data – the foundation of any predictive-maintenance model. To build machine learning algorithms, which many predictive-maintenance systems rely on, there must be enough data to create an accurate model. This data usually originates from sensors on machinery, but companies can run into issues when data collection is not an option, if they're using new sensors, or when data readings are incorrectly logged and information is limited.
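The article's tooling is MATLAB; purely as a hedged illustration of steps 2 through 4, the Python sketch below computes a few classic condition-monitoring features over synthetic vibration windows and trains a simple classifier. The signal, fault model, feature set, and labels are all invented for the example.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def make_window(faulty):
    """Synthetic one-second vibration window; 'faulty' machines get impulsive spikes."""
    base = rng.normal(0.0, 1.0, 1024)
    if faulty:
        base[rng.integers(0, 1024, 8)] += rng.normal(8.0, 2.0, 8)
    return base

def features(win):
    # Step 3: a few classic condition-monitoring features
    return [np.sqrt(np.mean(win**2)),   # RMS level
            np.max(np.abs(win)),        # peak amplitude
            kurtosis(win)]              # impulsiveness

labels = [0] * 200 + [1] * 200
X = np.array([features(make_window(f)) for f in labels])
y = np.array(labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)   # Step 4
print("held-out accuracy:", model.score(X_test, y_test))
```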

Each of these challenges is solvable. Following are three commonly seen dataaccumulation scenarios as well as techniques and strategies for overcoming hurdles associated with each one. Scenario 1: Ground Zero In this scenario, your department does not collect enough data to train a predictive-maintenance model, and you are unsure what additional data can be sourced and from where. Consider other internal departments that collect data and might be able to supplement your existing data. Sourcing within your organization might be enough to meet your needs. Suppliers and customers also have the potential to supplement data, depending on the size of the business and where it lies in the supply chain. Explore existing agreements and determine whether a collaboration can be fostered. Offering to prolong the health and efficiency of equipment components is just one example of a benefit that would be appreciated across businesses. While this won’t always be possible, the volume of data that could be acquired does actually merit consideration.




Scenario 2: Feast or Famine Here, a department has the tools to capture an adequate amount of data, but the system cannot collect it until a fault occurs. Even worse, the system can only collect event codes and time stamps, meaning that sensors are not collecting data values crucial for developing models that can predict those failures.

Companies can increase the efficiency at which they capture data by changing data-logging options on the internal system, perhaps on a test fleet if production data is not available. It may even be possible to collect and transmit sensor data by reconfiguring existing embedded devices, though external data loggers may be needed when getting started (Figure 2).

FIGURE 2: Configuring data logging to collect and transmit sensor data. Courtesy The MathWorks, Inc.

Scenario 3: Simulation Software
In certain scenarios, simulation tools can play a strong role in helping teams generate test data and combine it with available sensor data to build and validate predictive-maintenance algorithms. Data generated by simulation tools should be compared to measured data, to make sure the simulation is well-calibrated. For example, a DC servo motor model could be built and then calibrated using real-world sensor data.
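As a minimal sketch of what generating labeled failure data from a simulation can look like (using a synthetic signal rather than a calibrated physical model), the Python snippet below perturbs a clean temperature trace with a drift fault and a sudden fault, then labels the results for later training. The signal shape and fault magnitudes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(0, 600.0, 1.0)                        # 10 minutes of 1 Hz samples

def healthy():
    return 70.0 + 0.5 * np.sin(t / 60.0) + rng.normal(0, 0.2, t.size)

def drift_fault():
    return healthy() + 0.01 * t                     # slow overheating trend

def sudden_fault():
    sig = healthy()
    sig[400:] += 15.0                               # step change at t = 400 s
    return sig

# Label and store simulated runs for later feature extraction and training
dataset = [(healthy(), "healthy"), (drift_fault(), "drift"), (sudden_fault(), "sudden")]
for sig, label in dataset:
    print(f"{label:8s} mean={sig.mean():6.1f}  max={sig.max():6.1f}")
```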

Regardless of your specific data needs, all businesses considering data for predictive maintenance should begin analyzing early and strategically. Once you've come to understand the data features that are most important for your goals, you can make informed decisions about which data need to be kept and which do not.

Generating and leveraging failure data
Having an estimate of the time until failure is useful; even more valuable, however, is information that describes the type of failure expected to occur and the root cause. Models that predict the type of failure can be trained on historical failure data, but engineers don't always have access to failure data for the various failure scenarios.

Teams can apply two approaches to make sure the lack of failure data doesn't become a fatal deficiency during predictive-maintenance implementation: first, they can generate sample failure data; and second, they need to understand the data that's available.

Commonly used tools such as failure-mode-effects analysis (FMEA) can provide useful starting points for determining which failures to simulate. From here, engineers can incorporate behaviors into the model in a variety of scenarios, which simulate failures by adjusting temperatures, flow rates, or vibrations, or by adding a sudden fault. When simulated, the scenarios result in failure data that can be labeled and stored for further analysis.

Next, the team must understand the data that's available: Depending on what sensors are available, certain types of failures may require looking at several sensors simultaneously to identify undesirable behavior. But looking at the raw data from dozens or hundreds of sensors can be intimidating. In this case, unsupervised learning techniques (a branch of machine learning) such as principal component analysis (PCA) will transform raw sensor data into a lower-dimensional representation. This data can be visualized and analyzed much more easily than high-dimensional raw data, enabling you to find valuable patterns and trends in unlabeled data. Even if failure data is not present, operations data might indicate trends about how a machine degrades over time and estimate remaining useful life (RUL) for components.

Simple ways to reduce the learning curve
Another common obstacle engineers face involves the modeling and testing of algorithms that may seem foreign and intimidating. Engineers looking to reduce this learning curve can follow these three simple steps:

› Define goals: Define upfront what your goals are (e.g., earlier identification of failures, longer cycles, decreased downtime), and how the predictive-maintenance algorithm will affect them. As an early step, build a framework that can test an algorithm and estimate its performance relative to your goals to enable faster design iterations. This will ensure that all different approaches are considered on a level playing field.
› Start small: Practice using a project with a deeply understood system, the simpler the better. For example, start by looking at things at the component level rather than at the system or subsystem level. This approach will reduce the number of faults that need to be investigated and shorten the time to develop an initial prototype.
› Gain confidence: When you start seeing promising results, use the domain knowledge within your team to predict different outcomes based on their cost and severity. Run a predictive-maintenance model in the background of existing maintenance procedures to understand how the model works in practice. IAI

Seth Deland is product marketing manager, Data Analytics, MathWorks.

OpenSystems Media E-cast Panel Discussion: How Artificial Intelligence Puts the “Smart” in Smart Buildings Sponsored by Arkessa, KMC Controls, Riptide, Prescriptive Data Artificial intelligence (AI) can revolutionize building automation, especially through the use of advanced analytics. With this webcast, you’ll understand how combining advanced analytics with a rich cloud presence can ensure that maximum value is being attained in the smart building. https://bit.ly/33tAc4N



AI DEVELOPMENT TOOLS & FRAMEWORKS

Applying machine learning on mobile devices By Igor Markov, Auriga Inc.

In the modern world, machine learning is used in various fields: image classification, consumer demand forecasting, personalized film and music recommendations, and clustering. At the same time, for fairly large models, computing the result (and, to a much greater degree, training the model) can be a resource-intensive operation. To use trained machine-learning models on devices other than the most powerful ones, Google introduced its TensorFlow Lite framework. To work with it, you need to train a model built using the TensorFlow framework (initially NOT the Lite framework) and then convert it to the TensorFlow Lite format. After that, the model can be easily used on embedded or mobile devices. In this article, we will describe all the steps for running a model on Android.

Training and transfer of the model
For example, take one of the standard MobileNet models. Training will be conducted on a set of pictures (ILSVRC-2012-CLS).

For training, we will download a set of models on a rather powerful Linux OS machine:

git clone https://github.com/tensorflow/models.git

Install the build system, called Bazel, according to the instructions on the site https://docs.bazel.build/versions/master/install.html. Launch training of the model (Figure 1):

bazel build -c opt mobilenet_v1_{eval,train}
./bazel-bin/mobilenet_v1_train --dataset_dir <path to the set of pictures> --checkpoint_dir <path to the checkpoints>

FIGURE 1: These commands will launch training on the MobileNet model.

These commands will perform the training of the model and create files with the *.tflite extension required for use by the TensorFlow Lite interpreter. More generally, if the model is described using the TensorFlow framework, after training it should be saved and transferred to the TensorFlow Lite format using the converter: https://www.tensorflow.org/lite/convert/index. In our case, this step is not required.

Android application
The application will contain the following functions:

› Image capture from camera in real time
› Image classification using the TensorFlow Lite model
› Display of the classification result on screen

The source code for the "image_classification" example from https://www.github.com/tensorflow/examples/ can be used as a template for this application.
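For the general case mentioned above, where a model is trained with standard TensorFlow rather than produced directly as a .tflite file, the conversion step looks roughly like the sketch below. It assumes a TensorFlow 2.x SavedModel export directory named exported_mobilenet (a hypothetical path) and is illustrative rather than the exact commands used in this demo.

```python
import tensorflow as tf

# Assumed path to a trained SavedModel; replace with your own export directory.
converter = tf.lite.TFLiteConverter.from_saved_model("exported_mobilenet")
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # optional post-training quantization

tflite_model = converter.convert()
with open("mobilenet_v1_1.0_224.tflite", "wb") as f:
    f.write(tflite_model)
```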



AURIGA INC

www.auriga.com

TWITTER

@aurigainc

LINKEDIN

www.linkedin.com/company/auriga

FACEBOOK

@aurigaLLC

YOUTUBE

www.youtube.com/channel/UCceaRiy09giqrw8_9p8uPCw

Real-time image capture The “android.hardware.camera2” framework will be used, available starting from Android 5. Using the CameraManager system service, we will get access to the camera and receive frames in the onImageAvailable method by implementing the OnImageAvailableListener interface. Frames will come several times per second, with frequency depending on the hardware implementation of the built-in camera. For clarity, we will also display the image received from the camera on the screen by placing the TextureView component in a layout with a size that matches the screen size (except for a small section at the bottom of the TextureView, where we will place information on the results of the object classification). To do this, we will associate this component with the output from the camera. In the onImageAvailable method, we will first get the last available frame by calling acquireLatestImage (). A situation may arise in which the frame classification takes a long time, and during this period, the camera may issue more than one frame. That is why we take the last frame, skipping unprocessed frames. The frame comes in the YUV420 format. Let’s convert it to ARGB8888 format by calling convertYUV420 to the ImageUtils library. Since the format we need is an array of single-precision floating-point numbers in the range from -1 to 1, we will also perform this conversion.
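In the demo this conversion is done in Java on the ARGB byte values; numerically it boils down to the per-channel scaling shown in the short Python/NumPy sketch below. The 127.5 offset and scale are one common convention for MobileNet-style models and are an assumption here.

```python
import numpy as np

# Assume an H x W x 3 array of 8-bit RGB values extracted from the ARGB frame
rgb = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)

# Map [0, 255] byte values to floating-point values in the range [-1, 1]
model_input = rgb.astype(np.float32) / 127.5 - 1.0
print(model_input.min(), model_input.max())   # roughly -1.0 ... 1.0
```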


Image classification
Before performing the classification, at the beginning of the application execution, it is necessary to load the model from the wired-in file of the application itself stored in the Assets directory. You need to copy to this directory the model file mobilenet_v1_1.0_224.tflite, obtained at the "Training and transfer of the model" stage, and the description of classification objects, labels.txt.

After that, a TensorFlow Lite interpreter object is created. The model is passed as a constructor parameter. Classification is performed by simply calling the interpreter's run method. The first parameter of the method is a recognizable frame in the format of an array of floating-point numbers, which we obtained from the camera. The second parameter is an array of floating-point arrays. These arrays will be filled with the performance results of the model. The result in our case is an array of probabilities. Each element of the array with index i is the probability that the i-th object is in the frame.

Display of classification result on screen
Let's display on the screen the name of the most probable object using the preloaded names of the objects from the file with names stored in Assets. In the model we use, the names are in English, but you can translate them into another language beforehand by changing the file labels.txt. Alternatively, you can display not just the name of the most probable object but a list of the three to five most probable objects alongside the corresponding probabilities.

Conclusion
We considered how a pretrained image-classification model can be used in the Android OS. In addition, the use of models is also possible on embedded devices, since the TensorFlow Lite interpreter also has a C++ interface and takes up about 300 KB of memory. IAI

Igor Markov is a software engineer at Auriga Inc. He has solid expertise in mobile application development with over 15 years' experience in commercial software development. He is also experienced in low-level embedded system and kernel driver development.


AI FOR INDUSTRY

Demystifying AI and machine learning for digital health applications By Arvind Ananthan, MathWorks

The first wave of FDA-approved wearable digital health monitors integrated with consumer products such as smart watches are just becoming available. Medical sensor technology continues to advance at a rapid pace, allowing compact, cost-effective, and increasingly accurate physiological sensors to make their way into off-the-shelf wearable devices. One of the real drivers of this transformation is the availability of cutting-edge machine learning (ML) and artificial intelligence (AI) algorithms that can extract and interpret meaningful information from vast troves of data. Medical sensors, like those used in wearable digital health monitors, are tasked with collecting huge amounts of data from users. This information includes noisy data and not-so-perfect signals (such as ECG [electrocardiogram] data from a smart watch) corrupted with various artifacts that are hard to process using traditional algorithms that tend to be deterministic and rules-based. Until recently, unlocking the secrets in a physiological signal coming from these sensors to form reasonably accurate decisions acceptable for regulatory submissions was challenging and often impossible. Advances in ML and AI algorithms are now enabling engineers and scientists to overcome many of these challenges. We’ll take a closer look at the overall architecture of algorithms for processing physiological signals and demystify its operations, turning it into more real-world engineering founded in decades of research. To illustrate the power of a simple ML algorithm, let’s include a video (https://www. mathworks.com/videos/signal-processing-for-machine-learning-99887.html) that describes how the data from an accelerometer in an activity tracker can predict the various states of motion or rest of the wearer. We can extend this approach to more complex real-world medical signals such as ECG and develop algorithms that can automatically classify ECG signals as normal or exhibiting atrial fibrillation. Developing ML algorithms consists of two primary steps (Figure 1). The first step in this workflow is feature engineering, in which certain numerical/mathematical features from the data set of interest are extracted and presented to the subsequent step. In the second step, the extracted features are fed into a well-known statistical classification or a regression algorithm such as a support vector machine or a traditional neural network


configured appropriately to come up with a trained model that can then be used on a new data set for prediction. Once this model is iteratively trained using a wellrepresented labeled data set until satisfactory accuracy is achieved, it can then be used on a new data set as a prediction engine in a production environment. So how does this workflow look for an ECG signal classification problem? For this case study, we turn to the 2017 PhysioNet Challenge data set, which uses real-world single-lead ECG data. The objective is to classify a patient’s ECG signal as one of the four categories: Normal, Atrial Fibrillation, Other Rhythm, and Too Noisy. The overall process and the various steps for tackling this problem in MATLAB are shown in Figure 2. Preprocessing and feature engineering The feature engineering step is perhaps the hardest part in developing a robust



machine learning algorithm. Such a problem cannot simply be treated as a “datascience” problem, as it is important to have the biomedical engineering domain knowledge to understand the different types of physiological signals and data when exploring the various approaches in solving this problem. Tools such as MATLAB bring the data analytics and advanced ML capabilities to the domain experts and enable them to focus on feature engineering by making it easier to apply data-science capabilities such as advanced ML capabilities to the problems they are solving. In this example, we use advanced wavelets techniques for signal processing to remove noise and slowmoving trends such as breathing artifacts from the data set and extract various features of interest from the signals. Developing the classification model The Classification Learner App in Statistics and Machine Learning Toolbox is a particularly effective starting point for engineers and scientists that are new to ML. In our example, once a sufficient number of useful and relevant features are extracted from the signals, we use this app to quickly explore various classifiers and their performance and narrow down our options for further optimization. These classifiers include decision trees, random forests, support vector machines, and K-nearest neighbors (KNN). These classification algorithms enable the user to try out various strategies and choose the ones that provide best classification performance for the feature set (typically evaluated using metrics such as confusion matrix or an area under ROC curve). In our case, we very quickly achieved ~80 percent overall accuracy for all the classes, simply following this approach (the winning entries for this competition scored around 83 percent). Note that we have not spent much time on feature engineering or classifier tuning, as the focus was on validating the approach. Typically, spending some time on feature engineering and classifier tuning leads to significant further improvement in classification accuracy. More advanced techniques such as deep learning can also be applied to such problems where the feature engineering and extraction and classification steps are combined in a single training step, although this approach typically

FIGURE 1: A typical machine learning workflow comprising training and testing stages. Figure courtesy The MathWorks.

FIGURE 2: MATLAB workflow for developing machine learning algorithms to classify ECG signals. Figure courtesy The MathWorks.

requires a much larger training data set for this to work well compared to traditional ML techniques. Challenges, regulations, and promises While many of the commonly available wearable devices are not quite ready to replace their FDA-approved and medically validated counterparts, all technology and consumer trends are strongly pointing in that direction. The FDA is starting to play an active role in simplifying regulations and encouraging the evolution of regulatory science specifically through initiatives such as the Digital Health Software Precertification Program and modeling and simulation in device development. The vision of human physiological signals collected from daily-use wearables becoming the new digital biomarkers that can provide a comprehensive picture of our health is becoming more real now than ever, in large part due to the advances in signal-processing and ML/deep-learning algorithms. Workflows enabled by tools such as MATLAB are enabling medical devices’ domain experts to apply and utilize datascience techniques such as ML without having to be experts in data science. IAI Arvind Ananthan is the Global Medical Device Industry Manager at MathWorks. He has extensive experience working with medical device engineers, academic researchers, and regulatory authorities. Arvind – with a background in signal processing and electrical engineering – joined MathWorks 15 years ago as a technical sales engineer working with embedded systems before moving into his current role, where he identifies and addresses challenges faced by the medical device industry. MATHWORKS

www.mathworks.com


TWITTER

@MathWorks

LINKEDIN

www.linkedin.com/company/the-mathworks_2/

FACEBOOK

@MathWorks



AI ENABLEMENT

Low battery self-discharge: The key to long-life remote wireless sensors By Sol Jacobs, Tadiran Batteries

While various energy-saving techniques can help extend battery life, low annual self-discharge rate is the most critical one of these for remote wireless devices. Remote wireless devices increasingly require industrial-grade lithium batteries to deliver long-term power for applications ranging from system control and data automation (SCADA) to automated process control, artificial intelligence (AI), and machine learning (ML). Battery-powered remote wireless devices that draw small amounts of current (microamps to milliamps) can last longer with the use of low-power communications protocols (WirelessHART, ZigBee, LoRa, etc.) and low-power chipsets, including the latest “always-on” technologies. However, these energy-saving techniques often pale in comparison to the energy losses resulting from battery self-discharge. Application-specific requirements dictate the ideal power supply Remote wireless devices that draw microamps of average current are often paired with industrial-grade primary (non-rechargeable) lithium batteries. If the device draws milliamps of average current, enough to quickly exhaust a primary cell, then it may be better suited for energy harvesting in conjunction with a rechargeable Lithium-ion (Li-ion) battery to store the harvested energy.


Numerous factors must be considered when specifying a battery for a remote wireless application, including the amount of current consumed in active mode (including the size, duration, and frequency of pulses); energy consumed in "stand-by" mode (the base current); storage time (as normal self-discharge during storage diminishes capacity); thermal environments (including storage and in-field operation); equipment cut-off voltage, which drops as cell capacity is exhausted, or in extreme temperatures; and the self-discharge rate of the battery, which can exceed the daily energy consumed by actual use. Choices among primary lithium batteries Lithium batteries feature a higher intrinsic negative potential that exceeds all other metals. As the lightest nongaseous metal, lithium offers the highest specific energy (energy per unit weight) and energy density (energy per unit volume) of all available battery chemistries. Lithium cells operate within a normal operating current voltage (OCV) range of 2.7 V to 3.6 V. Lithium batteries are also nonaqueous and are therefore better adapted to extreme cold. Available primary battery chemistries include iron disulfate (LiFeS2), lithium manganese dioxide (LiMnO2), lithium thionyl chloride (LiSOCl2), alkaline, and lithium metal oxide (see Table 1). The preferred chemistry for ultra-long-life applications is lithium thionyl chloride (LiSOCl2), which is constructed two ways: bobbin-type and spiral-wound. Bobbin-type LiSOCl2 batteries feature the highest capacity and energy density, and extremely low self-discharge (under 1% per year for certain cells), thus enabling 40-year battery life for certain low-power applications. These cells also feature the widest temperature range (-80 °C to 125 °C) and a superior glass-to-metal hermetic seal to help prevent leakage. Specially modified bobbin-type LiSOCl2 batteries are used in the cold chain to monitor the transport of frozen foods, pharmaceuticals, tissue samples, and transplant organs



at temperatures as low as -80 °C. These batteries can also handle extreme heat, including active RFID tags that track the location and status of medical equipment without having to remove the battery prior to autoclave sterilization at 125 °C.

The self-discharge rate of a bobbin-type LiSOCl2 battery varies depending on the method of manufacturing and the quality of the raw materials. A superior-quality bobbin-type LiSOCl2 cell can feature a self-discharge rate of 0.7% per year, retaining over 70% of its original capacity after 40 years. By contrast, a lower-quality bobbin-type LiSOCl2 cell can have a self-discharge rate as high as 3% per year, losing 30% of its capacity every 10 years, making 40-year battery life unachievable.
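Those self-discharge figures compound over decades and interact with the load current. The short Python calculation below, using assumed illustrative numbers rather than manufacturer specifications and a simplified loss model, shows why the difference between 0.7% and 3% per year translates into a decades-long gap in service life.

```python
CAPACITY_MAH = 2400.0   # assumed nominal capacity of a bobbin-type LiSOCl2 AA cell
AVG_LOAD_UA = 5.0       # assumed average current drawn by the wireless sensor node
LOAD_MAH_PER_YEAR = AVG_LOAD_UA * 1e-3 * 24 * 365   # ~44 mAh consumed by the device per year

for rate_pct in (0.7, 3.0):
    # Capacity retained after 40 years of self-discharge alone (compounded annually)
    retained = (1 - rate_pct / 100.0) ** 40

    # Rough service life: self-discharge (approximated as a fixed annual loss) plus the load
    annual_loss = CAPACITY_MAH * rate_pct / 100.0 + LOAD_MAH_PER_YEAR
    life_years = CAPACITY_MAH / annual_loss

    print(f"{rate_pct}%/yr: {retained:.0%} capacity left after 40 idle years, "
          f"~{life_years:.0f} years of service at {AVG_LOAD_UA} uA average load")
```

With these assumed numbers the low-rate cell supports roughly a 40-year deployment, while the 3%-per-year cell falls to roughly half that, consistent with the figures cited above.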

TABLE 1: Comparison of primary lithium cells.

Primary Cell | LiSOCl2 bobbin-type with hybrid layer capacitor | LiSOCl2 bobbin-type | Li metal oxide (modified for high capacity) | Li metal oxide (modified for high power) | Alkaline | LiFeS2 (lithium iron disulfate) | LiMnO2 (CR123A)
Energy Density (Wh/l) | 1,420 | 1,420 | 370 | 185 | 600 | 650 | 650
Power | Very High | Low | Very High | Very High | Low | High | Moderate
Voltage | 3.6 to 3.9 V | 3.6 V | 4.1 V | 4.1 V | 1.5 V | 1.5 V | 3.0 V
Pulse Amplitude | Excellent | Small | High | Very High | Low | Moderate | Moderate
Passivation | None | High | Very Low | None | N/A | Fair | Moderate
Performance at Elevated Temp. | Excellent | Fair | Excellent | Excellent | Low | Moderate | Fair
Performance at Low Temp. | Excellent | Fair | Moderate | Excellent | Low | Moderate | Poor
Operating Life | Excellent | Excellent | Excellent | Excellent | Moderate | Moderate | Fair
Self-Discharge Rate | Very Low | Very Low | Very Low | Very Low | Very High | Moderate | High
Operating Temp. | -55 °C to 85 °C (can be extended to 105 °C for a short time) | -80 °C to 125 °C | -45 °C to 85 °C | -45 °C to 85 °C | 0 °C to 60 °C | -20 °C to 60 °C | 0 °C to 60 °C




Higher self-discharge rates may take years to detect, whereas quickly run test data can be highly misleading. Applications that demand long-life power, especially in extreme environments, require thorough due diligence to properly evaluate battery suppliers. For example, meter transmitter units (MTUs) used in AMR/AMI utility metering applications require long-life batteries, since a large-scale battery failure can prove highly disruptive to billing systems and disable remote startup/shutoff capabilities. Fear of disruption and compromised data integrity has forced utility companies to invest millions of dollars to prematurely replace batteries in MTUs to avoid the risk of a chaotic disruption. High pulses power two-way wireless communications Periodic high pulses are required to power advanced two-way wireless communications. Standard bobbin-type LiSOCl2 cells have a low-rate design that is not ideal for delivering high pulses, which can be overcome with the addition of a patented hybrid layer capacitor (HLC). The standard bobbin-type LiSOCl2 cell delivers low daily background current, while the HLC stores pulses of up to 15A. The HLC also has a unique end-of-life voltage plateau that enables "low battery" status alerts. Consumer electronics often use supercapacitors to store pulses electrostatically rather than chemically. Supercapacitors are ill-suited for industrial applications due to inherent limitations, including: short-duration power; linear discharge qualities that do not allow for use of all the available energy; low capacity; low energy density; and very high self-discharge (up to 60% per year). Supercapacitors linked in series also require cell-balancing circuits, which adds cost and bulkiness; these also consume extra energy to further accelerate self-discharge. The niche for energy harvesting keeps growing Energy harvesting is ideal for remote wireless applications that draw milliamps of current, enough to prematurely



exhaust a primary battery. Photovoltaic (PV) panels are the most popular and proven form of energy harvesting, with equipment movement, vibration, temperature differences, and ambient RF/EM signals being used in certain instances. For example, small solar/PV panels are being combined with industrial-grade Li-ion batteries to track unpowered assets. Solar/Li-ion hybrid systems also power parking-meter fee collection systems, with AI-enabled sensors deployed to identify open parking spots.

Consumer-grade rechargeable Li-ion cells often suffice if the device is easily accessible, with a maximum operating life of 5 years and 500 recharge cycles, along with a moderate temperature range (0 to 40 °C), and no high pulse requirements (see Table 2). By contrast, industrial-grade Li-ion batteries can operate for up to 20 years and 5,000 full recharge cycles, with an expanded temperature range (-40 °C to 85 °C), and the ability to deliver high pulses for two-way wireless communications.

TABLE 2: Comparison of consumer versus industrial Li-ion rechargeable batteries (an industrial-grade TLI-1550 AA cell versus a standard Li-ion 18650), covering dimensions, nominal voltage, maximum discharge rates, capacity, energy density, power at room and low temperature, operating and charging temperature ranges, self-discharge rate, cycle life, and operating life.

Industrial-grade batteries can reduce the cost of ownership of remote wireless devices deployed in hard-to-access locations and extreme environments. When specifying an industrial-grade battery, conduct thorough due diligence to ensure that the battery-powered solution will last as long as the device to minimize the cost of ownership. All battery manufacturers being evaluated should be required to supply documented long-term test results, in-field performance data from equivalent applications, and multiple customer references. IAI

Sol Jacobs is VP and general manager of Tadiran Batteries.

TADIRAN BATTERIES
www.tadiranbat.com


TWITTER @TadiranBat
FACEBOOK @TadiranBat



BY ENGINEERS, FOR ENGINEERS In the rapidly changing technology universe, embedded designers might be looking for an elusive component to eliminate noise, or they might want low-cost debugging tools to reduce the hours spent locating that last software bug. Embedded design is all about defining and controlling these details sufficiently to produce the desired result within budget and on schedule. Embedded Computing Design (ECD) is the go-to, trusted property for information regarding embedded design and development.

embedded-computing.com


Industrial AI & Machine Learning Resource Guide

2019

RESOURCE GUIDE PROFILE INDEX

AI & MACHINE LEARNING
Crystal Group, Inc. ............ 39

APPLICATIONS: SECURITY
ADL Embedded Solutions, Inc. ............ 39

APPLICATIONS: INDUSTRIAL AUTOMATION/CONTROL
ACCES I/O Products, Inc. ............ 40
ADL Embedded Solutions, Inc. ............ 41
American Portwell Technology, Inc. ............ 41
Vector Electronics & Technology, Inc. ............ 42
WinSystems, Inc. ............ 43

HARDWARE MODULES/SYSTEMS FOR MACHINE LEARNING
Avnet Integrated ............ 44, 45
Virtium ............ 46

INDUSTRIAL
congatec ............ 47



AI & Machine Learning

Crystal Group RIA™ AI & Autonomy Solution

Accelerate innovation and conquer complexity in artificial intelligence (AI), automation, autonomous vehicles (AVs), advanced driver-assistance systems (ADAS), machine learning, sensor fusion, unmanned platforms, and many other high-tech projects with Crystal Group RIA™ – Crystal Group Rugged Intelligence and Autonomy.

Put advanced projects on the fast track to market with Crystal Group RIA, designed specifically to reduce development time and streamline systems integration to speed past competitors. Crystal Group RIA combines impressive compute power, data-handling capabilities, and storage capacity in a compact, rugged solution that can withstand harsh conditions and environments – including potholes, collisions, and extreme temperatures that could cause traditional systems to fail.

Available in custom or off-the-shelf configurations, Crystal Group RIA high-performance computers sport the latest Intel® processors, high-capacity DDR4 memory, and sophisticated power and thermal management stabilized in a size, weight, and power (SWaP)-optimized aluminum chassis. Built for safety and reliability, Crystal Group RIA systems leverage 35 years of experience tailoring high-performance, fail-safe rugged hardware for hundreds of defense missions, challenging industrial and commercial applications, aerospace programs, ground vehicle platforms, and a broad array of power, telecommunications, and critical infrastructure projects.

Crystal Group, Inc. crystalrugged.com

FEATURES
• 10-32VDC input power
• Light weight aluminum construction – 30-40 lbs.
• Up to 2 TB DDR4 of memory
• Versatility with two removable 15mm drives or three removable 9.5mm drives
• Expandable with six PCI-E slots
• Liquid cooled to maximize compute density
• Intel® Xeon® Scalable Processors

crystalrugged.com

info@crystalrugged.com  800-378-1636 www.linkedin.com/company/crystal-group/

@CrystalGroup

Applications: Security

Compact Embedded Computers

Built with the latest rugged, industrial IoT and cyber security needs in mind, ADL Embedded Solutions' family of fanless, industrial PC solutions meets the performance and size requirements of a multitude of IIoT applications. The heart of ADL's compact embedded system designs are the Intel® Atom® E3800- or E3900-based SBCs as small as 75mm x 75mm. These standalone SBCs are the building blocks for compact systems as small as 1.3" x 3.4" x 3.2" and offer vertical or horizontal expansion using the Edge-Connect form factor to maintain a small footprint for your specific application. The expansion connector features a number of interfaces including PCIe x1, USB 2.0/3.0, SATA, SMBus, and DisplayPort.

ADL offers a range of COTS peripheral modules that can easily be integrated for added I/O functions like CAN, Ethernet, GPIO, Serial COM, storage, and much more. Customers can also define custom peripheral boards for special I/O or power supply requirements, as well as the custom enclosures necessary for complete solutions.

FEATURES
• Ultra-small, compact footprint
• Intel® E3800 or E3900-Series Atom processors
• Vertical or horizontal expansion
• Edge-Connect form factor
• Wide Temperature
• Up to 15-year availability
• Custom System Design Services available
• Custom PC Integration

Applications: Industrial IoT (IIoT) network and cloud computing, cyber security edge devices for networks, ICS and SCADA threat security, secure networking (routing, traffic monitoring, and gateways), intelligent machinery and equipment controllers, unmanned or autonomous vehicle mission/payload computing, traffic engineering, transportation mobile computing, wind turbine datalogging and collision avoidance, oil and gas, and kiosk and ATM applications.

ADL Embedded Solutions, Inc.
www.adl-usa.com
sales@adl-usa.com  855-727-4200  @ADLEmbedded  www.linkedin.com/company/adl-embedded-solutions




Applications: Industrial Automation/Control

mPCIe-ICM Family PCI Express Mini Cards

The mPCIe-ICM Series isolated serial communication cards measure just 30 x 51 mm and feature a selection of 4 or 2 ports of isolated RS232/422/485 serial communications. 1.5kV isolation is provided port-to-computer and 500V isolation port-to-port on ALL signals at the I/O connectors. The mPCIe-ICM cards have been designed for use in harsh and rugged environments such as military and defense, along with applications such as health and medical, point of sale systems, kiosk design, retail, hospitality, automation, and gaming.

The RS232 ports provided by the card are 100% compatible with every other industry-standard serial COM device, supporting TX, RX, RTS, and CTS. The card provides ±15kV ESD protection on all signal pins to protect against costly damage to sensitive electronic devices due to electrostatic discharge. In addition, they provide Tru-Iso™ port-to-port and port-to-PC isolation. The serial ports on the device are accessed using a low-profile, latching, 5-pin Hirose connector. Optional breakout cables are available and bring each port connection to a panel-mountable DB9-M with an industry-compatible RS232 pin-out.

The mPCIe-ICM cards were designed using type 16C950 UARTs and use 128-byte transmit/receive FIFO buffers to decrease CPU loading and protect against lost data in multitasking systems. New systems can continue to interface with legacy serial peripherals, yet benefit from the use of the high-performance PCI Express bus. The cards are fully software compatible with current PCI 16550-type UART applications and allow users to maintain backward compatibility.
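As a rough usage sketch (not vendor-supplied code): once the card's ports enumerate as ordinary COM/tty devices under the operating system's standard UART driver, they can be opened like any other serial port. The device node, baud rate, and attached-instrument query below are assumptions for illustration only.

```python
# Minimal sketch: exercising one isolated RS-232 port that enumerates as a
# standard serial device. Device node and settings are assumptions.
import serial  # pyserial

port = serial.Serial(
    "/dev/ttyS4",            # hypothetical node assigned to the card's first port
    baudrate=115200,         # a standard rate; custom rates depend on driver support
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    rtscts=True,             # the card supports RTS/CTS handshaking
    timeout=1.0,
)

port.write(b"*IDN?\r\n")     # example query to a hypothetical attached instrument
reply = port.readline()      # blocks for up to the 1 s timeout
print(reply.decode(errors="replace"))
port.close()
```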

ACCES I/O Products, Inc. www.accesio.com

FEATURES
• PCI Express Mini Card (mPCIe) type F1, with latching I/O connectors
• 4 or 2-port mPCIe RS232/422/485 serial communication cards
• Tru-Iso™ 1500V isolation port-to-computer and 500V isolation port-to-port on ALL signals
• High performance 16C950 class UARTs with 128-byte FIFO for each TX and RX
• Industrial operating temperature (-40°C to +85°C) and RoHS standard
• Supports data communication rates as high as 3Mbps – 12MHz with custom crystal
• Custom baud rates easily configured
• ±15kV ESD protection on all signal pins
• 9-bit data mode fully supported
• Supports CTS and RTS handshaking

contactus@accesio.com

linkedin.com/company/acces-i-o-products-inc.

 858-550-9559 twitter.com/accesio

Applications: Industrial Automation/Control

USB3-104-HUB – Rugged, Industrial Grade, 4-Port USB 3.1 Hub

Designed for the harshest environments, this small industrial/military grade 4-port USB 3.1 hub features extended temperature operation (-40°C to +85°C), locking USB and power connections, and an industrial steel enclosure for shock and vibration mitigation. The OEM version (board only) is PC/104-sized and can easily be installed in new or existing PC/104-based systems as well.

The USB3-104-HUB makes it easy to add USB-based I/O to your embedded system or to connect peripherals such as external hard drives, keyboards, GPS, wireless, and more. Real-world markets include Industrial Automation, Security, Embedded OEM, Laboratory, Kiosk, Military/Mission Critical, Government, and Transportation/Automotive.

This versatile four-port hub can be bus powered or self (externally) powered. You may choose from two power inputs (power jack and terminal block) to provide a full 900mA source at 5V on each of the downstream ports. Additionally, a wide-input power option accepts from 7VDC to 28VDC. All type A and type B USB connections feature a locking, high-retention design.
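A quick power-budget check is useful when deciding between bus power and the external input. The sketch below works through the worst-case arithmetic; the DC-DC conversion efficiency is an assumed, illustrative figure rather than a published specification.

```python
# Back-of-the-envelope power budget for a four-port hub in self-powered mode.
PORTS = 4
PORT_CURRENT_A = 0.9         # full 900 mA available per downstream port
PORT_VOLTAGE_V = 5.0
INPUT_VOLTAGE_V = 12.0       # anywhere in the 7-28 VDC wide-input range
EFFICIENCY = 0.85            # assumed DC-DC conversion efficiency (illustrative)

downstream_w = PORTS * PORT_CURRENT_A * PORT_VOLTAGE_V      # 18 W worst case
input_current_a = downstream_w / (EFFICIENCY * INPUT_VOLTAGE_V)

print(f"Worst-case downstream load: {downstream_w:.1f} W")
print(f"Approximate input current at {INPUT_VOLTAGE_V:.0f} V: {input_current_a:.2f} A")
```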

ACCES I/O Products, Inc. www.accesio.com


FEATURES
• Rugged, industrialized, four-port USB 3.1 hub
• USB 3.1 Gen 1 with data transfers up to 5Gbps (USB 2.0 and 1.1 compatible)
• Extended temperature (-40°C to +85°C) for industrial/military grade applications
• Locking upstream, downstream, and power connectors prevent accidental disconnects
• SuperSpeed (5Gbps), Hi-speed (480Mbps), Full-speed (12Mbps), and Low-speed (1.5Mbps) transfers supported
• Supports bus-powered and self-powered modes, accessible via DC power input jack or screw terminals
• LED for power, and per-port RGB LEDs to indicate overcurrent fault, High-Speed, and SuperSpeed
• Wide input external power option accepts from 7-28VDC
• OEM version (board only) features PC/104 module size and mounting compatibility

contactus@accesio.com

linkedin.com/company/acces-i-o-products-inc.


 858-550-9559 twitter.com/accesio



Applications: Industrial Automation/Control

ADLE3800SEC Intel® E3800 Series Edge-Connect SBC

Measuring just 75mm x 75mm, the ADLE3800SEC is an embedded SBC specially optimized for Size, Weight, and Power (SWaP) applications. Based on the E3800 series Intel Atom™ SoC, this tiny board delivers maximum performance in the smallest possible size. It features a quad-core processor with up to 2MB onboard cache and an integrated Intel HD Graphics engine with support for DirectX 11, OpenGL 4.0, and full HD video playback.

ABOUT EDGE-CONNECT ARCHITECTURE: Via the backside board-edge connector, additional I/O is easily accessible using standard and customer-specific breakout boards. Easy expansion helps reduce cabling, integration time, and system size while increasing quality and overall MTBF. Easily connect to sensors, cameras, and storage with a full range of onboard I/O: 2x Gigabit LAN, 1x USB 3.0, 1x USB 2.0, 2x PCIe, and SATA. The Intel HD Graphics engine supports video output in either HDMI or DisplayPort format. An onboard M.2 socket allows users to install the fastest solid state storage solutions on the market. Extended temperature ratings and the hard-mounted Edge-Connect design make the ADLE3800SEC ideal for industrial embedded applications.

FEATURES
• Small Size (75mm x 75mm)
• 4GB soldered DRAM (DDR3-1333 MHz)
• Low-power Atom® processor (8W TDP)
• Quad-Core/Dual-Core Versions Available
• M.2 Storage Socket
• Onboard Expansion Connector
• Extended Temperature Available

APPLICATIONS: UAV/UUV Unmanned Systems, Industrial Control Systems, Government and Defense, Video Surveillance, Small Scale Robotics, Remote Datalogging, Man-Wearable Computing.

ADL Embedded Solutions, Inc.
www.adl-usa.com

sales@adl-usa.com

 855-727-4200

www.linkedin.com/company/adl-embedded-solutions

@ADLEmbedded

Applications: Industrial Automation/Control

KUBER Series

The KUBER-2000 series is a new generation of palm-sized, flexible, ready-to-use industrial IoT appliances designed for a variety of applications in the Industry 4.0 world. At less than 4 inches in length, it offers scalable Intel® Celeron® or Intel Atom® x5 processors with 2 or 4 cores, robust aluminum housing, and common industrial features. The flexible expansion design ensures that various I/O choices for different applications are satisfied. Ideal applications include industrial/factory automation, facility management, transportation, intralogistics or smart warehouse, medical equipment, communication testing equipment, electrical charging station management, automated guided vehicles (AGVs), IoT nodes for data collection/management, and edge computing.

With 6 models in its family so far, the base model KUBER-2110 offers the industrial IoT market a rich portfolio of ultra-small form factor devices with a modular expansion design that ensures various I/O selections for different applications with minimal investment. With different I/O integrations to choose from, other members of the family include:

KUBER-212A: Suitable for edge controller or computing in harsh industrial environments
KUBER-212B: Suitable for automated guided vehicle (AGV) control or management
KUBER-212D: Suitable for Ethernet-powered IoT devices
KUBER-212E: Suitable for IoT gateways in industrial automation environments
KUBER-212G: Suitable for IoT gateways with enriched I/Os for expansion

American Portwell Technology, Inc.

https://www.portwell.com


FEATURES
• Intel® Celeron® processors (“Apollo Lake”), up to 4 cores (optional)
• 4GB onboard LPDDR4 memory
• Supports 2 GbE, 2 USB 3.0, 1 DP, 1 RS-232 and 1 RS-232/422/485
• M.2 Key E and mini-PCIe for wireless communication
• 32GB onboard eMMC 5.0 storage
• Heavy industrial EMC
• Extended temperature options for extreme environments

info@portwell.com  1-510-403-3399 linkedin.com/company/portwell




Applications: Industrial Automation/Control

A FINE TECHNOLOGY GROUP

cPCI, PXI, VME, Custom Packaging Solutions

VME and VME64x, CompactPCI, or PXI chassis are available in many configurations from 1U to 12U, 2 to 21 slots, with many power options up to 1,200 watts. Dual hot-swap is available in AC or DC versions. We have in-house design, manufacturing capabilities, and in-process controls. All Vector chassis and backplanes are manufactured in the USA and are available with custom modifications and the shortest lead times in the industry.

Series 2370 chassis offer the lowest profile per slot. Cards are inserted horizontally from the front, and an 80mm rear I/O backplane slot configuration is also available. Chassis are available from 1U, 2 slots up to 7U, 12 slots for VME, CompactPCI, or PXI. All chassis are IEEE 1101.10/11 compliant with hot-swap, plug-in AC or DC power options.

Our Series 400 enclosures feature side-filtered air intake and rear exhaust for up to 21 vertical cards. Options include hot-swap, plug-in AC or DC power, and a system voltage/temperature monitor. Embedded power supplies are available up to 1,200 watts.

Series 790 is MIL-STD-461D/E compliant and certified, economical, and lighter weight than most enclosures available today. It is available in 3U, 4U, and 5U models up to 7 horizontal slots.

All Vector chassis are available for custom modification in the shortest time frame. Many factory paint colors are available and can be specified with Federal Standard or RAL numbers.

FEATURES
• Made in the USA
• Most rack accessories ship from stock
• Modified ‘standards’ and customization are our specialty
• Card sizes from 3U x 160mm to 9U x 400mm
• System monitoring option (CMM)
• AC or DC power input
• Power options up to 1,200 watts

VISIT OUR NEW WEBSITE! WWW.VECTORELECT.COM

QUALITY SYSTEMS PACKAGING AND PROTOTYPE PRODUCTS

For more detailed product information, please visit www.vectorelect.com or call 1-800-423-5659 and discuss your application with a Vector representative.

Vector Electronics & Technology, Inc.
www.vectorelect.com


Made in the USA Since 1947

inquire@vectorelect.com

 800-423-5659



Applications: Industrial Automation/Control

NET-429 Industrial TSN Switch

WINSYSTEMS’ NET-429 network switch is designed for the harsh environments of the factory floor and provides performance for time-critical industrial networks. The switch has eight 10/100/1000 Mbps RJ45 Ethernet ports plus two 1000Base-X SGMII SFP ports and redundant power inputs with Power-over-Ethernet (PoE) PD support. Enabled for the latest IEEE 802.1 standards for Quality of Service (QoS) and Time Sensitive Networking (TSN), it includes advanced prioritization and timing features to provide guaranteed delivery of time-sensitive data.

The NET-429 is based on the Marvell® Link Street® family, which provides advanced QoS features with 8 egress queues. The high-performance switch fabric provides line-rate switching on all ports simultaneously while providing advanced switch functionality. It also supports the latest IEEE 802.1 Audio Video Bridging (AVB) and TSN standards. These standards overcome the latency and bandwidth limitations of standard Ethernet to allow efficient transmission of real-time content for industrial applications. The AVB/TSN protocols enable timing-sensitive streams (such as digital video, audio, or industrial control traffic) to be sent over the Ethernet network with low latency and robust QoS guarantees.
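To see why prioritization and time-aware scheduling matter on a converged gigabit link, the sketch below works through generic per-hop Ethernet arithmetic: how long a small time-critical frame takes on the wire, and how long it can be blocked behind a maximum-size best-effort frame that is already being transmitted. These are textbook Ethernet numbers, not NET-429 measurements.

```python
# Generic per-hop latency arithmetic for a gigabit Ethernet port.
LINK_BPS = 1_000_000_000
MAX_FRAME_BYTES = 1522       # maximum standard frame including a VLAN tag
TSN_FRAME_BYTES = 128        # example size for a small time-critical control frame
OVERHEAD_BYTES = 20          # preamble + SFD + interframe gap

def wire_time_us(frame_bytes: int) -> float:
    """Serialization time of one frame on the wire, in microseconds."""
    return (frame_bytes + OVERHEAD_BYTES) * 8 / LINK_BPS * 1e6

print(f"Time-critical frame on the wire: {wire_time_us(TSN_FRAME_BYTES):.2f} us")
print(f"Blocking by one full-size frame: {wire_time_us(MAX_FRAME_BYTES):.2f} us")
# Without time-aware shaping or preemption, a critical frame can wait behind a
# full-size frame already in flight - roughly an extra 12 us per gigabit hop.
```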

FEATURES
• IEEE 802.1 Time Sensitive Networking (TSN)
• IEEE 1588v2 one-step Precision Time Protocol (PTP)
• 8x 10/100/1000 Mbps RJ45 Ethernet ports
• 2x 1000Base-X SGMII SFP ports
• 256 Entry TCAM for Deep Packet Inspection
• Supports 4096 802.1Q VLANs
• Redundant Wide Range 9-36V DC Inputs
• Power over Ethernet (PoE) PD 802.3at Type 1 device
• Fanless -40°C to +85°C Operating Temperature Range
• Shock and Vibration Tested
• Long lifetime (10+ years availability)

Easing the burden of deployment, the NET-429 can be powered remotely as a PoE-PD Type I device or through the two wide range 9-36V DC power inputs. All three power inputs are redundant for maximum uptime and include overload protection to prevent damage to other systems. WINSYSTEMS also combines design elements of our broad portfolio for application specific design for OEM clients. Contact an Application Engineer to schedule a consultation of your product requirements.


WINSYSTEMS, INC.

www.winsystems.com

sales@winsystems.com https://www.linkedin.com/company/winsystems-inc-


 817-274-7553 http://twitter.com/winsystemsinc




Hardware Modules/Systems for Machine Learning

MSC C6B-CFLR

The MSC C6B-CFLR COM Express™ Type 6 module family is well-positioned at the top end of the performance range. These powerful modules are based on the newest 9th generation Intel® Core™ processors. The MSC C6B-CFLR high-end module family features a wide range of scalability options, from two single-threaded cores to six cores with twelve threads.

For applications requiring maximum utilization of computing power, extremely powerful CPU variants are available with a thermal design power (TDP) of 45 W/35 W. The dual-core module, with a TDP of 25 W, is suitable for applications that need only moderate cooling, for example, low-noise cooling in medical devices. For applications requiring higher reliability, some modules can be specified with error checking and correction (ECC) options.

FEATURES
• Based on Intel’s latest high-performance 9th generation processors
• Available as both standard and fully customizable form factors
• Specifically designed for industrial environments
• High performance for industrial AI and machine vision applications
• Designed and manufactured by Avnet in Germany

https://www.avnet.com/wps/portal/integrated/products/embedded-boards/com-express-modules/

Avnet Integrated

integrated@avnet.com

www.avnet.com/integrated

 See website

https://www.linkedin.com/company/18980630/

Hardware Modules/Systems for Machine Learning

MSC C6C-WLU

The MSC C6C-WLU COM Express™ Type 6 module family is based on the 8th generation of quad-core Intel® Core™ U processors. For the first time, this powerful processor technology is available in the COM Express™ Compact form factor of 95 x 95mm, which can be easily integrated into applications where space is at a premium. Early access to the new 8th Generation U processor series enables developers to accelerate market introduction of their innovative products.

The scalable MSC C6C-WLU COM Express™ module family offers considerably increased performance in comparison to predecessor products based on the 7th generation dual-core Intel® Core™ processors. The performance of multi-threaded applications can be increased by up to 40%, yet the typical Thermal Design Power (TDP) of the CPU is only 15W and can be reduced to about 10W. Optimized utilization of the available performance is possible via clock enhancement at 25W TDP.

FEATURES
• Purpose built for long-term heavy industrial applications
• Designed and manufactured by Avnet Integrated in Germany
• Perfect for demanding real-time automation control and analytics
• Sold as standard form factor or fully customizable to fit complex designs

https://www.avnet.com/wps/portal/integrated/products/embedded-boards/com-express-modules/

Avnet Integrated

www.avnet.com/integrated


integrated@avnet.com  See website https://www.linkedin.com/company/18980630/



Hardware Modules/Systems for Machine Learning

MSC SM2S-MB-EP5 with MSC SM2S-IMX8MINI

SimpleFlex™ is the intelligent combination of a standard Computer-On-Module with a standard carrier board. It combines the advantages of a standard SBC and a custom SBC by choosing the COM from a huge portfolio of CPU and memory configurations. The ready-to-use platform is cost-efficiently adapted with the selected interfaces and assembled in-house on fully automatic production lines. For customization, more than 30 pre-validated interface combinations are available. SimpleFlex is therefore best suited for series production in large quantities.

For the application-ready SMARC™ 2.0 embedded platform, choose the latest module: The MSC SM2S-IMX8MINI module features NXP’s i.MX 8M Mini processors, which are based on the latest 14nm FinFET technology to allow high computing and graphics performance at very low power consumption combined with a high degree of functional integration. The MSC SM2S-IMX8MINI offers single-, dual-, or quad-core ARM Cortex-A53 processors in combination with the ARM Cortex-M4 real-time processor and a GC NanoUltra multimedia 2D/3D GPU. It provides fast LPDDR4 memory, up to 64GB eMMC Flash memory, Gigabit Ethernet, PCI Express, USB 2.0, and an on-board wireless module, as well as an extensive set of interfaces for embedded applications. The module is compliant with the new SMARC 2.0 standard, allowing easy integration with SMARC baseboards.

For evaluation and design-in of the SM2S-IMX8MINI module, Avnet Integrated provides a development platform and a starter kit. Support for Linux is available (Android support on request).

MSC SM2S-IMX8MINI Features:
• NXP™ i.MX 8M Mini ARM® Cortex™-A53 up to 1.8GHz
• ARM Cortex-M4 Real Time Processor at 400MHz
• Up to 4GB LPDDR4 SDRAM
• Up to 64GB eMMC Flash
• Up to 2x Gigabit Ethernet, 2 x CAN

FEATURES
• SMARC™ 2.0 carrier board for short size modules
• Form Factor (146 x 80 mm)
• Input voltage 12 VDC to 36 VDC
• LVDS/eDP/MIPI-DSI connector
• Up to two Gigabit Ethernet ports
• µHDMI/DisplayPort connector
• USB Type-C (with DisplayPort)
• Mini PCI Express Card slot
• M.2 key M slot
• One USB 3.0 Type A, One USB 2.0 Type A, One USB 2.0 OTG port
• Two CAN (one CAN opt. galv. isolated)

Avnet Integrated

www.avnet.com/integrated

integrated@avnet.com  See website www.linkedin.com/showcase/avnet-integrated/




Hardware Modules/Systems for Machine Learning


Solid State Storage and Memory

Industrial-Grade Solid State Storage and Memory

Virtium manufactures solid state storage and memory for the world’s top industrial embedded OEM customers. Our mission is to develop the most reliable storage and memory solutions with the greatest performance, consistency and longest product availability.

Classes include: MLC (1X), pSLC (7X) and SLC (30X) – where X = number of entire drive-writes-per-day for the 3/5-year warranty period.
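As an illustration of how the drive-writes-per-day (DWPD) multipliers translate into total endurance, the sketch below computes terabytes written (TBW) as DWPD x capacity x 365 x warranty years; the capacity and warranty period are hypothetical examples, not figures for any specific Virtium product.

```python
# Endurance from DWPD: TBW = DWPD x capacity x 365 x warranty years.
CAPACITY_GB = 240            # hypothetical drive capacity
WARRANTY_YEARS = 5

for name, dwpd in [("MLC (1X)", 1), ("pSLC (7X)", 7), ("SLC (30X)", 30)]:
    tbw = dwpd * CAPACITY_GB * 365 * WARRANTY_YEARS / 1000   # terabytes written
    print(f"{name:10s}: ~{tbw:,.0f} TBW over {WARRANTY_YEARS} years")
```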

Industry Solutions include: Communications, Networking, Energy, Transportation, Industrial Automation, Medical, Smart Cities and Video/Signage.

Memory Products include: All DDR, DIMM, SODIMM, Mini-DIMM, Standard and VLP/ULP. Features server-grade, monolithic components, best-in-class designs, and conformal coating/under-filled heat sink options.

Features
• Broad product portfolio from latest technology to legacy designs
• 22 years refined U.S. production and 100% testing
• A+ quality – backed by verified yield, on-time delivery and field-defects-per-million reports
• Extreme durability, iTemp -40º to 85º C
• Industrial SSD Software for security, maximum life and qualification
• Longest product life cycles with cross-reference support for end-of-life competitive products
• Leading innovator in small-form-factor, high-capacity, high-density, high-reliability designs
• Worldwide Sales, FAE support and industry distribution

Virtium

www.virtium.com


StorFly® SSD Storage includes: M.2, 2.5", 1.8", Slim SATA, mSATA, CFast, eUSB, Key, PATA CF and SD.


New! XR (Extra-Rugged) Product Line of SSDs and Memory:

StorFly-XR SSDs enable multi-level protection in remote, extreme conditions that involve frequent shock and vibration, contaminating materials, and/or extreme temperatures. Primary applications are battlefield technology, manned and unmanned aircraft, command and control, reconnaissance, satellite communications, and space programs. They are also ideal for transportation and energy applications. Currently available in 2.5" and Slim-SATA formats, they include custom ruggedization of key components, such as ultra-rugged connectors and screw-down mounting, and when ordered with added BGA under-fill can deliver unprecedented durability beyond that of standard MIL-810-compliant solutions.

XR-DIMM Memory Modules have the same extra-rugged features as the SSDs and include heatsink options and 30µ" gold connectors. They also meet US RTCA DO-160G standards.

sales@virtium.com www.linkedin.com/company/virtium

 949-888-2444 @virtium



Industrial

FEATURES
The new conga-TC370 COM Express Type 6 modules, the conga-JC370 embedded 3.5 inch SBCs, and the conga-IC370 Thin Mini-ITX motherboards all feature:
• The latest Intel® Core™ i7, Core™ i5, Core™ i3 and Celeron embedded processors with a long-term availability of 10+ years.
• The memory is designed to match the demands of consolidating multi OS applications on a single platform: Two DDR4 SODIMM sockets with up to 2400 MT/s are available for a total of up to 64GB.
• USB 3.1 Gen2 with transfer rates of 10 Gbps is supported natively, which makes it possible to transfer even uncompressed UHD video from a USB camera or any other vision sensor.
• Supports a total of 3 independent 60Hz UHD displays with up to 4096x2304 pixels as well as 1x Gigabit Ethernet (1x with TSN support).
• The new boards and modules offer all this and many more interfaces with an economical 15W TDP that is scalable from 10W (800 MHz) to 25W (up to 4.6 GHz in Turbo Boost mode).

Faster innovation and time to market
With three optimized form factors for designers to choose from, congatec delivers a simpler, efficient way to harness the benefits of 8th Gen Intel® Core™ U-series processors for IoT. These products draw from congatec’s deep expertise in embedded and industrial design to offer an enriched feature set, along with long product availability, hardware and software customization, and value-added design support. As a result, OEMs and ODMs can build high-performing solutions with less development time and cost.

Performance at the edge
Specially designed for embedded use conditions in which space and power are limited, 8th Gen Intel Core U-series processors provide high performance for edge devices with up to four cores. This enables a wide range of designs at 15W TDP, configurable down to 12.5W. congatec products based on these processors deliver high-quality visual, audio, and compute capabilities with integrated graphics and high-definition media support.
• Ensure exceptional graphics performance while helping lower BOM costs with integrated Gen 9.5 Intel® Graphics with up to 24 execution units.
• Deliver on rising expectations for video performance with 4K/UHD content support, plus accelerated 4K hardware media codecs. Designs can support up to three displays.
• Develop media and video applications with the Intel® Media SDK, which provides tools and an API enabling hardware acceleration for fast video transcoding, image processing, and media workflows.
• Create better audio experiences with enhanced speech and audio quality from microphones, voice activation and wake from standby, and enhanced playback with Intel® Smart Sound Technology and Intel’s programmable quad-core audio DSP, designed for low power consumption.

congatec products based on 8th Gen Intel Core U-series processors also help bring artificial intelligence (AI) to more places. With high processing and integrated graphics performance, combined with the optimized Intel® Distribution of OpenVINO™ toolkit, these processors improve inference capabilities like facial recognition, license plate recognition, people counting, and fast and accurate anomaly detection on manufacturing lines.
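A minimal inference sketch using the OpenVINO Python runtime is shown below. The model file name, input shape, and target device are placeholders, and the exact API surface depends on the OpenVINO release in use, so treat this as an outline rather than congatec- or Intel-supplied code.

```python
# Minimal OpenVINO inference sketch (assumes the openvino.runtime API, 2022+).
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("face-detection.xml")            # hypothetical IR model file
compiled = core.compile_model(model, device_name="CPU")  # "GPU" targets the integrated graphics
request = compiled.create_infer_request()

frame = np.zeros((1, 3, 384, 672), dtype=np.float32)     # placeholder preprocessed frame
request.infer({compiled.input(0): frame})
detections = request.get_output_tensor(0).data
print("Output tensor shape:", detections.shape)
```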


congatec

www.congatec.us

sales-us@congatec.com

www.linkedin.com/company/congatec-ag


 858-457-2600 twitter.com/congatecAG



You probably already use Tadiran batteries, but just don’t know it!

PROVEN 40 YEAR OPERATING LIFE*

If you have a smart automatic water, gas, electricity, or heat meter in your home. If you have an electronic toll collection transponder, tire inflation sensor, or emergency E-CALL system in your car. If you have a GPS tracking device on your trailer, container, or cargo. If you have wireless sensors, controls, or monitors in your factories and plants. If you use electronics with real-time clock or memory back-up in your office.

If you have never heard of Tadiran Batteries, it is only because you have never had a problem with our products powering your products. Take no chances. Take Tadiran batteries that last a lifetime.

* Tadiran LiSOCl2 batteries feature the lowest annual self-discharge rate of any competitive battery, less than 1% per year, enabling these batteries to operate over 40 years depending on device operating usage. However, this is not an expressed or implied warranty, as each application differs in terms of annual energy consumption and/or operating environment.

Tadiran Batteries 2001 Marcus Ave. Suite 125E Lake Success, NY 11042 1-800-537-1368 516-621-4980 www.tadiranbat.com


