Embedded AI and Machine Learning with 2021 Resource Guide


2021 | VOLUME 2 | NUMBER 1

ON THE COVER
› Connect Your Infrastructure for Digital Twins (PG 16)
› AI Manufacturing in 4 Steps (PG 14)
› UD Info Corp. – Industrial Grade NAND Flash SSD Products and DRAM Modules (PG 29)
› Vecow – One-stop Edge AI Solution Services (PG 24)


AD LIST
› Digi-Key Corporation – Development Kit Selector
› Embedded World – Intelligent. Connected. Embedded.
› Lauterbach, Inc. – Multicore Debugging & Real-Time Trace
› Lauterbach, Inc. – Debugger for RH850 from the Automotive Specialists
› Tadiran – IIoT Devices Run Longer on Tadiran Batteries
› UD Info – Industrial Grade NAND Flash SSD Products and DRAM Modules
› Vecow – One-stop Edge AI Solution Services
› Vector Elect – VME/VXS/cPCI Chassis, Backplanes & Accessories

EMBEDDED COMPUTING BRAND DIRECTOR Rich Nass rich.nass@opensysmedia.com
EDITOR-IN-CHIEF Brandon Lewis brandon.lewis@opensysmedia.com
ASSOCIATE EDITOR Tiera Oliver tiera.oliver@opensysmedia.com
ASSISTANT EDITOR Chad Cox chad.cox@opensysmedia.com
ASSISTANT EDITOR Taryn Engmark taryn.engmark@opensysmedia.com
TECHNOLOGY EDITOR Curt Schwaderer curt.schwaderer@opensysmedia.com
FREELANCE TECHNOLOGY WRITER Saumitra Jagdale saumitra.jagdale@opensysmedia.com
MARKETING COORDINATOR Katelyn Albani katelyn.albani@opensysmedia.com
CREATIVE DIRECTOR Stephanie Sweet stephanie.sweet@opensysmedia.com
SENIOR WEB DEVELOPER Aaron Ganschow aaron.ganschow@opensysmedia.com
WEB DEVELOPER Paul Nelson paul.nelson@opensysmedia.com
CONTRIBUTING DESIGNER Joann Toth joann.toth@opensysmedia.com
EMAIL MARKETING SPECIALIST Drew Kaufman drew.kaufman@opensysmedia.com

SALES/MARKETING
DIRECTOR OF SALES AND MARKETING Tom Varcie tom.varcie@opensysmedia.com (734) 748-9660
MARKETING MANAGER Eric Henry eric.henry@opensysmedia.com (541) 760-5361
STRATEGIC ACCOUNT MANAGER Rebecca Barker rebecca.barker@opensysmedia.com (281) 724-8021
STRATEGIC ACCOUNT MANAGER Bill Barron bill.barron@opensysmedia.com (516) 376-9838
STRATEGIC ACCOUNT MANAGER Kathleen Wackowski kathleen.wackowski@opensysmedia.com (978) 888-7367
SOUTHERN CAL REGIONAL SALES MANAGER Len Pettek len.pettek@opensysmedia.com (805) 231-9582
ASSISTANT DIRECTOR OF PRODUCT MARKETING/SALES Barbara Quinlan barbara.quinlan@opensysmedia.com (480) 236-8818
STRATEGIC ACCOUNT MANAGER Glen Sundin glen.sundin@opensysmedia.com (973) 723-9672
INSIDE SALES Amy Russell amy.russell@opensysmedia.com
TAIWAN SALES ACCOUNT MANAGER Patty Wu patty.wu@opensysmedia.com
CHINA SALES ACCOUNT MANAGER Judy Wang judywang2000@vip.126.com
EUROPEAN MARKETING SPECIALIST Steven Jameson steven.jameson@opensysmedia.com +44 (0)7708976338

PROFILES
AI & MACHINE LEARNING (24) – Vecow
APPLICATIONS: COMPUTER/MACHINE VISION (25) – Lattice Semiconductor Corporation
APPLICATIONS: INDUSTRIAL AUTOMATION/CONTROL (26, 27) – Vector Elect, Mactron Group
HARDWARE MODULES/SYSTEMS FOR MACHINE LEARNING (28, 29) – congatec, UD Info
NEURAL NETWORK PROCESSORS: TPU (30) – FlexLogix
STORAGE (29) – Virtium LLC

WWW.OPENSYSMEDIA.COM

SOCIAL
Facebook.com/Embedded.Computing.Design
@Embedded_ai
LinkedIn.com/in/EmbeddedComputing
youtube.com/user/VideoOpenSystems

PRESIDENT Patrick Hopper patrick.hopper@opensysmedia.com
EXECUTIVE VICE PRESIDENT John McHale john.mchale@opensysmedia.com
EXECUTIVE VICE PRESIDENT Rich Nass rich.nass@opensysmedia.com
GROUP EDITORIAL DIRECTOR John McHale john.mchale@opensysmedia.com
VITA EDITORIAL DIRECTOR Jerry Gipper jerry.gipper@opensysmedia.com
TECHNOLOGY EDITOR Emma Helfrich emma.helfrich@opensysmedia.com
SENIOR EDITOR Sally Cole sally.cole@opensysmedia.com
CREATIVE PROJECTS Chris Rassiccia chris.rassiccia@opensysmedia.com
FINANCIAL ASSISTANT Emily Verhoeks emily.verhoeks@opensysmedia.com
FINANCE Rosemary Kristoff rosemary.kristoff@opensysmedia.com
SUBSCRIPTION MANAGER subscriptions@opensysmedia.com

CORPORATE OFFICE 1505 N. Hayden Rd. #105 • Scottsdale, AZ 85257 • Tel: (480) 967-5581
REPRINTS WRIGHT’S MEDIA REPRINT COORDINATOR Wyndell Hamilton whamilton@wrightsmedia.com (281) 419-5725




CONTENTS
2021 | Volume 2 | Number 1

FEATURES
6   Embedded AIoT: Where Edge and Endpoint AI Meet the Cloud – By Dr. Sailesh Chittipeddi, Renesas Electronics
8   Under Threat: How SSDs in Edge Computing Applications Can Maintain High Data Security and Integrity – By Jason Chien, Silicon Motion
12  Enhancing AI Inference through Sparsity Support and Transformer Optimization for Minimizing Latency – By Saumitra Jagdale, Contributing Editor
14  Ensuring AI Success in Manufacturing – By Philipp Wallner, The MathWorks
16  Connectivity by Design – By Hilmar Retief, Bentley Systems
20  Anomaly Detection Using Reality AI Software Tools – By Saumitra Jagdale, Contributing Editor
22  Who’s IP is it? The AI Inventor or the AI’s Inventor? – By Tiera Oliver, Associate Editor, and Taryn Engmark, Assistant Editor
24  2021 Embedded AI Resource Guide

COLUMNS
5   NVIDIA DRIVE SDK Updates Take the Wheel of Autonomous Drive – By Tiera Oliver, Associate Editor

COVER
Technology infrastructure has evolved to the point that AI is no longer a projection: it's here. From digital twin enablers to machine learning solutions for 24/7 factory automation, the 2021 Embedded AI & Machine Learning Resource Guide includes tips and techniques for implementing intelligence at the edge. Products to help get you there start on Page 24.

WEB EXTRAS
Shift Left to Secure Connected Embedded Systems – By Mark Pitchford, LDRA – https://bit.ly/ShiftLeftEmbeddedSystems
How to Troubleshoot Embedded Device Software Faster with a Static Call Flow Browser – By Hari Nagalla, Texas Instruments – https://bit.ly/TroubleshootEmbeddedFirmware

SOCIAL
www.linkedin.com/in/embeddedcomputing/
@Embedded_ai

Published by OpenSystems Media® © 2021 Embedded Computing Design © 2021 Embedded AI and Machine Learning. All registered brands and trademarks within Embedded Computing Design and Embedded AI and Machine Learning magazines are the property of their respective owners.

To unsubscribe, email your name, address, and subscription number as it appears on the label to: subscriptions@opensysmedia.com


NVIDIA DRIVE SDK Updates Take the Wheel of Autonomous Drive By Tiera Oliver, Associate Editor

Tiera.Oliver@opensysmedia.com

Autonomous drive is no longer the future. Although we have yet to reach full levels of autonomy, these systems continue to advance, as do the capabilities that reside within and all around the vehicle.

At NVIDIA’s fall GPU Technology Conference (GTC), founder and CEO Jensen Huang offered a glimpse into what the company has been working on for the AI-driven automobile. This includes more than 65 updates to NVIDIA software development kits (SDKs) that will advance everything from in-vehicle personal assistants to autonomous vehicle mapping and localization to designing with next-generation ADAS SoCs.

Not only do these releases target different parts of the vehicle, they also scale across the various levels of autonomy to enable what is hopefully a seamless transition into fully autonomous drive.

Hands (Almost) Off The Wheel
Vehicles have become more than just transportation systems. They are now also basically smartphones. As such, today’s users expect the same level of support from their vehicles and an experience that is similarly tailored to their individual needs and preferences.

To limit driving distractions associated with multitasking in a connected car, the new NVIDIA DRIVE Concierge links vehicle occupants to a range of always-on intelligent services. These include real-time conversational AI and support for commands that prompt level 3 and above vehicles to autonomously search for available parking spots, self-park, and respond to remote summons.

NVIDIA DRIVE Concierge is based on the company’s DRIVE IX cockpit software platform that runs on NVIDIA’s DRIVE Orin SoC, a software-defined AI platform being promoted as the central computer for next-generation vehicles. Concierge also integrates tightly with NVIDIA DRIVE Chauffeur.

DRIVE Chauffeur leverages NVIDIA DRIVE AV software for perception, mapping, planning layers, and DNNs to collect road data and take over driving responsibilities at certain times. For instance, Chauffeur technology is at the heart of the automated valet functionality mentioned previously. The Chauffeur platform runs on Hyperion 8, a production-ready compute architecture that’s already supported by sensors from tier 1 suppliers like Continental, Hella, Luminar, Sony, and Valeo to monitor road conditions. It’s designed to work with a sensor suite comprised of 12 cameras, 9 radars, 12 ultrasonic sensors, and one front-facing lidar to give vehicles much more than just a single pair of eyes on the road.

Together, NVIDIA DRIVE Concierge and DRIVE Chauffeur offer 360º, 4D visualization of the interior and exterior vehicle environment so drivers can become passengers whenever possible.

Synthesizing Real-World Roads
NVIDIA Concierge and Chauffeur provide a bridge to fully automated driving. But developers working on completely automated systems today need more than just platforms that support level 5 technology – they need an environment to prove out their designs before deploying them in the real world.

Simulation has been a cornerstone of self-driving car engineering almost since its inception, and offerings like the NVIDIA Omniverse Replicator continue to advance its sophistication. The Omniverse Replicator engine generates synthetic, ground truth data that incorporates all phenomena one might find on real-world roads, including weather, occlusion, and a variety of other edge cases. And all of these are produced using high-fidelity physics that considers depth, velocity, and so on.

Once generated, labeled Omniverse Replicator data can be fed into the company’s DRIVE Sim simulation tool, which is used to train automotive deep neural networks (DNNs) such as object detection, obstacle avoidance, and other autonomous drive AI.

Collectively, NVIDIA DRIVE technologies are closing the gap between traditional vehicles and the fully autonomous drive capabilities that are now within our reach. EAI

To get started with these and other NVIDIA SDKs for automotive, check out the DRIVE AGX Orin developer kit that can be used for complete stack evaluation at https://developer.nvidia.com.



INFERENCING AT THE EDGE

Embedded AIoT: Where Edge and Endpoint AI Meet the Cloud By Dr. Sailesh Chittipeddi, Renesas Electronics

Skyrocketing demand for touch-free experiences has accelerated the move toward AI-powered systems, voice-based control, and other contactless user interfaces, pushing intelligence closer and closer to the endpoint. The Artificial Intelligence of Things (AIoT) is the key to unlocking the seamless, hands-free experience that will help keep users safe in a post-COVID environment. Consider the possibilities: smart shopping carts that allow you to scan your goods as you drop them in your cart and use mobile payments to bypass the checkout counter, or intelligent video conferencing systems that automatically recognize and switch focus on different speakers during meetings to provide a more “in-person” experience for remote teams.

One of the most important trends in the electronics industry today is the incorporation of AI into embedded devices, particularly AI interpreting sensor data such as images and machine learning for alternative user interfaces such as voice. Why is now the time for an embedded AIoT breakthrough?

AIoT is Moving Out
Initially, AI sat up in the cloud where it took advantage of computational power, memory, and storage scalability levels that the edge and endpoint just could not match. However, more and more, we are seeing not only machine learning training algorithms move out toward the edge of the network, but also a shift from deep learning training to deep learning inference. Where “training” typically sits in the network core, “inference” now lives at the endpoint where developers can access AI analytics in real time and then optimize device


performance rather than sifting through the device-to-cloud-to-device loop (Figure 1).

FIGURE 1: Moving inferencing to endpoints helps break the device-to-cloud-to-device data loop for improved latency and performance. (Source: Renesas Electronics)

Today, most of the inference process runs at the CPU level. However, this is shifting to a chip architecture that integrates more AI acceleration on chip. Efficient AI inference demands efficient endpoints that can infer, pre-process, and filter data in real time. Embedding AI at the chip level, integrating neural processing and hardware accelerators, and pairing embedded-AI chips with special-purpose processors designed specifically for deep learning offers developers a trifecta of the performance, bandwidth, and real-time responsiveness needed for next-generation connected systems.

An AIoT Future: At Home and the Workplace
In addition, a convergence of advancements around AI accelerators, adaptive



and predictive control, and hardware and software for voice and vision open up new user interface capabilities for a wide range of smart devices. For example, voice activation is quickly becoming the preferred user interface for always-on connected systems for both industrial and consumer markets. We have seen the accessibility advantages that voice-control based systems offer users navigating physical disabilities as they leverage spoken commands to activate and achieve tasks. With rising demand for touchless control as a health and safety countermeasure in shared spaces like kitchens, offices, and factory floors, voice recognition – combined with a variety of wireless connectivity options – will bring seamless, non-contact experiences into the home and workspace.

Multimodal architectures offer another path for AIoT. Using multiple input information streams improves safety and ease of use for AI-based systems. For example, a voice plus vision processing combination is particularly well suited for hands-free AI-based vision systems. Voice recognition activates object and facial recognition for critical vision-based tasks in applications like smart surveillance or video conferencing systems. Vision AI recognition then jumps in to track operator behavior, control operations, or manage error or risk detection.

On factory and warehouse floors, multimodal AI powers collaborative robots – or cobots – as part of a technology package that serves as the five senses allowing cobots to safely perform tasks side-by-side with human counterparts. Voice plus gesture recognition allows the two groups to communicate in their shared workspace.
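As a rough sketch of the voice-gates-vision pattern described above, the loop below runs a cheap, always-on keyword spotter and only invokes the heavier vision network when a command is heard. Everything here is a hypothetical stand-in – the wake words, drivers, and models are illustrative placeholders, not Renesas components:

```python
# Hypothetical endpoint loop: a lightweight keyword spotter gates a heavier
# vision model so the camera pipeline only runs when it is actually needed.
import numpy as np

WAKE_WORDS = {"inspect", "track"}          # hypothetical voice commands

def record_audio_frame():
    """Stand-in for a microphone driver; returns 1 s of 16 kHz samples."""
    return np.zeros(16000, dtype=np.int16)

def keyword_spotter(samples):
    """Stand-in for a small always-on keyword-spotting model."""
    return "inspect"                       # pretend the wake word was heard

def capture_image():
    """Stand-in for a camera driver."""
    return np.zeros((224, 224, 3), dtype=np.uint8)

def vision_model(image):
    """Stand-in for the heavier object/face recognition network."""
    return [{"label": "operator", "score": 0.97}]

for _ in range(3):                         # a real device would loop forever
    word = keyword_spotter(record_audio_frame())
    if word in WAKE_WORDS:                 # voice activates the vision path
        for detection in vision_model(capture_image()):
            print(word, detection["label"], detection["score"])
```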

What’s on the Horizon?
According to IDC Research, there will be 55 billion connected devices worldwide by 2025 generating 73 zettabytes of data, and edge AI chips are set to outpace cloud AI chips as deep learning inference continues to relocate to the edge and device endpoints. This integrated AI will be the foundation that powers a complex combination of “sense” technologies to create smart applications with more natural, “human-like” communication and interaction. EAI

Dr. Sailesh Chittipeddi is the Executive Vice President and General Manager of the IoT and Infrastructure Business Unit at Renesas.

Renesas Electronics – www.renesas.com – Twitter: @RenesasGlobal – LinkedIn: www.linkedin.com/company/renesas/ – YouTube: www.youtube.com/user/RenesasPresents – Instagram: www.instagram.com/renesas_global




INFERENCING AT THE EDGE

Under Threat: How SSDs in Edge Computing Applications Can Maintain High Data Security and Integrity By Jason Chien, Silicon Motion

It’s easy to assume that storage is a simple function and that an SSD lacks the smarts of a microcontroller or microprocessor. Not so – an SSD must be capable of playing an active role in the maintenance of an AI application’s data security and integrity.

The AI that edge computing systems implement is often mission- or safety-critical. This is the case in a car, where driver assistance systems that detect pedestrians save lives. In industrial environments, AI systems that perform condition-based monitoring of machines play an essential role in keeping production lines running.

AI is a data-heavy, compute-intensive technology. In embedded computing systems, it requires an inference engine that is typically hosted in a microprocessor or FPGA to detect patterns in a set of data. Connectivity and hardware constraints dictate that some, or all, of an AI system’s neural network processing is performed locally at the edge of the network. And this calls for local storage with high data capacity.

Given the large quantities of data handled by the typical edge computing application, the most suitable type of device for local data storage is an SSD


that provides multiple tens of gigabytes of storage capacity. An engineer specifying an SSD for AI-enabled embedded computing systems can benefit from an understanding of the functions and technologies that advanced SSDs deploy to keep user data and code safe from attack or impairment. So, what are the key issues for IoT and embedded system developers to take into account when specifying an SSD for edge AI applications?

Connected to a Hostile World
Any threat to the stored data that AI depends on represents a threat to the whole system. And any networked system is vulnerable to attack by a range of agents including hackers, commercial competitors, organized crime, and hostile nation states. Such threats to stored data can be thwarted by the application of modern security countermeasures:

› Authentication prevents unauthorized devices and peripherals from modifying or replacing stored data
› Encryption of data at rest and in transit between devices means that it can only be read by authorized users who possess the secret key for decryption

Risk to the data on which edge computing systems operate does not, however, only arise from security threats. The operating environment can also impair the integrity of data stored in an SSD. For example, extreme temperatures, shock and vibration, and unexpected power outages can all cause data bits to be lost or corrupted.




Standards for Security and Integrity Performance
The safety- or mission-critical nature of many AI-based embedded systems means that users need the strongest possible guarantee that an SSD will operate reliably, without losing or impairing data, and protect it from cyber-attack. This requirement has become more difficult to meet in recent years for two reasons. One is the increasing sophistication of hackers and cyber-criminals in bypassing security protections.

The industry standard for protecting data from snooping or tampering is the Advanced Encryption Standard (AES). When AES cryptography is implemented in an SSD, all data on the drive is stored in a secure, encrypted state. This provides a practically unbreakable barrier to attackers who want to steal or view stored data.

The second challenge comes from the type of memory arrays used in modern high-density SSDs. To provide high storage capacities of as much as 1 TB in compact, chip-style form factors, embedded SSDs contain the latest triple-level cell (TLC) or quad-level cell (QLC) NAND flash technology.
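To make the encryption-at-rest idea concrete, the short sketch below encrypts a data block with AES-256-GCM using the open-source Python cryptography package. It is purely illustrative – a self-encrypting drive does this transparently in controller hardware – and says nothing about Silicon Motion's specific implementation:

```python
# Minimal illustration of AES-256 encryption for data at rest.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # secret key held by the drive
aesgcm = AESGCM(key)

plaintext = b"sensor log block 0001: vibration=0.42g temp=71C"
nonce = os.urandom(12)                      # unique per encrypted block
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Without the key the stored bytes are useless to an attacker; with it,
# the original data (and its integrity tag) is recovered exactly.
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
print(f"{len(plaintext)} plaintext bytes -> {len(ciphertext)} stored bytes")
```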


FIGURE 1: ECC software applied at every data transfer point in an SSD protects the integrity of the data. (Source: Silicon Motion)

FIGURE 2: Modern SSDs use a variety of error correction algorithms to maintain data integrity over the lifetime of the device. (Source: Silicon Motion)

Compared to older single-level cell (SLC) NAND memory, TLC and QLC NAND provide much higher memory density, but are less inherently robust. In TLC and QLC NAND arrays, the effects of repeated program/erase cycles and high-temperature operation can lead to increased bit error rates (BERs) and a higher risk of data loss. To counter these effects, today’s SSDs deploy advanced forms of error correction code (ECC), which detects and corrects bit errors generated during read or write operations in TLC or QLC NAND memory (Figure 1).

SSDs can also implement sophisticated measures to protect data from loss at high operating temperatures or after many program/erase cycles. Intelligent sensing in the SSD can detect at-risk memory cells and recharge them automatically. The latest SSD technologies use complex algorithms to configure the timing of recharge operations depending on the number of program/erase cycles, operating temperature history, and the frequency and severity of bit errors, analyzed on a block-by-block basis.

The Crucial Role of Firmware in a Secure, Robust SSD
While Figure 1 illustrates the hardware components of an SSD, the most important component is not shown: the firmware that controls system operation. Every SSD includes firmware to implement basic functions such as data addressing, data retrieval, and allocation of data to memory blocks. But in the most advanced SSDs, the firmware performs an additional range of sophisticated functions, such as:

› Enhance the security of data
› Maintain data integrity by managing ECC software (Figure 2)
› Prolong data retention by managing the physical condition of the NAND flash array and refreshing at-risk data
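As a toy illustration of the detect-and-correct principle behind that ECC management – real SSD controllers apply far stronger BCH or LDPC codes across whole NAND pages, not this textbook code – a Hamming(7,4) code can locate and repair any single flipped bit in a 4-bit value:

```python
# Hamming(7,4): 4 data bits are stored as 7 bits; any single bit flip
# (e.g. caused by NAND wear or read disturb) can be located and corrected.

def hamming74_encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4          # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(codeword):
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit, 0 = clean
    if syndrome:
        c[syndrome - 1] ^= 1           # correct the single-bit error
    return c[2], c[4], c[5], c[6]      # recovered data bits

stored = hamming74_encode(1, 0, 1, 1)
stored[5] ^= 1                          # simulate one bit flipping in the array
print(hamming74_decode(stored))         # -> (1, 0, 1, 1), error corrected
```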




"FOR THE EMBEDDED SYSTEM DEVELOPER, ... THE SSD FIRMWARE IS AS IMPORTANT A FEATURE TO EVALUATE AS HARDWARE SPECIFICATIONS LIKE MEMORY CAPACITY..."

System developers will benefit from careful evaluation of SSDs to ensure that they provide comprehensive encryption and robust error correction of TLC or QLC NANDhosted data in AI-enabled embedded computing applications. Along with sophisticated monitoring and data protection functions, these capabilities comprise an SSD that can preserve the integrity of AI application data in systems that are constantly exposed to high temperatures, repeated program/erase cycles, or cyber threats (Figure 3). EAI Jason Chien is Product Marketing Director at Silicon Motion.

For the embedded system developer, this means the SSD firmware is as important a feature to evaluate as hardware specifications like memory capacity, data retention time, and program/erase cycle ratings. Differences in firmware between one model of SSD and another play out in important parameters such as lifetime and BERs. High-performance firmware can also control the operation of critical environmental protection functions that ensure data integrity is maintained in adverse operating conditions. For instance, SSD firmware can implement data flush operations to save data from being transferred to or from the NAND flash array in the event of an unexpected power outage. Comprehensive Protection of Critical AI Application Data AI edge computing systems process huge amounts of critical data, which in automotive, industrial, consumer, or medical systems can be mission- or safety-critical or subject to profound privacy concerns.

FerriSSD products in a BGA chip-style package meet edge AI applications’ need for high-density, small-footprint data storage up to 480 GB. (Source: Silicon Motion)]

Host I/F: SATA/PCIe Flash Management

Nand Flash Stack

TRACE 32 ®

Debugger for RH850 from the automotive specialists

DEBUGGING

NEXUS TRACING

RH850, ICU-M and GTM Debugging

Code Coverage (ISO 26262)

AUTOSAR / Multicore Debugging Runtime Measurement (Performance Counter)

The threat to data applies as much to data stored in an SSD as it does to the device SoC through which all application data passes. This means that an SSD requires comprehensive security and environmental protection functions. www.embeddedcomputing.com/machine-learning

20mm

FIGURE 3

16mm

An SSD’s firmware is readily configurable, so the functions that it performs can be made highly adaptive to the conditions to which each individual SSD is exposed – physical factors such as temperature and power cycling, as well as logical factors such as the number and type of program and erase operations.

eec_rh850.indd 1

Multicore Tracing Onchip, Parallel and Aurora Tracing

www.lauterbach.com/1701.html

Embedded AI & Machine Learning RESOURCE GUIDE 2021

11

07.11.2018 12:21:20


MODEL TRAINING & COMPRESSION

Enhancing AI Inference through Sparsity Support and Transformer Optimization for Minimizing Latency By Saumitra Jagdale, Freelance Technology Writer

The latest version of NVIDIA’s TensorRT SDK now includes features that support more enhanced, responsive AI applications.

AI models have become more complicated in recent times due to escalating demand for intelligence in real-time applications across industries. This requirement means the deployment of high-performance edge inferencing systems that are optimized for each use case.

The purpose of inference is to retain as much accuracy as possible from the training phase. Trained models can be tweaked to a user’s target hardware to get the lowest response time and maximum throughput, though the goal of being as precise as possible often clashes with the available memory and throughput in resource-constrained edge devices. In other words, a well-trained, highly accurate model may be too slow and cumbersome to run in certain edge applications. Different AI technology suppliers have different approaches to overcome


this challenge. NVIDIA’s is built around the TensorRT inferencing platform, a software development kit (SDK) optimized to leverage the power of Tensor Cores in NVIDIA GPUs. This strategy continues to evolve with the release of TensorRT version 8.

Inside NVIDIA TensorRT 8
NVIDIA TensorRT works with any framework, including TensorFlow and PyTorch, to allow users to optimize, validate, and deploy trained neural networks into production-grade video streaming, speech recognition, recommendation, fraud detection, text generation, and natural language processing (NLP) applications. Version 8 of the SDK builds on this foundation to deliver 40x higher throughput than CPU-only platforms while keeping latency to a minimum. It achieves this through three primary feature enhancements that combine to cut language query inference time in half:

1. Sparsity on NVIDIA Ampere GPUs – Sparsity prunes weak connections that do not contribute to the network’s overall calculation.
2. Transformer optimization – Transformer optimizations boost performance, while quantization-aware training improves accuracy.
3. BERT-Large Support – Google’s Bidirectional Encoder Representations from Transformers (BERT) is a transformer-based machine-learning technique for pre-training NLP.



FIGURE 1: The TensorRT 8 Optimizer leverages INT8 resolutions to double model performance versus the previous version of the SDK. (Source: NVIDIA)

Sparsity with NVIDIA’s Ampere Architecture
As the processing power required to execute computer vision, speech recognition, and NLP neural networks grows, efficient modeling and computation become increasingly important. Often, specific nodes or layers of a model can be less significant or even irrelevant to a certain task, and therefore be passed over during execution. In these instances, sparse computing improves overall system efficiency by eliminating the need for a neural network to perform computations on those specific weights or parameters.

TensorRT 8 supports this feature. Available with Ampere architecture GPUs, sparsity can reduce model weights by almost half for improved performance, throughput, and latency.

Reducing Inference Calculations with Transformer Optimization
Another way performance enhancements are achieved in TensorRT 8 is through transformer optimization, or reducing bit resolution where possible. 8-bit (INT8) calculations are becoming common as a means of optimizing machine learning models created in frameworks like TensorFlow to run on reduced-resource systems. Transformer quantization support in TensorRT capitalizes on this trend by allowing developers to utilize INT8, which substantially decreases inference calculations and memory usage in Tensor Cores (Figure 1).

Thus, TensorRT 8 can double the performance of many models compared to TensorRT 7. And, when paired with quantization-aware training (QAT) that emulates inference-time quantization, accuracy can be doubled.

Transformer Optimization, Continued: BERT-Large
NVIDIA also included support for Google’s BERT-Large inference in the TensorRT 8 release. Language models like BERT-Large are used behind the scenes by many conversational AI inferencing services. The challenge with BERT-Large models is that their size increases latency, which means that many language-based applications must forego features like nuance or emotion recognition during conversational inference. However, the breakthrough integration of BERT-Large in TensorRT 8 allows NLP models to be analyzed in just 1.2 milliseconds, resulting in real-time response rates to natural language queries.
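A minimal sketch of how those two switches surface in the TensorRT 8 Python API is shown below. The file names are placeholders, and the ONNX model is assumed to come from quantization-aware training (so it already carries Q/DQ nodes and needs no separate INT8 calibrator):

```python
# Sketch: build a TensorRT 8 engine with structured sparsity and INT8 enabled.
# "model_qat.onnx" is a placeholder for a QAT-exported network.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model_qat.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)            # 8-bit Tensor Core math
config.set_flag(trt.BuilderFlag.SPARSE_WEIGHTS)  # exploit 2:4 structured sparsity

engine_bytes = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:
    f.write(engine_bytes)                        # deployable engine file
```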

FIGURE 2: TensorRT 8 powers the GE Healthcare Vivid E95 scanner used for automated cardiac view detection. (Source: General Electric)

Now, companies can deploy an entire workflow in milliseconds with TensorRT 8, potentially paving the way for a new generation of conversational AI apps that provide users with a more intelligent and low latency experience.

TensorRT: Deployed Across Industries
TensorRT’s better performance and accuracy make it a popular choice for industries such as healthcare, automotive, telecom/datacom, financial services, and retail. For example, GE Healthcare leveraged the SDK to accelerate automated cardiac view detection in its Vivid E95 scanner (Figure 2). The improved performance allowed for an enhanced view detection algorithm that cardiologists can use to make more accurate diagnoses and detect diseases earlier.

TensorRT is also used by Verizon, Ford, the United States Postal Service, American Express, and other well-known companies. Head over to https://developer.nvidia.com/tensorrt to find out ways the SDK is being put to use and how you can leverage it in your next edge AI system build. EAI



ML MEETS INDUSTRIAL IOT

Ensuring AI Success in Manufacturing By Philipp Wallner, The MathWorks

This four-step process can help automation engineers successfully implement AI into manufacturing processes that operate 24/7.

AI offers a number of new applications for manufacturing engineers. To provide full value, AI models need to be integrated across the entire manufacturing operation. And these processes could run nonstop, seven days a week. This means engineers must focus on multiple aspects of AI if they are to integrate it fully across manufacturing workflows. This starts with data preparation, then modeling, followed by simulation and test, and finally deployment (Figure 1). This four-step workflow allows AI models to be successfully integrated into 24/7 manufacturing operations.

FIGURE 1: The four steps automation engineers should consider for a complete, AI-driven workflow. (Source: The MathWorks, Inc.)

It’s Not All About Modeling
Engineers using machine learning and deep learning often anticipate spending a large percentage of their time developing and fine-tuning AI models. Yes, modeling is an important step in the workflow, but the model is not the end of the journey. The key to success in a practical AI implementation is uncovering issues early on and knowing what aspects of the workflow to focus time and resources on for the best results. And these aren’t always in the most obvious places.

Two important aspects to consider before diving into the workflow are:

1. AI is often only a small piece of a larger manufacturing system and must work correctly in all scenarios and with all of the other components of a continuously running manufacturing line. These include data collected from sensors on the equipment and channeled through industrial communication protocols like OPC UA, as well as other pieces of machine software such as supervisory and control logic and the HMI.
2. These engineers already have the skills to successfully incorporate AI. They have inherent domain knowledge about the equipment, and with tools for data preparation and designing models, they can get started even if they’re not AI experts.

The AI-Driven Workflow
Now we can begin to understand the four steps of an AI-driven workflow and how each step plays a critical role in successfully implementing intelligence into manufacturing equipment.

Step 1: Data Preparation
Data preparation is arguably the most important step in the AI workflow. Without robust and accurate input data to train a model, projects will likely fail. Moreover, if an engineer gives the model “bad” data, he or she will not get insightful results and spend many hours trying to figure out why the model is not working.

To train an accurate model you must begin with as much clean, labeled data as you can gather. This is one of the most time-consuming steps of the workflow.

When deep learning models do not work as expected, many often focus on how to make the model better – tweaking parameters, fine-tuning the model, and providing multiple training iterations – but engineers would be better served focusing on the input data. Preprocessing and confirming the


correct labeling of data that is being fed into a model ensures the data can be understood by the model.

Another challenge experienced in the manufacturing industry is that companies operating the machinery have access to operational equipment data but it is the machine builders who need that data to train AI models for deployment on the equipment. Many machine builders and their customers (machine operators) have thus developed agreements and business models for sharing measured sensor data for AI model training and improvement.

One example of the importance of data preparation is from construction equipment company Caterpillar, which takes in high volumes of field data from various machines. This wealth of data is necessary for accurate AI modeling, but the sheer volume of data can make the data cleaning and labeling process even more time intensive than usual. To streamline this process, Caterpillar performs automatic labeling through an integration with MATLAB that allows clean data to be generated and input into machine learning models quickly. This process produces stronger insights from field machinery and can scale to give users the flexibility of their domain expertise without having to be experts in the field of AI.

Step 2: AI Modeling
Once the data is clean and properly labeled, it’s time to move on to the modeling stage of the workflow. This is where data becomes an input, and the model learns from it.
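The workflow described here is built around MATLAB and Simulink; purely to make the clean-label-train idea concrete in a framework-neutral way, the small Python sketch below uses a hypothetical CSV file, column names, and fault-labeling rule (a real project would use verified labels, not a threshold):

```python
# Step 1 (data preparation) and Step 2 (modeling) in miniature.
# "line_sensors.csv" and its columns are hypothetical stand-ins for
# measured equipment data shared by the machine operator.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

raw = pd.read_csv("line_sensors.csv")            # vibration, temperature, ...

# Data preparation: drop incomplete rows and apply a simple labeling rule.
clean = raw.dropna(subset=["vibration_rms", "temperature_c"])
clean["fault"] = (clean["vibration_rms"] > 0.8).astype(int)

features = clean[["vibration_rms", "temperature_c"]]
x_train, x_test, y_train, y_test = train_test_split(
    features, clean["fault"], test_size=0.25, random_state=0)

# Modeling: fit a baseline classifier and check it before any fine-tuning.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(x_train, y_train)
print("held-out accuracy:", model.score(x_test, y_test))
```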



The goal of successful modeling is to create a robust, accurate model that can make intelligent decisions based on real-world operational input data. This is also where deep learning (neural networks), machine learning (SVM, decision trees, etc.), or a combination thereof comes into the workflow as engineers decide which technology will deliver the most accurate, robust results.

Regardless of the selection of a deep learning or machine learning model, at this stage it’s important to have direct access to many AI algorithms, such as classification, prediction, and regression, for evaluation. You may also want to consider the variety of prebuilt models developed by the broader community as a starting point for comparison. Using flexible tools like MATLAB and Simulink offers engineers the support needed in these iterative environments.

While algorithms and prebuilt models are a good start, they’re not the complete picture. Engineers must understand how to use these algorithms to find the best approach for their specific problem, and MATLAB provides hundreds of examples for building AI models across multiple domains. AI modeling is an iterative step within the complete workflow and engineers must track any changes made to the model during this step. Tracking changes and recording training iterations with tools like MathWorks’ Deep Learning Toolbox Experiment Manager is crucial as it helps explain the parameters that led to the creation of the most accurate models so results can be reproduced.

Step 3: Simulation and Test
AI models exist within a larger system and must work with all other pieces in the system. In the manufacturing industry the AI model might take care of predictive maintenance, dynamic trajectory planning, or visual quality inspection while the rest of the machine software includes control firmware, supervisory logic, and more. Simulation and testing are key to validating that the AI model is working properly,

and that everything works well together within the system before deploying the model into the real world. To build this level of confidence prior to deployment, engineers must ensure that the model will respond the way it is supposed to, no matter the situation. Questions you should ask in this stage include:

› What is the overall accuracy of the model?
› Does the model perform as expected in each scenario?
› Does it cover all corner cases?

Trust is achieved once you have successfully simulated and tested all cases the model could encounter and can verify that the model performs as expected. By using simulation tools like Simulink, engineers can verify that the model works as desired for all the anticipated use cases prior to deployment on the target equipment, avoiding redesigns that are both costly and time consuming.

Step 4: Deployment
Once you are ready to deploy, the target hardware is next. In other words, the model needs to be optimized for the target using the final language in which it will be implemented. This step typically requires design engineers to share an implementation-ready model, which allows them to fit that model into the designated industrial control environment. That designated hardware environment can range from embedded controllers and PLCs to industrial PCs to the cloud,

and MATLAB can generate the production code in all scenarios. This offers engineers the leeway to deploy their model across a variety of environments from different hardware vendors without having to rewrite the original code. Take the example of deploying a model directly to a PLC. Automatic code generation eliminates errors that could be introduced through manual programming and provides optimized C/C++ or IEC 61131 code that will run efficiently on PLCs from major vendors.

Stronger Together
Ultimately, engineers are at their best when they can leverage their domain expertise and build on it with the right resources. They don’t have to become data scientists or even AI experts to achieve success with AI. Tools designed for engineers and scientists, functions and apps to integrate AI into your workflow, deployment options for 24/7 operational use, and experts who can answer AI integration questions are crucial resources for achieving success. All of those resources are available today and can help move your industrial AI design into production in four simple steps. EAI

As Industry Manager for Industrial Automation & Machinery at MathWorks, Philipp Wallner works closely with innovation leaders among key customers and internal development teams to sharpen MathWorks’ strategy and offerings for digital transformation, AI, and Industry 4.0.

The MathWorks – www.mathworks.com – Twitter: @MathWorks – LinkedIn: www.linkedin.com/company/the-mathworks_2 – YouTube: www.youtube.com/user/MATLAB



ML MEETS INDUSTRIAL IOT

Connectivity by Design By Hilmar Retief, Bentley Systems

There are not degrees of openness – either you mean it or you don’t.

Just as the bloodstream nourishes each individual system of the human body, information and data are the lifeblood of every part of your organization. The problem is that most organizations are poorly connected. As a result, this lifeblood isn’t allowed to properly travel between systems, leaving each system isolated, underutilized, and undernourished. Data silos are a symptom of a poorly connected system, which results in wasted time and resources and causes poor decision-making, missed opportunities, and duplication of efforts as each employee attempts to recreate existing data sets from their own, most likely outdated, cache of information.

The Need for Unified and Aggregated Information
The way to address the data silo problem is to improve connectivity among these data silos and systems. Industry trends and standards such as Industry 4.0 and digital twins are the culmination of this need for unified and aggregated


information to connect and curate both new and legacy data sources, creating a more holistic and better-connected ecosystem. Connectivity by design extends and advocates not only the advance of connectivity, but also embedding connectivity into the actual design of solutions and software. Systems, by design, should be able to discover, inherit, evaluate, and share intelligence across different systems or components. We should be able to monitor, analyze, and control at the sub-unit level in real time and visualize data at the system level and within the entire ecosystem. This is the glue that will accelerate digitalization.

Open Always Wins
Open means you are not locked into a single-vendor solution. You can import and export data freely. When you write an application with an open technology, it can run anywhere. It doesn’t have to run in any specific cloud, and it doesn’t force you to store your data in a cloud that is constrained by terms of service. You will always be able to access and export your data.

There are many advantages from which organizations can benefit when they can allow the exchange of data between multi-vendor devices without any closed or proprietary restrictions. When non-proprietary open standards are utilized, interoperability between data sources and endpoints is assured without any limitations. Peer reviews can be done easily regardless of the design platforms used so design teams can collaborate, innovate, and accelerate development at a fraction of the cost while ensuring robust and reliable data.

Openness Depends on Standardization
For most organizations, their information technology infrastructure is a hodge-podge of new, legacy, on-premises, and cloud applications and services. These systems


cannot be replaced overnight and must play a role in a connected information ecosystem for years or even decades to come. Real-time information exchange among these heterogeneous and very often geographically distributed systems is critical for supporting complex business scenarios across cross-functional business processes. Industry standards foster openness and enable interoperability between products from different vendors. They provide a basis for mutual understanding and facilitating communication, which improves inter-business communication and speeds development. Interoperability standards with staying power include:

› ISO 15926 is for data integration and interoperability in capital projects. It addresses sharing, exchange, and hand-over of data between systems.
› ISO 18101 provides guidance on the requirements for interoperability among systems of systems, individual systems (including hardware and software), and components. This standard grew out of the open standard MIMOSA CCOM.
› OPC Unified Architecture (OPC-UA) is one of the most important communication protocols for the Industrial IoT. It standardizes data exchange between machines, devices, and systems in the industrial environment for platform independence and interoperability.
› RAMI 4.0 is a standardized model defining a service-oriented architecture (SOA) and a protocol for network communication, data privacy, and cybersecurity.

Industrial standards for open interoperability enable vendors to work together to open their systems, so that the users of the vendor software get a complete picture of their assets and data.

Edge or Cloud, the Best of Both Worlds – As Long as They Are Connected
When introducing IIoT to your organization, you will likely consider leveraging edge devices to collect and process the data from that device directly on the device, which drives the compute capability closer to the point of data collection (the location of the IIoT device). This is called edge computing (Figure 1).

FIGURE 1: Edge computing drives compute capability closer to the point of data collection. (Source: Bentley Systems)

The role of edge computing is to ingest, store, filter, and send data to cloud systems. Supervisory control and data acquisition (SCADA) is a control system architecture designed for remote monitoring and control of industrial applications. The difference between cloud and edge computing is simply where the processing takes place. Cloud works via a centralized data center, while edge computing is a collection of points. There are many benefits to using cloud computing to centralize and aggregate information, which can culminate in a complete digital twin of a facility. But there are equally valid and important reasons for using edge computing, ranging from latency and bandwidth problems to cost, reliability, and privacy concerns.

Ultimately, in a hyper-connected and open environment, in either a centralized or distributed/edge IIoT ecosystem, it is important for decision makers to get a complete,


timely, accurate, and trustworthy picture of the performance of the asset, which leads to the kinds of benefits that are impossible in a siloed data ecosystem. Moreover, while IIoT feeds have tremendous individual value, they must be connected and combined with information from traditional legacy data sources, including asset registries, work schedules, performance, failure, and reliability management plans, as well as maintenance activities, to optimize their value to the decision makers.

Business Outcomes and Decision Making
Business outcomes may change when injecting hyper-connectivity. For example, a team monitoring a drilling operation may want to determine when to trigger replacement of the drill head and minimize work stoppage. The camera at the tip of the drill head and associated sensors measure vibration, temperature, angular velocity, and movement, then transmit time-series signals including video, audio, and other data. This data must be analyzed as close to real-time as possible with respect to object identification, precision geolocation, and process linkage. Advanced predictive analytics compare the drill head condition to past patterns and similar rigs or geologies to determine when the drilling rate and performance will decrease below a tolerable speed and identify predictable component failures.

These predictions, combined with the drilling schedule, will allow operators to decide when to replace the drill head. Ideally the spare parts supply chain can auto-trigger based on the predictions or replacement decisions. When triggered, purchase orders are followed by transport and logistics for delivery and workforce scheduling to execute the replacement prior to breakage. Data about the drill-head and lag time for each process and operation is captured for future aggregate studies.

In this scenario, the signals coming from the equipment need to be synthesized into a unified set of information for optimal value. In this case, time-series signals, asset registry data, performance data, and metrics are best evaluated together. Clearly combining cloud computing and edge computing with engineering models, reliability analysis, supply chain, and maintenance data provides the best outcome, but it is only possible in an architecture that connects datasets and data. Connected data provides context in real time, allowing you to see how asset performance is impacting key business metrics.

People, Process, and Data Connectivity
When we say “connectivity by design” we also mean connectivity among people and between people and data. A business strategy of connectivity results in an exponential gain in productivity. Business process integration is not just about software, and it is certainly not just about IT. Business process integration unifies the organization’s culture with an improved data analytics strategy and makes it possible for data to become actionable by people or automated in real time. Business process integration is a key initiative that is designed to leverage connectivity.

Bentley’s iTwin Connected Data Environment
In the example diagram of Bentley’s iTwin Connected Data Environment, the business processes related to the acquisition and aggregation of data are connected and collectively contribute to the creation of a real-time digital twin. In this case,


"Ultimately, in a hyper-connected and open environment, in either a centralized or distributed/ edge IIoT ecosystem, it is important for decision makers to get a complete, timely, accurate, and trustworthy picture of the performance of the asset, which leads to the kinds of benefits that are impossible in a siloed data ecosystem." the engineering models from CAD tools such as those from Bentley, AVEVA, or Hexagon that implement schemas based on the ISO 15926 standard are acquired via what are called bridges. Bridges A and B, also referred to as connectors, understand these schemas and transform them into iModel BIS schemas. From there the acquired data is aggregated to become a unified engineering dataset, which then becomes available for visualization and analytics. Simultaneously, information from configuration and reliability management tools is collected and transformed into a CCOM-compliant data model. As with the iModel, this operational data is gathered and unified into the operations data hub. There, it can be reported on and combined with engineering data and geometry data resident in the iModelHub, as well as IIoT data provided via the Microsoft Azure IoT Hub, to provide the user with a complete and real-time digital twin (Figure 2). www.embeddedcomputing.com/machine-learning


A digital twin is a digital representation of a physical asset, process, or system that allows us to understand and model its performance. Digital twins are continuously updated with data from multiple sources, which allow them to reflect the current state of real-world systems or assets. Everything is connected to everything and the ability to use this information to support decision-making is where the true value of a digital twin is realized.

Why Open, Why Connected, Why Now?
Everybody says “open.” But there aren’t degrees of openness – either you mean it or you don’t. Open technology is designed to have vendor-switching capability. Adopting and natively embedding patterns such as open-source and open-interoperability standards such as ISO 15926 and more recently ISO 18101 makes it as easy as possible for customers and third-party developers to interact with applications and cloud services.


Open source and open data pave the way to creating a complete and high-fidelity digital twin that fully represents a facility in all aspects, from design, construction, and commissioning to maintenance and operations. Open and connected isn’t a goal in and of itself. It is a corporate strategy that is built into the fabric of an organization that enables users of the technology to have easy access to their data. EAI

Hilmar Retief is a Distinguished Engineer of Industry Solutions in the Office of the CTO at Bentley Systems.

Bentley Systems – www.bentley.com/en – Twitter: @BentleySystems – LinkedIn: www.linkedin.com/company/bentley-systems – YouTube: @BentleySystems



ML MEETS INDUSTRIAL IOT

Anomaly Detection using Reality AI Software Tools By Saumitra Jagdale, Freelance Technology Writer

Production workflows are most efficient when machinery operates without faults or anomalies. That much is obvious to factory operators around the world, but what may not be is how to ensure equipment reaches and maintains that condition. Generally, systems fail due to deterioration of components or external forces that disturb internal processes. So, it can be inferred that some type of departure from normal behavior must occur for a system to break down. Often, these anomalies occur gradually rather than abruptly, which makes detecting – much less predicting – anomalies before a major event occurs difficult.

Factories can meet optimization targets by monitoring overall system health but forego lengthy operational research. While traditional machine health evaluation methods generally focus on diagnosis and lack the ability to predict faults or threats, the integration of AI introduces the concept of machine-level predictive analytics.

Sensor Data Collection at the Edge Node
Data collection plays a significant role in exploratory machine-level analytics, and feeds a dataset that must be fit to the AI model used for anomaly detection. For this, the physical features of the machinery need to be measured using a range of sensors that evaluate condition and performance.

Reality AI and Advantech have partnered on an edge node that serves the purpose of evaluating machinery for anomalies and predicting the life expectancy of running components. The RealityCheck AD is based on Advantech’s EPC-S201 fanless embedded PC, which hosts a dual-core Intel Celeron N3350 SoC, 8 GB RAM, and a 64 GB SSD. It comes with a wide range of sensor options for data collection:

› Accelerometers from several different suppliers
› Current and temperature sensors for performance and heat evaluation
› Contact microphones for audio sensing of the machinery

The edge node comes with wall and rail mounting options that facilitate deployment in a variety of industries. Additionally, the device supports a wide range of connectivity, including Wi-Fi, Ethernet, and cellular communication.

Anomaly Detection using Reality AI Software Tools
Of course, sensor data captured by the Advantech embedded PC needs to be cleaned and pre-processed. Then it can be sufficiently analyzed by AI algorithms.

Embedded AI & Machine Learning RESOURCE GUIDE 2021

www.embeddedcomputing.com/machine-learning
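The transforms Reality AI searches over are proprietary, but a minimal sketch of the kind of time- and frequency-domain features such a pipeline might compute from one window of vibration data could look like the following (NumPy/SciPy; the sample rate, window length, and band edges are illustrative assumptions, not Reality AI's actual algorithm):

```python
import numpy as np
from scipy import signal

FS = 10_000          # assumed accelerometer sample rate (Hz)
WINDOW = 4096        # assumed analysis window length (samples)

def extract_features(x: np.ndarray) -> np.ndarray:
    """Compute simple time- and frequency-domain features for one window."""
    # Time-domain statistics: RMS energy, peak level, crest factor
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    crest = peak / rms if rms > 0 else 0.0

    # Frequency-domain: power spectral density via Welch's method
    freqs, psd = signal.welch(x, fs=FS, nperseg=min(WINDOW, len(x)))
    total = np.sum(psd)
    centroid = np.sum(freqs * psd) / total if total > 0 else 0.0

    # Energy in a few coarse frequency bands (band edges are illustrative)
    bands = [(0, 500), (500, 2000), (2000, 5000)]
    band_energy = [np.sum(psd[(freqs >= lo) & (freqs < hi)]) for lo, hi in bands]

    return np.array([rms, peak, crest, centroid, *band_energy])

# Example: one window of raw accelerometer samples (placeholder data)
window = np.random.randn(WINDOW)
features = extract_features(window)
```

Feature vectors like these, one per window, are what the downstream anomaly detection and classification models consume.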


The company's solution, RealityCheck AD, also comes with the functionality to adapt the weights of AI models depending on the use case, and it can generate visualizations for better accuracy and understanding of the application. Additionally, it provides in-depth hardware analytics that aid in the solution design (Figure 1).

RealityCheck AD uses a baseline strategy for first normalizing and later optimizing anomaly detections. It evaluates a reference region in the feature space and compares those observations with a baseline normal region, which is based on a model it also builds through the process of classification. All known anomalies occur on the distant edges of the baseline reference model; these distant points therefore become the anomalies that models look for throughout the detection process.

FIGURE 1: The RealityCheck AD AI development tool conducts in-depth hardware analytics to assist in the design of predictive maintenance solutions. (Source: Reality AI) [Chart: "Anomaly Detection Results for Blower AD 400 window III_model_2," plotting anomaly scores by Test_Condition (fan-blocked, fan-normal1, fan-normal2, fan-unbalanced, filter-clogged, off-speed1, off-speed2) against a threshold of 1.92.]

FIGURE 2: The RealityCheck AD development tool helps automate the creation of feedback loops for AI-enabled predictive analytics. (Source: Reality AI) [Flowchart: collect initial baseline and build v0 model → deploy model for continuous inference → anomaly detected → investigate anomaly → if a true anomaly, label it with its cause and save it for later; if not an anomaly, add it to the baseline and retrain.]
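Reality AI does not publish the internals of this loop, but a minimal sketch of the collect, deploy, investigate, and retrain pattern shown in Figure 2, using a generic distance-to-baseline detector, could look like the following (the threshold, the `read_features` and `investigate` hooks, and the class itself are hypothetical placeholders, not the RealityCheck AD API):

```python
import numpy as np

class BaselineAnomalyDetector:
    """Toy baseline model: flags windows whose features are far from the baseline mean."""

    def __init__(self, threshold: float = 3.0):
        self.threshold = threshold
        self.baseline: list[np.ndarray] = []
        self.mean = None
        self.std = None

    def fit(self, baseline_windows):
        """Build (or rebuild) the v0 model from known-normal feature vectors."""
        self.baseline = list(baseline_windows)
        data = np.vstack(self.baseline)
        self.mean = data.mean(axis=0)
        self.std = data.std(axis=0) + 1e-9

    def score(self, features: np.ndarray) -> float:
        # Normalized distance from the baseline "normal" region
        return float(np.max(np.abs((features - self.mean) / self.std)))

    def is_anomaly(self, features: np.ndarray) -> bool:
        return self.score(features) > self.threshold


def monitoring_loop(detector, read_features, investigate):
    """Continuous inference with the investigate / label / retrain feedback cycle."""
    labeled_anomalies = []
    while True:
        features = read_features()        # hypothetical: next feature vector from the edge node
        if not detector.is_anomaly(features):
            continue
        cause = investigate(features)     # hypothetical: human or rule-based verification
        if cause is None:                 # false alarm: widen the baseline and retrain
            detector.fit(detector.baseline + [features])
        else:                             # true anomaly: label with cause and save for later
            labeled_anomalies.append((features, cause))
```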

Building Predictive Analytics from Scratch
As stated, the flow of anomaly detection starts with the creation of a baseline region in the initial detection model. The reference data set is continuously updated via real-time data collection, which occurs while the running detection model predicts anomalies in the system. Predictions are investigated and verified to determine whether they are accurate. If so, the specific data point is labeled as an anomaly and added to the reference data set, completing the cycle. Similarly, false predictions are labeled and updated accordingly (Figure 2). Although the AI models are accurate, a few anomalies may go undetected due to noise or overfitting; most of these, however, are detectable in end-of-line testing.

With machine-level analytics, equipment insights can become the foundation of production efficiency optimizations. And it all starts at the edge node. EAI

Head over to the RealityCheck AD webpage at reality.ai/industrial-ad for more detailed information on the solution.



FROM THE MIND OF MACHINES

Whose IP Is It? The AI Inventor or the AI’s Inventor? By Tiera Oliver, Associate Editor, and Taryn Engmark, Assistant Editor

It’s official. Whether AI has rights is no longer just the realm of science fiction. The University of Surrey recently filed an application with the European Patent Register on behalf of DABUS, or Device for the Autonomous Bootstrapping of Unified Sentience. And for the first time, a patent was awarded to a non-human entity. Well, at least temporarily. Not only was this the first time an AI was awarded a patent, but a University press release says this was also the first time an AI was named as an inventor in a patent application and the first time an AI was named in a patent application. Period. According to the Artificial Inventor Project[1], which seeks inventorship status for artificial intelligence systems, DABUS is a creativity engine “wherein controlled chaos combines whole neural nets, each containing simple notions, into complex notions (e.g., inventions). The representation of ideas takes the form of snake-like chains of nets often involving millions to trillions of artificial neurons. “Similarly, the consequences sprouting from these notions are represented as chained nets whose formation may trigger the release of simulated reward or penalty neurotransmitters to either


reinforce any worthwhile idea or otherwise erase it. As these serpentine forms appear, they are filtered for their self-assessed novelty, utility, or value and then absorbed within another net that serves as an interrogatable ‘witness’ of ideas cumulatively developed by the system.” In short, DABUS arranges a network of neural networks that imitates a human brain alongside blockchain-like concepts of distributed documentation, understanding, and approval. This combination allows the system to comb through bits of data to forge new connections and generate its own ideas. So far DABUS has two inventions: a container based on fractal geometry and a neural flame. According to Raconteur[2], the container can be likened to a cup that uses a series of bumps and bulges connected by fractal patterns like those found on a snail’s shell. The neural flame is a light source based on a blinking frequency that is difficult for human eyes to avoid, making it potentially useful in search and rescue missions. The patent applications were filed by an international team of lawyers and researchers led by University of Surrey Professor of Law and Health and author of The Reasonable Robot: Artificial Intelligence and the Law[3], Ryan Abbott. Abbott filed the patents on behalf of Dr. Stephen A. Thaler, president and CEO of Imagination Engines in St. Charles, Missouri, the designer of DABUS. At the end of July 2021, South Africa’s Companies and Intellectual Property Commission, which works cooperatively with the European Patent Office[4], approved the beverage container patent application that classified DABUS as the inventor. However, Thaler’s name remains on the application as both applicant and grantee. The AI Inventor: Why Now? Although AI is still in its early days, it’s somewhat surprising that this marks the first time an AI has been awarded, or even been considered, for a patent. But it turns out that traditional patent and IP laws state that only a “natural person” can be labeled



as an inventor or granted a patent, limiting patent rights to humans or their employer(s). Yet South Africa’s patent law does not explicitly state the requirement of an inventor being a natural person. And because it does not, it has become the site of what Professor Adrian Hilton, Director of the Institute for People-Centred AI at the University of Surrey, believes is a landmark “that recognizes the need to change how we attribute invention. “We are moving from an age in which invention was the preserve of people to an era where machines are capable of realizing the inventive step, unleashing the potential of AI-generated inventions for the benefit of society.” Of course, not everyone sees it that way. Following South Africa’s decision to grant the patent naming DABUS as the inventor of the fractal container, the worlds of AI and patent law have clashed considerably. Some believe granting patents to AIs will make it harder for human

inventors to secure patent rights; others think it will result in IP law chaos in the form of patent trolling, or constant, paralyzing IP infringement litigation; others still say this paves the way for AIs to hold and exercise additional rights. Outside of South Africa, DABUS patent applications are currently pending in Canada, India, the Republic of Korea, China, Taiwan, New Zealand, Israel, Brazil, Switzerland, and Saudi Arabia. The applications are in the appeals process in the U.S., U.K., Germany, Australia, and the European Patent Office. As for the U.S., a report published in October 2020[5] from the United States Patent and Trademark Office (USPTO) revealed that the vast majority of respondents to a request for comment believe that ownership of a patent or invention should remain exclusive to natural persons or companies via assignment. However, there is some consideration of “expanding ownership to a natural person: (1) who trains an AI process, or (2) who owns/controls an AI system.”

As for the team that has filed DABUS patents worldwide, they plan to continue arguing that, “While patent law in many jurisdictions is very specific in how it defines an inventor … the status quo is not fit for purpose in the Fourth Industrial Revolution.” References:

1. The Artificial Inventor Project. (n.d.). Retrieved November 2, 2021, from https://artificialinventor.com/dabus/.
2. Rothwell, R. (2020, February 4). AI Inventors: The Fight to Protect a Computer’s Creations. Raconteur. Retrieved November 2, 2021, from https://www.raconteur.net/technology/artificial-intelligence/ai-inventors-protect-ip/.
3. Abbott, R. (2020). The Reasonable Robot: Artificial Intelligence and the Law. Cambridge University Press.
4. European Patent Office. (n.d.). EPO Launches First-Ever Reinforced Co-Operation Program with South Africa. Retrieved November 2, 2021, from https://www.epo.org/news-events/news/2018/20180629.html.
5. U.S. Patent & Trademark Office. (2020). Public Views on Artificial Intelligence and Intellectual Property Policy. Retrieved November 2, 2021, from https://www.uspto.gov/sites/default/files/documents/USPTO_AI-Report_2020-10-07.pdf.

Learn to Program Your First Edge AI Application in Minutes with Texas Instruments Sponsored by Texas Instruments

In under 20 minutes, this step-by-step embedded tutorial will teach you to program a basic AI vision application using Python and Jupyter notebooks. It will show you how sensor data is used to make a “smart” camera; how to train and deploy a deep learning model with free, open-source software; and how to deploy your application on hardware accelerators using industry-standard APIs. Watch the tutorial at https://bit.ly/EdgeAIinMinuteswithTI
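The tutorial itself walks through TI's own tooling; purely as a rough illustration of the kind of Python inference step such a vision application ends up running, a generic TensorFlow Lite invocation looks like the following (the model file and input frame are placeholder assumptions, not TI's APIs or hardware-accelerated path):

```python
import numpy as np
import tflite_runtime.interpreter as tflite  # or: from tensorflow import lite as tflite

# Load a converted model; "model.tflite" is a placeholder path
interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Placeholder camera frame shaped to the model's expected input
frame = np.zeros(inp["shape"], dtype=inp["dtype"])

interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
predictions = interpreter.get_tensor(out["index"])
```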

WATCH MORE WEBCASTS: https://www.embeddedcomputing.com/webcasts


Embedded AI & Machine Learning Resource Guide

AI & Machine Learning

One-Stop Edge AI Solution Services
Vecow is a team of global embedded experts committed to designing, developing, producing, and selling industrial-grade computer products that combine leading performance, trusted reliability, advanced technology, and innovative concepts. Vecow offers One-Stop AIoT Solution Services, AI Inference Systems, AI Computing Systems, Fanless Embedded Systems, In-Vehicle Computing Systems, Robust Computing Systems, Single Board Computers (SBC), Industrial Motherboards, Multi-Touch Computers, Multi-Touch Displays, Frame Grabbers, Embedded Peripherals, and Design & Manufacturing Services. Vecow aims to be your trusted embedded business partner. Our experienced service team is dedicated to creating and maintaining strong partnerships and one-stop integrated solutions, and our services consider each partner’s unique needs in areas such as Machine Vision, Autonomous Cars, Robotic Control, Rolling Stock, Public Security, Traffic Vision, Smart Automation, Deep Learning, and other Edge AI applications.

FEATURES
VHub AI Developer Premier – Coding-Free AI Accelerate Solution: A coding-free solution that system integrators and solution owners can use to accelerate their AI projects. By taking a platform-based approach, Vecow delivers hardware and software integrated turnkey solutions for a wide range of applications, including Smart Retail, Traffic Vision, Smart Factory, Access Control, and Public Surveillance.

ECX-2400 AI – Workstation-Grade AI Computing System: 10-core 10th Gen Intel® Xeon®/Core™ i9/i7/i5/i3 processor with Intel® W480E chipset, up to 95 W TDP CPU; built-in independent quad-core/single-core AI accelerator card that replaces power-hungry desktop GPUs in industrial edge servers, delivering low latency in a low power envelope and supporting up to 64 TOPS; 6 GigE LAN with 4 IEEE 802.3at PoE+; DC 12 V to 50 V power input; 80 V surge protection; software ignition control; TPM 2.0; optional VHub AIoT Solution Service supports OpenVINO-based AI accelerators and advanced Edge AI applications.

ABP-3000 AI – Ultra-Slim AI Inference System: 8th Gen Intel® Core™ U-series processor paired with a compact Hailo-8™ AI accelerator supporting up to 26 Tera-Operations Per Second (TOPS) with best-in-class power efficiency of 3 TOPS/W; ultra-slim, cableless design; fanless 0°C to 60°C operation; 4 independent GigE LAN with 2 IEEE 802.3at PoE+; DC 9 V to 50 V power input; ignition power control; TPM 2.0; optional VHub One-Stop AIoT Solution Service supports OpenVINO-based AI accelerators and advanced Edge AI applications.

EAC-2000 – Compact Edge AI Computing System: Small-form-factor NVIDIA® Jetson Xavier™ NX supporting up to 21 TOPS of AI performance; advanced NVIDIA Volta™ architecture with 384 NVIDIA® CUDA® cores and 48 Tensor cores; fanless -20°C to 70°C operation; 4 GigE LAN with 2 PoE+; 4 USB 3.1; 1 digital display; 4 GMSL 1/2 automotive cameras with Fakra-Z connectors; supports 5G/4G/LTE/Wi-Fi/BT/GPRS/UMTS; DC 9 V to 50 V wide-range power input.

VECOW CO., LTD.
www.vecow.com | sales@vecow.com | +886 2 22685658
https://www.linkedin.com/company/vecow-co.-ltd | @VecowCo


Applications: Computer/Machine Vision

Lattice sensAI™ Solution Stack
On November 10, 2021, Lattice Semiconductor, the low power programmable leader, released an updated version of the Lattice sensAI™ solution stack in conjunction with a new low power roadmap enabling artificial intelligence and machine learning on smart edge devices, particularly within the Client Compute market. The updated Lattice sensAI solution stack enables next-generation smart PC experiences by supporting AI applications in line with next-generation PC trends, including smart and aware capabilities, enhanced collaboration, and slim new form factors. Lattice sensAI enables instant-on and aware capabilities through user presence detection, safeguards user privacy through onlooker detection, and enhances collaboration through face-framing capabilities. Additionally, Lattice sensAI delivers up to 28 percent longer battery life through attention-tracking technology that allows the device to save power when the user is not actively engaged on the device.¹

According to ABI Research, on-device AI inferencing capabilities are expected to reach 60% of all devices by 2024. Whether for professional, personal, or educational use, PCs have become the dominant means of collaboration, especially over the past few years. Today, PC usage dominates daytime productive hours, diminishing battery life over time. With Lattice sensAI, user presence detection enables client devices to automatically power on and off as the user approaches or departs. Another feature greatly impacting battery longevity is attention tracking, which adjusts screen brightness depending on a user’s attentiveness. When a user is distracted, devices with the Lattice sensAI solution stack detect the change in attention and save energy by lowering the brightness of the screen.

Additional features of the recently released Lattice sensAI stack are tailored to support OEMs affected by the vast transition to largely remote workforces. Face-framing capabilities enabled with sensAI improve the video experience in conferencing applications, while onlooker detection enables the device to identify when someone is shoulder surfing and react by blurring the screen to maintain data privacy.

FEATURES
Up to a 28 percent increase in battery life – Operating on Lattice Nexus-based FPGAs, Lattice sensAI increases battery longevity by up to 28% compared to devices using CPUs to power AI applications.
Expanded application support – The performance and accuracy gains made possible with the updated version of Lattice sensAI expand the stack’s target applications to include the highly accurate object- and defect-detection applications used in automated industrial systems. The stack has a new hardware platform for voice- and vision-based ML application development using the Lattice CertusPro™-NX FPGA and featuring an onboard image sensor, two I2S microphones, and expansion connectors for adding additional sensors.
Easy-to-use tools – The stack has an updated neural network compiler and supports Lattice sensAI Studio, a GUI-based tool with a library of AI models that can be configured and trained for popular use cases. sensAI Studio now supports AutoML features to enable creation of ML models based on application and dataset targets. Included in sensAI Studio are several popular models optimized for Lattice FPGAs. The stack is compatible with other widely used ML platforms, including Caffe, Keras, TensorFlow, and TensorFlow Lite.

¹ Lattice internal testing

www.latticesemi.com/sensAI

Lattice Semiconductor www.latticesemi.com


sales@latticesemi.com  408-826-6000 https://www.linkedin.com/company/lattice-semiconductor/

@latticesemi




Applications: Industrial Automation/Control

A FINE TECHNOLOGY GROUP

cPCI, PXI, VME, Custom Packaging Solutions
VME and VME64x, CompactPCI, or PXI chassis are available in many configurations from 1U to 12U, 2 to 21 slots, with many power options up to 1,200 watts. Dual hot-swap is available in AC or DC versions. We have in-house design, manufacturing capabilities, and in-process controls. All Vector chassis and backplanes are manufactured in the USA and are available with custom modifications and the shortest lead times in the industry.

Series 2370 chassis offer the lowest profile per slot. Cards are inserted horizontally from the front, and an 80mm rear I/O backplane slot configuration is also available. Chassis are available from 1U, 2 slots up to 7U, 12 slots for VME, CompactPCI, or PXI. All chassis are IEEE 1101.10/11 compliant with hot-swap, plug-in AC or DC power options.

Our Series 400 enclosures feature side-filtered air intake and rear exhaust for up to 21 vertical cards. Options include hot-swap, plug-in AC or DC power, and a system voltage/temperature monitor. Embedded power supplies are available up to 1,200 watts.

Series 790 is MIL-STD-461D/E compliant and certified, economical, and lighter weight than most enclosures available today. It is available in 3U, 4U, and 5U models up to 7 horizontal slots. All Vector chassis are available for custom modification in the shortest time frame. Many factory paint colors are available and can be specified with Federal Standard or RAL numbers.

FEATURES
› Most rack accessories ship from stock
› Modified ‘standards’ and customization are our specialty
› Card sizes from 3U x 160mm to 9U x 400mm
› System monitoring option (CMM)
› AC or DC power input
› Power options up to 1,200 watts

Made in the USA Since 1947

For more detailed product information, please visit www.vectorelect.com or call 1-800-423-5659 and discuss your application with a Vector representative.

Vector Electronics & Technology, Inc. www.vectorelect.com


inquire@vectorelect.com

 800-423-5659



Versatile – Mobile Type & Wall Mount & Desktop Use – Touch Embedded System!
MACTRON GROUP stands for “Transformer Team,” which means we stay flexible in a fast-changing market. M, A, and C correspond to the Medical Healthcare, Industrial Automation, and Business Commercial market segments. Our vision is to become the most reliable symbol of branding ability with the best quality. Our comprehensive product lines include PPC (Touch Panel PC), TDM (Touch Display Monitor), BPC (Embedded Box PC), and MTP (Mobile Tablet PC) to satisfy global partners’ market requirements. With capabilities spanning the value chain, and by taking full advantage of the latest technologies, MACTRON GROUP is committed to delivering products with forward-thinking features and best-in-class customer service. To make a “Fantastic Life with Touch Science!”, our passion for Touch Science IT is second to none. It is MACTRON GROUP’s visionary mission to fulfill our partners’ requests through professional customization capabilities and cost-effective solutions for Touch Science IT projects.

FEATURES
MediTRON – Medical Healthcare (MediTRON) Solutions are used to power devices and platforms that are streamlining health care delivery and improving patient care.
AutoTRON – Industrial Automation (AutoTRON) offers total solutions by providing real-time feedback to guide the manufacturing process and utilizing cyber-physical systems to perform difficult tasks.
CommTRON – Business Commercial (CommTRON) Solutions include Touch Panel PC, Touch Display Monitor, Embedded Box PC, and Mobile Tablet PC – the key building blocks for powerful kiosk machines and related applications.

MACTRON GROUP CO., LTD. www.mactrongroup.com


sales@mactrongroup.com

www.linkedin.com/company/mactron-group

 +886-2-2795-1668

twitter.com/MACTRONGROUP




Hardware Modules/Systems for Machine Learning

conga-TC570r
Ultra-rugged congatec modules with soldered-down RAM for the highest shock and vibration resistance. Designed to withstand extreme temperature ranges of -40°C to +85°C, the new COM Express Type 6 Computer-on-Modules built on 11th Gen Intel® Core® processors provide full compliance for shock- and vibration-resistant operation in challenging transportation and mobility applications. For more price-sensitive applications, congatec also offers a cost-optimized Intel® Celeron® processor variant for the extended temperature range of 0°C to 60°C. Typical customers for the new range of Computer-on-Modules based on the Tiger Lake microarchitecture are OEMs of trains, commercial vehicles, construction machines, agricultural vehicles, self-driving robots, and many other mobile applications in the most challenging outdoor and off-road environments. Shock- and vibration-resistant stationary devices are another important application area, as digitization requires critical infrastructure protection (CIP) against earthquakes and other mission-critical events.

congatec

www.congatec.us

FEATURES
› LPDDR4X RAM with up to 4266 MT/s and in-band error-correcting code (IBECC) for single-failure tolerance and high data transmission quality in EMI-critical environments.
› Demanding graphics and compute workloads benefit from up to 4 cores, 8 threads, and up to 96 graphics execution units for massive parallel processing.
› Integrated graphics supports 8k displays or 4x 4k; it can also be used as a parallel processing unit for convolutional neural networks (CNNs) or as an AI and deep learning accelerator.
› Scalable TDP from 12 W to 28 W, enabling fully sealed system designs with passive cooling only.
› Real-time-capable design including support for Time Sensitive Networking (TSN), Time Coordinated Computing (TCC), and RTS Real-Time Systems’ hypervisor for virtual machine deployments and workload consolidation in edge computing scenarios.
› Using the Intel OpenVINO™ software toolkit, AI workloads – including computer vision, audio, speech, and language recognition systems – can be extended across CPU, GPU, and FPGA compute units for acceleration (a minimal inference sketch follows below).
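As an illustration of how such a workload is retargeted between compute units, a minimal OpenVINO inference sketch in Python using the 2021-era Inference Engine API might look like the following. The model files, input data, and device string are placeholder assumptions, not part of congatec's deliverables:

```python
import numpy as np
from openvino.inference_engine import IECore  # OpenVINO 2021.x Python API

ie = IECore()
# Placeholder IR files produced by the OpenVINO Model Optimizer
net = ie.read_network(model="model.xml", weights="model.bin")

input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))

# Switching between CPU and the integrated GPU is a one-string change
exec_net = ie.load_network(network=net, device_name="GPU")

# Placeholder input tensor shaped to the network's expected input
shape = net.input_info[input_name].input_data.shape
frame = np.zeros(shape, dtype=np.float32)

result = exec_net.infer(inputs={input_name: frame})
predictions = result[output_name]
```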

www.congatec.com/en/products/com-express-type-6/conga-tc570r/

sales-us@congatec.com

www.linkedin.com/company/congatec

 858-457-2600 twitter.com/congatecAG


More edge computing power

What industrial IoT applications need today is a combination of high-performance low-power processor technology, robust real-time operation, real-time connectivity, and real-time hypervisor technologies. Featuring the very latest Intel Atom, Celeron, and Pentium processors (aka Elkhart Lake), congatec boards and modules offer more power for low-power applications in every aspect. Target markets include automation and control – from distributed process controls in smart energy networks and the process industry to smart robotics, or even PLC and CNC controls for discrete manufacturing. Other real-time markets are found in test and measurement technology and transportation applications, such as train and track systems or autonomous vehicles, all of which also benefit from the extended temperature options. The new low-power processor generation is also a perfect fit for graphics-intensive applications such as edge-connected POS, kiosk and digital signage systems, or distributed gaming and lottery terminals.

FEATURES
› Intel Atom x6000E Series processors, Intel Celeron and Pentium N & J Series processors (code named “Elkhart Lake”)
› Intel® UHD Graphics (Gen11) for up to 3x 4k @ 60 fps and 10-bit color depth
› Extended temperature range from -40°C to +85°C supported
› Time Sensitive Networking (TSN), Intel Time Coordinated Computing (Intel TCC), and Real Time Systems (RTS) hypervisor support
› Up to 4267 MT/s memory support with in-band ECC
› UFS 2.0 for higher bandwidth and data processing

congatec

www.congatec.us


sales-us@congatec.com www.linkedin.com/company/congatec


 858-457-2600 twitter.com/congatecAG



Industrial Grade NAND Flash SSD Products and DRAM Modules
UD info Corp. is a total-memory-solution provider for NAND flash SSD products and DRAM modules, focusing on industrial, medical, automotive, transportation, defense, surveillance, and other applications that require high reliability and longevity of supply.

Product Lines:
• PCIe interface SSD solutions, SATA interface SSD solutions, CF cards, PATA SSDs, USB drives, eMMC, and DRAM modules.
• A variety of flash options have been implemented in UD info’s SSD solutions, including SLC (60K P/E cycles), pSLC (30K P/E cycles), MLC (3K P/E cycles), 3D TLC (3K P/E cycles), and 3D pSLC (30K+ P/E cycles).
• 3+ years warranty, fixed-BOM solutions, and quality assurance with 5 years of traceability all apply.

New Products:
• 2.5" SATA SSD with FIPS 140-2 data encryption – This government-standard encryption contains robust cryptographic technology that thoroughly protects your most sensitive data, making it well suited to applications that require the highest level of security. Conformal coating, security labels, and additional features with extensive customization are available at customers’ request.

• PCIe M.2 2280 with Power Loss Protection – Helps secure the integrity of your data and has been widely tested and used in high-reliability fields, with up to 1920 GB of storage, industrial-grade temperature options, 3D TLC-based architecture, and fast read/write speeds.
• PCIe M.2 2230 SSD – A high-performance, small-size, large-capacity device designed for various small-form-factor applications. It uses the M-key interface and PCIe Gen 3 x4 lanes for data transfer.
• 16TB high-capacity 2.5" SATA SSD & PCIe U.2 SSD – Both high-capacity solutions are well suited to surveillance, transportation, security, and defense applications.

FEATURES
› PLP (Power Loss Protection) provides extra power to protect all data in the event of power loss during a write.
› pSLC, an ideal alternative solution, provides reliability and cost efficiency.
› S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) to detect the health status of the storage (see the sketch below).
› All products are available with industrial temperature options, with temperature ranges from -40 to 85ºC.
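As a rough illustration of how a host system can read those S.M.A.R.T. health attributes from an industrial SSD, a small Python wrapper around the standard smartmontools CLI might look like this (the device path is an assumption, and the attribute set reported varies by drive and vendor):

```python
import subprocess

DEVICE = "/dev/sda"  # placeholder device path

def smart_health(device: str) -> str:
    """Return the overall S.M.A.R.T. health assessment reported by smartctl."""
    out = subprocess.run(["smartctl", "-H", device],
                         capture_output=True, text=True, check=False)
    return out.stdout

def smart_attributes(device: str) -> str:
    """Return the S.M.A.R.T. attribute table (wear, temperature, etc.)."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False)
    return out.stdout

if __name__ == "__main__":
    print(smart_health(DEVICE))
    print(smart_attributes(DEVICE))
```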

www.udinfo.com.tw

UD info Corp.

www.udinfo.com.tw

sales@udinfo.com.tw • sales@udinfo-usa.com www.linkedin.com/company/udinfocorp  +886-2-77136050




Neural Network Processors: TPU

InferX™ X1 Edge Inference Accelerator (AI + eFPGA)

The InferX X1 Edge Inference Accelerator is optimized for processing real-time megapixel vision workloads. These workloads are characterized by deep networks with many feature maps and multiple operator types, and model accuracy targets may require the use of mixed precisions, including INT8, INT16, and BF16. They also require low-latency, batch-size = 1 inference processing. The X1’s dynamic tensor processor array offers ASIC speed and efficiency while providing model flexibility: its reconfigurable control logic technology lets designs quickly adopt and deploy new edge inference model technologies via field updates, future-proofing them. The accelerator architecture of the X1 makes it easy to support processing of multiple input data types, including high-definition video and multi-spectral images. The X1 is supported by the InferX Edge Inference SDK, which provides both model compiler and runtime software. The model compiler converts models expressed in TensorFlow Lite or ONNX and compiles them to operate directly on the X1 accelerator.
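The InferX compiler consumes standard TensorFlow Lite or ONNX files. As a hedged example of producing such an ONNX artifact from a PyTorch model (the model and input shape are placeholders, and this is generic PyTorch tooling rather than Flex Logix's own SDK), one might write:

```python
import torch
import torchvision

# Placeholder model and input resolution; a real deployment would use the
# trained vision model targeted at the X1.
model = torchvision.models.resnet18(pretrained=True).eval()
dummy_input = torch.zeros(1, 3, 224, 224)

# Export to ONNX, the interchange format accepted by edge-inference compilers
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    opset_version=13,
    input_names=["input"],
    output_names=["output"],
)
```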

The X1 is available as a 22x22 mm semiconductor device for custom board designs and also in HHHL PCI Express and M.2 M+B key form factors from Flex Logix.

FEATURES
› High-performance 4K-element dynamic tensor processor array
› Optimized for tough, megapixel image processing models
› Designed for low-latency B=1 inference processing
› Reconfigurable architecture future-proofs designs
› Higher throughput from less hardware drives better Inferences/$/Watt
› INT8, INT16, BFloat16 support – can mix between layers
› Programmed via TensorFlow Lite/ONNX

The Flex Logix InferX X1 Edge Inference co-processor supports rapid inference processing at the Edge. The X1 is available now in PCI Express card, M.2 card or chip form factor for custom design applications.

flex-logix.ai

Flex Logix

https://flex-logix.com/


info@flex-logix.com

www.linkedin.com/company/flex-logix-technologies-inc-


Storage

Industrial-Grade Solid State Storage and Memory
Virtium manufactures solid state storage and memory for the world’s top industrial embedded OEM customers. Our mission is to develop the most reliable storage and memory solutions with the greatest performance, consistency, and longest product availability.

Industry solutions include: Communications, Networking, Energy, Transportation, Industrial Automation, Medical, Smart Cities, and Video/Signage.

StorFly® SSD storage includes: M.2, 2.5", 1.8", Slim SATA, mSATA, CFast, eUSB, Key, PATA CF, and SD. Classes include: MLC (1X), pSLC (7X), and SLC (30X) – where X = the number of entire drive writes per day for the 3/5-year warranty period (a worked endurance example follows this profile).

Memory products include: all DDR, DIMM, SODIMM, Mini-DIMM, Standard, and VLP/ULP. They feature server-grade, monolithic components, best-in-class designs, and conformal coating/under-filled heat sink options.

New! XR (Extra-Rugged) Product Line of SSDs and Memory: StorFly-XR SSDs enable multi-level protection in remote, extreme conditions that involve frequent shock and vibration, contaminating materials, and/or extreme temperatures. Primary applications are battlefield technology, manned and unmanned aircraft, command and control, reconnaissance, satellite communications, and space programs; they are also ideal for transportation and energy applications. Currently available in 2.5" and Slim SATA formats, they include custom ruggedization of key components, such as ultra-rugged connectors and screw-down mounting, and when ordered with added BGA under-fill can deliver unprecedented durability beyond that of standard MIL-810-compliant solutions. XR-DIMM memory modules have the same extra-rugged features as the SSDs and include heatsink options and 30µ" gold connectors. They also meet US RTCA DO-160G standards.

FEATURES
• 22 years of refined U.S. production and 100% testing
• Broad product portfolio from the latest technology to legacy designs
• A+ quality – backed by verified yield, on-time delivery, and field-defects-per-million reports
• Extreme durability, iTemp -40º to 85º C
• Industrial SSD software for security, maximum life, and qualification
• Longest product life cycles with cross-reference support for end-of-life competitive products
• Leading innovator in small-form-factor, high-capacity, high-density, high-reliability designs
• Worldwide sales, FAE support, and industry distribution

Virtium
www.virtium.com | sales@virtium.com | 949-888-2444
www.linkedin.com/company/virtium | twitter.com/virtium
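To make the endurance classes above concrete, a drive-writes-per-day (DWPD) rating translates into total bytes written over the warranty period. A quick back-of-the-envelope calculation (the 240 GB capacity and 5-year period are illustrative assumptions, not a specific Virtium part number) is:

```python
# Endurance estimate from a drive-writes-per-day (DWPD) rating
capacity_gb = 240          # assumed drive capacity
dwpd = 7                   # pSLC class (7X) from the profile above
warranty_years = 5         # assumed 5-year warranty period

total_writes_tb = capacity_gb * dwpd * 365 * warranty_years / 1000
print(f"Rated endurance: about {total_writes_tb:.0f} TB written "
      f"({total_writes_tb / 1000:.1f} PB) over {warranty_years} years")
# -> roughly 3066 TB (about 3.1 PB)
```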



IIoT devices run longer on Tadiran batteries.

PROVEN 40-YEAR OPERATING LIFE

Remote wireless devices connected to the Industrial Internet of Things (IIoT) run on Tadiran bobbin-type LiSOCl2 batteries. Our batteries offer a winning combination: a patented hybrid layer capacitor (HLC) that delivers the high pulses required for two-way wireless communications; the widest temperature range of all; and the lowest self-discharge rate (0.7% per year), enabling our cells to last up to 4 times longer than the competition.

ANNUAL SELF-DISCHARGE
TADIRAN: 0.7% | COMPETITORS: up to 3%

Looking to have your remote wireless device complete a 40-year marathon? Then team up with Tadiran batteries that last a lifetime.

* Tadiran LiSOCL2 batteries feature the lowest annual self-discharge rate of any competitive battery, less than 1% per year, enabling these batteries to operate over 40 years depending on device operating usage. However, this is not an expressed or implied warranty, as each application differs in terms of annual energy consumption and/or operating environment.
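Read literally, the self-discharge figures above are what drive the "up to 4 times longer" comparison. A back-of-the-envelope sketch, assuming the device's own current draw is negligible (an illustrative simplification, not Tadiran's test methodology), is:

```python
# Rough service-life comparison based only on annual self-discharge,
# assuming negligible load current (an illustrative simplification)
tadiran_self_discharge = 0.007      # 0.7% of capacity lost per year
competitor_self_discharge = 0.03    # up to 3% per year

tadiran_years = 1 / tadiran_self_discharge        # ~143 years from self-discharge alone
competitor_years = 1 / competitor_self_discharge  # ~33 years

print(f"Self-discharge-limited life ratio: {tadiran_years / competitor_years:.1f}x")
# -> about 4.3x when the application's own current draw is small
```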

Tadiran Batteries 2001 Marcus Ave. Suite 125E Lake Success, NY 11042 1-800-537-1368 516-621-4980 www.tadiranbat.com


