Editor’s Choice for November
November 2021
COT’S PICKS
Avnet Edge AI Development Kit Enables High-Performance Edge Processing with Low-power On-chip Accelerators
The kit features Avnet Embedded’s SMARC computing platform based on NXP’s i.MX 8M Plus, a 10.1-inch touch display, and a dual-camera vision board supporting interchangeable IAS camera modules
Avnet’s new development kit enables OEM design engineers to deploy autonomous artificial intelligence (AI) capabilities to embedded applications, thereby reducing or eliminating the dependency on cloud connectivity or processing. The Avnet Edge AI Development Kit combines Avnet Embedded’s robust SMARC Computer-on-Module (COM), based on NXP’s i.MX 8M Plus applications processor, with a production-ready SimpleFlex carrier and a long-term-available 10.1-inch touch display to provide a cost-effective, high-performance computing solution for machine learning (ML) edge applications. Rounding out the kit is a dual-camera vision board that can support single or dual IAS camera modules based on onsemi image sensors.
The hardware is compliant with the new SMARC 2.1.1 module standard. The embedded computing solution fits within compact external dimensions of 146 mm (h) x 80 mm (w) and is suitable for operation over the -40°C to +85°C industrial temperature range.
“This new Edge AI development kit allows designers to augment existing applications with new features like face recognition, voice command processing, and other compute-intensive machine learning algorithms while still bringing their applications to market quickly,” said Jim Beneke, vice president of Products and Emerging Technologies at Avnet. “Our new kit enables advanced AI and ML applications to run faster at the edge through the power-efficient neural processing unit (NPU) included in NXP’s i.MX 8M Plus MPU. This also enables more autonomous systems where cloud connectivity is not required or can supplement the system’s capabilities with higher-level functions.”
“Avnet Embedded is a Gold Partner with NXP with many years of experience designing and manufacturing computing modules to industry standards,” said Tim Jensen, senior director of product innovation at Avnet Embedded. “The SMARC module hardware, which is designed and created in-house by Avnet Embedded, unlocks a vast array of potential for designers to leverage a system built on our in-house expertise.”
Along with the kit, Avnet provides example applications that leverage the NXP i.MX 8M Plus NPU core, with 2.25 TOPS of performance, to accelerate deep learning neural network inference and deliver better performance for practical applications such as face recognition for access control lockout.
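For readers who want a concrete feel for how an application might exercise the NPU, the following is a minimal, hypothetical sketch using the TensorFlow Lite runtime with NXP’s VX delegate (part of the eIQ software stack); the model file name and delegate path are assumptions that depend on the BSP image and application.

```python
# Minimal sketch: offloading a TensorFlow Lite model to the i.MX 8M Plus NPU
# via NXP's VX delegate (eIQ software stack). The model file name and delegate
# path are illustrative assumptions and vary with the BSP image in use.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

NPU_DELEGATE = "/usr/lib/libvx_delegate.so"   # typical location on NXP Yocto images (assumed)

interpreter = Interpreter(
    model_path="face_embedding_int8.tflite",          # hypothetical quantized model
    experimental_delegates=[load_delegate(NPU_DELEGATE)],
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# A camera frame resized to the model's input shape would be used here.
frame = np.zeros(inp["shape"], dtype=inp["dtype"])

interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()                                   # runs on the NPU via the delegate
embedding = interpreter.get_tensor(out["index"])
print("Embedding shape:", embedding.shape)
```

The same interpreter code runs unchanged on the CPU if the delegate is omitted, which makes it straightforward to compare NPU-accelerated and CPU-only inference times.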
“The i.MX 8M Plus applications processor, with its compute resources, connectivity options and especially with the dedicated NPU accelerator, is ideal to deploy machine learning applications for secure and accurate decision-making at the edge,” said Ali Osman Ors, director of AI Machine Learning Strategy and Technologies at NXP. “With the i.MX 8M Plus applications processor and the Avnet Edge AI Development Kit, we are supporting and enabling our customers to move to the ‘intelligent’ edge.”
“Fast time to market is an essential business need for emerging vision applications,” said Guy Nicholson, marketing director, industrial and commercial sensing division at onsemi. “The onsemi IAS module ecosystem has brought the mobile-style image sensor format to the broad industrial market. Through our collaboration with NXP and Avnet, we are now also providing a platform for camera system OEMs to rapidly develop and get to production with an industry-leading AI and machine learning solution.”
RadioWaves Adds New GPS/GNSS Timing Antennas
RadioWaves, an Infinite Electronics brand, has released a new series of GPS/GNSS timing antennas that cover the L1 and L5 GPS bands
RadioWaves’ new series of GPS/GNSS timing antennas provides a top-of-the-line axial ratio and higher accuracy for the reception of satellite timing signals and reference frequencies for enhanced phase synchronization in precision network deployments.
The high gain, low noise figure of 2 dB, and high out-of-band rejection provided by these antennas allow for the use of longer, more cost-effective cables for easy and flexible installations. They also feature a VSWR of less than 1.8:1 and are compatible with several existing mounting brackets. In addition, these fully ruggedized, weather-sealed antennas are IP67 compliant and well suited for use in outdoor and marine environments.
These antennas come equipped with built-in surge protection and support a wide range of constellations and services, including GPS, GLONASS, BeiDou, Galileo, and Iridium. Increased position accuracy in densely populated urban areas, flexible installation, and improved system security make RadioWaves’ latest antenna offering a critical system component.
“Our timing antennas with dual feed and dual-band capability provide top-of-the-line axial ratio and higher accuracy for the reception of satellite timing signals and reference frequencies for use in advanced network applications. These rugged outdoor antennas are suitable for use in all outdoor and marine environments,” said Kevin Hietpas, Antenna Product Line Manager.
RadioWaves www.radiowaves.com
Seeq Announces Expanded Microsoft Azure Machine Learning Support
New Seeq Azure Add-on feature enables rapid deployment of Azure Machine Learning algorithms to frontline plant employees
Seeq Corporation announced additional integration support for Microsoft Azure Machine Learning. This new Seeq Azure Add-on, announced at Microsoft Ignite 2021, an annual conference for developers and IT professionals hosted by Microsoft, enables process manufacturing organizations to deploy machine learning models from Azure Machine Learning as Add-ons in Seeq Workbench. The result is that machine learning algorithms and innovations developed by IT departments can be operationalized so frontline OT employees can enhance their decision-making and improve production, sustainability indicators, and business outcomes.
Seeq customers include companies in the oil & gas, pharmaceutical, chemical, energy, mining, food and beverage, and other process industries. Investors in Seeq, which has raised over $100M to date, include Insight Ventures, Saudi Aramco Energy Ventures, Altira Group, Chevron Technology Ventures, and Cisco Investments.
Seeq’s strategy for enabling machine learning innovations provides end-users with access to algorithms from a variety of sources, including open-source, third-party, and internal data science teams. With the new Azure Machine Learning integration, data science teams can develop models using Azure Machine Learning Studio and then publish them using the Seeq Azure Add-ons feature, available this week on GitHub. Using Seeq Workbench, frontline employees with domain expertise can easily access these models, validate them by overlaying near real-time operational data with the model results and provide feedback to the data science team. This enables an iterative set of interactions between IT and OT employees, accelerating time to insight for both groups while creating the continuous improvement loop necessary to sustain the full lifecycle of machine learning operations.
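To illustrate the Azure Machine Learning side of this hand-off, here is a minimal, hedged sketch (Azure ML SDK v1) of registering a trained model and publishing it as a web service that a Seeq add-on could consume; the model file, scoring script, and service name are illustrative assumptions, and the Seeq Add-on packaging itself is not shown.

```python
# Sketch of the Azure Machine Learning (SDK v1) side of the workflow: a data
# science team registers a trained model and deploys it as a web service.
# Model path, scoring script, and service name are hypothetical placeholders.
from azureml.core import Workspace, Model, Environment
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()                      # reads config.json for the workspace

model = Model.register(
    workspace=ws,
    model_path="outputs/anomaly_model.pkl",       # hypothetical trained model artifact
    model_name="process-anomaly-detector",
)

inference_config = InferenceConfig(
    entry_script="score.py",                      # hypothetical scoring script
    environment=Environment.get(ws, "AzureML-Minimal"),
)
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(ws, "anomaly-endpoint", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print("Scoring URI:", service.scoring_uri)        # endpoint a Seeq add-on could call
```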
“Seeq and Azure Machine Learning are critical and complementary solutions for a successful machine learning model lifecycle,” says Megan Buntain, Director of Cloud Partnerships at Seeq. “By capitalizing on IT and OT users’ strengths, the Seeq Azure Add-on expands the Seeq experience and creates new opportunities for organizations to scale up model deployment and development.”
Along with increased access to machine learning models through this integration, Seeq’s self-service applications enable frontline employees to perform ad hoc analyses and use the models themselves, rather than rely on an IT team member for support. As the models yield results, Seeq empowers users to scale them across the organization to improve asset reliability, production monitoring, optimization, and sustainability.
Supermicro Enhances Broadest Portfolio of Edge to Cloud AI Systems with Accelerated Inferencing and New Intelligent Fabric Support
Super Micro Computer, Inc. announced enhancements to the broadest portfolio of Artificial Intelligence (AI) GPU servers, which now integrate new NVIDIA Ampere-family GPUs, including the NVIDIA A100, A30, and A2.
Supermicro’s latest NVIDIA-Certified Systems deliver ten times more AI inference performance than previous generations, ensuring that AI-enabled applications such as image classification, object detection, reinforcement learning, recommendation, natural language processing (NLP), and automatic speech recognition (ASR) can produce faster insights at dramatically lower cost. In addition to inferencing, Supermicro’s powerful selection of A100 HGX 8-GPU and 4-GPU servers delivers three times higher AI training performance and eight times faster big data analytics than previous-generation systems.
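As a rough illustration of how such inference throughput figures are typically gauged, the sketch below is a generic GPU micro-benchmark (not Supermicro’s or NVIDIA’s benchmark methodology); the model, precision, and batch size are arbitrary assumptions.

```python
# Generic throughput micro-benchmark sketch for image-classification inference
# on a CUDA GPU. Model choice, FP16 precision, and batch size are assumptions.
import time
import torch
import torchvision

model = torchvision.models.resnet50().half().cuda().eval()
batch = torch.randn(64, 3, 224, 224, dtype=torch.float16, device="cuda")

with torch.no_grad():
    for _ in range(10):                 # warm-up iterations
        model(batch)
    torch.cuda.synchronize()
    start = time.perf_counter()
    iters = 100
    for _ in range(iters):
        model(batch)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"{iters * batch.shape[0] / elapsed:.0f} images/sec")
```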
“Supermicro leads the GPU market with the broadest portfolio of systems optimized for any workload, from the edge to the cloud,” said Charles Liang, president and CEO of Supermicro. “Our total solution for cloud gaming delivers up to 12 single-width GPUs in one 2U 2-node system for superior density and efficiency. In addition, Supermicro also just introduced the new Universal GPU Platform to integrate all major CPU, GPU, fabric, and cooling solutions.”
The Supermicro E-403 server is ideal for distributed AI inferencing applications, such as traffic control and office-building environmental monitoring. Supermicro Hyper-E edge servers bring unprecedented inferencing to the edge with up to three A100 GPUs per system. Supermicro can now deliver complete IT solutions that accelerate collaboration among engineering and design professionals, including NVIDIA-Certified servers, storage, networking switches, and NVIDIA Omniverse Enterprise software for professional visualization and collaboration.
“Supermicro’s wide range of NVIDIA-Certified Systems are powered by the complete portfolio of NVIDIA Ampere architecture-based GPUs,” said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. “This provides Supermicro customers top-of-the-line performance for every type of modern-day AI workflow, from inference at the edge to high-performance computing in the cloud and everything in between.”
Supermicro’s powerful data center 2U and 4U GPU (Redstone, Delta) systems will be the first to market supporting the new Quantum-2 InfiniBand product line and the BlueField DPUs. The NVIDIA Quantum-2 InfiniBand solution includes high-bandwidth, ultra-low latency adapters, switches and cables, and comprehensive software for delivering the highest data center performance, which runs across the broad Supermicro product line.
The Quantum-2 InfiniBand-based systems will provide 400 Gb/s per port, 2X higher bandwidth, increased switch density, and 32X higher AI acceleration per switch than the previous generation of InfiniBand adapters and switches, and will support both Intel and AMD processors.
With hybrid work environments becoming the norm, new technologies are required to ensure a workforce’s technical parity. The combination of NVIDIA Omniverse Enterprise and Supermicro GPU servers will transform complex 3D workflows, resulting in virtually unlimited iterations and faster time-to-market for a wide range of innovative products. In addition, NVIDIA Omniverse Enterprise and NVIDIA AI Enterprise on VMware, which let organizations integrate AI into their enterprise workflows, are optimized and tested on Supermicro’s NVIDIA-Certified Systems, enabling geographically diverse teams to work together seamlessly.
Pixus Offers 1U High Chassis With Dual AC Input and Chassis Monitoring
Pixus Technologies has announced a new power and monitoring option for its 1U-high OpenVPX, CompactPCI, and VME64x enclosures.
The 1U chassis supports various configurations of 3U/6U OpenVPX, 6U CompactPCI, or VME64x backplanes. The enclosure provides dual, non-redundant AC inputs supplying 12 V, which can also be converted to 5 V and 3.3 V. A chassis monitor located in the rear of the chassis reports the status of the voltages, fans, and temperature. The data is accessible via a USB interface, and the unit sets one of three relays if an event occurs.
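As a purely illustrative sketch of how a host application might poll such a monitor over its USB (virtual serial) interface, the following uses pyserial; the port name, query command, and telemetry format are hypothetical and should be taken from the Pixus documentation.

```python
# Illustrative sketch: polling a chassis monitor over a USB virtual serial
# port with pyserial. Port name, baud rate, query command, and the text
# format of the telemetry are all assumed, not taken from Pixus documentation.
import serial

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1.0) as port:
    port.write(b"STATUS?\r\n")             # hypothetical query command
    while True:
        line = port.readline().decode(errors="replace").strip()
        if not line:
            break                           # no more telemetry lines
        # Expecting lines such as "12V=12.05,FAN1=4200,TEMP=38" (assumed format)
        print(line)
```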
The Pixus 1U chassis supports Rear Transition Modules (RTMs) and various backplane configurations. Card guides for conduction-cooled boards are available. Pixus offers backplane/chassis systems in commercial, development, and MIL rugged formats. The company also provides IEEE and Eurocard components for the embedded computer market.
Pixus Technologies https://pixustechnologies.com
V1161 Programmable 100G Ethernet XMC ACAP Card
The V1161 is a next-generation, high-performance embedded computing XMC featuring the Xilinx® Versal™ Adaptive Compute Acceleration Platform (ACAP), the NVIDIA® Mellanox® ConnectX®-5 (MCX5) network interface device, and rugged optical and electrical I/O options. The V1161 is specifically targeted at applications requiring a combination of high-speed interfaces, network offloads, and onboard payload processing resources. Use cases include sensor interface designs with on-board data processing (or pre-processing), multi-level secure networking, and protocol bridging applications. Radar, SIGINT, video, storage, medical imaging, and embedded communications systems can all benefit from the V1161 module.
The V1161 is a proven high-bandwidth, low-latency performance leader in 10/25/40/100 Gb/s Ethernet applications. The V1161 includes hardware offloads for UDP, TCP, RoCE v2, DPDK, GPUDirect, NVMEoF, and many other protocol stacks. The combination of the MCX5 device and the ACAP device allows system designers to leverage off-the-shelf, world-class Ethernet performance while deploying unique data processing and security algorithms in the onboard ACAP device. This combination maximizes the effectiveness of the deployed algorithms while eliminating the design effort required to establish high-bandwidth Ethernet, PCIe controllers, efficient DMA engines, or low-overhead software drivers.
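For context, the sketch below shows the ordinary kernel socket path that stateless NIC offloads (checksum, segmentation, steering) accelerate transparently on ConnectX-5 class hardware; kernel-bypass paths such as DPDK or RDMA/RoCE use their own APIs rather than BSD sockets. The port number and buffer size are arbitrary assumptions.

```python
# Illustrative baseline only: a standard UDP receiver on the host. Stateless
# NIC offloads speed up this ordinary socket path without code changes,
# whereas DPDK or RDMA/RoCE applications are written to their own APIs.
import socket

RECV_PORT = 5001                     # arbitrary example port
BUF_SIZE = 9000                      # sized for jumbo frames

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", RECV_PORT))

packets = 0
try:
    while True:
        data, addr = sock.recvfrom(BUF_SIZE)
        packets += 1
        if packets % 100000 == 0:
            print(f"{packets} datagrams received, last from {addr}")
finally:
    sock.close()
```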
In addition to the Ethernet interfaces described, the FPGA fabric provided within the ACAP part is capable of hosting New Wave DV IP cores for Fibre Channel, ARINC-818, sFPDP, Aurora, and others. This makes the V1161 an ideal hardware platform for mixed interface protocol needs or protocol bridging applications.
The convenient XMC form factor and rugged design of the V1161 can turn a VPX-based single-board computer into a single-slot sensor interface and heterogeneous computing solution. The V1161 mounted on an x86 based single board computer will provide 100G optical interfaces, FPGA fabric, ARM processor cores, and x86 processor cores all in a single slot solution. V1161 is also available from New Wave DV in a 3U VPX form-factor instead of XMC if desired.
FEATURES
• Up to eight (8) 1G to 25G optical ports via MPO front panel I/O or VITA 66 optical backplane I/O; electrical I/O via Pn6 is also available
• Xilinx® Versal™ ACAP (FPGA)
• NVIDIA® Mellanox® ConnectX®-5 network interface device
• Hardware offloads for UDP, TCP, RoCE v2, DPDK, GPUDirect, NVMEoF, and more
• Supports PCIe Gen4 x16, Gen4 x8, Gen3 x16, Gen3 x8
• Onboard embedded PCIe switch device
• Advanced APIs that support multi-core and multi-processor architectures
• Wide range of operating system software support
New Wave DV https://newwavedv.com/
congatec and MATRIX VISION present PCIe based high-speed vision technology
congatec and MATRIX VISION will showcase their new SMARC Computer-on-Module platform with PCI Express (PCIe) based camera module extension for the first time at Vision in Stuttgart. With no protocol overhead and no need for additional interfaces such as GbE, USB, or MIPI CSI, image data is written directly into the RAM of the SMARC module with virtually no latency and higher bandwidth. MATRIX VISION’s Sony Pregius sensor-based camera modules deliver image data to congatec’s Intel Atom processor-based SMARC module at speeds of up to 226.5 frames per second (FPS) at 1.6-megapixel resolution. Such high-speed transmission enables hard real-time operation at cycle times of approximately 4 milliseconds. This timing is also a great fit for actuator commands over TSN (Time-Sensitive Networking) based Ethernet, which provides hard real-time at cycle times of less than 1 millisecond. Typical use cases are found in industrial machine vision applications in electronics and semiconductor manufacturing, the automotive industry, food and beverage, pharmaceuticals, packaging, and printing. Other markets include healthcare, intelligent transportation systems (ITS), as well as airport security and surveillance systems.
“PCIe-based camera implementations are predestined for ultra-low-latency, high-speed, real-time vision applications. One reason is that – unlike GbE, USB or MIPI – there is no overhead in the protocol. Secondly, the interface is always natively supported by the processor, which is not always the case with GbE, USB, or MIPI,” explains Martin Danzer, Director Product Management at congatec.
“The ability to use multiple lanes in parallel makes PCIe performance highly scalable across multi-camera system solutions while keeping overall system costs low. PCIe also offers high investment security into the future as this bus is inextricably linked to the x86 processor bus,” explains Uwe Hagmaier, Head of R&D at MATRIX VISION.
The live demo, which can operate with up to four camera modules, is designed for SMARC modules with Intel Atom, Intel Pentium, and Intel Celeron processors (code names Elkhart Lake and Apollo Lake). Variants featuring NXP i.MX8 M Plus processor-based SMARC modules are also available. The MATRIX VISION mvBlueNAOS camera module family uses the latest global shutter sensors from the Sony Pregius and Pregius S series. Providing high image quality, small pixel sizes, and high transfer rates, they are a perfect fit for this camera platform. To support the various processor architectures available on SMARC, the mvIMPACT Acquire SDK is part of the package. The GenICam GenTL producer ensures compatibility with existing developments and guarantees a smooth switch between different hardware platforms. Additional packages for LabVIEW, DirectShow, VisionPro, and Halcon are also available.
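As a hypothetical sketch of what a GenTL-based capture loop can look like, the following drives the producer’s .cti file through the generic open-source harvesters consumer; the installation path is an assumption, and method names vary slightly between harvesters releases.

```python
# Hypothetical sketch of grabbing frames through a GenICam GenTL producer
# using the generic "harvesters" consumer. The .cti path is assumed, and
# some method names differ between harvesters releases
# (e.g. create_image_acquirer vs. create, fetch_buffer vs. fetch).
from harvesters.core import Harvester

CTI = "/opt/mvIMPACT_Acquire/lib/x86_64/mvGenTLProducer.cti"  # assumed install path

h = Harvester()
h.add_file(CTI)
h.update()                                   # enumerate attached camera modules
print("Devices found:", len(h.device_info_list))

ia = h.create_image_acquirer(0)              # open the first camera
ia.start_acquisition()
with ia.fetch_buffer() as buffer:            # blocks until a frame arrives
    component = buffer.payload.components[0]
    print("Frame:", component.width, "x", component.height)
ia.stop_acquisition()
ia.destroy()
h.reset()
```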
Developers interested in evaluating the PCIe vision cards of the mvBlueNAOS family in combination with congatec SMARC modules based on Intel Atom, Intel Pentium, and Intel Celeron processors, as well as NXP i.MX8 M Plus processors, can choose among six different camera models with resolutions ranging from 1.6 MP (1456 x 1088) to 24.6 MP (5328 x 4608) and frame rates from 226.5 down to 24.1 FPS.
congatec www.congatec.com
Marvin Test Solutions Announces New 16-Channel PXI Device Power Supply (DPS)
Unique High-Density Flex-Power Architecture Offers High Performance and Multi-Channel Configuration Flexibility
Marvin Test Solutions, Inc. announced the release of the new GX3116e, 16-Channel Device Power Supply (DPS) / Source Measure Unit (SMU).
The GX3116e DPS is the highest density, most flexible multichannel semiconductor device power supply solution available. The true 4-quadrant operation, isolated outputs, ganging capabilities for higher current, and extensive health monitoring and alarms make this the ideal solution for a multitude of semiconductor test applications.
Kelvin connection sensing on a per-channel basis ensures that the Device Under Test (DUT) receives the expected excitation levels, independent of cabling and other interconnects, while over-current sensing and programmable alarms protect the device under test. Electrically isolated outputs, grouped in banks of eight channels, can be ganged together to achieve higher current levels, and both banks can be ganged together to extend the total overall output current.
“This latest addition to our Semiconductor product portfolio delivers the performance and flexibility that our customers demand for their evolving semiconductor test needs,” said Major General Stephen T. Sargeant, USAF (Ret.), CEO of Marvin Test Solutions. “The GX3116e combines unmatched channel density with exceptional source/measure performance, making it ideal for a wide range of current and emerging semiconductor test applications.”
The GX3116e is supplied with a full-featured virtual instrument panel that can be used to interactively program and control the instrument, as well as full documentation and online help files. Marvin Test Solutions also delivers GtLinux, a software package providing support for 32- and 64-bit Linux operating systems.
Marvin Test Solutions, Inc. www.marvintest.com
Fungible Advances Data Center Economics by Simplifying Secure Disaggregation of High-Performance Scale-Out Flash Storage Using Open Standards
Fungible Inc. announced it is adding new capabilities and products to its Fungible Storage Cluster product portfolio. The Fungible Storage Initiator (SI) cards allow standard servers to access NVMe over TCP (NVMe/TCP) storage targets using the world’s fastest and most efficient implementation of NVMe/TCP, provide enhancements to the security and usability of the entire data platform, and make deploying NVMe/TCP effortless in existing data centers.
Data centers have harbored inefficiencies for decades. Silos of resources create stranded capacity, while at the same time creating overhead for managing each silo independently. While silos have proliferated due to the unique needs of each application, workloads have also grown more and more data-centric. This has fueled the accelerated growth of infrastructure spending, while generalized hardware has become less effective at meeting the needs of these modern workloads.
Fungible has answered these challenges by creating the world’s most powerful Data Processing Unit, the Fungible DPU™, a new category of processor purpose-built for data-centric workloads. Fungible offers technology to unlock the capacity stranded in silos by disaggregating these resources into pools and composing them on demand to meet the dynamic resourcing needs of modern applications. While pooled storage has long been an answer to eliminating local storage silos, it is typically implemented at the expense of performance. This tradeoff is no longer necessary. Built to run on standard NVMe/TCP, the Fungible Storage Cluster enables the benefits of pooled storage without sacrificing performance. Now, with the announcement of Fungible’s Storage Initiator, NVMe/TCP is even easier to adopt, easier to deploy, and even more powerful.
The Fungible Storage Initiator solution is delivered on Fungible’s FC200, FC100, and FC50 cards. Each of these cards is powered by the S1 Fungible DPU, and a single FC200 card is capable of delivering a record-breaking 2.5 million IOPS to its host. These cards, and the Fungible Storage Cluster, are managed by Fungible Composer™, which orchestrates the composition of disaggregated data center resources on demand.
Fungible’s SI solution offers a hardware-accelerated, high-performance approach to disaggregating storage from servers. The SI cards are available in a standard PCIe form factor, allowing effortless deployment into existing servers. The cards manage all NVMe/TCP communication for the host and in turn present native NVMe devices to the host operating system using standard NVMe drivers. This approach enables interoperability with operating systems that do not natively support NVMe/TCP. When paired with a Fungible FS1600 or other non-Fungible NVMe/TCP storage targets, the SI cards enhance the performance, security, and efficiency of those environments while also providing the world’s highest-performance implementation of standards-based NVMe/TCP.
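For comparison, here is a hedged sketch of the standards-based path on a host that already has native Linux NVMe/TCP support, driven through nvme-cli; the target address and NQN are placeholders, and the commands typically require root privileges and the nvme-tcp kernel module.

```python
# Illustrative sketch of attaching a Linux host with native NVMe/TCP support
# to an NVMe/TCP storage target using nvme-cli. Target address and NQN are
# placeholder assumptions; run with root privileges and nvme-tcp loaded.
import subprocess

TARGET_ADDR = "192.0.2.10"                       # placeholder storage-target IP
TARGET_NQN = "nqn.2021-11.example:storage-pool"  # placeholder subsystem NQN

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Discover subsystems exposed by the target, then connect to one of them.
run(["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", "4420"])
run(["nvme", "connect", "-t", "tcp", "-a", TARGET_ADDR, "-s", "4420", "-n", TARGET_NQN])

# The remote namespaces now appear as ordinary /dev/nvmeXnY block devices.
run(["nvme", "list"])
```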
The benefits of the Fungible Storage Initiator solution include:
Simplicity - Allows modern data center compute servers to finally get rid of ALL local storage, even boot drives, allowing the complete disaggregation of storage from servers.
Security - Seamless, high-performance, multitenant encryption of data from the moment it is first transmitted over the network through its lifetime retention on the Fungible storage target.
Flexibility - Expands the usability of NVMe/TCP to a broader set of customer environments, even those without native NVMe/TCP support.
Savings and Performance - Offloads the processing of NVMe/TCP from the host, freeing up approximately 30% of the general-purpose CPU cores to run applications. This provides significant cost and environmental savings to customers.
“With our high-performance and low-latency implementation, Fungible’s disaggregated NVMe/TCP solution becomes a game-changer. Over the last five years, we have designed our products to support NVMe/TCP natively to revolutionize the economics of deploying flash storage in scale-out implementations,” said Eric Hayes, CEO of Fungible. “In addition to industry-leading performance, our solutions offer more value and the highest levels of security, compression, efficiency, durability, and ease of use. At Fungible, we continue to disrupt the traditional rigid models by disaggregating compute and storage using available industry standards like NVMe/TCP.”
“NVMe/TCP is rapidly gaining adoption and is a key driver in storage innovation today,” said Ashish Nadkarni, Group Vice President, Infrastructure Systems, Platforms, and Technologies Group at IDC. “It excels in highly demanding and compute-intensive enterprise, cloud and edge data center environments. Companies, such as Fungible, are leveraging NVMe/TCP to deliver the highest throughput, fastest response times, and unrivaled economics for all types of workloads.”