Electronica Azi International nr. 4 - 2024


Renesas brings the high performance of Arm Cortex-M85 Processor to cost-sensitive applications with new RA8 Entry-Line MCU Groups

RA8E1 and RA8E2 Deliver Unmatched Scalar and Vector Compute Performance with Best-in-Class Feature Set to Address Value-Oriented Markets

Renesas Electronics Corporation introduced the RA8E1 and RA8E2 microcontroller (MCU) groups, extending the industry’s most powerful series of MCUs. Introduced in 2023, the RA8 Series MCUs are the first to implement the Arm® Cortex®-M85 processor, enabling them to deliver market-leading 6.39 CoreMark/MHz 1) performance. The new RA8E1 and RA8E2 MCUs offer the same performance but with a streamlined feature set that reduces costs, making them excellent candidates for high-volume applications such as industrial and home automation, office equipment, healthcare, and consumer products.

The RA8E1 and RA8E2 MCUs deploy Arm Helium™ technology, Arm’s M-Profile Vector Extension that provides up to a 4X performance boost for digital signal processor (DSP) and machine learning (ML) implementations versus MCUs based on the Arm Cortex-M7 processor. This performance uplift enables applications in the fast-growing field of AIoT where high performance is crucial to execute AI models. RA8 Series devices integrate low power features and multiple low power modes to improve power efficiency, even while providing industry-leading performance. A combination of low power modes, independent power domains, lower voltage range, fast wakeup time and low typical active and standby currents enables lower overall system power and allows customers to lower overall system power consumption and meet regulatory requirements.

The new Arm Cortex-M85 core also performs various DSP/ML tasks at much lower power. RA8 Series MCUs are supported by Renesas’ Flexible Software Package (FSP). The FSP enables faster application development by providing all the infrastructure software needed, including multiple RTOS, BSP, peripheral drivers, middleware, connectivity, networking, and TrustZone support as well as reference software to build complex AI, motor control and cloud solutions. It allows customers to integrate their own legacy code and choice of RTOS with FSP, thus providing full flexibility in application development. Using the FSP will ease migration of existing designs to the new RA8 Series devices.

Key Features of the RA8E1 MCUs

• Core: 360 MHz Arm Cortex-M85 with Helium and TrustZone

• Memory: 1 MB Flash, 544 KB SRAM (including 32 KB TCM with ECC, 512 KB user SRAM with parity protection), 1 KB standby SRAM, 32 KB I/D caches

• Peripherals: Ethernet, XSPI (Octal SPI), SPI, I2C, USBFS, CAN-FD, SSI, 12-bit ADC, 12-bit DAC, HSCOMP, temperature sensor, 8-bit CEU, GPT, LP-GPT, WDT, RTC

• Packages: 100/144 LQFP

Key Features of the RA8E2 MCUs

• Core: 480 MHz Arm Cortex-M85 with Helium and TrustZone

• Memory: 1 MB Flash, 672 KB SRAM (including 32 KB TCM with ECC, 512 KB user SRAM with parity protection + 128 KB additional user SRAM), 1 KB standby SRAM, 32 KB I/D caches

• Peripherals: 16-bit external memory I/F, XSPI (Octal SPI), SPI, I2C, USBFS, CAN-FD, SSI, 12-bit ADC, 12-bit DAC, HSCOMP, temperature sensor, GLCDC, 2D DRW, GPT, LP-GPT, WDT, RTC

• Packages: BGA 224

Winning Combinations

Renesas has combined the new RA8E1 and RA8E2 MCUs with numerous compatible devices from its portfolio to offer a wide array of Winning Combinations, including Entry Level Voice & Vision AI System and Human Machine Interface (HMI) for Appliances. These designs are technically vetted system architectures from mutually compatible devices that work together seamlessly to bring an optimized, low-risk design for faster time to market. Renesas offers more than 400 Winning Combinations with a wide range of products from the Renesas portfolio to enable customers to speed up the design process and bring their products to market more quickly. They can be found at renesas.com/win.

1) EEMBC’s CoreMark® benchmark measures performance of MCUs and CPUs used in embedded systems.

■ Renesas Electronics Corporation | www.renesas.com


Surface Mount Technology best practice guide by A.R.T. helps optimise your assembly line

Advanced Rework Technology Ltd (A.R.T.), the leading independent provider of IPC-certified & bespoke training services for the electronics assembly industry, has produced a best practice guide to Surface Mount Technology (SMT) that provides insights and practical advice to help optimise the assembly line.

A.R.T. Managing Director & Master IPC trainer, Debbie McDade commented: “Mastering SMT requires a solid understanding of the techniques and processes involved, from the initial application of solder paste to the final inspection of solder joints. Each stage of the assembly process plays a crucial role in ensuring high-quality outcomes, and even minor errors can lead to defects that compromise the functionality of the final product. Whether you’re new to SMT or looking to refine your existing processes, this guide will provide the insights and practical advice you need to optimise your assembly line and stay ahead in the fast-paced world of electronics manufacturing.”

The SMT guide covers solder paste application; component placement; reflow soldering; automated optical inspection (AOI) and X-ray inspection; and maintenance. It also suggests applying Lean Manufacturing principles, which focus on reducing waste, minimising setup times, and improving overall workflow, to enhance the efficiency and quality of an SMT assembly line.

Adds McDade: “Skilled operators are very valuable, so it is vital to keep your personnel up to date with the latest thinking and practices. A.R.T. delivers training that ensures that your electronics assembly technicians perform to the best of their ability – and develop their skills using best practice.”

Access the SMT best practice guide at: https://rework.co.uk/blog/in-depth-guide-to-surface-mount-technology/

■ Advanced Rework Technology Ltd (A.R.T.)

https://rework.co.uk

New eBook from Mouser and Analog Devices Explores Power Efficiency and Robustness in Electronics Design

Mouser Electronics, Inc., the New Product Introduction (NPI) leader™ empowering innovation, today announces a new eBook in collaboration with Analog Devices, Inc. (ADI), highlighting essential strategies for optimising power systems. In Powering the Future: Advanced Power Solutions for Efficiency and Robustness, subject matter experts from ADI and Mouser offer in-depth analyses of the most important components, architectures and applications in power systems.

In every industry, innovative designers and manufacturers are creating the next technologies that will make our lives smarter, faster, and more efficient. Building in reliable and efficient power systems allows designers to reduce solution size and cost, minimise energy use, and accelerate time to market.

Chapters in the eBook discuss effective electromagnetic interference (EMI) management, reducing equivalent series resistance (ESR) and equivalent series inductance (ESL) in switching power supplies and managing power supply noise with voltage supervisors. Other topics include buck-boost circuits, enhancing system robustness with ideal diodes and eFuses, and leveraging gallium nitride (GaN) technology for improved efficiency.


ADI has a long history in the field of power management and offers customers a broad portfolio of solutions, many of which are highlighted in the eBook. The LT3046 linear regulator features ADI’s ultra-low noise and ultra-high power supply rejection ratio (PSRR) architecture for powering noise-sensitive applications. The MAX16162 is an ultra-low current, single-channel supervisory IC designed to monitor the power supply voltage, enabling the target microcontroller or microprocessor to leave the reset state and begin operating.

The LTM8080 is a super low-noise, dual-output DC/DC μModule regulator with patented silicon, layout and packaging innovations, specifically designed to power digital loads while reducing switching regulator noise for data converters, RF transmitters, FPGAs, op-amps, transceivers, medical scanners and more. The LT8418 half-bridge GaN driver features integrated top and bottom driver stages, driver logic control, and protections. The driver can be configured into synchronous half-bridge, full-bridge topologies or buck, boost, and buck-boost topologies. The LTC7890/1 synchronous step-down controllers are high-performance DC/DC devices that drive N-channel synchronous GaN field-effect transistor (FET) power stages from input voltages up to 100V. These controllers offer a solution to challenges traditionally faced when using GaN FETs. The devices simplify the application design with no protection diodes or additional external components needed compared to silicon metal-oxide-semiconductor field-effect transistor (MOSFET) solutions.

• To learn more about ADI, visit https://eu.mouser.com/manufacturer/analog-devices/.

• To read the new eBook, visit https://resources.mouser.com/manufacturer-ebooks/adipowering-the-future-advanced-power-solutions-for-efficiency-and-robustness/.

• To browse Mouser’s extensive eBook library, visit https://resources.mouser.com/manufacturer-ebooks/.

■ Mouser Electronics | www.mouser.com

IAR and SiliconAuto partner to advance the future of cars

IAR, a leader in software solutions and services for embedded development, is pleased to announce its partnership with SiliconAuto. As a Functional Safety (FuSa) solutions partner, IAR will support SiliconAuto’s automotive chip development through IAR Embedded Workbench for Arm, complemented by the C-STAT and C-RUN code analysis tools. This collaboration delivers high integration, accelerating time to market and enhancing safety features in automotive chips, advancing future automotive technology.

SiliconAuto B.V., co-funded by Hon Hai Technology Group and leading automaker Stellantis, focuses on designing chip technologies for a variety of automotive systems. Its three core product lines – automotive MCU, SerDes, and SoC – drive advancements in various electronic control units (ECUs) such as body control modules (BCM), camera and display data transmission, and driver assistance systems.

In the automotive industry, achieving Functional Safety (FuSa) certification requires rigorous verification and testing. Selecting the right tools is essential to ensure code quality, accelerate development, and improve efficiency. SiliconAuto, prioritizing Functional Safety as a core requirement, chose IAR as its FuSa solutions partner to create a high-performance, reliable development process.

SiliconAuto selected IAR Embedded Workbench for Arm to develop driver applications and verify the normal operation of ICs. This partnership enables SiliconAuto’s MCU to deliver robust safety functions in the BCM controller. IAR Embedded Workbench for Arm offers a complete toolchain, including an optimized compiler and advanced debugging capabilities. The integrated code analysis tools actively identify code issues, improve code quality, and reduce potential security vulnerabilities, ensuring that security gaps are identified and addressed. This integrated solution helps SiliconAuto streamline development, saving time and effort while meeting customer needs for high-quality automotive products. IAR Embedded Workbench for Arm is not only highly integrated but also enforces code quality, boosting development and verification efficiency. It helps customers bring products to market faster, meeting future demands in the automotive market.

■ IAR | www.iar.com

Nexperia and KOSTAL form a strategic partnership based on advancing automotive-grade wide bandgap devices

Nexperia announced that it has entered into a strategic partnership with KOSTAL, a leading automotive supplier, which will enable it to produce wide bandgap (WBG) devices that more closely match the exacting requirements of automotive applications. Under the terms of this partnership, Nexperia will supply, develop, and manufacture WBG power electronics devices which will be designed-in and validated by Kostal. The collaboration will initially focus on the development of SiC MOSFETs in topside cooled (TSC) QDPAK packaging for onboard chargers (OBC) in electric vehicles (EV).

KOSTAL Automobil Elektrik, with over a century of experience, is a key player in the global automotive industry. Nearly one in every two cars worldwide is equipped with KOSTAL’s products, including more than 4.5 million onboard chargers, contributing to advancements in electromobility. Ranked among the top 100 automotive suppliers globally, KOSTAL is recognized for its innovative, reliable, and cost-optimized solutions. Its longstanding partnerships with customers and employees reflect the company’s commitment to quality and collaboration.

“Nexperia has been a trusted supplier of silicon components to KOSTAL for many years and is delighted to enter into this strategic partnership that will now extend to wide bandgap devices”, according to Katrin Feurle, Senior Director and Head of SiC Discretes & Modules. “KOSTAL will assist in validating our devices in its charging applications, thereby providing us with the type of invaluable ‘real-world’ data that will allow us to further enhance their performance”.

Nexperia is among the few companies that is offering a comprehensive range of WBG semiconductor technologies, including SiC diodes and MOSFETs, as well as GaN e-mode and d-mode devices, alongside its established silicon portfolio. With a strong commitment to expanding its commercial WBG offerings, Nexperia is focused on delivering the most suitable products to meet the needs of an increasing range of applications. The company’s focus is to support the responsible use of electrical energy through innovative solutions. Nexperia continues to develop technologies that address the growing demand for efficiency and sustainability in power management.

■ Nexperia | www.nexperia.com

Toshiba and MIKROE develop a safety-focused automotive gate driver board for brushless motors

Toshiba Electronics Europe GmbH has partnered with MIKROE to integrate its robust TB9083FTG gate-driver IC into the Brushless 30 Click, a compact add-on board for precise and reliable control of brushless DC (BLDC) motors in automotive applications.

Toshiba’s TB9083FTG has been designed in accordance with ISO 26262 (2nd edition) and integrates 9 gate drivers, including 6 for driving MOSFETs to control BLDC motors in the 1000 W range or below. Additionally, it includes 3 drivers for driving external MOSFETs, which can be used either for system control or safety relays, thus enabling the TB9083FTG to support ASIL-D, the highest level of functional safety for automotive applications. The TB9083FTG also incorporates a built-in charge pump, adjustable current sense amplifiers for each motor phase, an oscillator circuit, and an SPI communication interface for configuration via a host microcontroller unit (MCU).

Enables robust control of brushless DC motors in demanding automotive applications

The Brushless 30 Click board is designed to operate from a wide range of external power supplies ranging from 4.5V to 28V and can deliver a peak output current of up to 10A.

It also features a comprehensive suite of error detection capabilities including undervoltage, overvoltage, over-temperature and an external MOSFET VDS detector making it ideal for demanding automotive applications such as electric power steering (EPS), powered brakes, and automotive pumps where precise motor control is essential.

The Brushless 30 Click board measures only 57.15 mm x 25.4 mm and is fully compatible with the mikroBUS™ socket. It can be used on any host system supporting the mikroBUS™ standard and comes with the mikroSDK open-source libraries to provide ultimate flexibility for system evaluation and customization. An innovative ClickID feature enables a host system to automatically detect and identify the Brushless 30 Click board once it has been connected.

Additional information about the TB9083FTG gate driver IC can be found on Toshiba’s website: https://toshiba.semiconstorage.com/eu/semiconductor/product/automotivedevices/detail.TB9083FTG.html

■ Toshiba Electronics Europe https://toshiba.semicon-storage.com


Nextgen IBC series delivers high power density and efficiency for AI data centers

Flex Power Modules introduces the BMR316, a high-performance non-isolated, unregulated DC/DC intermediate bus converter (IBC) specifically designed for AI and ML data center applications that demand intensive computational power. The compact BMR316 is ideal for other high-power IBC applications where board space is limited.

The BMR316 features a fixed 4:1 conversion ratio, efficiently stepping down 48V to 12V. It provides a continuous power output of 1 kW and can deliver peak power up to 3 kW. With a power density exceeding 900 W/cm3 (15 kW/in3) during peak load, the BMR316 is housed in an ultra-compact package measuring just 23.4 × 17.8 × 7.65 mm. It operates within an input range of 38 – 60 V (68 V peak), delivering an output of 9.5 – 15 V.
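As a back-of-envelope check on the figures above (values taken from the text, computed here purely for illustration):

```python
# Sanity-check the quoted BMR316 figures; all inputs are from the article text.
length_mm, width_mm, height_mm = 23.4, 17.8, 7.65   # package dimensions
peak_power_w = 3000.0                                # 3 kW peak output

volume_cm3 = (length_mm * width_mm * height_mm) / 1000.0   # mm^3 -> cm^3
density_w_per_cm3 = peak_power_w / volume_cm3              # peak power density

vin, ratio = 48.0, 4        # fixed 4:1 unregulated conversion
vout = vin / ratio          # nominal 12 V intermediate bus

print(f"volume = {volume_cm3:.2f} cm^3, "
      f"density = {density_w_per_cm3:.0f} W/cm^3, Vout = {vout} V")
```

The result, roughly 940 W/cm³ at peak load, is consistent with the "exceeding 900 W/cm³" claim.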

The BMR316 is designed to work seamlessly with a variety of voltage regulator modules (VRMs) and point-of-load (PoL) converters, further converting the intermediate bus to the core voltages needed downstream in AI and datacom centers. These environments require high peak power handling, efficient use of board space, and optimized energy efficiency. With a peak efficiency of 97.7% at half load, the BMR316 enhances cooling performance by reducing thermal demands, and its low-profile design makes it ideal for cold wall mounting or direct-to-chip liquid cooling solutions.

Additional features include fast load-transient response, high current-monitoring accuracy, and a calculated MTBF of 7.43 million hours. The BMR316 meets the latest IEC/EN/UL 62368-1 safety standards with built-in protection and warning systems. The product is also supported by Flex Power Designer, a popular software tool that offers configuration, performance simulation, and monitoring capabilities, all with full control via a PMBus® interface. The software tool is available for free download at www.flexpowerdesigner.com.

Samples are available now, and the BMR316 is expected to be released to mass production in January 2025.

■ Flex Power Modules | https://flexpowermodules.com

The Dream of Edge AI

At this point, we should have had flying cars. And robot butlers. And with some bad luck, sentient robots that decide to revolt against us before we can cause the apocalypse. While we don’t have that, it is clear that artificial intelligence (AI) technology has made its way into our world. Every time you ask Alexa to do something, machine learning technology is figuring out what you said and trying to make the best determination on what you wanted it to do. Every time Netflix or Amazon recommends that next movie or next purchase to you, it is based on sophisticated machine learning algorithms that give you compelling recommendations that are far more enticing than sales promotions of the past. And while we might not all have self-driving cars, we’re all keenly aware of the developments in that space and the potential that autonomous navigation can offer.

AI technology carries a great promise – the idea that machines can make decisions based on the world around them, processing information like a human might (or in a manner superior to what a human would do). But if you think about the examples above, the AI promise here is only being fulfilled by big machines – things that don’t have power, size, or cost constraints, or to put it another way – they can get hot, have line power, are big, and are expensive. Alexa and Netflix rely on big, power-hungry servers in the cloud to figure out your intent.

While self-driving cars are likely to rely on batteries, their energy capacity is enormous, considering those batteries must turn the wheels and steer, which are big energy expenses compared to even the most expensive AI decisions.

While the promise of AI is great, little machines are being left behind. Devices that are powered by smaller batteries or have cost and size constraints are unable to participate in the idea that machines can see and hear. Today, these little machines can only make use of simple AI technology: perhaps listening for a single keyword or analyzing low dimensional signals like photoplethysmography (PPG) from a heart rate.

What would happen if little machines could see and hear? But is there value in small machines being able to see and hear? It is hard to think about things like a doorbell camera taking advantage of technologies like autonomous driving or natural language processing, but there is an opportunity for less complex, less processing-intensive AI computations such as vocabulary recognition, voice recognition, and image analysis.

• Doorbell cameras and consumer security cameras often get triggered by uninteresting events, such as the motion of plants caused by wind, drastic light changes caused by clouds, or even events such as dogs or cats running in front. This can result in false triggers, causing the homeowner to begin to ignore the events. In addition, if the homeowner is traveling in a different part of the world, they are probably sleeping while their camera is alarming to changes in lighting caused by sunrise, clouds, and sunset. A smarter camera could get triggered by more specific events, such as a human being in the frame of reference.

• Door locks or other access points can use facial identification or even speech recognition to grant access to authorized personnel, forgoing the need for keys or badges in some cases.

• Lots of cameras want to trigger on certain events: for instance, trail cameras might want to trigger on the presence of a deer in the frame, security cameras might want to trigger on a person in the frame or a noise like a door opening or footsteps, and a personal camera might want to trigger with a spoken command.

• Large vocabulary commands can be useful in many applications: while there are plenty of Hey Alexa solutions, if you start to think about a vocabulary of 20 or more words, you can find use in industrial equipment, home automation, cooking appliances, and plenty of other devices to simplify the human interaction.

These examples only scratch the surface: the idea of allowing small machines to see, hear, and solve problems that in the past would require human intervention is a powerful one and we continue to find creative new use cases every day.

What Are the Challenges to Enabling Little Machines to See and Hear?

So, if AI could be so valuable to little machines, why don’t we have it yet?

The answer is computational horsepower. AI inferences are the result of the computation of a neural network model. Think of a neural network model as a rough approximation of how your brain would process a picture or a sound, breaking it into very small pieces and then recognizing the pattern when those small pieces are put together. The workhorse model of modern vision problems is the convolutional neural network (CNN).
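To get a feel for the scale involved, the sketch below counts the multiply-accumulate (MAC) operations in a few convolutional layers of a small, hypothetical CNN. The layer sizes are illustrative assumptions, not any specific model:

```python
def conv2d_macs(h, w, c_in, c_out, k):
    """Multiply-accumulate count for one stride-1, 'same'-padded conv layer:
    each of h*w*c_out output values needs k*k*c_in multiplies."""
    return h * w * c_out * (k * k * c_in)

# A modest 96x96 grayscale vision front end (layer shapes are made up):
layers = [
    (96, 96, 1, 16, 3),    # 96x96x1 input -> 16 filters, 3x3 kernels
    (48, 48, 16, 32, 3),   # after 2x2 pooling
    (24, 24, 32, 64, 3),   # after another 2x2 pooling
]
total = sum(conv2d_macs(*layer) for layer in layers)
print(f"{total:,} MACs per inference")
```

Even this tiny three-layer network needs tens of millions of MACs per inference; state-of-the-art models reach into the billions.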

These kinds of models are excellent at image analysis and are very useful in audio analysis as well. The challenge is that these models take millions or billions of mathematical computations. Traditionally, these applications have a difficult choice to make for implementation:

• Use an inexpensive, low-powered microcontroller solution. While the average power consumption may be low, the CNN can take seconds to compute, meaning the AI inference is not real time and consumes considerable battery power.

• Buy an expensive, high-powered processor that can complete those mathematical operations in the required latency. These processors are typically large and require lots of external components including heat sinks or similar cooling components. However, they execute AI inferences very quickly.

• Don’t implement. The low-power microcontroller solution will be too slow to be useful, and the high-powered processor approach will break cost, size, and power budgets.
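The trade-off above can be made concrete with a rough energy-per-inference estimate. The power and latency numbers below are illustrative assumptions, not measured figures for any real part; the point is that a slow "low-power" MCU can still cost more energy per inference than a fast, hungry processor:

```python
# Energy per inference = power draw during compute x compute time.
def energy_mj(power_mw, latency_s):
    return power_mw * latency_s   # mW * s = mJ

# Illustrative, invented numbers:
mcu      = energy_mj(power_mw=20.0,   latency_s=2.0)    # slow low-power MCU
big_proc = energy_mj(power_mw=2000.0, latency_s=0.01)   # fast high-power processor

print(f"MCU: {mcu:.0f} mJ/inference, processor: {big_proc:.0f} mJ/inference")
```

Under these assumptions the MCU spends 40 mJ per inference versus 20 mJ for the big processor, despite drawing 100x less power, because it computes 200x longer.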

What is needed is an embedded AI solution built from the ground up to minimize the energy consumption of a CNN computation. AI inferences need to execute at orders of magnitude less energy than conventional microcontroller or processor solutions, and without the assistance of external components such as memories, which add energy, size, and cost.

If an AI inferencing solution could practically eliminate the energy penalty of machine vision, then even the smallest devices could see and recognize things happening in the world around them. Lucky for us, we are at the beginning of this – a revolution of the little machines.

Products are now available to nearly eliminate the energy cost of AI inferences and enable battery-powered machine vision. Find out more about the MAX78000, a microcontroller built to execute AI inferences while spending only microjoules of energy.

■ Analog Devices | www.analog.com

© ADI

About the author

Kris Ardis is a managing director in the Digital Business Unit at Analog Devices. He began his career with ADI in 1997 as a software engineer and holds two U.S. patents. In his current role, Ardis is responsible for processors. He has a B.S. degree in computer science from the University of Texas at Austin.

Engage with like-minded members and ADI technology experts in our online community, EngineerZone®. Expand your network, ask your tough design questions, share your expertise, browse our rich knowledge base, or read about new technologies and the engineers behind them in one of our blogs. Visit https://ez.analog.com

Edge AI

Revolutionizing real-time data processing and automation

DigiKey

From smart home assistants (think Alexa, Google and Siri) to advanced driver assistance systems (ADAS) that notify a driver when they’re departing from their lane of traffic, the world relies on edge AI to provide real-time processing for these increasingly common and important devices. Edge AI runs artificial intelligence directly on a device, computing near the data source, rather than in an off-site data center with cloud computing.

Edge AI offers reduced latency, faster processing, a reduced need for constant internet connectivity, and can lower privacy concerns. This technology represents a significant shift in how data is processed, and as demand for real-time intelligence grows, edge AI is well positioned to continue its strong impact across many industries.

The greatest value of edge AI is the speed it can provide for critical applications. Unlike cloud/data center AI, edge AI is not sending data over network links and hoping for a reasonable response time. Rather, edge AI does its computation locally (often on a real-time operating system), which excels at providing timely responses. For situations like conducting machine vision on a factory line, where a product must be diverted within a second, edge AI is well equipped.

Likewise, you wouldn’t want signals from your car to be dependent on the response times of the network or servers in the cloud.

Edge AI for real-time processing

Many real-time activities are driving the need for edge AI. Applications such as smart home assistants, ADAS, patient monitoring and predictive maintenance are notable uses of the technology.

From quick responses to household questions, notifications of a lane departure in a vehicle or a glucose reading sent to a smartphone, edge AI offers swift responses while minimizing privacy concerns.

Vehicle AI

We’ve seen edge AI do well in the supply chain, particularly in warehousing and factories, for quite some time. There has also been substantial growth for the technology within the transportation industry over the last decade, such as delivery drones navigating through conditions like clouds. Edge AI is also doing great things for engineers, especially in the med-tech sector, a critical area of advancement.

For example, engineers developing pacemakers and other cardiac devices can give physicians the tools to look for abnormal heart rhythms, while also proactively programming devices to offer guidance on when to seek further medical intervention. Med-tech will continue to grow its use of edge AI and build out further capabilities.

Generating edge AI models

As more and more systems in everyday life now have some level of machine learning (ML) interaction, understanding this world becomes vital for engineers and developers to plan the future of user interactions.

The strongest opportunity with edge AI is ML, which matches patterns based on a statistical algorithm.

The patterns could be sensing a human is present, that someone just spoke a “wake word” (e.g., Alexa or “Hey Siri”) for a smart home assistant, or a motor starting to wobble. For the smart home assistant, wake words are models that run at the edge and do not need to send your voice to the cloud. It wakes the device and lets it know it’s time to dispatch further commands.
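As a rough sketch of the idea, a wake-word detector can be thought of as sliding a stored template across a stream of audio features and firing when similarity crosses a threshold. The code below is a toy illustration with made-up feature values, not a production keyword spotter:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    denom = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / denom if denom else 0.0

def detect_wake_word(frames, template, threshold=0.95):
    """Slide the template over the feature stream; report the first match."""
    n = len(template)
    for i in range(len(frames) - n + 1):
        if cosine(frames[i:i + n], template) >= threshold:
            return i   # wake word detected at this frame offset
    return -1          # no detection

template = [0.1, 0.9, 0.4, 0.7]                 # invented "wake word" template
stream = [0.0, 0.0, 0.1, 0.9, 0.4, 0.7, 0.0]    # invented incoming features
print(detect_wake_word(stream, template))        # -> 2
```

Everything here runs locally on scalar features, which is why the device never has to send raw audio to the cloud just to decide whether it was addressed.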

There are several pathways to generate an ML model: either with a machine learning framework (like TensorFlow or PyTorch) or using a SaaS platform (like Edge Impulse). Most of the “work” in building a good ML model goes into creating a representative data set and labeling it well.

Currently, the most popular ML model for edge AI is a supervised model, which is a type of training based on labeled and tagged sample data, where the output is a known value that can be checked for correctness, like having a tutor check and correct work along the way. This type of training is typically used in applications such as classification work or data regression. Supervised training can be useful and highly accurate, but it depends greatly on the tagged dataset and may be unable to handle new inputs.
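A minimal sketch of supervised training and prediction, using a toy nearest-centroid classifier in plain Python; the feature values and labels are invented for illustration:

```python
import math
from collections import defaultdict

def train(samples):
    """samples: list of (feature_vector, label) pairs -> per-class centroids.
    'Training' here is just averaging the labeled examples of each class."""
    grouped = defaultdict(list)
    for x, label in samples:
        grouped[label].append(x)
    return {label: [sum(col) / len(xs) for col in zip(*xs)]
            for label, xs in grouped.items()}

def predict(centroids, x):
    # Nearest-centroid rule: pick the label whose centroid is closest.
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

# Invented toy data: 2-D features with known (tagged) labels.
labeled = [([0.1, 0.2], "idle"), ([0.2, 0.1], "idle"),
           ([0.9, 0.8], "wobble"), ([0.8, 0.9], "wobble")]
model = train(labeled)
print(predict(model, [0.85, 0.85]))  # -> wobble
```

Because the labels are known, each prediction on held-out tagged data can be checked for correctness, like having a tutor mark the work; but, as the text notes, the model is only as good as its dataset and can misfire on inputs unlike anything it was trained on.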

Hardware to run edge AI workloads

At DigiKey, we are well-positioned to assist in edge AI implementations, as they generally run on microcontrollers, FPGAs and single board computers (SBCs). DigiKey partners with top suppliers to provide several generations of hardware that run ML models at the edge. We’ve seen some great new hardware released this year, including NXP’s MCX N series, and we’ll soon be stocking STMicroelectronics’ STM32MP25 series.

In past years, dev boards from the maker community have been popular for running edge AI, including SparkFun’s Edge Development Board Apollo3 Blue, Adafruit’s EdgeBadge, Arduino’s Nano 33 BLE Sense Rev 2 and the Raspberry Pi 4 or 5.

Neural processing units (NPUs) are gaining ground in edge AI. NPUs are specialized ICs designed to accelerate the processing of ML and AI applications based on neural networks – structures modeled on the human brain, with many interconnected layers of nodes called neurons that process and pass along information. There’s a new generation of NPUs being created with dedicated math processing, including NXP’s MCX N series and ADI’s MAX78000.


We’re also seeing AI accelerators for edge devices, a space that is yet to be defined, with early companies of note including Google Coral and Hailo.

The importance of ML sensors

High speed cameras with ML models have functioned in supply chains for quite some time. They have been used for things like deciding where to send products within a warehouse or finding defective products on a production line. We’re seeing that suppliers are creating low-cost AI vision modules that can run ML models to recognize objects or people.

Although running an ML model requires an embedded system, more products will continue to be released as AI-enabled electronic components. These include AI-enabled sensors, also known as ML sensors. While adding an ML model to most sensors will not make them more effective in their application, there are a few types of sensors that ML training can enable to perform in significantly more capable ways:

• Camera sensors where ML models can be developed to track objects and people in the frame

• IMU, accelerometer and motion sensors to detect activity profiles

Some AI sensors come preloaded with an ML model that is ready to run. For example, the SparkFun eval board for sensing people is preprogrammed to detect faces and return information over the Qwiic I2C interface. Some AI sensors, like the Nicla Vision from Arduino or the OpenMV Cam H7 from Seeed Technology, are more open-ended and need to be loaded with a trained ML model for whatever they are meant to detect (defects, objects, etc.).

By using neural networks as the computational algorithm, it is possible to detect and track objects and people as they move into the camera sensor’s field of view.

The future of edge AI

As many industries evolve and become more reliant on technology for data processing, edge AI will continue to see more widespread adoption. By enabling faster, more secure data processing at the device level, innovation in edge AI will be profound. A few areas we see expanding in the near future include:

1. Dedicated processor logic for computing neural network arithmetic.

2. Advancement in lower power alternatives compared to cloud computing’s significant energy consumption.

3. More integrated/module options like AI vision parts that will include built-in sensors along with embedded hardware.

As ML training methods, hardware and software evolve, edge AI is well-positioned to grow exponentially and support many industries. At DigiKey, we’re committed to staying ahead of edge AI trends, and we look forward to supporting innovative engineers, designers, builders and procurement professionals around the world with a wealth of solutions, frictionless interactions, tools and educational resources to make their jobs more efficient. For more edge AI information, products and resources, visit DigiKey.com/edge-ai

Shawn Luke is a technical marketing engineer at DigiKey. DigiKey is recognized as the global leader and continuous innovator in the cutting-edge commerce distribution of electronic components and automation products worldwide, providing more than 15.6 million components from over 3,000 quality name-brand manufacturers.

■ DigiKey www.digikey.com


Powering the future with the rise of AI in Data Centers and Infrastructure

In-depth Interview with Chris Allexandre, Senior Vice President and General Manager of Power, Renesas Electronics

Electronica Azi: Chris, thank you for joining us today. To kick off, could you provide a comprehensive overview of the Power Products Group at Renesas, including its history and current strategic focus?

Chris Allexandre: Absolutely, and thank you for having me. The Power Products Group at Renesas has evolved significantly through various acquisitions of companies renowned for their power management expertise. These acquisitions have brought us exceptional engineering talent, advanced technologies, and valuable intellectual property. We’ve integrated these assets to create a unified portfolio that supports a wide range of power management needs. This integration has allowed us to scale our operations, enhance innovation, and position ourselves strongly for future growth. Our current strategic focus is on expanding our portfolio to serve multiple markets. This includes power management ICs (PMICs), custom charging solutions, computing power components, battery management systems, discrete and wide bandgap semiconductors, and a growing catalog of products like controllers, drivers, and eFuses, as well as automotive-specific power products. We’re leveraging these capabilities to address megatrends such as data growth, electrification, and energy efficiency.

You've mentioned a commitment to expanding your portfolio. Could you delve deeper into the specific areas of growth you are targeting?

Certainly. Our growth strategy revolves around three primary areas: infrastructure & AI computing power, automotive (EV and non-EV), and industrial applications.

1. Infrastructure & AI: We see tremendous potential in data centers and computing power in both infrastructure and client. The demand for higher performance and efficiency is driving growth in this sector. AI is a major factor here, as it requires significantly more power than traditional computing. We’re investing in technologies that can meet these high demands and provide digital power solutions for advanced computing applications.

2. Automotive: The automotive sector, particularly electric vehicles (EVs), presents substantial growth opportunities. We are expanding our solutions to include inverters, onboard chargers (OBCs), DC-DC converters, and battery management systems. We’re also focusing on discrete and wide bandgap technologies to support the evolving needs of the automotive industry. We are also leveraging our strong footprint in automotive with MCUs and SoCs to penetrate with dedicated power solutions for automotive like eFuses or intelligent power devices (IPDs), as well as PMIC attach.

3. Industrial Applications: This includes a broad range of sectors such as renewable energy, home automation, and industrial automation. We’re working on expanding our product offerings to meet the diverse needs of these applications. Again, the driver here is attach: attach to MCUs and MPUs, but also to our own products like drivers for IGBTs or GaN, or even controllers.

You mentioned the role of wide bandgap technologies. How critical are these for your future plans, and what steps are you taking in this area?

Wide bandgap technologies like silicon carbide (SiC) and gallium nitride (GaN) are indeed crucial for our strategy. These materials are essential for efficient power delivery, especially in high-performance and high-voltage applications.

We recently acquired Transphorm, a GaN company, which is a strategic move to enhance our wide bandgap portfolio. This acquisition allows us to offer more advanced solutions. We’re committed to expanding our GaN and SiC offerings, with a focus on integrating these technologies into various applications, including infrastructure and automotive.

We are also investing in the development of next-generation SiC products. Our 6-inch Takasaki fab is being retrofitted for SiC production, and we have secured a $2 billion contract to ensure a stable supply of epi wafers. We will continue to work on reducing RDS(on) over temperature to build a strong foundation for long-term success in this area.

Could you explain the concept of “Winning Combinations” and its significance in your strategy?

The "Winning Combinations" concept is all about integrating our various products to create comprehensive solutions that provide more value to our customers. By combining our power management ICs with other components such as controllers and drivers, we can offer tailored solutions that simplify design and implementation for our customers.

This approach not only enhances our value proposition but also helps our customers achieve better performance and efficiency in their applications. For example, we provide more than 15 Winning Combinations, including eight real board implementations, which combine our GaN FETs with our controllers and drivers. This integration helps us deliver a more complete solution to our customers, making their design processes easier and more efficient. This outlines our overall power strategy, which is about leveraging our scale and footprint. The aim is to ensure we pull through and attach more power products to Renesas products, so we increase content and win power market share.

How is Renesas addressing the growing demands in the AI sector, and what are your expectations for this market?

AI is significantly changing the landscape of power requirements. AI systems for the cloud AI computing area, particularly AI GPUs, demand much more power compared to traditional CPUs. While high-end CPUs may require around 500 watts per CPU, AI GPUs are already consuming 1.2 kilowatts and could reach up to 3 kilowatts in the future. We will provide advanced digital power solutions for these high-end AI chips. We leverage our strong existing presence in the server market and are accelerating our engagement across the AI ecosystem.

On the other hand, we’re leveraging our advanced PMICs and power management solutions for the AI edge computing area. We are actively engaged with multiple AI PC platforms and expect substantial growth in AI-related power solutions over the next few years. For instance, we anticipate that AI PCs with local accelerated inferencing will make up 70% of total shipments by 2030. This shift will drive the need for new power architectures, and Renesas is well-positioned to meet these needs with our innovative solutions.


Could you provide more details on Renesas' approach to the automotive market and how you plan to capitalize on the growth of electric vehicles?

In the automotive market, we have a comprehensive portfolio that addresses both traditional and electric vehicle applications. Our strategy involves expanding our solutions for EVs, including inverters, onboard chargers, DC-DC converters, and battery management systems. We are also enhancing our discrete and wide bandgap offerings to support the growing demands of the automotive sector.

We are leveraging our strong presence in MCUs and SoCs by integrating them with our power solutions. This includes expanding our product offerings and supporting innovations in the EV space. Although the current growth rate for EVs is slightly below expectations, we anticipate significant growth over the next 5 to 10 years. We are prepared to support this growth with scalable solutions and by continuing to expand our product portfolio.

How is Renesas adapting its business model to address challenges and opportunities in the power products market?

Our approach to adapting involves a few key strategies. First, we focus on continuous innovation and expanding our product portfolio to meet the evolving needs of the market. This includes investing in new technologies and developing comprehensive solutions that address customer requirements. Second, we are committed to diversification. We are expanding our product offerings beyond automotive into industrial applications and exploring new geographic markets. For instance, we are increasing our presence in North America, Europe, India, and South Korea. Lastly, we are adopting a solution-oriented approach. By combining our products to create tailored solutions, we make it easier for customers to implement our technologies and achieve better results. This approach not only helps us capture new opportunities but also ensures that we provide maximum value to our customers.

What are your key goals for the Power Products Group over the next few years, and how do you plan to achieve them?

Our key goals include expanding our portfolio, driving innovation, and achieving substantial growth in our focus areas – particularly infrastructure, AI, and automotive. We aim to more than triple our power revenue by 2030 through strategic diversification and leveraging our strengths in both traditional and emerging markets.

To achieve these goals, we are focusing on several key areas:

First, expanding our product portfolio: We will continue to develop and integrate new technologies, such as wide bandgap materials, and expand our offerings to meet the growing demands of various markets.

Second, driving innovation: We are committed to staying at the forefront of technological advancements and providing innovative solutions that address emerging needs.

Third, capturing growth opportunities: We will focus on high-growth areas like AI and EVs, and leverage our existing strengths to capture a larger share of these markets.

Fourth, enhancing customer solutions: By offering integrated solutions and winning combinations, we aim to provide more value to our customers and simplify their design processes.

Overall, our strategy is to balance growth with innovation and ensure that we are well-positioned to meet the future demands of the power products market.

Thank you, Chris, for providing such a detailed overview of Renesas' Power Products Group and its strategic direction.

Thank you for the opportunity to share our vision and strategy. It’s an exciting time for the Power Products Group, and we are looking forward to driving growth and delivering value to our customers in the years to come.

■ Renesas Electronics www.renesas.com

Renesas @ electronica 2024: Hall B4 / Booth 179

IoMT: the age of digital healthcare

As well as its use in areas such as logistics, industry and consumer products, there is a growing interest in using IoT in healthcare and medical applications. This is known as the Internet of Medical Things, or IoMT. The IoMT market is expected to grow significantly in the future, rising from USD 113 billion in 2021 to USD 341.17 billion by 2028. [1]

The IoMT involves collecting and analyzing data from Internet-connected things, such as devices, equipment and facilities, and using it to glean new knowledge about patients’ conditions. Possible IoMT applications include direct management of patients’ conditions through measuring factors such as activity level, blood pressure and sleep. As well as improving their lives directly, using the IoMT can help doctors and hospitals work more efficiently and achieve better outcomes for patients.

The IoMT is expected to help extend healthy life expectancy, reduce medical labor shortages, and improve the quality of medical and nursing care.

IoMT Device and System Configuration

An IoMT service is built from hardware, applications, and networks, as shown in Figure 1. The main components may include:

• Sensor Layer:

– IoMT devices such as wearable sensors with network connectivity

• Communication Layer:

– Gateways such as mobile devices and wireless LAN (IEEE 802.11x/Wi-Fi®) routers

– Cloud that aggregates the data

– Networks (public network or Internet) connecting the gateway and the cloud

• Application Layer:

– Applications (implemented in cloud or on mobiles) providing services such as visualization of sensing data

Mobile phones are a good basis for IoMT services, principally because so many people own them. Mobiles are also easy to connect to IoMT devices such as smartwatches using Wi-Fi and Bluetooth® and can also connect to the cloud via 4G (LTE) or 5G public networks.

With their ability to run applications, mobiles can support various functions.

Capturing Data

From the IoMT device, sensing data is sent to mobiles or gateways via Wi-Fi or Bluetooth. Some IoMT devices have built-in SIM cards and can connect directly to public networks.

Acquired data such as respiration rate, body temperature, pulse rate, and blood pressure are basic information called vital signs, with blood oxygen saturation (SpO2) sometimes included among these.

One of the major advantages of biometric data sensing by healthcare devices is that it is minimally invasive, requiring no blood sampling or body implant. A basic technology is photoplethysmography (PPG), which optically detects changes in blood vessel volume. An LED on the back of the smartwatch emits light (mainly green) at blood vessels in the wrist, with reflected light received by a photodetector. The smartwatch uses signal processing to extract regular fluctuations from the various noise components to obtain the pulse rate. When green and red LEDs are used together, transcutaneous arterial blood oxygen saturation (SpO2) can be estimated from the degree of hemoglobin binding.

The respiration rate can be estimated from the pulse rate using the body's respiratory sinus arrhythmia, in which the pulse rate increases slightly during inhaling and decreases slightly during exhaling.

Blood pressure is estimated from blood flow based on pulse rate. In addition, sleep status is determined from body movements detected by the smartwatch accelerometer.

Challenges of Using IoMT Devices

There are many challenges in achieving successful IoMT applications, principally connected to services, hardware, and communications.

In terms of services, managing the measurement accuracy of acquired physical data and ensuring the security of cloud-aggregated personal data are major considerations. Other challenges are compliance with national and regional radio laws and regulations and obtaining certification.

IoMT devices can connect wirelessly to mobiles and IoT gateways, and maintaining this communication connectivity is critical if they are to collect accurate medical data. Devices such as smartwatches and smart shoes use Bluetooth for easy pairing with mobiles and other devices and for low power consumption; in the 2.4 GHz frequency band, Bluetooth is assigned as a license-free low-power radio station. Wi-Fi (IEEE 802.11x) is implemented for IoMT devices in stationary locations, such as smart scales, beds and other going-to-bed/getting-up sensors, and surveillance cameras.

With hardware, one of the key challenges for IoMT devices is downsizing. For example, a smartwatch, which is similar in size to a standard wristwatch, must contain a battery, charging circuit, microcontroller, communication function, display, and other components, as well as various sensors. Limited battery power also dictates the use of a design with low power consumption.

Other important components include an analog front-end to amplify weak electrical signals output by photodetectors and accelerometers, and a filtering process to separate noise components from the required information.

High-Frequency Noise Testing and Compliance with Radio Equipment Standards

Commercialization of IoMT devices requires compliance with various test standards for high-frequency noise. An emission test is used to verify that high-frequency electromagnetic field noise from the IoMT device does not affect other equipment, along with an immunity (disturbance) test to verify that the IoMT device is not itself affected by such noise. Electromagnetic field noise emitted from the electronic circuit can interfere with and distort the weak signals from built-in sensors and can cause Wi-Fi and Bluetooth communication errors.

A typical source of electromagnetic noise in IoMT devices is switching power supplies (DC/DC converters) that generate harmonic noise. Clocks for microcomputers and memory are also noise sources.

Figure 2

Noise countermeasures include physical separation of electronic circuits and antennas, adding EMI filters, configuration of board layouts and layers, and using physical shields.

Applicable tests for consumer IoMT devices are CISPR 32 Class B as defined by the International Special Committee on Radio Interference (CISPR) for emissions testing and CISPR 35 for immunity testing. In addition, IoMT devices with Wi-Fi and Bluetooth must comply with each country’s legislation regulating radio use.

Security and Personal Data

Cyber security measures are another challenge. For example, in March 2023, the US Food and Drug Administration (FDA) issued new guidelines[2] for medical device vendors. When developing new medical devices, the guidelines recommend or mandate designs that take cyber security into account, the creation of an SBOM (software bill of materials), and vulnerability assessments.

Prompt provision of security updates throughout the product life is also expected, while non-compliant products are expected to lose their marketing authorization.

Another issue is compliance with laws on protection of privacy information. Data collected by IoMT devices and analyzed by cloud or mobile applications that can identify individuals could be classed as personal information and require special protection.

Prospects for IoMT

Healthcare and medicine are converging in the IoMT market, where significant future growth is expected. IoMT services should help extend healthy life expectancy and improve quality of life, as well as encouraging people to work for as long as they can. With a healthier population and reduced social security costs, financial burdens on the taxpaying population could be reduced significantly.

Telemedicine using IoMT is expected to become reality soon, offering medical services to many people, including those in remote areas. Developers will need to produce advanced IoMT devices and services to meet these expectations.

■ Anritsu Corporation www.anritsu.com

About the author

Dr. Yamazaki joined Anritsu in 2008. He has over 10 years of experience as a test and measurement equipment engineer for the optical and photonics industries. He has worked in hardware and software development throughout his career and is currently engaged in digital marketing.

Dr. Yamazaki received B.E., M.E., and Dr.E. degrees from Hosei University, Tokyo, Japan. From 2005 to 2008, he was a Research Fellow of the Japan Society for the Promotion of Science.

References:

[1] “Global Digital Health Market Forecast (2021–2028)”, Global Information, Inc.

[2] Cybersecurity in Medical Devices: Refuse to Accept Policy for Cyber Devices and Related Systems Under Section 524B of the FD&C Act.

* Wi-Fi® is a registered trademark of Wi-Fi Alliance®.

* Bluetooth® trademarks and logos are the property of Bluetooth SIG, Inc. and Anritsu uses these marks under license.

Leveraging a Hardware Agnostic Approach to Ease Embedded Systems Design:

THE BASICS

This article demonstrates an approach that accelerates the prototyping phase of embedded system design. It will illustrate how to utilize a hardware agnostic driver in combination with a sensor to make component selection much easier for an entire embedded system. This article describes the components, the typical software structure of an embedded system, and the driver implementation. The subsequent article, “Leveraging a Hardware Agnostic Approach to Ease Embedded Systems Design: Driver Implementation”, will further detail the execution.

Using a hardware agnostic driver allows designers to choose the type of microcontroller or processor to manage the sensor without a dependence on hardware. The benefit of this approach is offering the option to add software layers on top of the basic one provided by a supplier, as well as simplifying sensor integration. This article will use an inertial measurement unit (IMU) sensor as an example, but the approach is scalable to other sensors and components. The driver is configured using the C programming language and tested with a generic microcontroller.

Component Selection

IMU sensors are mostly used for motion detection and to measure the intensity of movements through accelerations and rotational speeds. The ADIS16500 IMU sensor (Figure 1) was selected in this exercise as it allows for a simplified, cost-effective way to integrate accurate, multi-axis inertial sensing into industrial systems, compared with the complexity and investment associated with discrete designs.

Figure 1: The ADIS16500 evaluation board.

The main applications are:

• Navigation, stabilization, and instrumentation

• Unmanned and autonomous vehicles

• Smart agriculture and construction machinery

• Factory/industrial automation, robotics

• Virtual/augmented reality

• Internet of moving things

The ADIS16500 is a precision, miniature microelectromechanical system (MEMS) IMU that embeds a triaxial gyroscope, a triaxial accelerometer, and a temperature sensor. See Figure 2. It is factory calibrated for sensitivity, bias, alignment, linear acceleration (gyroscope bias), and point of percussion (accelerometer location). This means that the sensor measurements are accurate over a broad set of conditions. Its SPI interface allows the microcontroller to write and read the user control registers, as well as read the output data registers from where the accelerometer, gyroscope, or temperature sensor data can be acquired. For that reason, all the software and firmware required to manage the interface has been developed. Figure 2 shows the data ready (DR) pin. This pin is a digital signal that indicates when new data is available to be read from the sensor. The DR pin can be easily managed by a microcontroller, as it can be treated as an input through a general-purpose input/output (GPIO) port.

From a hardware perspective, the IMU sensor and microcontroller will be connected using the SPI interface, which is a 4-wire interface consisting of the nCS, SCLK, DIN, and DOUT pins. The DR pin should be connected to one of the microcontroller’s GPIOs. The IMU sensor also needs a voltage supply between 3V and 3.6V, so 3.3V is sufficient.

Understanding the Typical Software Structure of an Embedded System

Understanding the generic software and firmware structure of an embedded system is essential to interfacing with a sensor driver. This will help the designer to build a software module that is flexible and easy enough to integrate into any project. Moreover, the driver must be implemented in a modular way, such that the designer can add higher level functions relying on existing ones.

The software structure of an embedded system is pictured in Figure 3. In Figure 3, the hierarchy begins with the application layer, which is where the application code is written. The application layer includes a main file, application modules that rely on the sensor, and modules that rely on peripheral drivers that manage processor configuration. Additionally, within the application layer, there are all the modules related to the tasks that the microcontroller has to process. For example, this includes all the software that manages a task with interrupt or polling, a state machine, and more. That layer will differ depending on the type of project, so different projects implement different code in it. Within the application layer, all the sensors of the system are initialized and configured in accordance with their data sheets.

All the public functions offered by the sensor’s drivers are invokable; for example, reading a register that outputs data, or writing a register that changes a setting or calibration.

Figure 2

Figure 3: An SW/FW structure of embedded systems. © ADI

Below the application layer is the sensor’s driver layer, which has two types of interfaces. At this level, all functions invokable from the application layer are implemented. Moreover, the function prototypes are inserted in the driver header file (.h). So, by looking into the header file of a sensor’s driver, you can understand the driver’s interface and thus the functions invokable from higher levels. The lower level layers will be interfaced with peripheral drivers that are specific and dependent on the microcontroller that manages the sensor. The peripheral drivers include all the modules that manage the microcontroller’s peripherals such as SPI, I2C, UART, USB, CAN, SPORT, etc., or modules that manage processor internal blocks such as timers, memories, ADCs, etc. They can be called low level functions because they are strictly related to the hardware. For example, each SPI driver is different for different microcontrollers.

Let’s look at the ADIS16500 as an example. The interface is the SPI, so its driver will be wrapped with the microcontroller’s SPI driver. This will be the same for different sensors and different interfaces. For example, if another sensor has an I2C interface, then similarly the wrapping with the I2C driver of the microcontroller will take place in the sensor’s initialization procedure.

Below the sensor’s driver level are the peripheral drivers, which differ for each type of microcontroller. In Figure 3, there is a split between peripheral drivers and low level drivers. In essence, the peripheral drivers offer the functions of reading and writing through the available communication protocols. Because the low level driver will manage the physical layer of the signals, there’s a strong dependence on the hardware that the designer uses. Usually peripheral and low level driver layers are generated from the integrated development environment (IDE) of the microcontroller thorough the visual tools, depending on the evaluation board on which the microcontroller is mounted.

Driver Implementation

A hardware agnostic approach enables the use of the same driver in different applications, and hence different microcontrollers or processors. This approach is dependent on how the driver is implemented. To understand the driver implementation, first, we will look at the interface, or the sensor’s header file (adis16500.h) pictured in Figure 4.

The header file contains useful public macros. These include register addresses, the SPI max speed, the default output data rate (ODR), bitmasks, and the output sensitivity of the accelerometer, gyroscope, and temperature sensor, which are related to the number of bits (16 or 32) with which the data is represented. These macros are reported in Figure 4; only a few register addresses are shown to provide an example. The code the article refers to is available in the appendix.
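To give a feel for the structure being described, here is a sketch of what such a header might look like. All register addresses and scale factors below are placeholders invented for illustration; the real values come from the ADIS16500 data sheet and the driver sources in the appendix.

```c
/* Hedged sketch of the kind of public macros described for adis16500.h.
 * Every numeric value here is a PLACEHOLDER, not a data-sheet value;
 * the sketch only shows the structure of the header. */
#ifndef ADIS16500_SKETCH_H
#define ADIS16500_SKETCH_H

/* --- Register addresses (placeholder values) --- */
#define ADIS16500_REG_DIAG_STAT   0x02u
#define ADIS16500_REG_GLOB_CMD    0x68u
#define ADIS16500_REG_PROD_ID     0x72u

/* --- SPI timing (placeholder values) --- */
#define ADIS16500_SPI_MAX_HZ      2000000u /* max SPI clock */
#define ADIS16500_STALL_US        16.0f    /* stall time between accesses */

/* --- Default output data rate (placeholder value) --- */
#define ADIS16500_DEFAULT_ODR_HZ  2000u

/* --- Bitmasks (placeholder) --- */
#define ADIS16500_MSC_CTRL_DR_POL (1u << 0) /* data-ready polarity bit */

/* --- Output sensitivities for 16-bit data (placeholder values) --- */
#define ADIS16500_XL_SENS_16BIT   0.00125f /* g per LSB */
#define ADIS16500_GYRO_SENS_16BIT 0.1f     /* deg/s per LSB */
#define ADIS16500_TEMP_SENS       0.1f     /* degC per LSB */

#endif /* ADIS16500_SKETCH_H */
```

Keeping this kind of information in public macros lets the application layer convert raw register words into physical units without duplicating data-sheet constants.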

Figure 3 in the appendix shows all the public variables and public type declarations that can be used by every module including adis16500.h. Here, new types are defined to manage data more efficiently. To provide an example, the ADIS16500_XL_OUT type is defined as a structure containing three floats, one for each axis (x, y, and z). There is also an enumeration that allows the sensor to be configured in different ways, giving the designer the flexibility to choose the configuration that best suits their needs. The most interesting part here is the section that makes the driver hardware agnostic. At the beginning of the public variables part (Figure 3 in the appendix), there are three crucial type definitions: pointers to three fundamental functions, namely the SPI transmission and reception functions and the delay function needed between two SPI accesses to produce the right stall time. These code lines also show the prototype of the function that can be pointed to. The SPI transmission function takes a pointer to the value to be transmitted as input and returns a status that can be checked to see if the transmission was successful. The same can be said for the SPI reception function, which takes a pointer to a variable as input, where the value read in reception will be stored. The delay function takes a float as input representing the number of microseconds that the designer wants to wait, and has no return (void). In that way, the designer can declare these three functions with these specific prototypes at the application layer (in the main file, for example). Once declared, they can assign the three functions to the fields of an ADIS16500_INIT private structure. To better understand this last step, an example is provided in Figure 2 in the appendix. The SPI transmitter and receiver functions and the delay function are declared as static in the main file, so at application level. They depend on peripheral driver functions, so the dependence on hardware stays outside the sensor driver.

Figure 4: Macros displayed in the ADIS16500 header file (adis16500.h). © ADI

The three functions are assigned to the fields of this variable, which are pointers to functions. In this way, the designer can connect the sensor driver to the microcontroller without modifying the sensor driver code. If the designer changes the microcontroller, they only need to adjust the main file by substituting the low-level functions inside the three static functions with the appropriate functions for the new microcontroller. This approach makes the driver hardware agnostic because the designer does not need to change the sensor's driver code. Low-level functions like spiSelect, spiReceive, spiUnselect, chThdSleepMicroseconds, etc., are usually already available from the microcontroller's development environment. In this specific case, the evaluation board used was the SDP-K1, which embeds an STM32F469NIH6 Arm® Cortex®-M4 microcontroller, and the development environment was ChibiOS, a free environment for Arm targets.
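As an illustration of what the main-file wrappers might look like, here is a sketch with stand-in HAL stubs. In the actual SDP-K1/ChibiOS setup, the stubs would be replaced by spiSelect, spiSend, spiReceive, spiUnselect, and chThdSleepMicroseconds; all other names here are hypothetical:

```c
#include <stdint.h>

/* Stand-in stubs for the MCU HAL.  On the SDP-K1 with ChibiOS these
 * would be spiSelect/spiSend/spiReceive/spiUnselect and
 * chThdSleepMicroseconds; here they just record activity. */
static uint16_t last_word_sent;
static void hal_spi_select(void)        { /* assert chip select */ }
static void hal_spi_send16(uint16_t w)  { last_word_sent = w; }
static uint16_t hal_spi_receive16(void) { return 0x4015; /* fake data */ }
static void hal_spi_unselect(void)      { /* release chip select */ }
static void hal_sleep_us(float us)      { (void)us; }

/* Application-layer wrappers, declared static in main.c.  Only these
 * functions know about the MCU; the sensor driver receives pointers
 * to them and stays hardware agnostic. */
static int app_spi_tx(const uint16_t *p_tx)
{
    hal_spi_select();
    hal_spi_send16(*p_tx);
    hal_spi_unselect();
    return 0;                 /* 0 = success */
}

static int app_spi_rx(uint16_t *p_rx)
{
    hal_spi_select();
    *p_rx = hal_spi_receive16();
    hal_spi_unselect();
    return 0;
}

static void app_delay_us(float us)
{
    hal_sleep_us(us);
}
```

Porting to a new microcontroller then means rewriting only the bodies of these three static wrappers; the sensor driver itself is untouched.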

Figure 4 in the appendix shows the prototypes of the functions that can be invoked from the application level. Those prototypes are in the header file of the sensor's driver (adis16500.h), along with all the other definitions discussed in Figures 2 and 3 in the appendix. First, there is the initialization function (adis16500_init), which takes a pointer to an ADIS16500_INIT structure as input and returns a status code indicating whether the initialization was successful. The implementation of the initialization function is in the source file (adis16500.c) of the sensor's driver. Figure 5 in the appendix shows the code for the adis16500_init function. First, a type called ADIS16500_PRIV is defined, which contains at least all the fields of the ADIS16500_INIT structure, and then a private variable of that type called _adis16500_priv is declared. Within the initialization function, all the fields of the ADIS16500_INIT structure passed by the application layer are assigned to the corresponding fields of _adis16500_priv. This means that any subsequent calls to the sensor driver will use the SPI write and read functions, and the processor delay function, that were passed in by the application layer.

This is a key point because it is what makes the sensor driver hardware agnostic. If the designer wants to change the microcontroller, they only need to change the functions that they pass to the adis16500_init function. They do not need to modify the sensor driver code itself.

At the beginning of the initialization function, the initialized field of the _adis16500_priv variable is set to false because the initialization process has not yet been completed. At the end of the function, before the return, it is set to true. Every time the designer calls another public function (Figure 4 in the appendix), the following check is performed: if _adis16500_priv.initialized is true, the function can proceed; if it is false, the function immediately returns an error called ADIS16500_RET_VAL_ERROR. This prevents users from calling a function without first initializing the sensor driver. Continuing with the initialization function, the following steps are performed:

1. Check the product ID, which is known a priori, by reading the ADIS16500_REG_PROD_ID register.

2. Set the Data Ready (DR) pin polarity by writing the ADIS16500_REG_MSC_CTRL register in the appropriate bits field, with the value passed from the application layer (main.c).

3. Set the sync mode by writing the ADIS16500_REG_MSC_CTRL register in the appropriate bits field, with the value passed from the application layer (main.c).

4. Set the decimation rate by writing the ADIS16500_REG_DEC_RATE register with the value passed from the application layer (main.c).

The initialization function depends on the read and write register functions (Figure 6 in the appendix). That is why the above four routines are done after the assignments to the _adis16500_priv variable. Otherwise, when the read or write register functions are called, they would not know which SPI transmitter, receiver, and processor delay functions to use.
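Putting the pieces described above together, a minimal sketch of the init flow and the initialized guard might look like the following. The names, fields, and stubs are illustrative; the real code is in Figures 5 and 6 in the appendix, and the register accesses of steps 1 to 4 are omitted:

```c
#include <stdint.h>

/* Status codes and callback types (illustrative names mirroring the
 * driver's definitions in the appendix figures). */
typedef enum { ADIS16500_RET_VAL_OK = 0, ADIS16500_RET_VAL_ERROR } adis_ret_t;
typedef adis_ret_t (*spi_tx_t)(const uint16_t *p_tx);
typedef adis_ret_t (*spi_rx_t)(uint16_t *p_rx);
typedef void (*delay_us_t)(float us);

/* Structure the application fills in and passes to adis16500_init. */
typedef struct {
    spi_tx_t   tx;
    spi_rx_t   rx;
    delay_us_t delay;
    uint16_t   dec_rate;   /* decimation rate to program in DEC_RATE */
} adis16500_init_t;

/* Private driver-side copy (ADIS16500_PRIV in the article). */
static struct {
    spi_tx_t   tx;
    spi_rx_t   rx;
    delay_us_t delay;
    uint16_t   dec_rate;
    int        initialized;
} _adis16500_priv;

adis_ret_t adis16500_init(const adis16500_init_t *p_init)
{
    _adis16500_priv.initialized = 0;   /* false until init completes */

    if (!p_init || !p_init->tx || !p_init->rx || !p_init->delay)
        return ADIS16500_RET_VAL_ERROR;

    /* Copy the callbacks first: the register read/write helpers used
     * by the steps below depend on them. */
    _adis16500_priv.tx       = p_init->tx;
    _adis16500_priv.rx       = p_init->rx;
    _adis16500_priv.delay    = p_init->delay;
    _adis16500_priv.dec_rate = p_init->dec_rate;

    /* Steps 1-4: check PROD_ID, set DR polarity and sync mode in
     * MSC_CTRL, write DEC_RATE (omitted in this sketch). */

    _adis16500_priv.initialized = 1;
    return ADIS16500_RET_VAL_OK;
}

/* Guard pattern used at the top of every other public function. */
adis_ret_t adis16500_example_public_fn(void)
{
    if (!_adis16500_priv.initialized)
        return ADIS16500_RET_VAL_ERROR;   /* init must run first */
    /* ...actual work would go here... */
    return ADIS16500_RET_VAL_OK;
}

/* Demo stubs so the flow can be exercised without hardware. */
static adis_ret_t stub_tx(const uint16_t *p) { (void)p; return ADIS16500_RET_VAL_OK; }
static adis_ret_t stub_rx(uint16_t *p)       { *p = 0; return ADIS16500_RET_VAL_OK; }
static void stub_delay(float us)             { (void)us; }
```

Note the ordering: the callbacks are copied into _adis16500_priv before any register access, for exactly the reason given above.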

Referring to Figure 4 in the appendix, there are other public functions that can be invoked after the initialization function. A description of the implemented routines is given below, starting with the low-level ones; the second part of the article goes through the details of the driver's other functions. All of the following functions must be called only after the initialization function. For this reason, a check is performed at the beginning of each function to see whether the sensor has been initialized. If it has not, the procedure immediately returns an error.

• adis16500_rd_reg_16

This function is used to read a 16-bit register. Its implementation is available in Figure 6 in the appendix. The inputs are ad, a uint8_t variable representing the address of the register to be read, and p_reg_val, a pointer to a uint16_t variable where the read value will be stored. Reading a register through the SPI protocol requires two SPI accesses: the first to transmit the address, the second to read back the value of the addressed register. In between the two accesses a stall time is required, which is why a delay function is needed. During the first access, the function transmits the read/write bit, in this case 1 (R = 1, W = 0), followed by the 7-bit register address and 8 bits at 0, giving the following sequence:

| R/W | AD6 | AD5 | AD4 | AD3 | AD2 | AD1 | AD0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

Where AD stands for address and R/W is the read/write bit.

After the stall delay, the function reads the value through SPI and stores it at the input pointer. The registers of the ADIS16500 have a high address containing the high value (8 most significant bits) and a low address containing the low value (8 least significant bits). To get the entire 16-bit value (low and high), it is sufficient to use the low address as ad, because the low and high addresses are consecutive.
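The two-access read described above can be sketched as follows. The stubs, the stall-time value, and the R/W polarity follow the article's description (verify the polarity and timing against the ADIS16500 data sheet); in the real driver the function pointers come from _adis16500_priv rather than being globals:

```c
#include <stdint.h>

/* Demo stubs standing in for the application-layer SPI and delay
 * functions; they record activity so the flow can be checked. */
static uint16_t sent_cmd;
static int demo_tx(const uint16_t *p_tx) { sent_cmd = *p_tx; return 0; }
static int demo_rx(uint16_t *p_rx)       { *p_rx = 0xABCD; return 0; }
static void demo_delay(float us)         { (void)us; }

/* In the real driver these pointers live in _adis16500_priv and are
 * set by adis16500_init; here they are wired to the demo stubs. */
static int  (*spi_tx)(const uint16_t *) = demo_tx;
static int  (*spi_rx)(uint16_t *)       = demo_rx;
static void (*delay_us)(float)          = demo_delay;

/* Stall time between the two SPI accesses (illustrative value; the
 * real figure comes from the data sheet). */
#define ADIS16500_STALL_US 16.0f

/* Read a 16-bit register: send the R/W bit and 7-bit address in the
 * upper byte (R = 1 per the article's description), wait the stall
 * time, then clock out the register contents. */
int adis16500_rd_reg_16(uint8_t ad, uint16_t *p_reg_val)
{
    uint16_t cmd = (uint16_t)((0x80u | (ad & 0x7Fu)) << 8);

    if (spi_tx(&cmd) != 0)
        return -1;

    delay_us(ADIS16500_STALL_US);   /* mandatory stall between accesses */

    if (spi_rx(p_reg_val) != 0)
        return -1;

    return 0;
}
```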

• adis16500_wr_reg_16

This function is used to write a 16-bit register. Its implementation is available in Figure 6 in the appendix. The inputs are ad, a uint8_t variable representing the address of the register to be written, and reg_val, a uint16_t variable representing the value to be written to the register. As with the read function, low and high addresses and values must be taken into consideration. For this reason, according to the data sheet, writing an ADIS16500 register requires two SPI transmit accesses. The first sends the R/W bit equal to 0, followed by the low register address, followed by the low value, so the sequence is the following:

R/W | AD6 | AD5 | AD4 | AD3 | AD2 | AD1 | AD0 | D7 | D6 | D5 | D4 | D3 | D2 | D1 | D0 |

Where D stands for data.

The second SPI transmit access sends the R/W bit equal to 0, followed by the high register address, followed by the high value, so the sequence is the following:

R/W | AD14 | AD13 | AD12 | AD11 | AD10 | AD9 | AD8 | D15 | D14 | D13 | D12 | D11 | D10 | D9 | D8 |.

The write and read register functions could actually also be defined as private, and therefore not visible or invokable from outside the driver software module. The reason they are defined as public is to enable debugging: this allows the designer to quickly read or write any register in the sensor, which can be helpful for troubleshooting problems.
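The two-access write sequence described above can be sketched like this. The stub, the stall-time value, and the R/W polarity again follow the article's description (check the data sheet for the actual polarity), and the function pointers would really live in _adis16500_priv:

```c
#include <stdint.h>

/* Demo stub: records the two 16-bit words "sent" over SPI. */
static uint16_t sent[2];
static int sent_n;
static int demo_tx(const uint16_t *p_tx)
{
    if (sent_n < 2) sent[sent_n] = *p_tx;
    sent_n++;
    return 0;
}
static void demo_delay(float us) { (void)us; }

static int  (*spi_tx)(const uint16_t *) = demo_tx;
static void (*delay_us)(float)          = demo_delay;

#define ADIS16500_STALL_US 16.0f  /* illustrative stall time */

/* Write a 16-bit register with two transmit accesses: low byte to the
 * low address, high byte to the consecutive high address.  The word
 * layout is R/W | 7-bit address | 8-bit data, with W = 0 per the
 * article's description. */
int adis16500_wr_reg_16(uint8_t ad, uint16_t reg_val)
{
    uint16_t lo = (uint16_t)(((ad & 0x7Fu) << 8) | (reg_val & 0xFFu));
    uint16_t hi = (uint16_t)((((ad + 1u) & 0x7Fu) << 8) | (reg_val >> 8));

    if (spi_tx(&lo) != 0) return -1;
    delay_us(ADIS16500_STALL_US);
    if (spi_tx(&hi) != 0) return -1;
    delay_us(ADIS16500_STALL_US);
    return 0;
}
```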

• adis16500_rd_acc

This function reads the x, y, and z acceleration data from the output data registers and returns their values in m/sec². Its implementation is available in Figure 7 in the appendix. The input is a pointer to an ADIS16500_XL_OUT structure, which simply embeds three fields: the x, y, and z accelerations, expressed as float. The way the acceleration is read is the same for all three axes; the only differences are the registers to be read. Each axis has its own: the x axis is read from the x-acceleration output data registers, and the y and z axes accordingly. The acceleration value is represented as a 32-bit value, so two registers must be read: one for the most significant 16 bits and one for the least significant 16 bits. For this reason, looking at the code, there are two register read accesses with appropriate shift and OR bit operations. These operations allow the entire binary value to be stored in a private int32_t variable called _temp.

At this point, the binary-to-two's-complement conversion takes place. After the conversion, the two's complement value is divided by the sensitivity, expressed in LSB/(m/sec²), so that the final value is the acceleration expressed in m/sec². This value is stored in the x, y, or z field of the structure whose pointer was passed as input.
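The shift-OR combination and the scaling step can be sketched as follows. The sensitivity constant here is purely illustrative (the real value comes from the ADIS16500 data sheet), and the helper name is hypothetical:

```c
#include <stdint.h>

/* Illustrative sensitivity in LSB per m/s^2; the real value comes
 * from the ADIS16500 data sheet. */
#define ACC_SENS_LSB_PER_MS2 65536.0f

/* Combine the high and low 16-bit acceleration output registers into
 * one signed 32-bit sample and scale it to m/s^2. */
static float adis16500_acc_from_regs(uint16_t hi, uint16_t lo)
{
    /* Shift-and-OR the two register reads into one 32-bit word. */
    uint32_t raw = ((uint32_t)hi << 16) | (uint32_t)lo;

    /* Reinterpret the word as a two's complement signed value
     * (stored in the driver's private _temp variable). */
    int32_t temp = (int32_t)raw;

    /* Divide by the sensitivity [LSB/(m/s^2)] to obtain m/s^2. */
    return (float)temp / ACC_SENS_LSB_PER_MS2;
}
```

The gyroscope path described next is identical in structure, differing only in the registers read and the sensitivity constant.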

• adis16500_rd_gyro

The gyroscope reading function does exactly the same as the acceleration reading function, except that it reads the x, y, and z gyroscope data, expressed in °/sec. Its implementation is reported in Figure 8 in the appendix. The input of the function is, as in the acceleration case, a pointer to an ADIS16500_GYRO_OUT structure embedding the x, y, and z gyroscope data as floats. The registers read are the gyroscope output data registers. The binary value is represented with 32 bits, and the same steps as for the acceleration are required to reach the two's complement value. After the conversion, the value is divided by the sensitivity, expressed in LSB/(°/sec), so that the final value is expressed in °/sec, and it is stored in the x, y, or z field of the structure whose pointer was passed as input.

Conclusion

In this article, a typical software/firmware stack of an embedded system has been illustrated. The IMU sensor’s driver implementation was introduced. A hardware agnostic approach offers a repeatable method for various sensors or components, even if the interfaces (SPI, I2C, UART, etc.) are different. The subsequent article “Leveraging a Hardware Agnostic Approach to Ease Embedded Systems Design: Driver Implementation” explains the sensor driver implementation in further detail.

About the author

Giacomo Paterniani earned a biomedical engineering degree at the University of Bologna and completed his master's degree in electronics engineering at the University of Modena and Reggio Emilia. After graduating, he spent a year as a research fellow at the University of Modena and Reggio Emilia. In April 2022, he joined Analog Devices' graduate program as a graduate field applications engineer, and in April 2023 he became an FAE.

■ Analog Devices www.analog.com

Engage with like-minded members and ADI technology experts in our online community, EngineerZone®. Expand your network, ask your tough design questions, share your expertise, browse our rich knowledge base, or read about new technologies and the engineers behind them in one of our blogs. Visit https://ez.analog.com

Creating High-Performance, Cool-Running Edge Nodes on FPGAs

With edge nodes, one of the major criteria is often low power consumption - but why does this matter and what are the implications?

Some application developers might not recognize this as a problem, as their devices run on cabled power, but that is not the whole story. Lower power consumption also means lower self-heating of the device. This means that the FPGA or semiconductor device running the edge node stays cooler, avoiding the need for a fan, which is notorious for failing. Fewer components mean a smaller physical size that can fit into a smaller enclosure.

Another consideration is that FPGAs are semiconductors, and if they get hot, this can also reduce the lifetime of the devices. Keeping them cooler gives a longer mean time to failure and a lower FIT rate. A rule of thumb in the semiconductor industry is that reducing the junction temperature by approximately ten degrees also approximately halves the FIT rate. To some extent, it is a trade-off between lower power consumption, and thus a longer life, with a given set of features, or higher power consumption that allows additional features in the design.

A lower FIT rate in the hardware, due to lower power consumption and lower temperature, also makes it easier to achieve certification of the device.

Efficiency means longevity

What's the architecture behind power efficiency, and why does that really matter? Comparing Microchip FPGAs to typical FPGAs, Microchip FPGAs are built on non-volatile technology. This means the bit stream, the configuration of the FPGA, is stored directly in the active cells; it always stays inside the FPGA and is not lost when it is powered down. The next time you power up, the configuration is immediately available. On SRAM devices, the cells need to be continuously recharged, and they also suffer some leakage, causing self-heating. For similarly sized devices, Microchip FPGAs typically use about 50% of the power consumption of the alternative architectures.

The non-volatile technology of Microchip FPGAs also gives immunity against single event upsets, such as neutron strikes and other particle strikes, which may change the functionality of the FPGA. This removes the need for the mitigation schemes required by other technologies.

If we look at the same application, the same FPGA complexity and the same design on three different FPGAs, on the left in the diagram we see the PolarFire FPGA. The other two are competitive FPGAs running the same design, with the same number of look-up tables and the same blocks, on a similarly sized FPGA board at the same room temperature. In thermal tests, estimations were approximately 45°C for PolarFire, approximately 65°C for the 16 nm device, and about 70°C to 80°C for the 28 nm competitor.

What does that imply for mean time to failure?

Let’s consider a design running on one of our SoC systems and compare it with an SRAM-based SoC. With the same design, the same temperature sweep, and the power consumption recorded, the mean time to failure curve for each is shown below. The PolarFire SoC at an ambient temperature of 50°C reaches about 70°C, and the SRAM-based system reaches about 110°C.

For the SRAM-based SoC, reaching 110°C, the FIT rate would be around 107. The PolarFire SoC, with its junction temperature of 70°C, shows only about ten FIT; a few of those ten-degree halvings make a significant difference to the FIT rate. This disparity in FIT rates makes a significant difference in the reliability of the device out in the field.

When power gets critical

There are several applications where performance per watt is important. These include cameras, industrial vision and smart vision. A common case is the need for cameras in very small form factors that are still required to perform some sophisticated tasks, such as interfacing to the sensor, doing the processing, possibly some machine learning, encoding, and connecting to the network.

One application where power consumption is very important is thermal cameras. These are typically handheld and need a long battery life. They are also relatively small, so developers want a small package, which is supported with eleven-by-eleven-millimeter packages. In addition, because of the non-volatility of the FPGA configuration, no configuration memory is needed, saving more space on the board.

Another application is professional drones. These very often require a combination of motor control, sensor fusion and communication. Again, low power consumption is needed to obtain maximum flying time, combined with the deterministic control achievable with the FPGA fabric. The FPGA board for this application features the MPF100 in the small eleven-by-eleven package, plus a line driver for CoaXPress at five gigabits per second. The design interfaces with the Sony sensor; the MIPI CSI-2 block converts the raw data into CoaXPress and then drives the internal high-speed transceivers to the line driver.

The small amount of self-heating also helps by preventing interference with the thermal sensor, again interfacing via the MIPI interfaces. This is where the benefits of the PolarFire FPGA come in: highly optimized packages that fit in a small housing and support very small PCBs.

Additionally, we have a RISC-V processor for housekeeping. There are Microchip tools available which can help develop FPGA applications and analyze their power consumption.

As we have seen, Microchip FPGAs provide a power advantage over the competition, offering around 50% lower power consumption. With lower power consumption comes less risk to the system.

Tools are available from the Microchip website and from other vendors, allowing direct comparisons to ensure you get the low power FPGAs you need.

For more information: https://www.microchip.com/en-us/products/fpgas-and-plds

■ Microchip Technology www.microchip.com

Get a Quick Start with “Windows on Arm” Development

This article explains the OS selection criteria that lead to using Windows for Arm and reviews the different versions of Windows available for consideration. It then introduces the EPC-R3720IQ-AWA12 Windows on Arm Development Kit from Advantech and describes how it provides a seamless environment to accelerate development. It includes tips for getting started and points to Microsoft tools that can be used with the kit.

Much of the existing infrastructure is based on Windows in applications such as industrial automation and healthcare. For developers creating low-power, low-cost edge devices for these sectors, Windows on Arm® is an obvious choice, as it brings the Windows platform to the efficient Arm architecture.

However, one major challenge in creating Windows on Arm systems has been a lack of suitable development kits. Although the operating system (OS) has long been available on various board-level Internet of Things (IoT) and embedded computing systems, these offerings typically require significant hardware engineering before coding can begin.

Developers need a box PC-style solution that comes pre-loaded with Windows on Arm and integrates all the components required to start application development. This would reduce setup time and complexity, enabling developers to focus on application development and testing without worrying about initial software installation and configuration.

Why use Windows instead of Linux or an RTOS?

When choosing an OS, developers have many options, including Linux and various real-time OSs (RTOSs). One common reason for choosing Windows over these alternatives is the extensive range of software and libraries available. This is a critical consideration for environments with legacy Windows infrastructure.

Windows also offers a mature development ecosystem, with comprehensive tools and application programming interfaces (APIs) such as Visual Studio and the .NET framework. Programmers can choose from a wide range of programming languages like C++, Python, and Node.js, and can access various Microsoft Azure services to build out sophisticated functionality quickly.

Linux shares some of these advantages, but configuring and maintaining a Linux build can require considerable effort. Furthermore, Linux distributions can vary widely, leading to challenges in the development process.

In contrast to Windows and Linux, RTOSs emphasize efficiency. They typically lack advanced features like rich graphical user interfaces (GUIs) and the broad ecosystem that full-featured OSs provide.

Ultimately, if developers seek a robust, feature-rich, and secure OS with a mature development ecosystem, Windows presents a compelling option. However, Windows is available in many forms, and it is essential to understand the differences.

Understanding the Windows options

Microsoft offers several variations of Windows. Table 1 shows some of the key distinctions between the different editions.


For the EPC-R3720IQ-AWA12, Advantech selected Windows IoT Enterprise. One of the advantages of Windows IoT Enterprise is its compatibility with the touch-friendly Universal Windows Platform (UWP) and traditional Win32 apps. This flexibility allows developers to choose the app model best suited to their needs. Windows IoT Enterprise also offers advanced security features that improve reliability:

• Device lockdown capabilities allow administrators to restrict the device to running only authorized apps.

• Secure boot ensures that the device starts up only with trusted software.

• BitLocker encryption helps protect sensitive data.

The OS also offers enterprise-grade management tools that enable centralized support of deployed devices. These tools simplify the maintenance and security of large-scale IoT deployments.

Many of these features are not supported in the more compact Windows IoT Core. This edition is intended for lightweight, single-purpose devices with limited resources. It removes features such as a GUI and support for traditional Win32 applications, making it more suitable as a companion OS for complex devices. Conversely, standard Windows Pro offers a rich feature set but cannot be customized for IoT deployments. It is also not available with LTSC support for long-life devices.

Why use Windows on Arm?

Historically, the Windows OS was tied to the x86 architecture. Today, the OS also runs on Arm processors, and this option opens new design possibilities.


The primary advantage of Windows on Arm is efficiency. Arm processors are known for their low power consumption, making them well-suited for battery-powered devices and applications where thermal management is a concern. Arm-based systems also tend to emphasize cost efficiency, making them an attractive option for large-scale IoT deployments.

Getting a quick start with a Windows on Arm dev kit

As noted above, one of the drawbacks of Windows on Arm has been the lack of ready-to-use hardware. The EPC-R3720IQ-AWA12 solves this problem by delivering a box PC pre-installed with Windows 10 IoT. As shown in Figure 1, the dev kit is housed in a rugged 174 × 108 × 25 millimeter (mm) enclosure. The enclosure accommodates mounting brackets and can be deployed in the field if desired.

Setting up the dev kit

Setting up the EPC-R3720IQ-AWA12 dev kit is a straightforward process. The following steps cover the basic setup:

1. A monitor, keyboard, and network should be connected via the HDMI, USB, and Ethernet ports, respectively.

2. The dev kit will automatically start the Windows 10 IoT setup process on the first boot. Once this is complete, the user will be presented with the Windows desktop environment.

3. To set up the development environment, the user must download and install Visual Studio from the Microsoft website. During installation, the user must select the components required for developing Windows IoT applications, plus any other necessary workloads, such as .NET or UWP.

4. Any required software development kits (SDKs) and runtimes should be installed. For example, if .NET 6 or .NET 7 is needed, the appropriate runtimes should be downloaded from the Microsoft developer portal or through Visual Studio’s installer.

5. After installing the necessary tools, Visual Studio should be configured for Windows IoT development to ensure the correct versions of the Windows SDK and tools are installed.

At the heart of the dev kit is NXP Semiconductors’ MIMX8ML8DVNLZAB system-on-chip (SoC), based on a quad-core Arm Cortex-A53 processor capable of running at 1.8 gigahertz (GHz) (it runs at 1.6 GHz on the EPC-R3720IQ-AWA12). The SoC features a 2.3 tera operations per second (TOPS) neural processing unit (NPU), making it well suited for artificial intelligence (AI) and machine learning (ML) workloads at the edge. The development kit has 6 gigabytes (Gbytes) of memory, 16 Gbytes of storage, and expansion options via slots for MiniPCIe, M.2, Micro SD, and Nano SIM. Regarding connectivity, the dev kit offers dual Gigabit Ethernet (GbE) ports, one USB 2.0 port, one USB 3.2 Gen 1 port, an HDMI port, and a serial port supporting CAN FD.

Depending on the application needs, additional configurations may be required:

1. An antenna should be attached to the dev kit’s built-in connector if wireless networking is needed. For cellular connectivity, a SIM card should be provisioned and installed.

2. Any peripherals connected through the M.2 slot or other I/O ports should be tested, ensuring that the necessary drivers and software are installed for these peripherals.

3. The appropriate Azure IoT Hub or other cloud services must be configured if the application involves cloud connectivity. This involves setting up an Azure account, creating resources in Azure, and configuring the development kit to communicate with these resources.

The user can now move on to application development and deployment. Development can be started by creating a new project or opening an existing one in Visual Studio. Applications can be developed, run, and tested directly on the device. If users plan to debug applications remotely from a development PC instead, they should set up remote debugging. This involves configuring the remote debugging tools on both the dev kit and the PC.

Conclusion

Windows on Arm offers many compelling advantages for complex IoT devices. The EPC-R3720IQ-AWA12 dev kit gives developers a quick path to creating applications for this OS, and the hardware can also be used directly for deployment in some cases. As shown, getting started with the dev kit is a simple process, enabling developers to begin application development with minimal setup.

About the author

Rolf Horn, applications engineer at DigiKey, has been in the European Technical Support group since 2014, with primary responsibility for answering development and engineering related questions from customers in EMEA, as well as writing and proof-reading German articles and blogs on DigiKey’s TechForum and maker.io platforms. Prior to DigiKey, he worked at several manufacturers in the semiconductor area with a focus on embedded FPGA, microcontroller and processor systems for industrial and automotive applications. Rolf holds a degree in electrical and electronics engineering from the University of Applied Sciences in Munich, Bavaria. He started his professional career at a local electronics products distributor as a system-solutions architect, sharing his steadily growing knowledge and expertise as a trusted advisor.

Reference:

“Getting Started with Windows 10 IoT Enterprise Using the Advantech EPC-R3720, an Arm-Based Embedded PC with NXP i.MX 8M Plus”

■ DigiKey www.digikey.com

Figure 1
The EPC-R3720IQ-AWA12 is a compact box PC powered by an Arm processor running Windows 10 IoT.

Siemens strengthens leadership in industrial software and AI with acquisition of Altair Engineering

Siemens has signed an agreement to acquire Altair Engineering Inc., a leading provider of software in the industrial simulation and analysis market. Altair shareholders will receive USD 113 per share, representing an enterprise value of approximately USD 10 billion. The offer price represents a 19% premium to Altair’s unaffected closing price on October 21, 2024, the last trading day prior to media reports regarding a possible transaction. With this acquisition Siemens strengthens its position as a leading technology company and its leadership in industrial software.

“Acquiring Altair marks a significant milestone for Siemens. This strategic investment aligns with our commitment to accelerate the digital and sustainability transformations of our customers by combining the real and digital worlds. The addition of Altair’s capabilities in simulation, high performance computing, data science, and artificial intelligence together with Siemens Xcelerator will create the world’s most complete AI-powered design and simulation portfolio,” said Roland Busch, President and CEO of Siemens AG. “It is a logical next step: we have been building our leadership in industrial software for the last 15 years, most recently, democratizing the benefits of data and AI for entire industries.”

“The acquisition of Altair is highly synergistic, underpinning Siemens’ stringent capital allocation, balancing investments and shareholder returns on the basis of a strong balance sheet. The transaction is expected to be EPS accretive two years post-closing,” said Ralf P. Thomas, CFO of Siemens AG.

“This acquisition represents the culmination of nearly 40 years in which Altair has grown from a startup in Detroit to a world-class software and technology company. We have added thousands of customers globally in manufacturing, life sciences, energy and financial services, and built an amazing workforce and innovative culture,” said James Scapa, Altair’s founder and CEO. “We believe this combination of two strongly complementary leaders in the engineering software space brings together Altair’s broad portfolio in simulation, data science, and HPC with Siemens’ strong position in mechanical and EDA design. Siemens’ outstanding technology, strategic customer relationships, and honest, technical culture is an excellent fit for Altair to continue its journey driving innovation with computational intelligence.”

By adding Altair’s highly complementary simulation portfolio, with strength in mechanical and electromagnetic capabilities, we are enhancing our comprehensive Digital Twin to deliver a full-suite, physics-based simulation portfolio as part of Siemens Xcelerator. Altair’s data science and AI-powered simulation capabilities allow anyone, from engineers to generalists, to access simulation expertise to decrease time-to-market and accelerate design iterations. Additionally, Altair’s data science capabilities will unlock Siemens’ industrial domain expertise in product lifecycle and manufacturing processes.

Significant synergies and EPS accretive

The transaction will increase Siemens’ digital business revenue by around 8%, adding roughly EUR 600 million to Siemens’ digital business revenue of EUR 7.3 billion as reported in fiscal year 2023. Siemens expects to achieve significant revenue synergies, especially from cross-selling of the highly complementary portfolios and from providing Altair full access to Siemens’ global footprint and its global industrial enterprise and customer base, with a revenue impact of more than USD 500 million p.a. in the mid-term, growing to more than USD 1.0 billion p.a. in the long-term.

Moreover, Siemens aims to achieve cost synergies on a short-term basis, with an EBITDA impact of more than USD 150 million p.a. by year two post-closing.

The transaction is expected to be EPS (pre-PPA) accretive by year two post-closing. The acquisition will be fully cash-financed from Siemens’ existing resources, based on Siemens’ strong balance sheet and its capacity to fully finance the transaction, as underlined by its exceptional credit rating, which Siemens is committed to maintaining.

Preemptive deleveraging is supported by significant cash proceeds from the already closed Innomotics divestment. In addition, Siemens has substantial financing potential from the sale of shares in listed entities. Closing of the transaction is subject to customary conditions and is expected within the second half of calendar year 2025.

■ Siemens www.siemens.com

The second standardization wave is rising

The OSM (Open Standard Module) specification, officially adopted by the SGET (Standardization Group for Embedded Technologies e.V.) in 2021, heralded the world’s first manufacturer-independent standard for solderable Computer-on-Modules. This is now being followed by a second wave of standardization as OEMs seek to switch from proprietary solutions to a unified standard across the entire range of ultra-low-power application processors.

Solderable Computer-on-Modules are not entirely new, with various manufacturers having offered proprietary Computer-on-Modules suitable for SMT assembly for some time. Unlike classic Computer-on-Modules, which require manual THT assembly, these solderable modules allow OEMs to benefit from fully automated assembly and test processes. However, standardization of such modules has been lacking until now. Consequently, each module comes in a different form factor and features a different pinout. This leads to vendor lock-in, with long-term availability depending on a single manufacturer. There are no independently maintained specifications, design guides, or a broad developer community, let alone open-source initiatives.

Development houses specializing in carrier boards for solderable Computer-on-Modules are also missing, as is the entire ecosystem common in the traditional Computer-on-Modules scene. OEM customers must either integrate the modules themselves or rely on the module manufacturers to implement the design, which requires a great deal of trust and carries a higher design risk than with standardized Computer-on-Modules.

OSM targets ultra-low-power processors

With the availability of the OSM standard, things are now changing. And the shift is occurring at an opportune time, as more and more applications are being implemented with ultra-low-power processors up to 8 Watts, which can generally be designed without heat sinks. The Internet of Things (IoT) and the associated desire to make as many devices as possible ‘smart’, for example to manage them via smartphone apps, are drastically increasing the need for simpler integration of such application processors – even for industrial batch sizes, for which Computer-on-Modules are essentially designed. In general, Computer-on-Modules offer an application-ready computing core with all specified interfaces in a fully validated package, including all standard drivers and the necessary OS support. This significantly shortens the time-to-market and reduces NRE costs immensely.

Standardized Computer-on-Modules also provide typical advantages such as second-source availability, scalability across processor generations, manufacturers, and even architectures, thus ensuring long-term availability and safeguarding investments.

This offers OEMs a high level of design security and maximum return on investment. By deploying Computer-on-Modules, costs can also be saved on circuit boards, as high current density often exists only directly at the processor; once the current is distributed across the carrier board, fewer layers are needed.

It’s not only the connector that costs money

However, when it comes to leveraging ultra-low-power processors, pluggable Computer-on-Modules are not the most efficient choice as they require several additional investments that partly negate the advantages of the cheaper processors. These investments include procurement of the actual connector, THT assembly, screw fixes and locknuts, as well as equipment to test the modules. In addition, test stations for pluggable COMs require regular replacement of the test board for larger series production runs, as the connectors are specified for a limited number of mating cycles – usually no more than a few hundred. Packaging is also more complex, involving antistatic pressure-sealed bags, bubble wrap or foam packaging, and cardboard boxes, with each module often individually packaged. RMA management is not only time-consuming; it also wears out the test stations. In this respect, the concept of pluggable Computer-on-Modules is only truly worthwhile for high-performance processors costing around 70 euros, where the additional expenses are less significant, accounting for 5-10% of the cost per module. For more expensive processors, the percentage is even lower.

OSM modules increase cost efficiency

Conversely, the cheaper ultra-low-power processors require significantly less costly solutions to bring them to market as application-ready modules at an affordable price. After all, the design-in is generally less complex, making full-custom designs more viable. A clearly more cost-effective solution is the use of solderable Computer-on-Modules, also known as System-on-Modules in the ARM segment. They can be assembled, tested, and packaged fully automatically. When standardized, the equipment to test the modules can also be standardized and used across all modules, eliminating wear and tear. Because the modules are stored in blister packs, shipped, and prepared for SMT assembly, the entire complex packaging management and its associated costs are also eliminated. This makes modules a viable solution even for simple 32-bit processors that cost only a few euros/dollars.

Full-custom designs become less attractive

It is also worth noting that the design decision is significantly shifting from full-custom designs towards module and carrier board solutions. With THT-assembled modules, the amortization limit for the finished embedded system is typically around 20,000 units. However, the potential cost savings from SMT assembly are so significant that the break-even point is expected to increase to around 200,000 units. These are only rule-of-thumb values, which can vary depending on the setup. What is certain, however, is that solderable Computer-on-Modules can be used to address significantly larger-volume production runs, which expands the target market immensely and therefore also increases demand. The larger target group, in turn, enables faster market growth, leading to lower prices thanks to better economies of scale. All of this, coupled with availability in a standardized, non-proprietary format, promotes competition, which ultimately benefits OEMs and is why the proprietary market for solderable Computer-on-Modules is coming under increased pressure with the launch of the OSM standard.
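The break-even logic above can be sketched numerically. A module-based design trades lower NRE for a higher per-unit cost; the crossover volume is simply the NRE difference divided by the per-unit premium. All the figures below are illustrative assumptions (neither the article nor any vendor quotes them); they are chosen only to reproduce the ~200,000-unit order of magnitude mentioned above.

```python
# Illustrative break-even estimate: full-custom design vs. module-based design.
# All cost figures are hypothetical assumptions, not vendor data.

def break_even_units(custom_nre: float, module_nre: float,
                     custom_unit_cost: float, module_unit_cost: float) -> float:
    """Volume at which the total cost of a full-custom design (high NRE,
    lower unit cost) equals that of a module-based design (low NRE,
    higher unit cost)."""
    premium = module_unit_cost - custom_unit_cost  # per-unit module premium
    if premium <= 0:
        raise ValueError("module must carry a per-unit cost premium")
    return (custom_nre - module_nre) / premium

# Hypothetical numbers: 500k EUR extra NRE for the full-custom route and a
# 2.50 EUR per-unit saving put the break-even at 200,000 units -- the
# rule-of-thumb order of magnitude quoted for SMT-assembled modules.
units = break_even_units(custom_nre=550_000, module_nre=50_000,
                         custom_unit_cost=17.50, module_unit_cost=20.00)
print(f"break-even at ~{units:,.0f} units")
```

Below the break-even volume the module route is cheaper overall; above it, the full-custom NRE amortizes.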

Clear pin specifications

However, the OSM standard includes an often-misunderstood provision: there are no mandatory pin assignments. Without further explanation, this might suggest that everyone can do as they please, raising the question of what separates proprietary from standardized Computer-on-Modules. The difference lies in the optional specifications, which ultimately provide a rather stringent master plan for the assignment of pins. The OSM form factor ‘Large’ has up to 660 pins (even COM-HPC Server modules only have around 21% more pins!), and the standard clearly defines which function each pin may carry. However, it is not obligatory to assign pins – and this offers immense advantages.

100% interoperability

It means OSM modules are 100% interoperable and can be deployed on any OSM-compatible carrier. The optional specifications also make it possible to offer highly dedicated and therefore very heterogeneous SoCs on a single standard, because 660 pins provide enough space for all common interfaces. One potential issue is that the optional interface specification allows five Ethernet interfaces only for size L modules and upwards. So, there is a clearly defined range of functions for each module size, which could have been different for some processors – for example, highly specialized switch controllers that would also fit on size M modules. Ultimately, the only difference between standardized OSM modules and previous Computer-on-Modules is that there are fewer mandatory specifications, which some processor manufacturers had used to deny competitors access to the Computer-on-Module business by exerting influence on the specification of the pinouts. This prevented competitors from leveraging the multiplier effect of Computer-on-Modules. However, by replacing mandatory with optional interfaces, the standard becomes compatible with almost all ultra-low-power SoCs, which are available in highly heterogeneous designs.

Full flexibility

This makes the OSM standard exceptionally open and future-proof – even for very dedicated solutions with automotive interfaces or integrated FPGAs, for instance. New or not yet common interfaces can in the future simply be assigned to unused pins or replace obsolete ones. For functions that will never exist in combination, pins can be assigned multiple times. Such flexibility creates the necessary design security in the heterogeneous field of highly integrated ultra-low-power SoCs.

Leading manufacturer support

SGET members Advantech, Aries, Avnet, F&S, Geniatech, iesy, iWave, and Kontron are the first manufacturers to offer official OSM modules. The ecosystem surrounding the modules is also growing. The first carrier board developers are already advertising that they offer not only classic COMs but also designs with OSM modules. Manufacturers such as iWave and Yamaichi are also selling test adapters for OSM modules. Application-ready Gen2 Computer-on-Modules supporting the OSM standard are already available in numerous variants:

One carrier board for all OSM modules

All these modules can be tested on the same carrier board, provided they support the full range of OSM pin functions. This homogenizes the design-in and simplifies the re-use of PCB layouts. One test system can be used for all OSM modules, without the wear and tear effects associated with connectors. Costs decrease and flexibility increases when evaluating a wide variety of processors. It becomes possible to build highly heterogeneous product families using one basic layout, which can result in enormous economies of scale.


Even tricky tasks such as the carrier board antenna design can be reused, as OSM has incorporated these signals into the pin specification as well, enabling connectorless antenna designs. Today, most OSM modules still come with pre-tinned and non-tinned contact pins. However, the even more advantageous BGA variants are likely to appear more frequently in the future, once the second wave of standardization has expanded. From certain volumes upwards, investing in the simpler and reliable SMT process will ultimately be worthwhile to gain a competitive advantage and offer added value. Note that SGET membership is required to manufacture OSM modules. Nevertheless, the specification is open-source and provided under a Creative Commons license, both in terms of hardware and software. This also increases trust in the OSM standard. It will be interesting to see whether there will also be OSM modules for Arduino or Raspberry Pi in the future. It’s certainly something that would be possible from a technical perspective.

The OSM standard pushes the break-even point for full-custom designs towards 200,000 units.

■ SGET https://sget.org

The currently available OSM modules are extremely heterogeneous, yet can all be tested on the same carrier.

OSM vendor support of processors

• ESP32 Xtensa
• Intel Atom (Elkhart Lake, Apollo Lake)
• NXP i.MX 8M Mini
• NXP i.MX 8M Nano
• NXP i.MX 8M Plus
• NXP i.MX 8ULP
• NXP i.MX 8X Lite
• NXP i.MX 91
• NXP i.MX 93
• Qualcomm Snapdragon QCS6490
• Renesas RZ/A3UL
• Renesas RZ/Five
• Renesas RZ/G2L
• Renesas RZ/G2UL
• Rockchip PX30
• Rockchip RK35
• STM32MP13x
• TI AM33x Sitara
• TI AM62Ax Sitara
• TI AM62x Sitara
• TI DRA821U

Yamaichi test socket

Infineon launches ModusToolbox™ Motor Suite for simplified motor control development

Infineon Technologies AG announced the launch of the ModusToolbox™ Motor Suite, a comprehensive solution of software, tools and resources for developing, configuring and monitoring motor control applications. With its versatility across motor types, the solution enables developers to bring high-performance motor control applications to market quickly and efficiently. The suite supports industrial, robotics, and consumer applications such as home appliances, HVAC, drones, and light electric vehicles.

“With the launch of ModusToolbox Motor Suite, we are empowering developers to create innovative motor control applications with ease and efficiency,” said Steve Tateosian, Senior Vice President of IoT and Industrial MCUs at Infineon. “By providing a comprehensive solution that expands the capabilities of our ModusToolbox ecosystem, we are giving developers the tools they need to focus on what matters most: building better products.”

The suite streamlines development and testing, providing real-time parameter monitoring for valuable insights into motor performance, efficiency, and reliability. This enables engineers to quickly identify issues, optimize designs, and enhance overall functionality. It offers effortless and accurate board setup, comprehensive signal analysis, and customized status monitoring.

ModusToolbox Motor Suite features automatic detection and recognition of supported boards and kits, and real-time visualization and monitoring of motor-control-related signals. It also provides optimized system performance with pre-configured algorithms and real-time monitoring of critical system parameters. Additionally, the solution offers sophisticated algorithms that ensure control robustness across many motor types and applications. It currently includes out-of-the-box support for Infineon’s XMC7000 microcontrollers and will further expand its capability to include industrial MCUs, with PSOC™ Control integration confirmed for upcoming releases.

In addition to its comprehensive feature set, the ModusToolbox Motor Suite is designed to be highly versatile and easy to use. It provides a seamless graphical user interface for motor control development and an easily adaptable, hardware-abstracted motor control core library.

Infineon will showcase a demo featuring the ModusToolbox Motor Suite at electronica 2024 in Munich from 12 to 15 November (Hall C3, Booth 502).

Infineon at electronica 2024

At this year’s electronica in Munich, Infineon presents innovative solutions that are helping to shape an all-electric society. Visitors can explore sustainable technologies that are transforming the mobility and automotive landscape, enabling sustainable buildings and smarter living, and promoting the growth of artificial intelligence with minimal environmental impact.

More information is available at www.infineon.com/electronica.

Infineon Technologies www.infineon.com

Wireless performance testing key to growth of extended reality (XR) in industry

Testing extended reality for business and industrial markets

Extended Reality (XR) – a collective term for diverse technologies that include Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) – has expanded from the gaming world to industrial and business applications. In industry and business, XR can be applied to training, learning, simulation, remote monitoring, maintenance, repair, and more.

XR is expected to show an annual growth of 20% to 30% through 2028.

For example, in its Extended Reality Global Market Report 2023, Global Information, Inc. projects that the XR market will grow at an annual rate of 29.1% to reach US$123.77 billion in 2027.

AR overlays video of the real world with computer-generated images and videos. AR typically uses smart glasses or smartphones and does not enable manipulation of the overlaid content, which is static in most cases.

A typical AR application is in the maintenance of facilities and equipment. Here, AR allows the operator to view a manual via smart glasses while still being able to work with both hands and not take their eyes off the equipment. The heads-up display (HUD) used in automotive applications is also considered to be a type of AR device.

VR immerses the user in a computer-generated virtual space and requires VR goggles that often block out the real world. Users can interact with characters and objects in this virtual space. An example of the use of this technology is Building Information Modelling (BIM), which uses 3D models in all stages of construction for planning, surveying, design, construction, management, and maintenance.

MR combines the real world with virtual objects, such as menus and characters. It is a user-interactive technology that requires dedicated MR goggles with gesture recognition; the difference from AR is that MR recognizes virtual touch operations on a menu or dashboard displayed in mid-air. Objects such as components and facilities are represented as 3D models, with which the user can interact by rotating them, moving them, and so on. The 3D graphics generated by MR are ideal for providing operators with instructions on a production line or enabling collaboration among project members who need to share information about the shape and design of a product. To drive training and work efficiency, there is a growing trend to apply MR to manufacturing, maintenance, and repair processes, given that it can provide 3D representations of components and work procedures. It can also be used to include the know-how of experienced workers in a system. Figure 1 summarises the key aspects of these technologies.

Applying MR to make plant operations and training more efficient.

XR Technical challenges

XR devices need to respond to user actions and inputs in real time, including 3D video content, leading to strict latency requirements.

One method of suppressing delay involves sending uncompressed video data from the host system and then displaying that video, as is, on the XR device. This, however, means that improving the data throughput of the physical layer of wireless communication is a key challenge to overcome if XR devices are to transmit and receive huge amounts of uncompressed data such as video and 3D graphic content.

For example, the Wi-Fi 5 or IEEE 802.11ac standard specifies a maximum data throughput of 6.9 Gbps, which is barely above the 6 Gbps requirements for stereoscopic XR using uncompressed data. However, newer standards such as Wi-Fi 6/6E (11ax) specify a maximum data throughput of 9.6 Gbps. In addition, Wi-Fi 7 (11be) offers a theoretical maximum throughput of 46 Gbps. However, real-world data throughput speeds are typically much lower than the theoretical maximum possible throughput.
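The ~6 Gbps figure for uncompressed stereoscopic video can be reconstructed from first principles: raw bitrate is pixels per frame × color depth × frame rate × number of eyes. The resolution, frame rate, and color depth below are assumptions chosen for illustration (the article does not state which parameters underlie its 6 Gbps figure), but they show how such a requirement arises.

```python
# Back-of-the-envelope uncompressed video bitrate, to put the Wi-Fi PHY
# maxima quoted in the text into perspective. Parameters are assumptions.

def uncompressed_bitrate_gbps(width: int, height: int, bits_per_pixel: int,
                              fps: int, eyes: int = 2) -> float:
    """Raw (uncompressed) video bitrate in Gbps."""
    return width * height * bits_per_pixel * fps * eyes / 1e9

# Assumed: 1920x1080 per eye, 24-bit color, 60 fps, two eyes -> ~6.0 Gbps.
xr = uncompressed_bitrate_gbps(1920, 1080, 24, 60)
print(f"stereoscopic XR: ~{xr:.1f} Gbps")

# Headroom against the theoretical maxima cited above (real-world rates
# are considerably lower than these PHY figures).
for name, gbps in [("Wi-Fi 5", 6.9), ("Wi-Fi 6/6E", 9.6), ("Wi-Fi 7", 46.0)]:
    print(f"{name}: {gbps} Gbps max -> headroom {gbps - xr:.1f} Gbps")
```

Higher per-eye resolutions or frame rates push the requirement well past Wi-Fi 5's 6.9 Gbps ceiling, which is why the newer standards matter for XR.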

Another challenge is the coexistence of wireless communication technologies and high-density integration in XR devices.

XR devices have multiple wireless communication interfaces, including wireless LAN and Bluetooth®, for transferring 3D graphics and motion sensor data.

Furthermore, 5G NR technologies with eMBB, mMTC, and URLLC are also under consideration for application to XR. It is said that 5G NR technology will be essential to future XR devices. If these multiple wireless communication technologies are to coexist in a single device, however, XR device developers face the challenge of addressing the noise and radio interference that each of these technologies generate when in use. XR devices require the high-density integration of multiple wireless communication modules in a limited space. Moreover, noise sources such as power supplies, signal processing, fans, and motors are mounted in a small enclosure close to the communication module. The resulting noise may increase the communication error rate, causing reduced communication speeds and data loss. Other country- and region-specific radio usage regulations, as well as compliance with all relevant standards, such as 3GPP and IEEE, also need to be considered.

Ensuring wireless communication performance

The wireless performance testing listed below will be vital to developers who take on the technical challenges of wireless communication outlined above.

• Wireless signal strength

• Signal quality such as receiving sensitivity and modulation accuracy

• Stability



For example, the Anritsu Wireless Connectivity Test Set MT8862A can evaluate TRx RF characteristics such as Tx power, receiving sensitivity (PER), and modulation accuracy (EVM) for IEEE 802.11a/b/g/n/ac/ax/be (2.4, 5, and 6 GHz bands) devices. The MT8862A supports both Network and Direct modes. Network mode, a distinctive feature of the MT8862A, tests the wireless performance indices by simulating a real-world network connection and completing the wireless connection between the DUT and the MT8862A acting as an access point (AP) or station (STA). Network mode provides an easy-to-use test environment that does not require DUT control, and is ideal for product development, design validation, and end-product verification. Direct mode, on the other hand, supports fast measurements, with the DUT controlled directly from an external PC, and is optimised for mass production.

Bluetooth technology is widely used for communication between XR devices and controllers and must meet the Bluetooth SIG RF performance requirements. The Anritsu Bluetooth Test Set MT8852B is an industry-standard RF test solution certified by the Bluetooth SIG. The test set provides production tests for a wide range of products that integrate Bluetooth technology. It supports Basic Rate (BR), Enhanced Data Rate (EDR), and Bluetooth low energy (BLE) measurements for transmit power, frequency, modulation, and receiver sensitivity, as required by the Bluetooth RF test specifications.

RF performance in 5G NR can be evaluated using the Radio Communication Test Station MT8000A. The MT8000A test platform provides all-in-one support for RF measurements as well as protocol and application tests in the FR1 (up to 7.125 GHz) and FR2 (millimeter-wave) bands. The MT8000A enables both millimeter-wave band RF measurements and beamforming tests using the call connections specified by 3GPP.

Conclusion

XR technology is evolving rapidly, as are the wireless technologies on which it will rely. Next-generation XR goggles will require latency in the order of single-digit milliseconds. To achieve this in 5G and beyond networks, the use of multi-access edge computing (MEC) is being considered, where data is processed on edge servers located close to the XR device without the use of the cloud. Wireless performance testing is important to facilitate the development of XR and bring the next generation of devices to market. To this end, Anritsu offers a comprehensive range of industry-leading test sets that provide engineers with the advanced test capabilities required for current and next-generation wireless technologies. Particularly pertinent to XR is the rollout of Wi-Fi 6/6E and the introduction of Wi-Fi 7. Longer-term advances in 5G and beyond are also expected to greatly enhance communication.

■ Anritsu Corporation www.anritsu.com

Siemens launches fully electronic e-Starter with semiconductor technology

Siemens Smart Infrastructure launches its first fully electronic starter with semiconductor technology.

The SIMATIC ET 200SP e-Starter offers short-circuit protection that is 1,000 times faster and virtually wear-free compared to conventional solutions such as circuit breakers or fuses. This ensures optimal protection for motors as well as other types of loads and the applications in which they are used. The e-Starter also features the application-friendly Smart Start function and full integration into the Totally Integrated Automation (TIA) concept. The compact device can be used worldwide, requires minimal space in the control cabinet, and is easy to install. In industries such as food and beverage, intralogistics, and mechanical engineering, high-efficiency motors are used in demanding applications, for example to drive conveyor systems or pumps. Malfunctions and failures can quickly lead to considerable damage and costs. Against this backdrop, motor starters play an important role: they not only switch motors reliably, but also protect them against overload and short circuits.

The SIMATIC ET 200SP e-Starter uses semiconductor technology with silicon carbide metal-oxide semiconductor field-effect transistors (SiC MOSFETs), which enables ultra-fast and wear-free switching. Because of the short-circuit protection devices they are equipped with, conventional feeder solutions have a comparatively slow response time; as a result, the device often needs to be replaced when a short circuit occurs. In contrast, the e-Starter detects short circuits extremely quickly and switches off in less than 4 μs. This makes it approximately 1,000 times faster than conventional components. The device offers unlimited short-circuit shutdowns and does not need to be replaced after being tripped, which increases availability and significantly reduces warehousing costs for replacement parts.

High inrush currents are typical for high-efficiency motors, e.g. those in energy efficiency classes IE3 and IE4, and can lead to unintended trips of the protection device. The phase-optimized switching and Smart Start of the SIMATIC ET 200SP e-Starter neutralize the inrush currents and significantly reduce the starting currents, and therefore the electrical load on the grid, during start-up. In addition, the torque surges that occur during a direct start are minimized, noticeably reducing mechanical wear. This means that less maintenance work is required – a valuable benefit for applications with a high switching rate. Machine and plant manufacturers and system integrators benefit from the seamless integration of the e-Starter into the market-leading automation concept Totally Integrated Automation (TIA). Diagnostic functions come as standard, enabling detailed system diagnostics without the need for programming. Unlimited data availability and engineering using SIMATIC STEP 7 in TIA Portal simplify project planning, parameterization, and commissioning. Automatic re-parameterization makes it easy to replace devices during ongoing operation (hot swapping).

The new fully electronic e-Starter with semiconductor technology

Minimal use of materials, energy efficiency, and durability all combine to make the SIMATIC ET 200SP e-Starter a highly sustainable product, earning it the Siemens EcoTech label. In addition to its use of recycled materials, the e-starter offers lower energy consumption and wear-free switching for a longer and more efficient service life.

The new starter will make its public debut at the Smart Production Solutions (SPS) trade fair held from November 12 to 14, 2024 in Nuremberg, Germany. The Siemens booth will be located in Hall 11.

Siemens www.siemens.com

Robotics and industrial drives

REQUIRE TECHNICAL DIVERSITY, MAXIMUM PROCESSING POWER, AND A HIGH LEVEL OF SECURITY

Industrial plants pose numerous development challenges. To overcome them, the right microcontroller is crucial in addition to extensive up-front designs, rigorous testing, and compliance with industry standards and regulations.

Authors: Andreas Heder, Field Application Engineer at Rutronik

Panagiotis Venardos, Senior Manager of Industrial MCUs at Infineon

When developing high-end applications such as robotics, industrial drives, and electric vehicle applications, energy, performance, efficiency, and security are of paramount importance. The choice of the optimal microcontroller contributes significantly to achieving these goals. It needs to be industrial-grade, flexible, powerful, and efficient, and have features that allow it to adapt to a demanding environment that is constantly changing.

The demands placed on controls in modern industrial plants are becoming increasingly complex and the volumes of data being processed are growing all the time. This poses enormous challenges for developers of such controls. Besides processing these volumes of data efficiently, the systems must also maintain the integrity of the data. The efficient management and allocation of resources within the CPU as well as the use of the internal and external memory are of great importance.

In addition, there are various real-time specifications in industrial applications. To ensure that all tasks are performed safely and securely within these periods, delays and errors must be kept to an absolute minimum. In round-the-clock production, this can be difficult to implement, for example due to regular software updates, the frequency and duration of which are not always known.

For uninterrupted operation of the entire system in an industrial environment, several key functions and integrations are required to ensure reliability, performance, and compatibility with specific application requirements. This includes using industrial-grade components that are characterized by a long service life and an extended temperature and voltage range. The microcontroller must also support the right interfaces and associated communication protocols and be compatible with a wide range of industrial software tools and libraries.

One device that meets all these criteria is Infineon’s 32-bit XMC7000 microcontroller. It is based on the Arm Cortex-M7 processor core and was primarily developed for industrial purposes. As such, it is equipped with various peripherals, such as CAN-FD, TCPWM, and Gigabit Ethernet, and features for hardware security. Its low-power modes extend down to 8 μA. Thanks to its wide temperature range of –40°C to +125°C, the XMC7000 offers a high level of resistance in harsh industrial environments. To meet design requirements as precisely as possible, the XMC7000 ensures high scalability in terms of the number of processor cores and the size of the flash memory and RAM, and comes with four package/pin types and 17 part number variants.

A robust local communication network is required for reliable and secure interoperability of all the important components for motor and power control, such as motors, drives, controls, and sensors. For this purpose, the XMC7000 provides standardized communication interfaces such as CAN-FD, serial communication blocks (SCB), and Ethernet interfaces. An external memory interface, an SDHC interface, an I2S/TDM interface, and numerous I/Os facilitate integration and communication between various devices and platforms.

In most cases, tasks such as the acquisition of sensor data or the control of external power semiconductors must be performed in real time. To meet such requirements, the XMC7000 is equipped with up to two Arm Cortex M7 cores with clock rates of up to 350 MHz, up to 8 MB Flash, and up to 1 MB SRAM. In addition, there is 256 kB of Work Flash, which, in contrast to the Code Flash, is optimized for significantly more frequent reprogramming.

Protection against cyber threats

Increasing connectivity and comprehensive data exchange in manufacturing and automation environments inevitably lead to cyber threats. Motor and power control systems are particularly vulnerable to these threats, and attacks can severely disrupt production processes and pose a great risk to sensitive data. Given these risks, security measures such as secure-over-the-air (SOTA) firmware upgrades and secure boot are critical when it comes to ensuring the right firmware runs securely. Trust anchors, including encryption, access controls, and intrusion detection systems, also help protect against these threats. These functions are performed by the integrated Arm Cortex-M0+, which executes these tasks in real time.

A/D converters, timers/counters, and PWMs (TCPWM) are essential components

To support applications with multi-axis drives and the synchronous sampling of analog sensor signals, the MCU has three independent ADCs with upstream multiplexers based on the principle of a successive approximation register (SAR) with the lowest latency for real-time sampling. The XMC7000 also has a high number of TCPWM blocks that can be used flexibly. For example, for driving three-phase asynchronous motors, the average voltage applied to the motor can be fine-tuned by cleverly adjusting the duty cycle of the PWM signal to achieve optimum performance and responsiveness.
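The duty-cycle adjustment described above can be illustrated with a short numerical sketch: for a three-phase drive, each PWM channel's compare value follows a sinusoid shifted by 120°, and scaling the sinusoid's amplitude scales the average voltage applied to the motor. This is generic sinusoidal-PWM illustration code under assumed parameters, not XMC7000 TCPWM driver code; the register-level interface on the real device is different.

```python
# Generic sinusoidal PWM sketch for a three-phase motor drive.
# NOT XMC7000 driver code -- purely illustrative math.
import math

def three_phase_compare_values(angle_rad: float, amplitude: float,
                               period_ticks: int) -> tuple:
    """Timer compare values for three PWM channels, phase-shifted by 120 deg.

    amplitude in [0, 1] scales the average voltage applied to the motor:
    each duty cycle swings around 50% by +/- amplitude/2.
    """
    values = []
    for k in range(3):
        phase = angle_rad - k * 2 * math.pi / 3      # U, V, W phases
        duty = 0.5 + 0.5 * amplitude * math.sin(phase)
        values.append(round(duty * period_ticks))    # ticks loaded into timer
    return tuple(values)

# Example: electrical angle 0 rad, 80% amplitude, 16-bit timer period.
u, v, w = three_phase_compare_values(0.0, 0.8, period_ticks=65535)
print(u, v, w)
```

In a real application this calculation runs once per PWM period (or per control-loop tick), with the angle supplied by the field-oriented control loop, and the resulting values are written to the hardware compare registers.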

For this purpose, the TCPWM blocks are interconnected at hardware level and offer a variety of possibilities for parameterization.

In addition, there are special PWM modules for motor control, which offer various functions, such as extended quadrature, asymmetrical PWM generation, and dead-time adjustment. The XMC7000 also has further special I/O features, referred to as smart I/Os. They can be parameterized as digital connection logic (AND, OR, XOR, and predefined lookup tables), so input signals can be processed without intervention of the CPU. This makes it possible, for example, to detect a certain pattern on one or more pins while the controller is in energy-saving mode and to react to it (safety circuit).

Development tools

There are many software solutions for the XMC7000 that make it easier for the user to develop motor control or energy conversion applications, for example. Infineon provides the ModusToolbox development platform for this purpose, which contains software tools and resources to simplify the design process. It can be used stand-alone or fully integrated with the Eclipse-based IDE. The user-friendly device configurator enables consistent development across multiple industry-standard platforms, such as Eclipse, VS Code, and IAR. In addition, ModusToolbox includes a set of development tools, libraries, and embedded runtime assets. It is available free of charge and supports many other Infineon products.

■ Rutronik www.rutronik.com

KEY FEATURES OF THE XMC7000

• 32-bit MCU

• As a single or dual core based on the 350 MHz Arm Cortex M7 and 100 MHz Arm Cortex M0+ for cryptography

• Up to 8 MB Flash, up to 1 MB SRAM, and I/D cache

• Voltage range: 2.7–5.5 V

• Extended temperature range, up to 125°C

• Interfaces: CAN FD with up to ten channels, SCB with up to eleven channels

• eMMC, SMIF (QSPI/HS-SPI), 10/100/1000 Mbps Ethernet with up to two channels

• AD converter

• Up to 96 channels based on three 12-bit A/D converters using the principle of a successive approximation register (SAR ADC)

• Timer

• Motor control with up to 15 channels, 87-channel 16-bit TCPWM (Timer/Counter/Pulse Width Modulation), 16-channel 32-bit TCPWM

• Timer for event generation

• Packages: 100-, 144-, and 176-pin TQFP; LFBGA-272

Source: Infineon
The XMC7000 from Infineon has everything a microcontroller for industrial applications needs.
Figure 2 © Infineon
