MAILING ADDRESS 3120 W Carefree Highway, Suite 1-640 • Phoenix AZ 85087
The Dawn of Industry 5.0: High Innovation or Ignoble Hype
By Ken Briodagh, Editor in Chief
Is the 5th Industrial Revolution upon us? Well, every facet of the connected enterprise space – from materials creation and mining, through supply chain and manufacturing, to retail and end use – and every layer, from the Edge to the Cloud and from IoT through Embedded, has been deep in the Fourth Industrial Revolution for many years now.
But recently, some industry voices have begun whispering that the tide is moving out again, and that we should be preparing for the coming of Industry 5.0 – or maybe it’s already here.
A market research firm called Brand Essence Research recently published a whole report on the idea and defined the concept as “the next evolutionary step in manufacturing, building upon the foundations laid by Industry 4.0.”
If Industry 4.0 was all about automation and data through the integration of IoT and AI, the proponents of 5.0 seem to be saying that the next evolution will be about bringing humans back into these smart systems – creating human-machine hybrids with the goal of even more adaptable and efficient manufacturing.
Which, to me (spoilers), seems not very much like an “evolution.”
To be fair, however, let’s look at what the Industry 5.0 folks are saying. Nicholas Jones, technical advisor at Wake Industrial, recently wrote a post for Embedded Computing Design about the impacts and issues driving the predicted shift.
The key advancements that Jones sees as integral to Industry 5.0 include advanced sensor technologies that enable real-time monitoring and control of manufacturing processes that can detect almost any parameter and provide operational insights leading to predictive maintenance. Enhanced connectivity and edge computing are also part of this trend, he writes, including 5G for real-time data processing and more reliable transmissions.
AI and ML are the inside baseball of 5.0, as far as I can tell. These “smart” technologies seem to be at the heart of how it’s being delineated. Equally integral are human-machine interfaces (HMIs), as indicated in the definition above, which are seen as enablement technologies that allow human operators to interact with machines intuitively. These include augmented reality (AR), virtual reality (VR), and so-called collaborative robots, or cobots, designed to work alongside human workers.
To me, all those key “advancements” seem iterative at best, rather than revolutionary. They are next steps from the key technologies of Industry 4.0 and don’t represent a major jump forward. Technological inflection points like mechanization, electrification, nuclear energy, computing, and internet-driven interconnectivity were definitional for the previous four “revolutions,” and I’m not seeing that great leap forward here, yet.
The benefits of these innovations are powerful and clear, however. Humans and machines working together, leveraging connectivity, algorithmic artificial intelligence and machine learning, robotics, and all these tools, will certainly lead to improved productivity and operational efficiency. And that’s all to the good, I think.
My objection comes because I see a risk in calling technologies that are still part of ongoing work in Industry 4.0 transformational or revolutionary leaps forward. It feels like marketing rather than engineering or economics, and that risks diluting the heft of truly revolutionary technology coming down the line.
When I think about what might constitute such a leap forward, a technological feat that will transform all processes the way previous industrial revolutions did, I see a few options.
Quantum computing could certainly be such a disruptive technology – upending all processing, security, and data transmission systems currently in use. No-loss energy storage and battery technology could certainly be such an advancement. And if materials science discovers the silicon replacement or achieves room-temperature superconductors, that will change everything.
These are the kind of revolutionary technologies I’ll be waiting for before I call for a change to Industry 5.0. Don’t agree? Good. Argue with me on social media and at the upcoming embedded world North America.
Mitigating AI/ML Risks in Safety-Critical Software
By Mark Pitchford, LDRA
Artificial intelligence (AI) and machine learning (ML) are the newest frontiers for developers of safety-critical embedded software. These technologies can integrate and analyze data at a massive scale and support capabilities with human-like intelligence. As functional safety practitioners who are used to risk mitigation processes and techniques decades in the making, developers working in this field must adapt to the huge promise of AI/ML without compromising safety at any level of the systems they are building.
What Are Artificial Intelligence and Machine Learning?
ChatGPT, GitHub Copilot, Amazon Q Developer, and similar generative AI tools have caused plenty of buzz and confusion around what AI/ML actually is. For safety-critical development, AI/ML encompasses a broad range of capabilities with seemingly limitless applications from coding assistants to major in-vehicle features.
The Oxford English Dictionary (OED) defines AI as the “capacity of computers or other machines to exhibit or simulate intelligent behavior; the field of study concerned with this.” The OED defines machine learning as the “capacity of computers to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyze and infer from patterns in data.”
Types of Artificial Intelligence
AI algorithms are classified as “narrow” or “general”. Narrow (or weak) AI performs specific tasks and lacks general human-like intelligence. IBM’s Deep Blue is perhaps the most famous example of weak AI, as are modern chatbots, image recognition systems, predictive maintenance models, and self-driving cars. Automated unit test vector generation is a form of weak AI as it “simulates intelligent behavior” by deriving test stubs and control from existing code. Figure 1 shows a set of test vectors created automatically through software, eliminating the need for humans to spend time creating them.
By contrast, general (or strong) AI performs a variety of tasks and can teach itself to solve new problems as if it were relying on human intelligence. Such systems have yet to be developed, and there is much debate over whether they are possible in our lifetime (Star Trek’s main computer notwithstanding).
Types of Machine Learning
ML can be classified by the types of data facilitating its algorithms, either “labeled” or “unlabeled.” Labeled data refers to data that has been annotated in some manner with correct outcomes or target values. This type of data is generally more difficult to acquire and store than unlabeled data.
The four main types of machine learning are:
1. Supervised Learning: The algorithm is trained on labeled data such that the correct output is provided for each input. The experience gained from mapping the inputs to the outputs provides a basis for predictions on new data.
2. Unsupervised Learning: The algorithm is given unlabeled data and identifies patterns or structures for use by applications.
3. Semi-Supervised Learning: The algorithm is trained on a combination of labeled and unlabeled data.
4. Reinforcement Learning: The algorithm learns to make sequences of decisions based on a reward system, receiving feedback on the quality of its decisions and adjusting its approach accordingly.
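The supervised case in particular can be made concrete with a toy example. The sketch below is my illustration, not from any safety standard: a one-nearest-neighbor classifier whose “training” consists of memorizing labeled vibration readings, from which it predicts labels for new data. The data values and labels are invented.

```python
# Illustrative sketch of supervised learning at its simplest: a
# 1-nearest-neighbor classifier trained on labeled sensor readings.

def train(labeled_data):
    """For 1-NN, 'training' is simply storing the labeled examples."""
    return list(labeled_data)

def predict(model, x):
    """Predict the label of x from its closest training example."""
    nearest = min(model, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

# Labeled data: (vibration amplitude, condition) pairs.
training_set = [(0.1, "ok"), (0.2, "ok"), (0.9, "fault"), (1.1, "fault")]
model = train(training_set)

print(predict(model, 0.15))  # near the "ok" examples
print(predict(model, 1.0))   # near the "fault" examples
```

An unsupervised algorithm would instead be handed only the amplitudes, without the "ok"/"fault" annotations, and left to discover the two clusters on its own.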
Applications in Functional Safety
Although the different types of ML require different levels of human input, they align with functional safety standards proven to yield sufficiently reliable software within the context of their deployment. For example, IEC 62304, “Medical device software – Software life cycle processes,” is typical of the “requirements first” approach embodied by supervised and semi-supervised learning.
This standard does not insist on any specific process model, but it is often represented as a V-model as shown in Figure 2.
Industry-Specific Adaptations for AI/ML
The International Medical Device Regulators Forum (IMDRF) publishes a document defining a systematic risk classification approach for software intended for medical purposes. Known as “Software as a Medical Device: Possible Framework for Risk Categorization and Corresponding Considerations,” the document classifies a given device’s risk according to a spectrum of impact on patients, as shown in Figure 3.
This includes factors such as the software’s intended use, the significance of the information provided by the software for making medical decisions, and the potential consequences of software failure.
FIGURE 1: Test vectors created by the LDRA tool suite (Source: LDRA)
FIGURE 2
As this classification is agnostic of the methodology used to create the software, medical device developers can apply these guidelines to determine the level of requirements and regulatory scrutiny necessary for software based on AI/ML techniques.
For its part, the automotive industry is taking a more proactive approach, developing new standards to accommodate the growth of AI/ML applications:
› ISO/CD PAS 8800, Road Vehicles – Safety and artificial intelligence: This standard will define safety-related properties and risk factors that can lead to insufficient performance and malfunctioning behavior of AI.
› ISO/CD TS 5083, Road Vehicles – Safety for automated driving systems – Design, verification, and validation: This document will provide an overview of and guidance on the steps for developing and validating an automated vehicle equipped with a safe automated driving system.
A Method to Mitigate AI/ML Risks in Safety-Critical Applications
In modern systems, the outputs of AI/ML-based components will be sent to software built with non-AI techniques, including systems with humans in the loop. This supports the segregation of domains familiar to safety-critical developers where AI/ML components are contained, and non-AI components are designed to mitigate risks with cross-domain interactions.
IEC 62304:2006 +AMD1:2015 permits this approach, stating that “The software ARCHITECTURE should promote segregation of software items that are required for safe operation and should describe the methods used to ensure effective segregation
of those SOFTWARE ITEMS.” It further states that segregation is not restricted to physical separation, but instead “any mechanism that prevents one SOFTWARE ITEM from negatively affecting another.” This suggests that software separation between AI/ML components and traditional components is valid.
Current test tools can support risk assessment and mitigation of these cross-domain interactions. As illustrated in Figure 4, taint analysis can validate data flows coming from AI/ML components into traditionally developed software.
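The cross-domain boundary described above can be sketched in code. This is my illustration, not LDRA’s tooling: real taint analysis is performed statically by analysis tools, but at runtime the same principle appears as a guard that admits ML-produced values into the traditionally developed control domain only after validation. All names and bounds here are hypothetical.

```python
# Illustrative sketch: range-checking data that crosses from an AI/ML
# component into traditionally developed control software.

class UnsanitizedInputError(Exception):
    """Raised when an ML-produced value fails the domain-crossing check."""
    pass

def sanitize_ml_output(value, lo, hi):
    """Admit an ML output into the control domain only if it falls
    within the bounds the safety analysis allows."""
    if not (lo <= value <= hi):
        raise UnsanitizedInputError(f"ML output {value} outside [{lo}, {hi}]")
    return value

def set_motor_speed(speed_rpm):
    # Stand-in for deterministic, traditionally developed control code.
    return f"motor speed set to {speed_rpm} rpm"

# A speed suggested by an ML component must pass the guard first.
suggested = 1200.0
print(set_motor_speed(sanitize_ml_output(suggested, 0.0, 3000.0)))
```

The point is the segregation itself: the control function never consumes an ML value that has not crossed an explicit, analyzable checkpoint.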
The European Aviation Safety Agency (EASA) document, “Artificial Intelligence Roadmap: A human-centric approach to AI in aviation,” has additional suggestions to ensure AI safety:
› Include a human in command or in the loop.
› Monitor AI/ML output through a traditional backup system.
› Encapsulate ML within rule-based approaches.
› Monitor AI through an independent AI agent.
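EASA’s third suggestion, encapsulating ML within rule-based approaches, can be sketched as follows. This is an invented example, not from the EASA document: a deterministic envelope clamps whatever the ML component suggests to a range that traditional, rule-based code deems safe. The throttle model and limits are hypothetical.

```python
# Illustrative sketch of "encapsulate ML within rule-based approaches":
# a deterministic envelope bounds the ML output before it is acted on.

def ml_throttle_suggestion(sensor_input):
    # Stand-in for an ML inference; in practice this is a trained model.
    return sensor_input * 1.5

def rule_based_envelope(suggestion, floor=0.0, ceiling=100.0):
    """Clamp the ML suggestion to a deterministic safe envelope."""
    return max(floor, min(ceiling, suggestion))

print(rule_based_envelope(ml_throttle_suggestion(40.0)))  # within envelope
print(rule_based_envelope(ml_throttle_suggestion(90.0)))  # clamped to ceiling
```

However wrong the model’s suggestion, the system’s behavior stays inside bounds the safety case can reason about deterministically.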
Mitigating AI/ML Safety Risks Starts with Today’s Tools
Developers of safety-critical systems are cautious about AI/ML algorithms and are looking for ways to mitigate risks proactively. For teams holding themselves back from adopting AI/ML, existing functional safety principles, such as domain segregation, can be effective in mitigating risks. Existing tools can also be used to determine the influence of AI/ML on traditionally developed software items.
Mark Pitchford is a chartered engineer with over 20 years of experience in software development, more recently focused on technical marketing. He has proven ability as a developer, team leader, and manager and in pre-sales, working in functional safety- and security-critical sectors, and strong communication skills demonstrated through project management, commissioning, technical consultancy, and marketing roles.
FIGURE 3: Software as a Medical Device (SaMD) impact on patients, with “I” representing the lowest risk and “IV” the highest (Source: IMDRF)
FIGURE 4
RELIABLY AT YOUR SIDE
The TQ Group presents itself as a leading technology service provider and electronics specialist at embedded world North America. Brandon Aumiller, Director of Sales, Embedded & EMS, at TQ-Systems USA, Inc., reports on the company’s plans and strategies.
You brought your signature module wall to ewNA24 – an impressive display of your extensive product portfolio. Next to Intel’s Meteor Lake and NXP’s i.MX9 series, you still showcase PowerPCs and the i.MX28. Why do you keep them around so long?
Brandon: Long-term supply capability is one of our company principles. That wall is only a snapshot of our more than 50 available module types. We’re always working on the latest technological advances, such as artificial intelligence and machine vision, but we also want to continue to support the customers who have trusted us for over 30 years. A lot has come together over time, and there’s more to come.
How do you decide on new modules?
We listen to our customers and their needs and then review the roadmaps of our technology partners Intel, Texas Instruments, and NXP. Once we have identified a processor that is particularly interesting as a basis, we consider how we can make the full potential of that processor available to our customers in the best possible way – as a standards-based module or as a proprietary solution. Sometimes we offer two mechanically different versions of the same processor; some customers prefer pluggable modules, others soldered, depending on the industry.
Do you prefer one industry in particular?
No, from engineering firms to aerospace or medical companies, we work with customers of all sizes and from all sectors. The key is to spot the tech needs and trends that are going to be important in the future and make them available to everyone. That’s why, in addition to our modules, we offer many other services for truly complete product lifecycle support – from E²MS to compliance testing and environmental management.
So, sustainability is an issue.
Yes, and it’s not just about the environment, but also about customer relationships. From day one, our two company founders knew that TQ would only grow if our customers were successful in the long term and grew as well. It’s our job to support them in this. It’s a sustainable strategy, even if the term didn’t exist when we started.
In addition to organic growth, you can also grow through acquisitions.
Of course, you can buy additional revenue by acquiring companies – that usually pleases shareholders, but not always customers. Customers like reliability. We focus on strengthening our service offerings. For example, we invested in an independent test laboratory to help our customers with global compliance testing and approvals. As a company still run by the founders, who are also the owners, the corporate strategy is very long-term. This will continue with the next generation of our founders, who are involved in the business. This is typical for a German mid-sized company.
So the traditional “Made in Germany”?
Yes, but we’ve also learned that being close to our customers is more important now than ever before. That’s why, in addition to the US-based sales and technical support we’ve had for years, we’re also increasing our US-based manufacturing capabilities. The current geopolitical situation means customers are increasingly looking for reliable partners unaffected by conflict in Asia or elsewhere. We recognize that and want to support our customers where they are. For us, the reliability and quality of our products and our business relationships go hand in hand.
What MCU Manufacturers Can Do to Accelerate AI Adoption
By Henrik Flodell, Alif Semiconductor
The year 2024 has seen a torrent of announcements about new large language model (LLM)-based artificial intelligence (AI) products from companies such as OpenAI and Google. In consumer and computing endpoints such as smartphones and personal computers, AI has not just joined the mainstream: arguably it is the mainstream.
So it is striking that embedded device manufacturers have not yet widely adopted AI or machine learning (ML) in endpoint devices. There are multiple reasons for this, not least the constraints on compute resources and power consumption in traditional microcontrollers, applications processors, and systems-on-chip (SoCs).
But device designers’ hesitancy about embedding ML algorithms has slowed the momentum behind ML in IoT and other types of devices that are normally based on an MCU. This hesitancy is understandable, but the strong appeal and novel capabilities of AI software in consumer devices suggest that quicker adoption of ML technology in the IoT could offer huge potential value.
So what should MCU manufacturers be doing to help OEMs overcome the technical and operational barriers that keep embedded device designers from accepting ML software?
The Uncertainty Principle: How ML Takes Embedded Designers Out of Their Comfort Zone
If embedded device designers are hesitant about building ML algorithms into MCU-based devices, it is in part because their training and development methods are adapted for a completely different kind of software: deterministic and programmatic.
A classic real-time control function receives an input such as a sensor’s temperature measurement and performs a specified action, such as to turn off a device if the temperature exceeds a safety threshold. The MCU has evolved to become the preeminent hardware basis for this type of real-time and deterministic control function. An MCU based on a RISC CPU such as an Arm Cortex-M core provides the guaranteed latency and high-speed sequential execution of functions required in applications from motor control, to sensor data processing, to display control.
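The thermal cutoff described above can be sketched in a few lines. This is my illustration, with an invented threshold, and in a real MCU it would be C firmware in an interrupt or control loop, but the essential property survives translation: a fixed rule, a guaranteed decision, no probability involved.

```python
# Illustrative sketch of a classic deterministic control function:
# if the temperature exceeds a fixed safety threshold, turn the device off.

SAFETY_THRESHOLD_C = 85.0  # hypothetical cutoff temperature

def thermal_cutoff(temperature_c, device_on):
    """Return the new device state given the latest sensor reading."""
    if temperature_c > SAFETY_THRESHOLD_C:
        return False       # over threshold: force device off
    return device_on       # within limits: leave state unchanged

print(thermal_cutoff(90.0, True))  # over threshold -> off
print(thermal_cutoff(60.0, True))  # within limits -> stays on
```

For every possible input, the output is fully specified in advance, which is exactly the contract that ML inferencing does not offer.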
For the application developer, the writing of traditional programmatic ‘if/then’ software code has a logical thread, and its operation is bounded by knowable conditions that can be explicitly defined. Once debugged, the code has an entirely predictable and dependable output in a device such as a security camera or power tool – classic applications for MCU-based control. Today, such devices offer new scope to add value through the addition of ML functions to the existing control functions. And this ML software can run at the edge, directly in the MCU.
In the security camera, for instance, real-time monitoring for potential intruders can be automated: powerful cloud-based ML software can accurately detect and analyze the behavior of people in the field of view. But the cost and power consumption of such a camera can be dramatically reduced if the video feed is pre-scanned by a local
processor which can distinguish human shapes from other objects in the field of view. This means that the camera will only trigger the system to upload frames to the cloud when the video feed contains potentially relevant information, rather than continuously uploading the entire feed.
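The gating logic of that pre-scan can be sketched as follows. This is an invented illustration: `detect_human` stands in for an on-device neural network, and the frame representation is simplified to labeled objects so the control flow is visible.

```python
# Illustrative sketch of edge pre-filtering: only frames that a local
# detector flags as containing a person are uploaded to the cloud.

def detect_human(frame):
    # Stand-in for local ML inference on the frame's pixels.
    return "person" in frame["objects"]

def frames_to_upload(video_feed):
    """Return the IDs of frames worth sending to the cloud."""
    return [f["id"] for f in video_feed if detect_human(f)]

feed = [
    {"id": 1, "objects": ["tree"]},
    {"id": 2, "objects": ["person", "tree"]},
    {"id": 3, "objects": ["car"]},
]
print(frames_to_upload(feed))  # only the frame containing a person
```

The bandwidth and power savings come from this filter running locally: the expensive cloud path is exercised only when the cheap edge model says there is something worth analyzing.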
ML offers similar transformative value in an electric drill. Equipped with an ultrawideband (UWB) radio, the drill can receive RF signals which vary depending on the material that it is boring into. This enables the drill to verify that the correct type of bit is in use, and to detect hidden hazards such as a water pipe buried in a wall.
To write traditional programmatic software to perform this type of function effectively might be impossible, and would certainly require an unfeasible amount of development, effort, and time. But to train an appropriate neural network to recognize the pattern of, for instance, the reflections of a UWB transmission from a copper pipe embedded in concrete, is a relatively trivial task. Likewise, large, open-source training datasets and neural networks are readily available for ML systems to detect the motion of people in a video stream.
But the entire approach to training such an ML algorithm is alien to the approach of a classically trained developer of programmatic software for MCUs or embedded processors. Instead of structuring, writing, and debugging code, for ML algorithms the embedded developer has to think about:
› How to assemble an adequate training dataset.
› The evaluation and selection of a suitable neural network.
› The configuration and operation of a deep learning framework, such as TensorFlow Lite, that is compatible with the target MCU or other hardware.
While programmatic software might be improved through processes such as debugging and detailed code analysis, optimizing the operation of ML inferencing is an entirely different type of task: it involves the analysis of factors such as the potential to improve the accuracy and speed of inferencing by increasing the size of the training dataset, or by improving the quality of the training dataset’s samples.
And the output from an ML algorithm is also qualitatively different from that of classic software: it is probabilistic, providing an answer to a question (‘Is a copper pipe buried in the wall?’) in the form of an inference drawn with a certain degree of confidence. It is not deterministic – the answer is probably right, but could be wrong.
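The shape of that probabilistic output can be sketched as follows. This is an invented example, not a real UWB pipeline: the “inference” is a toy mapping from a hypothetical reflection measurement to a confidence, but the structure of the result, a label plus a confidence that the caller must threshold, is the point.

```python
# Illustrative sketch of probabilistic ML output: the answer arrives with
# a confidence, and the caller decides what confidence is actionable.

def classify_wall_scan(signal_strength):
    # Stand-in for inference: map a (hypothetical) UWB reflection
    # measurement to a confidence that a copper pipe is present.
    confidence = min(1.0, max(0.0, signal_strength / 10.0))
    return {"label": "copper_pipe", "confidence": confidence}

result = classify_wall_scan(8.5)
if result["confidence"] >= 0.8:   # application-chosen decision threshold
    print(f"Probably a pipe (confidence {result['confidence']:.2f})")
else:
    print("Inconclusive -- proceed with caution")
```

Note what is absent: there is no branch that returns a guaranteed answer. Choosing the 0.8 threshold, and deciding what to do below it, is design work that has no equivalent in deterministic control code.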
So, to gain the value of ML functionality, designers of MCU-based devices have to adopt a new development method, and accept a new type of probabilistic rather than deterministic output. This is unfamiliar territory: it is unsurprising if the embedded world has been somewhat hesitant about implementing ML.
What can MCU manufacturers do to ease the transition to an AI-centric embedded world, and to make the operating environment friendly to ML software?
Three Features of an ML-Friendly MCU
For Alif Semiconductor, this is an existential question, since the company’s mission since its founding in 2019 has been to provide manufacturers of embedded and IoT devices with a new range of MCUs and fusion processors which provide the best AI/ML at the lowest power.
On Alif’s analysis, three key features of an MCU give manufacturers the best chance of succeeding with new ML-based products.
1. Provide a hardware environment that helps rather than hinders the operation of neural networking algorithms. A RISC CPU is at the heart of the control functions of an MCU, but its sequential mode of operation is inimical to the execution of a neural network’s MAC cycles. A neural processing unit (NPU), on the other hand, is optimized for MAC execution and other neural networking operations. An MCU architecture which has one or more NPUs operating alongside one or more CPUs provides the best basis for the fast and low-power inferencing required for edge AI applications. Testing against standard industry benchmarks for ML functions such as voice activity detection, noise cancellation, echo cancellation, and automatic speech recognition shows that a combination of an Arm Cortex-M55 CPU and Ethos-U55 NPU provides a 50x improvement in speed to inference compared to a high-end Cortex-M7 CPU alone, and a 25x reduction in power consumption.
2. Allow the device design team to work on ML applications in a familiar development environment. For the control functions performed by a CPU, the MCU market has succeeded in consolidating the choices down to one architecture: Arm’s Cortex-M. But some MCU manufacturers complement the CPU with a proprietary NPU, forcing users to leave the familiar Arm environment for the ML portion of their designs.
Eventually, it is highly likely that the MCU market will converge on Arm for the NPU as well as the CPU. An MCU with an Arm Ethos NPU alongside a Cortex-M CPU enables developers to share the same Arm tools and software ecosystem across both the control and ML portions of the application.
3. Enable early-stage experimentation with popular ML application types.
The probabilistic nature of ML inferencing lends itself to a trial-and-error approach to proof-of-concept development, based on the use and refinement of open-source neural networking models and training datasets.
This is why Alif Semiconductor provides its AI/ML AppKit, development hardware which is pre-configured for the collection of vibration, voice, and vision data, and is supplied with a broad set of demonstration systems for various AI use cases.
The kit features a 4” color LCD, an integrated camera module, PDM and I2S microphones, and inertial sensors. Device driver and operating system packs, as well as demonstration applications and examples, are published on the GitHub platform.
Today’s Opportunity to Make Embedded Devices More Valuable
The opportunity to bring the transformative capability of ML to embedded devices is available now: the technology is real and ready for mainstream deployment. While adoption might previously have been slowed by the lack of availability of ML-native MCUs operating in a familiar Arm development environment, this reason for hesitating
to implement ML no longer applies. The introduction of products such as the Ensemble family of MCUs and fusion processors from Alif Semiconductor, which feature a choice of single- and multi-core Arm Cortex-M55 CPU and Ethos-U55 NPU-based product options, has given embedded developers a new, ML-friendly hardware and development platform.
With a development tool such as the AI/ML AppKit in hand, it is time to take the plunge into the world of machine learning at the edge!
Henrik Flodell is an experienced international manager with a solid technical background. He thrives in fast-paced environments that require combining needs and requirements from different sources, is an expert at managing multiple projects, deadlines, and partner relationships, and holds 20+ years of professional experience leading and working with cross-functional teams. He quickly grasps new concepts and transforms them into creative solutions that drive results.
TQ - Always the right embedded solution
Modules – SBCs – BoxPCs
Can be used for AI/ML applications
Security and Safety supported
Developed for a long life cycle
Prepared for Cloud and Edge
Scalable computing power for individual applications
Future-oriented electronics that excite!
Visit us at Booth 2029!
tq-group.com/en/embedded
You Deserve a Better IP Experience
By Alexander Mozgovenko, Senior Sales Engineer, CAST
For 30 years, CAST has helped shape the reusable IP market that embedded systems developers depend on today. It started as a simple idea: “Hey, let’s make these simulation models synthesizable.” It became the enabler that makes today’s billion-transistor SoCs possible across all industries.
IP-based system design promises significant strategic advantages:
› Reduce Risk: Use proven IP cores rather than developing every component from scratch.
› Save Time and Reduce Cost: Accelerate time-to-market, leverage expertise, and focus on innovative architectures instead of reinventing the wheel.
› Go Beyond Your Expertise: Differentiate systems with leading-edge technologies or comply with new standards using IP developed by domain experts.
Unfortunately, not every IP core – or IP provider – delivers on these promises.
Common IP Disappointments
You probably know how this can go.
Some IPs just fail, making you wonder if they were ever really tested. Certain cores are so hard to integrate that you wish you had developed the function yourself. Others come with a high cost or limiting restrictions that make your finance and legal departments unhappy. And for some, getting technical support answers is a slow, grueling process that drains the enthusiasm out of your team.
At CAST, we instead strive to give every customer A Better IP Experience.
Commitment to A Better IP Experience
Going beyond correct IP implementation, our goal of giving you a better IP experience means three main things:
High Quality in Every Core
We work hard to make every IP core available from CAST one of the best you’ll find anywhere. We pursue this quality at every step, from coding practices to enforcing time-tested quality standards to rigorous testing both internally and via industry plugfests and other means.
Flexible Licensing that Fits Your Needs
We start with a fair, competitive price and then offer licensing terms that are typically royalty-free and can vary to suit a wide range of customer needs. From one-time use of a core in a single product through multi-product repeat-use plans and FPGA-to-ASIC upgrades, we always succeed in finding the optimal terms for each customer.
Quick, Effective Customer Support
You shouldn’t have to wait days or weeks for a support reply. Instead, we have a global, multi-time-zone team, a fast-response and service-oriented culture, and access to actual IP developers. Our statistics speak for themselves: we average just 15 hours from your ticket submission to our first response, which often includes the answer or information you need to keep going.
A Wide Range of IP
We know you’re faced with sourcing multiple IP cores for every project. We may not have every core you need, but we do improve your IP experience by offering a wide range of cores from a single trusted provider. Our product line features:
› Processors: RISC-V and 8051 families for low-power embedded systems, including Functional Safety.
› Compression: hardware engines for data, image, and video compression to optimize bandwidth and storage.
› Interfaces: automotive, Ethernet, communication, and mobile interfaces to enable effective connectivity.
› Peripherals: controllers and fabric for memory and device control.
› Security: encryption and SoC solutions for robust data protection.
This wide range of IP simplifies your procurement process through “one-stop shopping.” We can also boost your time savings by pre-integrating multiple cores from CAST into complete drop-in subsystems.
You Deserve Something Better
CAST employees and partners around the world live and breathe the Better IP Experience mantra. With returning customers making up over 50% of our sales, we know system developers (and their management) appreciate this approach.
Don’t you also deserve A Better IP Experience?
Visit us at www.cast-inc.com, then let’s discuss your next IP needs.
Standards: A Deep Dive Into Autonomous Vehicle Safety and Security
By Simran Khokha, Infineon
The advent of autonomous vehicles (AVs) marks a shift in automotive technology, presenting advancements in mobility and introducing challenges in ensuring operational safety and security. As these vehicles edge closer to widespread adoption, sophisticated safety mechanisms and stringent standards become imperative.
This expanded discussion delves into the critical aspects of Advanced Driver Assistance Systems (ADAS), the ISO 26262 safety standard, and the innovative contributions of Infineon’s diverse lockstep technology in shaping the future of autonomous vehicle safety.
ADAS: A Pillar of Autonomous Safety
ADAS are foundational to the functioning of autonomous vehicles, providing essential capabilities that range from environmental detection to automated decision-making. These systems utilize a complex array of sensors, including LIDAR, radar, and cameras, which together provide a comprehensive view of the vehicle’s surroundings. Integrating these sensors with sophisticated algorithms enables AVs to perform tasks such as adaptive cruise control, automatic braking, and more complex maneuvers like autonomous parking.
A critical component of ADAS is sensor fusion, the process of integrating data from various sensors to form a coherent understanding of the vehicle’s environment. This integration is vital for accurately detecting and responding to dynamic road conditions. Real-time sensor data processing ensures that the vehicle reacts appropriately to obstacles, road signs, and traffic patterns, reducing the likelihood of accidents and enhancing overall road safety.
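As a simplified illustration of the sensor-fusion idea (not any production ADAS algorithm), independent distance estimates can be combined with confidence weights; the sensor readings and weights below are purely hypothetical.

```python
# Simplified sensor fusion: combine independent distance estimates
# (e.g., from radar, LIDAR, camera) weighted by each sensor's confidence.
# All sensor names, readings, and weights are illustrative.

def fuse_distance(readings):
    """readings: list of (distance_m, confidence) pairs, confidence > 0."""
    total_weight = sum(conf for _, conf in readings)
    if total_weight == 0:
        raise ValueError("no usable sensor readings")
    return sum(dist * conf for dist, conf in readings) / total_weight

# Radar is trusted most here; camera least (hypothetical weights).
estimate = fuse_distance([(25.2, 0.9), (24.8, 0.8), (26.0, 0.4)])
print(round(estimate, 2))  # 25.2
```

Real sensor fusion uses filters (e.g., Kalman-style estimators) over time as well as across sensors, but the weighted-combination intuition is the same.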
ISO 26262 Standard: Ensuring Functional Safety in AV
The ISO 26262 standard is a cornerstone of modern automotive safety, explicitly addressing the functional safety requirements of electronic systems in road vehicles. This standard is crucial for AV development, as it sets forth a comprehensive framework for ensuring that the electronic systems integral to AV operation are free from unacceptable risks. It is an international standard that prescribes rigorous guidelines spanning the entire lifecycle of automotive development, from initial design through implementation to testing and ongoing maintenance.
ISO 26262 emphasizes a systematic approach to risk management by requiring developers first to identify hazards, assess their potential impact, and then implement measures to mitigate these risks. It mandates a thorough analysis of possible failure modes for each electronic component and system, considering software errors, hardware failures, and unexpected interactions among various components. Principles that prioritize safety, integrity, and reliability govern the design and development of electronic systems under ISO 26262. This includes the adoption of fail-safe and
fault-tolerant architectures, which ensure that even in the event of a system failure, the vehicle can either maintain a safe state or bring itself to a controlled stop.
The standard also specifies detailed protocols for testing and validating the safety of electronic systems. It requires rigorous pre-deployment testing under simulated conditions, extensive post-deployment testing, and real-world monitoring to ensure that safety benchmarks remain met throughout the vehicle’s operational life. Techniques such as hardware-in-the-loop (HIL) simulations and software-in-the-loop (SIL) testing ensure that systems behave as expected under various conditions and interactions.
Detailed Overview of ASIL
Automotive Safety Integrity Levels (ASILs) are a critical component of ISO 26262, helping to assess the severity of potential hazards and the likelihood of their occurrence. Each level, from ASIL A to ASIL D, dictates specific safety requirements and measures. ASIL D is the highest safety requirement, typically associated with systems whose failure would likely result in fatal accidents. The classification into different ASILs influences the design decisions and testing rigor required to validate the safety of system components.
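As a rough illustration only: each hazard is classified by Severity (S1-S3), Exposure (E1-E4), and Controllability (C1-C3), and a well-known shorthand for the standard’s risk graph maps the sum of those class numbers to an ASIL. The normative ISO 26262 table, not this shorthand, governs real classifications.

```python
# Illustrative ASIL determination via the common "sum" shorthand for the
# ISO 26262 risk graph: Severity (1-3) + Exposure (1-4) + Controllability
# (1-3). Consult the normative table in the standard for real projects.

def asil(severity, exposure, controllability):
    if not (1 <= severity <= 3 and 1 <= exposure <= 4
            and 1 <= controllability <= 3):
        raise ValueError("class out of range")
    total = severity + exposure + controllability
    # 10 -> D, 9 -> C, 8 -> B, 7 -> A, anything lower -> QM
    return {10: "D", 9: "C", 8: "B", 7: "A"}.get(total, "QM")

print(asil(3, 4, 3))  # worst case: D
print(asil(1, 1, 1))  # negligible risk: QM (quality management only)
```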
Infineon’s Diverse Lockstep Technology
Infineon’s diverse lockstep technology departs from conventional lockstep architectures, which duplicate identical cores for error detection; instead, it introduces a patented approach involving two diverse cores. These heterogeneously designed cores execute the same tasks using different architectural strategies, significantly enhancing error detection by minimizing common-cause failures. The diverse cores instantly identify discrepancies in processing outputs, enabling immediate corrective action. This increases the reliability of critical safety functions and adheres to the highest functional safety standards prescribed by ISO 26262.
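The principle can be sketched in ordinary code: two diversely implemented routines compute the same result and a comparator flags any divergence. The toy “cores” below merely stand in for heterogeneous hardware.

```python
# Toy sketch of diverse lockstep: two differently implemented "cores"
# compute the same function; a comparator flags any mismatch.

def core_a_sum(values):
    # Iterative implementation.
    total = 0
    for v in values:
        total += v
    return total

def core_b_sum(values):
    # Recursive, divide-and-conquer implementation (diverse strategy).
    if not values:
        return 0
    if len(values) == 1:
        return values[0]
    mid = len(values) // 2
    return core_b_sum(values[:mid]) + core_b_sum(values[mid:])

def lockstep(values):
    a, b = core_a_sum(values), core_b_sum(values)
    if a != b:
        raise RuntimeError("lockstep mismatch: entering safe state")
    return a

print(lockstep([1, 2, 3, 4]))  # 10
```

Because the two implementations differ structurally, a systematic flaw in one is unlikely to produce the same wrong answer in the other, which is the point of diversity over simple duplication.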
Rigorous Testing and Validation Strategies
The complexity and critical nature of autonomous vehicle functions necessitate extensive testing and validation to ensure adherence to safety standards. Methods such as fault injection and stress testing are employed to evaluate the robustness of vehicle systems under abnormal conditions or in response to intentional faults. These tests help identify vulnerabilities and ensure the system can gracefully handle errors without compromising vehicle safety.
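A minimal flavor of fault injection can be shown in plain code: wrap a component so a fault can be forced on demand, then verify the surrounding logic degrades gracefully instead of crashing. The sensor and fallback value here are hypothetical.

```python
# Minimal fault-injection sketch: a wrapper forces a sensor read to fail,
# and the caller must fall back to a safe default instead of crashing.
# The sensor, fault flag, and fallback value are all illustrative.

class FaultySensor:
    def __init__(self, value, inject_fault=False):
        self.value = value
        self.inject_fault = inject_fault

    def read(self):
        if self.inject_fault:
            raise IOError("injected sensor fault")
        return self.value

def safe_read(sensor, fallback):
    """Degrade gracefully: return a conservative fallback on failure."""
    try:
        return sensor.read()
    except IOError:
        return fallback

print(safe_read(FaultySensor(42.0), fallback=0.0))                     # 42.0
print(safe_read(FaultySensor(42.0, inject_fault=True), fallback=0.0))  # 0.0
```

A real test campaign injects faults at many points (memory, bus, timing) and checks that each one lands the system in a defined safe state.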
Integration of Cybersecurity in AV Safety
With the increased connectivity in autonomous vehicles, cybersecurity becomes an integral component of their safety strategy. Developing secure boot mechanisms, intrusion detection systems, and end-to-end encryption are essential to protect against unauthorized access and cyber-attacks. These cybersecurity measures are designed to work with physical safety systems, ensuring a holistic safety architecture.
Technological innovation and stringent safety standards pave the way toward fully autonomous vehicles. Systems like ADAS and standards such as ISO 26262 form the backbone of vehicle safety, while Infineon’s diverse lockstep technology exemplifies the advanced engineering required to achieve reliability in autonomous operations. As these technologies evolve, they will continue to drive autonomous vehicles’ safety and security standards, ensuring that they can integrate seamlessly and safely into our daily lives.
Simran Khokha has half a decade of experience in consumer product technology at major semiconductor companies, including several years in product development and design in the APAC region, and now works in product management and marketing in Europe (Germany), focusing on a host of segments including automotive, IoT, and consumer electronics. Khokha previously worked on the design of a first-of-its-kind SoC used for AI/ML, self-driving cars, and future applications.
Embedded Edge Devices are Getting Smarter, More Efficient
By Ken Briodagh, Editor in Chief
The Edge is nothing new. Edge networking, Edge processing, and most recently Edge intelligence have been where many companies have been innovating and expanding capabilities and product lines for years.
The reasons for this are many, and the Edge isn’t the exclusive home of innovation, but a merging of historically disparate horizontal layers is driving much of the innovation there. These layers are IoT, Embedded, and AI, and they’re combining into one fused horizontal technical layer[1] that will likely inform every deployment or upgrade at the Edge for the foreseeable future. This is simply because enterprises and end users want their applications to be unified and simple to operate and manage, but also powerful, functional, and capable. That means they need the power of embedded processing and efficient energy management, Sensor Fusion for IoT connectivity and data collection, and command-and-control automation from AI and ML algorithms. Customers are requiring all of these to flow through a single UI, and that’s driving this horizontal convergence.
Let’s look at some of the new innovations at the edge that are contributing to this evolution.
Intelligence
Ceva recently announced an extension of its Intelligent Edge IP with new TinyML optimized NPUs for AIoT devices. The Ceva-NeuPro-Nano NPUs are designed to be ultra-low power and deliver powerful performance in a small, Edge-friendly area for consumer, industrial, and enterprise products, the company said.
The market for TinyML is growing along with the horizontal technology convergence. ABI Research has predicted that more than 40 percent of TinyML shipments will be powered by dedicated TinyML hardware rather than all-purpose MCUs by 2030. Ceva has said that it intends to be a leader in that space and is starting now with the new NeuPro-Nano NPU.
“OEMs are trying to force more features into SoCs, but running out of compute,” said Chad Lucien, VP and GM of the sensors and audio business unit at Ceva. “We’re constantly running up against situations where a customer is using an MCU that might not fit the function.”
The company’s new Embedded AI NPU architecture reportedly is fully programmable and executes neural networks, feature extraction, control code, and DSP code, and supports the most advanced machine learning data types and operators, including native transformer computation, sparsity acceleration, and fast quantization. This brings intelligence into any Edge device without sacrificing power efficiency, because its optimized, self-sufficient architecture is designed to deliver that efficiency in a smaller silicon footprint.
“Ceva-NeuPro-Nano opens exciting opportunities for companies to integrate TinyML applications into low-power IoT SoCs and MCUs and builds on our strategy to empower smart edge devices with advanced connectivity, sensing and inference capabilities,” said Lucien. “The Ceva-NeuPro-Nano family of NPUs enables more companies to bring AI to the very edge, resulting in intelligent IoT devices with advanced feature sets that capture more value for our customers.”
Ceva-NeuPro-Nano NPUs are available for licensing today[2].
IoT
The most popular and desired IoT application right now is vision, be it via LiDAR, motion sensing, or most commonly, cameras. With that in mind,
STMicroelectronics has unveiled a new image sensor ecosystem for advanced camera performance called ST BrightSense.
According to the recent announcement, BrightSense is designed to enable quicker and smarter designs of compact power-efficient products for factory automation, robotics, AR/VR, and medical applications, all of which are arguably the most critical non-consumer Edge use cases.
The company also released a set of plug-and-play hardware kits, evaluation camera modules and software at the same time, reportedly to ease development with the BrightSense global-shutter image sensors. BrightSense image sensors sample all pixels simultaneously, ST said, which means the global-shutter sensors can capture images of fast-moving objects without distortion and reduce power when coupled to a lighting system, unlike a conventional rolling shutter.
ST said its CMOS-backed sensors enable backside-illuminated pixel technology and high image sharpness to capture fine details, even in motion. Applications include barcode reading, obstacle avoidance in mobile robots, and face recognition. Since form factor is always a key concern at the edge, the company used a 3D-stacked construction to allow for a small die area to ease integration anywhere space is limited[3].
Embedded
Processing and power management are the heart of embedded, and they make up the engine driving both IoT and AI at the Edge. Staying with vision: robotics in manufacturing and warehousing, along with ADAS and other automated driving systems, make vision systems critical infrastructure. The processing needs are incredible, while form factors must stay compact and robust for mobile edge devices operating in sometimes intense conditions.
One recent innovation in this area comes from indie Semiconductor, which recently announced its iND880xx product line[4] that is designed specifically for the specialized requirements of Advanced Driver Assistance Systems (ADAS) and vision sensing applications.
The company said that it has seen latency rates in initialization and processing hinder ADAS camera systems and keep them from achieving real-time safety capabilities. That’s why indie Semiconductor says its iND880xx family was built to attack latency specifically through its proprietary technology that reportedly supports simultaneous low-latency processing of four independent sensor inputs that can deliver throughput up to 1400 megapixels per second[5].
Of course these aren’t the only examples, just some of the most recent ones. It’s important for engineers and developers to consider all three of these pillars of IoT, AI, and Embedded when designing new products or upgrading existing lines. It’s becoming ever clearer that keeping these layers siloed is going the way of the dodo and Blockbuster. It’s time to think about how you’re going to get to Blu-ray.
Arduino vs CircuitPython for Microcontroller Programming
By Jeremy S. Cook, Jeremy Cook Consulting
If you’re starting out in the world of microcontrollers and dev boards, you may find yourself faced with a fundamental question: Arduino or CircuitPython? Some of the same considerations will also apply to MicroPython1 but I’ll table that discussion to keep the scope reasonable.
The CircuitPython language can run on numerous development boards, which generally need to be flashed with a UF2 file2. Arduino can refer to development boards made by Arduino (or many others, depending on your definition), and/or boards that are programmed with Arduino’s derivative of C (Arduino C), often using the Arduino IDE. This article will focus on the languages themselves.
Article TL;DR: Arduino C computing performance is better. CircuitPython is easier to program and modify. Which is actually better depends on the situation and the person doing the coding.
Compiled Language (Arduino C) Vs. Interpreted Language (CircuitPython)
Arduino C is a compiled language, meaning that when you write a program, your computer then compiles it, AKA turns it into machine code for the microcontroller to use. CircuitPython is an interpreted language, meaning that it stores the actual human –or even AI – written code on the microcontroller. It then interprets this human code at runtime. This runtime interpretation requires significant computing resources.
Arduino C can run on more devices than CircuitPython, owing to CircuitPython’s greater resource needs. If you specifically need to work with a certain device, first check to see what language(s) it supports. If it’s Arduino C-only, the choice may already be made for you.
Arduino C Programming
With machine code loaded directly onto the microcontroller, the device doesn’t have to figure out or interpret the C programming at runtime. This allows for fast and efficient microcontroller usage. If you’re familiar with C, Arduino C should be easy to learn, but starting from scratch can be challenging.
The other big challenge with using Arduino C is that the IDE needs information on each board/microcontroller to interpret code for that individual system. It also needs the required libraries to concoct a compiled mass of 1s and 0s. While setting up a programming environment for Arduino won’t be a huge challenge for experienced programmers, it can represent a significant learning curve for beginners.
Additionally, since a board programmed in Arduino C doesn’t have the original code stored onboard (only machine code) you’ll need to find a copy of this elsewhere to make changes. That’s a lot of work if you just want to change a value or two.
CircuitPython Programming
CircuitPython takes a different approach, in that your actual code is loaded onto the microcontroller, which it then interprets at runtime. Naturally, this adds a significant amount of overhead. It can take CircuitPython code around three times as long to do a calculation as its Arduino C equivalent3. So, if computing resources are limited, CircuitPython may not be your answer.
The advantage of CircuitPython is its ease of programming and modification. CircuitPython is generally considered easier to learn than Arduino C – though to be fair, there are more resources available for learning the Arduino ecosystem, as it is much older. Perhaps more importantly,
since code is stored directly on the CircuitPython board – and since it shows up as USB storage when connected to a computer – it can be read and modified directly. Just open the program’s text file (generally code.py) with the Mu editor, or even a generic text editor4, make changes, save, and it reboots running the new code.
So, changing variable X doesn’t require tracking down the original code, libraries, and board definition. Just make the changes and save. If you need an additional library or other files, you can drag them onto the CircuitPython drive as you would with a USB thumb drive. To program a board from scratch, you will first need to load the appropriate UF2 file.
FIGURE 1: Arduino C programming in the Arduino IDE / Image Credit: Jeremy Cook
FIGURE 2: CircuitPython programming in the Mu editor / Image Credit: Jeremy Cook
Should You Use Arduino C or CircuitPython?
It depends. I’m personally more comfortable with Arduino C as I’ve used it for longer, but I tend to drift back and forth depending on my focus at the time.
For instance, my MIDI Spoon Piano, seen in the video below, uses a Raspberry Pi Pico. Here I could use my Pico Touch 2 PCB5 with it, along with unmodified code from todbot6, making coding a matter of flashing and copying everything correctly. Of course, this drop-in coding did take some careful hardware consideration.
On the other hand, when I decided to make the world’s smallest MIDI controller, which runs on an ATtiny85 and is powered by a CR2016 battery, CircuitPython was out of the question. In this case I wrote my own code7, and included the excellent TinySnore library8 to keep it running on its minuscule power source.
All that being said, the best choice for your system depends on the situation. If fast development time is a priority and computing/power resources are plentiful, then CircuitPython is worth consideration. If actual computing performance is a bigger issue, Arduino C is probably the better choice. You’ll also need to consider your particular programming strengths. If you’re starting from absolute scratch, CircuitPython would generally be considered easier to learn; however, there is a massive amount of learning resources available for Arduino C.
If I had to choose just one, I’d probably go with Arduino C. Given that I do a monthly Developing With Arduino class9, it would be hard not to. However, I like both programming methodologies, and wouldn’t be too happy about limiting my choices!
Jeremy Cook is a freelance tech journalist and engineering consultant with over 10 years of factory automation experience. An avid maker and experimenter, you can follow him on Twitter, or see his electromechanical exploits on the Jeremy S. Cook YouTube Channel!
While you’re learning about Arduino and CircuitPython, maybe it’s time to dive deeper into the fundamentals of serial communication within the Arduino ecosystem.
When it’s time to set up, troubleshoot, and communicate with your Arduino board, serial communication is the go-to tool in your programming arsenal. In this online training session, presenter Jeremy Cook will discuss how serial protocols work and how to use them effectively, with a focus on basic UART communication. Cook is a tech journalist who spent over a decade in manufacturing automation. This training will include hands-on examples, allowing you to implement your new knowledge immediately after completing the one-hour session!
Watch the webcast: http://bit.ly/4gfESkQ
Building Resilient Systems with Continuous Monitoring
By Rahul Kumar, Chetu
In today’s hyper-connected world, system failures can have devastating consequences. A 2014 study by Gartner revealed that the average cost of IT downtime is $5,600 per minute, which translates to over $300,000 per hour. Such staggering figures underscore the critical need for systems that can withstand disruptions and continue to operate effectively.
Building resilient systems is not just a luxury, but a necessity, in our technology-driven era. At the heart of this resilience lies continuous monitoring, a proactive approach that ensures systems remain robust, fault-tolerant, and recoverable in the face of challenges. Here we’ll dive into the concept of system resilience and highlight the role of continuous monitoring in helping to achieve it.
System Resilience
When we refer to system resilience, we’re talking about the ability of a system to maintain its core functions and recover quickly from disruptions, whether they are due to hardware failures, software bugs, or external threats. In modern embedded systems, resilience is crucial because these systems often operate in environments where downtime can lead to significant financial losses, compromised data integrity, or even safety hazards. Three key characteristics define a resilient system:
1. Robustness: This characteristic ensures that a system can handle a wide range of operational conditions without failure. Robust systems are designed with redundancy and fail-safes to prevent minor issues from escalating into major problems.
2. Fault Tolerance: Fault tolerance is the system’s ability to continue operating correctly even when one or more of its components fail. This is usually achieved through redundant components and error-correcting mechanisms that allow the system to detect and compensate for faults automatically.
3. Recoverability: Recoverability is the capacity of a system to return to normal operations after some disruption. This can mean effective backup and recovery procedures, as well as the ability to quickly identify and fix the root cause of the failure.
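The fault-tolerance characteristic above (item 2) is often realized with redundancy and voting. Here is a minimal sketch of triple modular redundancy (TMR), with made-up channel values:

```python
# Sketch of fault tolerance via triple modular redundancy (TMR):
# three redundant readings are majority-voted, so a single faulty
# channel cannot corrupt the output. The values are illustrative.

from collections import Counter

def tmr_vote(a, b, c):
    counts = Counter([a, b, c])
    value, votes = counts.most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: all three channels disagree")
    return value

print(tmr_vote(7, 7, 7))   # healthy: 7
print(tmr_vote(7, 7, 99))  # one channel faulty, masked: 7
```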
Continuous Monitoring
Continuous monitoring is a process where systems are constantly observed for performance, security, and operational anomalies. In embedded systems, this might involve using sensors, software agents, and other external monitoring tools to collect real-time data on system behavior and environmental conditions. The goal is to identify and address potential issues before they can have a significant impact on system performance or availability.
So, why the focus on continuous monitoring? Here are a few key benefits:
1. Early Detection: Continuous monitoring allows for the early detection of issues and potential failures. By analyzing real-time data, system administrators can
be notified of early warning signs and take preventive measures before minor issues escalate into significant problems.
2. Real-Time Performance Tracking:
Continuous monitoring can provide a constant stream of data on system performance, enabling administrators to track key metrics and ensure that the system is operating within its optimal parameters. This real-time insight helps in maintaining high performance and efficiency, owing to the adage “what gets measured gets managed”.
3. Proactive Maintenance: With continuous monitoring, maintenance can be proactive rather than reactive. Systems can be serviced based on actual performance data and predictive analytics, reducing the likelihood of unexpected failures and extending the lifespan of critical components. Combined with performance tracking, this can help in optimizing system performance and resource utilization.
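The early-detection idea above can be sketched with a moving-average baseline: each new reading is compared against the recent average, and large deviations raise a warning before a hard failure. The window size and tolerance below are illustrative choices, not recommendations.

```python
# Minimal early-detection sketch: compare each new reading against the
# mean of a sliding window and flag deviations beyond a tolerance.
# Window length and tolerance are illustrative choices.

from collections import deque

class EarlyWarning:
    def __init__(self, window=5, tolerance=10.0):
        self.history = deque(maxlen=window)
        self.tolerance = tolerance

    def check(self, reading):
        """Return True if the reading deviates from the recent baseline."""
        alarm = False
        if self.history:
            baseline = sum(self.history) / len(self.history)
            alarm = abs(reading - baseline) > self.tolerance
        self.history.append(reading)
        return alarm

monitor = EarlyWarning()
for value in [70, 71, 69, 70, 95]:   # the last reading spikes
    if monitor.check(value):
        print("early warning at", value)  # fires only for 95
```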
By integrating continuous monitoring into embedded systems and IoT networks, organizations can build resilience, ensuring that their systems remain robust, fault-tolerant, and capable of swift recovery in the face of disruptions. This proactive approach not only minimizes downtime and its associated costs but also enhances overall system reliability and performance.
Implementing Continuous Monitoring Tools
Implementing continuous monitoring in embedded systems can involve a combination of hardware and software solutions designed to provide real-time data and analytics. Some of the key tools and technologies include:
1. Sensor Networks: These consist of various sensors that collect data on environmental conditions, system performance, and operational
parameters. Sensors can monitor temperature, humidity, vibration, and other important metrics that impact system reliability.
2. Embedded Monitoring Software: Software agents embedded within the system firmware or operating system continuously collect and report data. Examples include SNMP (Simple Network Management Protocol) agents and custom monitoring scripts tailored to your system’s needs.
3. IoT Platforms: IoT platforms like AWS IoT and Azure IoT Hub provide well-tested infrastructure for collecting, processing, and analyzing data from embedded devices. These platforms offer real-time analytics, alerting mechanisms, and integration capabilities with other systems.
4. Data Loggers: Data loggers are typically standalone devices that record data over time from various sensors and components. They are particularly useful in environments where continuous network connectivity is not available, since they can store data locally for later analysis. For example, Onset’s HOBO and National Instruments’ data loggers are widely used for environmental monitoring.
5. Edge Computing: Edge computing solutions, like NVIDIA’s Jetson platform or even a Raspberry Pi, can process data locally on the device rather than sending it all to the cloud. This approach reduces latency and bandwidth usage, enabling faster response times and more efficient monitoring.
Integration Strategies
Integrating continuous monitoring tools into existing embedded systems requires careful planning and execution. Here are some strategies to help simplify the process:
1. Assess System Requirements: Begin by assessing the specific monitoring needs of your system. Identify the key metrics to monitor, the frequency of data collection, and the acceptable latency for alerts and responses.
2. Select Appropriate Tools: Choose monitoring tools and technologies that best fit your system requirements. Consider factors such as compatibility with existing hardware, ease of integration, and scalability.
3. Develop Custom Monitoring Scripts: For specialized monitoring needs, develop custom scripts or agents that can collect and report data specific to your system. Ensure these scripts are optimized for minimal resource usage to avoid impacting system performance.
4. Utilize IoT Platforms: Leverage IoT platforms to centralize data collection and analysis. These platforms offer robust APIs and integration tools that make it easier to connect embedded devices and streamline data processing.
5. Implement Edge Computing: Where applicable, use edge computing to process data locally. This approach reduces the load on central systems and ensures faster detection and response to issues.
6. Regular Testing and Validation: Conduct regular testing and validation of the monitoring setup to ensure accuracy and reliability. Simulate various failure scenarios to verify that the system responds appropriately to alerts.
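Strategy 3 above (custom monitoring scripts) can be sketched as a tiny polling agent: metric collectors are registered as callables and sampled in one pass, which keeps the agent’s footprint small and predictable. The metric names and stand-in collectors here are hypothetical.

```python
# Sketch of a lightweight custom monitoring agent: metric collectors
# are registered as callables and polled in a single pass, keeping
# resource usage predictable. Metrics and sources are hypothetical.

class MonitoringAgent:
    def __init__(self):
        self.collectors = {}

    def register(self, name, collector):
        self.collectors[name] = collector

    def snapshot(self):
        """Collect every registered metric once and return a report."""
        return {name: collect() for name, collect in self.collectors.items()}

agent = MonitoringAgent()
agent.register("temperature_c", lambda: 21.5)   # stand-in for a sensor read
agent.register("queue_depth", lambda: 3)        # stand-in for an app metric
print(agent.snapshot())
```

A real agent would add scheduling, retry logic, and a transport (e.g., MQTT or SNMP), but the register-and-poll core stays this simple.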
Challenges
Implementing monitoring in embedded systems is far from straightforward and comes with its own set of challenges. Here are a few issues you’ll likely need to consider for your own implementation:
1. Resource Constraints: Within these environments we typically have limited processing power, memory, and storage. To address this, you’ll need to optimize monitoring agents and scripts for low resource usage and prioritize critical metrics to minimize the data collected and processed.
2. Network Connectivity: Not all environments have reliable, hardwired network connectivity. Use data loggers to store data locally during connectivity outages and synchronize it once the connection is restored.
3. Security Concerns: To perform continuous monitoring, systems will be collecting and transmitting potentially sensitive data, which can be vulnerable to cyber threats. Always use strong encryption protocols for data transmission and storage, and regularly update monitoring software to patch vulnerabilities.
4. Scalability: As systems grow, the volume of monitoring data can become overwhelming. A system that starts out with 10 devices may scale up to 1000+ in a short period of time. Use scalable IoT platforms and edge computing solutions to manage data efficiently and implement data aggregation techniques to reduce the volume of data sent to central systems.
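The aggregation technique mentioned under scalability can be sketched as a windowed roll-up: instead of forwarding every raw reading, the edge node sends one min/mean/max summary per window. The window size here is an illustrative choice.

```python
# Sketch of edge-side data aggregation: raw readings are rolled up into
# per-window summaries (min/mean/max), shrinking what is sent upstream.
# The window size of 4 readings is an illustrative choice.

def aggregate(readings, window=4):
    summaries = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        summaries.append({
            "min": min(chunk),
            "mean": sum(chunk) / len(chunk),
            "max": max(chunk),
        })
    return summaries

raw = [20.0, 20.4, 20.2, 20.6, 21.0, 21.2, 20.8, 21.4]
print(aggregate(raw))  # 8 raw readings become 2 summary records
```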
Case Study: Warehouse Product Management
In a warehouse managing a commodity product, continuous monitoring was implemented to enhance system resilience and operational efficiency. The warehouse system handled tasks like inventory tracking, product movement, and continuous quality monitoring to ensure product integrity.
The implementation involved using HOBO data loggers to monitor environmental conditions such as temperature and humidity, crucial for maintaining product quality during storage and handling. Additionally, MadgeTech data loggers tracked the performance of automated handling equipment, monitoring metrics like shock and vibration levels to prevent damage to products during transportation.
Data from these loggers was collected and processed using AWS IoT Greengrass, which enabled edge computing capabilities. This setup allowed for real-time data analysis and immediate response to any anomalies detected in both environmental conditions and equipment performance. For instance, if a temperature logger detected a deviation from the optimal range, an alert was triggered for immediate action to prevent product degradation.
As for integration, Raspberry Pi devices were used as edge nodes to collect and process data from various sensors throughout the warehouse. The edge nodes themselves were monitored using an uptime monitoring tool that sent alerts during downtime. Custom monitoring scripts were developed to handle specific requirements, like tracking product movement through different stages of the warehouse process. This proactive approach allowed the warehouse management team to address minor issues before they escalated and to improve product handling, resulting in an 18% reduction in lost inventory due to damage.
Best Practices
Setting Up Effective Monitoring
Effective monitoring begins with selecting the right metrics. In a warehouse environment, key metrics might include temperature, humidity, equipment load, cycle times, and error rates. Defining appropriate thresholds for these metrics based on historical data and industry standards is crucial. For example, setting a humidity threshold helps prevent product spoilage due to excessive moisture.
Deploying reliable sensors and data loggers ensures accurate data collection. High-quality sensors, regularly calibrated, maintain data integrity. Custom monitoring scripts tailored to the warehouse’s specific needs can optimize resource usage and focus on the most critical metrics.
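A minimal sketch of such threshold checks, assuming hypothetical metric names and limits (real thresholds would come from historical data and industry standards, as noted above):

```python
# Sketch of threshold-based checks for warehouse metrics. The metric
# names and limits below are illustrative, not industry-standard values.

THRESHOLDS = {
    "temperature_c": (2.0, 8.0),    # e.g., a cold-chain storage band
    "humidity_pct":  (30.0, 60.0),  # avoid spoilage from moisture
}

def out_of_range(metric, value):
    low, high = THRESHOLDS[metric]
    return value < low or value > high

print(out_of_range("humidity_pct", 72.0))  # True: too humid, alert
print(out_of_range("temperature_c", 5.0))  # False: within band
```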
Data Analysis and Response
Real-time data processing is important for immediate detection of anomalies. Implementing edge computing solutions, such as AWS IoT Greengrass, allows data to be processed locally, reducing latency and enabling rapid response to issues.
Automated alerts notify administrators of abnormal conditions, and these alerts should be prioritized based on severity to ensure critical issues are addressed promptly.
Analyzing historical data using IoT platforms helps identify trends and inform predictive maintenance strategies. Storing and analyzing historical data provides insights into patterns that can improve system performance and resilience.
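As a toy illustration of trend-based predictive maintenance, a least-squares slope over recent readings can project when a metric will cross its limit; real systems would use far richer models, and the vibration values below are made up.

```python
# Toy predictive-maintenance sketch: a least-squares slope over recent
# readings projects when a metric (e.g., equipment vibration) will cross
# its limit. All values are illustrative.

def slope(values):
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def samples_until(values, limit):
    """Samples until the linear trend reaches `limit` (None if not rising)."""
    s = slope(values)
    if s <= 0:
        return None
    return (limit - values[-1]) / s

vibration = [1.0, 1.2, 1.4, 1.6, 1.8]       # steadily rising readings
print(samples_until(vibration, limit=3.0))  # roughly 6 samples to go
```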
Regular Updates and Reviews
Regular updates to monitoring software and firmware are essential to incorporate the latest features and security patches, ensuring the monitoring setup remains effective and secure. Periodic reviews of the monitoring framework ensure it aligns with evolving system requirements and operational goals, adjusting metrics and thresholds based on new insights.
By continuously improving the monitoring framework based on feedback from collected data, organizations can enhance system performance and resilience, proactively addressing potential issues and maintaining optimal operation. This approach not only minimizes downtime but also ensures the highest quality of managed products, aligning with the dynamic needs of modern warehouse management.
Future Trends
Emerging Tech
Just like with any tech, the landscape is evolving rapidly, driven by advances in emerging technologies. One of the most significant trends is the integration of artificial intelligence (AI) and machine learning (ML) into monitoring systems. AI and ML algorithms can analyze vast amounts of data better than traditional methods, identifying patterns that might go unnoticed by human analysts or rule-based systems. This capability enables predictive maintenance, where potential issues are identified and addressed before they cause system failures, significantly enhancing system resilience.
Another advancement is the use of IoT devices with edge computing capabilities. These devices process data locally,
reducing latency and bandwidth usage. This trend is particularly important in environments where real-time response is critical, such as in industrial automation and smart warehouses. Additionally, advancements in 5G technology will provide faster and more reliable connectivity, enabling more robust and widespread deployment of continuous monitoring solutions.
Blockchain technology is also making its way into continuous monitoring, offering enhanced security and data integrity. By providing a tamper-proof record of all monitoring data, blockchain can ensure that data is reliable and has not been altered, which is crucial for compliance and audit purposes.
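The tamper-evidence property described here can be shown with a simplified hash chain: each record's hash covers the previous record, so altering any stored entry breaks every link after it. This is a deliberately reduced sketch of the idea, not a full blockchain with consensus or distribution, and all names are illustrative:

```python
import hashlib
import json

def append_record(chain, payload):
    """Append a monitoring record whose hash covers the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    chain.append({"payload": payload, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every link; any altered record invalidates the chain."""
    prev_hash = "0" * 64
    for rec in chain:
        body = json.dumps({"payload": rec["payload"], "prev": prev_hash},
                          sort_keys=True)
        if rec["prev"] != prev_hash or \
           rec["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = rec["hash"]
    return True
```

For audit purposes this means a reviewer can detect after-the-fact edits to monitoring history by re-running `verify`, which is the compliance benefit the article points to.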
Future Challenges
As continuous monitoring technologies advance, several challenges are likely to emerge. One of the primary challenges will be managing the sheer volume of data generated by increasingly sophisticated monitoring systems. Efficiently
storing, processing, and analyzing this data will require significant computational resources and advanced data management strategies.
Another challenge will be maintaining interoperability between different monitoring systems and devices. As more manufacturers develop their own monitoring solutions, ensuring these systems can communicate and work together seamlessly will be essential. Industry standards and protocols will play a crucial role in addressing this issue.
Conclusion
Continuous monitoring is a critical part of building resilient systems, providing the real-time insights needed to maintain high-performing systems and to swiftly address potential issues. By using advanced tools and technologies, integrating effective monitoring strategies, and adhering to best practices, organizations can significantly enhance the reliability and efficiency of their systems.
As we look to the future of the space, the integration of AI, ML, edge computing, and blockchain will further transform continuous monitoring, offering new opportunities and challenges. By staying ahead of these trends and preparing for the associated challenges, organizations can ensure that their monitoring solutions remain robust, secure, and effective.
Kumar is a Search Engine Optimization team lead with over eight years of experience. He is highly skilled in on-page and off-page SEO, technical SEO, UI/UX, Core Web Vitals, Google Search Console, Semrush, Ahrefs, Website Auditor, internet brand marketing, e-commerce platforms (WordPress, Magento), Google Analytics, inbound marketing, email outreach, link-building strategy, and keyword research.
Modern Embedded Design: Sustainability Throughout the Lifecycle
By Zeljko Loncaric, Segment Manager Infrastructure at congatec
The environment, the protection of nature, a shift in demographics, skilled labor shortages, challenges to security – our society is facing fundamental change. Key technologies like digitization and artificial intelligence can smooth the path for change by enabling smart edge devices that promote a green, sustainable, and livable future. Embedded systems are at the core of all these applications, and their use is rapidly increasing – as is the speed of innovation. Given this, it is all the more important to design, develop, and produce these systems in a more sustainable manner.
Embedded systems and edge servers collect, analyze, and transmit data as an integral part of the telecommunications and 5G infrastructure. Used across all industries and application areas, they act as a driver for future technologies and greater sustainability (Figure 1). Providing ever higher performance in an ever more compact design, they enable autonomous mobile robotics, more advanced medical diagnostics and therapies, as well as innovative solutions for the automation industry. All these applications serve to optimize industrial processes and controls and save resources. As this improves both the profitability and efficiency of applications, embedded systems can help solve our societal challenges.
Shorter development cycles and replacements mean more e-waste
Rapid technical advances, increasingly powerful processors, and specialized processing units like GPGPUs (General Purpose Graphics Processing Units) and NPUs (Neural Processing Units) on the one hand, and ever higher data processing requirements for IIoT-enabled, networked devices and AI-driven applications on the other, are leading to shorter and shorter development cycles. To stay competitive in the face of such rapid innovation, secure their position as trend setters, and develop new business models, companies must regularly invest in new systems. Experts have been witnessing this trend for several years and expect it to continue. In its global server market study, for example, research institute International Data Corporation (IDC) expects the market to grow by almost 12% in the coming year compared to this year (IDC Quarterly Server Tracker, 2023Q1).
The alternative: Modular Computer-on-Modules and Server-on-Modules
Any data center modernization that involves the replacement of entire rack systems creates massive amounts of e-waste – the exact opposite of sustainable and efficient use of resources. The good news is that there is another way. Modular designs based on Computer-on-Modules and Server-on-Modules offer a cost-effective and attractive alternative that avoids the need for complete rack replacement. Such platforms make it possible to upgrade existing hardware with the latest processor technology, while retaining all other components such as the carrier board, power supply, and housing. This reduces the number of components that must be replaced and disposed of or recycled at great expense. It also saves end customers
money: Server manufacturer Christmann cites 50% cost savings for upgrades using standardized Server-on-Modules versus full server replacement.
More computing power – less energy consumption
The same arguments that apply to servers are true for other application areas: Robots, medical devices, and industrial applications also gain from higher performance and innovative technologies when outfitted with the most advanced modules, such as those needed for processing complex AI algorithms. Modular systems offer a distinct advantage for upgrading to new technologies and performance levels: Replacing just the Computer-on-Module is far more efficient than replacing entire devices.
In view of rising energy costs and the proliferation of mobile systems, the fact that new processors and modules are generally more energy-efficient is another compelling argument for upgrades. More efficient use of resources and energy is particularly important for fixed 24/7 installations. And shorter charging times and cycles increase the availability of mobile applications like autonomous robots and driverless vehicles.
Advantages for new business models
Upgrading with the latest modules also holds great promise for pay-per-use and as-a-service providers. They can offer cutting-edge hardware platforms with maximum performance for lower upfront investment. This secures a competitive advantage for providers as well as their customers. It also reduces total cost of ownership for the hardware and maximizes return on investment. This makes the subscription economy model lucrative for both providers and customers. Utilized optimally, server installations featuring the latest hardware will ultimately also promote sustainability and efficient resource use.
Longer lifecycles for industry
Modular designs are particularly crucial in industrial applications: Longevity often plays a key role here – especially if the embedded system was customized or adapted specifically for the application. In a worst case scenario, a discontinued processor can require an entirely new development or a costly redesign. However, modular designs using application-specific carrier boards and standardized Computer-on-Modules allow even decades-old applications and designs to be upgraded with new processors. Proven legacy applications can be kept up to date through continuous processor upgrades to provide advanced features and computing performance. As software defines much of the functionality
today, module upgrades extend the lifecycle of entire applications. The result is greater environmental sustainability.
COM-HPC: Standard for modular electronics design
Theory shows that modular electronics provide enormous potential for increased performance, cost savings, sustainability, interchangeability, and upgradeability. COM-HPC Computer-on-Modules from congatec exemplify such scalable designs in practice. They are specifically developed for high-bandwidth, high-performance client and edge server applications that earlier Computer-on-Module specifications cannot address. For this purpose, the COM-HPC modules support a wide range of processors as well as GPGPUs, AI accelerators, ASICs, and FPGAs. This guarantees maximum flexibility, scalability, and upgradability for current and future designs (Figure 2). High I/O bandwidth and transmission speeds are other crucial
FIGURE 1: Leading market research companies forecast an annual growth rate of 15.7% for the global edge computing market. Sustainable designs are particularly important here.
FIGURE 2: With standardized Computer-on-Modules, customized designs can also be upgraded with new technologies at any time. Thanks to the precisely specified height, cooling in a completely closed system is no problem.
features. This includes PCIe up to the current 5th generation, USB 4/Thunderbolt 4, and 100 Gigabit Ethernet. The COM-HPC standard was created by the PCI Industrial Computer Manufacturers Group (PICMG), with congatec as co-initiator, and is designed specifically for embedded edge applications. The module standard is available in several form factors – from the upcoming COM-HPC Mini standard with a footprint of just 95x70 mm, and the COM-HPC Client standard with three different PCB sizes and up to 49 PCIe lanes, to the COM-HPC Server in footprints D (160x160 mm) and E (160x200 mm) (Figure 3).
Server capabilities for embedded designs
COM-HPC Server is the first standard expressly developed for edge server requirements. The modules combine server-level computing power, up to 64 PCIe lanes, and high Ethernet bandwidth with the ruggedness of embedded designs. Unlike conventional servers, which are confined to air-conditioned server rooms, these embedded servers can be installed near the applications themselves, even in harsh ambient temperatures and operating environments. This makes them ideal for edge applications needing high power, huge data flows, and high-speed processing with low latencies. Up to 1 TB of SDRAM memory facilitates this. Systems utilizing congatec COM-HPC Server modules are perfect for edge data processing in autonomous vehicles, collaborative robots, smart infrastructure applications, and performance-hungry factory automation.
Product series, easy upgradability, and the pursuit of even greater sustainability
Christmann t.RECS servers provide an excellent example of how to optimally utilize scalable COM-HPC modules (Figure 4). Thanks to the wide module selection, it is possible to optimally adapt the servers to specific requirements and to develop entire product series with scalable functionality. Upgrading the servers to add more performance as requirements increase, or to instantly leverage new processor features, is easy – a simple module swap will do the trick. This makes the t.RECS servers a prime example of how a modular design strategy helps to
FIGURE 3: COM-HPC is the most widely scalable Computer-on-Module standard. Five different footprints cover almost the entire range of sustainable embedded designs, from extremely compact low-power applications to high-performance client designs to highly powerful embedded servers.
FIGURE 4: Modular edge servers such as the Christmann t.RECS server with three COM-HPC Server and Client slots to plug in suitable modules can save massive costs and materials when a technology upgrade is needed, compared to a complete system replacement.
maximize sustainability. These edge servers not only deliver the necessary computing power for AI applications and other future technology innovations that make our lives more sustainable; they also check every box in terms of sustainable design and optimal resource utilization.
The goal is to continue along this path to find other ways to improve sustainability even further – e.g., by using more eco-friendly materials and additive technologies in PCB manufacturing, shortening supply chains, improving electronics recycling, and reducing e-waste overall. congatec continuously works to optimize its own offerings in all these respects.
In the rapidly changing technology universe, embedded designers might be looking for an elusive component to eliminate noise, or they might want low-cost debugging tools to reduce the hours spent locating that last software bug. Embedded design is all about defining and controlling these details sufficiently to produce the desired result within budget and on schedule.
Embedded Computing Design (ECD) is the go-to, trusted property for information regarding embedded design and development. embedded-computing.com
2024 RESOURCE GUIDE
Daedalean AI Accelerator (DAIA)
The Daedalean AI Accelerator (DAIA) is the world's first aerospace-certifiable AI accelerator, designed to meet the needs for onboard certifiable hardware for AI-based applications. DAIA is ideal for building computer vision systems for aerial navigation, traffic detection, landing, and much more. The module was designed in compliance with DO-254 / DO-178C, providing reduced time-to-market for safety-critical applications.
• Real-time image processing
• Convolutional Neural Network (CNN) acceleration
• Certifiable for aviation
• High operating frequency and optimized computational efficiency
• Better performance per watt, even compared to GPUs
Ą 16 x 16 GEMM for a total of 16 x 16 x 12 = 3072 multiplications/clock
Ą 12 batches (i.e., tiles) processed in parallel
Ą 1x/4x lane/s PCIe Gen3
Ą 4 GB DDR4 SDRAM at up to 2666 MHz
Ą 1 x 2.5 Gbps CoaXPress interface
Ą Below 25W TDP
Ą Drivers for Linux (evaluation/development) and embedded operating systems (DO-178C)
Ą CNN compiler toolchain
ks@daedalean.ai
www.linkedin.com/company/daedalean/
Embedded Hardware, including Boards and Systems
TS-4100 System-on-Module
The TS-4100 is an extremely low-power, high-performance System-on-Module powered by NXP i.MX 6UltraLite with the Arm® Cortex®-A7 core operating up to 696 MHz. Typical power usage is about 1 W, packed with up to 1 GB RAM, 4 GB eMMC flash, 32-bit programmable off-load engine, microSD with UHS support (up to 60 MB/s), Wi-Fi and Bluetooth modules with built-in antenna, and many industry standard interfaces.
It targets applications that have strict power requirements yet need a high-performance system with wireless connectivity, like industrial Internet of Things gateways, medical, automotive, industrial automation, smart energy, and many more.
All-New Edge AI Systems with up to 14th Gen Intel® Core™ power and dual 450W GPU acceleration
ASUS IoT introduces significant enhancements to its PE series of edge AI computers. The series, now upgraded with the latest 14th Gen Intel® Core™ processors, offers a comprehensive portfolio tailored to handle the demands of AI computing at the edge. The integration of up to 14th Gen Intel® Core™ i9 65W CPUs delivers unparalleled performance for machine vision, high-resolution graphics, and other data-intensive tasks while optimizing energy efficiency. The updated lineup, which includes the PE8000G, PE6000G, PE4000G, PE5101D, and PE5100D models, represents a comprehensive solution engineered for the most challenging scenarios.
The PE8000G, the series' flagship model, supports dual 450W GPUs, enabling it to perform high-throughput computing and real-time AI inferencing with remarkable efficiency in rugged environments where reliability and power are paramount. The entire PE series has been thoughtfully designed with a wide 8-48V DC input range and includes built-in ignition power control and monitoring, providing flexibility in deployment and ensuring consistent performance across various environments.
Further enhancing the system's capabilities, the new processors support dual DDR5-5600 SO-DIMM memory modules up to 64GB, which significantly increases transfer speeds and improves power efficiency. This advancement, paired with ECC technology, ensures exceptional reliability and system stability.
ASUS IoT's commitment to robust design shows in the rugged, fanless, anti-vibration construction, wide temperature support, and low power consumption, making the systems ideal for applications like factory automation, computer vision, intelligent video analytics, and autonomous vehicles, and delivering industry-leading edge computing solutions that harness the power of AI technology for various industries.
FEATURES
Ą Latest Intel CPUs: Up to 14th Gen Intel Core i9 processors up to 65W for exceptional performance at the edge
Ą Dual 450W GPUs: Support up to two high-performance PCIe x16 450W GPU cards for advanced real-time AI inferencing at the edge
Ą Ultrafast memory on tap: Ready for up to 64 GB DDR5 5600 SO-DIMM RAM for up to 50%-faster transfer speeds and 8%-better power efficiency
Ą Wide-range DC support: Inputs of 8V to 48V accepted plus built-in ignition power control for flexible power options in diverse deployment scenarios
embeddedTS is proud to introduce the TS-7100-Z, our smallest single board computer in an optional DIN-mountable enclosure that measures 2.4" by 3.6" by 1.7", powered by the Arm® Cortex®-A7 based i.MX 6UltraLite CPU. It ships with industry-standard interfaces, including Ethernet, USB, RS-232, RS-485, and CAN.
For wireless connectivity, the TS-7100-Z comes with a Wi-Fi and Bluetooth module, as well as a NimbeLink/Digi cellular modem and mesh network socket.
FEATURES
Ą NXP i.MX 6UltraLite 696 MHz Arm® Cortex®-A7 based SBC with FPU
Development Module with AMD-Xilinx Artix UltraScale+ FPGA and USB 3.0
The XEM8310-AU25P development module with AMD-Xilinx Artix™ UltraScale+ FPGA offers integrators a turnkey solution with fast (340+ MB/s) USB 3.0 host interface using the Opal Kelly FrontPanel® SDK. Designed for both prototype / proof-of-concept and production deployment, the highly-integrated device includes on-board power supplies, DDR4 memory, and support circuitry in a compact form factor with commercial-off-the-shelf (COTS) availability.
The FrontPanel SDK greatly simplifies hardware / software communication with a comprehensive and easy-to-use API and broad operating system and language support. Opal Kelly SOMs reduce time-to-market, allow teams to focus on their core competencies, and simplify supply chains. Opal Kelly has been ISO 9001:2015 certified since 2019.
Typical applications include:
• Data acquisition
• Test & measurement, instrumentation, and control
• Machine vision / machine learning / AI
• Software-defined radio (SDR)
• Digital communications and networking
• Data security
Opal Kelly Incorporated www.opalkelly.com sales@opalkelly.com
FEATURES
Ą AMD-Xilinx Artix UltraScale+ XCAU25P-2FFVB676E
Ą SuperSpeed USB 3.0 port for high-bandwidth data transfer
The Flexio Fanless Industrial Embedded Computer features RS-232/422/485 serial ports, USB 3.2 Gen 2 Type A ports, USB 3.2 Gen 2 Type C ports (capable of DisplayPort), a Gigabit Ethernet port, a 2.5 Gigabit Ethernet port, DisplayPort, and an HDMI port as a base system with flexible, configurable I/O options to fulfill specific application requirements. With a fanless design, the Flexio has a 0°C to 50°C operating temperature range. Fueled by an Intel Core processor, the Flexio Fanless Industrial Embedded Computer is intentionally designed to provide power, connectivity, and performance. This combination makes the Flexio an ideal controller for industrial automation, control, point-of-sale, test and measurement, medical, and security applications. The Flexio features a solid-state design for an extended lifespan, increased reliability, and quiet fanless operation. Additional Flexio computers include options for an Atom processor and diverse I/O configurations.
https://www.sealevel.com/product-category/flexio/
Sealevel Systems, Inc. www.sealevel.com
sales@sealevel.com
FEATURES
Ą (4) RS-232/422/485 Serial Ports
Ą (3) USB 3.2 Gen 2 Type A Ports and (1) USB 3.2 Gen 2 Type C Port
congatec expands its modular edge server ecosystem with a µATX server carrier board and new COM-HPC Server-on-Modules based on the latest Intel Xeon processors. The µATX server board for COM-HPC modules was developed for compact real-time servers used in edge applications and critical infrastructures. The board can be flexibly scaled with the latest high-end COM-HPC Server modules from congatec, which are equipped with the latest Intel Xeon D-1800 and D-2800 processors.
The ecosystem also offers various comprehensive cooling solutions, including passive cooling for small chassis. In addition to customizing the conga-HPC/uATX server carrier board, the service package also includes customer-specific BIOS/UEFI and real-time hypervisor implementations as well as expansion with additional IIoT functionalities for digitization purposes.
Ą µATX carrier board conga-HPC/uATX server features up to 100 GbE bandwidth, x8 and x16 PCIe expansion for processing AI-intensive workloads via GPGPUs or other compute accelerators, 2x M.2 Key M slots for NVMe SSDs, and an M.2 Key B slot for compact AI accelerators or communication modules for WiFi or LTE/5G.
Ą conga-HPC/sILL and conga-HPC/sILH Server-on-Modules based on the Intel D-1800 LCC and D-2800 HCC processor series feature Intel Speed Select technology, up to 22 cores with higher clock speeds, a firmware-integrated hypervisor, and real-time capability, which is provided by TCC, TCN, and optional SyncE support.
sales-us@congatec.com
www.linkedin.com/company/congatec/
Modular edge server ecosystem
SMARC modules based on NXP i.MX 95 processor series
The new high-performance computer-on-modules (COMs) with i.MX 95 processors from NXP benefit from straightforward scalability and reliable upgrade paths for existing and new energy-efficient edge AI applications with high security requirements.
In these applications the new modules offer up to three times the GFLOPS computing performance of the previous generation with i.MX 8M Plus processors. The neural processing unit, called 'eIQ Neutron', doubles the inference performance for AI-accelerated machine vision. A hardware-integrated EdgeLock® secure enclave simplifies the implementation of in-house cyber security measures. conga-SMX95 SMARC modules are designed for an industrial temperature range of -40°C to +85°C, are mechanically robust, and are optimized for cost- and energy-efficient applications. The integrated high-performance eIQ Neutron NPU makes it possible for AI-accelerated workloads to be performed even closer to the local device level.
The Kontron COMh-m7RP (E2) is a high-performance Computer-on-Module (COM) designed for demanding edge computing and industrial applications. Powered by Intel® Xeon® and 13th Gen Intel® Core™ processors with up to 14 cores, it offers robust processing, enhanced security, and reliability in harsh environments. With a compact form factor (95 mm x 70 mm) and extended temperature range, it’s ideal for automation, transportation, and defense systems.
The module includes up to 64 GB of LPDDR5 memory, 16 PCIe lanes (with optional Gen5 support), and versatile I/O options such as USB4, Thunderbolt, and DisplayPort. Additionally, it supports industrial-grade temperature ranges and includes onboard NVMe SSD options for enhanced storage capabilities. Dual 2.5 Gb Ethernet with TSN ensures high-speed connectivity and precise time synchronization, making it a versatile choice for embedded and industrial applications.
High-performance server class Kontron K9051-C ATX Motherboard
In the modern digital landscape, where high-performance computing (HPC) is critical for advancing industrial automation, artificial intelligence (AI), data sequencing, and imaging technologies, a powerful server-class motherboard is essential. Addressing the needs of the industry, Kontron introduces its latest K9051-C741 ATX, engineered to support Intel® 4th/5th Gen Xeon Scalable processors, with capabilities of up to 64 cores, 768 GB of memory, and a variety of expansion options.
The K9051-C741 ATX server-class motherboard with Intel C741 chipset comes with five PCIe Gen 5 expansion slots, independent memory channels, and support for a broad range of I/O options, making it ideal for today’s most demanding applications. The board has eight separate RDIMM sockets supporting a maximum memory density of 768 GBytes at up to 5600 MT/s. This makes the K9051-C741 ATX motherboard well-suited for HPC needs, including high core count processors, large memory banks, and support for modern data interface requirements.
Kontron’s server-class motherboard is also equipped with two 10GBASE-T RJ45 Ethernet ports, ensuring fast and reliable network connectivity. For storage, the K9051-C ATX supports up to six SATA 6G and M.2 PCIe/NVMe connections, including support for RAID configurations. In addition, the board supports high-speed peripherals via six USB 3.1 Gen 1 ports.
End-To-End Technological Solutions: From Edge to Cloud
Leveraging decades of expertise in turnkey embedded computing design, manufacturing, Internet of Things (IoT) and Artificial Intelligence (AI) solutions, SECO enables businesses with smart solutions that accelerate digital transformation.
Edge Computing
SECO edge platforms feature state-of-the-art processing technologies, based on x86, Arm®, NPU, and FPGA architectures. Products include computer-on-module (COM) and single board computer (SBC) embedded boards, mountable human-machine interface (HMI) assemblies, and fully packaged devices – off-the-shelf or customized.
SECO HMI panel PCs are available with various processor, installation, and display size options from 7” to 21.5”. The newly available Modular Vision series provides flexibility in optimizing processing performance, peripherals, screen size, and cost.
Off-the-shelf COM and SBC products feature leading processing technologies (NXP, Intel®, Qualcomm®, AMD, MediaTek) compliant with major standards (SMARC, QSeven®, COM-HPC®, COM Express®, Pico-ITX) or application-optimized. SECO also offers system integration-ready boxed fanless embedded computers, and communication gateways.
The latest SECO solutions include:
• SOM-SMARC-QCS6490 and -QCS5430 featuring Qualcomm® processors (for industrial automation and AI at-the-edge)
• SOM-SMARC-ADL-N/ASL with Intel® Atom™ x7000E Series (for AIoT applications)
• SOM-COM-HPC-A-MTL and SOM-COMe-BT6-MTL with Intel® Core™ Ultra Processors (for AI and high bandwidth sensor fusion IoT end devices)
• SOM-SMARC-MX95 with NXP processors (for low-power, headless or simple display handheld devices)
• SOM-COMe-BT6-RPL-P with 13th Gen Intel® Core processors (for high performance industrial control and IoT)
• SOM-SMARC-Genio510 with MediaTek processors (for industrial automation)
• SBC-pITX-EHL with Intel® Atom™ x6000E Series (for AIoT applications)
• Titan 300 TGL-UP3 AI fanless computer featuring 11th Gen Intel® Core™ processors with a 120 TOPS Axelera AI engine (for edge computer vision)
SECO www.seco.com
Intelligent Platform Solutions
SECO makes the implementation of IoT solutions easy with Clea, an edge and cloud modular software suite that harnesses field data. Based on open source software, Clea is highly scalable, portable, cloud agnostic, and cost-effective. With instances running on edge devices managed by a cloud-based software hub, Clea facilitates real-time infrastructure management, analytics, predictive maintenance, secure remote software updates, and more. Deploy edge AI apps and value-added revenue-generating subscription services via a monetization framework. Clea OS simplifies the deployment of edge IoT devices. Based on Yocto and available for SECO and non-SECO edge platforms, Clea OS includes IoT data orchestration, device management, and real-time security monitoring – ready for easy integration with application software.
FEATURES
Ą Off-the-shelf embedded products: standard form-factor COMs, SBCs, mountable HMI devices, and fanless computers that reduce development risk and time-to-market.
Ą Operating systems for edge devices: Linux, Android, Windows, RTOS.
Ą Customized computing platforms: custom-designed circuitry, software, and devices that meet unique product requirements.
Ą Clea edge and cloud IoT software suite: highly scalable open source solution for harnessing field data, managing devices, and monetizing value-added services.
Ą Clea OS: Yocto-based operating system for SECO edge products that streamlines the deployment of edge IoT devices.
Ą Embedded AI: using Clea, deploy smart algorithms that autonomously operate on edge devices – cloud not needed.
Ą Product development: design and production of rugged high reliability electronic devices, including rugged tablets, devices, and industrial equipment.
Ą World-class electronics manufacturing: ISO 9001 and 13485 certified.
SM2S-ASL
The new MSC SM2S-ASL module features the Intel Atom® x7000RE/C Series processors (codenamed “Amston Lake”). The CPU architecture is based on the same Efficient-cores and Intel® UHD graphics driven by Xe architecture as the 12th Gen Intel® Core™ processors, easing application migration across Intel® CPU performance and power ranges. With support for up to eight processor cores, the module fits a wide range of applications, including point-of-sale terminals, digital signage controllers, HMI solutions, and medical equipment. The board is ideal for system products that are exposed to harsh ambient conditions. It is designed for extended temperature range and 24/7 continuous operation.
The new MSC SM2S-ASL offers triple independent display support with a maximum of 4k resolution, fast LPDDR5 memory with up to 16GB and optional IBECC capabilities, eMMC 5.1, USB 3.1 and PCIe Gen3 on a power saving and cost-efficient SMARC 2.1.1 module.
www.tria-technologies.com/products/msc-sm2s-asl/
Tria Technologies www.tria-technologies.com
FEATURES
Ą Intel Atom® processors x7000RE/C Series (codenamed “Amston Lake”)
Ą Power efficient processors, TDP 6 to 12W
Ą Up to eight processing cores
Ą Integrated Intel UHD Gen.12 graphics, max. 32 execution units
Ą Up to 16GB LPDDR5 SDRAM, In-band ECC
Ą LVDS / Embedded DisplayPort and MIPI-DSI
Ą Full customization available
Embedded Hardware, including Boards and Systems
MaaXBoard OSM93 features an NXP i.MX 93 System on Chip compute module, with integrated AI/ML NPU accelerator, EdgeLock security enclave and Energy Flex architecture that supports separated processing domains: the Application domain with two Arm® Cortex®-A55 (1.7 GHz) cores, the real-time domain with an Arm® Cortex®-M33 (250 MHz) core, and the Flex domain with an Arm® Ethos-U65 NPU (1 GHz). Other resources on the fitted MSC OSM-SF-IMX93 solder-down module include eMMC (16GB) memory, LPDDR4 (2GB, 3.7 GT/s) with inline ECC support, an RTC, and an NXP PCA9451 PMIC.
The Raspberry Pi form-factor carrier board adds QSPI flash memory (16Mbit) plus connectivity and UI interfaces. High-speed interfaces include four USB 2.0 interfaces (2x host type A, 1x host type-C, 1x device type-C), MIPI DSI display and MIPI CSI camera interfaces, two 1 Gbps Ethernet ports, and two high-speed CAN interfaces.
• M.2 key-E connector on rear enables optional NXP-based tri-radio M.2 modules
• Four USB 2.0 interfaces (2x host type A, 1x host type-C, 1x device type-C)
• Pi-Hat 40-pin header, 6-pin ADC header and 6-pin SAI digital audio header
• Develop with the SBC, then customize for your product
www.linkedin.com/company/triatechnologies/
Tria MaaXBoard OSM93
OSM SF-IMX93
The MSC OSM-SF-IMX93 is based on the new OSM 1.1 standard (Size-S, “Small”) for low-cost embedded computer modules that are fully machine-processable during soldering, assembly, and testing.
The highly scalable module is equipped with i.MX 93 applications processors from NXP. The processors integrate Arm Cortex-A55 cores, bringing performance and energy efficiency to Linux-based edge applications, and the Arm Ethos-U65 microNPU, enabling developers to create more capable, cost-effective, and energy-efficient machine learning (ML) applications. The i.MX 93 processors deliver advanced security with an integrated EdgeLock secure enclave and an efficient 2D graphics processing unit (GPU).
The MSC OSM-SF-IMX93 provides fast, low-power LPDDR4 memory with inline ECC support, combined with up to 256GB of eMMC flash memory. Various interfaces for embedded applications are available, such as dual Gigabit Ethernet (RGMII), USB 2.0, 2x CAN-FD, MIPI-DSI, and MIPI CSI-2 (2-lane) for connecting a camera. Typical design power ranges from 2 W to 4 W.
The module is compliant with the new OSM 1.1 standard (OSM-SF). For evaluation and design-in of the new OSM-SF-IMX93 module, MSC provides a development platform and a starter kit. A Yocto-based Linux Board Support Package is available (Android support on request).
FEATURES
• Based on the Qualcomm QCS6490 SOC
• 4x Arm Cortex-A78 (up to 2.7 GHz)
• 4x Arm Cortex-A55 (1.9 GHz)
• GPU: Adreno 643 (812 MHz)
• DSP/NPU: 6th gen Qualcomm AI Engine (13 TOPS)
• VPU: Adreno 633, video enc/dec up to 4K30 / 4K60
• 2x USB 3.1 (2L), 3x USB 2.0 interfaces
• 2x PCIe Gen3 (1L), 1x PCIe Gen3 (2L) interface
• SMARC 2.1.1 edge connector (314 pin)
• Operating temperature: -40 °C to +85 °C
• Full customization available
The QCS6490 Vision-AI Development Kit features an energy-efficient, multi-camera, SMARC 2.1.1 compute module based on the Qualcomm QCS6490 SOC.
The QCS6490 SOC integrates high-performance cores, including an 8-core Kryo™ 670 CPU, Adreno 643 GPU, Hexagon DSP with 6th-gen AI Engine (13 TOPS), Spectra 570L ISP (64MP/30fps capability), and Adreno 633 VPU (4K30/4K60 encode/decode rates), ensuring exceptional concurrent video I/O processing performance.
A useful subset of the SMARC interfaces is pinned out on the carrier board to support four cameras, two displays, five USB interfaces, CAN-FD, Gigabit Ethernet, and optional high-speed Wi-Fi networking. The audio subsystem includes two PDM microphones, a stereo audio codec, a digital audio interface, and analog audio jack I/O.
Integrated TPM and Wi-Fi/BT modules are offered as a SMARC assembly option.
The carrier board has M.2 slots for NVME storage and advanced wireless options. Compact 100 mm x 79 mm carrier board dimensions and mounting hole alignment with similar AI developer boards, enable a drop-in capability with many enclosures.
The development kit ships with a Yocto-based Linux BSP, plus example AI and multi-camera open-source applications. Additional OS options are available, with Windows 11 IoT Enterprise due in 2H24. (Qualcomm also supports Android and Ubuntu Linux on the QCS6490; availability on this platform will depend on demand.)
Target Apps
1. Ruggedized Handheld Industrial Scanners and Tablets
2. Drone/UAV/other mobile Vision-AI Edge Compute Applications
3. Info kiosks, Vending Machines and Interactive HMI Systems
Our module families MSC SM2S-QCS6490 and MSC SM2S-QCS5430 feature high processor power and AI capabilities on the compact SMARC form factor. The computer-on-modules are powered by Arm-based Qualcomm QCS5430 and Qualcomm QCS6490 processors and bring a new level of performance to Avnet’s portfolio of SMARC modules.
The compact modules, with dimensions of 82 x 50mm, offer an optimized ratio of high performance to energy efficiency. The SM2S-QCS6490 module family delivers maximum CPU, GPU, and NPU performance at a power consumption of only 7W, whereas the cost-optimized SM2S-QCS5430 modules balance compute performance against a power consumption of 5W. On the AI-capable modules with Qualcomm processors, large language models (LLMs) with millions of parameters can run locally.
• Integrates a Qualcomm Kryo 670 CPU, which contains up to eight cores
• The GPU supports video encode/decode at up to 4K30/4K60
• Modules deliver high-performance edge AI with up to 12 TOPS
• Comprehensive ecosystem – including development platform and starter kit
• Designed and manufactured in-house to ensure exceptional quality
• Full customization available
Tria Technologies www.tria-technologies.com
Embedded Hardware, including Boards and Systems
SM2S-IMX95
The MSC SM2S-IMX95 SMARC module family is based on the new SMARC 2.1.1 standard allowing easy integration with SMARC baseboards.
The i.MX 95 family from NXP combines multi-core, high-performance compute and immersive 3D graphics. It is also the first i.MX applications processor family to integrate NXP’s eIQ® Neutron neural processing unit (NPU) and a new image signal processor (ISP) developed by NXP, helping developers build powerful, next-generation edge platforms.
The new SMARC module family is ideal for applications that require a small form factor compute platform with high performance, graphics, video and AI capabilities, and a rich I/O feature set. These applications include industrial and medical platforms, smart home and building (with vision and high-speed networking), industrial transport, and grid infrastructure.
• Full customization available
Tria Technologies www.tria-technologies.com
www.linkedin.com/company/triatechnologies/
Tria XRF16 Gen3 SOM
The Tria XRF16™ RFSoC System-on-Module is designed for integration into deployed RF systems demanding small footprint, low power, and real-time processing. The XRF16 features the AMD Xilinx Zynq® UltraScale+™ RFSoC Gen3 ZU49DR, with 16 RF-ADC, 16 RF-DAC channels, and 6GHz RF bandwidth.
Combine the production-ready XRF16 module with the XRF16 Carrier Card and Avalon™ software suite to jumpstart proof-of-concept and application development. Then deploy your system with the same XRF16 module used for proof-of-concept. Example code and tutorials demonstrate AMD Xilinx RFSoC multi-tile sync (multi-converter sync) and multi-board synchronized analog capture.
• Based on AMD Xilinx Zynq UltraScale+ Gen3 ZU49DR RFSoC
• 16x ADCs, 14-bit, up to 2.5 GSPS
• 16x DACs, 14-bit, up to 9.85 GSPS (10 GSPS available)
• High-speed data transfer – 16x ultra-fast AMD Xilinx GTY serial transceivers
• Ultra-low jitter programmable sampling clocks
• 4” x 5” footprint
• Industrial temperature rated
www.linkedin.com/company/triatechnologies/
Lauterbach Debugger for RH850
Lauterbach RH850 debug support at a glance:
The Lauterbach Debugger for RH850 provides high-speed access to the target processor via the JTAG/LPD4/LPD1 interface. Debugging features range from simple Step/Go/Break to multicore debugging. Customers value the performance of high-speed flash programming and intuitive access to all of the peripheral modules.
TRACE32 allows concurrent debugging of all RH850 cores.
• The cores can be started and stopped synchronously.
• The state of all cores can be displayed side by side.
• All cores can be controlled by a single script.
All RH850 emulation devices include a Nexus trace module, which enables multicore tracing of program flow and data transactions. Depending on the device, trace data is routed to one of the following destinations:
• An on-chip trace buffer (typically 32KB)
• An off-chip parallel Nexus port for program flow and data tracing
• A high bandwidth off-chip Aurora Nexus port for extensive data tracing
The off-chip trace solutions can store up to 4GB of trace data and also provide the ability to stream the data to the host for long-term tracing, thus enabling effortless performance profiling and qualification (e.g. code coverage).
Lauterbach, Inc. www.lauterbach.com
800-408-8353
FEATURES
• AMP and SMP debugging for RH850, GTM and ICU-M cores
• Multicore tracing
• On-chip and off-chip trace support
• Statistical performance analysis
• Non-intrusive, trace-based performance analysis
• Full support for all on-chip breakpoints and trigger features
• AUTOSAR debugging
info_us@lauterbach.com
508-303-6812
TRACE32 Multi Core Debugger for TriCore Aurix
Lauterbach TriCore debug support at a glance:
For more than 15 years Lauterbach has been supporting the latest TriCore microcontrollers. Our tool chain offers:
• Single and multicore debugging for up to 6 TriCore cores
• Debugging of all auxiliary controllers such as GTM, SCR, HSM and PCP
• Multicore tracing via MCDS on-chip trace or via high-speed serial AGBT interface
The Lauterbach Debugger for TriCore provides high-speed access to the target application via the JTAG or DAP protocol. Debug features range from simple Step/Go/Break up to AUTOSAR OS-aware debugging. High-speed flash programming performance of up to 340kB/sec on TriCore devices and intuitive access to all peripheral modules are included.
Lauterbach’s TRACE32 debugger allows concurrent debugging of all TriCore cores.
• Cores can be started and stopped synchronously.
• The state of all cores can be displayed side by side.
• All cores can be controlled by a single script.
Lauterbach, Inc. www.lauterbach.com
TS-8820-4100
FEATURES
• Debugging of all auxiliary controllers: PCP, GTM, HSM and SCR
• Debug access via JTAG and DAP
• AGBT high-speed serial trace for Emulation Devices
• On-chip trace for Emulation Devices
• Debug and trace through reset
• Multicore debugging and tracing
• Cache analysis
With a powerful FPGA at its core, the TS-8820-4100 is a robust, flexible industrial process-control platform with a packed feature set of electrically isolated and buffered I/O. It is designed to meet building-automation needs, including building access, power control, and monitoring, and suits any application that needs multiple robust, optically isolated I/O lines.
The TS-8820-4100 provides ADCs, DACs, relays, GPIO, CAN, PWMs, pulse counters, thermistor-ready inputs, RS-232, RS-485, Power over Ethernet (PoE), and more in an enclosed, DIN-mountable package.
Industrial Automation and Control
FEATURES
• NXP i.MX 6UL (Arm Cortex-A7 @ 696MHz)
• Rugged screw terminals for reliable connections
• 8 isolated digital inputs and 6 non-isolated inputs (40V tolerant)
More than 17 years of experience in ARM debugging enable Lauterbach to provide best-in-class debug and trace tools for ARMv8 based systems:
• Multicore debugging and tracing for any mix of ARM and DSP cores
• Support for all CoreSight components to debug and trace an entire SoC
• Powerful code coverage and run-time analysis of functions and tasks
• OS-aware debugging of kernel, libraries, tasks of all commonly used OSs
Lauterbach debug tools for ARMv8 help developers throughout the whole development process: from the early pre-silicon phase, debugging on an instruction set simulator or a virtual prototype, through board bring-up, to quality and maintenance work on the final product.
Debugger features range from simple step/go/break, programming of on-chip-flash, external NAND, eMMC, parallel and serial NOR flash devices and support for NEON and VFP units, to OS-aware debug and trace concepts for 32-bit and 64-bit multicore systems.
TRACE32 debuggers support simultaneous debugging and tracing of homogeneous multicore and multiprocessor systems with one debug tool.
Start/stop synchronization of all cores and a time-correlated display of code execution and data read/write information provide the developer with a global view of the system's state and the interplay of the cores.
• High-tech company with long-term experience
• Technical know-how at the highest level
• Worldwide presence
• Time to market
FEATURES
• Full support for all CoreSight components
• Full architectural debug support
• Support for the 64-bit instruction set and the 32-bit ARM and THUMB instruction sets
• 32-bit and 64-bit peripherals displayed on logical level
• Support for 32-bit and 64-bit MMU formats
• Auto-adaption of all display windows to AArch32/AArch64 mode
• Ready-to-run FLASH programming scripts
• Multicore debugging
• On-chip trace support (ETB, ETF, ETR)
• Off-chip trace tools (ETMv4)
• AMP debugging with DSPs, GPUs and other accelerator cores
About our Products
• Everything from a single source
• Open system
• The full array of architectures supported
Our Company Philosophy
• Open user interface for everything
• Long-term investment through modularity and compatibility
PCAN-PCI/104-Express FD
FEATURES
• PCI/104-Express card, 1 lane (x1)
• Form factor PC/104
• Up to four cards can be used in one system
• 1, 2, or 4 High-speed CAN channels (ISO 11898-2)
• Complies with CAN specifications 2.0 A/B and FD (ISO and Non-ISO)
• CAN FD bit rates for the data field (64 bytes max.) from 20 kbit/s up to 12 Mbit/s
• CAN bit rates from 20 kbit/s up to 1 Mbit/s
• Connection to CAN bus through D-Sub slot bracket, 9-pin (in accordance with CiA® 106)
• FPGA implementation of the CAN FD controller
• Microchip CAN transceiver MCP2558FD
• Galvanic isolation on the CAN connection up to 500 V, separate for each CAN channel
• CAN termination and 5-Volt supply to the CAN connection can be activated through a solder jumper
• Extended operating temperature range from -40 to +85 °C (-40 to +185 °F)
The PCAN-PCI/104-Express FD allows the connection of PCI/104-Express systems to CAN and CAN FD buses. Up to four cards can be stacked together. The CAN bus is connected via 9-pin D-Sub connectors on the supplied slot brackets. There is galvanic isolation of up to 500 Volts between the computer and the CAN side. The card is available as a single-, dual-, or four-channel version.
The monitor software PCAN-View and the programming interface PCAN-Basic are included in the scope of supply and support CAN FD.
• Multiple PC/104 cards can be operated in parallel (interrupt sharing)
• 14 port and 8 interrupt addresses are available for configuration using jumpers
• 1 or 2 High-speed CAN channels (ISO 11898-2)
• Bit rates from 5 kbit/s up to 1 Mbit/s
• Compliant with CAN specifications 2.0A (11-bit ID) and 2.0B (29-bit ID)
• Connection to CAN bus through D-Sub slot bracket, 9-pin (in accordance with CiA® 106)
• NXP SJA1000 CAN controller, 16 MHz clock frequency
• NXP PCA82C251 CAN transceiver
• 5-Volt supply to the CAN connection can be connected through a solder jumper, e.g., for external bus converter
• Optionally available with galvanic isolation on the CAN connection up to 500 V, separate for each CAN channel
• Extended operating temperature range from -40 to +85 °C (-40 to +185 °F)
PEAK-System Technik GmbH
CAN Interface for PC/104
The PCAN-PC/104 card enables the connection of one or two CAN networks to a PC/104 system. Multiple PCAN-PC/104 cards can easily be operated using interrupt sharing.
The card is available as a single or dual-channel version. The opto-decoupled versions also guarantee galvanic isolation of up to 500 Volts between the PC and the CAN sides.
The package is also supplied with the CAN monitor PCAN-View for Windows and the programming interface PCAN-Basic.
• Compliant with CAN specifications 2.0A (11-bit ID) and 2.0B (29-bit ID)
• Connection to CAN bus through D-Sub slot bracket, 9-pin (in accordance with CiA® 106)
Four-Channel CAN Interface for PC/104-Plus
The PCAN-PC/104-Plus Quad card enables the connection of four CAN networks to a PC/104-Plus system. Up to four cards can be operated, with each piggy-backing off the next. The CAN bus is connected using a 9-pin D-Sub plug on the slot brackets supplied. There is galvanic isolation of up to 500 Volts between the computer and CAN sides.
The package is also supplied with the CAN monitor PCAN-View for Windows and the programming interface PCAN-Basic.
info@peak-system.com
www.linkedin.com/company/peak-system
+49 (0)
• FPGA implementation of the CAN controller (SJA1000 compatible)
• NXP PCA82C251 CAN transceiver
• Galvanic isolation on the CAN connection up to 500 V, separate for each CAN channel
• Supplied only via the 5 V line
• 5-Volt supply to the CAN connection can be connected through a solder jumper, e.g., for external bus converter
• Extended operating temperature range from -40 to +85 °C (-40 to +185 °F)
The PCAN-PCI/104-Express card enables the connection of one or two CAN buses to a PCI/104-Express system. Up to four cards can be stacked together. The CAN bus is connected using a 9-pin D-Sub plug on the slot brackets supplied. There is galvanic isolation of up to 500 Volts between the computer and CAN sides.
The package is also supplied with the CAN monitor PCAN-View for Windows and the programming interface PCAN-Basic.
• Compliant with CAN specifications 2.0A (11-bit ID) and 2.0B (29-bit ID)
• Connection to CAN bus through D-Sub slot bracket, 9-pin (in accordance with CiA® 106)
• NXP SJA1000 CAN controller, 16 MHz clock frequency
• NXP PCA82C251 CAN transceiver
• 5-Volt supply to the CAN connection can be connected through a solder jumper, e.g., for external bus converter
• Extended operating temperature range from -40 to +85 °C (-40 to +185 °F)
CAN Interface for PC/104-Plus
The PCAN-PC/104-Plus card enables the connection of one or two CAN networks to a PC/104-Plus system. Up to four cards can be operated, with each piggy-backing off the next. The CAN bus is connected using a 9-pin D-Sub plug on the slot bracket supplied.
The card is available as a single or dual-channel version. The opto-decoupled versions also guarantee galvanic isolation of up to 500 Volts between the PC and the CAN sides.
• Optionally available with galvanic isolation on the CAN connection up to 500 V, separate for each CAN channel
The package is also supplied with the CAN monitor PCAN-View for Windows and the programming interface PCAN-Basic.
info@peak-system.com
www.linkedin.com/company/peak-system
Industrial Automation and Control
AKHET® BoxFlex M is the “tiny house” of the BoxFlex series, enabling maximum power density in a compact space. The system is specially designed for machines and control cabinets.
With 5 expansion slots and two 120mm fans, high performance and versatility are possible in a small space. The fans allow operating temperatures of up to 45°C and a maximum TDP of 100W. It can be equipped with an Intel® Core™ i7 (20C/28T) at up to 5.4GHz or an AMD Ryzen 9 (12C/24T), and provides DDR5 ECC RAM and the latest PCIe 5.0. A replaceable dust filter, a bracket for wall mounting, and an optional DC power supply equip the BoxFlex M for use in harsh environments. All connectors are available on the front side. Optionally upgradeable with common camera interfaces such as GigE Vision, USB 3.0, FireWire, CameraLink (CL), PoE adapters, or GPUs, the product is suitable for image processing and AI applications in industrial environments.
Pyramid North America Inc. www.pyramid-america.com
FEATURES
• Four expansion slots PCIe 5.0 + 1, all full height
Apacer manufactures a variety of stable and durable digital storage solutions designed for vertical application markets. In the field of industrial applications, Apacer listens to customer needs, tracks market trends, and actively enhances its research and development of embedded SSD storage and the associated software and hardware (e.g. CoreSnapshot, which performs fast full-disk backups without additional software, and CoreRescue, which can activate disaster recovery via multiple trigger mechanisms). Apacer also provides stable and highly efficient designs to meet the diverse development characteristics of the vertical application market. Customized requests are welcome.
• SLC-LiteX for higher endurance is available upon request
Apacer Memory America Inc. www.apacer.com sdsales@apacerus.com
Apacer has developed the world’s first fully lead-free memory modules. Lead in the resistive layer is replaced by other elements and alloys. Therefore, it doesn’t rely on any RoHS exemptions. Manufacturers who source these modules will no longer have to worry about how RoHS standards may change in the future. This is particularly useful for products with long certification times and high initial costs, such as the latest AI and Edge servers currently in production. Many products in the healthcare, telecom and networking verticals are also likely to benefit.
Apacer Fully Lead-Free technology can be applied to UDIMM, SODIMM, RDIMM, ECC UDIMM and ECC SODIMM form factors. These products will soon benefit from other value-adding features such as VLP, wide-temperature operation and anti-sulfuration, making Apacer’s DRAM modules suitable for more challenging applications and environments.
Apacer Memory America Inc. www.apacer.com
408-518-8699
Memory and Storage
FEATURES
• Available form factors: UDIMM, SODIMM, ECC UDIMM, ECC SODIMM, RDIMM
• Temperature options: Standard Temperature, Wide Temperature
• Fully compliant with EU RoHS regulations; no exemptions required
• Avoids the need for re-validation when a product must change after an exemption expires
With a successful track record protecting combat systems in Aerospace and Defense (A&D), Star Lab helps customers build secure methodologies into Linux-based embedded systems. Our products are easily integrated, flexible, and designed to handle worst-case threat scenarios: hands-on physical or privileged attackers. We prevent attackers from tampering with or altering software and firmware and limit their maneuverability, ensuring your systems remain protected, resilient, and operational even under attack.
Star Lab's Kevlar Embedded Security is a layered Linux security solution that helps thwart IP theft and achieve cyber resiliency for commercial embedded systems. It is designed using a threat model that assumes an attacker will gain administrative-level access to the system, and is intended for implementation early on in system development. It maintains the integrity and confidentiality of critical applications, data and configurations at rest, through boot, and runtime, and is compatible with both Intel and Arm chipsets using Yocto or Wind River Linux. Each capability deploys as a selectable Yocto layer, allowing customers to customize their security approach.
Kevlar Embedded Security hardens the kernel to prevent attackers from using common exploit techniques. At the OS level, it implements mandatory access controls, application allowlisting, and sandboxing capabilities to limit attacker maneuverability.
The product also provides at-rest and runtime data protection capabilities. Star Lab’s data-at-rest solution includes self-encrypting start-up and built-in integrity checks to detect tampering. At runtime, Kevlar Embedded Security isolates containers to protect critical data and prevent access by external entities. It also establishes mandatory access controls that limit what on the system is allowed to access a container.
Kevlar Embedded Security easily integrates into customers’ existing DevSecOps tooling. The product also comes with a full test suite so customers can validate that they’ve implemented the security as desired.
FEATURES
• Compatible with Intel x86 and ARM
• Compatible with Yocto and Wind River Linux
• Cyber resilience
• Kernel hardening
• At-rest and runtime data protection
• Full disk encryption
• Application allowlisting
• Mandatory access controls
• Seamless DevSecOps integration
• Full test suite included
wolfSSL Embedded TLS Library, FIPS 140-3 validated
wolfSSL focuses on providing lightweight and embedded security solutions with an emphasis on speed, size, portability, features, and standards compliance. wolfSSL supports high-security designs in government, automotive, avionics, and other industries. For government consumers, wolfSSL is FIPS 140-3 validated (cert #4718), with Common Criteria support. In avionics, wolfSSL supports complete RTCA DO-178C Level A certification. In automotive, it supports MISRA C capabilities. wolfSSL supports industry standards up to the current TLS 1.3 and DTLS 1.3, is up to 20 times smaller than OpenSSL, offers a simple API and an OpenSSL compatibility layer, is backed by the robust wolfCrypt cryptography library, and much more. Our products are open source, giving customers the freedom to look under the hood. wolfSSL has a mean time to release a fix for vulnerabilities of less than 36 hours, offers commercial support up to 24/7, and has the best-tested cryptography on the market today.
As consumers look for ways to retrofit their smart home devices without power cabling, device makers are looking for new ways to make their battery-operated IoT smart home products more power-efficient at all levels while meeting customers’ needs.
This whitepaper helps device makers identify the top issues and considerations for connected, battery-operated smart home devices, and offers a case study examining design principles that can significantly lower power consumption in battery-operated smart home applications such as smart doorbells, smart door locks, and smart thermostats.
Read “Lower Your Power Consumption for Battery-Operated Smart Devices” at www.embedded-computing.com/white-paper-library
Remote wireless devices connected to the Industrial Internet of Things (IIoT) run on Tadiran bobbin-type LiSOCl2 batteries.
Our batteries offer a winning combination: a patented hybrid layer capacitor (HLC) that delivers the high pulses required for two-way wireless communications; the widest temperature range of all; and the lowest self-discharge rate (0.7% per year), enabling our cells to last up to 4 times longer than the competition.
Looking to have your remote wireless device complete a 40-year marathon? Then team up with Tadiran batteries that last a lifetime.