ARM E-mag 2015


2015 Volume 2 Number 1 embedded-computing.com/topics/processing

• ARM DesignStart for fast innovation

• How SoC growth influences software development

• How CoAP can save us from protocol obesity


Sponsored by: ARM, Zebra Technologies, IAR Systems, and SeaLevel Systems


E M A G

Featuring

ARM DesignStart enables fast innovation on the cheap By Brandon Lewis, Asst. Managing Editor


Dealing with a complex world – How SoC growth influences software development Q&A with Hobson Bullman, ARM

IoT and efficient data transport: How CoAP can save us from protocol obesity By Zebra Technologies


© 2015 OpenSystems Media, © Embedded Computing Design. All registered brands and trademarks within the ARM E-mag are the property of their respective owners.


Smaller, faster, smarter code with IAR Embedded Workbench

High performance, ease of use, complete code control, functional safety certification – use the right tools.

Join us at ARM TechCon! Booth 512


ARM DesignStart enables fast innovation on the cheap By Brandon Lewis, Asst. Managing Editor



A few weeks ahead of ARM's TechCon, the IP design house is pushing to give embedded and IoT product developers a head start of their own. On Tuesday ARM announced a revamp of its DesignStart portal, a web-based resource for SoC designers that accelerates design times by providing pre-commercial access to physical and processor IP for the Cortex-M0 for free, and by "free" I mean free beer, not free speech.

In addition to the -M0 System Design Kit (SDK) that includes the IP, peripherals, a test bench, and software, designers can leverage the full Keil MDK development suite on a free 90-day license, and when ready for prototyping or to mix in other IP, ARM offers the relatively low-cost ($995) Versatile Express FPGA development board through the DesignStart portal as well. At production time the company has also implemented a $40,000 fast-track commercial IP licensing program through DesignStart that covers the -M0 IP, SDK, Keil tools, and a year of technical support; the "free" here being that the standardized license can hopefully "free you up of lawyers."

The upgraded DesignStart program comes amidst a new wave of custom SoC development where designers are increasingly integrating connectivity, mixed-signal, and sensor IP alongside processing capabilities for systems at the edge.

Figure 1 | IP provided through DesignStart represents more than 15 ARM Partner foundries and 85 process node technologies ranging from 250 nm to 16 nm / 14 nm FinFET.


As the Cortex-M0 represents somewhat of an inflection point for IoT edge devices, the ability to design, simulate, and test SoCs based on the technology through a quick-start program significantly reduces barriers to entry for startups, Maker Pros, and even engineering design teams within larger companies that are working with ARM IP for the first time. Design house and foundry partnerships through DesignStart also help smooth out the commercialization process, which Ian Smythe, Director of Marketing Programs in ARM's Processor Division, said in a briefing matches the company's goals of creating "a designer-friendly system that enables products to get out there, experimented with, and delivered as quickly and easily as possible."

"The IoT has made embedded sexy, and it's a combination of several factors," he said. "Embedded has always been a huge market, but adding connectivity to it provides multiple possibilities and we're still at that experimental growth stage with multiple opportunities being created. That's the opportunity around the convergence of the Internet and the embedded space, the opportunity of connecting things together and developing new innovative services," Smythe says. "When we looked at the model we could use to support this, we knew we had a strong ecosystem and that Cortex-M was shipping in mass quantities, but how can we enable people to build SoCs for their market, and quickly?

"The [DesignStart] relaunch is intended to accelerate innovation in the embedded and IoT markets," he continues. "ARM felt the most important thing was to be able to equip designers in a startup, custom mode with free access to the IP, so we made the Cortex-M0 system IP, as well as the Cortex-M0 SDK, downloadable for free so people can go away and design their heart out. Download the IP, stick it into the toolchain, and simulate. You can go away and design and prototype, and at the point where you're ready, ARM provides a fast, easy, low-cost way to get access to commercial IP. It's a short license span that doesn't need 5 lawyers, and for 40k you can take this to production.

"If we can encourage growth with a web-based bit of IP, that's what we want to try and see," he adds.

The Cortex-M0 evaluation package is available for download as a pre-configured Verilog netlist, and the updated DesignStart portal also gives access to a broad range of ARM Artisan physical IP. To get started, go to http://designstart.arm.com.


Dealing with a complex world – How SoC growth influences software development Q&A with Hobson Bullman, ARM

Over the past few years, electronics systems have become increasingly complex while the software running on them has become even more complex. On the occasion of ARM's 25th anniversary, Embedded Computing Design spoke with Hobson Bullman, General Manager of Development Solutions at ARM, about how the growth of SoCs is influencing software development, and how development tools in turn are helping software developers cope with changing requirements.

In the past 25 years, ARM has seen some major changes in electronics systems, and particularly within mobile devices. Could you provide an overview of the major changes, in your view?

BULLMAN: Yes, certainly. ARM has been creating software tools as well as microprocessor designs for 25 years this year. And while we're still doing applications processors and more specialized processors, compilers, debuggers, and models, the underlying SoC technology has become much more complex in the past couple of years.

Take a high-end mobile phone as an example. In the year 2000, a high-end mobile phone had one application processor, one DSP modem, and one simple operating system (OS). From a software point of view, one person could understand the complete software stack.


These days, however, a high-end mobile device looks very different. It has up to 10 application processors, a graphics processor, and many embedded processors for connectivity – for instance, Wi-Fi, Bluetooth, GPS, and infrared – but also cameras, accelerometers, and heart rate monitors. The chip has become larger, the OS has become more open. The system is much more complex and much more capable; it can no longer be understood by one person. Instead it requires teams of people who understand its various parts, and this has a huge impact on software and tools development, which is my specialty.

Back in 2000, did you see all of these changes coming?

I thought you would ask that question, so fortunately I brought along my crystal ball. We saw some of it coming. For example, we knew that multicore, heterogeneous computing, and, to an extent, 64-bit architectures would become prevalent in high-end markets. It was just a question of when.

Hence, we knew we had to change, but we didn't understand upfront the whole impact on tools. For example, we didn't imagine in those early days that we'd end up supporting high-level languages like OpenCL. We probably knew that high-level languages would come to market, but we couldn't predict quite what they would be for. So while the general direction didn't surprise us, some specific solutions were slightly surprising.

Were you able to anticipate any of the changes to development tools?

I think as an industry we were able to – as a broader ecosystem we are strong enough to find solutions and deal with the complexity. We are close enough with our partners to understand the challenges they face and provide ways of helping. We have a great ecosystem where some of the tools come from ARM, some tools come from the open source community, and many others come from the large and strong ecosystem of third-party tools vendors.

How much more complicated is an SoC nowadays?

Heterogeneity and multiprocessing is a huge change. Cache coherency between the different processors, for instance, adds a lot of complexity to hardware and software.

Consider server chips. ARM is now coming into servers, and there is a chip today that has up to 48 cores, with many more cores coming in the future. These chips have carrier-grade networking fabric with PCI Express and 100 Gigabit Ethernet. Take that, along with the cores, the cache coherent interconnect, and a whole bunch of domain-specific workload accelerators, and you end up with an extremely sophisticated and capable system.

From a tooling point of view, part of the challenge is to abstract the complexity and provide higher level information taken from all of the details of the system. This lets the user get some insight into how the system is performing and how the system is behaving. You can determine whether it is behaving correctly or incorrectly, and whether it's performing fast or slow. Even with just an evaluation of our tool suite it is possible to test-drive all existing features so that users get an understanding of our tools.

How do you actually debug a system with 48 cores?

We pay equal regard to both correctness debugging and performance debugging. Correctness debugging is all about looking at the internals of a system and checking that the program is behaving as it is designed to. Performance debugging is more about finding performance bottlenecks and resolving them. We use all of the capabilities that ARM provides on a chip to help users debug. There is a lot of infrastructure on ARM chips called CoreSight, which has been adding more capability as processor and system technology have evolved. So effectively, the more processors, the more fabric, and the more capabilities we put onto a system, the more debug logic ARM puts into the system. And our job in the tools team is to gather information in real time to present it to the user. With 48 cores running at 2 GHz this can be quite challenging, so in order to gather the right information for the user, the agent running on the device has to be designed intelligently.

Well, the good news is that we have modern operating systems to help us, and workloads with many parallel tasks scale well on multicore systems running such an OS. However, debugging can be tough in the early days when the software isn't mature yet, and this early support is critical given the pace of innovation. But once the software is mature and these things tend to run, you can debug at a software level and not at a system level.

Given your explanation of how complex systems have become and how developers can analyze their software, where do people actually start building software nowadays?

There is a big change in the way software is developed today as compared to 15-20 years ago.


In the past, software was developed on either real hardware or an FPGA. Developing on real hardware has many advantages, but it also has its limitations. Today, many people have started to develop their software using software models. We've seen model adoption grow exponentially over the last five years. Modeling technology hit a sweet spot in the sense that as systems got more complex, models got more capable, making them more attractive to use.

At ARM we have responded by providing "gold standard" models. We run the same validation suites on these models as we do on real hardware. In most cases we develop the validation suite against these models and then build the actual hardware. Through this we're confident that we have high-fidelity models people can build software against. As a software developer, this means you can start your software development much earlier.

A great example of this from inside ARM itself was that when we moved to a new 64-bit architecture, we spent two years building software on the model. When the first silicon came back we had Linux running on it within two weeks. If we hadn't started two years before, that would have taken a lot longer.

We recently announced that the team from Carbon Design Systems will be joining ARM, which extends our commitment to providing early and accurate models.

In addition to time to market, what are the other benefits of using software models?

Visibility is another benefit. For example, you can get very little and limited information from the cores in a multicore device with each core running at 2 GHz. Model visibility is inherently higher than hardware visibility because you can trace whatever you like. We know of developers debugging complex problems on models because of improved visibility.

Going back to the complexity within mobile devices, what have you learned from the complexity of PCs?

Over time, mobile devices have become as capable as PCs, and while we started out with ARM producing tools for embedded systems, we now produce tools for computers.

Sometimes we've adopted some of the desktop tools and rolled them in. We have also increased our investments in important open source software, like the GNU compiler, to make ARM the best possible compiler target that we could. Actually, we've spent a lot of time supporting new architectures and new features of the various compilers, and invested in the other components of our developer tool suites as well. Now we have a sophisticated debugger, a high-quality whole-system performance analysis tool, and we ensure that a wide range of operating systems is well supported. We also work with the ecosystem of desktop and server tools vendors to make sure they can port their tools to ARM.

As ARM began developing tools for the embedded market, how have you seen it change over the years?

The embedded software market is very interesting. There is a lot more diversity in the embedded market today, both in devices and in the software running on them. We see embedded devices with Cortex-A-class processors and Cortex-M-class microcontrollers (MCUs); these devices run a huge variety of operating systems. The Cortex-M MCUs run RTOSs while the Cortex-A processors run embedded Linux or proprietary operating systems. The variety is breathtaking compared to the OS landscape for mobile devices and enterprise systems, which have standardized around just a few platforms.

In embedded, we had to change to address this problem of diversity because systems are quite complex these days but the software is much less standardized. C and C++ are still dominant in embedded rather than higher level languages. Debugging is still very much a number-one concern because a lot of problems exist at the bare metal level. People are trying to reuse code more and more; however, this is difficult due to the lack of software standards, and this means that the value in tools shifts. When we started this game it was all about compilers, and then debuggers and integrated development environments (IDEs) became important because of productivity. Now we see a lot more value in middleware and in integration of different software stacks. At ARM we have a standardization exercise in embedded called CMSIS, the Cortex MCU Software Interface Standard, and this is our response to help control the diversity and complexity of software in the Cortex-M world (Figure 1).

Figure 1 | The CMSIS standard family helps minimize some of the complexity of myriad operating systems, drivers, tools, and more to ease software development on ARM Cortex-M devices.

How does CMSIS help to manage the diversity in embedded systems?

The approach is to embrace the diversity. We provide a standard for operating systems to interwork; standards for the embedded tools to interwork; peripheral description standards in embedded tools; driver standards for using different drivers with different operating systems; software packaging standards; and hardware abstraction layer standards. Over time, it becomes easier for software engineers to learn and use these standards, and then move from device to device within a particular MCU family or across MCU families.

At ARM, it is very important to find ways in which our silicon partners can produce diverse, specialized, and differentiated devices, but not in a way that makes it very hard for the software engineers. We need to find ways that promote software compatibility and portability across diverse and differentiated MCUs.

So with these standards is it possible to customize tools for certain requirements?

Yes, absolutely. We provide technology that helps people reuse their software assets without prescribing what these assets are.

Microprocessors are increasingly used in applications that demand a high degree of reliability and safety. How have safety standards influenced software toolchains?

This is a different sort of complexity. Safety complexity is about process as much as about technology, and whether it's in industrial, aerospace, automotive, or medical, the state of the art for safety continues to get higher. Systems that didn't contain technology in the past now have lots of technology inside, and people need to ensure that their systems are compliant with standards to prove that they are safe.


Creating a product that needs to be qualified to safety standards raises a diverse set of challenges for a project team. A lot of work is required to ensure the product behaves as specified, with well-understood and safe failure modes. Quite a few of these functions are done in software these days, and so there is an implicit strong dependency on the tools used to create such software, such as compilers and other code generators. Some other classes of tools do not generate software, but instead provide information or guidelines to improve software quality. ARM believes that toolchain suppliers need to help project teams understand how the tools operate so that project teams can ensure the tools are being used appropriately and with adequate safeguards. Therefore, ARM and its ecosystem provide assets around tools to help product vendors with safety certification. The compiler collateral, for instance, can be shared with certification bodies to show that the compiler is generating good code. In addition, code coverage can be used to prove that software is well tested. It is no longer acceptable to treat software tools as a “black box.”

Finally, what's next for ARM and the wider ecosystem?

The capability of mobile devices looks set to continue to increase, and so we can expect the industry to invest in more ways to make it easier for developers to harness that power, which probably includes further deployment of higher level languages. In enterprise systems, we talk about the "Intelligent Flexible Cloud" bringing more workloads closer to the end device, which again has implications for how software is written and deployed. And in embedded, it's quite clear that the Internet of Things (IoT) has the potential to roll in and add connectivity to a wide range of constrained devices, which will require new approaches compared with connecting computers to the Internet.

Note: Interested developers can evaluate all the tools and capabilities mentioned in this interview through a trial version of DS-5 Ultimate Edition or MDK-ARM. Those who've evaluated DS-5 in the past can claim another free trial until November 30, 2015.

Hobson Bullman is General Manager of Development Solutions at ARM.

ARM
www.arm.com
www.linkedin.com/company/arm
@ARMCommunity
www.facebook.com/ARMfans


IoT and efficient data transport: How CoAP can save us from protocol obesity
By Zebra Technologies



As the Internet of Things (IoT) continues to grow, so does the number of connected devices and the different paradigms around how to connect them. There are many different use cases in the IoT, but data consumption is changing in general. The way data is consumed on an Internet dominated by objects is quite different from one dominated by humans and a few powerful computers, and where we have historically seen moderate traffic with heavy payloads, we now see higher traffic with smaller payloads. The reason for this is quite simple: when one machine talks to another machine, they communicate more efficiently, striving to get only the information they want at the bare minimum required to get it. There are many factors that influence why a device, sensor, or machine would want to communicate with lighter payloads. A few separate but related reasons are:

• Human language and the associated metadata we are accustomed to are unnecessary for machines that operate on far more efficient binary code. The vast majority of machine-to-machine (M2M) communications won't be directly interpreted by people, so utilizing machine languages allows devices to save on not only data transmission overhead, but also the resources required to convert messages back into human-readable formats.

• Optimized resource utilization is important because all machine interactions carry with them a cost, and therefore must be justified. Every signal sent or received costs not only memory but also power, and in order to remain autonomous within a large mesh, energy efficiency has become a critical aspect of device architectures and lifecycles.

• With most devices on the IoT acting autonomously as part of a larger mesh network, machine entities must be able to maintain full control of their state so they can retain information about transactions and make independent decisions when alternative communications paths are needed to get messages across a network.

All of this leads to one very important point: the mechanics through which autonomous entities communicate with the outside world have evolved. Information conveyed today must be light and friendly to resource-constrained devices. And while the neo-ecosystem of devices we are watching unfold consists of both small and large machines, everything must interoperate.

The legacy of success

When we look at protocols that interoperate well in our current ecosystem, we immediately arrive at the Hypertext Transfer Protocol (HTTP), which has served as a pillar of modern web communication and is at the core of almost all the content we consume and publish. The problem is that HTTP comes with its own requirements, and while justified for what the protocol was designed for, those requirements may not play well in future use cases.

Let's first take one of the more basic mechanisms of the protocol, the client-server model. One of the main problems with this architecture in the IoT is that the client must poll the server for information, so in most cases the "thing" in an Internet of Things sense is the server. This creates uncertainty when striving to get data as close to real time as possible because the server producing the content needs to wait for a request from the client before sharing an update. The repercussions of this are twofold: the need for a client to request an update inhibits real-time round-trip communications, and the high availability required on the server side demands a lot of resources in order to respond to requests, even if they are futile. Smaller devices in the IoT would expend large amounts of power and other resources quickly with this model.

One could argue that push notifications could fix the communications bottlenecks created by the client-server model, and while this is true, HTTP presents another fundamental challenge in IoT use cases in that it is not a lightweight protocol. HTTP headers alone can become quite large, amounting in some cases to several hundred bytes per request. This is partially fallout from the use of the aforementioned textual (human-readable) language for marking options and metadata about the transaction. When downloading a few gigabytes of information, header overhead can seem almost nonexistent in the context of the entire payload. However, when attempting to retrieve basic data such as a temperature, geo-coordinates, or a battery level, the headers alone can be larger than the payload itself. While headers can be compressed and sent in binary, doing so requires resources that, as addressed previously, are limited for many IoT devices.

Figure 1 shows a very basic HTTP exchange for a temperature reading. One could debate which options and return characters are strictly necessary to complete a meaningful transaction, and much more goes into a complete exchange, but for the sake of the experiment Figure 1 shows a request on the order of 127 bytes and a 92-byte response.

Figure 1 | Depicted here is the size (in bytes) of a basic HTTP transaction for a temperature reading.
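As a rough, hypothetical illustration of that imbalance (not the exact exchange measured in Figure 1), the snippet below assembles a bare-bones request and response for a temperature reading and prints their sizes; the host name, path, and payload are invented.

```python
# Hypothetical minimal HTTP exchange for a temperature reading.
# The host, path, and payload are made up for illustration only.
payload = '{"temperature": 22.5}'

request = (
    "GET /sensors/temperature HTTP/1.1\r\n"
    "Host: device.example.com\r\n"
    "Accept: application/json\r\n"
    "\r\n"
)

response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: application/json\r\n"
    f"Content-Length: {len(payload)}\r\n"
    "\r\n"
    + payload
)

# Even this stripped-down request spends far more bytes on textual
# framing than on the ~21-byte reading it asks for.
print(len(request.encode("ascii")), "request bytes")
print(len(response.encode("ascii")), "response bytes")
```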


Adding more options or information about the transaction to the header increases these values quite rapidly, and they can quickly climb into the kilobyte range.

So, why would we default to using HTTP? Well, there are a couple of reasons, and at the top of the list is that we are accustomed to it and it is widely used. But looking at the main principles of HTTP, another reason is the representational state transfer architecture style, commonly referred to as RESTful. The RESTful approach offers a fully stateless, cacheable client-server relationship with a common interface that enables one to manipulate resources with ease and uniformity. Manipulating resources with ease and uniformity is also how we think about building the Internet of Things, so one of the fundamental tenets of HTTP suits the IoT, just not the package as a whole.

The core

The advent of the IoT has made it more than clear that current models cannot sustain the kind of technological evolution that permits resource-constrained devices to optimize communications based on their use case, environment, and hardware. Enter the Constrained Application Protocol, or CoAP, which was designed with finite resources and the RESTful paradigm in mind to emulate some of the better features of HTTP without the overhead.

Figure 2 | The CoAP header is 4 bytes and can be extended based on the paths and options selected.


CoAP is an efficient, bidirectional, reliable protocol that utilizes a RESTful architectural style to provide features such as caching, URI queries, content format identification, and conditional requests. The protocol also adds new capabilities such as observation, which lets a server push periodic updates to clients when new data is available, and the ability to fragment requests and responses as needed. The baseline CoAP header is 4 bytes, and can be extended based on the paths and options used (Figure 2).
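To make that 4-byte figure concrete, here is a small sketch that hand-assembles a minimal confirmable GET for a hypothetical temperature resource following the message layout defined in RFC 7252. It illustrates the framing only; a real client would also handle tokens, retransmission, and responses.

```python
import struct

# CoAP fixed header (RFC 7252): version 1, Confirmable message type,
# zero-length token, method code 0.01 (GET), and a 16-bit message ID.
VERSION, TYPE_CON, TOKEN_LEN = 1, 0, 0
CODE_GET = 0x01          # class 0, detail 01 -> GET
MESSAGE_ID = 0x1D2E      # arbitrary example value

first_byte = (VERSION << 6) | (TYPE_CON << 4) | TOKEN_LEN
header = struct.pack("!BBH", first_byte, CODE_GET, MESSAGE_ID)
assert len(header) == 4   # the baseline header really is 4 bytes

# One Uri-Path option ("temperature"): option number 11 and length 11
# both fit in a single option byte, followed by the path itself.
path = b"temperature"
option = bytes([(11 << 4) | len(path)]) + path

message = header + option
print(len(message), "bytes on the wire")   # 16 bytes, no payload needed for a GET
```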

Figure 3 | Using the same temperature data exchange scenario as earlier, the CoAP protocol relies on a RESTful architecture in a client-server model (3a) to deliver information with a fraction of the communications overhead of HTTP (3b).


Using the same exchange scenario as earlier but replacing HTTP with CoAP, Figure 3 shows how the same temperature information, acquired through CoAP's RESTful architecture, results in a much more manageable transaction size that is suitable for resource-constrained devices. Furthermore, because there is less data to transmit, devices that rely on portable or disposable power sources are able to conserve energy and extend the time between recharging or replacement. Another important consideration is that CoAP was designed on top of the User Datagram Protocol (UDP), allowing packets to be even smaller than those reported in Figure 3a. The Internet Engineering Task Force (IETF) is also currently developing CoAP for TCP and other transports.
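For readers who want to experiment, a request like the one in Figure 3 can be issued in a few lines. The sketch below uses the open-source aiocoap Python library (one of several CoAP implementations); the device address and resource path are placeholders, not part of any particular service.

```python
# Minimal CoAP GET using the open-source aiocoap library
# (pip install aiocoap). Address and resource path are placeholders.
import asyncio
from aiocoap import Context, Message, GET

async def main():
    # One client context can serve many requests over a single UDP socket.
    protocol = await Context.create_client_context()
    request = Message(code=GET, uri="coap://192.168.1.50/temperature")
    response = await protocol.request(request).response
    print(response.code, response.payload)

asyncio.run(main())
```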

Key takeaway

CoAP is by no means a replacement for HTTP, as the latter is packed with features that are critical for the web. However, for an Internet dominated by "things" that are meant to aggregate and send data in an agnostic fashion, payload size has become increasingly relevant. CoAP delivers the functionality required by these devices with a balance of features and efficiency.

Zatar is a product of Zebra Technologies, a global leader respected for innovation and reliability. Zebra provides products and services that enable real-time visibility into organizations' assets. Zebra solutions support Enterprise Asset Intelligence, and the Zatar enterprise IoT service is a perfect example! Zatar provides a standards-based approach to connectivity and control of devices along with open APIs to create apps, onboard devices, and enable collaboration. Zatar is an ARM mbed Cloud Partner.

Zebra Technologies Corporation
www.zebra.com
www.zatar.com

@ZebraTechnology
www.linkedin.com/company/167024
www.facebook.com/ZebraTechnologiesGlobal/timeline
www.youtube.com/user/ZebraTechnologies#p/u


NO COMPROMISE COMPUTING SOLUTIONS

COM Express Modules
Give your application the solution it deserves. Sealevel's Computer on Module system designs combine the advantages of custom design with the convenience of COTS. COM Express modules offer a selection of processors ranging from powerful multi-core Intel i7 and i3 to the popular Atom. Low power designs eliminate the need for cooling fans, greatly enhancing system reliability. Extended temperature models are available offering -40C to +85C operating temperature range. COM Express modules contain the core computer functionality most affected by changing technology. Since the modules are based on an industry standard specification, COM Express systems are easily updated to stay current with the latest technology.

• Variety of Processors and Form Factors
• Application Specific I/O
• Rugged, Solid State Operation
• Vibration Resistance
• Extended Operating Temperature
• Long-term Availability
• Superior Life Cycle Management

COM EXPRESS QUICKSTART KIT

The 121004-KT provides everything you need to get your COM Express project off to a fast start. Powered by a 1.8GHz Intel Atom N2800 CPU with 4GB RAM and integrated heatsink, the QuickStart kit includes an installed 2.5" 32GB SATA solid-state disk. Standard features include five USB 2.0, two RS-232, one RS-485, dual Gigabit Ethernet, SATA, DisplayPort and audio interfaces. To interface the RS-485 port, a 10-pin IDC to DB9M serial cable is included. The carrier board and module are powered by the included 100-240VAC to 24VDC external power supply with US power cord. The QuickStart kit simplifies software development and prototyping while the target application carrier board is designed. Take advantage of Sealevel's carrier board development services for the fastest time to market.

sealevel.com • 864.843.4343 • sales@sealevel.com



E M A G

SPONSORS

© 2015 OpenSystems Media, © Embedded Computing Design. All registered brands and trademarks within the ARM E-mag are the property of their respective owners.

