
IS SOFTWARE EATING THE WORLD?

Bob Emmerson looks at the revolutionary advances in information and communication technology.

It’s been two years since my last article, so what’s happened in the meantime? Answer: a lot. Let’s start with advances in chip technology. Significant breakthroughs that boost performance and lower cost by an order of magnitude have been and continue to be realised. Computers will surpass the processing power of a single human brain by 2025 and a single computer may match the power of all human brains combined by 2050. However, there is an enormous gap between what technology can do and what it should do. We must look beyond profit and growth when considering technology that has the potential to alter human existence dramatically.

Advances in chip technology

Chips continue to get smaller and operate faster, thereby enabling the development of amazing devices like smartphones. But their intrinsic functionality hasn't changed: they process data at blinding speed. Multi-functional chips are different. They are based on the heterogeneous integration of mechanical, chemical or optical functionality, which is enabling chips to function as wireless IoT platforms. These platforms are tightly linked to the application, i.e. they are not generic; instead they are created in response to a specific market need, e.g. compact, lightweight, hyperspectral cameras that are used on drones to make detailed inspections of agricultural fields.

The combination of the on-going miniaturisation capability of chip technology and embedded physical functionality is enabling the development of next-generation diagnostic devices such as compact DNA sequencers and cell sorting devices that detect tumour cells in the blood stream. In fact, the scaling capabilities enable the integration of a full laboratory on a single chip. This allows doctors to test patients in remote areas for diseases such as Ebola.

Technology versus Humanity

I’ve been writing about technology for many, many years and along the way I’ve come to the conclusion that guru-type predictions tend to be pretentious and all too often turn out to be wrong. Therefore, when I bought ‘Technology vs. Humanity’, I was somewhat sceptical, but that subjective, ill-informed impression vanished in a matter of minutes. Had I been better informed, I would have realised that the author, Gerd Leonhard, is a highly respected futurist.

This is a brilliant, 180-page book. It is beautifully written and so easy to read that I found it hard to put down. The author emphasises a somewhat scary fact: technology knows no ethics, norms, or beliefs, but the effective functioning of every human and every society is entirely predicated upon them. So what happens when technology expands exponentially while humans progress linearly? Do we lose control? If that sounds far-fetched, consider the fact that we are already grappling with rapidly escalating issues such as the constant tracking of our digital lives, surveillance-by-default, diminishing privacy, the loss of anonymity, digital identity theft, data security, and much more.

Ever since the Internet became a significant commercial force, we seem to have focused in the main on exploiting its economic and commercial promises. We have spent way too little time considering its impact on our values and ethics – and this is finally becoming apparent as we enter the age of artificial intelligence (AI), robotics, and human genome editing.

Back to the present

LPWAN (Low Power Wide Area Network) is a somewhat prosaic term, but LPWANs are one of the foundations that underpin the IoT, and several alternative technologies are competing with LTE (aka 4G). This is a significant development.

LTE is a great technology. A brand-new network core and air interface provide a groundbreaking combination of efficiency and flexibility. The efficient use of spectrum lowered costs and enabled high-speed, low-latency transmission to be combined with cost-effective, low bit rate services. However, the IoT has exploded, and it became clear that the lowest bit rate service of 50 Mbps was over-engineered. Data rates of a few hundred kbps are more than adequate for most IoT applications.
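A back-of-the-envelope calculation shows why a few hundred kbps is enough. The payload size and LPWAN rate below are illustrative assumptions, not figures from any specification; the 50 Mbps figure is the one cited above:

```python
# Rough illustration (assumed figures): time to send a typical IoT payload
# at an LPWAN-class rate versus an LTE-class rate.

PAYLOAD_BYTES = 100          # e.g. one sensor reading plus protocol overhead
LPWAN_BPS = 200_000          # ~200 kbps, an assumed LPWAN-class rate
LTE_BPS = 50_000_000         # the 50 Mbps service cited above

def tx_time_ms(payload_bytes: int, bps: int) -> float:
    """Airtime in milliseconds, ignoring latency and retransmissions."""
    return payload_bytes * 8 / bps * 1000

print(f"LPWAN: {tx_time_ms(PAYLOAD_BYTES, LPWAN_BPS):.1f} ms")
print(f"LTE:   {tx_time_ms(PAYLOAD_BYTES, LTE_BPS):.3f} ms")
```

A 100-byte message goes out in a few milliseconds even at 200 kbps; for a sensor reporting every few minutes, the extra headroom of a 50 Mbps bearer buys nothing.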

The alternative mature technologies include Nwave, Ingenu, SigFox, Weightless, and the LoRa Alliance. The response from the telecoms industry has been a series of developments that are designated as categories. Right now, Cat-1 is still the technology of choice for new IoT deployments, but a 10 Mbps downlink isn’t needed. Cat-0, an IoT standard from 3GPP, was promoted for a while but is now dead, as is Cat-M1. That was just part of the cellular IoT alphabet soup.

Narrow Band-IoT (NB-IoT), when it arrives, will offer low complexity, low power consumption and long range. However, it seems that low complexity, which is needed to bring down module/device costs, has been achieved via trade-offs. To put it bluntly, they had to be made: it wasn’t possible simply to scale down the overall performance.

I’m a writer, not a communications expert, but it is clear that Cat-1 and NB-IoT are a retroactive response to a relatively new market requirement. The guys behind SigFox, LoRa and other LPWAN solutions saw it coming and designed brand new technologies from the bottom up. That said, there isn’t going to be a single killer technology and NB-IoT will play a significant role.

Intelligent connected products and ‘Big Data’

The IoT is another prosaic term. Manufacturers don’t make ‘things’, they make products, and the Internet is just a transportation medium. Therefore ‘things’ are increasingly being replaced by ‘intelligent connected products’, and there is a lot of media coverage of their ability to deliver actionable management information on the environment in which they operate. This is the result of combining real-time analytics at the local level, e.g. a shop floor, with centralised historical analytics based on information generated by the company’s business processes.

Big benefits accrue when analytics is employed: when live data delivers insightful intelligence on which important business decisions can be made. This is an exciting, innovative concept. Moreover, it’s a logical development that addresses a generic issue: organisations lack real-time insight into the critical aspects of their business – aspects that are getting increasingly complex in today’s highly competitive, global marketplace.

The concept of Big Data has been around for many years. Analysing the data that is generated by mainstream business processes has helped organisations identify new opportunities. Most organisations understand that if they capture all the data that streams into their businesses, they can apply analytics and leverage its value. The difference this time is the inclusion of data streams coming from the local level and, in some cases, from third-party IoT solutions.

Preventative and predictive maintenance

Important manufacturing insights are obtained by analysing the real-time data that is obtained by monitoring the performance of machinery and production processes.

When analysis is done at this level, near-real-time, actionable intelligence is generated. This facilitates the creation of preventative maintenance programs that boost productivity and allow informed decisions to be made at the local level.
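As a minimal sketch of what local, near-real-time analysis can look like, the following flags a machine for preventative maintenance when the rolling average of its vibration readings breaches a limit. The window size, threshold, and readings are all hypothetical:

```python
from collections import deque

# Hypothetical sketch: flag a machine for preventative maintenance when
# the rolling average of its vibration readings exceeds an assumed limit.

WINDOW = 5          # number of recent readings to average
THRESHOLD = 7.0     # assumed vibration limit (mm/s) from the machine's spec

def monitor(readings, window=WINDOW, threshold=THRESHOLD):
    """Yield the index of each reading whose rolling average breaches the limit."""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        recent.append(value)
        if len(recent) == window and sum(recent) / window > threshold:
            yield i

vibration = [5.1, 5.3, 5.2, 6.8, 7.4, 7.9, 8.3, 8.1]
alerts = list(monitor(vibration))
print(alerts)  # indices where the rolling average breaches the limit
```

The averaging window smooths out one-off spikes, so an alert reflects a genuine upward trend rather than sensor noise, which is exactly the kind of signal a maintenance crew can act on locally.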

Predictive models exploit patterns found in historical and transactional data in order to identify risks and opportunities. These models reveal relationships between various contextual elements, which may not appear to be related. Applied to business, predictive models are used to analyse current data and historical facts in order to better understand customers, products and partners.

Now companies can employ data analytics to obtain insightful real-time information that can be used to make immediate decisions. Companies can also unearth powerful insights by identifying patterns in thousands of readings from many products over time. For example, information from disparate individual sensors, such as a car’s engine temperature, throttle position, and fuel consumption, can reveal how performance compares to the car’s engineering specifications.
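The car example can be sketched in a few lines: take a live reading from several disparate sensors and report which values fall outside the engineering specification. The sensor names and ranges below are invented for illustration:

```python
# Hypothetical sketch: compare live readings from disparate sensors
# against assumed engineering-specification ranges.

SPEC = {  # assumed specification: sensor -> (min, max) in its own units
    "engine_temp_c": (80.0, 105.0),
    "throttle_pct": (0.0, 100.0),
    "fuel_l_per_100km": (4.0, 9.0),
}

def deviations(reading: dict) -> dict:
    """Return the sensors whose values fall outside the spec range."""
    out = {}
    for sensor, value in reading.items():
        lo, hi = SPEC[sensor]
        if not lo <= value <= hi:
            out[sensor] = value
    return out

sample = {"engine_temp_c": 112.5, "throttle_pct": 38.0, "fuel_l_per_100km": 10.2}
print(deviations(sample))  # {'engine_temp_c': 112.5, 'fuel_l_per_100km': 10.2}
```

Run over thousands of cars and months of readings, the same comparison is what lets patterns emerge: if high engine temperature and high fuel consumption repeatedly deviate together, that correlation is itself a finding.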

Software-defined Wireless LANs do IoT

At the beginning of the decade the mobile enterprise was a hot topic. Mobilising applications and processes had, and still has, the proven ability to: boost productivity by capturing data at source; enable the delivery of enhanced services, thereby improving customer satisfaction; and improve the efficiency of the organisation, which in turn boosts margins and profits. The mobility paradigm was based on a seamless flow of information between the data centre and any device, enabled by Wireless LANs in office buildings.

Today the IoT is generating massive amounts of business data, and as outlined earlier, it is processed into a combination of real-time and historic information. The resulting need to access this information has placed a strain on WLANs. Adding more access points and employing higher speed Wi-Fi technology were temporary Band-Aid fixes. There was an intrinsic issue: the LANs were not designed to meet the capacity demands that enterprises face as a result of the explosive growth of the IoT.

Aerohive Networks decided to start over and embrace the fact that ‘Software is eating the world’. The quote comes from Marc Andreessen. In this context it meant employing an architecture based on the principles of Software Defined Networking (SDN). SDN employs an open network system structure in which network control is decoupled from network forwarding, which enables the performance of the network to be programmed and managed via the cloud.
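The decoupling at the heart of SDN can be illustrated with a toy model (this is not any vendor's API, just a sketch of the principle): the switches hold only forwarding rules, while a central controller makes the decisions and programs every switch from one place.

```python
# Illustrative sketch (not any vendor's API): in SDN the control plane is
# decoupled from forwarding. Switches only execute installed rules; the
# controller decides policy and pushes it down to them.

class Switch:
    """Data plane: forwards traffic according to installed rules only."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # match (destination) -> action (out port)

    def install_rule(self, dst, port):
        self.flow_table[dst] = port

    def forward(self, dst):
        return self.flow_table.get(dst, "drop")

class Controller:
    """Control plane: decides policy and programs every switch."""
    def __init__(self, switches):
        self.switches = switches

    def set_route(self, dst, port):
        for sw in self.switches:      # one decision, pushed network-wide
            sw.install_rule(dst, port)

edge = Switch("edge-1")
core = Switch("core-1")
ctrl = Controller([edge, core])
ctrl.set_route("10.0.0.5", port=3)    # reprogram the network from one place

print(edge.forward("10.0.0.5"))       # forwards on port 3
print(core.forward("10.0.0.9"))       # no rule installed -> drop
```

Because behaviour lives in the controller rather than in each box, the whole network can be reprogrammed in software, which is what makes cloud-managed, programmable networks practical.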

The company is marketing Software Defined LAN (SD-LAN) solutions that build on the principles of SDN in the data centre and SD-WAN in order to enable flexible, programmable and cost-effective wireless and wired networking capabilities that are being applied to the expanding networks of IoT devices. It is worth noting that software-defined networking does more than change the rules of the network game; it throws them out the window. In future networks, functionality will be determined by users and managed by IT departments.

Looking ahead

“We are heading into a future where literally everything around us is impacted by a tsunami of technological advances, yet the way we frame the world, the way we evaluate what is right or wrong, the way we decide whether to engage and use a certain technology or not is still based on past experiences, on old frameworks and, worst of all, on linear thinking. Our ethics – and many of our laws and regulations as well – are still based on a world that advances linearly and on ‘what used to work’.

“Ever since the Internet became a significant commercial force, we seem to have focused in the main on exploiting its economic and commercial promises. We have spent way too little time considering its impact on our values and ethics – and this is finally becoming apparent as we enter the age of artificial intelligence (AI), robotics, and human genome editing.”

I’m quoting from ‘Technology vs. Humanity’ by Gerd Leonhard.

However, hundreds of AI and robotics researchers have converged to compile a list of 23 principles, priorities and precautions that should guide the development of artificial intelligence to ensure it’s safe, ethical and beneficial. Prominent signatories include Stephen Hawking and Elon Musk.

The 23 points are grouped into three areas: Research Issues, Ethics and Values, and Longer-Term Issues. One question that is particularly interesting is ‘What set of values should AI be aligned with, and what legal and ethical status should it have?’ A world where robots are complex enough to have ‘rights’ might seem far off, but these debates are beginning in the EU.
