
The intelligent edge to win in industrial AI

More and more businesses are adopting artificial intelligence in their processes, but with this shift come significant challenges. By Adi Pendyala and Lawrence Ng

“Innovation distinguishes between a leader and a follower,” pronounced Steve Jobs. In 2019, Accenture conducted a global survey of 1,500 C-level executives across 16 industries, titled ‘Scaling to new heights of competitiveness’. The majority of top managers strongly agreed that leveraging artificial intelligence (AI) is necessary to achieve their growth objectives, while acknowledging that scaling AI at the enterprise level is a real challenge. Scaling AI means that diverse teams, departments and individuals across the enterprise realise the value of AI and utilise it in their work processes to achieve efficiency and business advantages.

The volume and rate of data accumulation, especially in capital-intensive industries, increases exponentially as more devices become internet-enabled each year. This generates rising demand for data storage and computing resources. According to Flexera’s 2021 State of the Cloud report, the COVID-19 pandemic has accelerated cloud plans and spend, with more organisations leveraging public cloud services, private clouds, or hybrid models. Yet many enterprises are still not running their applications in the cloud, and face challenges relating to response latency; data security; data management; cyber-regulatory compliance; implementation costs; retaining employee knowledge; and inference at the edge.

The first critical decision is how to transfer data, and at what cost. Response latency is often a constraint or an outright obstacle. For any business, it is important to consider whether the benefit of achieving lower latency is greater than the cost of acquiring the necessary network bandwidth.

Another challenge is the increased cyberattack surface. More businesses are shifting confidential information to the cloud, and data breaches targeting cloud-based infrastructures increased by 50% in 2019 compared with 2018, according to Verizon. Moving data out of the plant increases the number of potential cyberattack vectors.
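The latency-versus-bandwidth trade-off above is ultimately arithmetic. A minimal back-of-envelope sketch follows; the plant size, sample rate and per-gigabyte transfer rate are hypothetical illustrations, not figures from the article or any vendor.

```python
# Back-of-envelope trade-off: streaming raw sensor data to the cloud
# versus keeping it at the edge. All figures below are hypothetical
# assumptions for illustration only.

def monthly_stream_gb(sensors: int, sample_bytes: int, hz: float) -> float:
    """Raw data volume (GB per 30-day month) if every sample is shipped to the cloud."""
    seconds_per_month = 60 * 60 * 24 * 30
    return sensors * sample_bytes * hz * seconds_per_month / 1e9

def monthly_cost(gb: float, rate_per_gb: float) -> float:
    """Network transfer cost for that volume at an assumed per-GB rate."""
    return gb * rate_per_gb

# Hypothetical plant: 2,000 sensors emitting 64-byte samples at 10 Hz.
volume = monthly_stream_gb(sensors=2000, sample_bytes=64, hz=10)
cost = monthly_cost(volume, rate_per_gb=0.05)  # assumed $0.05/GB
print(f"{volume:,.0f} GB/month, ~${cost:,.0f}/month in transfer alone")
```

Even this toy calculation shows why the article frames data transfer as the first critical decision: at industrial sample rates, raw-data volumes reach terabytes per month before any latency benefit is even considered.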
Data breaches can be caused by a simple misconfiguration or by insider threats, and can be hard to avoid when part of the IT infrastructure is outsourced to a third-party business. Ensuring data security in this dynamic environment is crucial.

Digital sovereignty – the level of control over the data, hardware and software that a company relies on to operate – is another challenge. Operational sovereignty provides customers with assurances that those working for a cloud provider cannot compromise a customer’s workload. Software sovereignty ensures that the customer can control the availability of its workloads and run them without being dependent on, or locked into, a single cloud provider. Data sovereignty, meanwhile, provides customers with a mechanism to prevent the cloud provider from accessing their data, designating access only for specific purposes. The real challenge for organisations is trusting those managing their cloud services, especially when sensitive data could circulate in the hands of multiple third-party businesses.

Cyber-regulatory compliance has its own complexity: compliance programmes must evolve alongside cloud deployments, infrastructure, environments and applications, and the various cloud services and applications must be configured securely. Moving data from the plant to a cloud service, especially one owned and operated by a third party, may violate regulatory requirements. Organisations with a multi-cloud strategy can benefit from Cloud Security Posture Management (CSPM), as it becomes difficult to ensure that many different cloud services and applications are all securely configured.

The next concern is the cost of cloud-centric implementations. According to the International Data Corporation (IDC), annual public cloud spending will hit $500bn by 2023.
There is growing awareness of the long-term cost implications of the cloud, and several companies are taking the dramatic step of repatriating parts of their workloads, or adopting a hybrid approach, to alleviate the costs. Cloud adoption was originally driven by an incredibly powerful value proposition – infrastructure available immediately, at exactly the scale needed by the business, driving efficiencies in both operations and economics – but those economics do not always hold over the long term.

A further critical challenge is to retain experienced employees’ knowledge as a key strategic resource before they retire, or after a merger or acquisition occurs. One solution is to automate workflows and processes at the edge. Such automation, combined with AI and machine learning (ML) techniques, can track and store the critical know-how of key employees, and retain, improve and share that knowledge with new recruits.

Finally, instead of streaming process data from the plant edge into the cloud to run inference models, the application (including the trained model) can be shipped to an edge execution environment. Actionable responses and insights can then be communicated quickly to human stakeholders. This mechanism reduces the costs – in time, network bandwidth, storage capacity, loss of independence, security and privacy – incurred by centralised cloud storage and computing.

In the current state of Internet of Things (IoT) devices, edge computing means intelligently collecting, aggregating and analysing IoT data via cloud services deployed close to the devices (i.e. at the edge), based on the business needs of the application. The future of edge computing is complementary to cloud capabilities: the edge will not replace the cloud. The duality of the two paradigms distributes infrastructure risk between the offshore facility (the manufacturing site) and its data centre, providing uninterrupted, real-time actionable responses at the edge.
The cloud will execute less critical tasks such as model training, retraining, sustainment and monitoring. This hybrid combination will optimise uptime while minimising the risk of unseen issues.

To achieve this intelligent edge vision, it is necessary to leverage today’s edge computing technology in an optimal and scalable way to deliver high-value intellectual property (IP) in an intelligent edge solution. For example, the Aspen AIoT Hub provides access to data at scale – whether in the enterprise, the plant or at the edge – with comprehensive AI pipeline workflows to embed AI in Aspen Models for both engineers and data scientists.

Indeed, change is mission-critical. As the famous quote goes: “If you always do what you always did, you will always get what you always got.”
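The hybrid pattern described above – the cloud trains and re-trains a model, the edge runs inference locally on live readings – can be sketched in a few lines. Everything here is an illustrative assumption: the linear anomaly score stands in for a real trained model, and the sensor names and thresholds are invented for the example.

```python
# Sketch of the cloud-trains / edge-infers split. The "model" is a
# hand-written linear anomaly score acting as a stand-in for a real
# trained model; names and numbers are hypothetical.

import json

# --- Cloud side: training produces a small parameter artifact that is
# shipped to the edge (here, just a JSON string). ---
model_artifact = json.dumps({
    "weights": {"temp_c": 0.04, "vibration_mm_s": 0.30},
    "bias": -3.5,
    "alert_threshold": 0.0,
})

# --- Edge side: load the parameters once, then score live readings
# locally, with no raw data streamed back to the cloud. ---
def load_model(artifact: str) -> dict:
    return json.loads(artifact)

def score(model: dict, reading: dict) -> float:
    """Linear anomaly score; values above the threshold trigger an alert."""
    return sum(w * reading[k] for k, w in model["weights"].items()) + model["bias"]

def needs_alert(model: dict, reading: dict) -> bool:
    return score(model, reading) > model["alert_threshold"]

model = load_model(model_artifact)
print(needs_alert(model, {"temp_c": 60.0, "vibration_mm_s": 2.0}))  # False (normal)
print(needs_alert(model, {"temp_c": 95.0, "vibration_mm_s": 7.5}))  # True (anomalous)
```

The design choice mirrors the article's point: only the small parameter artifact crosses the network, so the edge keeps responding in real time even if the cloud link is slow or unavailable, while the cloud retains the heavier training and monitoring duties.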

Adi Pendyala is a Senior Director and Lawrence Ng is Vice-President – APJ at Aspen Technology. www.aspentech.com
