AI: RISE OF THE MACHINES
For over 50 years, Hollywood movies have featured the proliferation of artificial intelligence (AI); from HAL 9000 in Stanley Kubrick’s 1968 film “2001: A Space Odyssey” to the more universally known 1980s cult movie (and follow-up series) “The Terminator”, with one of cinema’s most quoted catchphrases: “I’ll be…” …well, you know the rest.
The great thing about such movies is that you already know the plot: the AI suddenly determines humanity is a threat and begins a campaign of tyranny and destruction against its creators.
Fast forward to 2021 and the world has come a long way in the digital revolution; however, we are by no means on the cusp of an AI uprising of Hollywood proportions. Nevertheless, the storylines of these movies serve as an important reminder that trust in, and reliance on, artificial intelligence may not always end well for the humans involved.
Whether we realise it or not, AI is part of our daily lives. From determining which websites we can or can’t access to deciding whether we qualify for a special deal on our next mortgage, AI-driven processes are all around us, and many industry sectors have derived benefits from utilising machine learning algorithms in place of humans to achieve more cost-effective and efficient operations. So there is no question that there are benefits, and the AI lobbyists are quick to highlight the success stories. Notably absent from the fanfare or press, however, are the times when replacing human decisions with machine learning leads to unintended consequences. For example, a recent switch to AI-based fraud detection within some banks led to legitimate customers being deemed fraudulent and locked out of their own accounts, with limited ability to speak to a human to challenge the decision (computer says no – and the account holders deciding “I won’t be back”).
The argument often boils down to the assumption that machines are better than humans because they can never be wrong. This somewhat simplistic worldview is amplified by our growing personal reliance on technology. What is often overlooked, however, is that the machines themselves are designed, built and programmed by humans who are all capable of errors, so any flaws in programming or incorrect assumptions in an AI model will ultimately result in a flawed system. An example of this was the poor data modelling during the pandemic, when thousands of examination candidates were given computer-generated exam results based on the modelling of previous years’ candidates. In a high number of cases this condemned students from historically poorer-performing schools to much lower grades than their educators had predicted based on their proven ability. Thankfully, those in authority saw sense and replaced the computed grades with those from, you guessed it… humans.
So how does AI add value to CAN Group and its clients?
As an Asset Integrity specialist celebrating 35 years in business this year (a little bit younger than the first Terminator movie), CAN Group continually harness advancements in technology with safety, integrity and certainty at the forefront of our operations. Our team of specialists adopt a risk-based approach to inspection with AI, human intelligence and experience at its core. In this approach, an engineer reviews data from a variety of sources to make an informed decision on which equipment to prioritise and how often inspections should occur. On the face of it, this type of process looks like a good opportunity to remove the human element and let a machine learning algorithm determine risk: it would remove the engineering cost and make decisions faster. But what consideration is given to the unintended consequences?
For instance, corrosion often occurs in a non-linear fashion, and there may be external factors at play that are not immediately obvious from the raw data alone. These are the types of issues that only an experienced engineer can identify and mitigate. A machine learning algorithm can only make decisions based on the data used to train the model, which can result in a blinkered view of the situation and unintentionally increase the risk of failure. If the computer says the pipe should not be leaking, how do you explain the black liquid oozing into the sea?
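To make that point concrete, here is a minimal, hypothetical sketch (the readings, threshold and fitting approach are invented for illustration and are not CAN Group’s methodology): a model fitted only to early, linear-looking wall-thickness data happily extrapolates past a corrosion rate that has since accelerated.

```python
import numpy as np

# Hypothetical wall-thickness readings (mm) from annual inspections.
# Corrosion is slow for the first few years, then accelerates (for
# example after a coating breakdown), a non-linear effect that the
# early data alone cannot reveal.
years     = np.array([0, 1, 2, 3, 4, 5])
thickness = np.array([12.0, 11.8, 11.6, 11.4, 10.6, 9.5])

# A naive model "trained" only on the early, linear-looking readings.
slope, intercept = np.polyfit(years[:4], thickness[:4], 1)
predicted_now = slope * years[-1] + intercept

MIN_ALLOWABLE = 10.0  # illustrative minimum allowable wall thickness (mm)

print(f"Model prediction at year 5:   {predicted_now:.1f} mm")
print(f"Measured thickness at year 5: {thickness[-1]:.1f} mm")
print("Model says safe:  ", predicted_now > MIN_ALLOWABLE)   # True
print("Reality says safe:", thickness[-1] > MIN_ALLOWABLE)   # False
```

In this toy example the model still reports the pipe as comfortably above its minimum allowable thickness while the latest measurement has already fallen below it: exactly the kind of divergence an engineer reviewing the raw readings would catch.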
All intelligence has its place; at CAN Group, we see AI as an assistive tool rather than a replacement for the human decision-making process. CAN’s data intelligence platform, ENGAGE, utilises AI to help users make informed decisions on asset integrity by aggregating data from a variety of sources. The system was designed and developed in-house by CAN Group’s expert integrity professionals to overcome the technological challenges of assuring and maintaining safety-critical asset information, ultimately ensuring continued safe operations. A recent development within ENGAGE also uses AI computer vision to analyse imagery. The system flags any potential defects in the gathered imagery (with the emphasis on “potential”), drawing attention to objects that might ordinarily be missed by the human eye and allowing the engineer to make an informed decision on risk and further mitigation.
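As an illustration of that human-in-the-loop pattern (not ENGAGE’s actual implementation; the data structures, threshold and scores below are assumptions made for the example), a computer-vision triage step might look something like this: the model flags “potential” defects and an engineer makes the final call.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A possible defect found by a computer-vision model in one image."""
    image_id: str
    label: str         # e.g. "corrosion", "coating damage"
    confidence: float  # model score between 0 and 1

def triage_for_review(detections, review_threshold=0.3):
    """Flag anything the model is even mildly suspicious about.

    Nothing is auto-classified as a defect or auto-dismissed: every
    flagged item is routed to an engineer, who makes the actual
    integrity decision.
    """
    flagged = [d for d in detections if d.confidence >= review_threshold]
    return sorted(flagged, key=lambda d: d.confidence, reverse=True)

# Hypothetical model output for a batch of inspection photographs.
results = [
    Detection("IMG_0042", "corrosion", 0.91),
    Detection("IMG_0042", "coating damage", 0.35),
    Detection("IMG_0107", "corrosion", 0.12),  # below threshold, not flagged
]

for d in triage_for_review(results):
    print(f"Review {d.image_id}: potential {d.label} (score {d.confidence:.2f})")
```

The design choice worth noting is that the threshold only governs what gets surfaced for human review; it never automatically accepts or dismisses a finding.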
CAN Group fully embraces the principles of AI and uses it where it adds value; however, it’s important to remember that humans should be leading the decision-making process when it comes to asset integrity. After all, a significant hydrocarbon event is going to affect us an awful lot more than the machines.
NO HUMANS WERE HARMED IN WRITING THIS ARTICLE