To vote for the OGE product of the year, go to www.oilandgaseng.com/NP4E


From the melt…

There are many phases in the process of manufacturing quality fittings and flanges, one of the most important being where the mother tube or mother billet originates. Starting with only the highest-quality pipe and forgings, Weldbend produces both fittings and flanges made to strict ASME standards.

Weldbend welcomes visits to our state-of-the-art manufacturing facility showcasing the steps we have developed to ensure quality and complete product traceability.


…to your hands.

From start to finish, we maintain complete traceability for all the products produced at our factory, so you have peace of mind about the products you are buying. Why take a chance by purchasing questionable products from companies with questionable manufacturing practices and questionable traceability? By purchasing Weldbend products, you are buying from a domestic manufacturer with over 70 years of manufacturing experience, not a glorified broker. Contact Weldbend today for a quote on your next specified job!

708.594.1700 We support the American Worker.

www.weldbend.com


ABB Air Gap Inspector for motors and generators

Visual inspection without removing the rotor greatly reduces time, cost, and risk.

A super-slim robotic crawler, equipped with five cameras, moves in the air gap between the rotor and stator, covering the entire length of the core. Video feeds allow the maintenance team to inspect, on site, inner surfaces not normally accessible without removing the rotor. Ask our team how the new Air Gap Inspector, for synchronous motors and generators with an air gap of 10 mm or more, can be an effective addition to your preventive maintenance program.

MotorService@abb.com
new.abb.com/motors-generators/service


IIoT IN O&G

Modernize electronic flow measurement with MQTT

MQTT is an extremely simple and lightweight publish/subscribe messaging protocol

By Arlen Nipper

Photo 1: A round chart recorder. All images courtesy: Cirrus Link

To discuss modernizing electronic flow measurement (EFM), let's start at the top. In 1980 I was at Amoco Oil, and EFM, or measuring how much oil or gas flows through a pipeline, was done with round chart recorders. A pen would chart the pressure and temperature over the course of a month, and then someone would physically collect the round charts, if the ink hadn't run out or they hadn't been eaten by mice (see Photo 1). Engineers would then integrate the temperature and pressure with a calculator to determine volume. You can imagine an engineer's office with a couple of hundred of these paper charts lying around and the office inhabitants trying to make sense of the data.

Sometime in the early 1980s someone realized the data could be measured electronically and integrated on a computer. Eventually the American Petroleum Institute (API) released a standard that said if you are going to measure oil and gas and then sell it, the measurement must be precise. In the ensuing rush to automate EFM for better measurement, millions of flow computers were installed at companies all over the world. Over the next 30 years there were something like a dozen different manufacturers and a dozen protocols for EFM. As a result, today we see a heterogeneous legacy infrastructure for EFM, with many customer-specific, proprietary solutions.

Some oil & gas companies have thousands of flow computers, so upgrading to new technology is not easy. A technician must go out to unwire and uninstall each flow computer, then install new equipment. When you are looking at tens of thousands of dollars to modernize each flow computer, the cost is often prohibitive. Not to mention, the flow computers in the field today are accurate. They work. We all know: if it isn't broken, don't fix it. The problem is not accuracy, but consistency.

Challenges with varied EFM data types

Today's flow computers do double duty, with multiple hosts and multiple data types, which is not conducive to typical SCADA processes. From an operations standpoint, the real-time operational data includes configuration, alarm, event, and history data. Per the API standard these data points are atomic, meaning they need to stay together, and you need to prove they were all gathered at the same time, which is incredibly challenging. These data points are accessed by protocol. On the other side of the flow computer is the cash register, which says that in the last hour this pipeline delivered x barrels of oil or x cubic feet of natural gas, and that two hours ago someone changed the calibration of the flow computer. This accounting type of data is accessed differently, but from the same instrument.

Several challenges exist with EFM today, but the primary concern is that most existing networks do not have the bandwidth to meet all these demands for data. Companies are



how much bandwidth we waste with the old method. For instance, previously companies would ask, every 30 or 60 minutes, "What is the flow rate?" and the answer would be "10." Thirty minutes later: "What is the flow rate?" "10." The question is the same, and the answer is the same, again and again. With MQTT you can find out the flow rate is 10 and then not hear the answer again unless it changes. Computers can simply talk to each other whenever they have data to send; they report by exception. Replacing a poll/response EFM network with an MQTT-based network saves 80% to 95% of bandwidth. That means stranded data can be rescued.

MQTT also allows for multiple data consumers (see Figure 2). You can publish the data from an EFM device and multiple applications can consume it, all at the same time. MQTT allows for a single source of truth for data, and the protocol is standard and open source, so anyone can use it. MQTT is the most-used messaging protocol for IoT solutions, but it's time for the oil & gas industry to catch up for EFM. We don't have to prove that MQTT has become a dominant IoT transport; everyone is using it because it is simple, efficient, and runs on a small footprint. You can publish any data you want on any topic.

We recently created a specification within the Eclipse Tahu project called Sparkplug that defines how to use MQTT in a mission-critical, real-time environment. Sparkplug defines a standard MQTT topic namespace, payload, and session state management for industrial applications while meeting the requirements of real-time SCADA implementations. Sparkplug is a great starting point for how to use MQTT in EFM.

One of the challenges with old EFM methods is dealing with atomic data. SCADA systems can handle single process variables, but multiple variables put together as a record give them trouble. MQTT can publish this type of data as records. MQTT and Sparkplug can handle both single process variables and whole records as one object, with data standardization, data time-stamped at the source, data sent as an immutable object, and the ability to store and forward.
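To make report by exception concrete, here is a minimal sketch in Python using the paho-mqtt client. The broker address, the Sparkplug-style topic string, and the JSON payload are illustrative assumptions only; a real Sparkplug B implementation uses a binary protobuf payload plus birth/death session messages rather than ad hoc JSON.

```python
import json
import time
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"  # assumed broker address for the sketch
# Topic loosely shaped like a Sparkplug namespace; illustrative only.
TOPIC = "spBv1.0/ACME/DDATA/edge-node-1/FC-1042"

def read_flow_rate() -> float:
    """Placeholder for polling the flow computer locally at the edge."""
    return 10.0

client = mqtt.Client()  # paho-mqtt 1.x constructor; 2.x also takes a callback API version
client.connect(BROKER, 1883)
client.loop_start()

last_published = None
while True:
    value = read_flow_rate()
    # Report by exception: publish only when the value actually changes.
    if last_published is None or value != last_published:
        payload = json.dumps({
            "metric": "flow_rate",
            "value": value,
            "ts": int(time.time() * 1000),  # time-stamped at the source
        })
        client.publish(TOPIC, payload, qos=1)
        last_published = value
    time.sleep(5)
```

If the rate sits at 10 all day, nothing is sent after the first message, which is where the 80% to 95% bandwidth savings over a poll/response network comes from.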

EFM and MQTT in practice

EFM creates vast amounts of data at the source. With MQTT there is a standard way to send that data across the enterprise, so no matter what piece of equipment or what end application speaks what language, it's just a piece of data, such as a temperature. All of those backend systems can now use the temperature independently and however they like. EFM data can be connected to the cloud and to big data applications; the possibilities are endless.

Let's take one last look at a customer who implemented MQTT on their EFM system. Its number one goal was to get more production from its wells. With MQTT it can adjust a well once per minute instead of once per hour, which led to an increase in field efficiency of five percent, or hundreds of millions of dollars. It was also able to eliminate three different host systems, each representing hundreds of rack servers, and move to a single source of truth for data with one large MQTT broker, simplifying the overall infrastructure significantly.

The misconception in the industry is that modernizing EFM requires a large financial investment along with a lot of time and effort. In fact, MQTT is open source, and it can be implemented on existing legacy equipment. OG

Figure 2: The basic MQTT architecture allows for unlimited clients over a publish/subscribe protocol.
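As Figure 2 suggests, any number of back-end consumers can subscribe to the same published data independently. The sketch below is a minimal paho-mqtt subscriber matching the hypothetical topic and JSON payload used earlier; each consumer (historian, analytics application, accounting system) could run its own copy and use the data however it likes.

```python
import json
import paho.mqtt.client as mqtt

# Wildcard subscription over the illustrative namespace from the publisher sketch.
TOPIC = "spBv1.0/ACME/DDATA/edge-node-1/#"

def on_message(client, userdata, msg):
    data = json.loads(msg.payload)
    print(f"{msg.topic}: {data['metric']} = {data['value']}")

client = mqtt.Client()  # paho-mqtt 1.x constructor; 2.x also takes a callback API version
client.on_message = on_message
client.connect("broker.example.com", 1883)
client.subscribe(TOPIC, qos=1)
client.loop_forever()
```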

Arlen Nipper is president and CTO of Cirrus Link, bringing more than 40 years of SCADA industry experience to the role. He was one of the early architects of pervasive computing and the Internet of Things and co-invented MQTT, a publish/subscribe network protocol that has become the dominant messaging standard in IoT. Arlen holds a bachelor's degree in electrical and electronics engineering (BSEE) from Oklahoma State University.


Spreadsheets weren’t designed for time series data analytics. Seeq is.

Time series data analysis poses unique challenges. With Seeq®, difficult and time-consuming work in spreadsheets is a thing of the past. Seeq's multiple applications enable you to rapidly investigate and share insights from data stored in multiple enterprise data historians, such as OSIsoft PI, Honeywell PHD, and GE Proficy, as well as contextual data sources such as SQL Server, Oracle, and MySQL. Seeq's support for time series data and its challenges – connecting, displaying, interpolating, cleansing, and contextualizing – relieves you of hours and days of fruitlessly searching for insights in your process manufacturing data. Seeq helps you get more value from the data you've already been collecting, and gives organizations data transparency and the ability to execute on those insights.
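As a generic illustration of the kind of cleansing and interpolation work described above (not Seeq's API or tooling), irregular historian data can be aligned and gap-filled in a few lines of Python with pandas; the tag name and values below are made up.

```python
import pandas as pd

# Hypothetical raw historian export: irregular timestamps, a gap, and a bad spike.
raw = pd.Series(
    [72.4, 72.6, None, 250.0, 73.1],
    index=pd.to_datetime([
        "2020-08-01 00:00:05", "2020-08-01 00:01:12", "2020-08-01 00:02:40",
        "2020-08-01 00:03:58", "2020-08-01 00:05:07",
    ]),
    name="separator_temp_degF",
)

cleansed = (
    raw.where(raw.between(-40, 200))   # drop physically impossible readings
       .resample("1min").mean()        # align onto a regular one-minute grid
       .interpolate(method="time")     # fill gaps with time-weighted interpolation
)
print(cleansed)
```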

Learn more at www.seeq.com

Asset Optimization

Situational Awareness

Investigation & Troubleshooting

Operational Excellence

© 2020 Seeq Corporation. All Rights Reserved.


EDGE, FOG & CLOUD

Flare stack monitoring made easy with edge-enabled video analytics

Real-time remote monitoring of a pressing concern in oil & gas

By Ramya Ravichandar

Photo 1: Emergent technology can help reduce flare stack emissions, alert workers, or even shut down operations when emissions are outside of an acceptable range. The Environmental Protection Agency (EPA) is enforcing a comprehensive set of regulations surrounding flares, particularly to reinforce the need to control hazardous air pollutants.

McKinsey estimates that effective use of digital technologies in the oil & gas industries could cut capital expenditure by up to 20%. At the same time, it forecasts that total cash flows will improve by $11 per barrel across the offshore oil and gas value chain, adding $300 billion a year by 2025. Even with the rapidly maturing solutions of the Industrial Internet of Things (IIoT), there are still areas of immediate and pressing concern for the oil & gas industry. One of these is monitoring flare stacks.

Gas flares pose a threat to both the environment and worker safety, but the current method of monitoring them is expensive and problematic, with no margin for error. Since humans are only human, errors can creep in. Errors can not only bring on environmental disasters, but they also put workers at greater risk. In addition to the time and labor issues, and the opportunity for human error, manual monitoring often causes a delay in identifying potentially life-threatening problems like equipment failure. Even though flares can now be monitored via streaming analytics, there still needs to be visual monitoring of the stack to maintain safety. Nothing changes in the end. What's needed is a fully automated system capable of both processing stack analytics and keeping a watchful eye on the flare itself. The deeper the dive into the data, the better the visibility and foresight into potential issues.

Digitalization of flare monitoring

That deeper dive brings IIoT technology to the surface. It's been estimated IIoT will have a $930 billion impact within the next decade. Regardless of exact numbers, the industry is investing heavily in IIoT. However, it's not as easy as simply installing sensors and cameras: oil & gas experts also need to process and glean insights from the data. IIoT devices generate data on a massive scale, which must then be transmitted (in most cases) to a central control center for processing. However, the transport, storage and processing of video data soon becomes prohibitively expensive. And if one is not going to take immediate action on flare anomalies, is it worth the opportunity cost of high latency? How can we ensure operators execute on actionable insights in time to prevent potentially catastrophic issues? There is a solution for processing video from flare stack monitors, and it involves edge computing and, more specifically, edge intelligence.

Video analytics comes of age

What type of solution eliminates having to store massive amounts of data and makes it possible to react in seconds rather than hours or days? It involves bringing computing to the edge. All sensors, cameras and other IIoT-enabled devices sit at the edge, typically the edge of the cloud. Edge computing takes the data center out of the equation by performing compute functions in situ and communicating directly with other devices and systems. By moving compute functions to the edge, network connectivity and speed are no longer an inhibiting factor. While the data is still transmitted to the central data center in the cloud or on-premises, it's possible to do the transfer in batch or send only the outlying data points. Most of the compute is done at the edge to spare bandwidth costs and relieve network congestion. Edge computing solves several problems inherent in collecting, processing and reporting on sensors, cameras and other IIoT devices that are probably scattered across your infrastructure.
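One simple flavor of "send only the outlying data points" can be sketched as a rolling statistical filter running on the edge device. The window size, threshold, and the send_to_cloud placeholder below are assumptions for illustration, not a specific vendor's implementation.

```python
from collections import deque
from statistics import mean, stdev

window = deque(maxlen=120)  # rolling window of recent readings kept at the edge

def send_to_cloud(reading: float) -> None:
    """Placeholder for a batched or immediate upload to the central system."""
    print(f"forwarding outlier: {reading}")

def process_at_edge(reading: float, sigmas: float = 3.0) -> None:
    """Keep normal readings local; forward only points outside the expected band."""
    if len(window) >= 30:
        mu, sd = mean(window), stdev(window)
        if sd > 0 and abs(reading - mu) > sigmas * sd:
            send_to_cloud(reading)  # only the anomaly costs bandwidth
    window.append(reading)
```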


The new green laser delivers these important benefits:
● Reduces vibration
● Eliminates downtime and lost production
● At an affordable price
● Visible indoors and outdoors
● Brightness great for long distances

Mr. Shims

your answer to better alignment for rotating machinery

1-800-72-SHIMS (1-800-727-4467)

www.mrshims.com

Belt/Sheave Laser Alignment System

But what happens when the need is to process streaming video for insights into flare stacks in particular? There's a common misconception that machine learning (ML) and artificial intelligence (AI) can only be performed by powerful, large-scale systems. But a high-powered server isn't necessary to take advantage of these technological advances. One of the ways edge intelligence differs from simple edge computing is the ability to perform sensor fusion with the incorporation of ML and AI. Add those capabilities together and something impressive emerges: edge intelligence. The inclusion of ML- and AI-based applications allows the most benefit to be drawn from IIoT-connected monitoring cameras. Edge-enabled ML and AI allow issues to be assessed and acted upon in real time on streaming camera data. Putting that kind of advanced intelligence at the edge of the network, where the stacks are, reduces the size and required memory by approximately 80%, enabling fast and efficient execution in real time.
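As a drastically simplified sketch of what an edge video analytic might do, the loop below grabs frames from a camera and flags any frame in which the bright (flame) region grows beyond a threshold. It assumes OpenCV is available on the edge device and substitutes a plain brightness heuristic for a trained ML model; the camera URL, thresholds, and alert function are illustrative.

```python
import cv2  # OpenCV, assumed installed on the edge device

CAMERA_URL = "rtsp://flare-cam.local/stream"  # illustrative camera address
FLAME_PIXEL_THRESHOLD = 200                   # grayscale intensity treated as "flame"
MAX_FLAME_FRACTION = 0.15                     # alert if flame covers >15% of the frame

def alert_operators(fraction: float) -> None:
    """Placeholder: raise an alarm to the control room or SCADA system."""
    print(f"ALERT: flame region covers {fraction:.0%} of the frame")

cap = cv2.VideoCapture(CAMERA_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flame_pixels = cv2.countNonZero(cv2.inRange(gray, FLAME_PIXEL_THRESHOLD, 255))
    fraction = flame_pixels / gray.size
    if fraction > MAX_FLAME_FRACTION:
        alert_operators(fraction)  # assessed and acted on locally, in real time
cap.release()
```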

Ensuring safety

Video analytics with edge intelligence for flare stack monitoring can help reduce flare stack emissions and immediately alert workers, or even shut down operations, when emissions are outside of an acceptable range. The Environmental Protection Agency (EPA) has produced and is enforcing a comprehensive set of regulations surrounding flares, particularly to reinforce the need for careful controls over hazardous air pollutants (HAPs). HAPs can cause health issues, such as cancer or birth defects, or serious environmental damage. Using advanced, intelligent video analytics to closely monitor flares helps ensure full compliance with EPA regulations, as well as protecting your people and the environment. OG

Ramya Ravichandar is vice president of products, FogHorn.


REMOTE OPERATIONS

Cloud solutions enable remote work productivity, without replacing SCADA

Use SCADA to best advantage, and the way it's meant to be used

By Eric Fidler

Not long ago, operations and facilities engineers went to work each day, logged into the supervisory control and data acquisition (SCADA) system, and went about their business of viewing production status and alarms to manage assets or facilities. They adjusted setpoints and dispatched technicians to make sure that production remained within targeted parameters. SCADA is an integrated system that directly controls a company's production operations, a function critical to any company. It traditionally is isolated on an operations network, separate from business IT networks, to reduce the risk of unauthorized access while connecting it with the control and automation assets. This isolation was considered necessary to avoid risk to the revenue stream. In early 2020, COVID-19 forced some tough decisions in this regard.

COVID-19 and the resulting stay-at-home orders for non-essential businesses forced companies to make risky decisions to maintain operations continuity. Engineers using SCADA systems were no longer able to go to an office to access the company's isolated operations network. IT and operations network professionals scrambled to come up with plans to enable access while executives grappled with whether to accept the risk of doing so. Security approaches that most had gotten comfortable with for their IT networks suddenly seemed risky to use for the operations network. They didn't have much choice. No one knew how long the orders would stay in place, and operations had to have a way to continue to manage facilities. COVID-19 stay-at-home orders, the potential for similar orders in the future, and changing behavior, as well as the concerns and preferences of the workforce, all conspired to make accommodations for remote operations part of the new normal for operations executives.

As executives come to terms with remote workers, concerns about employee productivity are inevitable. Although it may be natural to think their immediate concerns are whether remote employees maintain focus on work, the larger concern quickly becomes whether remote employees have what they need to be productive.

SCADA challenges

Opening SCADA up for outside access remains a highly debated decision. In addition, other enterprise employees need information SCADA contains as well, including for oversight, management and planning. SCADA wasn't designed to meet their needs, and they are often frustrated by formats and presentations they don't understand. Further, widening access to SCADA compounds the risks involved, and therefore heightens the terms of the debate. Some SCADA providers offer an optional web server add-on to provide secure visibility, but this modality doesn't address the format and presentation issues involved. The truth is that, given the risks involved, SCADA must be securely isolated on operations networks. Network isolation presents challenges to those personnel working remotely, yet isolation needs to remain the practice. Employees working at home can work more efficiently when furnished with a series of reporting dashboards that display simple visuals on process variables, operation status and output. Required actions can be initiated with a text or phone call to other employees, accelerating performance improvement in these difficult times.

Further, in many companies, SCADA has inadvertently become the repository for production data. While originally intended to support performance of specific tasks related to production control and visibility, it's ended up becoming a collection point for data. Remotely located subject matter experts and production engineers needing this information struggle to access SCADA data.


Once the raw data is obtained, it must be manipulated each time to provide the required performance insights. Testimony from operators in recent months reinforces their struggles with remotely managing operations with SCADA. They dislike exposing their operational network externally.

A better way

Cloud-based software-as-a-service (SaaS) solutions are lower-cost, accessible-from-anywhere alternatives to either building or licensing tools hosted in a company's own infrastructure. SaaS benefits apply in production operations as well. Solutions exist today that combine cloud with edge processing and that are quick to launch, scale and manage at a lower cost than traditional approaches.

Cloud solutions are designed from the ground up with secure accessibility in mind. Well-designed solutions have responsive pages that adapt to a variety of screen sizes. They have connectivity when and where they need it. Security of cloud solutions has been a concern since their introduction, but most solutions have strong encryption and other capabilities that meet or beat the level of security associated with financial institutions and comply with company requirements.

Rather than serving up raw or cryptic tag data, data brought into cloud solutions typically has context, so users can understand what is there and focus on what interests them (a brief sketch of this idea follows at the end of this section). Cloud solutions include modern user interfaces with built-in analytical tools and data visualization capabilities. Production accountants, production geologists, maintenance engineers, management and others can view current or historical data at any time.

A cloud solution such as that described improves worker productivity while leaving SCADA to do the job it was meant to do. SCADA and a cloud/edge solution together give employees in different roles access to what they need in ways that make the most sense to them. Although operations employees who use SCADA may also come to prefer cloud capabilities to monitor operations and know when they need to take action in the SCADA system, other employees will prefer the cloud information presentation and the ability to easily use preferred tools for analysis. This parallel system approach delivers the benefits of cloud and software as a service without disrupting existing operations.
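Here is the small sketch referred to above: a generic illustration of turning raw SCADA tag values into contextualized points that non-SCADA users can read. The tag names, units, and asset hierarchy are invented for the example and do not represent any particular vendor's data model.

```python
# Hypothetical raw tag values as they might arrive from the field.
raw_tags = {"W1042_PT01": 587.2, "W1042_TT01": 74.1}

# Context added on the way into the cloud solution: readable name, units, asset.
tag_context = {
    "W1042_PT01": {"name": "Tubing pressure", "units": "psig", "asset": "Well 1042"},
    "W1042_TT01": {"name": "Flowline temperature", "units": "degF", "asset": "Well 1042"},
}

def contextualize(tags: dict) -> list:
    """Merge raw values with their context so the points are self-describing."""
    return [
        {**tag_context[tag], "tag": tag, "value": value}
        for tag, value in tags.items()
        if tag in tag_context
    ]

for point in contextualize(raw_tags):
    print(f"{point['asset']} | {point['name']}: {point['value']} {point['units']}")
```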

What to look for

Some key areas to investigate or consider when evaluating cloud/edge solution providers to improve productivity of remote employees:

• Speed and ease of deployment: Cloud setup, edge deployment, edge setup, and management should be as easy as possible.
• Choose a partner that has professional services, including a framework to plan and manage deployment with high quality and repeatability.
• A solution that is configurable is preferred to those requiring programming. This reduces initial setup and testing, while decreasing long-term costs required to sustain the solution through future changes.
• A solution should have easy-to-use tools to manage software updates and configuration changes with minimum risk over the solution's lifetime.
• Select partners with an open, standards-based approach to integrating their platform with existing assets. This should include a combination of operations technology (e.g., automation controllers, sensors) and information technology standards.

Something new

Taking on something new is the last thing companies want to do right now. But with the right cloud/edge solution partner, they can expedite deployment, minimize disruption to operations, and improve productivity quickly by empowering employees to do their work remotely without having to struggle with SCADA. Here are things to look for in a fast-tracked solution:

• An open platform, with technology to ensure it is secure, that delivers timely data smoothly into the hands of personnel in a manner they are comfortable using.
• Right data, to the right people, at the right time, to speed decision making.
• Professional services with strategic planning for implementation, and platform features to manage a scaled deployment, to help you execute quickly and easily.
• Breadth of solutions to maximize ROI.

With the right partner, results can be seen in weeks, not months. OG

Eric Fidler is founder and chief strategy officer, Lavoro Technologies.


In this 2020 ABB webinar series, our experts are providing practical solutions to everyday challenges faced by operators and facility owners alike.

August 24, 2020

Until now, buying high horsepower motors meant a lot of customization, time and complexity. However, a custom-engineered motor doesn't need to be the answer for all high horsepower applications. In some cases, a pre-engineered general-purpose motor is the faster, more cost-effective option. On August 24, join us for "A faster approach to selecting high horsepower motors." In this live webinar, we will discuss the advantages of buying pre-engineered high horsepower motors. We will provide guidance on when a highly customized motor is needed and when it's not, and we will also present online tools that make the selection and ordering of high horsepower motors easier.

On-Demand Now

In many industrial sectors, a potentially explosive atmosphere can occur within many different processes and environments. To ensure safe operation when selecting an industrial motor for potentially hazardous locations, certain issues need to be considered. In this on-demand webinar, titled "Do you have the right motor for your explosive environment," we talk about what you must consider when selecting a motor for these types of locations. We discuss how materials and risk levels are categorized and how classifications define the minimum motor safety levels. You will also learn how to select and install the right motor to provide high performance, efficiency and reliability without ever compromising safety.

baldor.abb.com 479.646.4711


Technical resources you need from an automation vendor you can depend on

The Beckhoff team works hard to design and deliver the most advanced automation and controls technologies available. Of course, that is only half the battle as offering best-in-class education and training is also crucially important. A wide range of online resources are available 24/7/365 from Beckhoff for engineers:

E-Learning Portal - Beckhoff USA is pleased to announce the launch of an e-learning portal with a range of presentations on topics related to industrial automation – and you’re invited to join! These useful educational resources are free and open to engineers located in the U.S. Multiple classes are available, including topics from all Beckhoff product families: automation software, industrial PCs, I/O and drive technology. Each presentation is followed by a quiz to reinforce the topics covered and gauge what you’ve learned. No Beckhoff hardware is required to participate, but there are modules that permit students to use their own Beckhoff equipment during the training. Register for an account at learn.beckhoffus.com and start learning today!

Educational Webinars - Pressed for time? Beckhoff offers many webinars throughout the year on a wide range of interesting topics, particularly automation and controls programming, industrial Ethernet applications, tips for designing world-class motion control architectures and much more. Visit www.beckhoff.com/webinar to learn more. Don’t forget to visit the webinar archive to view the complete history of Beckhoff webinars anytime.

TwinCAT Engineering Environment - Programmers and engineers can download the base engineering module of TwinCAT 3, the leading PC-based automation software at no charge from Beckhoff (TE1000). Visit www.beckhoff.com/twincat3 to quickly download and install TwinCAT 3 on your programming and development PC today! For more information: www.beckhoffautomation.com


Detecting Water Carryover in Natural Gas

Instrumentation improves control of the water removal process from natural gas.

Figure 1: Natural gas well with piping.

Natural gas, as it comes from a well (Figure 1), contains water. Before transferring the gas into a pipeline for distribution, as much water as possible must be removed to eliminate carryover, and any remaining water must be detected so operators can be alerted. Although a separator is designed to remove all water from the natural gas, carryover sometimes occurs, and this condition must be detected quickly.

Detecting liquid carryover accomplishes a few things:

1. It shows how efficiently the mist eliminator is functioning in relation to gas flow rate.
2. A step change in the "moisture diagnostic variable" could indicate that the mist eliminator is damaged.
3. It reveals when proper gas-liquid separation isn't taking place and separator performance should be evaluated. The control system can then adjust dump cycle time and retention time, or adjust back pressure at the gas outlet.

Detecting liquid carryover allows a producer to adjust the separation process. Downstream plants are designed for certain gas properties, and wet gas could result in higher operating cost at the plant due to reprocessing and wasted energy. The Prosonic Flow G 300/500 flowmeter from Endress+Hauser produces a wealth of information that process and instrument engineers can use to detect water carryover, and to better understand how a particular separator is working and how to improve its control.
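A step change in the "moisture diagnostic variable" mentioned in item 2 above could be flagged with very simple logic in the control system or an edge script. The sketch below compares a short recent average against a longer baseline; the window sizes and threshold are assumptions for illustration, not Endress+Hauser parameters.

```python
from collections import deque
from statistics import mean

baseline = deque(maxlen=360)  # long window of the diagnostic variable (e.g. 6 h of 1-min samples)
recent = deque(maxlen=15)     # short window used to spot a step change

def moisture_step_detected(value: float, step_threshold: float = 0.2) -> bool:
    """Return True when the recent average steps away from the baseline,
    which could indicate mist eliminator damage or liquid carryover."""
    baseline.append(value)
    recent.append(value)
    if len(baseline) < baseline.maxlen // 2 or len(recent) < recent.maxlen:
        return False  # not enough history yet
    return abs(mean(recent) - mean(baseline)) > step_threshold
```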

www.us.endress.com • 888-ENDRESS • Info.us.sc@endress.com


We understand how you need to reduce complexities at your plant.

CLEAN PROCESS + CLEAR PROGRESS

You strengthen your plant's safety, productivity, and availability with innovations and resources.

Promass Q – for increased plant productivity
• Error-free flow measurement in custody transfer applications in mass or volume units, due to unmatched accuracy for density determination
• Ideal for hydrocarbons with entrained gas, thanks to the patented Multi Frequency Technology (MFT)
• Patented "Heartbeat Technology" for device verification during operation and permanent self-diagnostics

Do you want to learn more? www.us.endress.com/promass-q300

