

Taking American manufacturing to a new level of excellence. The most technologically advanced cold-formed tee-press in the world, and the largest in all of the Americas and Western Europe, is coming to Weldbend.


Investing in the future of the fitting and flange industry weldbend.com


COMMENT

3010 HIGHLAND PARKWAY, STE. 325 DOWNERS GROVE, IL 60515 630-571-4070, FAX 630-214-4504

CONTENT SPECIALISTS/EDITORIAL
KEVIN PARKER, Senior Contributing Editor, 630-890-9682, KParker@CFEMedia.com
EMILY GUENTHER, Associate Content Manager, EGuenther@CFEMedia.com
KATIE SPAIN, Art Director, KSpain@CFEMedia.com

PUBLICATION SERVICES
JIM LANGHENRY, Co-Founder & Publisher, JLanghenry@CFEMedia.com
STEVE ROURKE, Co-Founder, SRourke@CFEMedia.com
AMANDA PELLICCIONE, Director of Research, APelliccione@CFEMedia.com
ELENA MOELLER-YOUNGER, Marketing Manager, EMYounger@CFEMedia.com
KRISTEN NIMMO, Marketing Manager, KNimmo@CFEMedia.com
PAUL BROUCH, Director of Operations, PBrouch@CFEMedia.com
CHRIS VAVRA, Production Editor, CVavra@CFEMedia.com
MICHAEL ROTZ, Print Production Manager, 717-766-0211, Fax: 717-506-7238, mike.rotz@frycomm.com
MARIA BARTELL, Account Director, Infogroup Targeting Solutions, 847-378-2275, maria.bartell@infogroup.com
RICK ELLIS, Oil & Gas Engineering Project Manager, Audience Management Director, 303-246-1250, REllis@CFEMedia.com

LETTERS TO THE EDITOR: Please e-mail your opinions to KParker@CFEMedia.com
INFORMATION: For a Media Kit or Editorial Calendar, e-mail Susie Bak at SBak@CFEMedia.com
REPRINTS: For custom reprints or electronic usage, contact Marcia Brewer, Wright's Media, 281-419-5725, MBrewer@wrightsmedia.com
MAILING ADDRESS CHANGES: Please e-mail your changes to customerservice@CFEMedia.com

PUBLICATION SALES JUDY PINSEL, National Sales 3010 Highland Parkway, Ste. 325 Downers Grove, IL 60515

JPinsel@CFEMedia.com 847-624-8418 Fax 630-214-4504


Why the midstream cares about data management

Surveys show that only 50% of midstream companies consider data management a priority, according to the second of a series of reports on digitalization in oil & gas from Deloitte Insights. The report authors admit the outlook for the midstream already is promising, given strong growth in U.S. light tight oil production, natural gas production in shale fields, and emerging export opportunities for oil and liquefied natural gas. On the other hand, basin price differentials can make decision-making bearing on midstream infrastructure planning difficult, with misplaced investment falling prey to either the Scylla of stranded assets or the Charybdis of lost opportunities.

The midstream asset base includes about 2.7 million miles of U.S. oil & gas pipelines with an average asset age of 20 years. The industry's mechanical-centric operating culture hasn't yet translated digital concepts into grass-roots-level change.

Virtually yours

The Deloitte digitalization model proceeds from the mechanical to the virtual and then back again. It starts with developing a narrative focused either on assets or the value chain, aligning operational objectives with digital technologies. Storage operations seem, digitally speaking, ahead of other midstream operations, while terminal operations are ahead of tank management systems, the authors say.

Gathering line systems, the first receiver of hydrocarbons prior to processing and transport, are at the emergent stage of "sensorizing." What is typical today is availability of pressure and volume data from lease automatic custody transfer systems, with tasks such as leak detection done using linear balancing methods such as negative pressure wave, real-time transient models, and corrected volume balances.
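The corrected volume balance mentioned above can be sketched in a few lines. What follows is a minimal, hypothetical illustration, not any operator's implementation: the meter readings, correction factors, and alarm threshold are all assumed values.

```python
# Minimal sketch of a corrected volume balance for leak detection.
# Hypothetical values throughout: correction factors, linepack change,
# and the alarm threshold are assumptions, not field data.

def corrected_volume(raw_bbl: float, temp_factor: float, press_factor: float) -> float:
    """Correct a raw metered volume to standard conditions."""
    return raw_bbl * temp_factor * press_factor

# One hour of assumed custody-transfer data for a gathering segment.
inlet = corrected_volume(10_250.0, temp_factor=0.998, press_factor=1.001)
outlet = corrected_volume(10_180.0, temp_factor=0.997, press_factor=1.002)
linepack_change = 40.0  # bbl packed into the line as pressure rose

# Imbalance = corrected receipts - corrected deliveries - linepack change.
imbalance = inlet - outlet - linepack_change
LEAK_THRESHOLD_BBL = 25.0  # assumed alarm limit for this segment

if abs(imbalance) > LEAK_THRESHOLD_BBL:
    print(f"ALARM: {imbalance:+.1f} bbl imbalance exceeds threshold")
else:
    print(f"Segment balanced within tolerance ({imbalance:+.1f} bbl)")
```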


KEVIN PARKER SENIOR CONTRIBUTING EDITOR

Through data generation and integration, relationships are defined, which lead to meaningful insights. The authors point to the example of one U.S.-based service provider that assimilates diverse data sets on a virtual server, allowing users to define physics-based relations or calculations as well as run auto-tuning algorithms to refine results using non-physics-based concepts. This goes beyond leak detection and batch management to predicting optimal operating parameters, simulating product properties, and illustrating zone characteristics.

Not fade away

The authors also cite an example of four North American operators that are leveraging the GE Predix platform to integrate operational and economic aspects of production, transportation, storage, and contracts, and running scenarios in a virtual collaborative environment to evaluate the consequences of network changes on basin production and gathering lines.

The authors say deployment of sensors and communication networks is a prerequisite for trunk lines, an operation where computational monitoring via pressure, volume, and temperature analysis, controller monitoring via SCADA systems, and scheduled line balance calculations are "a regular affair." Although new pipelines often are pre-equipped with such technologies, there is a significant opportunity to upgrade legacy infrastructure throughout a pipeline network rather than only at key junctions. Obsolete legacy systems are exposing midstream companies to cyberattacks, especially the electronic data interchange systems used to encrypt, decrypt, translate, and track key energy transactions. OG


INSIDE

Cover photo courtesy: Superior Energy Systems. A Superior Energy Systems transloader (also known as a portable rail tower) extracting propane from a rail car to be loaded into transport trucks. This specific unit is located at Transflo in Westborough, Mass. Transloaders can also be utilized for butane and propylene transfer.

FEATURES

4 Oil terminal efficiency improved with high-capacity flowmeter
Coriolis flowmeters mitigate pressures placed on terminal, tank farm, and pipeline distribution facilities

7 With AI, zero failure is more than a pipe dream
Improve confidence with a complete view of data

10 Basic steps to take when applying analytics processing
Upstream oil & gas operations improved using data analytics

14 AI application developers target wellsite production optimization
Solutions range from predictive maintenance to artificial lift improvement

16 The oilfield of the future will include a mobile wireless network
Take a new approach to connectivity with wireless mesh

18 AC drives emerge as entry point for industrial digitalization
While bearings wear out frequently, pump applications are forever

20 Predictive analytics in the upstream introduced as a service
Solution identifies production and equipment problems before they become apparent



FLOW CONTROL IN THE MIDSTREAM

Oil terminal efficiency improved with high-capacity flowmeter

Coriolis flowmeters mitigate pressures placed on terminal, tank farm, and pipeline distribution facilities by U.S. oil industry expansion

By Mark Thomas

Figure 1: A Coriolis flowmeter is shown. All images courtesy: Endress+Hauser

Due to increased U.S. oil production, pipeline bottlenecks in Texas and New Mexico are forcing more oil onto trucks and rail cars, leading to transportation challenges. The increase in crude oil production also causes problems at tank farms, pipeline distribution facilities, and even oil terminals. In addition to increased U.S. production, the country still imports a large amount of crude. According to the American Geosciences Institute, the U.S. imports 10.4 million barrels of petroleum per day (MMBPD), with the largest amounts coming from Canada (42%) and Saudi Arabia (8%).

One main issue at oil terminals, tank farms, and distribution facilities revolves around restrictions in flow caused by conventional flowmeters. At one new oil terminal, Coriolis flowmeters are proving they are better able to handle flows through larger pipes, and thereby help address emerging challenges in crude oil production and distribution.

Domestic crude production

The Energy Information Administration (EIA) says the Permian Basin in West Texas and southeastern New Mexico will double production by 2023. Current production already exceeds pipeline capacity, so several pipeline projects are underway. The Eagle Ford shale formation in Texas is another huge source of crude. According to the latest report from the U.S. Geological Survey (USGS), the land sits on billions of barrels of untapped oil and natural gas. The USGS estimates these shale fields contain approximately



8.5 billion barrels of oil, 66 trillion cubic feet of natural gas, and 1.9 billion barrels of natural gas liquids. In 2018, the USGS said the Wolfcamp and Bone Spring formations in West Texas and New Mexico hold the most potential oil and gas resources ever assessed—more than twice as much as previously reported. The Permian and Delaware basins account for roughly one-third of all U.S. crude oil production. Estimates are now in the range of 46.3 billion barrels of oil, 281 trillion cubic feet of natural gas, and 20 billion barrels of natural gas liquids.

The International Energy Agency (IEA) said U.S. production of crude oil, condensates, and natural gas liquids will rise to 17 MMBPD by 2023, up from 13.2 MMBPD in 2017. Rising production means the U.S. may soon be energy self-sufficient, a huge shift from the recent past when the U.S. was the world's largest oil importer. Someday the U.S. may even be a net crude exporter, something that has not happened in 75 years. Crude oil imports have remained at 10 MMBPD since 2015, but instead of importing much of the crude from OPEC countries, the U.S. is importing more from Canada via pipelines, railroad cars, and tanker trucks, most of it bound for Gulf Coast refineries.

The challenge for oil terminals, tank farms, and pipelines is that they handle 2.5 MMBPD of crude today, but this will increase to 4.0 MMBPD by 2022, requiring more capacity at all facilities. One bottleneck for all this increased capacity is the ability of flowmeters to handle the increased flow.

Flowmeter challenges

Mechanical meters with rotating vanes or gears were the workhorse of the pipeline flow metering business for decades.


Such meters are large and heavy, must have upstream dirt filters, wear out expensive rotating parts, require regular maintenance, don't work with gas, and are not "smart" instruments. Device accuracy is usually around 0.25%. All mechanical meters share common limitations:

• Maintenance needed due to moving parts and other issues
• Reliable only with lubricating and clean fluids
• Sensitive to changes in process parameters
• High installation cost.

Regular maintenance is required on mechanical meters to replace worn or damaged parts. Parts such as bearings must be lubricated, inspected, or replaced on a regular basis to ensure accuracy and performance. Other parts—such as pistons, gears, and turbine blades—must also be checked for damage and replaced over time. Maintenance often is overlooked at the time of initial investment, but it is likely the biggest cost in the life cycle of a mechanical meter because maintenance requires extensive downtime and constant replacement of sometimes expensive parts.

Mechanical meters also are sensitive to process parameter changes, such as in temperature, pressure, and viscosity. Parameter fluctuations can affect meter accuracy, performance, and life. More and more terminals are looking for compact flowmeters that can fit in tight areas, are easy to repair and maintain, and provide exceptional accuracy, while also measuring multiple parameters and offering advanced diagnostics.

To address these issues, oil terminals and other midstream facilities are turning to Coriolis flowmeters.

Terminal chooses Coriolis

For example, one new Gulf region oil terminal receives Gulf crude oil and distributes it to five local refineries via pipelines. The terminal accommodates multiple oil tankers at a time using an appropriate number of unloading piers. It needed reliable and repeatable

flowmeters with a wide turndown range, capable of handling increased flow capacities from 24-in. pipelines. Endress+Hauser and its partners met with the terminal operator and management to discuss options for these high-capacity flowmeters. Promass X Coriolis flowmeters were recommended due to their ability to meet high-capacity crude oil flow rates, maintain a low pressure drop, and remain accurate across a wide turndown ratio.

The involved companies worked with a local engineering firm to design unloading skids. The skids were installed at the shore end of the piers, where low sea levels meant soft soil, so weight minimization was important: the more weight, the more concrete was needed for the foundation. By reducing size and weight, the need for additional support structures could be reduced. A lightweight, large-capacity mass flowmeter helped reduce the skid's weight. Each Coriolis meter was installed in a horizontal position with its underside facing up. This departure from the normal vertical mounting reduced the size of the skid by almost 50%.

The project called for ten 12-in. Coriolis flowmeters mounted on five skids to handle crude oil being unloaded from tankers. Using the Coriolis flowmeters, terminal operators confidently move and distribute crude oil at high rates, while decreasing the size and weight of the metering skids. The five skids and 10 Coriolis flowmeters handle a total of 167,000 barrels of oil per hour. With Coriolis meters applied in this high-capacity crude environment, the terminal reliably and accurately tracks the crude entering the facility, with proper allocation of oil to local refineries.
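A quick check of the per-meter duty implied by those figures (the barrels-per-day conversion is added here for context):

```python
# Per-meter duty implied by the terminal figures cited above.
total_bph = 167_000  # barrels per hour across the whole terminal
meters = 10          # 12-in. Coriolis meters on five skids

per_meter_bph = total_bph / meters
per_meter_bpd = per_meter_bph * 24

print(f"{per_meter_bph:,.0f} bbl/hour per meter")  # 16,700 bbl/hour
print(f"{per_meter_bpd:,.0f} bbl/day per meter")   # 400,800 bbl/day
```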

Figure 2: By using four small tubes instead of two large tubes, a four-tube Coriolis meter is lighter and smaller, making it easier to fit on skids.



The four-tube Coriolis

Coriolis flowmeter capacity can be increased by making the measuring tubes larger. However, larger measurement tubes result in bulky Coriolis devices, which can be demanding to install due to their weight and required space. Rather than upscale an existing two-tube Coriolis design for higher capacity, a patented four-tube design was used. Instead of two large measuring tubes, each flowmeter uses four smaller tubes. By doing so, 68% of the pipe's cross-sectional area can be used, allowing a more compact design than a two-tube system. Four-tube Coriolis meters are now available in sizes up to 16-in. with a capacity of 720,000 B/D, accuracy of 0.05%, and repeatability of 0.025%.

Advantages of a four-tube Coriolis flowmeter over mechanical meters include:

• Measurement is independent of density, viscosity, and flow profile
• Provides both volume and mass flow rate


• Typically handles higher temperature and pressure
• Better turndown
• No regular maintenance required
• No upstream piping requirements
• Best basic accuracy of any oilfield meter
• Patented Reynolds number corrections
• Measurement of density and other fluid-quality parameters
• Advanced diagnostics, process monitoring, and built-in verification.

With these advantages in mind, four-tube Coriolis flowmeters are the measurement technology of choice in midstream oil & gas and other demanding applications. OG

Mark Thomas is the oil and gas industry manager for Endress+Hauser USA. He is responsible for business development and growth of the company's position in the oil & gas industry. He graduated from Texas Tech University in 2003 with a BA and earned his MBA in 2008.

Cognitive Integrity Management

End-to-end pipeline integrity management through the assistance of data science and machine learning.

• Regulatory compliance
• Fully integrated enterprise platform
• Exceed peer financial performance
• Performance/insight reporting

1-877-261-7045 onebridgesolutions.com/try-cim


MIDSTREAM INTEGRITY

With AI, zero failure is more than a pipe dream

Improve confidence with a complete view of data

By Tim Edward and Rob Salkowitz

Everyone in our business knows—or ought to know—about the pipeline maintenance crisis that puts billions of dollars, lives, property, and the reputation of the midstream oil & gas industry at risk, leading some in the public to call it a "ticking time bomb." Statistics indicate tens of thousands of miles of pipe are decades beyond their predicted end-of-life, scattered so wide and buried so deep that just finding them on a map can be a problem.

No one is happy with this situation, but it's not easy to solve. Pipeline integrity teams already are asked to perform miracles with the data generated from traditional inline inspection (ILI) tools, analyzing vast spreadsheets that typically represent only 5% of the data collected. What if the answers are in the other 95%? What happens when new laser scanning technology increases the volume of data exponentially, without any new tools to make sense of it all? And what happens when the senior-level experts who have been keeping everything running for decades retire to spend their days playing golf and passing time with their grandchildren?

Those are the kinds of problems keeping risk managers, CFOs, and CEOs awake at night. Today's expertise and technology are, at their very best, able to hold the line against catastrophic failures. Given the challenges ahead, is it even conceivable to imagine reducing failure risk to zero? We think it is. Here's why.

Digital transformation operators

Folks use the term "digital transformation" as a buzzword for the investments being made in technologies such as artificial intelligence (AI), machine learning, augmented reality, robotics, and wearables across the business world. McKinsey and Accenture recently estimated digitalization

has the potential to create as much as $1 trillion in value for oil & gas companies. AI may be the technology promising the most impact, particularly when applied to pipeline management. AI systems aren't actually "intelligent," of course, but they are trained to recognize patterns in Big Data sets that look like problems or issues worthy of human attention. At scale, they can churn through billions of data records to spot everything from suspicious network activity to fraudulent financial transactions, if they know what to consider a problem. The more data these systems look at, the more precise and confident they become in finding what they're looking for.

That has big payoffs for pipeline management. First, you will analyze 100% of your data, as compared to the roughly 5% organizations leverage today to assess safety risk, and can increase that analytical capacity at almost no marginal cost. In fact, costs probably will be reduced by automating the most time-consuming, repetitive parts of analysts' jobs, freeing up their time and effort.

Second, depth of confidence improves based on a more complete view of the data. Think of the millions of dollars that pipeline operators waste every year because they don't believe their data and err on the side of safety by digging where they don't need to. Sure, it's better to be safe than sorry when it comes to risk exposure. On the other hand, there are financial costs to being over-cautious—costs that organizations can eliminate by having clear and precise data on the exact status of every square inch of pipe along thousands or tens of thousands of miles of run.

Bringing data into the business

When we first started OneBridge in 2015, we felt we had a good handle on the technology required to automate the repetitive



Figure 1: Artificial intelligence is being applied to pipeline management based on its ability to identify patterns in data that are not always intuitively obvious to humans. Graphic courtesy: OneBridge

tasks related to pipeline data analysis that experts hate. Now that we've spent several years working with some of the top companies in the business, more capabilities have been added to our system to benefit operators on everything from assessment planning to integrity compliance to threat monitoring. With each new release, including the latest, Cognitive Integrity Management 3.1, the product is getting smarter about how it analyzes and identifies problems, and about how to make adoption easier for customers.

These capabilities take the platform beyond the basic ability to interpret pipeline data by bringing that data into the mainstream of the business. For workgroups and managers responsible for assessment planning, it's easier to automate, schedule, and monitor many of the project tasks associated with maintenance. Tools have been incorporated that enable regulatory compliance by monitoring a range of technical threats and documenting the integrity of business processes end-to-end. New monitoring and reporting features have been added that empower teams to collaborate around threat monitoring data, smoothing the path from insight to action.

Listening to customers, it's clear they want better reporting, data visualization, and integration with standard analytics tools such as Microsoft Power BI. Along these lines, the ability of OneBridge to interoperate with other systems across the enterprise has been enhanced. Moving forward, part of the mission will be looking for unique and exciting ways to get that data to field crews to make digs faster, more precise, and less costly, so operators

can maintain their networks with greater confidence and less expense.

Getting to zero failures

For some people, AI can have sinister connotations, including the replacement of humans with computers. Rest assured, AI tools are about getting customers to zero failures, not zero employees. For the past 20 years, pipeline integrity experts have been waging a valiant battle against aging infrastructure with inadequate tools and inadequate resources, incurring the wrath of management when they fail but garnering little notice for their daily triumphs, despite the odds. As those people head toward a well-earned retirement, organizations need to embed their knowledge and experience in systems so that their successors can step into new roles without any reduction in operational excellence.

Investing in AI-based tools can move that process forward while delivering the other benefits of greater confidence, greater data coverage, and reduced risk. It also demonstrates to next-generation workers that the company is willing to invest in modern solutions. No one can stop the hands of time as they affect our pipelines, organizations, or workforce, but interested parties can get out in front of the problem instead of giving everything they've got just to fight it to a standstill. AI is the industry's secret to stop playing defense and start moving toward the goal of zero failures. It's not a promise, a vision, or a pipe dream. It's here today and getting smarter by the minute. OG

Tim Edward is the co-founder and president of OneBridge Solutions. Rob Salkowitz is an author, educator, and consultant whose work focuses on the social and business impact of technology innovation.
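To make the pattern-screening idea concrete, the sketch below flags pipe anomalies whose wall-loss growth between two inline-inspection runs is unusually fast. This is a generic illustration, not OneBridge's algorithm; the anomaly IDs, depths, and growth limit are invented.

```python
# Illustrative screening of matched ILI anomalies for fast corrosion growth.
# Not OneBridge's algorithm; all data and thresholds are invented.
YEARS_BETWEEN_RUNS = 5.0
GROWTH_LIMIT_PCT_PER_YR = 2.0  # assumed screening limit, % wall loss/year

# (anomaly id, % wall loss at run 1, % wall loss at run 2)
matched_anomalies = [
    ("J0102-A1", 18.0, 22.0),
    ("J0456-C3", 30.0, 31.0),
    ("J0789-B2", 12.0, 27.0),  # growing fast
]

for anomaly_id, run1_depth, run2_depth in matched_anomalies:
    growth_rate = (run2_depth - run1_depth) / YEARS_BETWEEN_RUNS
    if growth_rate > GROWTH_LIMIT_PCT_PER_YR:
        print(f"{anomaly_id}: {growth_rate:.1f}%/yr -> prioritize for dig")
    else:
        print(f"{anomaly_id}: {growth_rate:.1f}%/yr -> continue monitoring")
```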




ADVERTISEMENT

Todd Lucey
General Manager, Endress+Hauser USA

As market trends and customer needs grow and change, we must have solutions readily available for our customers before issues arise. It is our top priority to focus on what we can do to better understand our customers' goals and challenges. We listen to feedback and develop solutions to ensure customers' critical processes run without interruption.

One trend we have seen our customers focus on is the skills gap in manufacturing. With our customers' concerns in mind, we proactively engage with youth starting in intermediate school, educating them on opportunities in advanced manufacturing. Endress+Hauser hosts an annual event where students get the opportunity to speak with educators, manufacturers, and industry partners, so they understand their future possibilities in our industry.

We partner with local colleges and universities to change curriculum and gear it more toward advanced manufacturing. We encourage students who have recently graduated with a STEM degree to enroll in our Rotational Engineering Program. This program allows our company to accommodate and anticipate our customers' future needs. During the program, our engineers receive a well-rounded education covering nearly every major area within our organization. After they have completed the program, they become a resource we are able to place where our customers see demand.

In addition to the Rotational Engineering Program, Endress+Hauser, in partnership with the Industry Consortium for Advanced Technical Training (ICATT), will launch a new apprenticeship program in 2019. The three-year program will split time between working and training at Endress+Hauser and attending related college courses. After completion, the program offers a two-year employment commitment.

Endress+Hauser has also made significant investments in state-of-the-art training facilities, with ten PTUs® (Process Training Units) in various locations throughout the US. Each PTU has more than 120 Endress+Hauser instruments installed to measure flow, level, pressure, and temperature, along with various analytical parameters.

Our PTU (Process Training Unit) in Greenwood, Indiana

Customers, students, and employees can use the PTUs for hands-on training, learning experiences, and field trips. And recently, Endress+Hauser has partnered with another industry leader to provide educational training to military veterans at no cost.

As we continue to develop an understanding of our customers' needs, we are making certain our offering of services and solutions adapts to, meets, and surpasses expectations. It is important we continue to build on growth, anticipate our customers' needs, and strive for excellence while gaining trust in improving our customers' processes and products sustainably and efficiently.

Tel. 888-ENDRESS, 317-535-7138 info@us.endress.com www.us.endress.com


DATA MANAGEMENT IN THE UPSTREAM

Basic steps to take when applying analytics processing

Upstream oil & gas operations improved using data analytics

By Michael Risse

Figure 1: Data analytics typically follows this series of steps, a process greatly simplified by using the right software. All images courtesy: Seeq

Reducing the break-even price of U.S. shale oil production requires the intelligent application of what Goldman Sachs refers to as "brawn, brains, and bytes." Any discussion related to bytes must address use of data analytics to accelerate insights by engineers and other experts into Big Data. These insights can improve operations, increase safety, and cut costs. According to McKinsey, these types of improvements represent a $50 billion opportunity in upstream oil & gas, including increasingly important shale oil.

The brawn stage of shale oil innovation started around 2013, including longer horizontal wells and fracking with more sand and horsepower. The brains stage included better horizontal well placement and targeted, optimized fracking. These innovations were responsible for reducing break-even pricing from $70 per barrel in 2013 to $50 per barrel


in 2017. At the same time, production in key U.S. shale plays rose from 2.4 million barrels per day (MMBPD) in 2013 to 4.6 MMBPD in 2017. Further reducing the break-even price to $45 per barrel and increasing production to about 7.7 MMBPD will require more brains, but also will rely heavily on bytes. These bytes can improve operations in several areas. This article discusses two of them, production monitoring and preventive maintenance, but let's first look at how data is gathered and stored in preparation for analysis.

Collecting data

Production monitoring and preventive maintenance each require acquisition of data from sensors, wired and wireless. Discrete sensors indicate whether an item of equipment, such as a pump, is on or off. They also are used commonly to indicate open/closed status, as with a valve.


Typical analog sensors measure pressure, temperature, flow, and density—parameters of considerable interest to shale producers. Analytical analog sensors are used more sparingly, most often to measure the chemical composition of oil.

Sensors can be wired or wireless. Traditional wired sensors work well in many applications but, as the name implies, they have a drawback: the requirement to connect them via cabling and wiring. This is particularly problematic for retrofit applications at existing sites. Discrete sensors transmit their on/off or open/closed status to monitoring systems via a single pair of wires. Smart discrete sensors transmit not only status but also sensor condition via a digital communications link.

Wired analog sensors also are either standard or smart. Standard analog sensors transmit a single process variable, for example a pressure reading from a pressure sensor, to a monitoring system, usually via a 4-20 mA signal. Smart analog sensors transmit a wealth of data, up to 40 parameters for a sophisticated sensor such as a mass flow meter. For example, a typical Coriolis mass flow meter will transmit mass flow as its process variable, plus density and temperature. Diagnostic data indicates meter condition and shows when the meter was last calibrated and when it should be again.

Wireless sensors were introduced about a decade ago, and both discrete and analog versions are smart. For industrial applications, the two main wireless protocols are ISA100 and WirelessHART. Although wireless is relatively new, there are well over 30,000 WirelessHART networks worldwide, with more than 10 billion operating hours. Sensors and networks collect data that must be stored and often shared, tasks that technology advances make easier.

Storing and sharing data

Not long ago, storing the vast amounts of data generated by a shale drilling site was expensive. Costs have come way down for both on-premises and cloud storage. On-premises storage is typically on a server-class PC connected to the monitoring PC via a hardwired Ethernet connection. The server-class PC hosts one of the many popular time-series databases, such as OSIsoft PI. Unlike relational

databases, time-series databases store huge amounts of real-time data efficiently. Data stored on-premises often is needed at central locations, such as a control center, and may be transmitted via many different means, including cellular and satellite networks. In like manner, data may go directly from a local PC-based monitoring system to the cloud, which has many advantages over on-premises data storage. Costs per unit of storage are lower, and storage can scale as required. Once in the cloud, data is accessed worldwide via any Internet connection. Accessing either on-premises or cloud-based data remotely presents some security issues that, while not insurmountable, are outside the scope of this article. Now that data has been collected, stored, and shared, it can be analyzed to improve operations.

Improve and implement

Many oil & gas companies are overwhelmed by the sheer volume of data collected. Despite claims by some suppliers to the contrary, it's not possible to simply turn AI or machine learning software loose on data and get useful information. Instead, exploiting data analytics must follow a multi-step process, shown in Figure 1 and described below.

Connecting to data is easier when using data analytics software with secure, pre-built connectors to the databases used. When evaluating data analytics software offerings, make sure pre-built connectors link to existing and anticipated databases. Automatic linking to databases allows Google-like searches for parameters and time periods of interest. Otherwise, custom code must be written to link the analytics software to the database, an expensive and time-consuming task.

Data cleansing requires aligning data sources on the same time scale and validating data quality. Doing so can consume up to 50% of the time required for gaining insights, depending on the nature of the existing data. Data analytics software should come with built-in data cleansing tools. Tools should be specific to the process industries and be usable by a process engineer with a limited background in signal processing methods such as spike detection, low-pass filtering, managing intermittent bad values in data sets, and others (a generic sketch of two of these steps follows the sidebar below).



Use cases in brief

Optimizing oil collection
PROBLEM: A company collected oil from scattered well sites but never optimized pick-up routes to match the erratic nature of production. Internal analysis efforts using level data from sites proved fruitless.
SOLUTION: Using programmatic analysis, production data from the sites is monitored by watching the rate of change, with results used to predict the optimum time to send a truck to a given location. Truck calls are more efficient, and reports are generated automatically.

Well pump analysis
PROBLEM: Flow assurance engineers watching for undesirable pumping conditions had difficulty analyzing well production data from a large group of sites. A mathematical model could perform the calculations but typically took an entire day, delaying corrective action.
SOLUTION: With improvements to the model, the same calculations can be done in about 30 minutes. This makes it far easier to identify problem situations and evaluate the effectiveness of corrective measures, improving production overall.

Well performance analysis
PROBLEM: Flow assurance engineers knew specific attributes of crude oil from a given well were predictors of equipment performance issues such as clogging, fouling, and corrosion—but could not develop adequate mathematical models for accurate predictions.
SOLUTION: Using data from a large group of wells, the solution ties oil characteristics to equipment performance, helping operations and maintenance departments recognize when and how a change in the crude produced is likely to cause equipment performance problems.

Rotating equipment evaluation
PROBLEM: Even with all the diagnostic sensors applied to large rotating equipment installations, users had difficulty getting useful information beyond the most basic alarms. Performing the required sophisticated analyses proved elusive with conventional tools.
SOLUTION: Using process analytics, analysis zeroed in on root causes quickly and effectively, eliminating the problematic first-principles models and false positives common with less sophisticated analytical approaches. It is far easier to determine optimal operating conditions and avoid outages.
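Two of the cleansing steps named above, spike detection and low-pass filtering, can be sketched generically. This is not Seeq's implementation; the window sizes and threshold are assumptions.

```python
# Generic sketch of two cleansing steps: spike removal via a rolling-median
# test, then a simple low-pass (moving-average) filter.
# Not Seeq's implementation; windows and thresholds are assumed values.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
tag = pd.Series(100 + rng.normal(0, 0.5, 500))  # stand-in for a pressure tag
tag.iloc[[50, 260]] = [150.0, 20.0]             # inject two spikes

# Spike detection: flag points far from a rolling median, then interpolate.
median = tag.rolling(11, center=True, min_periods=1).median()
spikes = (tag - median).abs() > 5.0             # assumed spike threshold
cleaned = tag.mask(spikes).interpolate()

# Low-pass filter: moving average to suppress high-frequency noise.
smoothed = cleaned.rolling(21, center=True, min_periods=1).mean()

print(f"spikes removed: {int(spikes.sum())}")
```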

Capturing context relates each data point to others. A relational database does this upon setup and creation, with each data point's relationships to others defined. With time-series databases, each data point is time-stamped, but no relationships are established among data points. Capturing context adds the relationships for each data set as it's pulled from the database into the data analytics software. Once again, the tools must be intuitive for process engineers, with no assistance required from data scientists or IT experts.


Today's most popular data analytics tool is the spreadsheet, but analyzing time-series data with this general-purpose tool is time-consuming and requires expertise with macros, pivot tables, and other arcane spreadsheet functions. Furthermore, the data volumes handled by spreadsheets typically limit the types of analysis possible. Software designed for analysis of time-series process data is needed. The software should support subject-matter experts (SMEs) with visual representations of the data of interest, allowing direct interaction with the data using an iterative procedure (Figure 2). SMEs can then rapidly perform calculations on the data, search for patterns, analyze different operating modes, and so forth.

Capture and collaborate capabilities give SMEs the means to share results with colleagues. This not only brings multiple minds to bear on a problem but also supports knowledge transfer. Annotated captured results allow others to follow the trail that generated the original insights.

Extensibility provides the flexibility to use a data analytics solution anytime and anywhere. A browser-based interface means the look and feel are the same whether on an office PC or a tablet in the field. "Run-at-scale" means the data analytics software works with the largest data sets to solve the most complex problems. In extreme cases, the software runs on multiple servers to harness the processing power and local data storage needed. This capability will become more important as deployment data volumes and problem complexities grow. Finally, the SME may want to establish monitoring applications that alert stakeholders to specific operating conditions, providing early warning and driving faster corrective action.

Detailed use case

Pioneer Energy is a service provider and original equipment manufacturer solving gas-processing challenges in the oilfield with gas capture and processing units for tank vapors and flare gas. Pioneer operates and monitors these geographically disperse units from its headquarters in Lakewood, Colo., analyzing the results to deliver continuous improvement.


Their FlareCatcher system is powered with a natural gas generator housed inside a trailer. Fuel gas for the generator can be any of FlareCatcher's refined energy products, representing only about 5% of the total energy of the gas processed by the equipment. Pioneer has systems installed in the western United States. Future sites could be anywhere in the world with cellular or satellite connectivity. Alternately, a local radio network could get the data to a network hub.

Well-site data is sent to a local data center with built-in redundancies in power and networking services. Pioneer has data centers in Denver and Dallas. It is investigating virtualization to add dynamic scaling and load balancing to improve field data gathering. Analog data is transmitted at one-second intervals and discrete data is transmitted as it changes, but Pioneer had no sophisticated data analysis tools. If engineers found themselves with free time, they manually loaded historical data into Microsoft Excel spreadsheets to calculate a few basic metrics. But Excel is not suitable for calculations of reasonable complexity, so much of the data gathered was not exploited for value.

Pioneer selected Seeq's advanced analytics application because it manifested what they envisioned. It has a graph database, time-series optimization, a clean browser-based interface, as well as advanced data analytics and information sharing capabilities. The decision was easy after seeing the visual pattern search tool demonstrated.

The solution enables Pioneer to optimize the data stream. Simple computations performed at the edge determine what data is streamed to headquarters for analysis and what is archived locally. The system analyzes historical data to define rules for operating parameters. In a continuous improvement cycle, all data has potential value if unlocked and leveraged.

Seeq is the environment for experimentation and learning. Visual feedback allows engineers to analyze complex data in a reasonable amount of time. For example, Pioneer's refrigeration systems are very sensitive to changing operational conditions. The solution allows Pioneer to isolate these effects, identify their causes, and develop simple operational rules to extend the life of its capital investment.

Pioneer delivers value by operating systems remotely. If the software identifies a problem with field equipment, corrective action can be taken quickly. For instance, Pioneer uses air-cooled cascade refrigeration systems. During hot days, discharge temperatures and pressures can rise to elevated levels, leading to hardware failure. By detecting this condition, the software allows operators to intervene by reducing system throughput.

All well-site data is streamed to a centralized, secure data center where the server resides. The interface is available via a web proxy server. Pioneer technicians and engineers can access the data anywhere there is a network connection, including at the well site, given a cellular hot spot. The analytics software installation improves operational intelligence, shedding light on otherwise complex processes. The challenge now is deciding what mystery to tackle next. OG
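Pioneer's edge triage, deciding locally what to stream and what to archive, can be pictured with a toy rule like the one below. The tag and pressure limit are invented for illustration, not Pioneer's actual logic.

```python
# Toy illustration of edge triage: stream a reading to headquarters only
# when it breaches an (assumed) operating limit; otherwise archive locally.
DISCHARGE_PRESSURE_HIGH = 310.0  # psig, assumed hot-day alarm limit

def triage(reading_psig: float) -> str:
    if reading_psig > DISCHARGE_PRESSURE_HIGH:
        return "stream"   # send now for immediate analysis
    return "archive"      # keep locally for later batch upload

for psig in (295.0, 305.0, 318.0):
    print(f"{psig:6.1f} psig -> {triage(psig)}")
```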

Figure 2: Providing subjectmatter experts with visual representations of data allows them to interact directly to solve problems.

Michael Risse is a vice president at Seeq Corp. He has been a consultant with Big Data platform and application companies, and prior to that worked at Microsoft for 20 years. Michael is a graduate of the University of Wisconsin-Madison.


SOLUTIONS CHART

AI application developers target wellsite production optimization

Solutions range from predictive maintenance to artificial lift improvement

The number of analytic, machine learning, and artificial intelligence (AI) application suppliers specific to the oil & gas industry keeps growing. Brief descriptions of a generous selection of them appear in the listing below.

Ambyint
Calgary, Alberta and Houston

Offers an artificial-intelligence-driven, artificial-lift optimization solution that helps exploration and production (E&P) companies reduce costs while improving production. The solution integrates physics-based analytical capabilities with best-in-class data science and machine learning to deliver optimized production. Ambyint has a 12-year operating history delivering artificial lift control and monitoring solutions to E&Ps and over that time has gathered a high-resolution data set, currently totaling almost 100 million pump operating hours, or 45 terabytes (TB). OG

Arundo
Houston

Proprietary software for asset-intensive industries enables "edge"-state streaming and analytics, rapid cloud deployment of machine learning models, and enterprise-scale model management. Customers take control of their data across a broad range of assets, with context provided by asset hierarchies that help take advantage of trapped or previously unutilized data. OG

Cloudera
Palo Alto, CA

Cloudera, the enterprise data cloud company, recently announced completion of its merger with Hortonworks, Inc., allowing it to run in any cloud, from the edge to AI, on a 100% open-source data platform. An enterprise data cloud supports both hybrid and multi-cloud deployments, providing enterprises with the flexibility to perform machine learning and analytics. OG

AKW Analytics Inc. (AKW)
New York

Provides Big Data analytics and machine learning software systems and services to increase production of upstream and midstream pipeline gathering operations. Technologies combine machine learning and optimization into the PALM (Petroleum Analytics Learning Machine) software product suite, which manages a set of applications for multivariate analysis of combined datasets from geology, geophysics, rock physics, reservoir modeling, drilling, hydraulic fracture completions, production, and gathering for delivery to markets. OG

C3 IoT
Redwood City, CA

C3 Platform delivers a comprehensive platform as a service (PaaS) for rapidly developing and operating Big Data, predictive analytics, AI/machine learning, and IoT software-as-a-service (SaaS) applications. C3 also offers a family of configurable and extensible SaaS products developed with and operating on its PaaS. OG

Detechtion Technologies
Houston

Industrial IoT and mobile application provider enabling the digital oilfield offers solutions for chemical injection, compression, and other production operations. Its monitor, protect, control, and optimize paradigm allows customers to automate assets with a single hardware device. Over 100 customers and thousands of users depend on the harnessed technologies to monitor and manage over 10,000 assets worldwide. OG


eLynx Technologies
Tulsa, OK

Recently announced the commercial launch of predictive analytics as a service (PAaaS), a suite of predictive maintenance software products that forecast oilfield problems—everything from when equipment is about to break to downhole events such as liquid loading—before they happen. The oil & gas industry can leverage the platform—combining predictive analytics as a service and SCADA—to achieve significant gains. The plunger lift predictive maintenance product, for example, saved one producer $710 per month per well. OG

Falkonry
Sunnyvale, CA

Helps companies improve the performance, throughput, quality, and yield of operations by discovering patterns hidden in existing operational data and delivering actionable analytical insights. The ready-to-use machine learning system can be deployed and used directly by practitioners such as process engineers or chemical engineers and does not require a data scientist. Falkonry says it is like a "data scientist-in-a-box" and is designed to complement existing operational infrastructure. OG

Seeq
Seattle, WA

Noting that many processes are "data rich, information poor" and that the number will increase with new sensor deployments and higher data creation rates, Seeq's vision is to close the gap between advancements in data and computer science, including Big Data and machine learning, and deliver innovation as features in easy-to-use, advanced analytics applications. OG

SparkCognition
Austin, Texas

Leverages cutting-edge machine learning techniques to provide predictive maintenance capabilities. Its AI platform learns from data to understand operational states and failure modes of assets and uses this intelligence to warn of impending asset failures. This allows operators to plan corrective actions and optimize budgets. OG

Tachyus
San Mateo, CA

Platform helps producers optimize production across environments that include secondary and tertiary oil recovery in complex reservoirs. Engineers use the platform to integrate all relevant data sources in real time, explore millions of scenarios, and identify operational and development plans. Solutions include cyclic steam, steamflood, and waterflood optimization. OG

Lavoro Technologies
Houston

As a software development company in Houston, Lavoro's objective is to help oil & gas companies maximize the value of oilfield production assets, offering solutions to optimize production, find asset efficiencies, and assure peak production recovery. Standardized software applications integrate data from the oilfield and deliver executable information to enterprise and back-end systems. OG

TrendMiner
Houston

Delivers self-service data analytics to optimize process performance in oil & gas and other industries. Software is based on a high-performance analytics engine for captured time-series data. Plug-and-play software adds value on deployment, eliminating the need for infrastructure investment or long implementations. Diagnostic and predictive capabilities enable users to speed up root-cause analysis, define optimal processes, and configure early warnings to monitor production. OG

Maana
Menlo Park, CA

Maana's vision is to encode the world's industrial expertise and data into new digital knowledge for millions of experts to make better and faster decisions while operating the assets of Global Fortune 500 companies. Its Computational Knowledge Graph creates a digital knowledge layer showing relationships and interdependencies between concepts. This digital knowledge layer, combined with Maana's AI algorithms, helps subject-matter experts rapidly create models that power artificial intelligence applications. OG

WellAware
San Antonio, Texas

Industrial Internet of Things (IIoT) company provides a full-stack solution to simplify the collection, management, and analysis of oilfield assets. The modular solution complements existing environments. Build a complete picture of well conditions by integrating all critical production data, including wellhead, line pressures, separators, compressors, storage tanks, and chemical tanks, to make informed and timely strategic and operational decisions. OG


NETWORKING AND CONNECTIVITY

The oilfield of the future will include a mobile wireless network

Take a new approach to connectivity with wireless mesh

By Todd Rigby

Figure 1: A kinetic mesh network features node- and frequency-level redundancy to dynamically route around interference, signal blockages, or other potential challenges. All images courtesy: Rajant Corp.

The digital oilfield combines sensor technologies, cloud-based Big Data analytics, and other emergent technologies to reduce unplanned downtime, increase asset optimization, and improve operational efficiency, including in scenarios such as the following:

• Sensors on a rig detect abnormalities (such as temperature fluctuations) and send an alert.
• Engineers in an integrated operations center receive the alert and perform a diagnosis via interactive 3-D models.
• Surveillance drones investigate the rig and stream photos and videos in real time.
• Predictive data analytics determine maintenance needs based on drone data and send the parts order to the supply chain.
• Delivery drones bring the parts from the warehouse to the rig.
• Engineers receive maintenance orders on mobile devices and use virtual models on tablets and augmented reality data on smart glasses to perform maintenance.

To support technologies like drones and smart watches, however, oil & gas operators need a robust, reliable, and mobile network that ensures connectivity 24 hours a day, seven days a week. Many oilfield operators face connectivity challenges and are forced to watch productivity slow to a halt as cellular and traditional wireless networks struggle to keep up with emergent technologies.


Connectivity to cell towers, few and far between as they are in remote areas, can be further hindered by factors like distance, rugged terrain, or extreme weather conditions. Personnel and assets can be left stranded. What oil & gas industry participants need is a complete mobile network that can move as one with an oilfield.

Oilfield mobility

Oilfields don't always have enough existing communications infrastructure located within range. But what if network infrastructure could instead move directly to where it was needed, rapidly expanding coverage to that area across all assets and machines? This mobile infrastructure also could spread as far and wide as a site requires, flexibly augmenting or creating a network ad hoc to provide ubiquitous coverage across an oilfield, no matter how large it is. As more connected devices and machines join an oilfield's operations, new infrastructure would simply roll in to provide the increased support required, as well as work with the nodes already installed on the numerous moving and static field assets.

With a mobile communications infrastructure, the many moving assets that make up an oilfield, from equipment to vehicles to people, could take robust connectivity with them as they travel. The network would simply follow along, dodging line-of-sight issues caused by large equipment, and connecting hot zones to allow operators to maintain connectivity to, communications with, and control over all the "things" that empower productive operations.

Wireless mesh networks

Giving the network "wheels" means that even outer-edge communications would be reliable, providing direct connection to a control center.


To support this strategy, wireless mesh networks are ideal for an oilfield because of their mobility, flexibility of scale, and reliability. Oil & gas operators can kick-start their organization's journey to the digital oilfield by deploying a kinetic mesh network topology. This type of network allows multiple nodes to connect, broaden, and strengthen the network as necessary. Each node acts as a compact, rugged, transportable mini cell tower. Thus, anything in the organization's infrastructure can be turned into networking equipment.

Compared with a regular cellular network, which can make only limited connections, a kinetic mesh network can communicate peer-to-peer seamlessly, via numerous connections, forming an adaptable, dynamic network that provides reliable wide-range communications practically anywhere. Nodes integrate with existing infrastructure to rapidly extend coverage, communicating with and controlling roaming assets across a site.

Line-of-sight issues cease to be a problem. If terrain or moving assets interrupt a cell tower's line of sight, connectivity can be obstructed with no way around it. Kinetic mesh nodes are mobile, generating more lines of sight, and the mesh networking technology dynamically selects the fastest path from hundreds of potential options to route around interference, signal blockage, or other potential challenges. A kinetic mesh network features node- and frequency-level redundancy, with nodes making multiple simultaneous connections. No connections need be broken for new ones to be made, keeping critical oilfield data intact.

A case study

Today's oilfield operators manage remote wells across hundreds of square miles of rugged terrain, manually retrieving information from each wellhead and reporting back to the command center weekly. This process is long, tedious, and potentially unsafe. The data collected on each weekly trip is virtually redundant once it reaches the command center. To ensure real-time data, increase production, and decrease failures of semi-autonomous downhole pumps, one Texas oilfield deployed a kinetic mesh network.

The oilfield uses semi-autonomous downhole pumps. These pumps have data loggers on the wellheads, which workers use to set the speed at which the pump brings oil to the surface to be collected and processed. Setting a speed that is too fast risks the hole running dry and the pump burning up, which could cost hundreds of thousands of dollars in total repair and removal. To avoid these costs, the wells were operated conservatively. The pumps were visited only every week or so, so recorded pump speeds were outdated as soon as the workers left the field.

To address this challenge, the oil company deployed a kinetic mesh network that could connect the pumps and send all production data to a central office in real time, eliminating any need for technicians to go into the field to pull data from each individual well. Technicians in the central office are alerted immediately if there is a production drop on any well. By monitoring the field's production remotely and in real time, the operators maximize production while eliminating unnecessary downhole pump failures. The kinetic mesh network enables remote equipment operation and allows the company to run the pumps at the correct speed based on oil conditions at a given well, increasing profits and delivering return on investment within months.

The future of the oilfield

Kinetic mesh networks give oil & gas operators secure connectivity to access and act on increasing data volumes. Automation of processes and machinery, precision drilling, wellhead communications, automated drilling and pumping, drones for surveillance and inspection, and production control and reporting all benefit from a successful transition into the digital age. All can be supported by a kinetic mesh network.

Oil & gas field environments are tempestuous and unpredictable, even before throwing network and connectivity issues into the mix. Rapid developments in technology are disrupting organizations' current operating models and pushing for change, forcing companies to update their thinking when it comes to technology. OG

Todd Rigby is director of sales for Rajant Corp.
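The path selection described above, choosing the fastest route among many candidates, can be illustrated with a standard shortest-path computation over link costs. This is a generic sketch, not Rajant's proprietary routing; the node names and costs are invented.

```python
# Generic shortest-path sketch to illustrate mesh route selection.
# Not Rajant's routing protocol; nodes and link costs are invented.
import heapq

# Link costs (for example, inverse link quality); lower is better.
links = {
    "pump_A": {"pump_B": 2, "relay_1": 5},
    "pump_B": {"relay_1": 1, "pump_C": 4},
    "relay_1": {"control_center": 3},
    "pump_C": {"control_center": 6},
    "control_center": {},
}

def best_path(src: str, dst: str):
    """Dijkstra over link costs; returns (total_cost, path)."""
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in links[node].items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

print(best_path("pump_A", "control_center"))
# (6, ['pump_A', 'pump_B', 'relay_1', 'control_center'])
```

When a link degrades and its cost rises, rerunning the same computation yields a different route, which is the behavior the article attributes to kinetic mesh networks.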


TECHNOLOGY EVOLUTION

AC drives emerge as entry point for industrial digitalization

While bearings wear out frequently, pump applications are forever

By Steve Meyer and Kevin Parker

In ancient Greece and Rome, the pump was an essential mechanical technology, just as it is today. Yet even the very oldest technologies are being impacted by the emergent Industrial Internet of Things (IIoT) and the digitalization trends IIoT supports. Pumps, as part of IIoT networks, and the motors and drives that move those pumps, are being equipped with more sensors, data-acquisition capabilities, and computational resources. But what is the best way for this to happen?

It's been noted that the drive is the first point at which digital technology can be applied to a mechanical system, such as a pump, in a production environment. The drive can act as a kind of edge server or gateway that orchestrates the data that enables predictive maintenance or process optimization applications. A variable frequency drive (VFD) controls the rotational speed of an alternating-current electric motor, and thereby the flow and pressure of a pump, eliminating the need for a throttling valve.
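For a centrifugal pump, the speed-to-output relationship behind that claim is captured by the pump affinity laws: flow scales with speed, head with the square of speed, and power with the cube of speed. A quick sketch, in which the baseline pump ratings are assumed example values:

```python
# Pump affinity laws for a centrifugal pump: flow ~ N, head ~ N^2, power ~ N^3.
# Baseline ratings below are assumed example values.
BASE_SPEED_RPM = 1800.0
BASE_FLOW_GPM = 1000.0
BASE_HEAD_FT = 200.0
BASE_POWER_HP = 75.0

def at_speed(rpm: float):
    """Scale flow, head, and power from the baseline operating point."""
    r = rpm / BASE_SPEED_RPM
    return BASE_FLOW_GPM * r, BASE_HEAD_FT * r**2, BASE_POWER_HP * r**3

flow, head, power = at_speed(1440.0)  # VFD turned down to 80% speed
print(f"{flow:.0f} gpm, {head:.0f} ft, {power:.1f} hp")
# 80% speed -> 80% flow, 64% head, 51% power: why slowing a pump with a
# VFD saves energy compared with throttling a valve at full speed.
```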

Managing assets

Bearings are the number-one cause of motor failure, and a motor bearing failure can bring just about any production process to a screeching halt. Maintenance-wise, for those in the know, just the sound of a motor can indicate an impending main bearing failure. Motors can be equipped with vibration monitoring to detect impending bearing failure, along with the means to shut down a failing motor. However, because momentary vibration increases are not necessarily indicative of imminent failure, reading only a single sensor value makes false positives possible. Filtering can mitigate these types of challenges, but applying machine learning techniques goes further, ensuring not only that transients are screened out but that any significant trends detected are escalated for attention as soon as possible.
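One way to realize the filtering-plus-trending idea is to smooth the raw vibration signal so momentary spikes are discarded, then escalate only on a sustained upward trend. This is a minimal sketch, not a production condition-monitoring algorithm; the thresholds and signal values are invented.

```python
def smooth(values, alpha=0.1):
    """Exponentially weighted moving average: damps momentary spikes."""
    out, ewma = [], values[0]
    for v in values:
        ewma = alpha * v + (1 - alpha) * ewma
        out.append(ewma)
    return out

def escalate(vibration_mm_s, limit=4.5, slope_limit=0.02):
    s = smooth(vibration_mm_s)
    slope = (s[-1] - s[0]) / len(s)          # crude trend over the window
    if s[-1] > limit or slope > slope_limit:
        return "escalate: sustained vibration trend"
    return "ok"

window = [2.0] * 50 + [9.0] + [2.0] * 49     # a single transient spike
print(escalate(window))                      # ok -- the spike is filtered out
rising = [2.0 + 0.05 * i for i in range(100)]  # steady upward drift
print(escalate(rising))                      # escalate
```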

In other words, while management of pump and motor assets has been well understood for years, computing power helps decide when to take action, based less on operators' intuition and more on quantifiable facts. Other factors can be included as well, such as how long the motor has been running, temperature (a very important indicator), and load, which may vary with the pumped fluid's viscosity. The influence of these factors can be calculated so that alarms are not strictly state-conditioned. The cost of the additional computing power is tiny compared with the cost of motor failures in an industrial setting.

Digital duplicates

Digital twin use is another way to move pump operations beyond strictly state-conditioned monitoring of sensor values. With the equipment design documented as a 3-D solid model, empirical operating data can be associated with it, enabling deeper insights into equipment operation. Technologies for building digital models and populating them with data are offered by several software suppliers.

Implementing IIoT can be data intensive. In practice, measuring parameters that include temperature, pressure, flow, vibration, and power can generate 2.5 MB of raw data per second. Bandwidth for sending this data to the cloud can be expensive. Processing the data at the edge helps by compressing it to a small set of features that can be easily sent to a local server or the cloud.

In today's world, controllers detect when a parameter varies from its expected set point value. However, as mentioned, this can lead to transient alarms and unnecessary downtime. A more efficient method uses machine learning capabilities associated with the digital twin to recognize data patterns across parameters, determine whether anomalies are problematic and, if so, predict time to failure.
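The edge-processing step described above can be as simple as collapsing each window of high-rate samples into a handful of summary features before transmission. A sketch, with an illustrative sampling rate and feature set:

```python
import math
import random
import statistics

def features(window):
    """Collapse one window of raw samples into a small feature vector."""
    rms = math.sqrt(sum(x * x for x in window) / len(window))
    return {
        "mean": statistics.fmean(window),
        "rms": rms,
        "peak": max(abs(x) for x in window),
        "stdev": statistics.pstdev(window),
    }

# One second of 10 kHz vibration samples -> four floats instead of 10,000.
window = [math.sin(i / 8) + random.gauss(0, 0.05) for i in range(10_000)]
print(features(window))   # this summary is what goes to the server or cloud
```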


In a refinery with 150 pumps, installing servers or gateways at each pump can be expensive. Alternatively, next-generation drives can send pump data to an IIoT system, such as PTC ThingWorx. This reduces implementation cost and provides data to optimize fluid flow, improve overall yield, and reduce downtime.

Residual challenges

Industry has a good understanding of digitalization's inherent possibilities, but consensus is lacking about how to proceed to implementation. Those well versed in computing want to be involved in Industrie 4.0 but don't necessarily have the required industrial domain expertise. In plain English, IT departments aren't always well suited to dealing with remote operations in adverse environments, nor do they necessarily know how to implement instrumentation for data acquisition. As a result, in some cases IT providers seem to recommend users build a kind of redundant, secondary network devoted to acquiring the data needed for predictive maintenance and process optimization. This is an expensive, high-risk approach that doesn't mark out a clear path from basic control to full-throated optimization.

On the other hand, drives equipped with extra computing power can be a major part of a "clean" solution, especially for oil & gas pipelines, in process industries, or wherever motors account for a big part of a plant's energy consumption. While some original equipment manufacturers balk at the cost of adding multiple sensors to their products, bringing motor-pump combinations into the IIoT world doesn't mean applying a lot of extra sensors. The drive already has capabilities for analog measurement of pressure and flow, as well as 3-phase current transducers, with vibration inputs to be added in the near future. Many motor and load behaviors can be understood from the current waveforms and heat profiles recorded in the drive. Protocols such as ATEX require six thermocouples to be installed in the motor stator, providing a good window into a motor's heat profile. The petrochemical industry has used these protocols for years to deal with applying electric motors in explosive atmospheres.
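For the stator heat profile described above, a drive or edge device might scan the six thermocouples and act on both an absolute limit and channel-to-channel imbalance. The trip and imbalance thresholds here are illustrative assumptions, not values taken from the ATEX directive:

```python
def check_stator(temps_c, trip_c=155.0, imbalance_c=15.0):
    """Evaluate six stator thermocouple readings (e.g., two per phase)."""
    hottest = max(temps_c)
    spread = hottest - min(temps_c)
    if hottest >= trip_c:
        return "trip: stator over temperature"
    if spread >= imbalance_c:
        return "warn: hot spot or cooling problem on one phase"
    return "ok"

print(check_stator([92, 95, 93, 96, 94, 91]))    # ok
print(check_stator([92, 95, 93, 128, 94, 91]))   # warn -- one channel running hot
```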

Sensor and bearing manufacturers, recognizing the need for more complete information from the bearing, are developing a broad array of solutions. More bearing data is available than ever before, from MEMS, piezoelectric, and accelerometer-based vibration sensors installed externally, to newer integrated designs. Bearing sensors require higher-resolution analog-to-digital conversion, driving demand for more instrumentation and control resources. Integrating the needed interface directly into the drive is the most cost-effective and efficient way to build the bridge toward IIoT.

Operators tend to see the motor and drive as a black box: you put electricity in and get mechanical work out. At that level, no one really wants to deal with the complexity of what's inside the box. Drive manufacturers like Danfoss, on the other hand, have a deep understanding of the electrical and magnetic interactions that occur between a motor and drive. Paying a little extra attention to the insights that additional drive data provides can significantly reduce the cost of operations. In the end, that is the value customers are looking for as we move into the IIoT-connected world. OG

Diagram 1: A variable frequency drive controls the rotational speed of an alternating current electric motor and thereby the flow and pressure of a pump, eliminating the need for a throttling valve. Image courtesy: Danfoss

Steve Meyer is a regional sales manager at Danfoss, the maker of AC drives. Kevin Parker is a senior contributing editor with CFE Media.


BEYOND SUPERVISORY CONTROL

Predictive analytics in the upstream introduced as a service

Solution identifies production and equipment problems before they become apparent

By Kevin Parker

In the fourth quarter of 2018, eLynx Technologies announced the commercial launch of a suite of predictive maintenance software products and services the company says will revolutionize the upstream oil & gas industry. Predictive analytics as a service (PAaaS) will be the means to predict equipment breakdowns and downhole events such as liquid loading. A plunger lift predictive maintenance product introduced in September was followed in November by a solution for electrical submersible pumps.

As an early provider of SCADA as a service, eLynx compiled a vast store of fully normalized well operating data, ready for exploitation by the analytics and machine learning applications taking hold today in oil & gas and other industries. "We are the fortuitous beneficiary of almost 20 years of industry knowledge gained from monitoring over 40,000 wells for hundreds of customers in all major U.S. drilling basins. We now are leveraging that knowledge," said eLynx founder and CEO Steve Jackson. Over that same period, the industry's relentless march of mergers and acquisitions prevented the same rigor from being applied to other oil & gas assets, as personnel and technologies frequently changed over.

To take advantage of the compiled data, eLynx has over the last several years invested heavily in people with expertise in the data sciences. "It takes powerful minds to turn data into valuable products, and we have built one of the most potent data analytics teams in the United States," Jackson said. PAaaS promises to save companies capital, boost reserves, reduce waste, enhance safety, and protect the environment. "We will increase users' revenue valuations based on increased average flow," Jackson said.

Plunger lift and ESPs

In initial trials, the plunger lift predictive maintenance product saved one producer more than $700 per month per well. For a 500-well field, that translates into $4.2 million in annual savings. This early experiment represents just a sliver of what eLynx said it will deliver to its customers through the combination of predictive analytics as a service and SCADA.
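The field-level figure follows directly from the per-well trial number; a quick check of the arithmetic:

```python
per_well_month = 700        # dollars saved per well per month (trial figure)
wells, months = 500, 12
print(f"${per_well_month * wells * months:,} per year")   # $4,200,000 per year
```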

Although found in both oil and gas wells, today the plunger lift is typically used to deliquefy natural gas wells, removing contaminants that include water, sand, oil, and wax. As gas wells mature, bottomhole pressures decrease and production velocities become less than ideal for carrying produced liquids (water, oil, and condensates) to the surface. Over time, liquid accumulates in the production tubing downhole, creating a condition known as liquid loading. Plunger lift, as an artificial lift method, is a way to remove liquids from aging gas wells. Plunger lift controls regulate the cycling of a motor valve in response to plunger arrival at the wellhead, line pressures, liquid levels, or pressure differentials.

The electrical submersible pump (ESP) is an efficient and reliable artificial-lift method for lifting moderate to high volumes of fluid from wellbores. To ensure optimal ESP performance, operators install downhole sensors that continuously acquire real-time system measurements such as pump intake and discharge pressures, temperatures, vibration, and current leakage rate. Typically, users monitor pumps through SCADA systems, which act as central repositories for data from all downhole sensors. Anomalous readings lead to changed pump settings.
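A minimal sketch of the SCADA-side screening described above: compare each downhole measurement against its expected operating band and flag out-of-band tags for review. The bands and tag names are hypothetical and are not eLynx's model, which applies machine learning rather than fixed limits:

```python
# Hypothetical operating bands for one ESP; real bands come from well history.
BANDS = {
    "intake_psi": (900, 1400),
    "discharge_psi": (2800, 3600),
    "motor_temp_f": (150, 250),
    "vibration_g": (0.0, 0.5),
}

def anomalies(reading):
    """Return the tags whose latest values fall outside their bands."""
    return [tag for tag, (lo, hi) in BANDS.items()
            if not lo <= reading.get(tag, lo) <= hi]

scan = {"intake_psi": 860, "discharge_psi": 3100,
        "motor_temp_f": 205, "vibration_g": 0.2}
print(anomalies(scan))   # ['intake_psi'] -- falling intake pressure; review well
```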
Ongoing innovation

As might be expected from a machine learning application, the predictive analytics service initially will anticipate operating challenges based on anomalous operating conditions. Over time, the application will more precisely identify the bad actors responsible for those conditions. The eventual goal is prescriptive analytics that recommend actions to take in response to identified challenges. Plunger lift was tackled first because of its cyclical nature, with solutions for the more complex challenges of rod pump and gas lift scheduled for release in 2019.

"Almost 20 years ago, eLynx introduced SCADA as a service to the industry," said Samantha McPheter, eLynx chief product officer. "With these new technologies, we move to predictive analytics, a pivot that vaults data from something merely important to essential for companies seeking to be competitive in this new environment. Companies that are slow to adapt to this new reality will be consumed by competitors that are quick to embrace this data revolution." OG





