Foundations, Volume 6 Issue 1


Foundations Journal of the Professional Petroleum Data Management Association

Print: ISSN 2368-7533 - Online: ISSN 2368-7541

Volume 6 | Issue 1

1st Place Foundations Photo Contest Winner: “Mammoth Hot Springs” – Gordon Cope

Transforming Daily Drilling Reports Into a Higher Value Resource (Page 4)

PLUS PHOTO CONTEST: This issue’s winners and how to enter (Page 16)


Analyze, Evaluate and Act with Confidence.

EnergyIQ solutions leverage our proprietary Trusted Data Manager (TDM) technology to increase the time knowledge workers spend analyzing and optimizing instead of manipulating, cleaning and verifying data.

IQlifecycle

Well Master Solution

IQgeoscience Technical Data Solution

IQinsights

Data Analytics & Visualization Solution

Establish a single source of truth for well data blended from multiple proprietary, commercial, and public data sources. Provides the platform to automate and streamline critical workflows, reporting, and analytics. Provide immediate access to quality validated subsurface data: well logs, directional surveys, production, completion, treatment, formation, core, location, and much more for better interpretation - with any data provider. Quickly discover, visualize, and report. Stop wasting time finding and fixing data, and deliver more value, more quickly, using the most trusted data available from across your enterprise.

Creating information certainty in an uncertain E&P world.

www.EnergyIQ.info | 303.790.0919 | info@energyiq.info


Foundations: The Journal of the Professional Petroleum Data Management Association.

Table of Contents Volume 6 | Issue 1

COVER FEATURE

Transforming Daily Drilling Reports (TOUR Reports) Into a Higher Value Resource By Oskar Klavins (Page 4)

GUEST EDITORIALS

Convergence of OT and IT: Another Data Object, Another Data Source By Jim Crompton (Page 9)

Buzzwords and Technology Basics: Web 4.0 By Guy Holmes (Page 20)

FEATURES

The Big Switch to 3D By Phil Ward (Page 6)

Data Science as a Strategic Tool in the Oil and Gas Industry By Charity Queret (Page 11)

What’s in a Name? What Is A Completion? By David Fisher & Ali Sangster (Page 13)

Geophysical Data Compliancy – Utilizing Technology, Part Two By Sue Carr & Trish Mulder (Page 18)

Why Move to the Cloud? Three Key Benefits By Uwa Airhiavbere (Page 22)

The Corporate Data Ecosystem By Jamie Cruise (Page 23)

Building a New Subsurface Data Foundation: Digitally Transforming Corporate Data Management By Jamie Cruise (Page 25)

Hands On with the PPDM Association Board of Directors

DEPARTMENTS

Photo Contest (Page 16): This issue’s winners and how YOU can get your photo on the cover of Foundations.

Thanks to Our Volunteers (Page 29): Featuring Four Great Volunteers

Upcoming Events, Training, and Certification (Page 31): Join PPDM at events and conferences around the world in 2019. Learn about upcoming CPDA Examination dates and training opportunities.

Chief Executive Officer: Trudy Curtis
Senior Operations Coordinator: Amanda Phillips
Senior Community Development Coordinator: Elise Sommer
Article Contributors/Authors: Uwa Airhiavbere, Sue Carr, Jim Crompton, Jamie Cruise, Dave Fisher, Guy Holmes, Oskar Klavins, Trish Mulder, Charity Queret, Ali Sangster, Gary Silberg, Phil Ward
Editorial Assistance: Beci Carrington, Jim Crompton, Dave Fisher, Uwa Airhiavbere
Graphics & Illustrations: Jasleen Virdi, Freepik
Graphic Design

BOARD OF DIRECTORS
Chair: Allan Huber
Vice Chair: Paloma Urbano
Secretary: Kevin Brunel
Treasurer: Peter MacDougall
Directors: Allan Huber, Ali Sangster, David Hood, Emile Coetzer, Jamie Cruise, Jeremy Eade, Lesley Evans, Paloma Urbano, Peter MacDougall, Robert Best, Trevor Hicks

HEAD OFFICE
Suite 860, 736 8th Ave SW, Calgary, AB T2P 1H4, Canada
Email: info@ppdm.org | Phone: 1-403-660-7817

ABOUT PPDM The Professional Petroleum Data Management Association (PPDM) is the not-for-profit, global society that enables the development of professional data managers, engages them in community, and endorses a collective body of knowledge for data management across the Oil and Gas industry. Publish Date: April 2019



Cover Feature

Transforming Daily Drilling Reports (TOUR Reports) into a Higher Value Resource By Oskar Klavins, geoLOGIC systems ltd.


In today’s media, there has been great emphasis on the need to maximize the value of raw resources through constant upgrades. This goes for Oil and Gas as well as for the ‘New Oil’ - Data. Daily Drilling Reports (TOUR Reports) are considered a raw resource that requires transformation into a higher value data product. However, TOUR Reports are usually highly complicated documents that can range from one or two pages to hundreds of pages in size, so getting data from these reports into the Professional Petroleum Data Management (PPDM) Association’s 3.9 Data Model (PPDM 3.9) can be a challenge! Formatting data so it can easily be used by an analytical or business intelligence (BI) tool, such as Power BI, has proven to be the biggest challenge. In addition, you need a sufficient number of transformed reports to achieve the economy of scale required to justify the costs associated with the upgrade. Finally, you need to establish the final form of your enhanced data product that will be consumed by the BI tools. The TOUR Report has been used for a long time. Its formatting rights are owned by the International Association of Drilling Contractors (IADC). The IADC has licensed


the format to the Canadian Association of Drilling Contractors (CAODC) and to other companies. TOUR Reports come in two formats: two shifts per day or three shifts per day. The report contains very important information regarding the rig and the daily drilling activities associated with it. Originally, the information was all entered by hand in triplicate. Today, a lot of the information is gathered automatically by drilling tools and software. However, some data still requires human entry. The following image (Figure 1) shows some of the major data sections that can be found on a TOUR Report. The sections shown would be repeated for each shift and day that the rig was on the job. Many factors make it difficult for this resource upgrade to support a PPDM 3.9 implementation. These include:
• Vintage of the report (the year the well was drilled).
• Multiple formats of a TOUR Report.
• Multiple media used to deliver the report.
• Data entry issues.
• Missing information from the TOUR Report.
• A Unique Well ID (UWI) change since the well was drilled.
What is the vintage of the report? This depends on the year that a well


was drilled, the data requirements of the operating company, the drilling company, and the regulator. TOUR Reports can have various levels of detail. In addition, the CAODC has issued multiple versions of the Electronic Tour Sheet Standard so far in this century. Another challenge is the different media used to deliver the report. The report can come as a paper copy, which needs to be scanned and quality controlled, or as a PDF file of a previously scanned, printed paper document. The quality of the PDF may require that the report be processed with Optical Character Recognition (OCR) before it can be used, and this doesn’t guarantee that manual data entry will not be required. Finally, electronic versions of the TOUR Report have been used since about 2000. These can be uploaded into tools for processing electronically; however, they still need to be quality controlled. The raw data in electronic versions of the TOUR Report is captured in two main ways: (a) by the drilling tools and associated recorders, and (b) by the drilling crew. The majority of data capture issues are associated with manual human entry. These issues can be classified into three categories:
• Insufficient information.
• Data entry errors by the drilling crew.



• Inconsistent abbreviations and data entry shortcuts that vary by drilling company and drilling crew.
A prime example of insufficient information would be leaving the UWI field empty or just including a placeholder such as “100//00.” In this case, either the drilling crew was told not to provide that information or the information was not provided to them. The absence of the correct UWI makes it difficult to match the TOUR Report to the correct well and historical data. This is a major issue for referential integrity in a PPDM 3.9 implementation. Data entry errors can be further subclassified into the following:
• Entering the correct data into the wrong field. An example would be entering the bit type into the bit serial number field and vice versa.
• Entering the correct data, but not adjusting for the unit of measure specified for that field.
• Entering the correct data, but including the unit of measure with the value when the unit of measure is defaulted and not to be entered.
• Entering bad data into a field. An example would be entering “30978” meters for total drilled when the value should have been “3978” meters.
The use of abbreviations and shortcuts is a very common practice of drilling crews. Since the applications that crews use to enter the data do not normally enforce consistency, you can have different ways to specify the same thing. If you take a mud additive product, such as “ALCOMER 110 RD,” you can find it specified in many different ways: lowercase, uppercase, mixed case, with dashes, or without spaces. It can even differ within the same report. For example, some existing variations are:
• ALCOMER-110
• ALCOMER110RD
• alcomer 110
• ALCOMER 110 RD
• Alcomer 110 Rd
• ALCOMER 11ORD
• ALCOMER RD 110
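To make the clean-up concrete, here is a minimal Python sketch of the kind of normalization rule a loader could apply to these additive names before matching them against a reference list. The canonical list, function name, and subset-matching strategy are illustrative assumptions for this article, not part of any vendor's actual workflow.

import re

# Hypothetical canonical list; in practice this would be a managed reference
# table of mud-additive products, not a hard-coded constant.
CANONICAL_PRODUCTS = ["ALCOMER 110 RD"]

def normalize_product_name(raw: str) -> str:
    """Collapse the case, punctuation, spacing, and O-for-0 variations seen in
    hand-entered additive names, then match on token content."""
    text = re.sub(r"[-_]+", " ", raw.strip().upper())
    text = re.sub(r"(?<=\d)O|O(?=\d)", "0", text)                    # letter O keyed for zero
    text = re.sub(r"(?<=[A-Z])(?=\d)|(?<=\d)(?=[A-Z])", " ", text)   # split "ALCOMER110RD"
    tokens = text.split()
    for canonical in CANONICAL_PRODUCTS:
        # Order-insensitive subset match so "ALCOMER-110" still resolves; a
        # production loader would add stricter rules and a review queue.
        if tokens and set(tokens) <= set(canonical.split()):
            return canonical
    return " ".join(tokens)   # unmatched values are flagged for manual review

for v in ["ALCOMER-110", "ALCOMER110RD", "alcomer 110", "ALCOMER 110 RD",
          "Alcomer 110 Rd", "ALCOMER 11ORD", "ALCOMER RD 110"]:
    print(f"{v!r:>17} -> {normalize_product_name(v)}")

All seven variants listed above resolve to the single canonical spelling, which is what allows them to be loaded consistently into one reference column.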

Figure 1: A Tour Report

Figure 2: BI Data Visual

Missing information is another key issue that is inherent in the structure of the TOUR Report, just as with other data sets. One of the key pieces of missing information is that the meters drilled, as reported on the TOUR Report, cannot be directly associated to a bit when more than one bit was used in the reported time period. This has a direct impact on using the PPDM 3.9 data model, which assumes that you have all the information needed to populate all the tables correctly. This data modelling issue can be overcome by using table extensions that follow PPDM Architectural Principles. Finally, when you have all TOUR Reports upgraded, transformed, and loaded into a PPDM 3.9 database, you still need to make sure that each report is associated with the correct UWI, because UWI changes can still happen after you have drilled a well. Because BI data visuals are all the rage, here is a little snippet that was

generated using the Microsoft Power BI tool and the enhanced PPDM 3.9 data model (Figure 2). This dashboard was generated as an example of the types of analysis possible with the data. It provides the ability to view different bit manufacturers, bit types, and IADC bit codes for hardness, formation, bearing, and special features, along with the distance drilled and the reason for pulling the bit. At geoLOGIC systems ltd., we have achieved this data transformation into an enhanced PPDM 3.9 database that enables businesses to make informed decisions using either analytical or business intelligence tools. About the Author Oskar Klavins is a Senior Technical Advisor with geoLOGIC systems ltd.



Feature

The Big Switch to 3D By Phil Ward, PetroCAD 3D


No doubt you will have seen 3D models and simulation creeping into most industries around the world, and the Oil and Gas (O&G) industry is no exception. Highly volatile oil prices have fundamentally altered how the industry operates. Efficiency is now critical to ongoing business. As a result, not only do exploration and production (E&P) engineers need to develop more complex solutions, faster and more cost-effectively, they must also deliver enhancements in quality and innovation whilst reducing risk. E&P engineers are challenged to accelerate oil production and shorten the time it takes to develop proposals. They must do this while simultaneously ensuring the completeness of their designs. Fortunately, there is a way to achieve all these goals. The O&G industry has in many ways been at the forefront of technology. But, conversely, it is also risk averse. Decisions carry high consequences, which brings a reluctance to implement anything new and untested that might add unknowns to the production and safety status quo. Many industries have successfully adopted 3D computer-aided design (CAD) as the standard method for design and optimisation. Aerospace, automotive, and construction have adopted 3D CAD modelling and simulation because it promotes innovation and accurate communication. It simplifies the design process whilst reducing cost, time-to-market, and risk.

The O&G industry is eager to adopt 3D CAD and simulation technologies to replicate the associated benefits. It has been extensively used for the operators’ primary asset - the reservoir. The reservoir is modelled and simulated in 3D; however, the connectivity between subsurface and surface (‘the plumbing’) and topside assets are not. The design, construction, and operation of the physical well completion are currently subjected to unforeseen risks that could be easily avoided if 3D CAD technologies were utilised. Utilising 3D CAD software, E&P engineers can address challenges at every stage of the development process: from conceptual design through validation, production and implementation. With access to common 3D wellbore models, collaboration between operators and their contractors can improve, leading to a better understanding of system constraints and objectives. Modern 3D CAD software will provide integrated design tools and workflows that E&P engineers need to accelerate time-to-market, control development costs, improve product quality, innovate, and compete as a business. This helps to bridge the gap between innovation and implementation. In today’s world, the O&G industry uses representative 2D schematic design tools to plan and communicate multimillion dollar projects. These tools no longer represent a viable option, given their limitations utilising design data to


Figure 1: Real-world mathematical models are the foundation for future innovation.

address multiple functional requirements. In contrast, by using a 3D CAD design tool (Figure 1), engineers can rapidly create a real-world 3D mathematical model that has been validated and can be used throughout the well life cycle to capture all engineering data at every stage. Furthermore, a 3D model is the optimal foundation for advanced simulations, digital twin optimisation, virtual inspections, improved team communication, rapid prototyping, and product marketing.

THE DIGITAL TWIN In the near future, all assets and systems of value will have a digital twin; it is just the way it will be. The digital twin is a virtual replica of the real-world physical asset and it enables engineers to rapidly prototype scenarios to determine their impact on factors such as construction cost, future maintenance, and risk. Only after the optimal design has been identified are plans drawn up to adjust the physical



Figure 2: Without rapid prototyping utilising 3D CAD models and multi-physics simulation, billions of dollars in savings would be lost.

asset. The digital twin accompanies the physical asset throughout the entire life cycle, therefore ensuring consistent high efficiencies are achieved. In the context of the O&G industry, the same 3D CAD models created by E&P engineers are the foundation for any digital twin solution. As O&G wells become more complex, the digital twin will be invaluable because it is capable of detecting irregularities quickly and can determine when maintenance is necessary to avoid expensive and unexpected repairs. Providing cost reduction, reliability and reduced risk is massively important within the O&G well engineering domain where mistakes can last 20 years and are oftentimes nearly impossible to resolve.

INFORMATION MODELLING The O&G industry faces a significant challenge in the management, exchange, and integration of engineering data. Information pertaining to the planning, design, construction, commissioning, and ongoing operations of a well is typically scattered across multiple database systems and unstructured documents (e.g., Excel, Word). Identifying relevant data to reconstruct an accurate representation of the current well state is frequently challenging and a source of error. The solution to this challenge is to adopt information modelling, whereby all engineering data (e.g., equipment dimensions, specifications, constraints, suppliers, lead times) is stored within the 3D CAD model itself. Populating a 3D information model can be achieved

through importing data from disparate systems using industry-accepted data formats, such as PPDM, WITSML, PRODML, EDM, and LAS. The result is a unified object-oriented 3D model of the well, which enables users and systems to interrogate any object (e.g., part, assembly) and have instant access to contextual information required to manage it. This approach to storing data is scalable and highly intuitive, making it quick and easy for users to find information and act upon it. A 3D information model reduces the need for rework and duplication of 2D schematics that are used to present different well requirements for E&P disciplines to review. The model contains a lot more information than well schematics and associated Excel spreadsheets as currently used by the industry. A 3D information model enables each discipline to inspect, annotate, and embed their intelligence into the well design. The added benefit of using a fully integrated model is the ability to create a highly accurate bill of materials (BOM) that specifies exact equipment costs for the project.
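As a rough illustration of the information-modelling idea, the sketch below represents part of a well as a tree of objects that carry their own engineering attributes and can be interrogated or rolled up into a bill of materials. The component names, attributes, and costs are invented for the example and do not come from any particular CAD system or data standard.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Component:
    """One object in the well information model: a part or assembly carrying
    its own engineering attributes plus any child components."""
    name: str
    attributes: Dict[str, object] = field(default_factory=dict)  # dimensions, supplier, lead time, ...
    unit_cost: float = 0.0
    quantity: int = 1
    children: List["Component"] = field(default_factory=list)

    def find(self, name: str):
        """Interrogate the model: return the first object with a matching name."""
        if self.name == name:
            return self
        for child in self.children:
            hit = child.find(name)
            if hit:
                return hit
        return None

    def bill_of_materials(self) -> Dict[str, float]:
        """Roll the whole assembly up into a cost-by-part summary."""
        bom: Dict[str, float] = {self.name: self.unit_cost * self.quantity}
        for child in self.children:
            for part, cost in child.bill_of_materials().items():
                bom[part] = bom.get(part, 0.0) + cost
        return bom

# Hypothetical lower-completion fragment, for illustration only.
well = Component("Lower completion", children=[
    Component("7in production liner", {"od_in": 7.0, "supplier": "Vendor A", "lead_time_wk": 12},
              unit_cost=95_000),
    Component("Sand screen joint", {"od_in": 5.5, "supplier": "Vendor B", "lead_time_wk": 8},
              unit_cost=18_000, quantity=24),
])

print(well.find("Sand screen joint").attributes["lead_time_wk"])  # contextual data on demand
print(sum(well.bill_of_materials().values()))                     # total estimated cost

Because every discipline reads and writes against the same tree, the cost roll-up and the engineering attributes stay in step with the geometry instead of living in separate spreadsheets.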

IS THE O&G INDUSTRY READY FOR THE BIG SWITCH? Many industries have widely adopted 3D CAD modelling as the standard method for design, optimisation, and training. It is difficult to imagine training airline pilots without a flight simulator. The inherent cost and risks involved and the limits of the training,

Figure 3: The potential benefit to the O&G industry if BIM methodologies are adopted.

in terms of exposing pilots to all the scenarios that they may need to deal with, make simulators necessary. Innovation is also critical; aircraft fuel economy was boosted by six per cent with a winglet that was designed and validated using Finite Element Analysis (FEA). This would not have been possible without real-world 3D mathematical models and simulations (Figure 2). The construction industry has also achieved significant improvements. It has built a cradle-to-grave process around 3D CAD technologies called BIM (Building Information Modelling). Through a connected design and information model they have achieved significant business benefits (Figure 3). The aerospace and construction industries have proven the value of 3D CAD modelling coupled with digital twin simulation. Hence, the O&G industry is ready to adopt this technology to ensure wells are fit for purpose. 3D CAD, information modelling, and digital twin simulation will transform the O&G industry, increasing oil production through improvements in upper and lower completion design. Using this approach, the aerospace industry has increased aircraft fuel economy by six per cent. Imagine just a three per cent increase in production. For example, BP’s Mad Dog Phase Two project in the Gulf of Mexico would add over 1.5 million barrels of oil per year, contributing an additional $92 million per year to BP’s bottom line at $60 per barrel.
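A quick back-of-the-envelope check of that figure. The baseline capacity used below is an assumption for illustration; only the three per cent uplift, the roughly 1.5 million barrels per year, and the $60 per barrel price come from the text above.

# Back-of-the-envelope check of the production-uplift figure quoted above.
baseline_bopd = 140_000      # assumed facility capacity, barrels of oil per day
uplift = 0.03                # three per cent production increase
oil_price = 60.0             # dollars per barrel

extra_bbl_per_year = baseline_bopd * uplift * 365   # about 1.53 million barrels
extra_revenue = extra_bbl_per_year * oil_price      # about $92 million per year
print(f"{extra_bbl_per_year/1e6:.2f} MMbbl/yr -> ${extra_revenue/1e6:.0f} M/yr")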



CONCLUSION All companies face competitive pressures. They must continually develop more advanced products, equipment, and systems — faster, better, and more cost-effectively. Utilising a complete 3D well engineering environment enables O&G companies to enhance their competitiveness — improving time-to-market, controlling development costs, and designing better products. Companies can also safely innovate, test, and realize improved efficiencies and productivity gains at every stage of the development process. The same model is used for concept development, design automation, digital twin simulation, communication, prototype development, and product manufacturing. At the time of this writing, the drilling and exploration (D&E) sector was beginning to adopt 3D well engineering to optimise the drilling process and improve well integrity. There is still great scope for applying this technology to the well completion to maximise the efficiency of the entire system during the production phase. Getting a truly global view of a well construction project using existing toolsets is challenging at best – with hundreds of unconnected engineering documents it can take years for design teams to see the forest for the trees. However, a 3D CAD enabled information model pulls all of a project’s engineering knowledge into a single view. Teams that utilise such models can collaborate and communicate much more effectively. Oil and Gas companies face competitive time pressures and require more collaborative development relationships, data reuse, modular design, systems prefabrication, and standardisation. A company’s ability to respond quickly and effectively is highly dependent on how well it can organise, manage, and communicate its internal design data and expertise. 3D CAD solutions are designed specifically to fulfil these needs.

3D CAD technologies enable the design, construct, complete, and operate phases of a well to be completed in a virtual environment first. Then, only when the team is happy with the virtual performance, will they continue to spud the well. This, of course, will not always be possible due to other factors, but it should be the primary objective. Operators conducting Drill Well On Paper (DWOP) and Complete Well On Paper (CWOP) exercises will undoubtedly discover that they could be doing a much better job using 3D CAD and simulation software. The technologies are proven and the benefits are clear; it’s time for the O&G industry to adopt

this new way of thinking. About the Author Phil Ward is the CEO of PetroCAD 3D Advanced Well Engineering Solutions, focused on the application of digital transformation technologies within the O&G industry.

I AM THE VERY MODEL OF A MODERN DATA MANAGER
by Gary Silberg
To the tune of “I Am the Very Model of a Modern Major-General” by Gilbert and Sullivan, from “The Pirates of Penzance.”
Lyrics by Gary Silberg for the 2018 PPDM Calgary Data Management Symposium.

I am the very model of a modern data manager
I’ve studied ev’ry diagram professional and amateur
I know the definition of a well bore horizontally
And ev’ry geologic age both backwardly and frontally
I can recite the statuses of wells and their facilities
I’m known for my “espeshly” good third normal form abilities
I understand where Oil and Gas are found when subterranean
From periods named Permian right through the Pennsylvanian
I’m often told I have some definitional proclivities
For seismic data, fields and pools, and drilling rig activities
I’ve catalogued equipment types, both annular and angular
To prove I am the model of a modern data manager
I can explain completion data with great specificity
And producer operator working interest multiplicity
I’ve analyzed the long term flows of Oil and Gas reservedly
And won awards for normalizing data stores deservedly
I can secure your data lakes by retina or facially
Display your subject facets both as data points or spatially
If anything I’ve said should leave you wondering or quizzical
I’m sure it can be verified with data geophysical
I understand that data sets are growing quite prodigiously
I follow Trudy Curtis and her writings quite religiously
I’ve studied ornithology and seen a scarlet tanager
And now you’ve met the model of a modern data manager



Guest Editorial

Convergence of OT and IT: Another Data Object, Another Data Source By Jim Crompton, Reflections Data Consulting


The Digital Oilfield is meeting up with the Industrial Internet of Things technology wave, creating new demands on asset management and prediction models. The connectivity explosion gives us the capability to link almost any sensor (including those on your smartphone and tablet) with anyone interested in the data, anywhere they might want to work (through cloud computing). This is helping to accelerate the convergence of operational technology (OT), normally referred to as process control (SCADA and DCS systems), with corporate information technology (IT) and engineering workflows. Solutions such as virtual flow meters and predictive maintenance are being developed based on this new connectivity. Historically, the OT world has grown up under the watchful eye of electrical and instrumentation and control engineers. They have created sophisticated and mission-critical systems, largely isolated from the corporate IT environment. Proprietary protocols (such as Modbus and others) for real-time, time series data from sensors on critical equipment and processing units allowed field operators and maintenance staff to keep field systems


running and to manage alerts and alarms that focused on safety performance. A degree of standardization came in with the OPC Foundation. Open Platform Communications (OPC) is a series of standards and specifications for industrial telecommunication. An industrial automation industry task force developed the original standard in 1996 under the name OLE for Process Control (Object Linking and Embedding for Process Control). OPC specifies the communication of real-time plant data between control devices from different manufacturers. A new version of this standard was released under the name OPC-UA (for unified architecture) in 2006. Now more and more people, from reliability engineers, to maintenance supervisors, to asset managers, are interested in this data, but they don’t want to go out to the field to get it. They want the data to come to them, on their business laptops, along with their emails and financial dashboards. The folks from OT are running into the data stewards and IT telecommunications and server management support people in planning and production meetings. The first area of convergence came with the infrastructure. Management thought it was a good idea to put all

communications and servers under the IT department since they run these assets for the rest of the company. Well, that didn’t make the OT folks very happy. When the new IT masters started trying to change the server outage windows and patch and code update procedures, the OT folks went on strike. You see, in the field there is no good time for a maintenance window on SCADA systems; 99.9% is not good enough. When a patch on an application doesn’t work you can’t just back it out and try again next week. When the OT systems stop working, the field stops working and everyone gets a little grumpy. The IT folks thought they had it tough with their network reliability objectives; the world just got a lot harder by including SCADA. For the data management community, this convergence is just starting to hit home. A trained DBA with her SQL skills and relational data management experience now meets up with a new data source (called a process historian) and a new data type (time series data described by tag name, time stamp, and electrical measurement). What is she supposed to do with that? Let’s take the tag name to start with. How do you compare a tag name with a unique well identifier? A tag name comes from a specific sensor measurement




POEM FROM CALGARY DMS

This week, under Alberta skies so blue
We saw the birth of Alberta oil, how industry grew
A history of life, of rocks, of data. Yes it’s true,
Data has been discovered,
Its true value uncovered
As the trillion-dollar jewel
It’s become the new “cool”
And our users are omnivores!
Data preparedness is the new dance
As data and analytics start their romance
Once as longed for as a unicorn
A maturing profession has been born.

point, but where is that sensor located? You need to check with an asset registry or asset framework (if one exists) to find out that information. A sensor is usually on a piece of equipment, so there may be many sensors on a well head (or downhole) and there may be only one sensor on a group of wells (collection point). So, you have a one-to-many or many-to-one modeling challenge on your hands. In addition, you have to consider the dynamics of an asset life cycle. A well may start out with natural flowing pressures (different kinds of sensors) and then be replaced with an artificial lift unit (different kinds of sensors) and eventually left as a marginal producer with no specific sensor at all (depending on the sensor/smart gauge on the sales meter). So, the relationship changes over time and well condition. We need a “What is a Tag” standard. How about the time stamp? That one seems straightforward until you start to compare different measurements against each other (multivariate analysis). Is your time stamp measured against a universal clock or just a local clock? Again, you need more information or the events will be difficult to sync with each other. You only have three attributes of this new data record, and I am making each one a complex challenge. The last attribute, electrical measurement, starts out as just a voltage reading, but then you have to process the measurement against formulas to get an engineering value

that you can work with. These formulas vary by equipment, by manufacturer, and sometimes by engineering practice. Now the data steward needs to keep track of the processing flow as well as the data. I think you are starting to get the picture. Now add in measurements from sensors and gauges that are not linked into the SCADA systems but are captured on mobile devices (such as tests, emission readings, or just a sensor that was added after the process control system was installed). This isn’t going to be as easy as you first thought. As much as you would like to go back to the ‘good old days’ when you only had to think about “What is a Well,” the industry is not going to put this genie back in the bottle. The potential value from predictive maintenance, condition-based maintenance, remote monitoring, and even automated production techniques is too large to ignore. About the Author Jim retired from Chevron in 2013 after almost 37 years. After retiring, Jim established Reflections Data Consulting LLC to continue his work in the area of data management, standards, and analytics for the exploration and production industry.
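To make the tag name, time stamp, and measurement attributes discussed in this editorial concrete, here is a minimal sketch of a historian sample being resolved against an asset registry and converted from a raw reading to an engineering value. The tag name, UWI, scaling ranges, and registry structure are all hypothetical; real conversions and registries vary by device, manufacturer, and company practice.

from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical asset registry: it, not the historian, knows where a tag lives.
ASSET_REGISTRY = {
    "FT-1001.PV": {"uwi": "100/01-02-003-04W5/00", "equipment": "wellhead flow transmitter"},
}

# Hypothetical scaling rules; tracking this processing chain is itself a data
# management task, as the editorial points out.
SCALING = {
    "FT-1001.PV": {"raw_range": (1.0, 5.0),     # volts from the transmitter
                   "eng_range": (0.0, 500.0),   # cubic metres per day
                   "units": "m3/d"},
}

@dataclass
class HistorianSample:
    tag: str
    timestamp: datetime   # stored in UTC so multivariate comparisons line up
    raw_value: float      # what the sensor actually reported (e.g., volts)

    def engineering_value(self) -> float:
        lo_raw, hi_raw = SCALING[self.tag]["raw_range"]
        lo_eng, hi_eng = SCALING[self.tag]["eng_range"]
        return lo_eng + (self.raw_value - lo_raw) / (hi_raw - lo_raw) * (hi_eng - lo_eng)

sample = HistorianSample("FT-1001.PV", datetime(2019, 4, 1, 6, 30, tzinfo=timezone.utc), 3.2)
well = ASSET_REGISTRY[sample.tag]["uwi"]
print(f"{well}: {sample.engineering_value():.1f} {SCALING[sample.tag]['units']}")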


~ An Excerpt, Trudy Curtis, PPDM Association, 2018 Calgary Data Management Symposium, Tradeshow & AGM

Submit your Data Management Poetry to Foundations now! foundations@ppdm.org

Do you have a Data Management Horror Story to share? Names/Companies will be removed. Submit your story to foundations@ppdm.org to see it in our next edition.


Feature

Data Science as a Strategic Tool in the Oil and Gas Industry By Charity Queret, Stonebridge Consulting


The Oil and Gas industry is missing the boat when it comes to Data Science - that is, reimagining data and its inherent value as a strategic asset. As in every industry today, the Oil and Gas industry seeks ways to improve efficiencies and thus reduce operating costs and increase revenues. However, unlike many industries, Oil and Gas organizations also face unique safety, environmental, and regulatory reporting requirements. Data science offers numerous advantages that, when embraced by our industry, will be instrumental in improving data efficiencies and increasing revenues. Let’s start with the basics: What is data science? For sure, data science is an overused and confusing buzzword used to promote concepts like Big Data and digital transformation. It is often thrown around as a catchphrase for anything data or analytics related. At a high level, it is more accurately defined as a progressive approach to data, using analysis of past and current data to predict future outcomes. This ability to utilize the past and present to better understand the future can identify data value that can be translated into business value. The principles and tools behind data science have been around for decades, including:
1. Statistics
2. Mathematics
3. Computer Science
4. Machine Learning
5. Probability


Today, the term data science is the unifying umbrella encompassing these principles and applying them to data. When we discuss data science, we are referring to these principles and tools from various sciences to explore a company’s past and current data to find patterns and then use those patterns to develop models or algorithms to predict future outcomes of a business. Historically, business data was structured and limited in quantity. It was not uncommon for data to either be maintained in file cabinets or entered manually into spreadsheets. Some business data was encrypted in proprietary databases and not even available for download. Companies were able to manage this limited and structured data by using business intelligence tools to analyze the available data. This is not possible today. More and more of our business data is unstructured and huge in volume. It is generated from diverse data sources — text files, financial records, multimedia, instrument sensors, etc. More complex and advanced analytic tools and algorithms are required for processing and analyzing this data. Digital tools can provide Oil and Gas companies an avenue upon which they can define, connect, and use their data regardless of data source. A common misconception is that data science is the same thing as business intelligence. Business intelligence is the process of using technology to analyze data for the presentation of deliverables such as graphs, charts, reports, and

spreadsheets. Business intelligence asks the question “What happened and what should be changed?” Data science asks, “Why did it happen and what can happen in the future?” The difference in “What,” “Why,” and “How” differentiates business intelligence and data science (Figure 1).

Figure 1: Differences Between Data Science and Business Intelligence

WHY WE NEED DATA SCIENCE Data volume in the Oil and Gas industry has grown exponentially through the advancement of information technology. This includes everything from recording sensors in exploration, drilling, production, and seismic operations to Logging While Drilling (LWD) technology, allowing drilling data to be recorded real time. It also includes fiber optic solutions, providing a wide range of data about environmental conditions such as temperature, oil reserve levels, and equipment performance or status. Managing this data and using it as a strategic asset significantly impacts the financial performance of the company. Business intelligence tools are no




longer capable of providing the level of analysis required. Applying data science, mathematics, statistics, computer science, machine learning, and probability can make the data manageable. Data science can help in moving organizations from reactive remedial solutions to proactive decision making. This is enabled through integrating different types of data into predictive models, which can then be used to predict future outcomes. Predictive models are statistical models used to predict outcomes – data is collected, a predictive model is defined, predictions are made, and the model is validated or revised as new data is available. Data science uses predictive models to interpret and organize big data. The oil price slump has forced Oil and Gas companies to look beyond traditional methods and seek broader business practice changes to increase performance and cut costs. Better data analytics and technology provide the key in determining whether Oil and Gas companies thrive.

STANDARD LIFE CYCLE OF DATA SCIENCE PROJECTS
Data science is an ever-evolving field, positioning the data science project life cycle to be open to interpretation and customization. Until standards have been defined and accepted, a basic iterative data science life cycle is recommended as a starting point.
1. Business Understanding
   a. Identify problems – These will become the target model.
   b. Define business goals
      i. Regression – How much?
      ii. Classification – Which category?
      iii. Clustering – Which group?
      iv. Anomaly detection – Is this expected or unusual?
      v. Recommendation – Which option?
2. Data Understanding
   a. Identify data source – Is the required data available? If not, can it be obtained?
   b. Ingest the data – Import the data into an analytical sandbox.

   c. Explore the data – Use data summarization to audit the quality of the data.
   d. Set up a data pipeline – Define a process to regularly refresh data.
3. Modeling
   a. Group data into a training data set and a test data set.
   b. Build models using the training data set.
   c. Evaluate model results.
4. Deployment
   a. Deploy the model to a production or test environment for consumption.
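As a minimal illustration of steps 3 and 4, the sketch below splits a data set into training and test groups, builds a model, and evaluates it with scikit-learn before any deployment decision. The file name and column names are placeholders invented for the example, not a real data source.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical extract from the analytical sandbox described in step 2.
data = pd.read_csv("drilling_sandbox.csv")                      # placeholder file name
features = ["bit_type", "formation", "weight_on_bit", "rpm"]    # illustrative columns
X = pd.get_dummies(data[features])                              # simple encoding of categoricals
y = data["rate_of_penetration"]

# Step 3a: group data into a training set and a test (hold-out) set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Step 3b: build a model on the training set.
model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Step 3c: evaluate against the hold-out set before deployment (step 4).
mae = mean_absolute_error(y_test, model.predict(X_test))
print(f"Mean absolute error on held-out wells: {mae:.2f}")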

Figure 2: Data Science Life Cycle

BENEFITS OF DATA SCIENCE IN THE OIL AND GAS INDUSTRY
Here are a few high-level examples of how the Oil and Gas industry can benefit from data science.
1. Exploration and discovery – Seismic data and geological data, such as rock types in nearby wells, can be used to predict oil pockets.
2. Production accounting – Production data can be linked with alarms.
3. Drilling and completions – Predictive analytics can employ geological, completion, and drilling data to determine preferred and best drilling locations.
4. Equipment maintenance – Real-time streaming data from rigs can be compared with historical drilling data to help predict and prevent problems and better understand operational risks.
These examples demonstrate the operational goal of data science in Oil and Gas: to continuously maximize the life cycle value of Oil and Gas assets by real-time monitoring, continuous updating of predictive models with the latest data,


and continuous optimization of multiple long- and short-term decisions. As with any technological advancement, there are barriers to the successful use of data science, including:
1. Taxing computing resources – There may not be enough resources to hold and process large amounts of structured and unstructured data.
2. Poor data quality – Data may be maintained in multiple locations and subject to inconsistent governance.
3. Incorrect modeling – The right questions may not have been asked or may have been misunderstood.
4. Intransigent corporate culture – C-suite support is imperative from the get-go. Communication among collaborators, SMEs, and data scientists is critical.
5. Talent gaps – Data science and data engineering talent are new to the Oil and Gas industry. These skill sets are still developing, and it can be difficult to assemble the right team.

CONCLUSION All things said, we are living in a historic period of explosive growth for the Oil and Gas industry, with mind-boggling growth in both the production of hydrocarbons and digital data. Data science and all the new and emerging technologies enable the discovery of new opportunities, generating more efficient workflows, increased safety, and significant reductions in operational costs. As the Oil and Gas industry grows and becomes receptive to big data and the use of data science, it can only move forward. Huge volumes of unused and undervalued data that simply sit in storage have little worth. For data to be a true asset, it must be identified, aggregated, stored, analyzed, and perfected. This ability to draw insights from large data sets can make the Oil and Gas industry more profitable and efficient. About the Author Charity Queret is a senior consultant at Stonebridge Consulting. Charity has more than 20 years of experience in designing and developing end-to-end business intelligence and data warehousing solutions.


Feature

What’s in a Name? By Dave Fisher, David Fisher Consulting & Ali Sangster, Drillinginfo


In October 2018, the PPDM Association released What Is A Completion. It is a document that seeks to remove ambiguity from the word “completion” and avoid confusion in data types associated with the preparation and operation of wells. The word “completion” can be a technical term related to the preparation of a reservoir or a general business term for the end of an operation or fulfilment of an obligation. For example, is a well completion report about the end of the drilling phase or about preparing for the producing phase? For most regulators, the report is about both. Every jurisdiction has its own set of data types. Operators must submit a well completion report (or a variation of this title) with the format and content specified by the regulator. PPDM recently examined these reports from state and federal authorities in the USA. The study, summarized in Calgary at the PPDM Data Management Symposium last October, revealed considerable variety in the


required data types. For example, they all want the operator’s identity and the well name. Most want the spud date, total depth, and perforated interval. Some want pressures and initial production. One wants owners who take production in kind. A well’s owners and contractors have the original data; what about the rest of us? Most of our well data comes through the regulatory agencies and we all want good data. “We” includes the government, the industry, and the public. Data professionals must assemble the well data and make it available, subject, of course, to regulations, confidentiality, contracts, etc. Extra effort is required to discover all the data available in multiple reports.

DATA TYPE VARIETY The chart in Figure 1 shows some of the data types required by 33 state and federal regulators in the USA. It illustrates the variety of data items that regulators want; a complete list would be much longer. The data types are from reporting forms on government websites; many forms were identified in a compilation by the Groundwater Protection Council (GWPC). In most cases, the well completion report covers the drilling (wellbore construction) and completion (reservoir preparation and testing) phases, but a few states (e.g., CO, LA, MI) have a separate report format for completions. The chart is sorted by frequency of the data item. All regulators want the well

identified by name or number (usually both) and the operator. Most require the report to be certified by signature and, in a few cases, notarized. Most also want to know about reservoir completion and production testing, but there is considerable variety in the details. Some “missing” items are collected through other reports. Some unusual items are relevant only to certain areas (e.g., offshore) or state-specific legislation. Report forms may use the same words but have different meanings. For most states “well type” (or “type of well”) is “oil, gas, dry, .…” But for one state, the choices are “new, existing” and another agency defines well type codes for “exploratory, development, strat test, .…”
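One common defensive pattern is a crosswalk that maps each regulator’s vocabulary onto a single internal reference list and refuses to load values it does not recognize. The sketch below is illustrative only; the jurisdiction names and code values are invented and are not taken from any actual reporting form.

from typing import Optional

WELL_TYPE_CROSSWALK = {
    ("STATE_A", "OIL"): "OIL",
    ("STATE_A", "GAS"): "GAS",
    ("STATE_A", "DRY"): "DRY HOLE",
    ("STATE_B", "NEW"): None,          # this regulator's "well type" is really a status
    ("STATE_B", "EXISTING"): None,
    ("AGENCY_C", "EXPLORATORY"): "EXPLORATION",
    ("AGENCY_C", "STRAT TEST"): "STRATIGRAPHIC TEST",
}

def map_well_type(source: str, reported_value: str) -> Optional[str]:
    """Translate one regulator's 'well type' value into the internal reference
    list; unmapped values are routed to a data steward instead of being loaded."""
    key = (source.upper(), reported_value.strip().upper())
    if key not in WELL_TYPE_CROSSWALK:
        raise KeyError(f"Unmapped value {reported_value!r} from {source}")
    return WELL_TYPE_CROSSWALK[key]   # None flags "same words, different meaning"

print(map_well_type("State_A", "dry"))   # -> DRY HOLE
print(map_well_type("State_B", "new"))   # -> None: do not load it as a well type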

IMPLICATIONS FOR ANALYTICS Does the data mean what you think it means? Data professionals and end users should understand the consequences of the variability in reporting requirements. • Operators file well completion reports. Familiarity with the requirements for one state does not help when reporting to another. As perhaps an extreme example, 16 regulators want to know the amount of casing pulled at the end of the drilling operations; one state asks for the amount left in the hole. • Regulators surveyed by PPDM in 2015 expressed frustration in having no authority or resources to enforce compliance when an




operator’s report is deficient. This affects their legislated duty to monitor and regulate the industry.
• Data vendors invest considerable effort to collect and validate the well data from regulators, using algorithms and human intervention tailored to each source.
• In-house database managers load well data from external and internal sources. Quality problems occur when a data type is missing or has multiple meanings or formats.
• End users in the geoscience, engineering, and business communities are frustrated and delayed in their decisions if there are problems with data completeness and correctness anywhere in the data flow chain from the well to the desktop.
• Decisions made by government departments, municipalities, and the general public are affected if well data is incomplete or incorrect.

OWNERSHIP SOLUTIONS If you are responsible for trustworthy data, there are actions that should be taken right now.
• Awareness is power. Know the pitfalls. Don’t rely on the “package name.”
• Know your data. The same word can have different meanings. Compliance is more than filling in the spaces on a form.
• Know your obligations to each regulator. Don’t assume the definitions and data requirements. If the information is not required on a well completion report, it may be required on a separate form. If you are now responsible for New York data, don’t assume it is reported in the way you have done it for Oklahoma.
• Know your customers and end users. They are probably trusting you to provide complete and accurate data.
Even while navigating the various requirements for today, we should be looking ahead to something better. PPDM is always seeking to facilitate our industry’s collaborative efforts to eliminate errors and omissions in data exchange.

Figure 1: Selected data types from USA well completion report forms, sorted by the number of regulators that require each item. Data types shown include: Operator name; Perforated interval; Well name and ID number; Reporter’s name, signature; Location – survey; Treatment amt, rate, etc.; Formation picks; Production test volumes; Wellbore total depth; Production test 24 h rate; Producing / inj formation; Location – lease, unit, etc.; Cement class, yield; Completion type (single, dual); Location – latitude, longitude; Drill stem test results; Completion dates (start, end); Water encountered in drilling; Environmental (disposal, etc.); Rig release date; Lease type (state, fee, Indian).

• A standard well completion report template would reduce errors and omissions.
• Electronic data filing, such as the RBDMS eForms already used by some regulators, reduces paperwork and the need to submit the same data more than once. It does not eliminate the need to know a regulator’s unique definitions and requirements.
• A standard set of data rules to validate each entry would enhance compliance and data quality. Many of these rules are in the PPDM Rules Library (see the sketch after this list).
• A standard set of data requirements would support data quality at the point of creation, long before it is assembled into a well completion report.
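As a small illustration of what such data rules look like in practice, the sketch below applies a few checks to a completion report record before it is submitted or loaded. The rules and field names are examples written for this article, not entries taken from the PPDM Rules Library.

from typing import Dict, List

def validate_completion_report(report: Dict) -> List[str]:
    """A few illustrative data rules of the kind a shared rules library would
    define; they are not taken from the PPDM Rules Library."""
    problems = []
    td = report.get("total_depth_m")
    perf_top, perf_base = report.get("perforated_interval_m", (None, None))
    if td is not None and td <= 0:
        problems.append("Total depth must be a positive number")
    if None not in (perf_top, perf_base, td):
        if perf_top >= perf_base:
            problems.append("Perforation top must be shallower than perforation base")
        if perf_base > td:
            problems.append("Perforated interval cannot extend below total depth")
    if not report.get("operator_name"):
        problems.append("Operator name is required by every regulator surveyed")
    return problems

# Deliberately inconsistent values, for illustration only.
report = {"operator_name": "Example Oil Co.", "total_depth_m": 3978,
          "perforated_interval_m": (30978, 3150)}
print(validate_completion_report(report))

Rules like these catch keying errors at the point of creation, which is exactly where the article argues data quality is cheapest to fix.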


CONCLUSION “You keep using that word. I do not think it means what you think it means.” (from the movie The Princess Bride). The content of the package may not be what you expect from the label. Everyone’s business depends on correct data; analytics is only accurate if the data is correct. Data professionals must understand the meaning behind the name. About the Authors Dave Fisher is a retired Calgary geologist. He had never heard of data models until he joined the PPDM board of directors in 1993. Ali Sangster has been with Drillinginfo for nearly 10 years, serving as Director of US E&P data.


EDM for Energy
Data management solutions for energy firms
‒ Centralization of disparate data
‒ Cross-divisional workflow automation
‒ Geospatial data analysis
‒ Data governance
‒ Exception reporting
‒ Integration with BI tools

ihsmarkit.com/edm-for-energy


Photo Contest

Foundations Photo Contest

“RED-TAILED HAWK LANDING IN PINE TREE” BY BOB BUSH 2nd Place in the Volume 6, Issue 1, Foundations Photo Contest “Red-tailed hawks are common throughout North America, with a migratory range from south of Mexico to Alaska. This one was photographed while landing on a tree next to my home in Houston.” July 22, 2015 Born near the San Andreas Fault and educated as a geologist, the photographer has worked for BP, Pennzoil, and Devon Energy in both geological and IT roles. For the last four years he has been employed by a consulting company working in data management.




On the cover:

“MAMMOTH HOT SPRINGS TERRACE, YELLOWSTONE NATIONAL PARK” BY GORDON COPE 1st Place in the Volume 6, Issue 1, Foundations Photo Contest

“The travertine terraces are formed over thousands of years as hot ground water emanates from a magma chamber and precipitates carbonates.” The photographer is a geologist and author who currently lives in Mexico. Please visit www.gordoncope.com.

Enter your favourite photos online at photocontest.ppdm.org for a chance to be featured on the cover of our next issue of Foundations!



Feature

Geophysical Data Compliancy – Utilizing Technology, Part Two By Sue Carr & Trish Mulder, Katalyst Data Management

This is the final article in a two-part series called “Geophysical Data Compliancy – Utilizing Technology.” This article will complete the discussion of the three transitional stages, using tools to assist in your compliancy journey. In our first article, we discussed a six-stage compliancy life cycle for a data governance environment. The six phases consist of three asset stages and three transitional stages. All six phases must be complete, and some may occur simultaneously, in order to achieve data compliance. This model is different from others as we believe a corporation can never really achieve compliancy without relying on technology, visualization, tools and an automated, continuous monitoring process. The first article focused on the three asset phases (the database, the contracts, and the data); this final article will complete the overall method with the discussion of the three transitional stages.

INTRODUCTION
The three transitional stages are the connectors that allow a project


to move through the overall data compliancy process and link the three asset stages together. We refer to these transitional stages as Divide & Conquer, Visualize & Report, and the last stage, Data Manipulation. This article explores what is required by these transition stages (Figure 1).

Figure 1: The three transitional stages (Divide & Conquer, Visualize & Report, and Data Manipulation) linking the three asset stages (the database, the contracts, and the data).

TRANSITION STAGE #1: DIVIDE AND CONQUER
The purpose of the first transitional stage is for the corporation to properly understand where their legal risks and exposure may exist and to apply mechanisms to mitigate these risks. We view ownership and entitlement as having different definitions. Ownership is simply asking the question: do you own the data or did you license it? Entitlement focuses around the contract and the proper use of the data. The critical component of this transition stage is establishing an entitlement rating matrix (Figure 2). With the assistance of your legal department, a rating matrix allows for an understanding of how a survey has been classified, the confidence of that classification, and, therefore, the confidence in the use of the data. Historically, contracts are not created equal — contracts may be missing critical information pertaining to the asset. The ambiguity within contracts creates risk, risk increases liability, and liability increases a corporation’s exposure to lawsuits.


Figure 2: Entitlements Rating Matrix – ownership classes (100% / Exclusive Proprietary; Partnered Proprietary; Trade / Non-Proprietary; Unknown / Can Not Confirm) rated against entitlement grades A–D according to the confidence level of the classification.




Once a rating matrix has been implemented, you are ready to start to break down your database to group data into “similar/like” ownership. As mentioned in the first article, the second asset stage, contracts, is where you will start to analyze the contracts, master license agreements (MLAs), AFEs, etc. Prior to doing that, you must understand how the data has been acquired so you know what documentation and agreements you are targeting.
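A rating matrix of this kind can be expressed as a simple lookup from ownership class and classification confidence to a grade. The grade assignments in the sketch below are invented for illustration and should not be read as the actual matrix in Figure 2.

# Illustrative entitlement rating matrix: ownership class plus the confidence
# in that classification yields a grade. The assignments are examples only.
OWNERSHIP_CLASSES = ["100% / Exclusive Proprietary", "Partnered Proprietary",
                     "Trade / Non-Proprietary", "Unknown / Can Not Confirm"]
CONFIDENCE_LEVELS = ["High", "Medium", "Low"]

def entitlement_grade(ownership: str, confidence: str) -> str:
    if ownership == "Unknown / Can Not Confirm":
        return "D"                      # cannot demonstrate entitlement at all
    if ownership == "Trade / Non-Proprietary":
        return "C" if confidence == "High" else "D"
    # Proprietary classes: the grade degrades as classification confidence drops.
    return {"High": "A", "Medium": "B", "Low": "C"}[confidence]

for oc in OWNERSHIP_CLASSES:
    print(oc, "->", [entitlement_grade(oc, c) for c in CONFIDENCE_LEVELS])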

TRANSITION STAGE #2: VISUALIZE & REPORT
The second transition stage is where the technology that you’ve implemented all aligns to paint a visual picture of what your environment looks like and the extent of your corporate risk. In our first article, we discussed the importance of supporting technology such as analytics tools, ArcGIS, FME, and crawling mechanisms and how they are used to more deeply understand the information. At this point in the process, the information collected can be amalgamated and displayed in a way which is less complicated for the user community, legal department, and upper management to understand.

User Name    | No Action (Compliant files) | Remove (Non-compliant files) | Investigate (Under investigation) | Rename (Cannot identify by filename) | Total Files per User | ACTION required, % of user files | NO ACTION required, % of user files
Joe Smith    | 497 | 125 | 97  | 1109 | 1828 | 73 | 27
Jane Robyns  | 440 | 50  | 98  | 1424 | 2012 | 78 | 22
Tom Kent     | 376 | 68  | 95  | 921  | 1460 | 74 | 26
Rachel Burk  | 677 | 200 | 182 | 173  | 1232 | 45 | 55

Figure 3: A geographical representation.

The geographical representation pictured above (Figure 3) represents the data the corporation can use, the data it cannot use, and the data that is still under investigation. The color-coded report identifies each user, the total number of files in question, and the severity of risk to the corporation should they use data to which they are not entitled. The entitlement rating matrix will provide assistance while validating

existing contracts when information on the ownership/entitlement documentation is scarce or ambiguous. Another challenge is when users alter file names, breaking the relationship between the data and the associated contracts in their personal projects. If you have no mechanism to link the two back together, this creates an environment of risk for the corporation.
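The action percentages in the user report above fall straight out of the per-user file counts; the short sketch below reproduces them (the user names and counts are taken from the table).

# Recomputing the action summary: everything except the "No Action" bucket
# needs work, so the two percentages always sum to 100.
user_files = {                    # no_action, remove, investigate, rename
    "Joe Smith":   (497, 125,  97, 1109),
    "Jane Robyns": (440,  50,  98, 1424),
    "Tom Kent":    (376,  68,  95,  921),
    "Rachel Burk": (677, 200, 182,  173),
}

for user, (no_action, remove, investigate, rename) in user_files.items():
    total = no_action + remove + investigate + rename
    action_pct = round(100 * (remove + investigate + rename) / total)
    print(f"{user:<12} total={total:>4}  action={action_pct}%  no_action={100 - action_pct}%")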

TRANSITION STAGE #3: DATA MANIPULATION The purpose of the final transitional stage, Data Manipulation, is to label the data the corporation is entitled to use. This is achieved through investigation and the compliance process. The reality is that data comes into your user environment on multiple devices and media, in multiple formats. Common practice may be to rename a line when it comes into a corporation, which breaks the relationships among the contract, the database, and the data. This may create a situation where the seismic file names no longer have a reference to the contract or database line names. This is where using technology can assist in discovering all the seismic data in your environment. If the relationship between the seismic line in the database and the seismic data is broken, there are methods utilizing scripts that can interrogate the metadata within the interpretation project and the SEG-Y files to rebuild the relationships. For example, Python and FME are tools that can be used to scrape metadata from the binary, the EBCDIC, and the trace headers to identify first and last shot points, line name, and unique identifiers. GIS systems can calculate line length or square kilometers, and proximity routines can be used to establish spatial probability. The results can then be evaluated against your Source of Truth (database) and against your Source of Entitlement (contracts) to identify and resolve what the data is. This creative approach using your existing systems can determine and ultimately rebuild the data to database to contracts relationship. The next step is to write a radio tag

The next step is to write a radio tag in the binary header: the line name as recorded in the database, along with the contract file number(s). Writing the tag into the binary header secures the information, as it is difficult to alter, unlike the ASCII text within the EBCDIC header. The embedded radio tag (line name and contract number) in the binary header allows the crawling application to locate them and identify whether the corporation has the entitlement to use the data. Continuous monitoring, reporting, and manipulation of the data (data compliancy) in the user environment is a critical step to ensure the corporation moves to a data governance state. Data is dynamic and is constantly acquired, moved, renamed, and divested, and a governed environment will maintain a robust legal position for the corporation. In Alberta, as in many other jurisdictions, users of the data are accountable for ensuring that they are entitled to use that data. Imagine that my field truck is in your parking lot. Does that mean you can drive it or use it as you please? Of course not, and the same rules apply to data. "APEGA members who fail to consider, or who disregard, the rights and obligations of data owners or licensees could place themselves in a position where their actions might constitute unprofessional conduct or could result in legal liability." Association of Professional Engineers and Geoscientists of Alberta (APEGA): https://www.apega.ca/assets/PDFs/geophysical-data.pdf
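As a sketch of the radio-tag step described above, the snippet below writes and reads a fixed-width tag in the unassigned region of the SEG-Y rev 1 binary file header; the chosen offset, field width, and example line and contract identifiers are assumptions for illustration, not a published standard.

```python
# Minimal sketch: stamp a "radio tag" (database line name + contract number) into a
# SEG-Y file. The offset assumes the rev 1 unassigned binary-header region (bytes
# 3261-3500); the tag could equally be packed as binary integers per your convention.
TAG_OFFSET = 3260   # zero-based file offset of the first unassigned byte (assumption)
TAG_LENGTH = 64     # fixed-width field reserved for the tag (assumption)

def write_radio_tag(path, line_name, contract_no):
    tag = f"{line_name}|{contract_no}".encode("ascii")[:TAG_LENGTH].ljust(TAG_LENGTH, b"\x00")
    with open(path, "r+b") as f:   # file must already exist
        f.seek(TAG_OFFSET)
        f.write(tag)

def read_radio_tag(path):
    with open(path, "rb") as f:
        f.seek(TAG_OFFSET)
        raw = f.read(TAG_LENGTH).rstrip(b"\x00")
    return raw.decode("ascii", errors="replace")

if __name__ == "__main__":
    # Hypothetical line and contract identifiers.
    write_radio_tag("example_line.sgy", "LINE-2020-17A", "MLA-00421")
    print(read_radio_tag("example_line.sgy"))
```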

CONCLUSION To answer whether you are compliant or not, ask yourself these guiding questions: Are you compliant in your use of data within your corporation? Are you using the data you are entitled to use, in the manner for which it was intended? How do you know? Can you prove it? Are you audit ready? To be fully compliant, you must measure your data against your database, against your contracts. A six-stage compliancy process resulting in a data



governance state is required to constantly monitor your digital environment to minimize and mitigate risk. About the Authors Sue Carr, Manager Consulting Services, Katalyst DM: more than 35 years of implementing software and data management systems and leading subsurface data teams. Sue is focused on building a DM consultants group to help solve E&P companies’ data challenges. Trish Mulder, Director of Business Development, Katalyst DM: a seismic data expert with over 18 years of experience in data management at both E&P and service companies. Trish has a strong vision for the future of data transactions and compliance.

Guest Editorial

Buzzwords and Technology Basics in 2019 - Web 4.0

By Guy Holmes, Tape Ark


At least once a week we hear buzzwords tossed out in conversation or in office meetings. Some of the new trends being talked about will imminently change our lives and work in profound ways, while some are a long way from helping us. However, there is one thing that I find consistent about the buzzwords (Figure 1), and that is that just about all of them are tossed around like we should all know what they mean and how they will help our work. The reality is that many of these new tools are simply not well understood by us, and because of that, we do not know what potential use they will be or what impact they will have on our work. Some of the words we hear are IoT, Big Data, Public Cloud, Data Analytics, Blockchain, and Web 4.0. There are of course many others — in fact I learn or hear a new one almost every week. Starting with this issue, the Foundations editors have created a feature about technology trends and buzzwords. In this regular column, I will do my best to explain what the terms are, how they can be used in the Oil and Gas sector, and summarize how the technology will change the industry over the coming months and years. The first area I have chosen is Web 4.0. I start here because all of the other buzzwords I mentioned above form a part of the Web 4.0 ecosystem. In fact, Web 4.0 is in many ways both a tool in itself and an enabler of many other tools – a symbiotic relationship of technology-to-technology and technology-to-human integration built to enable and better our future.



Figure 1: Trending Technology Buzzwords (Web 4.0, IoT, Big Data, Public Cloud, AI, Blockchain).

Web 4.0 is the fifth generation of the internet, starting with Web 0. As a bit of background: • Web 0.0 was the initial, raw internet, the time when a simple open network was able to first talk to a web "Browser." It was initially designed for research, but the power of the technology quickly spread. • Web 1.0 differentiated itself from Web 0.0 in that it added index engines that provided directory listings for all of the several million sites that had been developed. It essentially evolved from "browser-to-content" in a read-only state to "browser-to-index-to-content," still as read only.


The extra index layer was added to help people get to meaningful content without having to visit every possible site on the web until they find the one site that is relevant to them. • Web 2.0 integrated sharing into its core. This gave read/write capability and allowed people to update pages from their browser so that sharing and collaboration became a critical component of many sites. This newfound capability to read and write spawned web sites like MySpace, YouTube, Facebook, LinkedIn, and Wikipedia, where you could update, modify, and share content easily. Suddenly the web was not just a destination for information, but a place to gather, connect, network, and socialize for both business and pleasure. This also led to large-scale global adoption of the web, more sites, and the need for improved indexes and index sites. • Web 3.0 moved on to allow machine-to-machine interactions. The most efficient way to get machines to communicate reliably is to standardize languages and protocols so that communications and processing have a shared foundation to interact (kind of like the PPDM's core missions for data). Web 3.0 marked the uptake of "hard" communication channels between machines – preplanned handshake-style connections. The soft connections were really yet to come. For example, the evolution of Web 3.0 included the integration of cloud computing into our everyday lives and Software as a Service (SaaS) becoming widely available and accepted. Today, we are in the era of Web 4.0. The best example of its difference from Web 3.0 is the area of the Internet of Things or IoT. This is where the soft connections of devices that come in the form of watches, cameras, sensors, personal navigation systems, etc., expand and multiply to serve supply chain, health monitoring, digital twins, and many other areas. Web 4.0 gives small

IoT implementations a place to coexist both with and within larger scale IoT kingdoms. Take, for example, a gas processing plant that has smart sensors that detect irregular temperatures in a pipe. The sensors may send an alert to the oil company while at the same time shutting the valve on the pipe. Add to this small implementation of IoT sensors the fact that the pipe sensors themselves communicate with the manufacturer of the closest valves, so that, when a pipe closes, a replacement valve is automatically shipped to the plant for replacement. Connect that manufacturer to the steel factory where an order for new stock is automatically placed to replace the valve they shipped to you. Add the notice sent to the iron ore producer where the iron is mined to increase production to meet the newfound need. Don’t forget that Fred, the trusty valve mechanic, gets notified that he is needed urgently at your plant to replace the valve, and you can see how all of these soft connections (many you may not even know exist) collaborate to act intelligently, machine-to-machine, machine-to-human, across a read/write web that is Web 4.0. The Web 4.0 ecosystem enables all scale and manner of communications

like the one mentioned above, creating a symbiotic, non-touch relationship between technology and people. You may not see or feel the devices and sensors that provide your life with many new benefits, but all around you are hundreds of billions of devices that are communicating like an electronic skin around our world and soon, likely around our solar system. Web 4.0 seemed like a good place to start our Foundations journey into technology trends as all of the other buzzwords are kind of a product of or a key component in the Web 4.0 ecosystem. REFERENCE:

Evolution of the World Wide Web: From Web 1.0 to Web 4.0. Sareh Aghaei, Mohammad Ali Nematbakhsh and Hadi Khosravi Farsani. International Journal of Web & Semantic Technology (IJWesT), Vol. 3, No. 1, January 2012. https://pdfs.semanticscholar.org/8cb3/93c3229e8f288febfa4dac12a0f6298efb93.pdf

About the Author Over the past 19 years Guy has chased his passions wherever they led. In some cases, his passion led him to starting a company that imported wine accessories, and another to founding a leading global data management company.



Feature

Why Move to the Cloud? Three Key Benefits of Cloud Computing By Uwa Airhiavbere, Microsoft


We use the cloud, or cloud computing, through our devices and other tools for many different reasons. Perhaps it has become commonplace to us, but why exactly do we use it? Recently, I was with a client who asked this fundamental question and wanted his team to understand the basics.


WHAT IS THE CLOUD? Fundamentally, the cloud is the vast network of remote servers hosted on the internet, providing businesses and individuals with data storage and scalable processing and computing power. File servers are computers or storage devices dedicated to storing data. Technology is changing at a rapid pace, and the transition from old to new information technology (IT) dynamics has created a need for the cloud. The old IT structure was composed of limited tools and vendors, and was created with the assumption that there would be a lack of mobility. Today's IT infrastructure is more complex, with multiple tools. There are different platforms — including Windows, Android, and iOS — and a mobile-first approach with users consuming content ubiquitously. Also, more devices are creating data, and lots of it. Not just smartphones, but cars and even home appliances are creating data and adding to the complexity of today's technology fabric. Capturing and analyzing this data has become vital for businesses to remain competitive.

HOW IS THE CLOUD USED? From a business perspective and, perhaps

also non-business, the cloud can be used for a wide variety of activities. It can be used to create new applications and digital services. Many businesses today find that they have various applications in silos and are leveraging the cloud to aggregate these applications on the same platform and then make them ubiquitously available. The cloud can be used to store, back up, and recover data. Businesses that do not have a business continuity plan (that includes data storage) are at risk of failing after a data loss event. The cloud can be used to process Artificial Intelligence (AI) use cases. Digital assistants and chat bots are excellent examples of how AI use cases can scale globally. Using the cloud, most of us can leverage digital assistants to make our work easier and to consume business services provided by service providers. It can be used to keep businesses secure and compliant. It can be used to unlock the potential of connected machines. Especially in industrial scenarios, businesses are finding that by leveraging cloud services, they can learn more about their devices and avoid unplanned downtime by analyzing data and making predictions.

WHY THE CLOUD? From experience, businesses choose to leverage the cloud for three main reasons: to save time, build confidence, and increase performance. • The cloud saves businesses time in various ways. A new server can be available in minutes instead of months. Unstructured data can be saved and shared across the world ubiquitously to build resiliency and


low latency in apps. The cloud helps to reduce the amount of time IT teams spend to manually monitor and manage physical servers, and to track and upgrade the server software. The IT organization can become a broker of services. • Also, the cloud helps businesses to be more confident in their IT infrastructure. It offers the possibility of making it easier to manage cybersecurity threats. Disaster recovery is easier and faster to set up and manage. It is also easier to comply with regulatory requirements. • The cloud can be used to offer a platform of services and products that can help to improve and accelerate performance for businesses of all sizes. A global and widely distributed team can function more effectively. Businesses also only pay for what they use. Moving to a consumption model is very attractive to smaller companies and start-ups, allowing them to tie business performance to their investment in their infrastructure. These three reasons provide business entities with an argument for incorporating the cloud into their business models. About the Author Uwa Airhiavbere is Director of Microsoft’s Worldwide Oil and Gas sector. In this role, Uwa is responsible for Microsoft’s overall strategy in the Oil and Gas industry, field readiness and engagement, partner strategy, strategic investments, and integration among business groups within Microsoft.


Hands-On with the PPDM Association Board of Directors By Jamie Cruise, Target Energy Solutions



From the board level down, the management of oil companies is demanding that their scientists and engineers use new digital technologies to reduce cycle times, lower costs, and create competitive advantage. Radical systems are being built that use AI, machine learning, and traditional physics-based techniques to create fresh insight and unlock recovery. The value of data to an operating company has never been so clear. After years of toil and drudgery, the data management department has been thrust into the heart of their corporation's "digital transformation." A consensus is now emerging that these transformative application and AI initiatives must be underpinned by a revolution in data management that liberates data to produce a "new subsurface data foundation."

THE CORPORATE DATA ECOSYSTEM Before we examine what the “new subsurface data foundation” of the future looks like in my next article, we must examine our history. After all, we are not exactly new to digitalization. Our industry has been building digital data management systems for subsurface data for more than 30 years. A typical large corporation will have multiple “data management systems” spread throughout their organization, with each one providing support to the business in a highly specialized role. At the simplest level, “data cataloguing

systems" provide the business user with a way of finding data inside their corporation. These types of systems are the equivalent of a library card index, with each record describing a data item and containing "metadata" – data about data. The actual data being catalogued is stored elsewhere, on shared folders or often as physical records in warehouses: log prints, seismic sections, core samples, printed reports, etc. The catalogues themselves are not regarded as authoritative data; they are just used to find the real data. Catalogue systems are very business oriented, and many organizations have built valuable data catalogues using simple tools such as spreadsheets or PC databases. A larger corporation may have systems to track data in business units around the world, built on relational databases and with millions of records. Each record may have many columns, but otherwise be simple flat structures. "Master data management systems" also use a database to maintain a catalogue of data sets, but unlike a cataloguing system, the digital records are held online and can be accessed directly without having to order them from a warehouse. The data model or schema for a master data management system can be very rich and allow capturing of real asset data, not just metadata. Whilst the bulk data in seismic data sets, documents, and well logs are still typically stored in their "original format" (structured or semi-structured files, with only their headers stored in the

database), well headers, seismic surveys, directional surveys, markers, etc., are all captured directly as structured data. "Project databases" or "application data" systems reformat data from their original acquisition or transfer format into specialist data structures, optimized to enable applications to read, perform calculations and analytics, and write results back to the application data store with the highest efficiency. The internal engine of an application data management system might be a client-server database that supports collaboration between multiple application users (a project database), or it might be a single-user file-system data store. In many cases, application data stores are designed to hold data for a particular application family and the data held in them cannot normally be easily accessed without proprietary application software licenses. A "search engine" is another important class of data management system used in our corporate environment. It's the simplest way of creating a corporate view of data. Specialist utilities "crawl" through existing content repositories to extract metadata records, just like a cataloguing system. Unlike a cataloguing system, even low-cost search engines can scale to billions of rows and provide lightning-fast searching across hundreds of attributes without requiring any complex database setup or predefined schema. The document-oriented search engine is just one example of next-generation corporate databases providing new



Now Available For Members

Contact projects@ppdm.org to get your copy!

Thank you to our sponsors and work group members for making this work possible.



ways of handling data and metadata. These systems prioritize large-scale storage, flexible integration and data aggregation, and fast querying rather than transactional data integrity. Therefore, they often move away from the relational database technology that we use for transactional databases. These non-relational technologies are known collectively as “NoSQL.” Another example of a non-traditional database is the “corporate data lake.” This holds all the data needed for running complex machine learning and AI algorithms. These algorithms need vast swathes of “Big Data” to discover new relationships between data elements and optimize previously intractable or expensive equations. Whilst traditional data systems involve complex data structures organized into silos for specific data types or business transactions, the data lake provides an environment that integrates data from multiple data sources and enables massively parallel processing using platforms such as Hadoop. Search engines and the data lake are both powerful venues for enhanced access to data, but it’s worth noting that they normally work on a copy of the data. They must be stocked from some other authoritative source of data. In addition to these formal data management systems, there will be many other informal systems for managing data, such as unindexed shared folders, flash drives, emails, and spreadsheets. This informal data economy must also be represented in the new subsurface data foundation. Learn more about what’s coming in my next article, Building a New Subsurface Data Foundation, in this same issue. About the Author Jamie has been working in upstream data management for 25 years. He is an experienced implementor, designer, executive, and entrepreneur.

Building a New Subsurface Data Foundation By Jamie Cruise, Target Energy Solutions


DIGITALLY TRANSFORMING CORPORATE DATA MANAGEMENT


Despite having delivered such a wealth and diversity of digital data management systems to the industry, each advance in data management has only introduced a quantitative change in our data management efficacy; we are still burdened by waste and low quality in the data domain. What we need is to consolidate the quantitative advances of the last 30 years to make a great qualitative leap forward. Only when we make this leap will we be able to say that we have delivered the revolution needed to establish the new subsurface data foundation. In our work with customers we observe four common trends driving us closer towards the new subsurface data foundation:
1. Desiloization of data stores.
2. Decoupling of data from applications.
3. Cloud-based data processing and analytics.
4. Agile solution delivery.

Desiloization of Data Stores Existing products for subsurface data management often have a very narrow focus. We have separate systems for logs and well headers, for drilling, production, seismic, and physical records. This multiplicity of overlapping software and data stacks is reminiscent of the wasteful racks of physical servers that populated our data centres 10 years ago. In the same way that we consolidated these using standardization and virtualization, we should consolidate our subsurface data silos by migrating them to an integrated, standards-based data repository that is capable of "mastering" all data types needed by the business. All new data types will be integrated through a common framework of master and reference data. Each time we migrate a legacy silo we create a new data type in the foundation and each migration provides a chance to improve the quality of our master and reference data, which will benefit the next legacy data silo migration project. In this way, eliminating legacy data silos frees resources (money, IT, people) to work on a truly corporate-level data foundation and starts the long journey to data harmonization and a single view of the truth that is fundamental to establishing a "digital twin" of our businesses.
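As a toy illustration of how each silo migration can improve master and reference data along the way, the sketch below maps legacy operator names onto corporate reference values; the names, alias table, and handling of unresolved values are illustrative assumptions, not a prescribed PPDM workflow.

```python
# Minimal sketch: harmonize operator names from a legacy silo against corporate
# reference data during a silo migration. Names and aliases are illustrative.
MASTER_OPERATORS = {"ACME ENERGY LTD.", "NORTHSTAR RESOURCES INC."}

# Alias table built up as each legacy silo is migrated; later migrations reuse it.
OPERATOR_ALIASES = {
    "ACME ENERGY": "ACME ENERGY LTD.",
    "ACME ENERGY LIMITED": "ACME ENERGY LTD.",
    "NORTHSTAR RES": "NORTHSTAR RESOURCES INC.",
}

def to_master_operator(raw_name: str) -> str:
    """Map a legacy operator string to the master reference value, or flag it."""
    name = raw_name.strip().upper()
    if name in MASTER_OPERATORS:
        return name
    if name in OPERATOR_ALIASES:
        return OPERATOR_ALIASES[name]
    return f"UNRESOLVED:{name}"   # route to a data manager for review

legacy_rows = ["Acme Energy", "Northstar Res", "Unknown Oil Co"]
print([to_master_operator(r) for r in legacy_rows])
```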




Decoupling of Data from Applications As we start to hunt down data silos and to liberate their data and bring them under the big tent of the new subsurface data platform, we will find that the migration exercise is made more complicated by the tight coupling between applications and data. As daunting as it may be, it is essential that we break this linkage and take back control over our data. In the past we delegated control of our data to the application vendors so that they could deliver us more innovation. Future business innovation will come from many sources and will all be powered by access to our data. We will create competitive advantage through our own innovation, including AI and machine learning algorithms that will cross traditional workflow boundaries. In the short term we may need to retain data in application formats, but we can immediately break away from proprietary data stores at the corporate and master data levels. We can replace proprietary solutions with standards-based integrated data repositories configured to supply data to a wide range of applications and algorithms using standard, non-proprietary APIs. We can then build agents (plugins, scripts, sync tools) that enable a seamless two-way transfer of data between the application format and standards-based repositories at the corporate level, progressively transferring long-term data value from the application vendor to the business owner. Cloud-Based Data Processing and Analytics In our traditional workflows we "load" data into a desktop application and perform processing on it to create results. Our businesses are experts at building big desktop workstations to hold the most amount of data to produce the most insightful results. Going forward, this approach makes about as much sense as trying to download the World Wide Web into a word processor to do web searches on our desktop.

In the new subsurface data foundation, more and more analytics will be performed directly against the authoritative (public or private) cloud-hosted copy of data. This will increase the scope of data available for analysis, in terms of both volume and variety. Elastic compute and storage resources mean that we will never run out of capacity for new data or algorithms and we can leverage commodity, low-cost IT, and move away from the expensive specialist petrotechnical computing environments of the past. Agile Solution Delivery Agility is an important concept for the new subsurface data foundation. The transition to a common environment will be a long-term journey. The foundation is not a grand cathedral of data management designed by a central committee that binds us to any particular technology or API. The new subsurface data foundation will be an emerging federation of loosely coupled and closely aligned data services. It will embrace extensibility, it will be polyglot, and it will provide sustainability by remaining closely aligned to our most important business problems. We will treat our new subsurface data foundation as a product, unique to our own business, managed by our own business. We will deliver the foundation using short, iterative development cycles and a well-defined backlog of user stories that chart a path towards our ultimate destination. The new subsurface data foundation platform will need to support this way of working. We must be able to extend it in the field without major software updates. It must provide flexible APIs that can work with data types that were not defined at software build time. It must enable the progressive elaboration of the data foundation by using flexible virtual schemas that overlay the physical stores. The platform must manage not only the data, but also the metadata, reference data, process data, transactional data, audit trail, and quality data that are needed to build up


trust in the corporate-level content.
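To illustrate what a flexible virtual schema overlaying a physical store might look like, here is a small sketch in which a business object is defined as data and the physical SQL is generated from it; the object, table, and column names are illustrative assumptions rather than actual PPDM definitions.

```python
# Minimal sketch: a virtual business schema defined as data, overlaying a physical
# store, so new data types can be added in the field without a software rebuild.
# Object, table, and column names are illustrative assumptions.
DIRECTIONAL_SURVEY_SCHEMA = {
    "object": "DirectionalSurveyStation",
    "source": {"store": "ppdm", "table": "WELL_DIR_SRVY_STATION"},
    "fields": {
        "uwi":         {"column": "UWI",         "type": "string"},
        "md":          {"column": "STATION_MD",  "type": "float", "unit": "m"},
        "inclination": {"column": "INCLINATION", "type": "float", "unit": "deg"},
        "azimuth":     {"column": "AZIMUTH",     "type": "float", "unit": "deg"},
    },
}

def build_select(schema: dict) -> str:
    """Generate the physical SQL for a virtual object from its schema definition."""
    cols = ", ".join(f'{f["column"]} AS {name}' for name, f in schema["fields"].items())
    return f'SELECT {cols} FROM {schema["source"]["table"]}'

print(build_select(DIRECTIONAL_SURVEY_SCHEMA))
```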

THREE USER STORIES FOR THE NEW SUBSURFACE DATA PLATFORM In our work with clients we see some common user stories that benefit from a new subsurface data foundation. Automated Data Ingestion for Rapid Data Distribution One of the major challenges we see in our client projects is the problem of data loading to corporate data stores. In many cases, existing data silos require incoming data to be quality checked, with a fully populated catalogue attached. Of course, the data that we receive from our suppliers or partners is rarely that neatly prepared; therefore, considerable time and effort is required to prepare the data so that it can be loaded. This leads to significant waste from a business efficiency perspective. We need a team of people, with another set of data management tools, just to populate our data management systems! In practice this means that there is normally a large backlog of data waiting to be loaded in the corporate systems. As a consequence, the assets and business units tend to bypass the corporate systems, taking the data directly into working projects, and low-quality data percolates through the business, causing potential problems with the quality of our decisions and actively working against building up a standardized data capability. The new subsurface data foundation should ingest all incoming data directly to the cloud regardless of the quality or completeness of the data sets. Intelligent, format-aware parsers will scan all incoming data sets and compile data catalogues automatically. Extracted metadata will be compared against standard corporate master and reference data (well names, company names, tool names, units, field names, country names, etc). Data managers will be presented with interactive data quality reports that enable them to correct metadata errors and create rules that correct similar errors in the future. As metadata



errors are corrected, data will be made available for immediate distribution to user applications via a notification and agent plugin framework. As users create new results these same agents will write data back to the cloud platform, with complete and consistent metadata. This positive feedback cycle will accelerate the ingestion and distribution of data and results, and improve collaboration between business users. It will also reduce the backlog and burden of data loading on the central team as "quality data first" becomes the new normal amongst the user community. Going forward, data suppliers (e.g., data acquisition contractors and data vendors) can also be brought under the tent by providing them online tools to ensure that the data they provide is compliant with corporate standards. Creating New Value from Disconnected Data Our existing data silos are normally provided by petrotechnical application vendors and so their scope tends to be somewhat self-selecting – they hold the data used by traditional workflows in traditional application suites. In the new era, we want to access all of the data, and the new subsurface data foundation is an ideal medium for collecting and standardizing the hundreds of data sets swilling around the organization in spreadsheets, reports, and emails. This part of the transformation is likely to have already started in your organization if you are in the process of creating a data lake. The AI and machine learning algorithms are so hungry for data that people naturally pour their informal data sets into the data lake. However, the data lake is no substitute for a true "System of Record." The loose nature of the lake makes it all too easy to accumulate low quality and untrustworthy data. The new subsurface data foundation provides all the capabilities that we need to create an authoritative system of record for any new data type.

Figure 1: An Example of a New Subsurface Data Platform Implementation. The diagram shows MEERA's automated collaborative data ingestion workflow: service-company log files pass through the MEERA Trusted-Data Engine (staging, validation, transform, and load services governed by business rules, access, documents, EPSG metadata, schema metadata, audit trail, and BPM services); discovered wellbores, curves, operators, and areas are analyzed, checked for curve quality, standardized against the log curve dictionary, connected to wells, operators, and areas, and committed as trusted data to a PPDM 3.9 store (wellbore, wellbore aliases, areas, operator, well log curve and file headers), with original-format log files retained in NoSQL/agile storage; data managers, petrophysicists, and geoscience desktop applications consume the results through the MEERA workspace, data management, visualization, and client interfaces (HTML, OData, gRPC, Power BI), and new-data notifications feed interpreted log data from application projects back to the master log data.

The extensible, integrated data repository toolkit that we use to migrate legacy silos can also be used to create a trusted data store for currently unmanaged data sets. The powerful data ingestion and distribution framework allows the new data types to benefit from the same governance and quality control policies that would previously have been available only to major data types. Putting Your AI to Work In the future, a greater part of our data insight and decision support portfolio will be made up of smart AI and machine learning algorithms. At the moment, many of these projects are delivering results based on carefully curated data sets that were wrangled manually for a small subset of our assets. A fancy algorithm that works for a few wells in one region cannot be applied economically in another set of wells or another region unless all of the right data are available automatically. We already say that our geoscientists and engineers were wasting 40% of their time looking for data, and now we run the risk of our data scientists spending 40% of their time wrangling incomplete, inconsistent, and low-quality data. The new subsurface data foundation provides an ideal medium for these projects to read and write from an agile and trusted data lake fed from corporate data stores, using APIs that can easily be called from the Python and R languages, which are typically used by data scientists and AI/ML algorithm builders.
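As a hedged sketch of what calling such an API from Python might look like, the snippet below queries a hypothetical OData endpoint for well-log curve headers and loads the result into pandas; the service URL, entity set, field names, and example UWI are all assumptions, not a published PPDM or vendor interface.

```python
# Minimal sketch: pull curve headers from a (hypothetical) OData endpoint exposed
# by the subsurface data foundation and load them into pandas for analysis.
import pandas as pd
import requests

BASE_URL = "https://data-foundation.example.com/odata"  # hypothetical endpoint

def fetch_curve_headers(well_uwi: str) -> pd.DataFrame:
    params = {
        "$filter": f"uwi eq '{well_uwi}'",
        "$select": "uwi,curve_name,top_depth,base_depth,unit",
        "$format": "json",
    }
    response = requests.get(f"{BASE_URL}/WellLogCurveHeaders", params=params, timeout=30)
    response.raise_for_status()
    # OData wraps result rows in the "value" array.
    return pd.DataFrame(response.json().get("value", []))

if __name__ == "__main__":
    curves = fetch_curve_headers("100/01-02-003-04W5/0")  # example UWI
    print(curves.head())
```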

WHAT DOES THE NEW SUBSURFACE DATA FOUNDATION LOOK LIKE? In our view there is no one-size-fits-all new subsurface data foundation. We believe that each company's foundation will be unique to their needs and priorities (Figure 1). The data foundation must be a living system that responds as your business moves forward. That said, we think that there are some essential elements of the foundation making up the common core around which your data foundation will be built:
1. A standards-based structured master data repository. This is the home for complete and trusted master and reference data. We choose PPDM as the preferred data store as it provides thousands of man-years of investment in modelling the common upstream data structures. Its implementation using relational database technology means that it cannot be used as the only data store in the foundation, but it does provide a powerful backbone for data integration with the ultimate assurance of a "schema on write" data model. We implement PPDM on major commercial cloud providers' platforms or clients' private cloud platforms, using managed RDBMS services.



2. A data staging area. This is a more flexible data storage and indexing area for incoming data. It provides bulk data and data catalogue storage. It is at the heart of the automated data ingestion and distribution workflows. This area is designed to work with untrusted and incomplete data using a flexible "schema on read" model. We implement using document search and index technologies such as Lucene/SOLR/Elasticsearch. This service includes the format-aware parsers needed to intelligently extract catalogue data on an automated basis. (A query sketch against this staging index appears after this list.)
3. A model-based data virtualization and business schema service. This is the glue in the system that abstracts away low-level physical storage details and presents a business-oriented data model. It is a common schema that is shared across both the master data repository and the staging area. It is a dynamic service that can be updated by administrators without changing other software. It is used by all other services to drive standardized and integrated data access. It produces code and APIs that implement high-level business objects on top of PPDM.
4. A data transformation/quality metrics service. This engine allows data administrators to describe the business rules that should be applied to extract, transform, enrich, and load their data as it migrates from existing silos or from routine data ingestion processes. It also allows the ongoing monitoring of data quality across the entire foundation and continuous quality improvement.
5. A workflow orchestration service. Automation is normal in the new subsurface data foundation, but there will still be a need to coordinate input from humans to get data through our data ingestion and delivery pipelines. A workflow orchestration service allows the coordination of the work of widely distributed teams.

Figure 2: Taking back control of your data for digital transformation. Thesis: data value exists in the domain models and structured databases we trust to feed our classical applications. Anti-thesis: data value is to be discovered; agile, model-free analytics liberates data from the bonds of legacy systems. Synthesis: agile, trusted-data platforms that liberate data whilst preserving the value in our structured data models. Agenda: data supply is the foundation for upstream digital transformation.

6. An online data management workspace. The business should have access to their data from any location at any time without requiring the users to download it into applications that are expensive to own and operate. Simple visualizers will support data assessment, collaboration, and quality control.
7. Analytics APIs. The business should not be limited to the data management workspace. Data should be immediately accessible using programmable APIs that make their data visible in a wide range of scripting and business intelligence tools, including common productivity packages.
In addition to these core services, your environment will include other services for bulk data storage, which may be simple, original-format, file-oriented "blob" storage/object stores or more sophisticated databases that support parallel processing, including aggregation and in-place analytics. You may also have services for utility functions such as spatial data transformation and unit conversions. Your data foundation should incorporate the services you need for your transformation.
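Referring back to item 2, here is a minimal sketch of a full-text query against the staging area's index over an Elasticsearch-style REST API; the host, index name, and field names are assumptions for illustration.

```python
# Minimal sketch: search the staging-area index for data sets whose extracted
# metadata mentions a given line or well name. Host, index, and fields are assumed.
import requests

STAGING_SEARCH_URL = "http://staging.example.com:9200/incoming-datasets/_search"

def search_staging(term: str, size: int = 20):
    query = {
        "size": size,
        "query": {
            "multi_match": {
                "query": term,
                "fields": ["line_name", "well_name", "original_file_name", "ebcdic_text"],
            }
        },
    }
    response = requests.post(STAGING_SEARCH_URL, json=query, timeout=30)
    response.raise_for_status()
    hits = response.json()["hits"]["hits"]
    return [(h["_id"], h["_score"], h["_source"].get("original_file_name")) for h in hits]

if __name__ == "__main__":
    for doc_id, score, file_name in search_staging("BIGHORN-3D"):  # example line name
        print(doc_id, score, file_name)
```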


SUMMARY By now it should be clear how the new subsurface data platform is different from your existing corporate data ecosystem. Unlike legacy silos, the new platform is integrated and agile. Unlike your data lake and search engines, the content of the subsurface data platform is built on a trustworthy data repository with all the structure plus master and reference data needed to maintain standardization and consistency for long-term sustainability. It is a true digital transformation enabler. About the Author Jamie has been working in upstream data management for 25 years. He is an experienced implementor, designer, executive, and entrepreneur.


Community Involvement

Thanks to our Volunteers

NOVEMBER 2018 Tony Knight The November Volunteer of the Month was Tony Knight. Tony is the chair of the Australia East Leadership Team, which formed earlier this year. Tony earned a Bachelor of Science with Honours in Geology from the University of Wollongong, then began his career as a geologist with BMA. He moved his way up in the industry and became Vice President of Exploration with Arrow Energy. Today, Tony is the Chief Government Geologist for the Queensland Government. “From the beginning, Tony has been a natural chair for this leadership team, and under his leadership, the team has seen tremendous growth at recent events along with an invigoration of the Brisbane community. It’s a delight to work with Tony and we look forward to growing this community with him and the Australia East Leadership Team,” said Elise Sommer, Senior Community Development Coordinator at PPDM.

DECEMBER 2018 Lewis Matthews PPDM’s December Volunteer of the Month was Lewis Matthews. Born and raised in Great Britain, Lewis immigrated to the US at the age of 17. He enlisted in the United States Navy, serving nine years as a corpsman with the Marines. Lewis has since earned several degrees in economics and geology, and a master’s degree in geophysics and seismology. During his studies Lewis independently discovered fractal clustering in petrophysical logs.

Matthews is currently a data scientist at CrownQuest Operating, where he evangelizes solutions to complex problems. To encourage understanding and broad collaboration across companies, Lewis teaches machine learning applications for Oil and Gas problems. His workshops have proven to be incredibly popular and helpful in enhancing a general understanding of the strengths and limits of these incredibly hyped technologies. “Lewis has gone above and beyond for PPDM in 2018. He has spoken at multiple events all over North America and conducted a machine learning workshop in Calgary in conjunction with our Data Management Symposium. We very much appreciate all Lewis has done for PPDM and look forward to working with him in 2019 as well,” said Pam Koscinski, PPDM’s USA Representative.

JANUARY 2019 Dave Fisher Dave Fisher is a retired geologist whose career began in mineral exploration and continued in oil exploration in Canada. Dave was a director of the PPDM Association from 1993 to 1999 and was made a lifelong honorary member by the Board of Directors. From “What is a Well?” and “What is a Completion?” to the Global Well Identification Best Practices, Canadian and US Well Identification standards, the PPDM Rules Library, the PPDM data model, and more, Dave has always been ready to lend a hand wherever it is needed. In PPDM committees and work groups, Dave is an advisor, expert, cheerleader, editor, and all-round supporter of PPDM. Dave spends a lot of time at the PPDM office, where he helps with whatever needs doing at PPDM. Nothing is too difficult or simple for Dave to tackle with the energy, enthusiasm, and determination that true grit brings to the table. He has


mentored PPDM’s project managers in many aspects of the petroleum life cycle and helped craft many of the amazing technical documents we all use today. Dave has also been instrumental in the development of the certification program. He has worked tirelessly to ensure that the contents of the invigilation process are authentic and practical. As a champion, Dave encourages others in our industry to get involved in what we are doing and represents the embodiment of what is great about PPDM.

FEBRUARY 2019 Shawn New Shawn New was the February Volunteer of the Month. Shawn has been a data management professional in the Oil and Gas industry for more than 20 years. He graduated from the University of Phoenix with a B.S. in Information Technology. Shawn is manager of Operational Data Management at BHP, where he has grown his career for the last seven years. In September 2018, after receiving his CPDA designation, he joined PPDM’s Certification Committee. Since then, he has been a strong advocate, sharing the value of certification and working to make CPDA certification a key part of the data management career development path. In addition to his work on certification and career development, Shawn is participating in the Tier Two Review of the Well Status and Classification work and sits on numerous other professional committees. Please join Shawn in Houston on April 9 and 10 at the Houston Professional Petroleum Data Expo to hear him talk about his experiences as a CPDA and how he is making CPDA part of the career development path within his organization, working with the senior levels to understand how CPDA certification adds corporate value.



Simplified Data Management. Dynamic Insights. Value-Driven Results.

EnerHub™ is the game-changing enterprise data management solution that lays the digital foundation for business transformation in oil and gas.

Learn more at www.sbconsulting.com/solutions/enerhub. Business advisory and technology solutions for next-gen oil and gas www.sbconsulting.com | info@sbconsulting.com | 866.390.6181


Upcoming Events


LUNCHEONS

APRIL 30, 2019 DALLAS DATA MANAGEMENT LUNCHEON

JUNE 5, 2019 BRISBANE DATA MANAGEMENT LUNCHEON

JULY 11, 2019 MIDLAND DATA MANAGEMENT LUNCHEON

AUGUST 21, 2019 BRISBANE DATA MANAGEMENT LUNCHEON

Dallas, TX, USA

Brisbane, QL, Australia

Dallas/Fort Worth, TX, USA

Brisbane, QL, Australia

MAY 7, 2019 ADELAIDE DATA MANAGEMENT LUNCHEON

JUNE 11, 2019 TULSA DATA MANAGEMENT LUNCHEON

AUGUST 1, 2019 DENVER DATA MANAGEMENT LUNCHEON

SEPTEMBER 10, 2019 DALLAS/FORT WORTH DATA MANAGEMENT LUNCHEON

Adelaide, SA, Australia

Tulsa, OK, USA

Denver, CO, USA

Dallas/Fort Worth, TX, USA

JUNE 4, 2019 DALLAS/FORT WORTH DATA MANAGEMENT LUNCHEON

JULY 9, 2019 HOUSTON DATA MANAGEMENT LUNCHEON

AUGUST 8, 2019 OKLAHOMA CITY DATA MANAGEMENT LUNCHEON

SEPTEMBER 19, 2019 CALGARY DATA MANAGEMENT LUNCHEON

Dallas/Fort Worth, TX, USA

Houston, TX, USA

Oklahoma City, OK, USA

Calgary, AB, Canada

WORKSHOPS, SYMPOSIA, & EXPOS


APRIL 9 – 10, 2019 HOUSTON PROFESSIONAL PETROLEUM DATA EXPO

AUGUST 14 – 15, 2019 PERTH DATA MANAGEMENT WORKSHOP

NOVEMBER 13, 2019 DENVER PETROLEUM DATA SYMPOSIUM

Houston, TX, USA

Perth, WA, Australia

Denver, CO, USA


MAY 14, 2019 OKLAHOMA CITY PETROLEUM DATA WORKSHOP Oklahoma City, OK, USA


OCTOBER 22 – 23, 2019 CALGARY DATA MANAGEMENT SYMPOSIUM, TRADESHOW, & AGM

Calgary, AB, Canada

CERTIFICATION - CERTIFIED PETROLEUM DATA ANALYST MAY 22, 2019 NOVEMBER 2, 2016 CPDA EXAM

AUGUST 28, 2019 CPDA EXAM

NOVEMBER 13, 2019 CPDA EXAM

(Application Deadline April 10, 2019)

(Application Deadline July 17, 2019)

(Application Deadline October 2, 2019)


ONLINE & PRIVATE TRAINING OPPORTUNITIES Online training courses are available year-round and are ideal for individuals looking to learn at their own pace. For an in-class experience, private training is now booking for 2019. Public training classes are also planned for 2019.

VISIT PPDM.ORG FOR MORE INFORMATION



All dates subject to change.


Find us on Facebook Follow us @PPDMAssociation on Twitter Join our PPDM Group on LinkedIn


IMMEDIATE VISUALIZATION BETTER INSIGHT INFORMED DECISIONS

A geoLOGIC software solution

With the Data Analytics Module in geoSCOUT you can identify trends and spot opportunities with interactive tables and chart-based visualizations.
• Integrate data from a variety of geoSCOUT modules to access and manipulate well, production, frac data and more, all in one place.
• In a variety of customizable charts, you can visualize and label your data in easy-to-understand groupings.
• Create canvases to draw and arrange your charts.
• Utilize templates to quickly create complex projects.

It is time to switch to geoLOGIC's most popular and powerful, map-based visualization, analysis and forecasting suite for Windows and its extensive library of premium data.

Contact us to find out more at Marketing@geoLOGIC.com.

Premium data, innovative software, integrated analytics for oil and gas.

geoLOGIC.com

