Foundations, Volume 3, Issue 2


Foundations Journal of the Professional Petroleum Data Management Association

Print: ISSN 2368-7533 - Online: ISSN 2368-7541

Volume 3 | Issue 2

1st Place Foundations Photo Contest Winner; After the darkest night, always comes a brighter day (Vikrant Lakhanpal)

25th Special Anniversary Edition

FEATURE

Riding the Wave

PPDM is celebrating our 25th Anniversary - A few things we have learned along the way. (Page 5)

PLUS PHOTO CONTEST: This issue’s winners and how to enter (Page 16)


Make Better Decisions
EnergyIQ creates information certainty in an uncertain E&P world.
• Rapid and accurate analysis with certainty
• Consistent data-driven decisions across all assets
• Operational excellence through the life of the well
• End-to-end planning, execution and performance
• Real-time data routing and synchronization
Contact EnergyIQ to learn how you can work with the most trusted E&P data available.
www.EnergyIQ.info | info@EnergyIQ.info
© 2016 EnergyIQ, LLC. All Rights Reserved.


Foundations: The Journal of the Professional Petroleum Data Management Association.

Table of Contents, Volume 3 | Issue 2

COVER FEATURE
5   Riding the Wave: A few things we have learned along the way. By Trudy Curtis & PPDM Members

GUEST EDITORIALS
11  The Early Years: The Start of PPDM. By Dave Fisher
14  Things Every Petroleum Data Manager Should Know: Helpful Hints. By Fred Kunzinger
21  The Information Relay Race: Critical Information Exchanges. By Jim Crompton

FEATURES
18  Student Article: Less Is More. Data Management in Academic Research. By Jordan Tucker
22  Student Article: Big Data Has Unique Needs. Information Governance and Data Quality. By Charles Mathes
26  Hands On With The PPDM Association Board Of Directors: What We Are All Here For. By Robert Best

DEPARTMENTS
16  Photo Contest: This issue’s winners and how YOU can get your photo on the cover of Foundations.
19  Thank You To Our Volunteers: Featuring Susan Hopkin & Mark Craig.
20  Sponsors Help Bring the Community Together: Houston Data Management Symposium & Tradeshow.
27  Upcoming Events, Training and Certification: Join PPDM at events and conferences around the world in 2016. Learn about upcoming CPDA Examination dates and Training Opportunities.

CEO: Trudy Curtis
Senior Operations Coordinator: Amanda Phillips
Senior Community Development Coordinator: Elise Sommer
Article Contributors/Authors: Robert Best, Jim Crompton, Trudy Curtis, Dave Fisher, Fred Kunzinger, Charles Mathes, Jordan Tucker, PPDM Members
Editorial Assistance: Emma Bechtel, Beci Carrington, Dave Fisher
Graphics & Illustrations: Jasleen Virdi
Graphic Design

BOARD OF DIRECTORS
Chair: Trevor Hicks
Vice Chair: Robert Best
Secretary: Lesley Evans
Treasurer: Peter MacDougall
Directors: Brian Boulmay, Trudy Curtis, Jeremy Eade, David Hood, Allan Huber, Christine Miesner, Joseph Seila, Paloma Urbano

Head Office: Suite 860, 736 8th Ave SW, Calgary, AB T2P 1H4
Email: info@ppdm.org
Phone: 1-403-660-7817

ABOUT PPDM The Professional Petroleum Data Management Association (PPDM) is the not for profit, global society that enables the development of professional data managers, engages them in community, and endorses a collective body of knowledge for data management across the oil and gas industry.



Cover Feature

Riding the Wave


By Trudy Curtis, with members of the PPDM Association

The Professional Petroleum Data Management (PPDM) Association is celebrating its twenty-fifth anniversary this year. From the start, we shared a vision with industry: create data appropriately, keep it complete and trusted, make it available to anyone who needs it, and steward it for our needs today and in the future. Along the way, we found that we needed to support and recognize the experts who manage data and information as skilled and valued human assets. Like all professionals, they need access to appropriate training and education, career ladders and professional development opportunities.

David Hood, president of geoLOGIC systems ltd., has been on the PPDM Board of Directors for many years in various roles, including Chairman. Hood is proud of what PPDM has accomplished: “As PPDM celebrates its 25th year, the success that the Association has achieved is remarkable. I believe this success has been due to the relentless focus on listening to our members. PPDM is truly a member-driven organization and I believe that increasing acceptance of the importance of collaborating has reduced costs and improved the effectiveness of the upstream industry. In these difficult times, these factors are more important than they have ever been.”

Achieving this has not been simple. There have been barriers along the way. Obstacles can be frustrating - they present apparently insurmountable barriers to getting to where we want to be. In spite of the pain they brought, every challenge showed us opportunities that ultimately made PPDM stronger and more resilient. During the last quarter century, PPDM has encountered and overcome more obstacles than most of us care to remember, but that shared industry vision made us strong and determined, and together we have made great strides forward. Today, PPDM is a recognized and essential part of corporate cultures around the globe. PPDM standards and best practices are used by operators, regulators, data vendors, software vendors and consulting companies. The PPDM Certified Petroleum Data Analyst (CPDA) program has global reach, and PPDM training materials are practical and popular. Here are a few things that we have learned along the way.

VOLUNTEERS BUILT US, AND CONTINUE TO MAKE US STRONG. Every standard or best practice at PPDM has been created by teams of industry experts who volunteer their time. The

value of volunteer participation from industry has been estimated at hundreds of millions of dollars – an investment that no single company can match. We know that volunteers who devote expert attention to creating standards and best practices with PPDM simply don’t have time for a lot of administrative details. But organizing and leading meetings, keeping track of the intentions, decisions and reasoning behind development decisions is important. Today, the PPDM Association has more than 20 workgroups and committees. PPDM events, publications, and training programs reach thousands of data managers each year. Working together with many volunteers, PPDM is dedicated to maximizing the value for each donated hour. We Learned That… … every member volunteer is a precious resource. Their time and experience move us forward. To make the most of volunteer time, we use project funding to provide tactical support to work groups.

BEING A COWBOY IS FUN… FOR THE COWBOY. For the users, however, not so much. In the early days of PPDM, groups of interested members formed working groups to develop different parts of the


data model. Each group did whatever it took to ensure that their part of the model was as ‘fit for purpose’ as they could make it. There were few, if any, rules, so each group could do whatever seemed best for the work they were doing. While this allowed PPDM to develop new parts of the data model very quickly, different subject areas were not compatible with each other, and the learning curve for implementation teams was steep and difficult. For PPDM 3.4 in 1996, a group of courageous members (the modelling committee) developed a common methodology, called the Architectural Principles, and applied the results to the model. Trudy Curtis recalls the consequence of doing this during her first months at PPDM. “Telling the members about the amount of destructive change we made to the data model to make it more internally consistent and cohesive was terrifying. Most of the members were furious. I was pretty sure that I had made my first and last presentation as a PPDM representative.” Tim Downing, manager of gDC Development for geoLOGIC systems ltd., was part of the data modelling committee as PPDM 3.8 was being readied for publication. Tim recalls many spirited debates as the committee decided how to proceed. “The passion which members of the group brought to their area of interest was amazing. We’d discuss the intricacies of how changes fit the real world and how the changes interacted with the existing model. Once the changes and additions were done, we’d spend most of our time discussing implementation, such as how to properly certify an implementation, how to help people with implementations and training - generally the same things that still seem to be discussed today. Working with the group gave me immeasurable

experience and I never felt that my opinion, although less experienced, was considered less than anyone else’s.” Today, standards development follows rigorous processes, so our members can use PPDM for very sophisticated functions. The Architectural Principles, so unpopular at first, have proved to be the foundation for powerful and scalable data management strategies used by companies of all sizes today. Today’s industry wants more consistency, more rigor, and more rules about standards.

We Learned That… … it’s worth the effort to work for long term strategic success, even if it’s difficult in the short term.

DATA STEWARDSHIP REQUIRES AN APPROPRIATE MINDSET. For many years, the philosophy of ‘user empowerment’ made it logical to allow creators and users of data to design data management strategies that suited specific purposes. The price of oil was pretty high, so hiring hordes of developers to write scripts that moved data from system to system seemed like a small price to pay. It worked - for a while. Then the price of oil dropped, the number of technology tools in use sky-rocketed, and data became harder and harder to find, let alone move between stakeholders and processes. Management began to question the time and cost of looking for, converting, and migrating the same data over and over again. Wouldn’t it make sense, they asked, to get a single copy of trusted data and have everyone go get a copy whenever they needed it? And really, they wondered, is the creative nature that makes geoscientists an asset to exploration projects the best kind of person to take on the rigor and consistency of data management expectations? When you think about it, data management needs to be handled with a level of discipline that is more like accounting than exploration. Perhaps, they reasoned, it


would be sensible to deploy people who are wired for success with data management. This has turned out to be the path to success. Skilled and strategically positioned data management resources know the business well enough to grasp what each stakeholder group needs, understand technical systems well enough to determine what they need, and have expertise in data management practices. Data management disciplines that have emerged in many companies demonstrate the value of this path.

We Learned That… … data management is a strategic initiative that is supported from the executive suite. … data management professionals need to be grounded in the business and technical systems, but also need to be specialized in data and information management. … attitude and aptitude must be geared towards a consistent and integrated data management framework.

USING STANDARDS IS HARD. Today, architecture teams understand that implementing industry standards is a lot harder than building your own solution. Years ago, the idea of “buy not build” was new to our industry, and finding the right balance between developing a rigorous standard and allowing each implementation team the flexibility to build fit for purpose was difficult. It’s still difficult. Starting in 2007, a group of members decided to implement the RMOTC (Rocky Mountain Oilfield Testing Center) Teapot Dome data in PPDM 3.7. Tim Downing’s role was to map and load well location information into the data model. Tim recalls, “This was my first taste of ‘What is a Well’ (long before it was even called that). The PPDM 3.7 model allowed different wellbore components to be related to each other through the WELL_



XREF table. Wes Baird and Kim Kelln had modelled the data so that it followed surprisingly closely to today’s ‘What is a Well’. I remember spending hours at a white board with Kim going over how well locations fit into the model and how they fit into each individual component of the well. I’ve since seen the RMOTC data used by a number of different people as a starting point for PPDM, as a demo for software, and as a talking point for presentations. The collaboration between experts in the industry led to a solution that we shared with many other people, and I value the experience that I gained from being a part of it.” Industry likes standards, for the most part. Trevor Hicks, Managing Director for Stonebridge Consulting, says “I work with a lot of oil companies and they are always reassured when I tell them we will be working with a PPDM standard.” Despite this, industry standards have not yet been universally adopted by industry. Perhaps the answer lies less in how standards are perceived, which seems to be mostly positive, and more in how they are actually adopted. Finding answers to this challenge is important. Industry leaders ask PPDM, as they do every standards organization, the same question: ‘Why is it that your standards are not being used by everyone? What would it take to have a standard universally deployed?’ That’s a complicated set of questions, and the real crux may be in the word ‘deployed.’ Once a standard has been deployed, how much like the standard does the final result actually look? Has industry successfully created an environment where multiple services and products can share a single standards deployment? This interoperability is, after all, one of our primary reasons for developing standards. Sadly, once a standards based solution has been built, it is rarely (if ever) interoperable with another solution based on the same standard. Why? Most of the time it’s difficult for a corporate implementation team to figure out which of the many existing

Why adopt standards and best practices?

                                     Standards based    Proprietary based
Decision to use                      Strategic          Tactical
Support for full data life cycle     Strong             Weak
Fit to business needs                Life-Cycle         Functional
Data design                          Vendor-neutral     Embedded
Developed by                         Industry           Vendor
Focus is on needs of the             Enterprise         Application
Shelf life                           Longer             Shorter
Implementation effort                Harder             Easier
Interoperability                     Higher             Lower
Effort to integrate other products   Lower              Higher

Implementing Standards is strategic, not tactical.

standards are suited to their strategic and tactical needs. We need better ways to help teams figure this out across the board; the Standards Leadership Council is working on this problem. Once a useful standard has been found, implementation teams find that a robust industry standard occupies far more ‘business space’ than the niche functions that are in scope for a particular tactical project. During deployment, teams are faced with decisions about whether to deploy the entire standard in a way that supports long term, phased scope growth (hard and expensive), modify the standard to suit their purpose (usually makes the deployment incompatible with other

users of the standard), or build their own (highly proprietary and unsustainable). The decision path above illustrates many of the questions asked by implementation teams. As you can see, from a tactical perspective, it often looks like the easiest path is to build solutions internally, or purchase proprietary solutions. The path to using data standards is mostly based on strategic considerations. When you start thinking about standards from the perspective of data, a different analysis path emerges. Data-centric strategies are not based on software or technical solutions, but on the importance of the data that is needed by the organization and other stakeholders. That importance can be based on the financial value of the data (what it


costs to create it or use it), on the number of processes and stakeholders who need access to the data, and how long data must remain trusted, complete, useful, and accessible.

We Learned That… … information about standards needs to be easy to understand and consume, and examples are crucial. We made simple, easy to read booklets for public use that summarize and illustrate each standard. … decisions to implement standards from a strategic intention must be supported by the executive suite. … training and good support materials can help an implementation team conform to the standard more readily.

DATA AND INFORMATION ARE ASSETS. Obvious to us now, this simple statement has been something of an epiphany for industry in the last decade. Historically, industry treated data simply as a means to an end. Maybe this approach was a child of the digital age, the transition from mainframe to workstation, the emergence of client-server architectures, the division of data into function related silos, or some other factor. It’s probably all, and more, of those things. But simply acknowledging that data is, of itself, an asset to be stewarded, changed industry’s mindset towards data and information. Steve Cooper, founder and president of EnergyIQ says: “The need for consistent master data across the Well lifecycle has become extremely important in recent years as companies look for ways to streamline operations, reduce risk, and increase efficiency. The need for master data will become even more critical when the industry emerges from the recent downturn as there will be fewer people available to do the same work and many more processes will need to be automated. Master data will drive the next wave of digital transformation within the E&P industry.” Understanding the value of data and information changes the way you think about whether or not to implement a

standard. Software and technical platforms have a much shorter shelf life than data does. Forcing data to follow the same path as software does costs us a lot of time and money to separate the two when new opportunities arise. Technological advances in areas such as spatial engines, analytics software and reasoning engines, have shown us that data we once thought to be ephemeral (used for a short time and then discarded), can now be used for unexpected and very useful purposes. Decisions that relegate expensive or irreplaceable data into tactical and proprietary bunkers prove shortsighted in the long term. Getting data out of proprietary bunkers is expensive and slow. Everyone is frustrated when finding and making data usable causes delays in projects to implement new capabilities. It’s much better to plan for the future, and ensure that industry is positioned for technical agility.

We Learned That… … when data and information are treated as corporate assets they become part of the strategic planning process, with dedicated and properly trained staff, and supporting budgets. This approach positions companies to steward data across the life cycle, for all stakeholders and processes.

CONVERGENCE IN BEST PRACTICES IS CRITICAL. In a highly competitive environment, data is the common resource that is shared with partners, regulators, service companies and even the public. Each piece of data is needed by many stakeholders, often throughout the decades long life of a well or facility. Despite this reality, our industry has a history of inventing and reinventing our data management strategies for corporate wells. Without industry-wide collective action, we are doomed to continue to rework our problems again and again, because no matter how efficient one company’s


data management strategies may be, the truth is that most data comes from outside sources over which an individual company has little (if any) control. Semantic problems abound. Different companies, agencies, divisions, software products and systems use the same word to refer to different objects, or different words to describe identical concepts. The Rosetta Stone concept behind ‘What is a Well’ is being applied to other semantic issues. These will help us to create navigable paths around complicated obstacles. One of the keys to successful and interoperable adoption of standards is the development and use of consistent vocabularies. We have known this for a long time, of course, yet it is difficult to achieve. Differences of opinion about what constitutes data that is ‘good’ or ‘fit for purpose’ have resulted in highly diverse systems and expectations for data content, quality and completeness. Creating a uniform set of expectations and rules about data will create a stable foundation upon which operators, regulators, vendors and service companies can build. The PPDM Rules program targets this business need; with nearly 3,000 rules in place, it is now positioned to drive out this essential clarity. Focusing on the collective strategic needs of data management professionals around the world makes a difference. The simple truth is this; if we are to fully realize corporate data management capabilities, we must move towards convergence in fundamentals. We have to agree what the words we use mean, and use them consistently. We have to agree what ‘good’ looks like for each kind of data, and establish mechanisms that ensure data starts off and remains ‘good’ for all stakeholders through the life cycle. We need to focus on eliminating data attenuation, especially for irreplaceable data. Twenty-five years ago, PPDM was focused on well data. Developing standards like the PPDM Data Model, best practices like ‘What is a Well’ and ‘Well Classification,’ and Global Well Identification practices have made a



difference to industry, but we are not done. If we are truly to make progress, every stakeholder group needs to work together to address and resolve all of the challenges that keep us apart. Steve Cooper led the PPDM ‘What is a Well’ workgroup in 2008. Cooper sees standards as crucial to industry success: “Establishing an accurate Well Hierarchy is fundamental to the success of any E&P master data management solution, yet we are still learning how to build out the Well Hierarchy from multiple data sources due to different group’s interpretations of, for example, a completion done at different phases of the Well lifecycle and across different groups. The PPDM ‘What is a Well’ initiative provided a great boost towards understanding and establishing baseline definitions. This is now being implemented as part of data management solutions. There is still of a lot of work that can be completed in this area but, in my opinion, it has greatly increased our understanding of how to manage data across the Well lifecycle and will have a significant impact to the bottom line of the business in the future.” We Learned That… … companies can achieve internal excellence in data management, but in an industry where most of our data comes from outside ‘uncontrolled’ sources, we must converge in practice if we are to achieve long term success. … working collectively is not competitive, but allows every stakeholder to develop robust data management strategies in which all stakeholders are successful.
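The PPDM Rules program mentioned above is, in essence, a library of machine-checkable statements about what ‘good’ data looks like. As a minimal sketch only (the rules, thresholds and field names below are hypothetical and are not taken from the PPDM rules library), such checks might be expressed like this:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

# Hypothetical well record and rules, for illustration only.
@dataclass
class WellRecord:
    uwi: str                          # unique well identifier
    spud_date: Optional[str]          # ISO date the well was spudded
    surface_latitude: Optional[float]
    surface_longitude: Optional[float]

@dataclass
class Rule:
    rule_id: str
    description: str
    check: Callable[[WellRecord], bool]   # returns True when the record passes

RULES: List[Rule] = [
    Rule("R001", "UWI must be populated",
         lambda w: bool(w.uwi and w.uwi.strip())),
    Rule("R002", "Spud date must be populated",
         lambda w: w.spud_date is not None),
    Rule("R003", "Surface latitude must be between -90 and 90",
         lambda w: w.surface_latitude is not None and -90.0 <= w.surface_latitude <= 90.0),
    Rule("R004", "Surface longitude must be between -180 and 180",
         lambda w: w.surface_longitude is not None and -180.0 <= w.surface_longitude <= 180.0),
]

def failed_rules(record: WellRecord) -> List[str]:
    """Return the ids of the rules this record fails."""
    return [r.rule_id for r in RULES if not r.check(record)]

# Example: a record with a missing spud date fails rule R002.
well = WellRecord(uwi="100/01-02-003-04W5/00", spud_date=None,
                  surface_latitude=54.2, surface_longitude=-117.8)
print(failed_rules(well))   # -> ['R002']
```

A shared, agreed-upon library of rules like this is what lets different operators, regulators and vendors converge on the same definition of ‘good’ data.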

PROFESSIONAL RECOGNITION IS THE CULMINATION OF MANY THINGS. In 2008, the PPDM Association made the bold move to expand the vision of the Association to become a true professional society. Our name, “the Professional Petroleum Data Management Association” reflects this. In 2016, the PPDM Board of Directors adopted a new vision statement: “The Professional Petroleum Data Management Association (PPDM) is the not for profit, global society that enables the development of professional data managers, engages them in community, and endorses a collective body of knowledge for data management across the oil and gas industry.”

[Figure: PPDM Committees and Workgroups. An organization chart under the Board of Directors; labels include Functional areas; Community Leadership Teams; Community Leadership Development; Calgary; Brisbane; Houston; Perth; Denver; Jakarta; OKC; Abu Dhabi; Dallas/Fort Worth; Data Analyst; Geospatial Data Analyst; Job Families, Career Ladders & Surveys; Training Program Library; Communication; Foundations Editorial; Volunteer Programs; Certification; Governance; Professional Development; Steering Committee; Competency Profiles; Business Plan & Value Proposition; Body of Knowledge; Rules Library; SME team; Regulatory Data Standards; Data Model; Regulator/Operator Steering; Modelling Committee; What is a Completion?; PPDM 3.10; Realizing Business Value; Hydraulic Fracturing; Planned; Compliance with Teeth; Liaise/Collaboration. Boxes with fill are in place June 2016; boxes with no fill are partially formed and in development.]
Truly a member-driven society, PPDM has more than 20 committees and work groups in place.

[Figure: How to Build a Recognized Profession. Building blocks shown include Professional Recognition; HR Support; Careers; Salaries; Certification & Maintenance; Training & Development; Education; Professional Discipline; Professional Governance; Professional Body of Knowledge & Best Practices; Standards & Best Practices; Professional Specifications; Expectations; Organizational Governance; Community Governance; Community of Practice; Community Leadership; Publications & Events; Communication. The process for building a recognized profession is described in Foundations Volume 3, Issue 1.]

Margaret Barron has worked around the globe in many education and corporate environments in areas of professionalization, accreditation and certification. Barron led education development for the CEAMS initiative (Center for Energy Asset Management Studies) for land management professionals. “Becoming a professional discipline is complicated,” she says. “Most begin with grassroots movements to establish fundamental levels of acceptable qualifications, rules of conduct for members of the profession, and of demarcation of the qualified from unqualified amateurs. The value of this process to any professional group is not to be underestimated. The impact for the individual includes: greater job status, higher pay, demonstration of knowledge, skill and expertise in a particular discipline, job access and promotion, and a global network of professionals to learn from, share ideas and collaborate with…to name a few. The benefits to employers or organizations seeking such expertise are also substantial: hiring and training benchmarks, clear professionally related pathways for human resource departments to refer to, common global language and practices that directly impact the bottom line through improved work efficiency, quality of service and an inherent best practices approach.”



Are trust issues affecting your relationship with

Your Data?

Poor-quality data costs the average company

$14.2 MILLION ANNUALLY *

From business decisions based on bad data. From bad data replicated across multiple systems. From manual data compilation and massaging for reporting.

Bad data is bad business. EnerHub™ from Stonebridge Consulting is a cloud-based enterprise data management solution for oil & gas. EnerHub validates and syncs data across your organization, connects source systems, and delivers the information that matters to your people. EnerHub’s open architecture plugs into your existing systems and enables you to elevate data quality to a corporate competency.

Get your relationship with your data back on track. Contact us to schedule an EnerHub™ demo. Business advisory and technology solutions for next-gen oil and gas

www.sbti.com | info@sbti.com | 800.776.9755 * Gartner, The State of Data Quality: Current Practices and Evolving Trends, Dec. 2013


Guest Editorial

The Early Years

By Dave Fisher, Dave Fisher Consulting

Thirty years ago, the buzzword in the computing world was ‘client-server.’ Until that time, the technical databases and seismic processing of the large E&P companies ran on centralized mainframe hardware and proprietary software. The emergence of relational database management systems, distributed client-server computing architecture and desktop workstations for technical staff offered dramatically lower costs, greater efficiency, and better ways to handle the expanding volumes of digital data. As oil prices declined sharply in early 1988, Gulf Canada Resources looked for ways to cut costs. Mel Huszti and his Exploration and Mapping group in Calgary received an unexpected order: “Remove all mapping systems from the Unisys and locate to a different computer environment as quickly as possible. It must handle our current capabilities and be flexible to accommodate future needs. Use as much of Gulf’s current hardware as possible.” Although the team quickly opted for the new client-server environment, they were in uncharted territory. Mel remembers, “Migrating all mapping applications to a

client-server environment had never been done before, to our knowledge. Could it manage the large volumes of E&P data and different data types? How could we develop a model to store information in a relational data management system?” The daunting task and tight timeline of 8 months would have been impossible except that the team had recently completed a full evaluation of all their systems, infrastructure, data flows and business needs. Gulf decided to use a network of Unix workstations for technical data processing and data access functions, while retaining an IBM system for some data and plotting. The short timeline pressed them to shop for third-party software. Bill Wally, who was with Gulf in Houston prior to the 1985 merger into Chevron and the sale of Gulf’s Canadian business, told Mel about a new desktop mapping package from Finder Graphics of California. It used the recently developed Oracle relational database system. The timing was fortuitous: Mel and Davis Swann attended the SEG (geophysicists) convention in Anaheim in November 1988 to check out Finder’s exhibit. At the same time, several small software firms in Calgary were also working on the Oracle system in a client-server environment. Digitech Information Services was integrating their clients’ private well data with public data from government sources, especially the ERCB (Alberta’s energy regulator). The ERCB database was designed for data input from 80-character punched cards; Applied Terravision Systems (ATS) saw the opportunity to

store and access this data in an Oracle database for multiple customers. Buying ‘off the shelf’ has attractions. You can test-drive first, you get the creativity of software specialists, the cost can be much less than in-house development, and the delivery time is much shorter. However, each product has its own data model. What happens if the software is no longer available or the product evolution does not keep pace with your needs? As Mel said, “If any company goes into receivership, I want the data model to reside in the public domain.” Gulf proposed a project to Finder, ATS and Digitech. One of the conditions was there would be one data model and it would be open to the industry. The original partners first met together in April 1989. Three competitors (the vendors) and one customer (Gulf) shared their data schemas and agreed on several basic principles for a common data model (see sidebar on the next page). At the end of the April meeting, each partner took away the draft schema. Within a month, changes were agreed. In June 1989, version 1.01 of the data model was defined and each partner began changing their software. An expanded version was announced to the public in April 1990 as PRISM (Petroleum and Resource Industry Storage and Management.) Pat Rhynes presented PRISM to the AAPG (geologists) convention in San Francisco in May and his paper was published in AAPG’s Geobyte (Oct 1990). The idea was spreading. ‘Public’ meant that anyone could get the model in a 3-ring binder plus an Oracle script file on a floppy disk. ATS made



it available for a handling fee of $100; no downloads in those days! These early versions were distributed by the vendors; the first version (3.0) from the PPDM Association was not until 1992. Pat Rhynes was a data modeler and software developer at ATS. Working to deliver Gulf’s requirements, he also worked with president Bob Tretiak to promote the vision and sell their products to other customers. Pat recalls the challenge for a small company: “The main focus was to survive. The three vendors on the Gulf project required revenue to sustain software development and implementation, so sales were very necessary. At the same time, the data model needed additions, refining, updating, and support. Everyone wanted information on the model; their questions took time to deal with.” At Digitech, Kim Kelln was responsible for loading well scout data into the new database. Shortly after, he moved to ATS to manage the PPDM databases supporting the ATS product line including custom migrations and loaders. He remembers, “This was an exciting time for PPDM as we matured the model away from its reporting base, and as PPDM strove to become the de facto standard for petroleum industry information.” Nearby in Calgary, Amoco Canada Petroleum was also lured by the cost and efficiency benefits of ‘client-server’. Hearing about the work being done for Gulf, Amoco created its Datavision project. Mike Verdiel was the Exploration Systems Supervisor. “In 1989 we were downsizing. It was apparent that the Exploration department could not justify the infrastructure needed to support the existing systems” (quoted in PPDM News, March 1993). In Houston, Amoco’s John Deck and his team also worked on Datavision. John recalls “There were hundreds of challenges. The team was new in the client-server arena, we were breaking new ground in nearly everything we were doing, and we had to address management’s inflated time expectations.” Back at Amoco Canada, Jeff Davies was a young project manager. His assignment: ‘get off the mainframe.’ Jeff had two

project goals. “We needed to load monthly well data from our vendors, and we needed to retrieve data from the Oracle database (PPDM) and reformat it into flat files for import into our desktop applications.” By October 1990, all of Amoco Canada’s exploration systems were on PPDM and the mainframe was history. As a result of moving to this more efficient and flexible system, Exploration’s costs for support, development, training, database administration and networking were cut by 60% and computing costs were down by $450,000 per month. Mike concludes, “No E&P company is big enough to ever again build its own systems from scratch.” Meanwhile, Canadian Hunter Exploration was hunting for a new way to handle the vast amounts of digital well data that were becoming available from government sources. Wes Baird selected the ATS product and caught the PPDM vision for an industry standard. He joined the board of directors in 1993. Not everyone was similarly enthusiastic, however. According to Wes, “the most frustrating thing about change is the slow adoption. True then and true now.” Bob Tretiak remembers those very early days at ATS. “I never envisioned that PPDM would continue this long. We were just concerned about solving a real need at the time. The longevity of PPDM substantiates that the need was real and permanent. Mel Huszti and I had the early vision and we were the force that got it started. But without the support of the early adopters, PPDM would be just another good idea that died.” Another Calgary company, Home Oil, was eager to reduce costs and improve the effectiveness of its seismic data management. They could not find a single vendor with all the necessary products and services. By participating in the development of the seismic part of the PPDM model, and by working with a few vendors who were building on PPDM, Home was able to implement new tools for seismic locations, records management, data sales and entitlements (PPDM News, Fall 1992). The compelling vision was a shared data model as the basis for a creative


MODEL DESIGN PRINCIPLES 1989
• The data model should honor the earth (real world)
• Universal domains should be in common tables
• Facts and interpretations should be kept separate
• Avoid concatenated variables, locally invented keys and multipurpose columns
• Data should be normalized as much as practical for performance
• Variable names should be meaningful
• Focus on the end user.
Source: Geobyte, October 1991
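As a rough illustration only (hypothetical tables and columns, not the PPDM data model itself), a small relational schema that follows these principles might look like the sketch below, using SQLite for brevity: a shared domain table, measured facts kept apart from interpretations, meaningful column names, and no concatenated or multipurpose keys.

```python
import sqlite3

# Illustrative tables only, not the PPDM schema.
ddl = """
CREATE TABLE r_formation (                    -- universal domain in a common table
    formation_name  TEXT PRIMARY KEY
);

CREATE TABLE well (
    well_id           TEXT PRIMARY KEY,       -- one row per real-world well
    surface_latitude  REAL,
    surface_longitude REAL
);

CREATE TABLE well_log_measurement (           -- facts: what was actually measured
    well_id         TEXT NOT NULL REFERENCES well (well_id),
    depth_m         REAL NOT NULL,
    gamma_ray_api   REAL,
    PRIMARY KEY (well_id, depth_m)
);

CREATE TABLE well_formation_pick (            -- interpretations, stored separately
    well_id         TEXT NOT NULL REFERENCES well (well_id),
    formation_name  TEXT NOT NULL REFERENCES r_formation (formation_name),
    pick_depth_m    REAL NOT NULL,
    interpreter     TEXT NOT NULL,
    PRIMARY KEY (well_id, formation_name, interpreter)
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
print([row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")])
```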

and responsive software marketplace. This would free the E&P companies from the expensive and lengthy process of in-house development. The software and data vendors saw great potential in ‘build once, sell many’ and being able to compete on quality rather than being shut out because a customer was already using a competitor’s data structure. Of course, the vendors’ other big vision was money. Stephen Lougheed was the president of QC Data in the 1990s. Looking back, he says “there are some millionaires who made their fortune on PPDM.” All four of the founding partners wanted to share the vision. They decided to create a user group of volunteers who would develop extensions to the model and steward its further development. An invitation letter outlining the objectives, working group details and perceived benefits was sent to the contacts lists of all three vendor partners. The first meeting was held in Calgary on February 28, 1991. An interim executive committee was mandated to pursue incorporation as a not-for-profit society. Work groups were established for seismic, well tests, land and crossover issues. This meeting invented the name Public Petroleum Data Model. The first year budget was set at $47,000 based on a membership fee of $900; how times have changed!


PPDM, Public Petroleum Data Model Association was incorporated in Alberta, Canada, on July 22, 1991. The mission statement was: “…the ongoing implementation and refinement of a standard, vendor-independent data model which addresses a broad range of petroleum exploration and production business needs.” There were 24 initial members; within months, the membership had doubled. PPDM volunteers gathered in work groups to develop and refine the model. During 1991, these groups tackled data architecture, well tests, digital well logs, lithology, seismic and land. The Houston chapter held its first meeting on December 4, 1991. In early 1992, Ian King was hired as the data modeler to pull all the work into Version 3.0 using the Oracle CASE tool. In early 1991, Petroleum Information Corporation (now IHS Energy) heard about the new data model in a conversation with Pat Rhynes of ATS. The company has been involved with almost every aspect of PPDM ever since, especially on model development, governance and promotion. Their PIDM database was adapted from PPDM 2.1 and has evolved with subsequent versions. Andrew Wolberg and Ginny Hampson spent over 3 months writing the table and column definitions for PIDM; much of their work was contributed to PPDM and is still there. Andrew was seconded to PPDM for two weeks in May 1992. “The goal was to produce a consistent, relationally sound model with all the constraints and database objects using the Oracle CASE tool set. The primary achievement was the addition of industry standard definitions for all the tables and columns. I never typed so fast in my life!” Projects have obstacles. If we are feeling diplomatic, we call them challenges or opportunities. PPDM was no exception. Yogi Schulz, a long-time PPDM director, recalls, “the focus was on technical excellence and sophistication of the deliverables while giving inadequate attention to the importance of marketing and communication of the vision and the deliverables.” Concurrent with PPDM’s development, two other visions were

active in the industry. IBM had developed their Mercury project data model. And the Petrotechnical Open Software Corporation (POSC, now Energistics) had Epicentre, a sophisticated logical model. Although POSC and PPDM had two joint efforts in the 1990s to merge the best features of each model, the technical challenges were insurmountable. The PPDM Association was determined to focus on essential performance in current technology and developed the tag line “the business-driven standard”. Paul Coward got started with PPDM in 1989 when Norcen (a Calgary oil and gas producer) was looking for a database solution. Norcen was one of the first adopters of the well and seismic parts of the PPDM model. He remembers that a major challenge was “not to build the data model too close to one vendor’s concept or idea. This would have given a head start over other vendors.” This commitment was sustained through the collaborative development process whereby each version of the model requires approval of the membership: one member, one vote. Kim Kelln identifies some challenges from the early days. “I was adamant on applying more rigour to normalization, and getting rid of external identifiers as primary keys (i.e. the UWI question).” Another challenge, not yet fully overcome, is that “in the very early years, two customers could be 100% model compliant and yet couldn’t share data. The models were structurally the same but they were filled with data using different data management paradigms.” Today, PPDM is all over the world. In various adaptations and implementations, the data model is near the heart of many corporate databases, software applications and records management systems. Many customers do not realize their dependence on PPDM. Perhaps we should have promoted a ‘PPDM inside’ sticker. PPDM also pioneered the way for collaboration in our industry, whereby competitors share their experienced staff to develop solutions that we can all use. About 10 years ago, PPDM estimated that on the basis of time contributed, the data model represents more than


$100 million in development effort. Starting with the data model, PPDM has expanded to a wide range of information management essentials including semantics, community development, education and professional development. The future is bright if we keep working together.

[Chart: PPDM Data Model Tables, showing growth in the number of tables (vertical axis 0 to 3,000).]

About the Author
Dave Fisher is a retired Calgary geologist. He had never heard of data models until he joined the PPDM board of directors in 1993.



Guest Editorial

Things Every Petroleum Data Manager Should Know
By Fred Kunzinger, Noah Consulting

Every decision a business makes is based on the data and information available to it at the time the decision is made. On the surface, this statement is obvious. Many of these decisions are multi-million, or possibly billion, dollar decisions, but the underlying efforts and responsibilities to ensure that the business has the data and information it needs, when it needs it, where it needs it, and in the format that is required are trivialized. It is the responsibility of the people who manage the data to not only make sure the above tasks are fulfilled, but it is also their responsibility to help the rest of the organization, including senior management, understand the immense impact that data management has on the entire organization. Moreover, this means not only the subsurface functions, where data management is usually the most mature; it includes drilling and completions, EH&S, production accounting, facilities and operations, mergers and acquisitions, and even supply chain.

While the first portion of my career was in cartography and geology, I have spent the better part of it in data management. Over the course of that time, a list of helpful hints has evolved that helps to keep me focused on what our goals are both in what we are doing, and just as importantly, how we should go about doing it. Some of these came naturally, while others were expressed by others and I am just the scribe (special thanks go to many people, including Martin Lange, Ellen Nodwell and Mark Wiseman). Hopefully all, or some, of these will help you continue to advance the profession of Petroleum Data Management. So let’s start with something very basic. What is your job? Whether the question comes from someone at home, or from a passport control officer while crossing an international border, this has always been a very difficult question to answer without going into a detailed explanation. For our purposes however, the answer is very straightforward…Data and Information Management’s job is to help find, produce and sell oil and gas. If you work for an Upstream company, whether an Operator or a Service company, this principle holds. We help carry out the task of finding, producing or selling oil and gas by ensuring the people who make decisions have all of the needed information at their disposal when they need it, how they need it, and where they need it. Think holistically: Data Management is a complex organism, not just one part. To do it right, you need to combine standards, systems, processes, hard


work, and a knowledge of the business you are trying to support. Many times companies try to solve a problem with the deployment of a software solution, or a reorganization, or just putting in more hours. The right technology, combined with the appropriate processes and clearly defined roles and responsibilities, all contribute to the business solution. What is success? While it may seem apparent, something that many projects fail to do is to define what success looks like. This is critical, because without doing so, many people will expect perfection, and perfection does not necessarily equal success. Let’s take a look at some examples:
• Success as a baseball hitter. Joe DiMaggio is widely considered one of the greatest baseball players ever, with a career batting average of .325. While those who know baseball recognize this is an amazing number, it still means he did not get a hit more than 67% of the time.
• Success as a professional sports team. The most successful professional sports franchise over their lifetime is the Los Angeles Lakers, with a career winning percentage of around 60%. This means the most successful sports team ever still loses 40% of the time!
Fit for purpose: If you want to build the perfect taxonomy, go to work for the Library of Congress. If you want to build the perfect technology solution, go to work for IBM, or Microsoft, or Cisco, or ... Your job is to build and support the most fit for purpose data


management solution to support your business. Do not try to win the Nobel Prize for Data Management. Work to help find, produce, and sell hydrocarbons. Stop the Bleeding… do not risk the success of a project by encumbering it with the sins of the past. Many projects fail to achieve their objectives or are subject to multiple delays by trying to implement a new solution at the same time they are trying to clean up decades of legacy data. Focus on the point forward. If you put in a good seismic data management system today, in three years you will have thirty-six months of really well managed seismic data. You can adopt a ‘scan on demand’ approach to the legacy data and bring it into the new way forward as it is needed. Be like water and follow the path of least resistance… do not force it. Sometimes the business units who need the help the most are also the most resistant to change. Trying to force a solution on them will not succeed. Work with those who want the help and work on your relationships with those who do not. When you have a working solution, you can leverage those new relationships to expand the deployment of the solution. Metadata is as important as the real data. Do not rely on carbon-based metadata. This is especially important with the current state of the industry. Between retirements and redundancies, there is a good chance much of your carbon-based metadata is no longer in the office, so you cannot just seek out the person with the knowledge.

Store the metadata with the actual data on the silicon-based life forms. The urgent overwhelms the important. Do not try to make marked improvements in your capabilities while supporting the day-to-day…BUT… do not create an ivory tower for your initiatives. Remember, “Necessity is the mother of invention.” The real business needs will make themselves known when gaps in the current state are exposed. Supplement your teams, do not separate them. If necessary, use outside expertise and manpower to help you move up the curve. Otherwise, every time you make a little progress, a daily support crisis will occur and continually delay your new initiative. Get executive support. Endorsement versus Mandate. I first saw this in a slide during a presentation by the CIO of AT&T. His point was that too many times senior executives verbally support initiatives, but do not back up that support with specific, accountable, directives. When a mandate is lacking, many people will just keep their heads down for a while until it blows over and then go back to their old ways. Data Management should be part of the job. Make adherence to Data Management standards and processes a part of everyone’s annual objectives. When life gets overwhelming, people focus on what they have to do. If it is not in their objectives, there is a good chance it will not get done. Do not believe your own press. Make sure your solutions are embedded within the fabric of the organization, otherwise

you are only one new senior executive from starting over. Get the business involved in the new solution. Employ good training and change management. Then when a new senior executive comes on board, he/she sees these solutions as things to leverage, not replace. Know your audience. Motherhood and apple pie is great on the Fourth of July, but should not be the tone when gathering executive support for your program. Explain the why in business and financial terms. When a Well AFE is created, there are both costs and financial benefits in the form of expected production amounts. Data Management projects should be no different. You need to put forward your proposal with a cost-benefit analysis if you want to win support for your project or program. Most importantly, never forget…what business problem am I trying to solve? If you cannot answer this question, you are working on the wrong things.

About the Author
Fred is a Senior Principal and Upstream Subject Matter Expert with Noah Consulting.



Photo contest

Foundations photo contest

“LAKE VIEWS ALBERTA” BY APPLE & ERIC CHABOT 2nd Place in the Volume 3, Issue 2 Foundations Photo Contest “Playing with colour striking strategy and lights, Upper Kananaskis Lake” – May 6, 2015.




On the cover:

“AFTER THE DARKEST NIGHT, ALWAYS COMES A BRIGHTER DAY.” BY VIKRANT LAKHANPAL 1st Place in the Volume 3, Issue 2 Foundations Photo Contest

“This photograph depicts the current state of the petroleum industry. With the falling oil prices there is a widespread pessimism in the industry. What once shined the brightest is today setting deep in the ocean, but we should not forget that after the darkest night comes the brighter day. With the oil price at $100/bbl. the industry has spent an extravaganza, but now it faces a new challenge: how to make profit at the current oil price. Diligent petroleum data management could be the key to success as the industry is still trying to make sense out of the huge gigabytes of data it has.” – February 16, 2016

Enter your favourite photos online at photocontest.ppdm.org for a chance to be featured on the cover of our next issue of Foundations!



Student Article

Less is More: Data Management in Academic Research
By Jordan Tucker

The arena of academia is a unique world of data management. While companies are plump with data, academia scavenges for whatever gracious donors are willing to part with. Although data acquisition is frustrating for the academic world, it leads to new ideas and out-of-the-box thinking.

DIFFICULTIES OF DONATED DATA

To begin this journey into understanding the workings of academic data management, we must consider some of the difficulties associated with donated data. When operators are aware of what data analysis will likely be conducted, the data acquisition program can be designed to ensure data coverage and quality for all needed parameters. Academic analysis is limited by what data is available. With no control over what parameters are measured, researchers often develop workarounds that can be of great value. When certain data parameters are missing or corrupted, researchers must be creative in their analysis. Academic researchers often have incomplete information on data acquisition methods, sources, and assumptions. For example, many service companies include calculated properties in the data delivered to their clients. Operators usually know the methods used to determine these properties. However, this supplementary information may be excluded from their donated data, driving the researchers to develop their own correlations and equations.

SUPPLEMENTING DATA

Missing or corrupted data is an issue within a dataset. Operators can retake measurements in certain scenarios, while academia does not have this luxury. The course of action chosen by researchers to account for errors depends on the dataset. For example, with reservoir characterization, researchers will dive into public data to supplement the donated data. The lessons learned by incorporating public and private data can be of real value to the industry, helping data managers to maximize the effectiveness of their data systems. Correlations and equations developed to overcome missing data might be academia’s greatest contribution to industry. They add direct value to operators in two ways. First, they allow corrupted or missing parameters to be calculated. For example, different correlations were developed to solve for torque in drilling operations because the torque data available to researchers was missing or corrupted. These correlations were then passed on to industry to calculate torque even if instrumentation failure occurs during drilling. Second, correlations and equations can allow operators to omit gathering certain data parameters in situations where downhole data acquisition is expensive. Some correlations developed in research have been proven to yield acceptable accuracy levels; this allows operators to omit collecting certain data. For example, correlations have been developed and tested to determine various reservoir fluid properties without expensive testing. The cost savings are tangible, especially in a low commodity price environment.

SQUEEZING EVERY OUNCE OUT OF THE DATA

While operators are drowning in data, academic research teams are hard-strapped. This frustration leads to a high degree of innovation. Operators in the big data world are focused on interpreting large datasets quickly and efficiently; they are swamped with just getting through the datasets, leaving little room for innovation in the way data is analyzed and presented. On the other hand, researchers are forced to innovate in data analytics. For example, an operator might apply the same analytic method to drilling parameter data from hundreds of wells, whereas a researcher might apply ten different analytical methods to only five available wells. Researchers may therefore discover which methods extract the most value out of datasets. This will be passed on to the operators who donated data to help optimize their data analytics.

[Figure: Normalized Bit Run Length vs. Normalized Torque, colored by hole section (Surface, Intermediate, Production) and shaped by bit type (PDC, Tricone), with an exponential fit y = 3.67*exp(-5.94*x).]

This graph presents run life of a drilling bit versus the average torque felt by the bit. The torque sensors on the rig were malfunctioning but a correlation was used to calculate torque so that this trend could still be analyzed.
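To make the correlation workflow concrete, here is a minimal sketch, not taken from the article, of how such an exponential correlation could be fitted to trusted measurements and then checked for accuracy before it is used in place of a failed sensor. The sample values, starting guess, and variable names are assumptions for illustration only; the figure's quoted fit of y = 3.67*exp(-5.94*x) is the kind of result this workflow produces.

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential_model(x, a, b):
    """Exponential correlation of the form y = a * exp(-b * x)."""
    return a * np.exp(-b * x)

# Hypothetical normalized observations from wells where the torque sensors worked.
normalized_torque = np.array([0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.80])
normalized_run_length = np.array([1.00, 0.78, 0.47, 0.28, 0.14, 0.08, 0.03])

# Fit the correlation parameters to the trusted data.
(a, b), _ = curve_fit(exponential_model, normalized_torque, normalized_run_length, p0=(3.0, 5.0))

# Check accuracy before using the correlation to fill in missing measurements.
predicted = exponential_model(normalized_torque, a, b)
residuals = normalized_run_length - predicted
rmse = np.sqrt(np.mean(residuals ** 2))
r_squared = 1.0 - np.sum(residuals ** 2) / np.sum((normalized_run_length - normalized_run_length.mean()) ** 2)

print(f"fitted correlation: y = {a:.2f} * exp(-{b:.2f} * x)")
print(f"RMSE = {rmse:.3f}, R^2 = {r_squared:.3f}")
```

If the error metrics are acceptable for the intended analysis, the correlation can stand in for the malfunctioning instrumentation or justify skipping an expensive acquisition, which is the same trade-off described under the lessons for industry below.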



DRIVING HOME VALUE TO DATA DONORS
Researchers are always under pressure to demonstrate value. The only way to ensure a continual flow of new data from industry is to innovate and demonstrate added value from whatever data is donated to research programs. As long as researchers are granted access to company data, innovation will continue to flow out of the academic domain.

LESSONS FOR INDUSTRY
1. Less is more. Sometimes the amount of data can be overwhelming. Focusing on a smaller dataset allows the data to be evaluated from every angle and the maximum amount of value to be squeezed out of it. Lessons learned from evaluating a small, focused set of data can then be used to decide which analytic methods should be applied to the rest of the dataset.
2. Correlations are your friend. Correlations can eliminate the need for certain types of data acquisition. If a correlation demonstrates an acceptable level of accuracy, it can be used to lower the cost of data acquisition needed to achieve the same result.
3. Integrate private and public data. Academia's need to supplement donated data with public data brings a wider perspective. Incorporating public data alongside the private data collected by companies can be a valuable addition to any data management system.
4. Industry and academia can support each other. The inner workings of data management in academic research are different out of necessity. Some of the lessons learned by academia can be valuable when applied to operators' data management systems. The close connection between industry and academia will continue to be a mutually beneficial relationship.

About the Author
Jordan Tucker graduated from the Colorado School of Mines in May with a degree in Petroleum Engineering and has recently begun his career as an Evaluations Engineer for Verdun Oil Company, a small private-equity-backed E&P.

Thanks to Our Volunteers

Susan Hopkin
May's PPDM Volunteer of the Month was Susan Hopkin. Susan is a principal at Noah Consulting. A member of both the Petroleum Data Management Certification Committee (PDMCC) Data Analyst Subcommittee since its inception as the Petroleum Education Task Force in 2011 and the Calgary Leadership Team, Susan has provided invaluable assistance in launching the CPDA certification and making Calgary events both possible and better. She has also shared her experience with the data management community through her article in Foundations. A native of Perth, Australia, Susan has worked with a variety of organizations, including Woodside Energy, Petrosys in Canada and Australia, the Western Australian Police Department, and, in her current role, Noah Consulting. She holds a Bachelor of Business, Information Processing, from Curtin University, and is also the President of the DAMA International – Calgary Chapter. "PPDM is truly fortunate to have Susan as such a dedicated volunteer. She has been a tremendous support for our organization, helping to coordinate several Calgary events and putting so much hard work into the PDMCC. We appreciate all of Susan's hard work and look forward to continuing to grow the data management community with her." – Elise Sommer, Senior Community Development Coordinator, PPDM Association.

Mark Craig
PPDM would like to recognize Mark Craig as our June Volunteer of the Month in recognition of all of his hard work as Chair of the new Professional Development Committee (PDC). The PDC has a mandate to raise awareness, coordinate professional development activities, and support opportunities for members within the Petroleum Data Management discipline that will help guide and advance the profession. Mark has a great deal of experience in the petroleum industry, including previous roles as General Manager Information Technology at PennWest, CIO and Business Integration Manager at BP Canada, IT Manager at Encana, and Environmental Research Associate at the University of Calgary. He holds a Bachelor of Science in Ecology with a Chemistry minor, along with other diplomas and certificates. "It is a mandate of our board to grow data management as a professional discipline – and this will happen through the work of the PDC. We appreciate all of Mark's hard work standing up this Committee and guiding its growth through these early and challenging times. We look forward to seeing the amazing results that Mark and the PDC will generate," said Trudy Curtis, CEO, PPDM Association.

THANKS TO OUR SPONSORS
Titanium Sponsor | Platinum Sponsors | Speaker Sponsor | Venue Sponsor

2016 Perth Data Management Symposium
August 4 | Curtin University
Registration & sponsorship opportunities now available
NEW DEEPLY DISCOUNTED RATES!
More info & register now: ppdm.org/perthdms2016



Guest Editorial

The Information Relay Race
By Jim Crompton, Reflections Consulting

For those of you who follow athletics, you know that most events are contests of individuals competing to see who can run faster, jump higher, or throw farther. But there is one event that requires teamwork: the relay race. No matter the distance, the key to winning a relay is the baton handoff. In a short 10 metre zone, one runner has to match the pace of the partner running the next leg and pass the baton without slowing down or dropping it. This is the metaphor I want to use in the context of critical information exchanges.

Let's start at the 100,000 foot level with the E&P value chain (or lifecycle, if you will). The key stages in the E&P lifecycle would look something like Explore – Drill – Produce – Manage (the reservoir) – Operate – Abandon. There are many ways you could define this lifecycle, but for the moment let's go with this version. The proper use of data management and data standards best practices adds value within the core of each of the major stages in the E&P lifecycle. In this article I want to focus on the handoffs between stages, the baton-passing moments. These handoffs, or information exchange moments, force an organization to look beyond the silos and functional perspectives and enable integrated, cross-functional workflows, broadening the view of management to see a larger perspective of their operations and understand how to reduce costs, remove barriers, and increase revenue. These are the often-quoted information bottlenecks and sources of productivity barriers for just about everyone in the organization. Let me get to some specifics on my favorite information exchange challenges:

1. Shared earth model (mostly seismic interpretation) to reservoir characterization and simulation (reservoir management). The shared earth model is the interpretation of seismic data calibrated with well logs and other petrophysical measurements to produce a structural and often stratigraphic model of the subsurface. This model has to be up-scaled and reformatted to fit the input requirements of reservoir simulation programs. Structural information (layers and faults) can usually be exchanged, but the challenge these days is to get more data about the stratigraphy and fluid properties from the seismic model for a better understanding of the reservoir. Standards for these important modeling steps are usually application-specific, with well-known but time-consuming processing flows serving as the baton pass in these situations. Energistics' work on RESQML is an important project in this regard.

2. Subsurface interpretation (mostly geophysics and basin modeling) to drilling. The other major use of the shared earth model is to guide the well path for complex drilling projects. Today's challenge is more than just identifying the desired bottom hole location and letting the driller take it from there. Complex well paths need to avoid drilling hazards (salt, low pressure zones, etc.) as well as follow specific horizontal drilling paths to enable well bores to stay within a shale reservoir for thousands of feet. This integration requires close coordination between the subsurface model and the proposed and actual drilling path. The challenge goes beyond just the trajectory of the well to advance warning of drilling hazards, such as low pressure zones and other well integrity obstacles, to avoid taking a kick or losing control of fluids in the well bore. The industry has yet to develop a reliable model of the well path that can guide the drill bit to the desired location with maximum safety and drilling efficiency in mind.

3. Facilities design and construction to operations (the turnover-to-operations challenge). The document- and drawing-centric information store generated by the design and construction of major capital projects is growing with the addition of CAD/CAM models. The operations and maintenance processes that take over after commissioning speak a very different information language, with a variety of fit-for-purpose applications using primarily structured data in CMMS (maintenance) and other engineering tools. It is well known that this handoff to operations is both time-consuming and inefficient: a lot of data developed in the design stage never finds its way to operations and must be recreated when needed in the operations stage. Some commercial solutions are addressing the facilities engineering information lifecycle to help with this challenge.

4. Facilities engineering to operations. Again, these are two important communities that speak different information languages and have grown up using different tools. There isn't just one baton handoff but a series of them along the entire race to operate a field until the economic end of field life. Understanding equipment health and process health aids reliability-engineering studies. Operations usually feels like they generate data that everyone else uses but get little feedback to help them operate better. Operations run the control systems, field monitoring and surveillance networks, and production accounting systems, but they wonder where that data goes after the alarm management systems have given them a green light.

5. Production engineering to midstream. This handoff is often a big stretch, as the midstream (transportation by pipeline, train, barge, or ship) is often a completely separate department in the company. It is one thing to work with the engineer down the hall but quite another to deal with the logistics planner in a completely separate building. For the needed handoffs in the planning and operations stages, and every time there is an operations upset or deviation from the production plan, there should be an effective information handoff. Instead, these are often relegated to panicked phone calls just before things get serious in the field.

The development of data standards has usually occurred within the boundaries of the major E&P lifecycle stages, where operators, technology vendors, and service companies bring together subject matter experts to develop common ways to define critical data elements and to move data around functional workflows. Geophysicists talk to geophysicists, reservoir engineers talk to reservoir engineers, drillers talk to drillers, and so forth, but it has been difficult to build communities around integrated workflows (the handoffs). Standards organizations are aware of these challenges, but their stakeholders often put higher priority on standards within a stage, and standards groups have come under some criticism from the industry for the difficulty of passing the information baton in these between-stages moments. There have been a few examples of this interdisciplinary work: the Open O&M initiative and the OGI (Oil and Gas Interoperability) pilot, the mapping of the WITSML standard to the PPDM data model, and the sharing of a common units of measure standard between several different standards models. More needs to be done to address the transition between work processes. If the industry is serious about integrated operations and deriving more efficiency and productivity from existing work processes and workflows, then the information handoffs will have to be addressed in a more effective way. Just like the relay team with the fastest runners but the sloppiest baton handoffs, you can't achieve higher goals without doing the basics better.

About the Author
Jim retired from Chevron in 2013 after almost 37 years with a major international oil and gas company. After retiring, Jim established Reflections Data Consulting LLC to continue his work in the area of data management, standards, and analytics for the exploration and production industry.

SPONSORS HELP BRING THE COMMUNITY TOGETHER
By PPDM Staff

The Data Management community came together, both in person and virtually, for the 2016 Houston Data Management Symposium (DMS) and Tradeshow, April 11 and 12, 2016, at the Westin Houston Memorial City. With deeply discounted rates this year, thanks to the many sponsors and exhibitors, PPDM was able to bring together more than 150 people for the two-day symposium and tradeshow. Attendees took in a variety of plenary presentations, breakout sessions, and tradeshow booths, and enjoyed catching up with each other during networking opportunities. Highlight presentations included "Things Every Petroleum Data Manager Should Know" (Fred Kunzinger, Noah Consulting), "Implementing a Data Quality Program" (Joseph Seila, Concho), and "Data: Fueling the Engine of Growth for Oil & Gas" (Rich McAvey, Gartner).

Understanding that many travel budgets were slashed or cut completely this year, PPDM introduced virtual event attendance through our Webinar and Telepresence Robot, Professor Caprock. Virtual attendees could select whether they wanted to participate through a traditional webinar style (slides and audio) or by joining the live feed through the Professor, operated during specific times by Jess Kozman from Singapore and Tracey Dancy from the UK. The Professor allows the operator to interact in real time with attendees as though they were in the room, and viewers can log in to watch the operator ask questions and receive answers directly, face-to-iPad, from speakers, other attendees, and exhibitors. Attendees can look forward to seeing the Professor at future PPDM events. Generous sponsors and exhibitors made both this event and the dramatically discounted registration rates possible. Thank you to our Houston DMS Sponsors:

• geoLOGIC systems (Titanium)
• EnergyIQ (Platinum)
• Stonebridge Consulting (Platinum)
• Noah Consulting (Gold)
• Microsoft (Gold)
• DDC KPO (Silver)
• PDS Energy Information (Bronze)
• Seven Lakes Technologies (Bronze)
• Informatica (Bronze)
• SigmaFlow (Bronze)
• OSIsoft (Partner)

We look forward to seeing everyone at our upcoming Symposia, including the 2016 Perth Data Management Symposium (August 4) and the 2016 Calgary Data Management Symposium, Tradeshow & AGM (October 24 – 26).



Student Article

Big Data Has Unique Needs for Information Governance and Data Quality
By Charles A. Mathes

Data is growing at an astonishing rate. Individuals and small organizations, all the way to global enterprises, generate, use, and store data. From wearables and appliances to industrial equipment, devices are becoming sensor enabled, and all of these devices are fueling the growth of data. Increasingly, business transactions, social interactions, and entertainment are becoming digitally driven. The mountain of data will either be turned into usable information or lose its value and turn into dark data. As enterprises embark on their big data adventure, there is a desire to capture and retain the value inherent in those insights. Data lakes are a method for economically storing massive amounts of data, but in order for the data to be useful as information, and thereby an asset, the integrity of the data must be maintained. Before investing time and effort to populate a data lake, a means to retrieve the data out of the data lake needs to be planned.

1. BIG DATA ENVIRONMENT
Industry-leading practices and standards for the care and feeding of enterprise data have been established for some time. The Data Management Association (DAMA) created a framework guide in 2009 called the Data Management Body of Knowledge (DMBOK). This framework has data governance as the center of activity and the central knowledge domain that connects all the other domains:
• Data Architecture Management
• Data Development
• Data Operations Management
• Data Security Management
• Data Integration and Interoperability
• Document and Content Management
• Reference and Master Data
• Data Warehousing and Business Intelligence
• Metadata Management
• Data Quality Management

Figure 1. DAMA DMBOK Guide Knowledge Area Wheel (image credit: Data Management Association International, DAMA-I)

From the perspective of an enterprise with a more traditional business model and use of structured data, frameworks such as the DAMA DMBOK gave appropriate guidance from simple to highly complex environments. But now the volume, types, and speed of data are challenging how data professionals maintain the integrity of data in the new big data era – yet the underlying principles remain consistent. In the big data era there are a variety of technologies and vendors that foster the creation, usage, and storage of data in a wide variety of formats. The structures around these repositories are referred to as Modern Data Architecture (MDA), and the large data stores are known as data lakes. Their primary function is to store large amounts of data in an economical way. The incoming flows of data into the data lake can have a variety of formats, including structured and unstructured. One of the issues that has plagued Information Technology (IT), data management professionals, and the end consumers of data is a Pareto relationship: around 80% of the effort is spent preparing the data to be usable and only around 20% is spent actually using the data.

Figure 2. A stylized data lake: within the data lake, data is stored for later use

2. SEVEN FOUNDATIONAL PRINCIPLES OF BIG DATA

Figure 3. The Seven V's of Big Data – volume, velocity, variety, veracity, virtual, variability, and value

The three 'V' words commonly used to describe big data – volume, velocity, and variety – define the proportional dimensions and challenges specific to big data, but they fail to fully describe the whole concept. The other Vs are aspirational qualities of all data and provide a more complete picture. Together they describe the attributes of big data, what is necessary to maintain the integrity of the data, and what it takes to leverage data as a value-generating asset. These are summarized in the seven foundational Vs of big data:
• Volume
• Velocity
• Variety
• Veracity
• Virtual
• Variability
• Value

Companies are increasingly turning to big data as a means of better using the structured and unstructured data generated by operations, not only to enhance safety, efficiency, and productivity, but to predict events before they happen. A data quality issue is much more than an inconvenience; missing or misleading data can be very expensive and can even cost lives. "Poor data can cost businesses 20%–35% of their operating revenue." (Chad Luckie, May 2012). Many of these principles complement each other, and it is common for data to exhibit two or more of them at the same time and as it moves through its lifecycle. The principles describing the data may change, as may its governance needs.

2.1. VOLUME
Volume is the scale of data. While big data is not all about the size of data, the growth in the volume of data is impressive. According to IDC estimates, by 2020 business transactions on the internet for business-to-business and business-to-consumer will reach 450 billion per day. According to IBM on their big data blog, over 90% of all the data in the world has been created in the past two years. The size of data used to be measured in megabytes; now data is measured in terms of exabytes (1,000,000,000,000,000,000 bytes) and zettabytes (1,000,000,000,000,000,000,000 bytes). The Industrial Internet of Things (IIoT) is a major source of sensor data. Dylan Tweney reports that jet engines from GE generate 500 gigabytes of data during every flight, and this data is stored for analysis of the health of the engines. Sensor and device data feeds like these are the largest contributor to digital data growth, yet social media, Voice over Internet Protocol (VoIP), and enterprise data all add to the massive growth of digital data. If data is not properly sorted and tagged, its usefulness and value are dramatically reduced.

Figure 4. Volume – digital data growth from terabytes to petabytes to zettabytes

2.2. VELOCITY
Velocity is the rate of change that the data experiences. The velocity of data is described as data at rest, data in use, and data in motion. Data at rest is typically associated with master data, archived data, and other data sources that are static – it is not changing. Data in use is typically associated with transactional data; an example would be the interconnected processes of a sales order: the financial transaction with the financial institution, the inventory updates and corresponding inventory checking in the warehouse and material requirements planning (MRP), and the delivery process. Data in motion is the movement of data from one application to another, for example back-up and archiving, retrieving data from one application to another in order to complete a transaction, or sensor data flowing to the primary repository for processing. Depending on the use case, some data in motion can be processed near the source, diminishing the need to move the majority of the data; this is referred to as data on the edge. Each type of velocity has its own governance requirements.

Figure 5. Velocity – data at rest, data in use, and data in motion

2.3. VARIETY
Variety refers to the various forms that data can take. It is common to think of data variety as structured, unstructured, and semi-structured data. Structured data is the more traditional enterprise data that fits into rows and columns. It is easily stored in databases, and there is a wide array of tools available to retrieve it. Unstructured data does not fit neatly into rows and columns; it can exist in multiple formats such as e-mails, PDF documents, sensor feeds, images, audio, and video. Unstructured data is more difficult to classify, and the current commercial tools for retrieving it are still early in their maturity lifecycle. Semi-structured data is unstructured data that has identifying tags, called metadata, to help identify the data for later retrieval. "Metadata summarizes basic information about data which can make finding and working with particular instances of data easier." (Keith Holdaway, 2014).
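As a hypothetical illustration of the semi-structured idea above (the field names, identifier, and storage path are invented, not taken from the article), wrapping a few metadata tags around an otherwise unstructured document makes it far easier to find, classify, and govern later:

```python
import json

# An unstructured artifact (say, a scanned well report) carries no context by itself.
# Wrapping it with descriptive metadata turns it into a semi-structured record
# that can be indexed, searched, and governed.
document_record = {
    "metadata": {
        "uwi": "100/01-02-003-04W5/00",            # hypothetical well identifier
        "document_type": "well_completion_report",
        "source_system": "field_scanner_07",
        "acquired_on": "2016-05-14",
        "confidentiality": "internal",
    },
    "content_reference": "s3://data-lake/raw/reports/abc123.pdf",  # pointer, not the bytes
}

print(json.dumps(document_record, indent=2))
```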




2.4. VERACITY

Figure 7. Veracity – data lineage, traceability, and integrity lead to trust

Veracity is the correctness or accuracy of the data, along with the context of the data, leading to trust. Three aspects of veracity are data lineage, traceability, and integrity. Data lineage is knowing the source of the data: if the data comes from a trusted source, such as an enterprise accounting system that has controls built into it, then the data itself is more trusted. Traceability is the ability to accurately trace where the data came from. If a report is generated out of the corporate data lake but the individual data elements have their source in a trusted system, and it can be shown that those elements have not been modified, then the trust of the source system is inherited. On the other hand, if the source of the data is an individual's spreadsheet that was updated on a workstation, a compelling argument can be made that the data should not be trusted.
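A minimal sketch of the integrity side of veracity, with hypothetical payloads and catalog entries: recording a cryptographic digest when data is delivered lets a later consumer prove the bytes were not modified, so the trust of the source system is inherited.

```python
import hashlib

def sha256_digest(payload: bytes) -> str:
    """Return the SHA-256 digest of a data payload as a hex string."""
    return hashlib.sha256(payload).hexdigest()

# Hypothetical: the digest the source system registers in the lineage catalog at delivery time.
delivered_payload = b"well_id,test_date,oil_rate\nA-001,2016-04-11,512\n"
registered_digest = sha256_digest(delivered_payload)

# Later, before a report is built from the data lake copy, re-compute and compare.
lake_copy = b"well_id,test_date,oil_rate\nA-001,2016-04-11,512\n"
if sha256_digest(lake_copy) == registered_digest:
    print("Unmodified since delivery: the source system's trust is inherited.")
else:
    print("Payload differs from what was delivered: investigate lineage before trusting it.")
```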

2.5. VIRTUAL
Virtualization extends applications and their respective data sources to an abstraction layer so that data from disparate systems appears as a unified table. There are five patterns of data virtualization use: data federation, data warehouse extension, enterprise data sharing, real-time enterprise data, and cloud data integration. Data federation is used when there are multiple, comparable source applications; it is useful for creating federated views, data services, data mash-ups, caches, virtual data marts, and virtual operational stores. Data warehouse extensions are useful for Master Data Management (MDM) hub extensions, data warehouse federation, hub and virtual spoke, complementing Extract Transform Load (ETL) interfaces, data warehouse prototyping, and data warehouse migrations. Enterprise data sharing is useful for shared data services, creating a data abstraction layer, standards-compliant data services, and a data virtualization competency center. Real-time enterprise data is for the fast-paced enterprise that needs real-time access to its data. Cloud data integration is for access and delivery of data to the cloud. Before virtualization, traditional techniques were used to move data from disparate systems to the application for processing; now, in a composable landscape, the logic is brought to the data.
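For a concrete flavor of the data federation pattern (the schemas and well identifiers are invented for this sketch), an in-memory SQLite database can stand in for the abstraction layer: two disparate sources are exposed through a single federated view, and consumers query the view rather than the underlying systems.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Two "disparate" sources: a production system and a legacy system (hypothetical schemas).
cur.execute("CREATE TABLE prod_wells (uwi TEXT, operator TEXT, status TEXT)")
cur.execute("CREATE TABLE legacy_wells (well_code TEXT, company TEXT, state TEXT)")
cur.executemany("INSERT INTO prod_wells VALUES (?, ?, ?)",
                [("W-100", "Acme Energy", "producing")])
cur.executemany("INSERT INTO legacy_wells VALUES (?, ?, ?)",
                [("W-200", "Basin Oil", "abandoned")])

# The federated view: consumers see one logical table; the mapping lives in the view.
cur.execute("""
    CREATE VIEW all_wells AS
        SELECT uwi AS well_id, operator, status FROM prod_wells
        UNION ALL
        SELECT well_code AS well_id, company AS operator, state AS status FROM legacy_wells
""")

for row in cur.execute("SELECT well_id, operator, status FROM all_wells ORDER BY well_id"):
    print(row)
conn.close()
```

Commercial data virtualization platforms perform the same kind of mapping at enterprise scale, without physically consolidating the data.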

2.6. VARIABILITY
Variability refers to data whose meaning is constantly changing. Words do not have static definitions, and their meaning can vary wildly with context. Just as individual words without context can be misleading, the same can be true for a numeric stream. Data flows can be highly inconsistent, with periodic peaks and valleys: is the variance due to a seasonal trend, or is there a true anomaly indicating a problem? The four measures of variability that are commonly used are range, mean, variance, and standard deviation. Variability can occur in any of the previous Vs and can depend on where the data is in its lifecycle. Establishing variability standards allows tools to monitor the data and manage exceptions when they occur. Historical context can be used for regression analysis to gain insights on correlations, or for Principal Component Analysis (PCA) to help avoid the trap of making decisions on spurious relationships.
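A small sketch of the four variability measures named above, applied to a hypothetical stream of daily readings; the three-standard-deviation band used to flag exceptions is an illustrative assumption, not a rule from the article.

```python
import statistics

# Hypothetical daily sensor readings for one stream (e.g., a flow rate).
readings = [102.0, 98.5, 101.2, 99.8, 100.4, 97.9, 103.1, 100.9, 99.5, 101.7]

data_range = max(readings) - min(readings)
mean = statistics.mean(readings)
variance = statistics.variance(readings)   # sample variance
std_dev = statistics.stdev(readings)       # sample standard deviation

print(f"range={data_range:.2f} mean={mean:.2f} variance={variance:.2f} std_dev={std_dev:.2f}")

# Simple exception monitoring: flag a new reading outside mean +/- 3 standard deviations.
new_reading = 119.3
if abs(new_reading - mean) > 3 * std_dev:
    print(f"Exception: {new_reading} is an anomaly candidate; check for a real problem vs. a seasonal effect.")
else:
    print(f"{new_reading} is within the established variability band.")
```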


2.7. VALUE

Figure 8. Value – people, process, and technology

Value is the accumulation of applying both tactical and strategic governance to big data across the previous six Vs. Enterprises commonly treat data as a cost, but they should treat it as one of their most valuable assets. Trusted information can be used for descriptive, predictive, and prescriptive analytics. Descriptive analytics report data-generated events that occurred in the past; dashboards are a common use of descriptive analytics. Predictive analytics use data-generated events that have occurred to predict what is going to occur next; techniques such as regression analysis are used for prediction. It is important to note that a co-variance between variables that shows a relationship does not imply causation. Prescriptive analytics is the next step beyond predictive analytics: it uses data-generated events that have occurred, makes a prediction about what is going to occur next, and provides advice on how to react to the prediction. Correct, complete, and trusted data will enhance people, process, and technology.

3. INPUT PROCESSING OUTPUT
Every process, regardless of whether it is automated or manual, an over-arching macro process or a low-level detailed micro process, contains the basic components of input, process, and output. If a process is governed, then it has controls and mechanisms. In the context of computer systems, inputs are the data feeds that go into the computer program; they can be as simple as a person typing on a keyboard or as complex as sensor data from a jet engine. The higher the quality of the inputs, the better. Processing is done by a computer program that takes the inputs, applies logic to the data, and makes some kind of decision. Outputs are the results of the logic and the decisions that were made. If information governance is in place, then the controls are the rules and standards, and mechanisms are the way to apply and enforce those controls. If the process is more sophisticated, there can also be a feedback loop for continuous learning.

Figure 9. Input Processing Output (IPO) – inputs flow through processing to outputs, governed by controls and mechanisms, with a feedback loop
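The following is a hypothetical sketch of the Input-Process-Output pattern with governance added: the control is a validation rule, the mechanism enforces it on every input, and the rejected count acts as a simple feedback loop. The field names and thresholds are invented for illustration.

```python
from typing import Iterable

def control_is_valid(record: dict) -> bool:
    """Control: the rule a record must satisfy (here, a plausible oil rate)."""
    return isinstance(record.get("oil_rate"), (int, float)) and 0 <= record["oil_rate"] < 50000

def process(records: Iterable[dict]) -> dict:
    """Process: apply logic to the inputs and make a simple decision (average rate)."""
    accepted, rejected = [], []
    for record in records:  # Mechanism: enforce the control on every input.
        (accepted if control_is_valid(record) else rejected).append(record)
    average = sum(r["oil_rate"] for r in accepted) / len(accepted) if accepted else 0.0
    # Feedback loop: report rejects so upstream data capture can be improved.
    return {"average_oil_rate": average, "rejected_count": len(rejected)}

# Input: a hypothetical data feed (one record has an impossible value and is screened out).
feed = [{"well": "A-1", "oil_rate": 512}, {"well": "A-2", "oil_rate": -40}, {"well": "A-3", "oil_rate": 498}]

# Output: the result of the governed process.
print(process(feed))
```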

4. EXAMPLE FOR DATA QUALITY
This example uses statistics as a metaphor for data quality: making decisions with partial information, without considering the bigger picture, will lead to inaccurate conclusions.

Anscombe's Quartet. The following table contains 11 observations from each of four different datasets. Each dataset is statistically very similar.

Obs    x1     y1     x2     y2     x3     y3     x4     y4
 1     10    8.04    10    9.14    10    7.46     8    6.58
 2      8    6.95     8    8.14     8    6.77     8    5.76
 3     13    7.58    13    8.74    13   12.74     8    7.71
 4      9    8.81     9    8.77     9    7.11     8    8.84
 5     11    8.33    11    9.26    11    7.81     8    8.47
 6     14    9.96    14    8.10    14    8.84     8    7.04
 7      6    7.24     6    6.13     6    6.08     8    5.25
 8      4    4.26     4    3.10     4    5.39    19   12.50
 9     12   10.84    12    9.13    12    8.15     8    5.56
10      7    4.82     7    7.26     7    6.42     8    7.91
11      5    5.68     5    4.74     5    5.73     8    6.89

Table 1. Anscombe's Quartet

Figure 10. Results of Anscombe's Quartet (SAS regression output). For each of the four datasets the linear fit yields essentially identical statistics: model sum of squares 27.5, error sum of squares 13.8, R-squared 0.67, adjusted R-squared 0.63, root mean square error 1.24, mean of response 7.50, and F ratio 18.0 (Prob > F = 0.0022), on 11 observations.

When statistical analysis is applied, the common measures of R-squared, means, and P values are identical for the four datasets to three significant digits, strongly suggesting that the datasets represent the same values. But with further investigation and visualization of the data, it becomes obvious that the four datasets are not the same after all (see the scatter plots in Figure 11). This further illustrates the point that ungoverned and incomplete data can lead to inaccurate conclusions.

Figure 11. Results of Anscombe's Quartet (scatter plots from SAS)
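Because Table 1 is the classic Anscombe quartet, the point can be verified directly. The following sketch, using only the Python standard library rather than SAS, reproduces the near-identical summary statistics reported in Figure 10; plotting each x-y pair would immediately reveal four very different shapes.

```python
from statistics import mean, variance

# Anscombe's quartet, as in Table 1.
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]
quartet = {
    "I":   (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  (x4,   [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

for name, (x, y) in quartet.items():
    mx, my = mean(x), mean(y)
    # Least-squares slope and intercept for y = b0 + b1 * x.
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b1 = sxy / sxx
    b0 = my - b1 * mx
    # Coefficient of determination for the fitted line.
    ss_res = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot
    print(f"Dataset {name}: mean(y)={my:.2f}  var(y)={variance(y):.2f}  "
          f"fit y={b0:.2f}+{b1:.3f}x  R^2={r2:.2f}")
```

Summary statistics alone declare the four datasets interchangeable; only inspection of the underlying data exposes the differences, which is exactly the data quality point the example is making.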

5. CONCLUSION
Enterprises that leverage and use the data they currently have access to as valuable information for analytics, and treat data as a value-generating asset that complements people, process, and technology, will have a competitive advantage. The big data era is upon us, and the established trend is that the volume of data will continue to grow at an astonishing rate. The fundamental concepts and processes for managing and governing data with the goal of maintaining data integrity and usefulness have not changed, but the application and details of information governance are going through a significant metamorphosis. Big data introduces challenges that go beyond the proportional properties of volume, velocity, and variety; information governance also requires management of veracity, virtual, variability, and value. At any point in the lifecycle of data, two or more of these seven properties can combine to create unique sets of circumstances and requirements. Information governance and data quality professionals have to be flexible and adaptive in this new landscape of big data.

6. REFERENCES
Mosley M., et al., The Data Management Body of Knowledge, DAMA-DMBOK Guide (Technics), USA, 2009.
Luckie C., http://www.fathomdelivers.com/blog/analytics-and-big-data/big-data-facts-and-statistics-that-will-shock-you/, May 8, 2012.
IDC, http://wikibon.org/blog/unstructured-data
IBM, http://www-01.ibm.com/software/data/bigdata/what-is-big-data.html
Tweney D., http://venturebeat.com/2015/06/18/here-comes-the-industrial-internet-and-enormous-amounts-of-data/
Kitchin R., The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences (Sage), USA, 2014.
Holdaway K., Harness Oil and Gas Big Data with Analytics (Wiley), USA, p. 310, 2014.
Di Como P., Review of Performance Evaluation Benchmarks of Apache Hadoop, https://www.politesi.polimi.it/bitstream/10589/93418/1/PUSTINA_749598_ReviewOfPerformanceEvaluationBenchmarksHadoop.pdf, 2014.

About the Author
Charles is a Houston-based consultant with more than 20 years of experience working with Global Fortune 500 companies building and implementing IT solutions, primarily in the Oil & Gas industry, and is a founding board member of Houston Energy Data Science. Charles is currently with Slalom, LLC, and is nearing completion of a PhD in Organizational Leadership.


Guest Editorial

Hands On With The PPDM Association Board Of Directors
By Robert Best, Noah Consulting

'Welcome to Exxon, we are stationing you in Roswell.' Those are the words my Dad must have heard upon getting his first job after leaving the Army in the 1960s. Armed with a Geology degree from Texas A&M and landing a job with a super-major, he was sure he would be assigned to Houston. From Roswell, my birthplace, we moved on to Los Angeles. There my Dad had the opportunity to do some of the initial mapping of the Alaskan North Slope prospect. In between his trips, which were sometimes weeks in length, I was fascinated by the odd looking paper documents found in his office: long paper 'strip charts', vellum with strange dots and hand-drawn, pleasing shapes, and more documents with pieces of strip charts and nice colors in between. Of course these are known as well logs, subsurface maps, and cross sections.

[Images: a well log and a subsurface map]

Fast forward a few decades to the 90s, and I had the opportunity to become involved with a company that was focused on leveraging and converting these paper artifacts on Sun Workstations and, later, the PC. It was at that time I became very interested in oil & gas industry data standards. In those days, one of the popular new well log standards was the LAS (Log ASCII Standard) format, created by the Canadian Well Log Society 'Floppy Disk Committee'. Another interesting phenomenon at the time was the widespread absence of digital log data and a prevalence of paper well log copies. One of the issues of the day was 'how can I efficiently convert paper well log copies to a useful digital standard?' Various standards came into play, from the TIFF standard for scanned raster well log images to LAS for digital well logs. It is clear that these standards revolutionized the use of well log data in the oil & gas industry. Later on, having many well logs to deal with, each with its associated information, I became interested in standards for managing well header information. Around that time, I became involved in the PPDM Association. At PPDM I found a group of like-minded people who believe in the power of standards to transform the efficiency of our industry. The efforts during those times were primarily focused


on the creation of the PPDM data model that we all know and love. My relationship with PPDM continued, and I am now honored to be serving my 3rd term on the Board of Directors. Today, the focus of PPDM is quite different. While the model and associated body of knowledge remain an important component, PPDM's new initiatives now also include Professional Certification and building a Community of Practice. I believe in and promote this direction for PPDM, as these initiatives are imperative to PPDM becoming a globally recognized professional society. In these challenging industry times, data managers must recognize the positive effect they can have on the business in efficiently producing hydrocarbons. At the end of the day, that is what we are all here for.

About the Author
Robert Best has over 25 years of E&P industry experience and capabilities coordinating people and activities to accomplish broadly defined objectives using a variety of technical and non-technical skills.


Upcoming Events


LUNCHEONS
AUGUST 23, 2016 – DALLAS Q3 DATA MANAGEMENT LUNCHEON – Dallas, TX, USA
SEPTEMBER 13, 2016 – HOUSTON Q3 DATA MANAGEMENT LUNCHEON – Houston, TX, USA
SEPTEMBER 2016 – CALGARY Q3 DATA MANAGEMENT LUNCHEON – Calgary, AB, Canada
OCTOBER 18, 2016 – FORT WORTH Q4 DATA MANAGEMENT LUNCHEON – Fort Worth, TX, USA
NOVEMBER 8, 2016 – OKLAHOMA CITY Q4 DATA MANAGEMENT LUNCHEON – Oklahoma City, OK, USA
NOVEMBER 17, 2016 – TULSA Q4 DATA MANAGEMENT LUNCHEON – Tulsa, OK, USA

WORKSHOPS & SYMPOSIA
AUGUST 4, 2016 – PERTH DATA MANAGEMENT SYMPOSIUM – Perth, WA, Australia
SEPTEMBER 29, 2016 – DENVER DATA MANAGEMENT WORKSHOP – Denver, CO, USA
OCTOBER 24-26, 2016 – CALGARY DATA MANAGEMENT SYMPOSIUM, TRADESHOW & AGM – Calgary, AB, Canada

2016 CALGARY DATA MANAGEMENT SYMPOSIUM, TRADESHOW & AGM
October 24-26 | Telus Spark Science Centre
www.ppdm.org/calgarydms2016

CERTIFICATION - CERTIFIED PETROLEUM DATA ANALYST
NOVEMBER 2, 2016 CPDA EXAM (Application Deadline September 21, 2016)

ONLINE TRAINING OPPORTUNITIES
Online training courses are available year round and are ideal for individuals looking to learn at their own pace. For an in-class experience, private training is now booking for 2016/2017. Public training classes are available on demand.

All dates subject to change.

VISIT PPDM.ORG FOR MORE INFORMATION
Find us on Facebook | Follow @PPDMAssociation on Twitter | Join PPDM on LinkedIn


"Without knowledge action is useless and knowledge without action is futile." – Abu Bakr

Power your upstream decision-making with customer-driven data, integrated software and services from geoLOGIC. At geoLOGIC, we help turn raw data into actionable knowledge. That’s a powerful tool to leverage all your decision making, whether it’s at head office or out in the field. From comprehensive oil and gas data to mapping and analysis, we’ve got you covered. Get all the knowledge you need, all in one place with geoLOGIC. For more on our full suite of decision support tools, visit geoLOGIC.com

geoSCOUT | gDC
Upstream knowledge solutions

