First Break December 2024 - Data Management and Processing

SPECIAL TOPIC

Data Management and Processing

EAGE NEWS Expanded workshop programme for Annual 2025

INDUSTRY NEWS Global investment of $78 trillion needed to meet Paris goals

TECHNICAL ARTICLE Angle-restricted FWI for shallow reservoir characterisation


CHAIR EDITORIAL BOARD

Clément Kostov (cvkostov@icloud.com)

EDITOR

Damian Arnold (arnolddamian@googlemail.com)

MEMBERS, EDITORIAL BOARD

• Lodve Berre, Norwegian University of Science and Technology (lodve.berre@ntnu.no)

• Philippe Caprioli, SLB (caprioli0@slb.com)

• Satinder Chopra, SamiGeo (satinder.chopra@samigeo.com)

• Anthony Day, PGS (anthony.day@pgs.com)

• Peter Dromgoole, Retired Geophysicist (peterdromgoole@gmail.com)

• Kara English, University College Dublin (kara.english@ucd.ie)

• Stephen Hallinan, Viridien (Stephen.Hallinan@viridiengroup.com)

• Hamidreza Hamdi, University of Calgary (hhamdi@ucalgary.ca)

• Gwenola Michaud, GM Consulting (gmichaud@gm-consult.it)

• Fabio Marco Miotti, Baker Hughes (fabiomarco.miotti@bakerhughes.com)

• Martin Riviere, Retired Geophysicist (martinriviere@btinternet.com)

• Angelika-Maria Wulff, Consultant (gp.awulff@gmail.com)

EAGE EDITOR EMERITUS Andrew McBarnet (andrew@andrewmcbarnet.com)

PUBLICATIONS MANAGER Hang Pham (publications@eage.org)

MEDIA PRODUCTION

Saskia Nota (firstbreakproduction@eage.org)
Ivana Geurts (firstbreakproduction@eage.org)

ADVERTISING INQUIRIES corporaterelations@eage.org

EAGE EUROPE OFFICE

Kosterijland 48, 3981 AJ Bunnik

The Netherlands

• +31 88 995 5055

• eage@eage.org

• www.eage.org

EAGE MIDDLE EAST OFFICE

EAGE Middle East FZ-LLC, Dubai Knowledge Village, PO Box 501711

Dubai, United Arab Emirates

• +971 4 369 3897

• middle_east@eage.org

• www.eage.org

EAGE ASIA PACIFIC OFFICE

EAGE Asia Pacific Sdn. Bhd., UOA Centre, Office Suite 19-15-3A, No. 19, Jalan Pinang

50450 Kuala Lumpur

Malaysia

• +60 3 272 201 40

• asiapacific@eage.org

• www.eage.org

EAGE LATIN AMERICA OFFICE

EAGE Americas SAS, Av. 19 #114-65, Office 205, Bogotá, Colombia

• +57 310 8610709

• +57 (601) 4232948

• americas@eage.org

• www.eage.org

EAGE MEMBERS’ CHANGE OF ADDRESS

Update via your MyEAGE account, or contact the EAGE Membership Dept at membership@eage.org

FIRST BREAK ON THE WEB

www.firstbreak.org

ISSN 0263-5046 (print) / ISSN 1365-2397 (online)

A time-lapse, multi-component (9C4D) seismic data processing flow, Vacuum Field, New Mexico, USA

31 Angle-restricted FWI for shallow reservoir characterisation

Isabel Espin, Laurene Michou, Nicolas Salaun, Daniela Donno, Øystein Wergeland, Katrine Gotliebsen, Diego Carotti, Joachim Mispel and Ståle Høgden

Special Topic: Data Management and Processing

39 Optimising subsurface data management: a systematic approach

Rebecca Head

47 Exploring the benefits and pitfalls of using multiples in imaging

Gordon Poole and Milad Farshad

53 Optimal Imaging Aperture for computational efficiency in 2D and 3D Reverse Time Migration using SeisRTM

Richa Rastogi, Abhishek Srivastava, Monika Gawade, Bhushan Mahajan, Laxmaiah Bathula and Saheb Ghosh

61 A time-lapse, multi-component (9C4D) seismic data processing flow, Vacuum Field, New Mexico, USA

Steven L. Roche

69 Attenuation of non-compressional energy in ocean bottom node data

Tim Seher, Hassan Masoomzadeh, Yong Ren, Fons ten Kroode, Mark Roberts, Alexander Kritski, Harald Westerdahl, Mark Thompson and Åsmund Sjøen Pedersen

75 Data management transformation to drive subsurface autonomy

Richard Mohan and Sumeet Gupta

79 The bees of processing intelligence

Neil Hodgson, Karyna Rodriguez, Lauren Found and Helen Debenham

82 Calendar

Cover: A velocity model generated through multi-parameter full-waveform inversion at 40 Hz, presenting a depth slice at 200 m below the water bottom. This model is derived from a TGS 3D seismic survey conducted in the Grand Bank region (Carson/Salar Basin), located offshore Newfoundland and Labrador, Canada. (photo courtesy of TGS)

European Association of Geoscientists & Engineers Board 2024-2025

Near Surface Geoscience Circle

Andreas Aspmo Pfaffhuber Chair

Florina Tuluca Vice-Chair

Esther Bloem Immediate Past Chair

Micki Allen Contact Officer EEGS/North America

Hongzhu Cai Liaison China

Deyan Draganov Technical Programme Officer

Eduardo Rodrigues Liaison First Break

Hamdan Ali Hamdan Liaison Middle East

Vladimir Ignatev Liaison CIS / North America

Musa Manzi Liaison Africa

Myrto Papadopoulou Young Professional Liaison

Catherine Truffert Industry Liaison

Mark Vardy Editor-in-Chief Near Surface Geophysics

Oil & Gas Geoscience Circle

Yohaney Gomez Galarza Chair

Johannes Wendebourg Vice-Chair

Lucy Slater Immediate Past Chair

Wiebke Athmer Member

Alireza Malehmir Editor-in-Chief Geophysical Prospecting

Adeline Parent Member

Jonathan Redfern Editor-in-Chief Petroleum Geoscience

Xavier Troussaut EAGE Observer at SPE-OGRC

Robert Tugume Member

Timothy Tylor-Jones Committee Member

Anke Wendt Member

Martin Widmaier Technical Programme Officer

Sustainable Energy Circle

Carla Martín-Clavé Chair

Giovanni Sosio Vice-Chair

SUBSCRIPTIONS

First Break is published monthly. It is free to EAGE members. The membership fee of EAGE is € 80.00 a year including First Break, EarthDoc (EAGE’s geoscience database), Learning Geoscience (EAGE’s Education website) and online access to a scientific journal.

Companies can subscribe to First Break via an institutional subscription. Every subscription includes a monthly hard copy and online access to the full First Break archive for the requested number of online users.

Orders for current subscriptions and back issues should be sent to First Break B.V., Journal Subscriptions, Kosterijland 48, 3981 AJ Bunnik, The Netherlands. Tel: +31 (0)88 9955055, E-mail: subscriptions@eage.org, www.firstbreak.org.

First Break is published by First Break B.V., The Netherlands. However, responsibility for the opinions given and the statements made rests with the authors.

COPYRIGHT & PHOTOCOPYING © 2024 EAGE

All rights reserved. First Break or any part thereof may not be reproduced, stored in a retrieval system, or transcribed in any form or by any means, electronically or mechanically, including photocopying and recording, without the prior written permission of the publisher.

PAPER

The publisher’s policy is to use acid-free permanent paper (TCF), to the draft standard ISO/DIS/9706, made from sustainable forests using chlorine-free pulp (Nordic-Swan standard).

Sanjeev Rajput Vice-President
Laura Valentina Socco President
Martin Widmaier Technical Programme Officer
Andreas Aspmo Pfaffhuber Chair Near Surface Geoscience Circle
Maren Kleemeyer Education Officer
Yohaney Gomez Galarza Chair Oil & Gas Geoscience Circle
Carla Martín-Clavé Chair Sustainable Energy Circle
Diego Rovetta Membership and Cooperation Officer
Peter Rowbotham Publications Officer
Christian Henke Secretary-Treasurer

CONFERENCE REPORT

Transition challenge was the priority at ECMOR 2024

Report on ECMOR 2024: Pioneering sustainable energy solutions through advanced modelling and collaboration.

This year’s ECMOR theme ‘Mathematical modelling and simulation for sustainable subsurface energy systems’ struck a chord among the members of the EAGE community and beyond. This was clear from the verbal feedback of the 145 participants, coming from 17 countries, and the animated conversation about the event on LinkedIn.

Although not directly integrated into the conference, the one-day workshop on ‘Enabling gigatonne disposal of CO2 in the subsurface’ drew most of the participants, who arrived early and engaged in lively discussions sparked by excellent presentations: Philip Ringrose (NTNU) on lessons learned from storing CO2, Sarah Gasda (CSSR, NORCE) on the storage potential of the North Sea Horda Platform/Northern Lights, Stephan Matthai (Melbourne University) on the physical realism of storage models, David Ponting (OpenGoSim) on HPC-enhanced storage simulation, and Odd Andersen (SINTEF) on surrogate modelling. Ringrose concluded the workshop with comments on post-closure monitoring.

The conference theme was revisited by the first keynote speaker, Jarand Rystad, petroleum analyst and CEO of Rystad Energy. He placed direct carbon storage (DCS) in the wider context of the emergent circular economy and its importance for achieving net zero. His talk conveyed optimism about the feasibility of DCS as part of the transition to sustainable and environmentally friendly energy systems. Presenting the findings of extensive market research, Rystad illustrated that, beyond government regulation, there are powerful socio-economic market forces that can accelerate this transition, and that Norway is at the forefront of these developments.

The launch of the conference by co-chairs Stephan Matthai (Melbourne University) and Arne Skorstad (Halliburton) was followed by many inspiring presentations delivered in two parallel sessions, covering topics ranging from classic ‘Reservoir characterisation and modelling’ to new additions such as ‘Physics-informed AI methods’, ‘Machine learning and data-driven and hybrid methods’, and ‘Engineering of open simulation software’. The latter topic was a first at ECMOR and sparked a great deal of interest. Among applications of mathematical modelling and analysis, ‘Modelling and simulation of CO2 geosequestration’ and ‘Hydrogen storage’ figured prominently. The fact that hydrogen is a nutrient for bacteria motivated the presentation of novel models of biodegradation and surrogate modelling of related biochemical reactions.

Almost half of the participants had attended ECMOR 2022, and most first-timers came on the recommendation of colleagues. Although participants from academia dominated, many came from industry and government institutions. While Norway was well represented, participants came from all over the world; yet only 23 were female. Most perceived the event as highly informative and an excellent networking opportunity, and plan to come again.

Many of the presenters committed to revise and submit their proceedings papers to a special issue of Computational Geosciences that, for the first time, will be published continuously online, populated with articles as soon as they pass through the review process.

Workshop on enabling gigatonne-scale CO2 disposal in the subsurface.

ECMOR 2024 aimed to highlight the importance of sustainable engineering in the face of the breathtaking technological progress we are witnessing today. The power of AI and the sophistication of mathematical models of subsurface processes grow every day, while new algorithms and hardware enable computations with dramatically reduced energy consumption, achieving the same accuracy with an order of magnitude fewer floating-point operations.

The questions that originally brought the ECMOR community together revolved around mathematics as a tool to improve oil recovery. Since then, progress in science and engineering bucked M. King Hubbert’s forecast of the ‘end of oil’, so much so that hydrocarbon consumption continues to rise today. But at what cost? Since the Club of Rome highlighted the limits to growth in the 1970s, the world has seen devastating climate change, and the environmental footprint of humanity has quadrupled, dramatically shrinking the pristine natural environments needed to sustain biodiversity. Supported by cheap energy derived from hydrocarbons, the human condition has improved, and the global population has more than doubled from 3.7 to 8.2 billion. Yet so has its demand for non-renewable resources. Whether this concerns mineral resources, forests or fish stocks, prolific resource overuse has accelerated decline. The biosphere is strongly impacted by anthropogenic climate change, and so are we.

The knowledge and technical expertise of the ECMOR community is key to addressing these issues successfully and establishing a circular economy with a much-reduced environmental footprint. Everyone has skin in this game and should aim to make a difference. In this spirit, ECMOR aligns itself with the UN’s Sustainable Development Goals. Achieving Goal 7, ‘Affordable and Clean Energy’, will not be possible without sophisticated mathematical modelling and simulation as an enabler. A deep understanding of the subsurface and the processes acting therein is vital for successful engineering in this now highly competitive space. There is little discussion that Goal 13, ‘Climate Action’, includes carbon capture and storage. Its rapid implementation is key.

ECMOR 2024 focused on these important needs and helped build research alliances by creating a shared understanding of emerging technologies, through inter-disciplinary networking, and by building and honing traditional strengths. Sharing previously proprietary data and geomodels prevents them from being lost. Norway, with its extensive reporting requirements, is leading the way in such transparency efforts, and the multi-disciplinary FluidFlower study featured at ECMOR’s Networking Reception marks a significant step in this direction. The second keynote speaker, Prof Michael King from Texas A&M, discussed the SPE11 follow-up study to FluidFlower as another example of this emerging transparency.

ECMOR 2024 brought together specialists in AI and applied mathematics, harnessing the complementary strengths of these two enablers. Because the future differs from the past, there is no training data yet for AI interpolation; such data can, however, be generated by numerical simulation. Conversely, more physically realistic simulations become so complex that it takes AI to analyse their behaviour and identify cause-and-effect chains, creating a deeper understanding.

After the virtual event in 2020 and the hybrid event in 2022, ECMOR 2024 demonstrated the value of physically getting together as a community. Our thanks are due to EAGE for the organisation of this event, the selection of the conference venue, and the technical support. This event will be remembered!

Introducing Local Chapter Mumbai

Local Chapter Mumbai, the first in India, joined EAGE’s global network this year. Our group was established to foster a vibrant community among geoscience and engineering professionals in the area. Our goal is to create a strong platform for networking, collaboration, and knowledge sharing for both newcomers and experienced professionals. Additionally, we aim to introduce university students to global trends that will aid their professional growth and enhance their research skills.

The primary focus will be on initiatives intended to significantly enhance students’ visibility in their professional and academic journeys, empowering them to contribute to sustainable and impactful solutions for regional challenges.

If you reside in the Mumbai area, or your professional interests lead you there, we encourage you to get involved in this Chapter – not just as members, but as active contributors to advances in geoscience. We especially invite students to engage in our activities and are confident that Local Chapter Mumbai will help you build valuable connections and shape your career path within a supportive network.

Delegates at ECMOR 2024.


YP community adapting to changing energy landscape

This past year the Young Professionals (YP) Community has hosted a wide range of initiatives for those aiming to understand energy in the transition era and explore different career paths.

Carrie Holloway, senior geologist at SLB and YP Committee member, kickstarted a series of online presentations with a talk on ‘Understanding site selection for CCS with global case studies’. The second webinar featured ‘Breaking the silos: Different knowledge areas are needed to advance geothermal system understanding’ with Lisl Lewis, geothermal business development manager at GeothermEx, as an invited speaker.

If you missed these conversations, check out the recordings on the EAGE YouTube channel. More editions are planned for 2025.

Setting the spotlight on career opportunities in the energy transition, the YP community also gathered top-class speakers from industry and academia to participate in two significant panel discussions.

In ‘Attracting and retaining talent’, held at the EAGE Annual 2024, the experience of professionals working in new energy domains, such as CCS, offshore wind and digitalisation, was presented to inspire early-career professionals to join the industry. The session concluded that the need for geoscientists with diverse backgrounds is going to increase in the years to come.

The conversation continued at GET 2024 in ‘Addressing the talent gap: Attracting new interest in geosciences through the energy transition’. According to Shi Yuan Toh (Heriot-Watt University), moderator of the session, the panel discussed ‘helping to bridge the gap between traditional geoscience expertise and the innovative skills needed for tomorrow’s energy challenges. As the energy landscape evolves, we need to reimagine how we attract and retain the next generation of geoscientists’.

The YP and the Women in Geoscience and Engineering communities have also been encouraging members to exchange their knowledge and expertise at the EAGE Mentoring Programme. Through this one-year career development initiative, many members such as Chandramani Shrivastava (SLB) have had the chance to gain a unique perspective about the importance of mentoring the next generation of leaders. ‘Energy transition efforts are shaping young minds, and their creative thinking needs to be nurtured through these programmes.’

Mentees, like Johan Alejandro Ibarra (Universidad de Los Andes), have also benefited from this exercise, ‘broadening my understanding of the global industry, as well as the academic and professional world of geoscience. I am excited about the prospect of continuing my learning journey’.

Connect with the YP Community


The YP community gathered top-class speakers from industry and academia to discuss career opportunities in the energy transition (Photograph from the EAGE Annual 2024).

Expansion for Annual 2025 workshop programme

More workshops than ever are planned for next year’s EAGE Annual in Toulouse (2-5 June 2025). We are offering a total of 19 workshops, highlighting the growing interest in and demand for knowledge-sharing in key areas across the geosciences and energy transition. ‘These workshops offer a unique opportunity for geoscientists, engineers, and energy professionals to tackle today’s and tomorrow’s critical challenges – whether it’s the latest technologies in geophysics, CO2, hydrogen, offshore wind, high-performance computing, AI, and more’, said Gautier Baudot, VP exploration excellence & transformation, TotalEnergies, and co-chair of the 2025 Local Advisory Committee.

One standout feature will be two-day workshops, one on CO2 storage monitoring and the other on Full Waveform Inversion (FWI). The idea is to allow more in-depth exploration of these important topics, which are especially relevant to the energy transition.

The workshop programme covers an extensive range of disciplines, from artificial intelligence (AI) to quantum technologies. Highlights this year include the impact of AI-driven technologies, foundation models in geosciences, and navigating change, a workshop dedicated to leadership and adaptation in today’s evolving energy landscape. Gautier Baudot added: ‘By bringing together diverse minds and expertise, we’re building the technical and leadership skills and generating the insights needed to lead in the sustainable energy transition. The aim is to empower participants to transform complex data and challenges into actionable solutions.’

The workshops on critical minerals and hydrogen in the subsurface reflect the industry’s shift toward sustainability and the need for multi-disciplinary approaches to the issues involved. Other workshop topics, such as geochemistry’s role in advancing climate change research and seismic data challenges for offshore wind farm design, also underline the increasing relevance of geosciences in addressing global environmental challenges.

More familiar topics have not been forgotten. For example, seismic quantitative interpretation (SQI) and distributed fibre-optic sensing will bring updated perspectives and fresh insights as will the workshops on 4D seismic monitoring, and the CCS value chain.

Kuwait geoscientists celebrate associated society inauguration

Kuwait Geosciences Society (KGS) last February organised a gathering of the regional geoscience community to commemorate its new collaboration with EAGE as an associated society.

Nearly 400 attendees, including members from both geological and non-geological communities, as well as presidents from various geosciences societies and organisations across the Gulf countries, came together at the Grand Hyatt Kuwait Hotel (Kuwait) to connect, share knowledge and foster collaboration opportunities. The occasion took place under the patronage of Ahmad Al-Eidan, CEO of Kuwait Oil Company (KOC).

The highlight of the evening was the signing by KGS of a memorandum of understanding (MoU) with EAGE, as well as with eight other international, regional and local professional organisations, to cooperate as associated societies. Dr Mubarak Al-Hajeri, KGS president, said: ‘These agreements signify a commitment to advancing geoscientific research and knowledge exchange across borders.’

Building on the momentum of the inauguration, KGS, in collaboration with KOC, hosted the ‘First Technical Forum’ at the Ahmad Al-Jaber Oil & Gas Exhibition on 20-21 February 2024. This two-day forum featured six high-tech seminars that focused on issues such as high-resolution core logging, 4D seismic principles and applications, Paleozoic petroleum systems, and optimising reservoir integrity for CCUS.

Looking ahead, the EAGE-KGS collaboration aims to bring more educational opportunities, workshops, and seminars to the geoscientific community in Kuwait, ensuring continuous professional development and keeping members at the forefront of industry advances.

Learning and collaboration at last year’s workshops in Oslo.
Committed to promoting knowledge exchange across borders, KGS signed an agreement of cooperation with EAGE and other professional associations.

WORKSHOP REPORT

Fourth marine acquisition workshop in Oslo is another success

More than 75 international industry experts from operators, contractors, manufacturers, and academia met in September in Oslo to discuss recent advances in marine seismic equipment, operations, survey design and optimisation. The event was EAGE’s Fourth Marine Acquisition Workshop, a series held on a biennial schedule in Oslo since 2018, when it was initiated by the local geophysical community.

The workshop has since evolved into one of the key global events for marine seismic acquisition. Originally focused on marine seismic technology for hydrocarbon exploration and production, the technical committee has extended the scope over the years to include energy transition-related applications such as CCS, offshore wind, and deep-sea minerals.

As ECMOR 2024 (European Conference on the Mathematics of Geological Reservoirs – see report on pages 3-4) was being held in Oslo next door to the acquisition workshop, joint breaks, lunches and a shared conference evening provided opportunities for networking across the disciplines.

For the first time in the workshop series’ history, the committee engaged a senior executive to present to the technical community. Kristian Johansen, CEO of TGS, gave the opening keynote, ‘Future of acquisition industry’, explaining the strategy and rationale behind the transformation of TGS: previously known mainly as an asset-light multi-client data provider, the company has transitioned to a diversified energy data and service company with a growing geophysical technology portfolio. The keynote, followed by a Q&A session, also addressed the market outlook and growth opportunities in hydrocarbons as well as in the new energy domain.

All the keynotes on the first two days were very well received. Xander Campman (Shell) spoke about the efficiency of OBN programmes, and Lars Jensen (Norwegian Offshore Directorate) discussed the objectives of site surveys for offshore wind areas from a regulator’s perspective, presenting case studies from the Norwegian Continental Shelf.

The keynote on fibre-optic DAS sensing systems by Hilde Nakstad (Alcatel) was presented in a very informative tutorial-like style and provided many interesting technical insights. Another (as always) inspiring presentation was given by Per Eivind Dhelie (Aker BP). He shared his company’s experience and technology strategy with surface DAS-based imaging especially in the context of monitoring CO2 storage sites.

A major topic of this year’s workshop was marine seismic sources with six presentations. Christophe L’Her (Sercel) kicked off with a keynote on the evolution of marine source technologies. It was followed by dedicated presentations addressing environmental aspects, advances in low-frequency source technology, marine vibrators, and machine learning-based monitoring of seismic source performance.

The third day featured a well-attended short course titled ‘Introduction to modern marine seismic surveys: scope, design and implementation’ given by Xander Campman (Shell). The one-day course covered the design and implementation of modern marine seismic surveys in the context of subsurface objectives; available acquisition, processing and imaging technologies; and environmental, financial and operational constraints. Courses such as this are very valuable. Notably, many of the attendees were seismic acquisition experts in geophysical advisory or chief geophysicist positions with many years of experience in the seismic industry. The content of the course, presented from an energy company’s perspective, was compelling and triggered many in-depth discussions, improved mutual understanding, and enabled a fruitful transfer of knowledge and experience.

Anyone interested in marine seismic technology who does not want to wait for the fifth edition of the Marine Acquisition Workshop in 2026 may consider two related workshops that EAGE has already announced for 2025: the first EAGE/SBGf Workshop on Marine Seismic Acquisition (Rio de Janeiro, 21-22 May 2025) and the third EAGE Seabed Seismic Today Workshop (Bahrain, 24-26 November 2025).

Xander Campman during his course.
Kristian Johansen giving the first keynote.

Two more conferences added to Near Surface Geoscience lineup in 2025

The Near Surface Geoscience Conference & Exhibition (NSG) is expanding with the addition of two new conferences in 2025. Inaugural conferences on Geohazards Assessment and Risk Mitigation, and UXO and Object Detection will be on the programme in Naples, Italy, making the event bigger and more impactful than ever.

1st Conference on Geohazards Assessment and Risk Mitigation

The 1st Conference on Geohazards Assessment and Risk Mitigation will address the growing impact of geohazards such as earthquakes, landslides, volcanic eruptions, floods, and coastal erosion – risks that have become more frequent and severe with climate change and rapid urbanisation. This timely conference has the backing of EAGE’s Technical Community on Geohazards. It will provide insights from experts in the field and foster collaborative discussions, marking a valuable step forward in research and in finding solutions to mitigate these pressing risks.

Naples is a very appropriate location for these discussions: the surrounding region is a geologically active area historically susceptible to natural disasters, including eruptions of Mount Vesuvius and earthquakes.

1st Conference on UXO and Object Detection

The 1st Conference on UXO and Object Detection will focus on geophysical and geotechnical methods for detecting UXO (unexploded ordnance) and other buried objects. This is significant because the accumulation of explosive remnants in conflict areas continues to be a severe problem in many regions of the world. Advances in near-surface geosciences have improved detection methods, making these operations safer and more effective. The conference scope also covers object detection more broadly, which plays a crucial role in fields such as windfarm projects, agriculture, infrastructure planning, and environmental monitoring. In offshore wind developments, identifying UXO, submerged pipelines, and other obstacles is essential for safe and efficient construction. Similarly, in agriculture and infrastructure, detecting buried objects such as cables, archaeological artifacts, or natural features like sinkholes can significantly impact planning and development efforts.

The conference will draw on expertise from EAGE’s Technical Community focused on drones (UAVs) to discuss how they have proven valuable tools for mapping and inspecting large areas quickly and safely. This discussion is particularly relevant in Italy, as the country still deals with unexploded remnants from World War II and has specialised teams for detecting and removing these materials.

With these expansions, NSG2025 is set to be an even more comprehensive and relevant event, offering a unique space for exchanging experiences and knowledge that can contribute to significant advances in near-surface geosciences.

Deadline reminder for Annual 2025 Call for Abstracts

Don’t forget that the Call for Abstracts for the EAGE Annual 2025 is currently open. Being part of the Technical Programme offers you the opportunity to share your expertise as part of the conference proceedings.

On www.eageannual.org you will find a full list of topics, guidelines, the submission process, and other relevant information. Abstracts need to be submitted by 15 January 2025, 23:59 CET.

Scope of advances in seismic inversion explored at Naples conference

More than 100 participants from around the world attended the third EAGE Conference on Seismic Inversion in Naples, Italy, marking the event’s first in-person gathering following two virtual editions. The focus was on advances in rock physics, feasibility studies and uncertainties in seismic inversion.

From 14 to 16 October, attendees were presented with a series of technical talks covering a wide range of scales and aspects, from small-scale rock physics and log data to seismic-scale analysis, and on to the integration of all elements from rock physics to reservoir model-building, with seismic inversion as the tie-in hub of the multi-disciplinary workflow. Topics ranged from conventional to unconventional, renewable and green energy. Intriguing questions were raised as to whether we are going ‘back to the future’ to obtain economically viable solutions for geophysical subsurface characterisation and monitoring in the context of the energy transition. The answer remains open.

Pat Connolly, director of PCA, kicked off the Technical Programme with a keynote presentation Constrained: How what we can measure affects our inversion objectives. His talk set the stage for the Rock Physics and Uncertainty session, which sparked discussion on how the lack of data impacts seismic inversion outcomes. Subsequent talks included case studies and presentations on geostatistical inversion and full-waveform inversion (FWI) methods.

Day two opened with a keynote from Heidi Kjønsberg, senior research scientist at the Norwegian Computing Centre, on Estimating elastic and reservoir properties by Bayesian seismic inversion. This was followed by sessions on time-lapse inversion and analysis, as well as a dedicated session on machine learning applications in seismic inversion, highlighting its growing importance in the field.

The final day explored the themes of uncertainty, anisotropy and emerging inversion techniques. Dario Grana from the University of Wyoming delivered the keynote on Probabilistic approaches for seismic reservoir characterization: efficiency, uncertainty, and challenge. The day concluded with a summary and discussion led by the chairs of the Technical Committee, Øyvind Kjøsnes, lead geophysicist at AkerBP, and Tanya Colwell, product manager at GeoSoftware.

A recurring challenge mentioned during the conference was the sensitivity to data quality and the occasional lack of data, both of which significantly impact model predictions. Several presentations demonstrated how more advanced and correct physical models can be included in the seismic inversion to account for or even predict anisotropy, multiples, mode conversion of the wave propagation, and various elastic and petro-elastic reservoir properties. There were extensive questions and discussions about data acquisition, processing, conditioning, model-building, and the generation of synthetic well data. Amid a myriad of goals and inversion methods, there are different schools of thought and workflows on both data quality and data scarcity. Such topics could serve as candidates for discussion at the next conference in 2026, already awaited with anticipation.

Worth mentioning was the conference’s remarkable venue in a historical complex that includes a monumental church and a cloister adorned with 17th-century frescoes. The event concluded with a field trip to the Campi Flegrei volcanic area, a fascinating exploration of volcanic activity in the region. Participants visited the Solfatara volcano and the historic centre of Pozzuoli, where they observed the phenomenon of bradyseism, with the ground level rising and falling by approximately 12 mm per month, a tangible reminder of the region’s dynamic geological processes.

Sponsored by Eni, Delft Inversion, GeoSoftware, Ikon Science, and Qeye, the conference left participants inspired and eager for future advances in seismic inversion.

Delegates networking and enjoying the frescoed cloister garden in Naples, Italy.
Engaging poster sessions spark discussions among delegates in the cloister.

Local Chapters on why you should renew your EAGE membership

There are so many reasons to renew your membership with the EAGE community, as voiced here by some of our Local Chapters.

Members of LC Germany say: ‘Our strength lies in collaboration and out-of-the-box ideas. In many projects, we collaborate with at least one other local chapter or organisation. We routinely add new topics to our portfolio, such as our series of meetings on ‘Startups in Geoscience’ and ‘The Importance of Geoscience in Our Society’. We also make a point of allowing ample time after presentations for meaningful discussions, thoughtful questions, and comprehensive answers. After all, a good presentation is important, but being part of an active community is even more so.’

Sometimes the most challenging part of a journey lies at the very beginning. That’s why EAGE – through a variety of programmes and communities – strives to support early careers and the transition from student to professional members. The team at LC Kuwait agrees: ‘Geosciences and the role of geoscientists are frequently underrepresented in school curricula. We believe students deserve greater exposure to this field to make up for this gap, helping them to explore future career options early on. Students are a vital part of our community; these young, talented minds need encouragement and inspiration to reach their full potential.’

Engagement is also important, as affirmed by our friends at LC Mexico: ‘To expand our local community, we embarked on a journey from ground zero, as the majority of our members were early career professionals with limited industry contacts. Despite this initial challenge, we dedicated ourselves to strategic actions aimed at cultivating a vibrant and inclusive community. This enabled us to establish a strong foothold in the geoscience field, positioning us as a leading chapter within our region.’

Your journey with EAGE is hopefully one of continuous growth and discovery. So take the time to renew your membership now to ensure uninterrupted access to all the benefits EAGE offers, from professional development opportunities to a vast network of like-minded professionals.

LC London drink in the geology of wine

A toast to geology was the theme of a recent event organised by Local Chapter London at Denbies Vineyard in Dorking, Surrey. Members were offered the opportunity to learn about the connections between geology, climate, and winemaking while taking in the scenic landscape of the North Downs.

A highlight of the day was a very informative and amusing lecture by Professor Richard Selley (Imperial College London). He discussed how the specifics of the North Downs geology influence the region’s suitability for viticulture, including how the chalk’s microporosity and natural fractures provide optimal water drainage to the vines, while the south-facing slopes of the hills ensure their maximum exposure to sunlight. The subsequent tour of the winery explored the production facilities before heading into the cellars for a guided tasting session of three different wines. It was a first-hand experience connecting two known indulgences of geoscientists: unique geological features and wine!

The tour concluded with a scenic journey through the vineyard, ascending to the North Downs Way. With a clear sky and great weather, members enjoyed incredible views of the valleys and hills surrounding the vineyard, while the beautiful rows of grapevines loaded with fruit offered a first-hand connection between the land and the wine it produces.

Local Chapter Mexico celebrates first anniversary

The Mexican earth sciences and engineering community was happy to meet in July 2024 to celebrate the first anniversary of the EAGE Local Chapter Mexico.

The venue was once again the city of Villahermosa, Tabasco, thanks to the active participation of our members and friends in this city. The objective of the event was to reinforce our sense of belonging to the EAGE and to recognise the effort of our members, companies and allied associations that, throughout this year, have been participating and supporting our activities.

With the aim of increasing our visibility among the community of professionals in the country, as well as strengthening the relationship with other Chapters and Associations in the region, our programme featured speakers with extensive experience and recognition in the national industry: M.C. Mario Aranda, M.I. Irazema Olvera, and María de Jesús Correa (PEMEX), who discussed the most significant hydrocarbon activities in the country.

Special guests from SLB, the Mexican Association of Exploration Geophysicists (AMGE), and the SPE Mexico Section joined the meeting. The event offered a fruitful knowledge exchange and networking space for our 120 attendees. New relationships were established with national and foreign companies with a presence in the city. We are grateful for the support of our sponsors, Jocelyn Vargas (VASE Sísmica) and CMIC Tabasco. We also thank EAGE for trusting us and making this initiative a reality.

Enthusiastic group of wine tasters.
Local Chapter Mexico celebrated its first anniversary with two technical talks and a networking session.

More E-Lectures added to Learning Geoscience platform

The EAGE Learning Geoscience platform has expanded its collection of E-Lectures delivered by experts in their field. These bite-sized educational videos are ideal for professionals seeking to enhance their knowledge and skills.

The new topics now available are Geometrical inversion with integration of geological modelling (Jeremie Giraud); Airborne Micro-Tem and deep learning inversion for base of sand ultra-resolution mapping (Daniele Colombo); Carbon capture and storage plan for the decarbonization of the Asturias Industrial Hub (Mahdi Bakhtbidar); Change the default colourmap for optimum data visualization and more effective communication (Lindsey Smith); Time-lapse FWI for North Sea deep Culzean reservoir monitoring (Isabel Espin); and Towards 3D near-surface correction without NMO – A rank-based approach (Ali Alfaraj).

The E-Lectures reflect the ongoing commitment of EAGE to provide high-quality educational content, drawn from some of the best presentations at EAGE workshops and conferences.

The Learning Geoscience catalogue now features over 100 EAGE E-Lectures, covering a broad spectrum of topics across geoscience and engineering. Whether you’re a young professional or an experienced industry expert, these videos serve as valuable building blocks for continued professional development, offering insights into latest research and practical solutions. Access to the E-Lectures is free for members. You can explore the full E-Lectures programme and much more at the EAGE Learning Geoscience platform.

Basin Research (BR)

• A new edition (Volume 36, Issue 5) has been compiled, featuring 14 articles.

Geophysical Prospecting (GP)

• A new edition (Volume 72, Issue 9) has been published, featuring 22 articles.

• Contributions for the Special Issue ‘Advances in Geophysical Modelling and Interpretation for Mineral Exploration’ are still welcomed until 31 December 2024.

Petroleum Geoscience (PG)

• A new edition (Volume 30, Issue 3) has been compiled, featuring 14 articles.

• Contributions for the Thematic Collection ‘Geoscience driving the North Africa and Eastern Mediterranean Energy Hub’ are welcomed by 30 April 2025.

All manuscripts available on


Spotlight on exploration in NW Europe at Oslo session

Session conveners Balazs Badics (Harbour Energy), Alyson Harding (Westwood Global Energy) and Jorge-Sanches Borge (Norwegian Offshore Directorate) present some highlights from the dedicated session ‘North-West Europe, UK and Norway Exploration Wells: 2018-2023’ held at the 2024 EAGE Annual.

This Dedicated Session, led jointly by the Technical Community on Basin and Petroleum Systems Analysis and the EAGE Annual Local Advisory Committee, was designed to address the current state of oil and gas exploration in NW Europe, especially in Norway and in the UK. Companies were invited to present their potential play-opener wells and high-impact prospects drilled in the last five years.

The session was opened by Alyson Harding (Westwood) with an introduction to the general trends and results of the last five years of exploration in Norway and in the UK with her talk ‘Exploration trends and strategies in the UKCS and NCS’. Exploration activity over the last decade has been impacted by two major oil price crashes and a global pandemic, which have resulted in large fluctuations in drilling costs, exploration performance and discovered volumes. Over the decade, Norway has seen almost three times the number of wells, delivering almost six times the discovered resources, at just over half the drilling finding cost of the UK. A key trend in both countries has been the increase in infrastructure-led exploration (ILX) drilling and less focus on high impact (HI) and frontier wells. Jon Seedhouse, from the UK NSTA (North Sea Transition Authority), explained in detail the recent trends in hydrocarbon exploration in the UK.

North Sea UK

Daniel Collins from Shell presented a talk on the ‘Revival of the Zechstein carbonate play in the UK Southern North Sea - The Pensacola gas discovery’, co-authored by Martin de Keijzer, also Shell, and Deltic Energy. The 41/5a-2 exploration well was spudded in November 2022 and was drilled to total depth (TD) in the Carboniferous, completing in 2023. The success of the Pensacola exploration well has proven the resource potential in this ‘frontier’ part of the mature Southern Permian Basin, with an appraisal well planned for 2024 to test the crest of the micro-platform. The exploration well successfully tested gas, oil and condensate on the flank of the structure, and the appraisal will further de-risk in-place volumes and producibility.

Michael Hertle from TotalEnergies explained the exploration concept and the results of the 30/12d-11 Isabella well. The well encountered lean gas condensate and light oil in various Jurassic and Triassic reservoir levels. To explain the phase differences, TotalEnergies built a detailed basin model to match the hydrocarbon phase distribution and properties such as gas/oil ratio (GOR) and condensate/gas ratio (CGR). The basin modelling calibration and results were presented, including temperature, pressure, maturity, and migration. Their main conclusion was that even at higher source rock maturities, large reservoirs/containers could preserve high liquid yields (in this case, the deep Triassic Judy Member reservoir), while smaller containers like the Upper Jurassic Kimmeridgian sandstones are likely to be filled with gas due to the leakage/spilling of the early liquid charge.

The current state of oil and gas exploration in Norway and the UK was discussed in this dedicated session.

North Sea Norway

The first talk on Norway was given by Balazs Badics (Harbour Energy), who analysed the Norwegian North Sea exploration results from the last five years in his talk ‘Plays, prospects and wells in the Norwegian North Sea 2018-2023’. The industry has found 1247 million boe recoverable reserves in 64 discoveries, after drilling 128 targets in the 2018-2023 period. The overall technical success rate was 50%. The most successful plays have been the Upper Jurassic plays in the North Viking Graben, delivering 370 million boe in 14 discoveries. The traditional Middle Jurassic Brent play delivered 11 discoveries from 22 exploration wells. These two plays also had the highest average discovery size. There has been a lot of success in drilling Paleogene injectites in the Viking Graben, with a high success rate of 83%; however, estimated volumes in some of the discoveries have decreased during their appraisal campaigns. Overall, few exploration wells failed on reservoir presence and quality; most dry wells were either due to lack of hydrocarbon migration, or trap and seal failures.

Åshild Elin Olsen (OMV) presented the 15/2-2 S Eirik discovery in the Southern Viking Graben. The well targeted a stratigraphic trap in Late Jurassic, Volgian, Intra-Draupne Formation distal turbidite fans, believed pre-drill to be sourced from the UK East Brae Fan Complex. The 15/2-2 S Eirik well penetrated an incomplete 505 m thick Draupne Formation, consisting of multiple thin sandstone layers totalling 67 m net sands characterised by suboptimal reservoir properties. The thin-bedded reservoir had an average porosity of 7% and low permeabilities. Two distinct sandstone layers with different pressure regimes yielded oil samples, and oil shows were consistently observed throughout the entire section. The well proved a working stratigraphic trap. Provenance studies suggested that sandstones in Eirik have a source similar to Gudrun, although some of the sandstones are disconnected from Gudrun and one unit might be connected to East Brae.

Gunnar Aschjem (Aker BP) illustrated a new play in the Norwegian Central Graben in very shallow Pliocene to Pleistocene shallow marine sandstones in the Overly discovery, which lies above the Eldfisk Chalk Field. Aker BP as operator of PL1085, together with licence partners DNO and Petoro, drilled exploration well 2/8-19, completing in May 2022, resulting in a gas discovery in the Tanumåsen upper prospect target and an oil discovery in the Ringiåsen lower prospect. The well results proved the occurrence of non- to moderately biodegraded, very light oil mixed with a severely biodegraded early charge in the deeper target, confirming the pre-drill concept of fresh charge. Petrophysical logs, fluid sampling, pressure data and cores were collected, giving a good foundation for detailed interpretation and description of the two stacked discoveries.

Norwegian Sea

The first talk on the Norwegian Sea exploration results was given by Bernt Erik Røed (Harbour Energy), who explained the Bergknapp discovery, located in PL836S on the Halten Terrace in the Norwegian Sea, 8 km west of the Maria Field. The discovery well (6406/3-10) was drilled in 2020 and proved hydrocarbons within Early to Middle Jurassic aged Fangst and Båt Group reservoirs, with oil-down-to (ODT) in all the segments. Preserving the reservoir properties at more than 4000 m depth is challenging as porosity is reduced by compaction and cementation. However, it can be seen in many of the wells on the Halten Terrace that good reservoir properties have been preserved at this depth, because of chlorite grain-coating. Due to uncertainty related to the extent of chlorite coating and the corresponding distribution of good reservoir qualities, a seismic inversion study was initiated to predict the reservoir properties over the Bergknapp structure. The prediction of the reservoir properties from the seismic inversion matched the appraisal well results.

Mostafa Abdoli (OMV) showcased the first gas discovery on the eastern flank of the Utgard High in the Norwegian Sea with the Velocette well. The 6607/3-1 S Velocette discovery was drilled in 2023 to investigate the reservoir and hydrocarbon potential of the Upper Cretaceous Nise Formation. A 9 m gas column was encountered in the upper part of the reservoir. The exploration well was the first well on the eastern flank of the present-day Utgard High to encounter a thick, sandy interval in the Cretaceous Nise Formation and is regarded as a ‘play opener’ for this area.

The Bergknapp discovery was presented during an overview of the Norwegian Sea exploration results.

Jozef Dziegielowski and Stine Hauge (ORLEN Upstream Norway) presented the results of a shallow, potential play-opener well in the Vøring Basin. The Copernicus prospect was recognised in 2019. The licence is located 48 km SE of the Aasta Hansteen gas field. The main objective of the well was to test a strong seismic anomaly recognised 2000 m below sea level. The Copernicus prospect was defined as a high-volume and high-risk gas prospect within an unproven play in the lower Nordland Group. Unfortunately, sandstones were not found in the reservoir section. Instead, low-density smectite-rich clays of Neogene age were encountered at the reservoir depth and were responsible for the soft Copernicus anomaly.

Finally, Nico Huebner (Harbour Energy) showed Hati, a high-risk high-reward prospect in the northwestern Vøring Basin, under thick basalts, in the Skoll High area, in proximity to the known Cretaceous plays of the Aasta Hansteen and Irpa gas fields. The geological setting is a particularly challenging environment for seismic imaging with shallow, heterogeneous volcanics. Existing 3D seismic data quality is insufficient for confident interpretation and solid trap definition. Both conventional seismic reprocessing and full waveform inversion have been carried out, and they significantly improved the sub-basalt seismic imaging. This allowed a structural-stratigraphic framework to be defined with normal faults predominantly striking SW-NE. A major unconformity was identified, potentially equivalent to the Base Tertiary Unconformity. Several structural traps can be defined, among which the Hati High stands out as a potentially prospective structural closure.

Conclusions

Exploration in Norway continues to discover commercial hydrocarbons and companies still have the appetite to drill exploration wells. However, there is an increasing focus on near field exploration in contrast to previous years, where companies would have more high-volume but riskier wells in their portfolios. Annual licence rounds will continue to give companies a chance to refill their prospect portfolios.

In the UK, exploration drilling is limited and is likely to continue this way. Nevertheless, a few interesting large discoveries are still being made, in proven plays like Isabella and in new plays like Pensacola.

For all the details of the session’s proceedings you can refer to EarthDoc. Join the Basin and Petroleum Systems Analysis Community.

The EAGE Student Fund supports student activities that help students bridge the gap between university and professional environments. This is only possible with the support from the EAGE community. If you want to support the next generation of geoscientists and engineers, go to donate.eagestudentfund.org or simply scan the QR code. Many thanks for your donation in advance!

The Pensacola gas discovery has proven the resource potential in the ‘frontier’ part of the mature Southern Permian Basin.

Beguiled by Africa Personal Record Interview

Charles Thomas was seduced by the magic of Africa on a visit to Angola in a gap year following his geoscience studies. He has retained the connection more or less his entire career, first with Halliburton, then EMGS and briefly Searcher Seismic mainly in business development. Now a UK-based entrepreneur, he works on various upstream projects in Angola and Namibia with frequent visits to the continent he loves.

Family background

I’m a Shropshire lad. My parents and extended family left school at the earliest opportunity. University wasn’t part of my plan. I left school with one decent GCSE grade – in maths – until after I passed my retakes in 1988. By some miracle I took geology from the humble GCSE to master’s. It was the only science subject that I really understood, having grown up under the shadow of Wenlock Edge.

My A-level Geography teacher was a proud Scouser. She took us on a day trip to Liverpool to visit the docks and Toxteth and arranged a lecture on urban renewal at the top floor of the Roxby Building at Liverpool Uni. It seemed like an excellent city, so I managed to find a way in.

Light bulb moment

‘Introduction to sequence stratigraphy’ by the late Prof Trevor Elliott in my second year – deltaic sedimentology, cyclothems, transgressive systems tracts, etc. – totally hooked me! Imperial College London came next, where I remember writing up my research thesis on the original Mac that I’d ‘appropriated’ from the Geology department for the weekend. It got published in EAGE’s very own Petroleum Geoscience!

Angola first time

I took a gap year in Angola during a civil war. I lived with the parents of an Angolan friend who’d been at Imperial with me. It was an eye-opening experience for a 23-year-old unemployed expat. I ‘volunteered’ with Texaco (Tako Koning!) to witness a well and spent three weeks offshore Block 2. I ended up getting paid and mud-logged for a year offshore Congo and Angola on Girassol-2, helping to box up some core – beautiful sandstone turbidites.

Halliburton years

After a few years back in the UK as a contractor for BG in Reading, I returned to Angola with Landmark Graphics. Everything changed. I fell in love with Africa and the business development side of geophysics – on the road throughout Africa with a Linux laptop and two passports. Lower Congo golden triangle blocks were being developed and the world was watching. I caught malaria, was hijacked at gunpoint, and also met my future wife! Unbeknownst to me, a Statoil-led group of researchers was acquiring a proof-of-concept resistivity marine survey over Girassol and Dalia. The results gave birth to marine CSEM services. After a few more years in Cape Town and Cairo with Halliburton, I joined EMGS in 2008 as their ‘man in Africa’.

Offshore EM

EMGS had just made the transition from 2D to 3D CSEM capabilities and now had an inversion product that could be co-rendered with seismic. It was cutting-edge tech that I believed in, but it wasn’t so easy to convince exploration managers of the value proposition. ‘Stop exploring, start finding’ was an early marketing mantra that with hindsight was an own goal. Team EMGS remain a great group of optimists who through time and a lot of effort have grown the toolbox into a robust, repeatable, non-seismic method.

Covid turning point

Covid hit EMGS hard. We were two days from survey start-up in West Africa with a major customer, testing the new Deep-Blue source. Then out of nowhere the whole world shut down. EMGS went into immediate hibernation and I was unemployed for the first time in my career. I read Jeopardy by Wilfred Emmanuel-Jones, which motivated me to set up Striped-Horse in Angola with Dilo Sa, a geophysicist friend and now business partner.

Entrepreneurial life

Striped-Horse has delivered two significant upstream multi-client projects in Angola to date with industry partners and we’ve created a National Well Cuttings Laboratory in Luanda, the second largest database of its kind in the world. More recently we have begun a full basin eFTG airborne survey over the Onshore Kwanza basin. I have also recently joined QXI as a founder shareholder. It’s a start-up E&P company focused on Namibia. Namibia has excellent potential and the new team has several former Galp executives who all want to find the next Mopane.

Time off

Trail running in the Shropshire Hills with friends. My two boys and I also play hockey for Bridgnorth. It’s a great way to stay fit, but can be brutal playing in the middle of January in the UK. I miss Africa!

Charles Thomas


CROSSTALK

BUSINESS

Changing the script in 2025

There are many variations of the old joke about optimism along the lines of the group of retirees who meet regularly in a local coffee house to discuss the world’s many problems. One of them shocks the friends by stating: ‘I’m an optimist’. To which another asks: ‘Then why do you look so worried?’ The epic reply: ‘You think it’s easy to be an optimist?’.

That just about sums up the thinking on prospects in 2025 for business across geoscience-related fields. It would be nice to go along with the bosses of the Big Three in the seismic services field (TGS, Shearwater and Viridien), together with most financial analysts, in seeing better things to come in the new year. But on the available evidence there is not too much to support the optimist view.

What we can look forward to is the first full year in which the impact of the latest consolidation in the marine seismic business plays out. TGS is now fully in control of its PGS acquisition, abandoning its longstanding profitable ‘asset light’ model in favour of vessel ownership, a total of nine Ramforms. The reverse in its strategy was signalled nearly two years ago when TGS plunged into the ocean bottom node (OBN) survey market by buying Magseis Fairfield which owned the largest seabed seismic inventory in the business.

‘First full year of latest consolidation’

The inescapable reality that has never changed is that the fortunes of the geoscience sector are still tied to the budgets of oil and gas companies and how much they devote to E&P projects with a seismic component, possibly more so than ever. This may seem a statement of the obvious, but the avenues for making a significant contribution to the coffers by other means seem limited.

The potential for more merger and acquisition moves that may flatter the balance sheet is definitely running out of road. Putting TGS and Shearwater together is about the only option left, a strategy that would surely give pause to regulators and be unwelcome to oil and gas company clients who have so long been able to exploit the competition to keep prices to their liking. The anticipated IPO for Shearwater has yet to materialise, presumably because of market conditions, and in any case this would not alter the balance of the competition. The often rumoured break up or full takeover of Viridien has yet to gain credibility, but again this would not change the market fundamentals.

We have witnessed so many of these consolidation moves over the decades that look attractive on paper, but it is hard to think of one that has been successful. Like other companies before it, e.g., Halliburton Geophysical Services, Western Geophysical, Schlumberger and CGG of old, TGS hopes that by buying a leading position in the marine seismic acquisition market, dominant in the case of OBN, not far behind Shearwater in the towed-streamer arena, and undoubted curator of the largest seismic data library, it can mop up more than enough business to establish a profitable future. What could possibly go wrong? As so often we come back to the nature of the market and the business model adopted by service companies to win their share.

As already suggested, any real improvement in the fortunes of the main players will depend on increased expenditure by oil companies, and here there is some good and bad news. We can probably take on trust a recent Wood Mackenzie estimate, based on analysis of 32 big oil company capital spending intentions, of a 5% increase in 2025 assuming a Brent oil price of around $75/bbl. The caveat is that this is the latest rise in an upward cycle which began from a lowest point in 2020. Sixty per cent of total investment will be upstream for the EuroMajors and up to almost 90% for emerging majors, according to this forecast. Yet exploration spend is only tracking at around half the levels of a decade ago.

Recent revisions by Shell and BP of their energy transition targets suggest that poor returns from low carbon investments and shareholder expectations are taking their toll, not a positive for the geoscience world looking for new opportunities. For

those wondering where the money for low carbon initiatives is coming from, Wood Mackenzie notes that ‘Euro Majors and NOCs account for almost 80% of investment in low carbon. The biggest company spenders are Shell, TotalEnergies, Saudi Aramco, BP and ADNOC. Each is investing around $5 billion a year, more than double the next tier of companies, which includes the two US Majors’. Whether money freed up from slower investment in low carbon solutions converts to E&P projects, mergers and acquisitions or share buybacks is a moot point.

However, it seems the seismic business can anticipate at least a continuation of current spending levels, and most analysts suggest that oil companies cannot put off too much longer the need to explore in order to top up declining reserves. The joker in the pack is of course the price of oil which always has the ability to alter the mood of industry investors. Here the consensus is that, short of a major global crisis, there is no scarcity of supply. OPEC+ continues to hold back, economic woes are suppressing demand from China, and then there is the US.

Trump’s bombastic promise to loosen the leash on oil and gas investment may not actually impact total US production as much as some might imagine. It is already at record highs under President Biden. Companies have of course been lobbying hard for deregulation measures but, in the current unpredictable market with downward pressure on oil price, they seem likely to stick to a continuing disciplined approach to investing in new opportunities. In addition, there have been signs for some time that the shale oil business bonanza could be topping out. That said, if some juicy US offshore prospects are no longer off limits, then there could be a boost to seismic exploration spending to check out what’s on offer.

For the seismic industry the challenge is to navigate a path to greater profitability before the patience of stakeholders is tested. It doesn’t take an accountant to deduce that the big players are operating uncomfortably leveraged businesses. In the case of TGS and Shearwater, this stems from their asset acquisition strategy. Success of this business model – proven so fragile over many decades – depends largely on whether marine seismic survey operations can show a level of profitability while reducing borrowings. Surprisingly perhaps, it is actually even more specific.

‘Companies will be relying on the towed-streamer seismic survey market’

Even after so many disappointments in the past, these companies will be relying on the towed-streamer seismic survey market for any serious uplift. Why so? The clue is that ocean bottom node (OBN) surveys are now reckoned to account for at least 50% of the global seismic survey market, and both TGS and Shearwater are invested in this business. So far so good. But the reality is that the total value of this business is relatively modest. No one questions the value of OBN for 4D seismic and other reservoir characterisation options, especially with the focus on infrastructure-led exploration projects.

To date the customer base has tended to be confined to the bigger oil companies and NOCs. And the cake has to be shared among a number of hungry contractors, currently TGS, PXGEO, Shearwater, BGP and SAExploration. A recent snapshot of OBN operations worldwide featured four TGS projects (US/Norway), four PXGEO (Brazil/Suriname/US), five BGP (Middle East/Nigeria) and one Shearwater (India). Probably more work than ever is in the pipeline, and it will include a modest contribution from offshore windpower investigations, but it will have to be a lot to compensate for the scarcity of towed-streamer survey revenue.

Just like the history of towed-streamer, technology differentiation in the seabed seismic field has been short-lived: no company looks to have a major steal over any other. More automated, increasingly crewless OBN operations look like the way the industry is headed (e.g., OBN for the transition era, R. Basili, First Break November 2024).

One recent analyst report stated that since the completion of the TGS/PGS merger on 1 July, only 150 days of new traditional streamer contracts have been announced – TGS (90 days) and Shearwater (60 days) – compared with around 1000 days in 2022.

For a brief moment Shearwater (which has a total of 16 vessels of one kind or another on its books) had only one vessel actually shooting seismic. That has now changed, with at least three projects ongoing around the world. TGS (nine vessels available) had four at work out of a total of 12 surveys being carried out by all marine geophysical contractors worldwide. Even allowing for the onset of the winter season, the figures are concerning, bearing in mind the overhead of fleet maintenance and the unavoidable requirement to renew ageing equipment. In the case of TGS, it can look to its more diverse portfolio, particularly its data library, to bring in extra revenue, but as CEO Kristian Johansen has admitted, the old multi-client business in which the company excelled is now out of time.

Viridien is really in a separate category. Except for some limited revenue-earning multi-client survey generation, it has distanced itself from the messy offshore seismic operations business. It is intent on marketing itself as the resolver of ‘complex natural resource, digital, energy transition and infrastructure challenges’ with emphasis on its high-performance computing capability. Yet leadership in its core seismic processing business and in the manufacture of onshore and offshore seismic equipment still depends mightily on the health of the industry as a whole.

Next year feels very much like a pivotal one for deciding which direction we are heading. Cue Albert Schweitzer: ‘An optimist is a person who sees a green light everywhere, while a pessimist sees only the red stoplight ... the truly wise person is colour blind’.

Views expressed in Crosstalk are solely those of the author, who can be contacted at andrew@andrewmcbarnet.com.

INDUSTRY NEWS


Global investment of $78 trillion is needed to meet Paris climate goals, says Wood Mackenzie

Some $78 trillion of investment is required across power supply, grid infrastructure, critical minerals, emerging technologies and upstream to meet Paris Agreement goals, according to Wood Mackenzie’s Energy Transition Outlook report.

Globally, energy demand is growing strongly due to rising incomes, population growth and the emergence of new sources of demand. Renewables capacity is expected to double by 2030 in the base case, falling short of the global pledge made at COP28 to triple renewables by 2030.

Innovation will improve the commerciality of carbon capture and low-carbon hydrogen, driving uptake to 6 Btpa and 0.45 Btpa, respectively, by 2050.

‘A string of shocks to global markets threatens to derail the progress in a decade pivotal to the energy transition. From the war between Russia and Ukraine to an escalated conflict in the Middle East, as well as rising populism in Europe and global trade tensions with China, the energy transition is in a precarious place and 2030 emissions reduction targets are slipping out of hand,’ said Prakash Sharma, vice-president, head of scenarios and technologies for Wood Mackenzie. ‘However, there is still time for the world to reach net zero emissions by 2050 – provided decisive action is taken now. Failure to do so risks putting even a 2 ˚C goal out of reach, potentially pushing warming onto a 2.5 ˚C – 3 ˚C trajectory.’

‘We are under no illusion as to how challenging the net zero transition will be, given the fact that fossil fuels are widely available, cost-competitive and deeply embedded in today’s complex energy system,’ added Sharma. ‘A price on carbon may be the most effective way to drive emissions reduction, but it’s hard to see it coming together in a polarised environment. We believe that these challenges can be overcome with policy certainty and global cooperation to double annual investments in energy supply to $3.5 trillion by 2050 in our net zero scenario.’

In Wood Mackenzie’s base case, displacing fossil fuels with more energy-efficient electricity leads to global emissions peaking in 2027 and subsequently falling by 35% through to 2050.

Global final energy demand is projected to grow by up to 14% by 2050. For emerging economies, growth is 45%, whereas demand in developed economies peaks in the early 2030s and enters a decline.

Electricity’s share of final energy demand steadily rises from 23% today to 35% by 2050 in Wood Mackenzie’s base case. And, in a net zero scenario, the share of electricity increases to 55% by 2050.

The share of solar and wind in global power supply increased from 4.5% in 2015 to 17% in 2024.

Solar is the biggest contributor of renewable electricity, followed by wind, nuclear and hydro. Together, renewables’ share rises from 41% today to up to 58% by 2030 and up to 90% by 2050, depending on the scenario. ‘But any number of challenges – from the supply chain, critical minerals supply, permitting and power grid expansion – could dampen aspirations for renewables capacity,’ said Sharma.

Renewable energy, such as solar, is expected to double by 2030.

‘Despite strong growth in renewables, the transition has been slower than expected in certain areas because many low-carbon technologies are not yet mature, scalable, or affordable,’ said Sharma. ‘A key constraint is the high cost of low-carbon hydrogen, CCUS, SMR nuclear, long-duration energy storage, and geothermal. Capital intensity is high, but the business case is weak without incentives.’

Analysis shows that investment in upstream oil and gas will be needed for at least the next 10 to 15 years to offset natural depletion in onstream supply.

Meanwhile, liquids demand peaks at 106 mb/d by 2030 in the base case. Demand stays high at 100 mb/d levels until 2047 in the delayed transition scenario but in a net zero world, falls rapidly to 32 mb/d by 2050.

More than 1200 projects have been announced in both the CCUS and hydrogen sectors in the past five years. However, few have taken FID. In contrast, upstream oil and gas projects remain attractive at 15% IRR or even higher at an industry planning price of $65/bbl Brent long-term. Capital allocation and finance continue to favour oil and gas projects in the base case.

The dynamics change completely under the pledges and net zero scenarios,

where a combination of higher carbon prices and faster cost declines of new technologies erodes the competitiveness of fossil fuels.

Key issues include finalising Article 6 rules on carbon markets and setting a new global climate finance goal to replace the existing $100 billion a year. That figure was not achieved until 2022 and is considered grossly insufficient to meet the needs of developing countries.

‘Strengthened NDCs and global cooperation will be crucial to mobilise $3.5 trillion annual investment into low-carbon energy supply and infrastructure, including critical minerals. If these challenges can’t be overcome, the goal of net zero emissions by 2050 will not be achieved.’

Viridien, TGS and Aquila Holdings complete Utsira OBN reprocessing project

Viridien, TGS and Aquila Holdings have completed reprocessing of the Utsira ocean bottom node (OBN) seismic survey in the Norwegian North Sea.

Utsira was the largest OBN survey conducted on the Norwegian Continental Shelf, requiring the deployment of more than 144,000 nodes and over 5.5 million shots. Covering an area of 2077 km2 in a mature part of the North Sea, the survey was acquired in 2018 and 2019, and the initial processing results were delivered in 2020.

Viridien reprocessed the resulting data set with its latest OBN imaging technologies, including time-lag full-waveform inversion, and advanced velocity model building techniques, to yield significant improvements in image resolution and frequency content for fault interpretation and reservoir characterisation workflows, said the company.

The Utsira area holds several significant oil and gas fields, including Edvard Grieg, Ivar Aasen, Balder, Gina Krog, Gudrun and Johan Sverdrup, along with a number of undeveloped discoveries and prospects.

Nils Haugestad, interim CEO of Aquila Holdings, said: ‘We are confident that our significantly improved image of the area will maximise our clients’ prospects for making new discoveries as well as optimising existing production.’

David Hajovsky, executive vice-president of multi-client at TGS said: ‘Through the reprocessing of this OBN data, TGS, in collaboration with Viridien and Aquila Holdings, can equip our clients with the resources to unlock the full potential of this well-established and highly productive petroleum basin. Whether through uncovering new reserves or optimising current operations, this data offers valuable insights that will greatly enhance understanding of the region.’

Dechun Lin, EVP, Earth Data, Viridien, said: ‘The new time-lag FWI model and resulting high-resolution images improve fault interpretation and reservoir characterisation, enabling operators to make more informed exploration, production, and reservoir management decisions.’

Location map showing the Heimdal Terrace, Utsira and Sleipner OBN data coverage.

BGP expands the world’s largest seismic survey

Chinese seismic contractor BGP has won a contract from ADNOC worth up to $490 million to expand the scope of the world’s largest combined 3D onshore and offshore seismic survey currently underway in Abu Dhabi. The contract will focus on identifying additional oil and gas resources in ADNOC’s producing onshore fields.

ADNOC and BGP will leverage advanced artificial intelligence tools to accelerate interpretation of the seismic data, maximise resource recovery and make use of existing infrastructure in producing fields to enhance efficiencies.

The 3D mega seismic survey project was initiated by ADNOC in late 2018 and marked the start of the world’s largest continuous seismic survey, covering 85,000 km2 across onshore and offshore areas in Abu Dhabi. The project is designed to provide high-resolution and high-fold 3D seismic data, offering a comprehensive understanding of the region’s complex geological structures.

Abdulmunim Saif Al Kindy, ADNOC upstream executive director, said: ‘ADNOC continues to maximise value creation and responsibly meet growing demand for energy. Our investment in the world’s largest 3D Mega Seismic Survey emphasises the role advanced technologies play in our operations, as we continue to realise the full potential of our oil and gas resources to ensure the UAE remains a long-term and reliable energy provider to the world.’

FGS and Tarim publish patent to optimise layout parameters in seismic data acquisition

Forland Geophysical Services (FGS) and Tarim Oilfield Company have published a patent on the method and device for optimising layout parameters in seismic data acquisition.

The invention provides a method and device for optimising the arrangement parameters in seismic data acquisition, belonging to the field of oil and gas exploration.

The method is based on the velocity model, using the illumination and inverse-illumination analysis method to obtain the energy distribution map data reflected back to the surface by the underground directional target in different dimensions for each group of first parameters.

It then designates the groups of first parameters whose energy distribution map data meet the first preset condition as groups of second parameters and, based on the velocity model, uses the beam acquisition method to determine the relevant beam seismic parameters for the energy distribution map data corresponding to all the second parameters.

It also uses the compressed sensing method to obtain the relevant parameters of the beam seismic and the underground directional target imaging corresponding to all the second parameters. It uses the relevant parameters of the beam seismic and the group of second parameters corresponding to the underground directional target imaging that meets the second preset condition as the optimised first parameters.
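Stripped of the patentese, this is a two-stage screen over candidate acquisition geometries: filter layouts by the energy their illumination maps return from the target, then rank the survivors by imaging quality. A minimal sketch of that selection loop, with toy stand-ins for the physics (all function names, scoring proxies and thresholds below are illustrative assumptions, not the actual FGS/Tarim implementation):

```python
# Hypothetical sketch of the two-stage parameter screening described above:
# stage 1 filters candidate layouts by illumination energy ("first preset
# condition"); stage 2 ranks survivors by an imaging-quality score ("second
# preset condition"). The scoring functions are toy proxies, not real physics.

def illumination_energy(layout):
    """Stand-in for illumination/inverse-illumination analysis on a velocity
    model: returns the energy reflected back to the surface by the target."""
    spacing, fold = layout
    return fold / spacing  # toy proxy: denser, higher-fold layouts score higher

def imaging_quality(layout):
    """Stand-in for beam-seismic imaging with compressed sensing of the
    underground directional target."""
    spacing, fold = layout
    return fold / (1.0 + spacing ** 2)  # toy proxy for resolution at depth

def optimise_layout(candidates, energy_threshold):
    # Stage 1: keep the "second parameters" whose energy maps meet the
    # first preset condition.
    second_parameters = [c for c in candidates
                         if illumination_energy(c) >= energy_threshold]
    if not second_parameters:
        return None
    # Stage 2: among those, pick the layout whose target image is best.
    return max(second_parameters, key=imaging_quality)

# Candidate (receiver spacing in km, nominal fold) pairs.
candidates = [(0.5, 60), (0.25, 120), (1.0, 48), (0.25, 96)]
best = optimise_layout(candidates, energy_threshold=200.0)
print(best)  # → (0.25, 120)
```

In the patent the energy maps come from illumination modelling on the velocity model and the quality score from beam-seismic imaging with compressed sensing; only the structure of the selection loop is what this sketch illustrates.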

‘The invention can achieve the best seismic acquisition design,’ said FGS and Tarim in a joint statement.

Dr Xianhuai Zhu, right, founder and CEO of Forland Geophysical Services (FGS).
BGP signs a contract with ADNOC that will cover 85,000 km2 in Abu Dhabi and leverage AI tools.

BRIEFS

Equinor is acquiring Sval Energi’s 11.8% share in the Halten East Unit to increase its ownership to 69.5%. Halten East is an offshore development in the Kristin-Åsgård area in the Norwegian Sea. The development comprises six gas discoveries and three prospects, which will utilise existing infrastructure at Åsgård B. Recoverable reserves are estimated at 100 million barrels of oil equivalents, of which 60% is gas.

EMGS has reported Q3 revenues of $1.1 million, down from $1.6 million in the third quarter of 2023. Adjusted EBITDA was a loss of $5.9 million, compared with a loss of $0.7 million in Q3 2023. Free cash increased by $7.4 million to $13.2 million.

Aramco has reported third quarter net income of $27.6 billion, compared with $32.6 billion in Q3 2023. Cash flow from operating activities was $35.2 billion and free cash flow $22 billion. Capex totalled $13.2 billion in Q3.

BP is reported to be considering selling a minority stake in its offshore wind business, according to Reuters. The company is reported to have lined up Bank of America to find partners for the business.

EMGS has reported vessel utilisation of 40% for the third quarter, compared with 0% in Q3 2023. The company’s one vessel on charter, Atlantic Guardian, commenced acquisition of prefunded multi-client projects, including an OBN seismic survey. EMGS expects to record $0.5 million in late sales multi-client revenue in Q3.

ConocoPhillips has reported third-quarter 2024 earnings of $2.1 billion compared with Q3 2023 earnings of $2.8 billion.

Shell has reported Q3 2024 adjusted earnings of $6 billion. Cash capex for 2024 is expected to be below the lower end of the $22-25 billion range.

BP has reported Q3 underlying RC profit of $2.3 billion, compared with $2.8 billion for the previous quarter.

Shearwater wins 3D surveys offshore Suriname and in Asia Pacific

Shearwater has won a large 3D seismic survey offshore Suriname for client Petronas.

The vessel Amazon Warrior has been allocated to the three-month project, commencing in Q4 2024. ‘High-quality seismic data is a key enabler for unlocking the vast resource potential of the prolific Suriname-Guyana basin and accelerating exploration activities,’ said Irene Waage Basili, CEO of Shearwater.

The 6000 km2 project marks the continuation of Shearwater’s commitment to Petronas in this frontier area, having previously performed acquisition work in Block 52.

Meanwhile, Shearwater has won two consecutive towed-streamer survey projects in the Asia Pacific Region with a combined duration of three months. The Shearwater multi-sensor vessel Geo Coral is allocated to the projects which commence in early Q4 2024.

Viridien and SLB shoot 3D survey offshore NW Australia

Viridien and SLB have completed acquisition of a multi-client survey in the Bonaparte Basin, off the NW coast of Australia. The resulting 6760 km2 ultramodern PSDM seismic data set will help evaluate the highly prospective and underexplored area to improve industry understanding, said Viridien. Processing is underway and final data will be available in Q2 2025.

The complex geological area has been historically challenging to image due to the presence of carbonates and the shallow water. The new survey will provide modern, high-quality data over an area lacking recent, or any 3D data. The data also partially covers a carbon storage block, recently awarded as permit G-13-AP. The survey deployed Sercel Sentinel MS multi-component streamers and the Sercel QuietSea marine mammal monitoring system.

Dechun Lin, EVP, Earth Data, Viridien, said: ‘We are delighted to have partnered with SLB for the first time in Australia to successfully complete this large data acquisition project. The new data set will give interested players greater insight into the exploration and carbon storage potential of this promising area. We will continue to look for opportunities to invest in the country.’

Amazon Warrior is heading to South America and Geo Coral to Asia Pacific.
Map showing coverage of the Bonaparte 3D multiclient survey.

TGS reprocesses 3D data offshore Sierra Leone

TGS has enhanced its Fusion 3D seismic dataset offshore Sierra Leone, focusing on the Vega prospect.

The reprocessing project leverages advanced seismic imaging workflows to enhance depth data, offering clearer subsurface insights for oil and gas companies currently exploring the area or that might secure oil and gas concessions in the near future. Recent discoveries in South America have intensified interest in the region, positioning Sierra Leone as a promising exploration frontier.

‘The Vega prospect has been identified as a promising target within Sierra Leone’s offshore waters, and modern depth data will be instrumental in further mapping the undrilled prospect, enabling progress toward drilling activities,’ said TGS.

To date, eight wells have been drilled on the continental slope region of Sierra Leone, targeting submarine fan systems that demonstrate reservoir quality at multiple stratigraphic levels. The region holds unexplored opportunities in several basin-floor fans in the northern Sierra Leone Basin, where transpressional events created a syn-rift plateau with attractive drilling depth to target.

The reprocessing project includes approx. 7500 km2 of 3D seismic data, complemented by 16,000 line km of 2D Pre-Stack Depth Migrated (PSDM) data, and supported by gravity, magnetic, and interpretive data products.

‘Sierra Leone has all the geological ingredients to emerge as a leading exploration frontier in Africa. With accessible acreage, attractive fiscal terms, and this depth-migrated high-quality Fusion 3D data, the exploration potential is strong,’ said David Hajovsky, executive vice-president of multi-client at TGS. The project is expected to deliver final results in Q3 2025.

Oil and gas round-up

Petrobras has announced that the gas potential in the discoveries located in the Guajira Offshore Basin, in Colombia, is around 6 Tcf (trillion cubic feet) in place (VGIP), confirming the magnitude of the discoveries made in the area and their importance for the Colombian gas market. Petrobras acts as operator (44.44%) in partnership with Ecopetrol (55.56%).

Equinor has made an oil and gas discovery in the Gudrun field in the North Sea, estimated at between 0.1 and 1.2 million Sm3 of recoverable oil equivalent in the intra-Draupne Formation, and between 0.4 and 1.3 million Sm3 of recoverable oil equivalent in the Hugin Formation. The well’s primary exploration target was to prove petroleum in Late Jurassic reservoir rocks in the intra-Draupne Formation, as well as Middle Jurassic reservoir rocks in the Hugin Formation. Well 15/3-13 S encountered thin oil-bearing sandstone layers in the intra-Draupne Formation. In the Hugin Formation, the well encountered a total of 92 m of sandstone with poor reservoir properties. Gas was encountered in two intervals, with respective thicknesses of 8 and 7 m. Well 15/3-13 A encountered oil in an 85 m-thick interval in the intra-Draupne Formation. 15/3-13 A also proved 100 m of sandstone with poor reservoir properties in the Hugin Formation. Extensive data acquisition and sampling were carried out. Well 15/3-13 S was drilled to 4826 m below sea level and well 15/3-13 A to 4814 m below sea level. Both wells were terminated in the Sleipner Formation in the Middle Jurassic. Water depth is 110 m.

TotalEnergies has made a final investment decision for the ‘GranMorgu’ development, offshore Suriname. The GranMorgu project will develop the Sapakara and Krabdagu oil discoveries, on which an exploration campaign was completed in 2023. The fields are 150 km off the coast of Suriname and hold recoverable reserves estimated at over 750 million barrels. Total investment is estimated at around $10.5 billion and first oil is expected in 2028. TotalEnergies is the operator of Block 58 with a 50% interest, alongside APA Corporation (50%). Staatsolie is expected to enter the development project with up to 20% interest.

Aker BP and Wintershall Dea have found gas in the Norwegian Sea, 230 km west of Sandnessjøen and 12 km west of the Skarv field. The discoveries are estimated at between 2 and 8.7 million standard cubic metres (Sm3) of recoverable oil equivalent, corresponding to around 13-55 million barrels of oil equivalent. The primary exploration target for well 6507/2-7 S was to prove gas in reservoir rocks in the Tilje Formation in the Lower Jurassic. The secondary exploration target was to prove gas in reservoir rocks in the Middle Jurassic (the Garn Formation) and in reservoir rocks in the Upper Cretaceous (the Lysing Formation). In the primary exploration target, well 6507/2-7 S encountered gas in the Tilje Formation, which is 133 m thick, 43 m of which is in sandstone rocks with moderate-to-poor reservoir quality. The gas/water contact was not encountered.

Map depicting TGS’ multi-client subsurface data coverage, including 2D and 3D seismic, offshore Sierra Leone.

TGS expands US 3D survey of Appalachian Basin

TGS is expanding the Birmingham 3D seismic survey onshore US, covering 276 square miles. The survey is located on the western flank of the Appalachian Basin, aligning with the most prospective trend of the Utica-Point Pleasant formation and Clinton sands.

The Birmingham-Gemini 3D seismic survey will target key formations in the Appalachian Basin, including the Ordovician Trenton, Black River, Utica/Point Pleasant, Cambrian reservoirs, and Silurian Clinton sands. Positioned up-dip from the Utica condensate and gas trend, the survey aims to explore the under-explored Point Pleasant oil window.

The project will map deep structures to identify hydrocarbon traps, analyse facies changes and optimise well placement. TGS will enhance the seismic data by integrating it with its Appalachian geologic and well database, including over 480,000 well logs. Proprietary formation tops and well performance metrics, available through the TGS Well Data Analytics platform, provide clients with deeper analysis into the region’s potential.

Recording for the Birmingham-Gemini 3D survey will commence in early 2025, with the fully processed dataset available to clients by year-end.

Meanwhile, TGS has expanded its 3D seismic coverage for its Benin MegaSurvey, offshore West Africa, to deepen understanding of the region’s subsurface geology and unlock its untapped hydrocarbon potential.

In collaboration with Société Nationale des Hydrocarbures du Bénin (SNH-B), TGS will add 2248 km2 of conventional 3D seismic data to the existing Benin MegaSurvey. The expanded coverage will span from the continental shelf to the slope, offering a comprehensive view of the area and revealing previously untapped exploration potential, said TGS.

In addition to the seismic expansion, TGS has incorporated 10 wells into its RockAVO atlas, which integrates seismic data with well information, enabling clients to explore rock physics models, analyse elastic properties, and visualise seismic AVO responses for detailed subsurface insights. It will also offer well data integration, giving access to a wealth of well logs and data atlases in a single platform and streamlining exploration workflows. Explorers will be able to screen for geological analogues, and assess lithology, fluid content, and porosity to de-risk plays. They will also be able to ensure data accuracy and seismic image integrity with enhanced quality control features.

Viridien reports third quarter net loss of $9 million

Viridien has reported third quarter net loss of $9 million and operating profit of $23 million on IFRS revenues of $219 million, compared with a net profit of $8 million and operating profit of $43 million on revenues of $293 million in Q3 2023.

Digital, Data and Energy Transition (DDE) revenue of $187 million was up 1%, with strong revenue growth in Geoscience offset by lower aftersales in Earth Data. Profitability was impacted by $12 million in penalty fees from vessel commitments.

Geoscience revenue of $103 million was 32% up. Order intake was up 91% as a result of the new UK HPC hub and increased activity in the Middle East.

The new businesses confirm positive momentum, both in CCUS with the release of the latest phase of Gulf of Mexico Carbon Storage Study to support upcoming lease rounds and in Minerals & Mining with the award of a sensing program in Oman, to identify, map and rank mineralisation prospectivity potential.

Earth Data revenue of $83 million was down 22% from $107 million in Q3 2023.

Prefunding revenue at $58 million was up by 4%, boosted by the first contribution of the Laconia project in the Gulf of Mexico, but offset by weaker after-sales (down 50% at $26 million).

Revenue was also boosted by the Norwegian survey for carbon storage leading to the reprocessing of legacy data in the area.

Sensing and Monitoring (SMO) revenue of $59 million was down 51% across land and marine products. Revenue was boosted by delivery of land seismic nodes for large-scale seismic surveys planned in urban areas to target energy resources, including geothermal.

Overall, 2024 revenue is expected to be in line with 2023. Earth Data cash capex is expected to be $230-250 million.

Net cash flow was $10 million and liquidity was $442 million (including $100 million undrawn RCF).

Sophie Zurquiyah, chief executive officer of Viridien, said: ‘Geoscience was particularly strong this quarter to achieve a record high order book. In Earth Data, the Laconia project had increased prefunding and is continuing to progress well. Sensing & Monitoring is actively implementing its adaption plan and is on track to achieve in 2025 the expected cost reduction.’

Selected TGS subsurface data coverage in the Appalachian area, including the Birmingham-Gemini 3D seismic survey and surrounding well data.

Norway to invest $24 billion in exploration this year

Norway's net cash flow from petroleum activities is expected to be $60 billion in 2024, with $24 billion invested in petroleum activities.

The expected revenues from the petroleum industry in 2024 are lower than in 2023, when the state’s net cash flow was $91 billion. The decline is mainly due to lower estimates for gas prices compared to last year.

Oil and gas production is expected to remain relatively stable towards 2030. In the national budget, the government expects total Norwegian petroleum production in 2024 to be 239 million Sm3 of oil equivalents, rising to 243 million Sm3 in 2025. The future production provides the basis for continued high government revenue.

‘Production from the Norwegian continental shelf contributes large amounts of energy and is significant for the energy supply to Europe. We will continue to develop the petroleum industry and remain a stable and long-term supplier of energy to Europe,’ said Norway’s minister of energy Terje Aasland.

Investments in the petroleum industry are estimated at $24 billion in 2024, including both new field developments and investments in producing fields. ‘Developments on the Norwegian continental shelf contribute to employment throughout the country and make it possible to maintain production at a high level up to 2030. It is important that the industry develops all profitable resources in its portfolio and seeks to identify and develop more profitable discoveries. The service and supply industry needs new assignments over the next two years to maintain a high level of activity,’ said Aasland.

SLB signs deal with Aramco to work on emissions reduction solutions

SLB and Aramco have signed an agreement to co-develop digital solutions to help mitigate greenhouse gas (GHG) emissions in industrial sectors. Solutions will be integrated within SLB’s digital sustainability platform, building on the collaboration announced in 2022.

The SLB digital sustainability platform will enable industrial companies to accelerate their progress towards net zero by more easily measuring, reporting and verifying (MRV) their emissions.

The data and intelligence also assist in implementing more strategic decarbonisation actions, such as enhancing energy efficiency, reducing methane emissions, and advancing carbon capture, utilisation, and storage (CCUS) initiatives.

‘Data is essential to support increasing calls for emissions transparency, and taking decisive actions on decarbonisation investments,’ said Rakesh Jaggi, president of Digital and Integration, SLB. ‘The digital sustainability platform provides the means to leverage data at scale to drive emission reduction outcomes. We aim to expand the SLB suite of solutions with Aramco’s innovative technologies.’

The agreement establishes a framework for development of several digital solutions on SLB’s digital sustainability platform. These include Aramco’s Flare Monitoring System (FMS) solution, as well as a new decarbonisation planning solution for the forecasting of emissions, and simulating scenarios that aim to determine optimal GHG emissions mitigation pathways.

ENERGY TRANSITION BRIEFS

An independent taskforce has been launched by the British Chamber of Commerce to map out an orderly energy transition for the North Sea. The North Sea Transition Taskforce will be led by Philip Rycroft, a former permanent secretary in the UK Government, who will pull together the expertise of supply chain businesses, unions, environmental groups, and energy policy experts to deliver the widest possible consensus.

Rystad’s annual report Global Energy Scenarios 2024 has found that solar, wind and battery costs are continuing to drop at unprecedented speed, and capacity is coming online at record pace, with solar installations surging 60% to 360 gigawatts of alternating current capacity (GWac) in 2023. Electric vehicles (EVs) are expected to reach 23% of new passenger car sales this year, compared with 3% only four years ago, while annual investments in new renewable energy infrastructure exceeded oil and gas spending for the first time in 2023.

Getech has entered into an exploration collaboration agreement with Sound Energy. The companies will collaborate to explore for natural hydrogen and helium in Morocco, with the initial phase of the agreement comprising a joint regional screening study to identify areas of potential interest for more detailed assessment by the parties.

After recent approval of the agreed terms for Ireland’s second offshore wind auction, ORESS ‘Tonn Nua’, the country’s Department of the Environment, Climate and Communications has made critical geophysical data sets available. These datasets will support prospective auction participants in their analysis to better inform and de-risk bid preparations.

The US Bureau of Ocean Energy Management has made available the final Environmental Impact Statement for the proposed SouthCoast Wind Project. The project could generate up to 2.4 GW of offshore wind energy, enough to power more than 800,000 homes.

TGS reports third quarter net income of $37 million

TGS has reported third quarter net income of $37 million on revenues of $501 million, compared to net income of $16 million on revenues of $292.5 million in Q3 2023. Operating profit of $60 million compared with $28 million in Q3 2023.

Multi-client revenues of $280 million compared with $160 million in Q3 2023, driven by solid pre-commitments for new investments and increased sales of existing data. Contract sales of $220 million compared to $132.5 million in Q3 2023.

Full-year pro-forma organic multi-client investments were cut to $425-450 million as certain projects have been deferred to 2025. Order inflow reached $423 million during Q3 2024 – total produced backlog is $750 million.

‘We achieved record high utilisation of our OBN crews. Although the utilisation of the 3D streamer fleet has been lower than expected so far this year, we are on a positive trend based on negotiations and tenders. Finally, I’m pleased to see that our solid balance sheet and sound financial policy has prompted substantial upgrades to the credit ratings by both Moody’s and S&P which puts us in a good position to refinance the debt structure at attractive terms,’ said Kristian Johansen, CEO of TGS.

Meanwhile, TGS has launched the US Gulf Coast CO2 Storage Assessment package.

Spanning 70 million acres across the Texas and Louisiana Gulf Coast, the assessment offers insights that streamline storage evaluations and guide strategic decisions. With data from 9000 wells and evaluations of 22 key geologic formations, TGS provides comprehensive formation top and petrophysical interpretations.

Carel Hooijkaas, EVP of new energy solutions at TGS, said: ‘This assessment package is a game-changer for the industry, providing unparalleled data coverage and expert analysis to identify the most viable reservoir and seal formations for CO2 storage.’

The Gulf Coast CO2 Storage Assessment is strategically aligned with the recent Texas General Land Office CCS Lease Sale along the Texas coast. TGS’ initiatives feature a well-structured stratigraphic framework, comprehensive petrophysical analysis, and visual log curves, alongside extensive regional mapping of storage properties and volumetric visualisations.

Finally, TGS has completed its CO2 Storage Assessment for the Michigan Basin. Spanning 50 million acres, the study provides detailed insights into the region’s geological formations, capacity estimates, and site suitability for carbon sequestration.

The study leverages data from 1650 wells in Michigan, alongside core data from Western Michigan University, to conduct an in-depth analysis of key geologic formations and their potential for CO2 storage. The assessment features a fully integrated stratigraphic framework, comprehensive petrophysical analysis, and advanced log curve interpretations. Both reservoir quality and sealing integrity are examined — critical for advancing carbon capture and storage (CCS) initiatives, said TGS.

Shearwater wins OBN project offshore Angola

Shearwater GeoServices has secured its second award from TotalEnergies for an Angolan OBN project. This follows the recent award of a project at Block B32.

The three-month deepwater OBN survey at B20/11, covering the Golfinho and Cameia fields, will utilise the Shearwater OBN platform, comprising the Pearl node and the company’s dual ROV-equipped vessel SW Tasman. The SW Gallien will act as source vessel, and the project will be in direct continuation of the previously announced project.

Shearwater CEO Irene Waage Basili said: ‘These consecutive projects underline the strength of our long-term strategy and add to the already impressive utilisation of the Shearwater OBN platform. We are reshaping ocean-bottom seismic in West Africa, breaking through the industry’s quality and cost curves as leading OBN technology is adopted.’

Vessel SW Tasman is mobilising for the three-month project.


Angle-restricted FWI for shallow reservoir characterisation

Isabel Espin1*, Laurene Michou1, Nicolas Salaun1, Daniela Donno1, Øystein Wergeland2, Katrine Gotliebsen2, Diego Carotti1, Joachim Mispel2 and Ståle Høgden3

Abstract

In the Barents Sea, bright amplitudes on seismic data below the Base Quaternary may indicate the presence of hydrocarbon reservoirs. To allow characterisation and quantitative interpretation of these potential reservoirs, analysis of amplitude versus reflection angle (AVA) information is crucial. Even with the availability of high-quality Q-Kirchhoff pre-stack depth migration (QPSDM) products, transmission-absorption relating to shallow gas pockets along with residual surface-related and interbed multiples may compromise the imaging of these shallow reservoirs. By using the full wavefield, imaging from full-waveform inversion (FWI Imaging) provides a reflectivity image without the need to perform the usual pre-processing and migration steps. While FWI Imaging has delivered superior results in terms of imaging compared to more conventional migration methods, it has so far only provided a structural image, without access to common image gathers used to derive AVA information. In this study, we extract elastic information from acoustic FWI Images, generated from impedance updates using angle-restricted raw seismic data. The results of the angle-restricted FWI flow provide comparable AVA products to those obtained from a conventional QPSDM approach and a cleaner and more structurally appropriate AVA image in areas with complex near-surface geology suffering from lack of resolution, noise content or absorption-transmission effects.

Introduction

The Greater Castberg area of the Barents Sea is characterised by a hard water-bottom, at approximately 400 m in depth, followed by a rapid lateral change in geology from west to east, across the Bjørnøyrenna Fault Complex and the Polhem Sub-platform to the Loppa High (Sollid et al., 2021). Overlaying this variable geology, a heterogeneous Quaternary layer was deposited, characterised by the presence of shallow gas pockets, gas hydrates, pockmarks, and iceberg scours. Below the base Quaternary interface, identified as a strong seismic reflector, bright amplitudes suggesting the presence of hydrocarbons can be observed, which can be associated with sandstone reservoir plays located within the faulted blocks, as shown in Figure 1. To understand and delineate these possible reservoirs, analysis of the seismic amplitude versus

1 Viridien | 2 Equinor | 3 Vår Energi

* Corresponding author, E-mail: isabel.espin@viridiengroup.com

DOI: 10.3997/1365-2397.fb2024101

Figure 1 Geological cross section of the Johan Castberg area, illustrating the typical setting for discoveries and prospects sitting along the Bjørnøyrenna Fault Complex (from Sollid et al., 2021, reprinted by permission of the AAPG whose permission is required for further use).

reflection angle (AVA) information is crucial (Ostrander, 1984; Gassaway and Richgels, 1983). In 2019, a source-over-streamer (Vinje and Elboth, 2019) dataset was acquired with the goal of obtaining a high-resolution image of the overburden and deeper targets from the Early Jurassic to the Early Cretaceous formations. After this acquisition, an advanced processing and velocity model building sequence was designed to build a 13 Hz visco-acoustic full-waveform inversion (Q-FWI) velocity model and obtain a high-resolution Q-Kirchhoff pre-stack depth migration (QPSDM) (Salaun et al., 2019, 2021). However, imaging of these shallow reservoirs remains challenging as, in addition to the transmission-absorption caused by the shallow gas pockets, surface-related and interbed multiples contaminate the image.

FWI imaging (Zhang et al., 2020) has been successfully applied in the Barents Sea (Espin et al., 2023) to obtain ultra-high-resolution images. By inverting the full wavefield (primary reflections, ghosts, multiples and diving waves), FWI imaging can provide an elegant solution to obtain a reflectivity image without having to perform the usual pre-processing and migration steps. The resulting image demonstrates an improved signal-to-noise ratio (SNR) as well as continuity and focusing of the events when compared to conventional imaging methods. However, acoustic FWI Imaging lacks the ability to access the AVA or elastic information needed to properly characterise reservoirs and perform quantitative interpretation. One possible solution would be to employ elastic FWI to invert for Vp and Vs. This approach works best with a multi-sensor recording system, such as ocean-bottom nodes, where pressure and shear wave velocity can be properly decoupled thanks to the recording of vertical and horizontal components (Masmoudi et al., 2024). However, this inversion might be poorly constrained for the towed-streamer data available here. Another solution is to extract elastic information through acoustic FWI, as proposed by Warner et al. (2022). AVA information can be retrieved by running several acoustic FWIs using angle-restricted raw seismic data as an input. In this paper, we describe how a high-resolution and angle-restricted acoustic FWI flow can be applied to successfully image and characterise shallow reservoirs.

FWI imaging to resolve imaging challenges

Despite the high quality of the conventional processing applied to our study area (Poole et al., 2020), the use of primary energy alone does not allow optimal imaging of the subsurface, particularly the near surface. To improve the imaging, we performed a FWI velocity update up to a maximum frequency of 150 Hz (Figure 2c), inverting the full wavefield (primary reflections, ghosts, multiples and diving waves), which allowed retrieval of detailed high-resolution velocity variations between the various geological layers. The corresponding 150 Hz FWI image increased both the resolution and the SNR of the subsurface (Figure 2b) compared to the legacy QPSDM (Figure 2a). The FWI image revealed fine structures undetectable on the conventional QPSDM result and helped with the understanding and delineation of the subsurface.

While this improved image is expected to enhance reservoir understanding, it could also be of great interest for near-surface characterisation and for shallow hazard detection. One of the goals of shallow hazard surveying is to ensure that drilling will not cross a gas pocket in the first few hundred metres below the surface. Dedicated site surveys are often conducted in order to image the specific area with high resolution to spot possible drilling hazards in the very near surface. By using the full recorded wavefield, FWI imaging can increase the resolution of the images and can be used for shallow hazard detection with the benefits of the 3D imaging (Dinh et al., 2023).

The benefits of near-surface imaging can be observed in Figure 3, where the image is focused on the water bottom and the first hundred metres below. As for the previous observations, with the full wavefield, imaging of the fault is dramatically improved (Figures 3a, 3d), showing possible continuity of the fault up to the seabed. The auto-tracking of the water bottom performed over the two volumes (Figures 3b, 3e) illustrates the increased resolution of the FWI image. A clear map of the rugose and structured water bottom in this region would be of great help to plan surface installations and drilling. In our case, we chose to display the minimum amplitude along the base Quaternary coming from both conventional and FWI imaging (Figures 3c, 3f). In both cases, gas pockets can be reasonably well detected with this method, but the increased resolution observed on the FWI image is still visible on this map. Gas pockets appear sharper and smaller possible pockets are now visible. These gas pockets present along the Base Quaternary create seismic imaging problems for underlying targets. Indeed, this strong impedance contrast tends to reflect a large part of the emitted signal, reducing the SNR below these events. These gas accumulations are also strong internal multiple generators, along with the hard and rugose water bottom.

Figure 2 Comparison of a) legacy QPSDM image, b) 150 Hz FWI image and c) 150 Hz FWI velocity. FWI imaging improved event continuity (blue arrow) and fault definition (white arrow). Thin-layer velocity slowdown highlights possible changes in rock property or fluid content (green arrows).

The sections in Figure 4 illustrate this problem. On Figure 4a, from the legacy QPSDM, the poor SNR and internal multiples are indicated by the pink arrow. These internal multiples, that are not always obvious and depend on the varying impedance contrast of the gas, can be misleading and interpreted as a possible flat spot. FWI imaging naturally accommodates the modelling of internal multiples, and is therefore an excellent tool for confirming or dispelling the presence of flat spots in such an area and ensures the same interpretation quality all along the seismic sections. On Figure 4b, showing the FWI image, continuity below the gas pocket is clearly better and confirms the absence of flat spots. This is critical in this case, as possible reservoirs could be located at similar intervals where these multiples are present in the image, as indicated in Figure 1.

In this example, the structural image appears to be sufficient to define the presence or absence of a possible reservoir. However, having access to elastic information via amplitude-versus-angle analysis is often of great importance to further characterise reservoirs. To complement the information provided by the FWI image, an angle-restricted FWI was performed.

Figure 4 Comparison of a) legacy QPSDM image and b) 150 Hz FWI image below a gas pocket accumulated along the Base Quaternary. The pink arrow indicates an internal multiple bouncing between the water bottom and the gas pocket which interferes with the QPSDM image, but is correctly modelled by the FWI and hence not visible in b) (white arrow).

Angle-restricted FWI

The first element for the angle-restricted acoustic FWI is the initial velocity model. In our case, the 150 Hz FWI velocity model obtained with the full-angle data and full wavefield was filtered at 100 Hz and used as input in a 100 Hz acoustic angle-restricted FWI. The second element is the separation of the recorded data into three angle ranges: near 5°-15°, mid 15°-25° and far 25°-35°. These angles are based on 3D ray-tracing using the initial model. Similar to the full-offset FWI, we start from the raw shot records. The same wavelet, extracted from the data, was used for the three independent inversions. In contrast to the method proposed by Warner et al. (2022), and in the context of this study, we invert for impedance rather than velocity. Initial impedance was calculated from the inverted 150 Hz velocity model filtered to 100 Hz and density estimated from the Gardner relationship.
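The initial impedance construction described above can be sketched in a few lines. The Gardner coefficients used here (a = 0.31, b = 0.25, for Vp in m/s and density in g/cc) are the commonly quoted defaults and an assumption on our part, as is the toy velocity profile; the article does not state the exact coefficients used.

```python
import numpy as np

def gardner_density(vp, a=0.31, b=0.25):
    """Gardner's relationship: density (g/cc) from P-velocity (m/s).

    a=0.31, b=0.25 are the commonly quoted defaults; the study does
    not state which coefficients were used."""
    return a * vp ** b

def initial_impedance(vp):
    """Initial acoustic impedance model: I = rho * Vp, with density
    estimated from the Gardner relationship, as in the workflow."""
    return gardner_density(vp) * vp

# Hypothetical 1D velocity profile (m/s) for illustration only
vp = np.array([1480.0, 1900.0, 2200.0, 2050.0])
ip = initial_impedance(vp)
```

In the workflow itself, this calculation is applied to the inverted 150 Hz velocity model (filtered to 100 Hz) rather than to a toy trace.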

To evaluate the quality of the inversion, modelling was performed with the initial FWI velocity and density models (Figure 5b) and compared with the updated angle-restricted models (Figure 5c). The analysis focused on a known reservoir, located at approximately 700 m depth. Despite the hard water bottom reflector, which generated strong ghosts and multiples, the reflection event associated with the reservoir was visible on the shot gather (green arrows in Figure 5a). The initial velocity

and density models allowed modelling of the various events with accurate kinematics, as seen in Figure 5b. However, they lacked accuracy in terms of amplitude fidelity, as observed with the residual (Figure 5d). This amplitude-related residual was as expected because only the starting velocity model was obtained from a previous FWI, the starting density model having been derived from Gardner’s relationship. When inverting for impedance using FWI on each angle range separately (Figure 5c), both the kinematic and amplitude matching of the modelled data with the observed data were improved.

Figure 3 Comparison of legacy QPSDM image and 150 Hz FWI image for a shallow cross-section (a, d), picked water bottom horizon (b, e) and minimum amplitude map within the Quaternary layer (c, f). Depths of maps c and f are shown by the dashed line on Figures a and d. On the FWI image, near-surface faults are more visible (white arrows). The associated picked water bottom shows iceberg scours (blue arrows) that are not visible with conventional imaging. When focusing on the amplitude map, FWI imaging improves gas pocket delineation (yellow arrows), facilitating improved potential hazard detection.

Figure 5 100 Hz FWI modelled shot and residual over a known reservoir, used to assess the quality of the angle-restricted acoustic FWI. a) Observed shot; b) modelled shot with the starting FWI model and its residual in (d); c) modelled shot with the 100 Hz angle-restricted FWI update, one impedance model per angle range, and its residual in (e).

Figure 6 FWI sub-images for the near (a), mid (b) and far (c) angle ranges. AVA behaviour can be assessed by deriving cross-products between the intercept and gradient. On this cross-product, bright spots are clearly visible, indicated by the blue arrows.

Figure 7 Comparison of the intercept and intercept-gradient cross-product between QPSDM (a and c, respectively) and angle-restricted acoustic FWI (b and d, respectively) at 100 Hz. The FWI-based intercept exhibits improved resolution and fault continuity. A similar trend, in terms of AVA anomalies, is observed between the QPSDM and the FWI result.

The impedances obtained from the angle-restricted FWI were then converted to reflectivity by performing the derivative of the impedance with an angle compensation, which applied the required AVA correction to the angle-restricted acoustic FWI Images (Warner et al., 2022). We hence obtained three FWI sub-images for the near, mid and far-angle range (Figure 6a, 6b, 6c). These sub-images not only have the benefits of the FWI Image, namely the sharp faulting and the improved SNR, they also preserve the AVA behaviour. To assess the quality of the results, AVA attributes, such as intercept, gradient and cross-product, can be computed from the FWI sub-images, as performed in Figure 6d. We can appreciate here the presence of small hydrocarbon pockets sitting along the faults and their amplitude increasing as the incidence angle increases.
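The impedance-to-reflectivity step can be illustrated with the differential form of the normal-incidence reflection coefficient. The angle compensation of Warner et al. (2022) applied in the study is deliberately omitted from this sketch, and the two-layer impedance trace is a made-up example.

```python
import numpy as np

def impedance_to_reflectivity(ip, dz=1.0):
    """Reflectivity from an impedance trace via R(z) ~ 0.5 * d(ln I)/dz,
    the differential form of R = (I2 - I1) / (I2 + I1).

    The per-angle compensation applied in the study (Warner et al., 2022)
    is not reproduced here."""
    return 0.5 * np.gradient(np.log(ip), dz)

# Two-layer toy trace: the reflectivity concentrates at the interface
ip = np.concatenate([np.full(50, 3.0e6), np.full(50, 4.5e6)])
refl = impedance_to_reflectivity(ip)
```

Applied to each of the three angle-restricted impedance volumes, this kind of derivative yields the near, mid and far sub-images discussed above.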

To validate the AVA information obtained from our FWI sub-images, the intercept and cross-product were also compared with the legacy QPSDM (Figure 7). The structural uplift observed with the full-angle FWI Image was preserved, fault networks were sharper and the shallow gas pockets were better captured. Regarding reservoir characterisation, the class 3 AVA was captured for this known gas reservoir (Figure 7d). Analysing Figure 7, AVA trends were quite similar between the angle-restricted FWI and the QPSDM, even in the deeper part of the section, below the arrival of the first-order surface-related multiple.
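A minimal sketch of deriving intercept, gradient and cross-product from three angle stacks follows. It assumes a two-term AVA model fitted at the centre angles of the near, mid and far ranges; the article does not state which representative angles or fitting scheme were used, so both are assumptions here.

```python
import numpy as np

def intercept_gradient(near, mid, far, angles_deg=(10.0, 20.0, 30.0)):
    """Least-squares fit of R(theta) = A + B*sin^2(theta) to three
    angle-stack amplitudes per sample. The centre angles of the
    5-15, 15-25 and 25-35 degree ranges are assumed."""
    theta = np.radians(angles_deg)
    design = np.column_stack([np.ones(3), np.sin(theta) ** 2])
    data = np.vstack([np.atleast_1d(near), np.atleast_1d(mid), np.atleast_1d(far)])
    (intercept, gradient), *_ = np.linalg.lstsq(design, data, rcond=None)
    return intercept, gradient, intercept * gradient

# Hypothetical class 3 sample: amplitude brightens (grows more negative)
# with angle, giving negative A and B and a positive A*B cross-product
A, B, AB = intercept_gradient(-0.10, -0.13, -0.18)
```

A positive A·B cross-product at a negative-amplitude event is the signature highlighted as bright spots in the cross-product displays.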

After confirming the good match in AVA behaviour between the well-established PSDM and our proposed angle-restricted FWI method, another assessment was made, this time in the Nordkapp Basin, also in the Barents Sea, where a gas accumulation is visible as strong bright amplitudes in the FWI image.

For this test, a model-based stratigraphic inversion was run using the FWI sub-images, derived with the same angle-restricted FWI workflow as explained before. In this case, the process simultaneously inverted the FWI sub-image reflectivities obtained for the different angles, using as input the same starting model as the angle-restricted FWI (Coulon et al., 2006). During inversion, the initial model is iteratively perturbed using a simulated annealing procedure to find a global solution that optimises the match between the input sub-images and the corresponding synthetics, calculated by convolution using the full Zoeppritz reflectivity equations. The Vp and Vs properties obtained after the inversion correspond, as expected, to a class 3 AVA anomaly (Figure 8), with a slow-down in the gas pocket for Vp, while no fluid effect is visible for Vs. This confirmed the good decoupling of the Vp and Vs estimation and hence the quality of the FWI sub-images for further reservoir characterisation.
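To make the class 3 behaviour concrete, the sketch below forward-models the angle-dependent reflection coefficient with the three-term Shuey approximation. The stratigraphic inversion in the study uses the full Zoeppritz equations; Shuey is shown only for compactness, and the shale/gas-sand properties are hypothetical round numbers, not measured values from the field.

```python
import numpy as np

def shuey_refl(vp1, vs1, rho1, vp2, vs2, rho2, theta_deg):
    """Three-term Shuey (1985) approximation to the Zoeppritz PP
    reflection coefficient; layer 1 overlies layer 2."""
    th = np.radians(theta_deg)
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    r0 = 0.5 * (dvp / vp + drho / rho)                      # intercept
    g = 0.5 * dvp / vp - 2 * (vs / vp) ** 2 * (drho / rho + 2 * dvs / vs)
    f = 0.5 * dvp / vp                                      # far-angle term
    return r0 + g * np.sin(th) ** 2 + f * (np.tan(th) ** 2 - np.sin(th) ** 2)

# Hypothetical shale over gas sand: class 3 response, i.e. negative
# intercept that grows more negative with angle
theta = np.array([0.0, 10.0, 20.0, 30.0])
r = shuey_refl(2300.0, 900.0, 2.25, 2000.0, 1100.0, 1.95, theta)
```

With these toy contrasts both the intercept and gradient come out negative, reproducing the brightening-with-angle behaviour described for the gas pocket.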

Acoustic FWI for AVA: benefits and limitations

The benefits of angle-restricted FWI are particularly visible when conventional imaging tools, such as Kirchhoff migration, struggle to image the reflectors. Sub-optimal imaging results obtained by conventional methods can be attributed to different reasons, such as the complexity of the raypaths, lack of resolution, transmission, absorption, noise or signal leakage during the conventional data pre-processing. This is illustrated in Figure 9, where the QPSDM (Figures 9a and 9c) shows a possible reservoir affected by the presence of a shallow gas pocket in the overburden. The

Figure 9 Comparison of intercept-gradient cross-product, overlaid with intercept for subline view, QPSDM (a) and angle-restricted acoustic FWI (b), over a shallow gas pocket. Improved continuity vertically and laterally is observed in the FWI case (green arrows), showing sharp boundaries at the fault planes. Similar observations can be made in the corresponding depth slice view at approximately 575 m (c and d), with improved coherency and reduced noise.
Figure 8 A model-based stratigraphic inversion was conducted over a shallow gas pocket, in the Nordkapp Basin, using angle-restricted FWI Images as input. Depth slice view illustrates the obtained Vp (a) and Vs (b). As expected, a slow-down of Vp was observed within the gas pocket, while it is absent in the inverted Vs. Courtesy of AkerBP.

lack of resolution and absorption compensation in the near surface prevents the migration operator from collapsing the diffractions at the location of the faults, which appear smeared, thus compromising the imaging of possible hydrocarbon traps. The AVA cross-product obtained with the FWI flow (Figures 9b and 9d) exhibited a remarkably improved result in this shallow area. The FWI flow also had the benefit of starting from raw recorded data, reducing possible signal leakage or bias in the data processing stage.

Regarding the limitations of this method, it is important to consider that the angle mutes applied to the data are based on the incidence angle of the primary reflection, which is not valid for multiples. This means that the FWI Image for a given angle range would need to satisfy the amplitudes for both the primary reflection angle and the typically smaller multiple reflection angle. As both amplitudes, primary and multiple, cannot be solved at the same time because they correspond to different angles, there is a risk of primary amplitudes not being fully respected. In the case of this study, despite the limitations, results deeper than the multiple arrival depth (around 1.2 km depth) seemed to preserve the expected AVA response compared to the QPSDM result, suggesting that we can qualitatively identify the AVA anomalies. However, this method is particularly recommended for shallow reservoir characterisation, and results deeper than the multiple arrival depth must be interpreted with caution. Angle-restricted FWI could then be a complement to the FWI image used for shallow hazards, refining understanding of the near surface and providing more reliable information about potential drilling hazards.
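The angle mismatch between primaries and multiples can be illustrated with a straight-ray sketch. This is a simplification for intuition only (the study uses 3D ray tracing in the FWI velocity model), and the depth and offset below are hypothetical round numbers.

```python
import numpy as np

def straight_ray_angle(offset, depth):
    """Straight-ray incidence angle (degrees) at a flat reflector."""
    return np.degrees(np.arctan(offset / (2.0 * depth)))

# A primary from a reflector at 700 m depth, recorded at 500 m offset:
primary = straight_ray_angle(500.0, 700.0)

# A first-order surface-related multiple of the same reflector travels an
# extra round trip, behaving roughly like a primary from twice the depth,
# so it reflects at a markedly smaller angle:
multiple = straight_ray_angle(500.0, 2.0 * 700.0)
```

With these numbers the primary falls in the mid-angle mute (15°-25°) while its first-order multiple reflects at roughly half that angle, which is exactly why a mute defined on primary angles cannot honour both amplitudes at once.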

Conclusions

In the complex geological context of the Barents Sea, three angle-restricted FWIs were run to generate three angle reflectivities, for near, mid and far angles. With this methodology, high-resolution elastic information revealing the expected class 3 AVA response can be derived directly from raw data without the need for a full conventional processing and imaging sequence. Analysis of these results showed comparable amplitude variations between angle stacks obtained with acoustic FWI and with the conventional flow. Cross-product QCs extracted from the FWI sub-images showed good coherence with the expected results observed from the QPSDM. Benefits of the angle-restricted FWI were observed where the conventional Kirchhoff migration struggled to provide a sharp image due to lack of resolution, noise content or absorption-transmission effects caused by the presence of shallow gas pockets. In these cases, high-frequency FWI imaging delivered a more detailed, cleaner and more structurally appropriate image than the conventional imaging approach due to the use of the full wavefield and more accurate handling of multiples.

Acknowledgements

We would like to thank Equinor, Vår Energi, Petoro, TGS and Viridien for permission to publish these results. We thank AkerBP for permission to show/publish the Nordkapp basin example. We also thank our colleagues Jean-Philippe Coulon and Bernard Deschizeaux for useful discussions.

References

Coulon, J.-P., Lafet, Y., Deschizeaux, B., Doyen, P.M. and Duboz, P. [2006]. Stratigraphic elastic inversion for seismic lithology discrimination in a turbiditic reservoir. SEG Expanded Abstract, 2092-2096.

Dinh, H., Latter, T., Townsend, M., and Grinde, N. [2023]. Dual-azimuth FWI imaging and its potential in shallow hazard assessment. 84th EAGE Annual Conference & Exhibition, Extended Abstract.

Espin, I., Salaun, N., Jiang, H. and Reinier, M. [2023]. From FWI to ultra-high-resolution imaging. The Leading Edge, 42(1), 16-23.

Gassaway, G.S. and Richgels, H.J. [1983]. SAMPLE, seismic amplitude measurement for primary lithology estimation. 53rd SEG meeting, Expanded Abstracts, 610-613.

Masmoudi, N., Ratcliffe, A., Bukola, O., Tickle, J. and Chen, X. [2024]. Elastic FWI of Multi-Component Ocean-Bottom Seismic to Update Shear-Wave Velocity Models. 85th EAGE Annual Conference & Exhibition, Extended Abstract.

Ostrander, W.J. [1984]. Plane wave reflection coefficients for gas sands at nonnormal angles of incidence. Geophysics, 49, 1637-1648.

Poole, G., Cichy, K., Kaszycka, E., Vinje, V. and Salaun, N. [2020]. On top of seismic sampling – benefits of high-resolution source-over-streamer acquisition. 82nd EAGE Annual Conference & Exhibition, Extended Abstract.

Salaun, N., Henin, G., Wright, A., Pellerin, S., Deprey, J., Deschizeaux, B., Souvannavong, V., Dhelie, P. and Danielsen, V. [2019]. Capturing the value of high-resolution source-over-streamer acquisition in the Barents Sea. 81st EAGE Annual Conference & Exhibition, Extended Abstract.

Salaun, N., Reinier, M., Espin, I. and Gigou, G. [2021]. FWI velocity and imaging: A case study in the Johan Castberg area. 82nd EAGE Annual Conference & Exhibition, Extended Abstract.

Sollid, K., Henriksen, L.B., Hansen, J.O., Thießen, O., Ryseth, A., Knight, S. and Groth, A. [2021]. Johan Castberg: the first giant oil discovery in the Barents Sea. AAPG Memoir 125: Giant Fields of the Decade: 2010-2020, R.K. Merrill and C.A. Sternbach (Eds.), 213-248.

Vinje, V. and Elboth, T. [2019]. Hunting high and low in marine seismic acquisition; combining wide-tow top sources with front sources. 81st EAGE Annual Conference & Exhibition, Extended Abstract.

Warner, M., Armitage, J., Umpleby, A., Shah, N., Debens, H. and Mancini, F. [2022]. AVO Determination Using Acoustic FWI. 83rd EAGE Annual Conference & Exhibition, Extended Abstract.

Zhang, Z., Wu, Z., Wei, Z., Mei, J., Huang, R. and Wang, P. [2020]. FWI Imaging: Full-wavefield imaging through full-waveform inversion. 90th SEG Annual International Meeting, Expanded Abstract, 656-660.

DATA MANAGEMENT AND PROCESSING

Submit an article

The industry has continually innovated to offer greater volume and better quality data. Geoscience companies are competing to offer improved data processing and management packages for new acquisition and reprocessing of vintage data, enhancing their packages using machine learning and artificial intelligence. Greater compute power is aiding geoscientists’ application of complex algorithms and integration with other types of data to provide a more accurate and sophisticated picture of the subsurface – for oil and gas projects and, increasingly, for renewable energy projects.

Rebecca Head outlines a systematic approach to increasing the value of subsurface data ecosystems.

Gordon Poole et al demonstrate how the use of multiples in imaging may provide improved shallow illumination and potentially reduce the requirement for extensive site-survey acquisition in some areas.

Richa Rastogi et al investigate 2D TTI and 3D isotropic RTM, focusing on the impact of shot-centric and fold-centric aperture selection on computational efficiency and imaging accuracy.

Steven L. Roche presents a specific processing flow for an onshore 9C4D seismic project to monitor CO2 injection in a carbonate reservoir.

Tim Seher et al use rotational measurements from a new type of ocean bottom node for the attenuation of non-compressional energy.

Richard Mohan et al explore key components in transforming data management to enable AI-ready subsurface data.

Neil Hodgson et al reflect on how geoscientists are the bees of processing intelligence, cross fertilising solution-strategies and benefiting from the diversity of practitioners they interact with as they explore the diversity of the world’s geology.

First Break Special Topics are covered by a mix of original articles dealing with case studies and the latest technology. Contributions to a Special Topic in First Break can be sent directly to the editorial office (firstbreak@eage.org). Submissions will be considered for publication by the editor.

It is also possible to submit a Technical Article to First Break. Technical Articles are subject to a peer review process and should be submitted via EAGE’s ScholarOne website: http://mc.manuscriptcentral.com/fb

You can find the First Break author guidelines online at www.firstbreak.org/guidelines.

Special Topic overview

January Land Seismic

February Digitalization / Machine Learning

March Reservoir Monitoring

April Underground Storage and Passive Seismic

May Global Exploration

June Technology and Talent for a Secure and Sustainable Energy Future

July Modelling / Interpretation

August Near Surface Geo & Mining

September Reservoir Engineering & Geoscience

October Energy Transition

November Marine Acquisition

December Data Management and Processing

More Special Topics may be added during the course of the year.

Optimising subsurface data management: a systematic approach

Rebecca Head1* outlines a systematic approach to increasing the value of subsurface data ecosystems.

Abstract

Subsurface specialists need complete, high-quality, gold-standard datasets in order to make the best interpretations, which ultimately lead to business decisions. The plethora of data types, data sources, file systems, physical locations, and team structures make managing subsurface data in energy companies a complex challenge.

Poor subsurface data management has resulted in interpretations being made that do not reflect all of the data or the best data. Furthermore, storage costs spiral due to data duplication, and companies suffer the consequences of regulatory infringement if they fail to meet the rules of national data repositories, such as the UK National Data Repository (UK NDR).

This article outlines a systematic approach to increasing the value of subsurface data ecosystems. First the ecosystem is programmatically assessed to find pain points such as duplication, lack of quality assurance and quality checking, or regulatory infringement. Next a rationalisation project is implemented to fix these issues. Finally, the newly streamlined subsurface data ecosystem is rigorously maintained through monitoring, ongoing reporting, and management. Four case studies are included to demonstrate the value that this approach has delivered to energy companies.

Introducing the subsurface data ecosystem

Access to complete, high-quality data is a prerequisite to making good subsurface interpretations, and maximising business value. Subsurface specialists — such as geophysicists, geologists, petrophysicists and reservoir engineers — can work most effectively when they have gold-standard data, easily accessible. This can only be achieved through corporate data governance; i.e., ensuring that subsurface data are curated in repositories and reference projects that are searchable, conveniently organised, and rapidly accessible to end users.

Unfortunately, all too often such well-curated data are not readily available. Subsurface interpreters report losing precious time searching for data. According to the International Data Corporation (IDC), ‘A quarter of upstream data users currently spend more than 80% of their time on data handling activities’ (Verma, 2023). Furthermore, nearly one third of respondents to an IDC survey thought that data quality was one of the main

1 Cegal

* Corresponding author, E-mail: Rebecca.Head@Cegal.com DOI: 10.3997/1365-2397.fb2024102

technical challenges when analysing data in G&G applications (Verma, 2023). Poor decisions are made due to poor quality-assurance (QA) and poor quality-checking (QC) of data. For example, a mid-sized energy company recently drilled a dry well due to data management problems. A deviation survey was incorrectly loaded into an interpretation project, and vital QC steps were missed. This incorrectly located object was not spotted and resulted in the well being drilled in the wrong place — a costly, but avoidable error.

The quantity and complexity of our subsurface data are increasing as new technologies are developed. Our understanding of the subsurface has benefited from advances in seismic data collection, including broadband data recording, simultaneous sources, full-azimuth, and higher channel counts (Monk, 2014). According to Monk (2014), ‘The overarching trend common to virtually every seismic survey is the need to acquire more and more data – more angles, more density, more channels, more frequencies’.

Making useful and accurate predictions, whether for oil and gas extraction, carbon storage, or wind farm installation (to name just a few!), requires all the relevant data to be integrated to best model the subsurface. These data comprise complex ecosystems that evolve as new data are created and new interpretations made.

Every subsurface data ecosystem is different. However, they often share common challenges, such as regulatory non-compliance, data duplication, unused projects, orphaned ZGY files, and multiple versions of interpretation projects running in parallel. For example, a detailed subsurface data assessment at a mid-sized UK oil and gas operator uncovered 102 TB of duplicated seismic data. Another operator had 69% duplication of ZGY data internal to interpretation projects, 25% digital well duplication, and 59 TB of unused interpretation projects. Cegal’s analysis suggests that most UK operators that have legacy wells (drilled 1965-1990) are only 25-30% compliant with UK NDR regulations.

Subsurface data ecosystems are often poorly understood and can lack systematic management. This comes about through the adoption of user-centric workflows that do not systematically govern the whole data environment. Management of new data is often prioritised to the detriment of legacy data, so these older

datasets end up being underutilised. The tight deadlines and short timelines inherent in the energy industry have encouraged speed over quality and long-term consistency when it comes to data management and utilisation. As described previously, this short-term focus can lead to missed opportunities and increased operational expenses that are detrimental to the business.

In this article, a systematic approach is described that builds a holistic understanding of the subsurface data ecosystem. This enables the value of subsurface data to be maximised to produce the most accurate interpretations and make the best business decisions. Case studies will demonstrate how this approach enables energy companies to extract more value from subsurface data and interpretations, as well as ensuring compliance with UK NDR regulations.

A systematic approach

Herein a systematic approach to understanding and maximising value from a subsurface data ecosystem is outlined.

It comprises three key stages: 1) Assessment, 2) Rationalisation, and 3) Maintenance and Monitoring (Figure 1). To deliver each stage successfully, multi-skilled teams need to work seamlessly towards a common goal. Subsurface domain experts must work with seismic and well data managers, GIS specialists, compliance experts, programmers, data architects, and project managers to build, monitor and maintain a high-quality data ecosystem. This collaborative approach requires a clear roadmap for the project and defined goals. Wherever possible, technology is employed to expedite repetitive tasks, thereby minimising human error and reducing time to value.

Figure 1 Overview of the recommended systematic approach. The Cegal Data Program consists of three phases, shown here from left to right: a thorough assessment of the existing data ecosystem (Data Assessment), a data rationalisation phase to clean and organise it (Data Rationalisation), and the development of a monitoring policy, delivered as-a-service (Maintenance and Monitoring).

Figure 2 Illustration of a Data Assessment focusing on getting an extensive overview of Petrel* projects available within a company. The overview provides an at-a-glance summary of the total number of projects, active projects, those not modified over a specific period, along with the modification year and the application version used.

Cegal’s systematic approach has been applied to two main subsurface data domains: 1) the subsurface data and project landscape; and 2) UK NDR compliance. The value of assessing, rationalising and maintaining these subsurface data domains is described in the ‘Case studies’ section.

Assessment

A data assessment is carried out to provide a holistic view of the data ecosystem, including what data are available and

where they are stored. This stage can be programmatically automated using proprietary file crawlers that scan file systems and projects.

The data gathered include data types, data volumes, file types, file locations, duplications, and corruptions (Figure 2).

This ‘bird’s eye view’ of the subsurface data ecosystem allows challenges to be identified, and a plan to be created to maximise subsurface data value.
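As a rough illustration of the crawling stage (not the proprietary Cegal crawlers, which are far richer), a minimal file-system inventory in Python might group files by extension and report counts, total size, and last-modified year per data type:

```python
# Minimal sketch of a file-crawler inventory for a data assessment.
# Assumption: the root path and extension-based grouping are illustrative;
# real crawlers also parse project internals, detect corruption, etc.
import os
from collections import defaultdict
from datetime import datetime

def inventory(root):
    stats = defaultdict(lambda: {"count": 0, "bytes": 0, "last_modified": 0})
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            ext = os.path.splitext(name)[1].lower() or "(none)"
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip unreadable files rather than abort the scan
            entry = stats[ext]
            entry["count"] += 1
            entry["bytes"] += st.st_size
            entry["last_modified"] = max(entry["last_modified"], st.st_mtime)
    return stats

# Example report over a hypothetical data share:
#   for ext, s in sorted(inventory("/data/subsurface").items()):
#       year = datetime.fromtimestamp(s["last_modified"]).year
#       print(f"{ext:10s} {s['count']:6d} files "
#             f"{s['bytes'] / 1e9:8.2f} GB  last modified {year}")
```

A summary of this kind is what feeds at-a-glance dashboards such as the one in Figure 2 (total projects, inactive projects, modification year).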

Figure 3 This example illustrates the number of projects per application, the duplication rate, and the corresponding storage consumption. Duplicate projects trigger unnecessary search time and lead to inefficiencies during interpretation, characterisation, and modelling phases.
Figure 4 Another example that clearly highlights the potential for disk space savings by reducing duplicate projects.

Rationalisation

A rationalisation project is then carried out to address the challenges identified in the data assessment. The undertaken work is unique to the data ecosystem and is guided by the energy company’s key aims and priorities. These vary, but often include:

• Ensuring that only ‘gold-standard’ datasets are used in interpretations and decision-making (Figures 3 and 4),

• Rationalising data gained through mergers and acquisitions,

• Avoiding costly regulatory infringement by meeting the requirements of the UK NDR,

• Mitigating spiralling data storage costs by deleting and archiving (Figure 5).

Maintenance and monitoring

Subsurface data ecosystems are not static; they grow and evolve as new data are acquired, new interpretations are made, and

Figure 6 Example of active monitoring. This illustration shows a web-based application that generates weekly reports, allowing users to track the number of files in use per application, monitor storage consumption, and identify opportunities for optimisation. The regular reporting ensures continuous oversight, helping to manage resources efficiently and proactively address potential issues.

businesses merge. Post-rationalisation, it is imperative that the subsurface data ecosystem is maintained. This is achieved through adherence to corporate data governance, and comprehensive monitoring (Figure 6).

Nearly half of respondents surveyed by the IDC said that lack of data governance was one of the greatest hurdles to updating their organisation’s data management strategy (Verma, 2023). By implementing corporate data governance we have helped energy companies to fast-track the identification of good data, ensuring that interpretations are built on the best data, and less time is spent searching for data, thereby decreasing risk and costs.

Case studies

The subsurface data and project ecosystem comprises:

• All subsurface data types (e.g., seismic and well data)

Figure 5 This figure highlights two key aspects: the total number of duplicate projects that could potentially be deleted, and inactive projects that may benefit from a different archiving strategy.

• The range of project types created in which these data are analysed and interpreted (e.g., Petrel, Kingdom**, tNavigator***)

• The interpretations and models created by subsurface experts (e.g., reservoir models).

1. Creating ‘gold-standard’ datasets

Seismic data come in many formats, from standard SEG-Y to proprietary formats. Seismic data in native SEG-Y are loaded into Petrel, creating a Petrel-formatted version. If subsequently loaded into another platform, such as Paleoscan****, further proprietary formats will be created, resulting in several versions of the same seismic data. Furthermore, seismic data can be internalised into a project, breaking the link to the original data source. The result is that multiple versions of seismic data are available, there is a lack of clarity as to which should be used in interpretations, and new or updated data are not always incorporated into the latest interpretations. To mitigate these issues for a mid-sized UK energy company, Cegal built a new seismic data library. First, everything on the company’s system was indexed using proprietary file crawlers to ensure no data were overlooked. New standards were introduced, including those for seismic naming, data formats, and data loading. The seismic data were then programmatically analysed to identify the ‘best of’ datasets, any internal seismic data were externalised using Cegal tools, and a new library was created by moving the refined data into a central repository. This work ensured that the subsurface teams were provided with high-quality datasets in standard formats, easily accessible from one location.

2. Rationalising data post M&A

Mergers and acquisitions increase the complexity of the subsurface data ecosystem. After the transaction completes, integration is a resource- and time-intensive process. This integration must also happen on the technical side, where data are crucial to the continuity of operations in the newly created entity. Not only are data acquired from a plethora of new sources and repositories, but the merging companies will likely have different data processes and standards, such as well naming conventions and coordinate reference systems (CRSs).

To maximise the value of their data, acquiring energy companies need to understand what data they now have, and where the data are. Considering that many terabytes or even petabytes of data are usually involved in a merger or acquisition, this is a huge task.

In 2023, Cegal supported a large UK-based energy company in the rationalisation of its subsurface data environment. The company had grown through the merger and acquisition of several independent organisations. This expansion led to a particularly complex subsurface data ecosystem, with inherited inconsistencies in data models, folder structures, and naming conventions. Additionally, overlapping assets from these acquisitions resulted in substantial data duplication.

The work began with an assessment of the entire data ecosystem employing various file crawlers. This identified high levels of duplication amongst seismic datasets, and interpretation projects. A rationalisation strategy was proposed to meet the

client’s subsurface data objectives, leveraging Cegal’s automation technologies to:

• Find, rationalise and consolidate all interpretation projects to a centralised location.

• Ensure all seismic data are externalised from interpretation projects and are in a centralised corporate repository.

• Scan, identify and remove all duplicate seismic data from the network.
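The duplicate-scanning step above can be illustrated with content hashing: files whose digests match are byte-for-byte duplicates regardless of name or location. This is a hedged sketch of one building block, not the actual Cegal automation, which is proprietary:

```python
# Illustrative content-hash duplicate detection for a file tree.
# Assumption: exact binary duplicates only; real-world rationalisation
# also has to recognise re-exported or re-formatted copies of a dataset.
import hashlib
import os
from collections import defaultdict

def find_duplicates(root, chunk_size=1 << 20):
    by_digest = defaultdict(list)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                # Hash in chunks so multi-GB seismic files fit in memory.
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    h.update(chunk)
            by_digest[h.hexdigest()].append(path)
    # Digests seen more than once identify duplicate sets.
    return {d: paths for d, paths in by_digest.items() if len(paths) > 1}
```

In practice one would pre-filter candidates by file size before hashing, since hashing terabytes of unique data is wasted effort.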

Corporate data governance was instigated across the amalgamated assets implementing key principles such as de-duplication, metadata management, and ensuring data quality. Additional rules, like standardising file formats, maintaining version control, and securing sensitive data, help to ensure that data are accessible, consistent, and compliant with organisational standards.

As a result, a centralised gold-standard data environment is now available to users across the newly formed company, ensuring that they have access to the best data for their interpretations and decision-making.

3. Regulatory compliance

Extensive regulations define the way that subsurface data are accessed, stored, and disclosed. Failure to comply with these regulations can have serious financial consequences. Writing in Hart Energy, Trish Mulder highlighted that ‘most companies only have proprietary rights to 10% to 25% of their seismic data’ (Mulder, 2021). The majority of their data are thus licensed or non-proprietary, and companies need to be careful not to infringe usage restrictions and regulations.

Cegal worked with a multinational energy company that acquired an international rival’s UK division. To comply with the terms of multi-client datasets, the company needed to delete unlicensed seismic data post-acquisition, as the presence of unlicensed datasets posed a risk of legal and financial repercussions.

Using proprietary file and project crawlers, Cegal completed an assessment of the data in one week, building a full inventory and analysis of the company’s storage and software projects. Based on this assessment and the client’s priorities, unlicensed seismic files were programmatically removed, and an audit trail was provided for compliance and transparency. According to the client, the project ‘reduced the cycle time by about 90% to locate and remove the target data files’.

Regulatory compliance also encompasses compulsory data sharing with the UK NDR. As previously described, the levels of compliance among many UK oil and gas operators are low, and these numbers fall as the degree of data sharing required by law increases. This noncompliance can be eliminated by employing a systematic approach.

Working with a multi-national energy company, Cegal carried out gap analysis, including programmatic approaches, to elucidate the data that needed to be shared with the UK NDR. This was followed by data discovery exercises to locate missing data. Legacy data were evergreened and the data requisitions backlog was worked through.

Crucially, much of the legacy data held by energy companies does not conform to modern data formats. A multi-skilled team is needed to expose and format the various data to be sent to the UK

NDR, ranging from hardcopy well transcription to digital well datasets. This people-based approach is enhanced and augmented by programmatic tools, but is not ruled by them. By putting in the technically challenging and nuanced work, real value is added to legacy data, as problems are fixed at source, and once-forgotten data are made available.

4. Reducing storage costs

Data volumes are increasing. Not only are we continuously adding to the mountain of legacy subsurface data by shooting more seismic surveys and drilling more wells, but the quantity of data collected in each instance is also rising, as seismic resolution increases and more tools are deployed. Duplication of subsurface data is a common challenge affecting many energy companies, and it has significant cost implications.

Cegal carried out a subsurface data assessment at one energy company in which ~780 TB of duplicated data were identified. Stored on AWS (Amazon Web Services), this would cost approximately $500,000 a year (based on $60 per TB per month), and would result in approximately 160 metric tons of CO2 emissions annually (Jones, 2018; Shehabi et al., 2016). The same applies to on-premises storage where duplicated data increases storage demands, which not only raises costs but also amplifies energy consumption, resulting in a greater environmental impact and higher emissions.
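The arithmetic behind these figures is easy to check. The $60 per TB per month rate is quoted in the text; the emissions factor below is a hypothetical round number chosen only to land in the quoted order of magnitude, not a published cloud-provider figure:

```python
# Back-of-envelope check of the duplicated-storage cost quoted above.
duplicated_tb = 780            # duplicated data identified in the assessment
cost_per_tb_month = 60.0       # USD per TB per month (rate quoted in the text)

annual_cost = duplicated_tb * cost_per_tb_month * 12
print(f"annual storage cost: ${annual_cost:,.0f}")   # $561,600 (~$500k quoted)

# Assumed emissions factor (metric tons CO2 per TB-year); illustrative only.
co2_per_tb_year = 0.2
annual_co2 = duplicated_tb * co2_per_tb_year
print(f"annual emissions: {annual_co2:.0f} t CO2")
```

The rounded result is consistent with the roughly $500,000 per year and ~160 t CO2 figures cited in the article.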

Having identified the duplicated data, a rationalisation project was carried out to deduplicate the file systems. This included externalising seismic data held within interpretation projects, removing identical project duplicates and removing corrupt projects. As a result, the energy company achieved a substantial reduction in the volume of subsurface data, leading

to significantly lower storage costs (Figure 7). Importantly, all valuable and relevant data have been preserved, securely stored, and are now easily accessible to users, enhancing efficiency in future interpretation and analysis workflows. This approach not only ensures cost savings but also promotes more effective and sustainable data management policies, supporting better decision-making and operational agility across the entire organisation.

Conclusions

As described in this paper, the subsurface data ecosystem is inherently complex — it comprises all subsurface data types (e.g., seismic and well), all interpretation projects (e.g., Petrel, Kingdom, tNavigator), and the interpretations and models created by subsurface experts (e.g., reservoir models). As more data are collected and our understanding of the subsurface increases, this ecosystem is becoming increasingly complex. If not properly managed, this can quickly result in unnecessary costs, and data being underutilised.

A systematic approach maximising value from a subsurface data ecosystem has been presented. It comprises three stages: 1) Assessment, 2) Rationalisation, and 3) Maintenance and Monitoring. This method utilises programmatic approaches and the expertise of a multi-skilled team, allowing energy companies to systematically map their existing data ecosystem and take action to address pain points, such as excessive storage costs, duplicate or inactive interpretation and modelling projects, and non-compliance with existing regulations.

Ultimately, this approach ensures the implementation of sustainable data management practices that can smoothly evolve alongside changing business conditions.

Figure 7 Dashboard displaying the results after rationalisation: substantial disk savings were achieved by deleting projects that were either unused or unusable due to issues such as file corruption, absence of coordinate systems, prolonged inactivity, and other factors.

References

Earley, S., Henderson, D. and Sebastian-Coleman, L. (Eds.) [2017]. The DAMA Guide to the Data Management Body of Knowledge (DAMA-DMBOK). Bradley Beach, NJ: Technics Publications.

Jones, N. [2018]. How to stop data centres from gobbling up the world’s electricity. Nature. https://www.nature.com/articles/d41586-018-06610-y

Monk, D. [2014]. Technological Advances Bringing New Capabilities To Seismic Data Acquisition. The American Oil & Gas Reporter. https://www.aogr.com/magazine/editors-choice/technological-advances-bringing-new-capabilities-to-seismic-data-acquisition

Mulder, T. [2021]. E&P Plus Feature: Seismic Data Ownership, Entitlements and the Cost of Compliance. Hart Energy. https://www.hartenergy.com/exclusives/ep-plus-feature-seismic-data-ownership-entitlements-and-cost-compliance-193015

Shehabi, A., Smith, S.J., Sartor, D.A., Brown, R.E., Herrlin, M., Koomey, J.G., Masanet, E.R., Horner, N., Azevedo, I.L. and Lintner, W. [2016]. United States Data Center Energy Usage Report. Energy Technologies Area. https://eta.lbl.gov/publications/united-states-data-center-energy

Verma, G. [2023]. Exploring the Geoscience Data Management Landscape. IDC Energy Insights.

*Petrel is a trademark of SLB.

**Kingdom is a trademark of S&P Global.

***tNavigator is a trademark of Rock Flow Dynamics.

****Paleoscan is a trademark of Eliis.


Seventh EAGE Rock Physics Workshop

The Seventh EAGE Rock Physics Workshop in Cape Town is a great opportunity for professionals and enthusiasts in the field to share their research and ideas. Submitting an abstract would be a valuable way to engage with the latest developments and connect with others in the industry. If you’re involved in rock physics, this could be a fantastic platform to showcase your work and network with leaders and experts.

Don’t miss this opportunity to submit your abstract and connect with industry leaders, experts, and fellow enthusiasts.

Exploring the benefits and pitfalls of using multiples in imaging

Gordon Poole1* and Milad Farshad1 demonstrate how the use of multiples in imaging may provide improved shallow illumination and potentially reduce the requirement for extensive site-survey acquisition in some areas.

Introduction

Many modern acquisition geometries using wide-tow streamer or ocean-bottom node (OBN) designs are optimised for the efficient illumination of deep targets using primary arrivals. One disadvantage of such configurations is that they often result in heavy striping when imaging the shallower section, owing to a lack of primary arrivals with small reflection angles. This may relate either to limited near-offset recordings from outer towed streamers or to shots located away from the OBN receiver lines. With OBN geometries, down-going mirror migration may improve imaging of the shallow section, but the result may still be stripy and contaminated by noise, owing to the limited illumination between receiver lines. The use of free-surface multiples to enhance subsurface illumination has been discussed in the literature for many years (Berkhout and Verschuur, 1994). A free-surface multiple imaging approach that involves re-injection of recorded data as secondary sources has been shown to provide superior shallow imaging when compared to imaging using primaries alone (Muijs et al., 2005, Whitmore et al., 2010, Poole, 2021a).

Primary imaging using one-way wave-equation migration involves source-side injection and subsequent forward propagation of a source wavelet, synchronised with the backward

propagation of recorded data after demultiple on the receiver-side. At depths where the forward- and backward-propagated wavefields correlate in time and space, a subsurface image is formed using a cross-correlation or deconvolution imaging condition. Multiple imaging involves the injection of recorded data as secondary sources for both the source and receiver sides. The illumination from all multiple orders occurs simultaneously when forward-propagated arrivals correlate with the subsequent multiple order on the backward-propagated wavefield.

Figure 1 compares primary imaging and multiple imaging depth slices from a towed-streamer acquisition in the Central North Sea. The acquisition utilised a dual-source configuration towing ten streamers with a 100 m separation. Figure 1a illustrates a vessel sailing into the page with a water bottom primary arrival ray-path annotated. A water velocity vw = 1500 m/s and a water bottom velocity vwb = 1800 m/s give a critical angle of θc = 56° (Snell’s law: θc = sin⁻¹(vw/vwb)). The water bottom depth in this area was 92 m, meaning that the critical angle was encountered at an offset hc = 273 m (hc = 2d tan θc). Considering the 150 m source-to-streamer layback present in this acquisition, pre-critical reflections will only be recorded for streamers three to eight. Beyond this point, energy will not be transmitted below

1 Viridien

* Corresponding author, E-mail: gordon.poole@viridiengroup.com

DOI: 10.3997/1365-2397.fb2024103

Figure 1 Ray diagrams relating to: a) Primary imaging, and b) Multiple imaging. Comparison of c) Primary imaging, and d) Multiple imaging for a 170 m depth slice from a towed-streamer acquisition in the Central North Sea.

the water bottom, which will reduce deeper illumination. Wavelet stretch will also become a major issue beyond this point. Figure 1c shows a depth slice at 170 m from a common-shot primary wave-equation migration. Acquisition striping is observed along with low lateral resolution due to wavelet stretching. Figure 1d shows the same depth slice from multiple imaging. Injecting the data as secondary sources allows the imaging of receiver-side pegleg multiples (shown by the black ray-path in Figure 1b). This completed the illumination of the shallow section and revealed small-scale features as highlighted by the white boxes.
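The critical-angle arithmetic above is easy to verify. A short Python check (note that the 273 m quoted in the text follows from rounding the critical angle to 56° before taking the tangent; the unrounded angle gives roughly 277 m):

```python
# Critical angle and critical offset for the Central North Sea example.
# Values from the text: water velocity 1500 m/s, water bottom velocity
# 1800 m/s, water bottom depth 92 m.
import math

v_w, v_wb, d = 1500.0, 1800.0, 92.0

theta_c = math.asin(v_w / v_wb)     # critical angle, radians (Snell's law)
h_c = 2 * d * math.tan(theta_c)     # offset at which the critical angle hits

print(f"critical angle:  {math.degrees(theta_c):.1f} degrees")   # ~56.4
print(f"critical offset: {h_c:.0f} m")                           # ~277 m
```

With the 150 m source-to-streamer layback and 100 m streamer separation, this offset indeed falls between the near streamers, consistent with pre-critical reflections only being recorded on a subset of the spread.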

Multiple crosstalk

One pitfall of multiple imaging that has been discussed in the literature is multiple crosstalk (Poole et al., 2010, Lu et al., 2016).

Multiple crosstalk relates to energy with the appearance of multiples contaminating the seismic image, caused by correlations of arrivals unrelated to one order of multiple separation. Causal crosstalk relates to contamination after an event; for example, a forward propagated primary correlating with a backward propagated second- or higher-order multiple. Anti-causal crosstalk relates to contamination above an event and may be caused, for example, by a forward propagated arrival correlating with a backward propagated primary.

Figure 2 illustrates the formation of causal crosstalk using a simple synthetic consisting of a reflector at 100 m depth within a constant-velocity subsurface of 1500 m/s. Figure 2a introduces the ground-truth reflectivity and modelled data used for this analysis. These modelled data relate to a split-spread towed-streamer geometry including a primary arrival, P, first-order multiple, M1, second-order multiple, M2, and third-order multiple, M3. In Figure 2b the modelled data have been propagated to 100 m depth. The propagation used the one-way phase-shift-plus-interpolation (PSPI) strategy as outlined by Biondi (2006). A deconvolution imaging condition using the two gathers produced a strong response at the reflector due to the time alignment between P and M1, M1 and M2, and, in general, Mn and Mn+1. The gathers were further propagated to image deeper, and at 200 m depth (Figure 2c) the deconvolution imaging condition resulted in strong first-order crosstalk due to the time alignment between P and M2, M1 and M3,

and, in general, Mn and Mn+2. The correlation between events more than one multiple order apart resulted in this non-physical event that was not present in the input model (Figure 2a). The phenomenon continued at 300 m depth (Figure 2d), where each arrival correlated with the arrival three orders later: Mn with Mn+3. Lu et al. (2016) describe a strategy to model the crosstalk, where the contamination may be subtracted in the image domain.
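For the constant-velocity synthetic of Figure 2, the depths at which signal and crosstalk form can be reproduced from zero-offset travel times alone. This is a simplified sketch (zero offset, scalar velocity, amplitudes and the actual PSPI propagation ignored):

```python
# Travel-time sketch of causal crosstalk for the Figure 2 synthetic:
# reflector at d = 100 m in a constant 1500 m/s medium.
v, d = 1500.0, 100.0

# Zero-offset two-way times: primary P = 2d/v, and the n-th order
# free-surface multiple Mn = 2(n+1)d/v (each bounce adds one round trip).
events = {"P": 2 * d / v, "M1": 4 * d / v, "M2": 6 * d / v, "M3": 8 * d / v}
orders = {"P": 0, "M1": 1, "M2": 2, "M3": 3}

# In multiple imaging, an image forms at depth z wherever two events are
# separated by the round-trip time 2z/v.  Pairs exactly one order apart
# build the true reflector; pairs further apart create causal crosstalk.
for a, ta in events.items():
    for b, tb in events.items():
        if orders[b] > orders[a]:
            z = v * (tb - ta) / 2.0  # apparent imaging depth
            kind = "signal" if orders[b] - orders[a] == 1 else "crosstalk"
            print(f"{a}-{b}: image at {z:5.1f} m -> {kind}")
```

The pairs one order apart all map to 100 m (the reflector), while P-M2 and M1-M3 map to 200 m and P-M3 to 300 m, matching the crosstalk depths described for Figures 2c and 2d.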

Our proposed causal crosstalk mitigation approach (Poole and Farshad, 2024) involved a cascaded least-squares multiple imaging scheme, where the residual left by a shallow least-squares multiple imaging result was used as the starting point for deeper least-squares multiple imaging. Figure 3a shows multiple imaging results from a towed-streamer dataset from the Norwegian North Sea. While the image is well focused in the shallow section, it is contaminated by water bottom multiple crosstalk (black arrow) and gas multiple crosstalk (white arrow). Shot gather input data are shown in Figure 3d and the residual after the least-squares multiple migration is shown in Figure 3e. The multiple migration has described most of the multiples in the input data, leaving the primary arrivals in the residual as expected. The least-squares cost function successfully described the multiples, but there was nothing in the scheme to prevent causal crosstalk from forming in the image. The proposed approach begins with least-squares multiple imaging of the water bottom, Figure 3b, produced using image-domain sparseness weights (Poole et al., 2021b). While a constant depth range is used for most migration algorithms, the image-domain sparseness weight volume used in this iterative least-squares technique masks image updates on each iteration to a discrete depth centred on the water bottom. The residual relating to this inversion, Figure 3f, was void of the water-bottom multiples observed on the input (Figures 3d and 3f, white arrows), but still contained longer-period multiples relating to the gas and deeper section (box on Figure 3f). The proposed approach continued by using this residual as input to imaging of the deeper section, Figure 3c. The image closely resembled the regular least-squares result (Figure 3a), but with less causal crosstalk from the water bottom. The corresponding residual is shown in Figure 3g, which looks similar to the regular

Figure 2 Synthetic analysis of causal crosstalk formation; a) Reflectivity input and modelled data, b) Image formation at 100 m depth, c) Image formation at 200 m depth, and d) Image formation at 300 m depth.

Figure 3 Illustration of cascaded least-squares multiple imaging: a) Regular least-squares multiple imaging, b) Constrained least-squares multiple imaging of the water bottom, c) Water bottom-driven cascaded multiple imaging, d) Shot gather starting residual, e) Shot gather residual after regular least-squares multiple imaging, f) Shot gather residual after constrained least-squares multiple imaging of the water bottom, g) Shot gather residual after water bottom-driven cascaded multiple imaging, h) Constrained least-squares multiple imaging including water bottom and shallow gas, and i) Water bottom and gas-driven cascaded multiple imaging.

Figure 4 Input receiver gather data for the Utsira OBN dataset: a) Up-going migration, b) Down-going migration, c) Multiple migration, and d) Joint primary and multiple migration.

residual, Figure 3e. With this approach the algorithm is forced to describe water bottom multiples exclusively via the water bottom image, thus preventing deeper crosstalk contamination. Some gas-related multiple crosstalk was still present (Figure 3c, white arrow) as the cascaded approach did not consider the gas multiple generators. For completeness, Figure 3h shows least-squares multiple imaging of the water bottom and gas arrivals, the residual from which was used to image the deeper section, Figure 3i. We now see that the causal crosstalk noise has been substantially reduced for both water bottom and gas-related multiple generators.

Subsurface illumination

Data used in the following discussion relates to an OBN acquisition in the Utsira area of the Norwegian North Sea. Shot carpet data with a 25 m × 50 m sampling were recorded by OBNs on a 50 m × 300 m grid. Based on well-known reciprocity principles, a source-side wavelet was injected at the node position and data after up-down deconvolution was input at the shot positions for the receiver-side backward propagation (Figure 4a). Figure 5a shows time slices at 236 ms and 370 ms (after stretching from depth to time) resulting from this up-going primary migration. The shallow image suffered from heavy acquisition striping owing to a lack of small reflection angles away from the receiver lines. This created significant uncertainty in the interpretation of the shallow section.

One well-known approach to improve shallow imaging involves the mirror migration of OBN down-going data (Grion et al., 2007). Mirror migration involves injecting and forward propagating a source wavelet at the mirror node location and using down-going data after demultiple at shot locations for the receiver-side backward propagation (Figure 4b); the resulting time slices are shown in Figure 5b. While the illumination is now more continuous between the receiver lines, the image is low fold, and some acquisition striping remains.

The migration of multiple reflections involves the injection of recorded data as secondary sources for both the source and receiver side (Figure 4c). The simultaneous illumination of all multiple orders occurs when forward propagated arrivals from the source side correlate with the following multiple order on the backward propagated wavefield from the receiver side. Compared to the down-going mirror migration, an increase in fold provides an improved signal-to-noise ratio (S/N), and illumination at smaller reflection angles has reduced wavelet stretching in the shallow section (Figure 5c). In addition, illumination beyond the receiver line layout is observed.
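The correlation mechanism described above, where each multiple order aligns with the next, can be demonstrated with a 1D toy. This is our illustration, not the paper's workflow: a single trace carries a water-bottom arrival plus a decaying multiple train (polarity flips are ignored for clarity), and 'migration' reduces to correlating the data with itself. The true water-bottom lag dominates, but higher lags also build up, which is exactly the causal crosstalk discussed in the text.

```python
import numpy as np

def multiple_train(n, t0, r, orders):
    """1D trace: water-bottom primary at sample t0 plus decaying surface
    multiples at 2*t0, 3*t0, ... (polarity flips ignored for clarity)."""
    d = np.zeros(n)
    for k in range(1, orders + 1):
        d[k * t0] = r ** (k - 1)
    return d

n, t0 = 256, 20
d = multiple_train(n, t0, r=0.5, orders=5)

# Multiple migration in this toy reduces to correlating the recorded data
# with itself: each multiple order lines up with the following one at lag
# t0 (building the water-bottom image), while lags 2*t0, 3*t0, ... also
# accumulate energy -- the causal crosstalk.
xcorr = np.correlate(d, d, mode="full")[n - 1:]   # non-negative lags only
image_lag = int(np.argmax(xcorr[1:])) + 1          # strongest nonzero lag
```

Here `image_lag` recovers the true water-bottom time, while the nonzero values at twice and three times that lag are the crosstalk that the cascaded scheme later suppresses.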

Figure 5 Imaging time slice results for the Utsira OBN dataset: a) Up-down deconvolution, b) Down-going mirror migration, and c) Multiple migration.
Figure 6 Imaging inline results for the Utsira OBN dataset: a) Down-going mirror migration, b) Multiple migration, c) Joint primary and multiple migration (JPMI), and d) Cascaded JPMI.

Figure 8 Imaging results from the Heimdal Terrace OBN dataset: a) Up-down deconvolution crossline, b) JPMI crossline, c) Up-down deconvolution depth slice, and d) JPMI depth slice. Depth slices are at 200 m.

In Figure 6a we show an inline (location annotated with a dotted line on Figure 5) from the down-going mirror migration above one of the receiver lines. While the shallow section has been illuminated, it is contaminated by dipping noise, a consequence of its low fold, and suffers from some wavelet stretching. Figure 6b shows the same inline from multiple migration. While the image is sharp and coherent in the shallow, the S/N decreases with depth, and crosstalk is prevalent throughout (see arrows). The S/N of multiple reflection contributions to the image is related to the number of orders of multiple that correlate for a given reflection. For short-period multiples, many multiple orders reverberate after each primary arrival, resulting in a reliable and robust image. As the multiple period increases, the number of multiple orders in the input data decreases and the S/N decreases. Joint primary and multiple imaging (JPMI) may be achieved by including the direct arrival in the source-side data (Figure 4d). By doing so, we see that the S/N of the resulting image (Figure 6c) has increased, and anti-causal crosstalk contamination (white arrows) has been reduced as the primaries are largely described by the source-side direct arrival. Some causal crosstalk (black arrows), however, remains. Figure 6d shows the image after cascaded JPMI, which has reduced causal crosstalk to negligible levels. Compared to the conventional down-going mirror migration route, the cascaded least-squares JPMI approach produced an image with comparable deep imaging and improved shallow imaging without the need for designature and demultiple preprocessing. Some residual noise remained in the shallow section as the least-squares nature of the solution partially modelled shallow noise that was present on the down-going mirror migration (Figure 6a).
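The statement that the multiple image's S/N falls as the multiple period grows follows directly from counting how many orders fit in the record. A minimal sketch of that counting argument (record length and periods are invented example values):

```python
def multiple_orders_in_record(record_len_s, period_s):
    """How many surface-multiple orders of a given period fit inside the
    record; more correlating orders means higher S/N in the multiple image."""
    return int(record_len_s // period_s)

# Illustrative numbers: an 8 s record with a 250 ms water-bottom period
# supplies many correlating orders, while a 2 s period supplies only a few.
short_period = multiple_orders_in_record(8.0, 0.25)
long_period = multiple_orders_in_record(8.0, 2.0)
```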

Data examples

This OBN example comes from the UK Central North Sea, acquired with shots on a 50 m × 50 m grid. Figures 7a and 7b compare crossline sections perpendicular to the receiver lines for down-going mirror migration and JPMI, respectively. While the down-going mirror migration illuminates the shallow section, it is affected by receiver line striping. The JPMI image provides a similar level of lateral resolution with a reduction in striping in the shallow. Figures 7c and 7d show depth slices at 170 m for the area from the down-going mirror migration and JPMI, respectively. As well as providing a reduction in acquisition striping, we can see increased illumination at the bottom of the image (white rectangle), where the imaging of multiples provides illumination away from the receiver lines. Some slight residual shot-point acquisition footprint can be seen on the JPMI depth slice, e.g., black rectangle in Figure 7d.

The final example comes from the Heimdal Terrace OBN survey in the Norwegian North Sea acquired with a 25 m × 50 m shot sampling recorded by receivers on a 50 m × 300 m layout.

Figure 7 Imaging comparison from UK Central North Sea OBN dataset: a) Down-going mirror migration crossline, b) JPMI crossline, c) Down-going mirror migration depth slice, and d) JPMI depth slice.

After up-down separation, these data were migrated to produce crossline and depth slice displays for a fast-track product, shown in Figures 8a and 8c respectively. The OBN line spacing resulted in severe shallow striping, making the shallow section unusable. The crossline and depth slice results from least-squares JPMI are shown in Figures 8b and 8d respectively. These JPMI results show a considerable improvement in shallow illumination and resolution, revealing shallow meandering channel systems and small-scale features throughout.

Conclusion

We have highlighted the limitations of shallow primary imaging and shown how the use of free-surface multiples may provide significant additional resolution in the shallow section. We have illustrated how the benefit of multiple imaging diminishes with increasing image depth and how multiple crosstalk may contaminate the multiple migration image. JPMI has been presented as a technique that can combine the illumination benefits of primaries and multiples while reducing anti-causal multiple crosstalk levels. Additionally, we have described a cascaded least-squares JPMI approach that is able to reduce causal multiple crosstalk to negligible levels. The use of multiples in imaging may provide potential for improved QC of shallow velocity models, allow the assessment of shallower targets, and potentially reduce the requirement for extensive site-survey acquisition in some areas.

Acknowledgements

We thank Viridien Earth Data for the towed-streamer data examples, data owners TGS and Axxis for the Utsira OBN images, Viridien Earth Data and TGS for the UK Central North Sea OBN data example, and Viridien Earth Data and TGS for the Heimdal Terrace OBN images. We also thank Viridien for its permission to publish this article.

References

Berkhout, A.J. and Verschuur, D.J. [1994]. Multiple technology: Part 2, migration of multiple reflections. 64th Annual International Meeting, SEG, Expanded Abstracts, 1497-1500.

Biondi, B.L. [2006]. 3D seismic imaging. Investigations in Geophysics, SEG publication.

Grion, S., Exley, R., Manin, M., Miao, X-G., Pica, A., Wang, Y., Granger, P-Y. and Ronan, S. [2007]. Mirror imaging of OBS data. First Break, 25(11).

Lu, S., Whitmore, D., Valenciano, A., Chemingui, N. and Ronholt, G. [2016]. A practical crosstalk attenuation method for separated wavefield imaging. 86th Annual International Meeting, SEG, Expanded Abstracts, 4235-4239.

Muijs, R., Holliger, K. and Robertsson, J.O.A. [2005]. Prestack depth migration of primary and surface-related multiple reflections. 75th Annual International Meeting, SEG, Expanded Abstracts, 2107-2110.

Poole, G. [2021a]. Least-squares multiple imaging constrained jointly by OBN and towed-streamer data. 82nd EAGE Annual Conference & Exhibition, Extended Abstracts.

Poole, G., Moore, H., Blaszczak, E., Kerrison, H., Keynejad, S., Taboga, A. and Chappell, M. [2021b]. Sparse wave-equation deconvolution imaging for improved shallow water demultiple. 82nd EAGE Annual Conference & Exhibition, Extended Abstracts.

Poole, G. and Farshad, M. [2024]. Cascaded least-squares multiple imaging for reduced multiple crosstalk. 85th EAGE Annual Conference & Exhibition, Extended Abstracts.

Poole, T.L., Curtis, A., Robertsson, J.O.A., and van Manen, D-J. [2010]. Deconvolution imaging conditions and cross-talk suppression. Geophysics, 75(6), W1-W12.

Whitmore, N.D., Valenciano, A.A., Sollner, W. and Lu, S. [2010]. Imaging of primaries and multiples using a dual-sensor towed streamer. 80th Annual International Meeting, SEG, Expanded Abstracts, 3187-3192.


Optimal Imaging Aperture for computational efficiency in 2D and 3D Reverse Time Migration using SeisRTM

Laxmaiah Bathula1 and Saheb Ghosh2 investigate 2D TTI and 3D isotropic RTM, focusing on the impact of shot-centric and fold-centric aperture selection on computational efficiency and imaging accuracy.

Abstract

Seismic migration is essential for converting seismic data into accurate subsurface images. The computational cost of migration is influenced by the migration algorithm, data dimensionality, and geological medium properties. Among the various methods, Reverse Time Migration (RTM) based on the two-way wave equation is regarded as the most precise for handling complex subsurface conditions. However, the complexity and computational demands of RTM increase significantly with transitions from isotropic to anisotropic media and from 2D to 3D data. Additionally, the choice of imaging aperture strongly affects RTM’s computational cost, making an optimal aperture crucial for accurately imaging subsurface features while managing resources. In this study, we investigate 2D TTI and 3D isotropic RTM, focusing on the impact of shot-centric and fold-centric aperture selection on computational efficiency and imaging accuracy using the in-house developed SeisRTM software suite. Experiments on synthetic and real field data show that the fold-centric aperture reduces memory and compute time by up to 1.4x compared to the shot-centric aperture while maintaining comparable image quality. These findings highlight fold-centric apertures as a computationally efficient choice for large-scale seismic imaging, especially in resource-intensive environments. SeisRTM’s capabilities in high-frequency migration, large dataset handling, and target-oriented migration were instrumental in this study, demonstrating that it is an efficient tool for large-scale RTM applications.

Introduction

Seismic migration is a crucial step in transforming seismic data into accurate subsurface images, typically representing the final stage of the seismic data processing cycle (Yilmaz, 1987). It is a computationally intensive process, with its requirements heavily influenced by the choice of migration method, the dimensionality of the data, and the complexity of the geological medium, such as isotropy or anisotropy (Vestrum et al., 1999; Grechka et al., 1998). A wide range of migration methods is available today. Among these, RTM (Baysal et al., 1983; Claerbout, 1971) based on the two-way wave equation is recognised for its superior accuracy in handling complex subsurface conditions.

The complexity of RTM escalates significantly when transitioning from isotropic to anisotropic media. In particular, the wave equations used in Tilted Transverse Isotropic (TTI) RTM are substantially more intricate than those required for Vertically Transverse Isotropic (VTI) RTM or Isotropic (ISO) RTM. Additionally, the computational demands for RTM vary with the complexity of the wave equations and the dimensionality of the data, from two-dimensional (2D) to three-dimensional (3D) media.

A critical parameter in RTM is the imaging aperture, which plays a vital role in accurately imaging subsurface features (Whitmore, 1983; Zhang et al., 2009). An overly narrow aperture may reduce computational costs but can fail to adequately capture complex geological structures, while a broader aperture increases accuracy at the expense of higher computational demand. This paper discusses the implementation of the in-house developed RTM package ‘SeisRTM’, which supports 2D TTI and 3D ISO RTM and modelling. We examine the differences in computational resource requirements for two different aperture selection methods and evaluate the accuracy of 2D TTI results using a synthetic TTI model and field data, as well as the accuracy of 3D ISO results using field data.

Methodology

2D TTI RTM

For the TTI RTM implementation, 2D constant-density acoustic wave-equation formulations were used, which are given in Equations (1) to (4) (Fletcher et al., 2009).

1 Centre for Development of Advanced Computing (C-DAC) | 2 Geodata Processing and Interpretation Centre (GEOPIC), ONGC

* Corresponding author, E-mail: richar@cdac.in  DOI: 10.3997/1365-2397.fb2024104

In these equations:

• σ is a stabilisation parameter; σ = 0.75 provides a stable solution (Fletcher et al., 2008),

• H and V represent the horizontal and vertical stresses, respectively,

• vpz is the vertical P-wave velocity,

• vpx = vpz √(1 + 2ε) is the horizontal P-wave velocity,

• vpn = vpz √(1 + 2δ) is the P-wave moveout velocity,

• ε and δ are Thomsen’s anisotropic parameters,

• θ is the dip angle.

The above equations were solved using a Finite Difference (FD) approximation on a regular Cartesian grid, achieving second-order accuracy in time and fourth-order accuracy in space. Absorbing boundaries were implemented on all sides of the grid to prevent boundary reflections (Pasalic and McGarry, 2010).

The source wavelet was introduced during the time-marching stage at a specified source location in the medium.
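To make the time-marching scheme concrete, the sketch below implements a minimal isotropic 2D constant-density acoustic propagator with second-order accuracy in time and fourth-order accuracy in space, with the source injected during time marching as described. This is our simplified illustration, not the authors' TTI code: the grid sizes and wavelet are invented, and the absorbing boundaries are omitted for brevity.

```python
import numpy as np

def ricker(f0, nt, dt):
    """Ricker wavelet with peak frequency f0 (Hz), delayed by ~1/f0."""
    t = np.arange(nt) * dt - 1.0 / f0
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def acoustic_fd_2d(vel, dx, dt, nt, src, f0=15.0):
    """Constant-density acoustic FD: 2nd-order time, 4th-order space.
    Absorbing boundaries are omitted here for brevity."""
    nz, nx = vel.shape
    p_prev, p = np.zeros((nz, nx)), np.zeros((nz, nx))
    c = (vel * dt / dx) ** 2          # squared Courant number per cell
    wav = ricker(f0, nt, dt)
    for it in range(nt):
        lap = np.zeros_like(p)
        # 4th-order Laplacian stencil evaluated on interior points
        lap[2:-2, 2:-2] = (
            -60.0 * p[2:-2, 2:-2]
            + 16.0 * (p[1:-3, 2:-2] + p[3:-1, 2:-2] + p[2:-2, 1:-3] + p[2:-2, 3:-1])
            - (p[:-4, 2:-2] + p[4:, 2:-2] + p[2:-2, :-4] + p[2:-2, 4:])
        ) / 12.0
        p_next = 2.0 * p - p_prev + c * lap   # 2nd-order time update
        p_next[src] += wav[it]                # source injected during marching
        p_prev, p = p, p_next
    return p
```

For example, `acoustic_fd_2d(np.full((60, 60), 1500.0), 10.0, 0.001, 150, (30, 30))` propagates a 15 Hz wavelet from the grid centre; the chosen dt keeps the scheme well inside the stability limit.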

The 2D RTM image of each shot is created by cross-correlating the forward and backward wavefields using Equation (5) (Rastogi et al., 2022)

Where:

• Icc (x, z) is the RTM image,

• Ss (x, z, t) is the source wavefield,

• Rs (x, z, t) is the receiver wavefield,

• t is the FD time step and

• (x, z) are the grid points for a 2D medium.

The individual shot gathers were processed to suppress RTM artifacts and muted for far-offset energy before stacking.

3D ISO RTM

For the 3D seismic modelling of isotropic media, we employed a constant-density two-way wave equation, as described in Equation (6):

Where:

• ∇² is the Laplacian operator,

• t is the FD time step,

• V is the acoustic wave velocity at a grid location in 3D space,

• P is the pressure wavefield,

• F(t) is the source function and

• xs is the source location in the 3D velocity model.

This equation was solved using a staggered-grid FD method (Virieux, 1986), which achieves second-order accuracy in time and tenth-order accuracy in space. Boundary reflections were mitigated by employing the Convolutional Perfectly Matched Layer (CPML) method, ensuring minimal artifacts at the edges of the computational domain (Komatitsch et al., 2007).

The 3D RTM image of each shot is created by cross-correlating the forward and backward wavefields using Equation (7).

Where:

• Icc (x, y, z) is the 3D RTM image,

• Ss (x, y, z, t) is the source wavefield,

• Rs (x, y, z, t) is the receiver wavefield,

• t is the FD time step and

• (x, y, z) are the grid points for a 3D medium.

Individual shot gathers were processed to reduce RTM artifacts and subjected to far-offset energy muting before stacking. These steps were essential to producing the final 3D migrated image of the subsurface.

Imaging aperture

In RTM, the imaging aperture refers to the spatial extent of the sub-velocity model used to create the subsurface image for a particular shot. The size and resolution of this sub-velocity model directly impact the computation time, memory, and storage required for processing each shot. The computational resources needed for RTM can vary significantly depending on the geometry of each shot within the survey. Optimal aperture selection is crucial, particularly in complex, dipping subsurface environments, as it influences image quality (Hongchuan et al., 2002).

The imaging aperture is typically a user-defined parameter in RTM processing. A wider aperture can increase computation time and storage requirements, potentially resulting in a noisy image of shallow regions, while a narrow aperture might yield an inferior subsurface image. The appropriate spatial extent of the velocity model depends on the shot geometry and the complexity of the subsurface. For a moderately complex subsurface, the optimal aperture length can be selected as twice the maximum offset in the in-line and x-line directions. In this paper, we experiment with two types of imaging aperture criteria to evaluate their impact on computational resources and result accuracy in both 2D and 3D RTM. Refer to Figures 1 and 2 for a visual representation of the concept in 2D and 3D, respectively.

a. Shot-centric aperture

In the shot-centric approach, the centre of the aperture is positioned at the shot location. Figure 1(a) illustrates the shot-centric aperture for a 2D case. In this figure:

• The black box outlines the total extent of the subsurface model.

• Rmin and Rmax indicate the first and last receiver locations for a shot, while the blue star marks the shot location.

• The green double-arrowed line on top of the survey represents the aperture length, with a green dot showing the centre of the aperture for various geometries, including right-end, split spread, and left-end cases.

• The red dotted region highlights the migration area governed by the aperture and the target depth of migration.

In split spread geometry, the migration aperture length equals the receiver span, as the aperture is set to twice the maximum offset. The aperture length can be adjusted depending on user specifications, but for moderately varying subsurface conditions, twice the maximum offset is recommended.

For complex subsurface structures, a larger aperture can better accommodate dipping reflections on both sides of the shot location.

In right or left end-on geometries, where receivers are present only on one side of the shot, the shot-centric aperture can capture reflections from opposite dipping directions relative to the shot point, Figure 1(a). However, this approach may introduce additional artifacts in the RTM image since it considers a migration area beyond the actual shot foldage.

For a diagrammatic illustration of the shot-centric aperture in a 3D case, see Figure 2(a); only an end-on geometry is considered for this purpose.

The left diagram of Figure 2(a) shows the top plane in x-z dimensions as a black parallelogram, while the right diagram presents a cuboid view in x-z-y dimensions, visualising the aperture concept in a 3D subsurface perspective.

The shot is marked by a blue star, and the receiver lines are represented by multiple blue dotted lines.

Figure 1 Selecting velocity model for shots with varying geometries for 2D RTM processing: (a) shot-centric aperture, and (b) fold-centric aperture.

Figure 2 Selecting velocity model for individual shots for 3D RTM processing: (a) shot-centric aperture along in-line and x-line, and (b) fold-centric aperture along in-line and x-line.

The green double-arrowed lines intersecting at the shot location indicate the shot-centric aperture in the in-line and cross-line directions. The aperture here is set to twice the maximum offset in both directions.

The red box in the right diagram of Figure 2(a) defines the migration area using this approach.

b. Fold-centric aperture

In the fold-centric approach, the imaging aperture is centred at the Common Mid Point (CMP) fold centre for each shot gather. For a 2D case, see Figure 1(b):

• The yellow line shown in the diagram is the extent of the CMPs of the Rmin and Rmax receivers.

• The fold centre is marked as Fcenter for all three types of geometries.

• Notably, the aperture length is shorter in the fold-centric method than in the shot-centric approach. For moderately complex subsurfaces, the optimal aperture length can be taken as the distance from Fcenter to Rmax.

• The migration region is reduced due to the shorter aperture length, resulting in cleaner images with lower computation time and storage needs.

In 3D (Figure 2(b)), the fold-centric approach similarly reduces the migration region compared to the shot-centric method. We conducted extensive experiments to analyse the effects of both approaches when performing RTM in TTI media for 2D and isotropic media for 3D.
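The two aperture rules can be sketched for a 2D geometry. This is our illustration of the definitions above, with invented coordinates; the fold-centric rule is implemented under one reading of "the distance from Fcenter to Rmax" (half-length from the fold centre out to the farthest receiver).

```python
def shot_centric_window(xs, receivers):
    """Aperture centred on the shot; length = twice the maximum offset."""
    max_offset = max(abs(r - xs) for r in receivers)
    return xs - max_offset, xs + max_offset

def fold_centric_window(xs, receivers):
    """Aperture centred on the CMP fold centre; it extends to the receiver
    farthest from that centre (one reading of 'from Fcenter to Rmax')."""
    cmps = [0.5 * (xs + r) for r in receivers]       # shot-receiver midpoints
    f_center = 0.5 * (min(cmps) + max(cmps))          # fold centre
    half = max(abs(r - f_center) for r in receivers)  # Fcenter-to-Rmax distance
    return f_center - half, f_center + half

# End-on 2D geometry (invented): shot at 0 m, receivers from 100 m to 4000 m
rx = [100.0 * i for i in range(1, 41)]
sc = shot_centric_window(0.0, rx)   # symmetric about the shot, 8000 m long
fc = fold_centric_window(0.0, rx)   # shorter window, shifted toward the fold
```

For this end-on geometry the shot-centric window spans 8000 m while the fold-centric window spans 5950 m, illustrating why the fold-centric choice migrates a smaller region for one-sided spreads.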

Experiments, results and discussion

HPC System

For this research, we utilised the PARAM Porul cluster (PARAM Porul | NSM (nsmindia.in)). Each node features two Intel Xeon Cascade Lake 8268 processors (24 cores, 2.9 GHz each), 192 GB of memory, and a 480 GB SSD. The total storage capacity of the system is 1 PiB, based on a Parallel File System (PFS) providing high-speed data access across the cluster.

In-house RTM software: SeisRTM

For our experiments, we utilised the in-house developed HPC software suite, SeisRTM. This software provides 2D modelling and RTM capabilities for both isotropic and anisotropic (VTI and TTI) media, as well as 3D isotropic modelling and RTM. SeisRTM integrates the Convolutional Perfectly Matched Layer (CPML) technique to mitigate boundary reflections and employs high-order finite difference formulations for improved accuracy. Additionally, it supports various imaging conditions, including cross-correlation and normalised cross-correlation, and offers a target-oriented migration feature for enhanced imaging precision.

Table 1 Data parameters for 2D TTI RTM for synthetic data.

Figure 4 Comparison of shot-centric and fold-centric RTM outputs for the synthetic TTI Model: (a) Shot-centric imaging aperture, (b) Fold-centric imaging aperture, and (c) Difference plot between (a) and (b).

SeisRTM is equipped with a suite of data preparation and post-processing tools, optimised to handle high-frequency migration and large datasets efficiently. It is designed for parallel computing environments, making it well-suited for deployment on CPU clusters without core limitations. The software suite is implemented in C and C++, utilising MPI and OpenMP APIs to achieve effective parallelisation, performance, and computational accuracy.
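Because shots are independent, the RTM shot loop is embarrassingly parallel, which is what makes the MPI/OpenMP distribution described above effective. The sketch below illustrates the pattern only: it is not SeisRTM's C/C++/MPI code, and `migrate_shot` is a seeded stand-in for a real single-shot migration.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def migrate_shot(shot_id, shape=(4, 4)):
    """Stand-in for a single-shot RTM; returns that shot's partial image.
    (Seeded random data replaces the real wavefield computation.)"""
    rng = np.random.default_rng(shot_id)
    return rng.standard_normal(shape)

def parallel_stack(shot_ids, workers=4):
    """Distribute independent shots across workers; the final image is the
    stack (sum) of the per-shot partial images."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(migrate_shot, shot_ids))
    return np.sum(partials, axis=0)

image = parallel_stack(range(8))
```

In a production setting each worker would also read only the sub-velocity model defined by its shot's aperture, which is where the aperture choice feeds directly into per-node memory.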

SeisRTM offers both a Command Line Interface (CLI) and a Graphical User Interface (GUI) for user flexibility, and includes an in-house developed data visualisation tool, to streamline data analysis and visualisation.

Numerical experiments

The numerical experiments aimed to evaluate the migration capabilities of the SeisRTM application using synthetic and real field data. These test cases were conducted with varying imaging apertures to compare their outcomes in terms of imaging accuracy, computational time, and resource utilisation. By experimenting with different aperture criteria, we assessed how each approach impacts the quality of the migrated images and the efficiency of the RTM process.

Figure 3 Synthetic TTI model parameters: (a) P-wave velocity, (b) Epsilon, (c) Theta, (d) Delta.

The tests were structured to analyse both 2D TTI and 3D isotropic cases, applying the shot-centric and fold-centric aperture methods. Performance metrics, including computation time, memory usage, and data storage requirements, were recorded to quantify the resources needed for each setup.

BP TTI 2007: A Synthetic Case

Figure 3 (a-d) presents the parameters of the synthetic TTI model, while Table 1 shows the data parameters used for RTM. Figure 4 illustrates the results: (a) and (b) depict the outcomes of the shot-centric and fold-centric imaging apertures, respectively, and (c) shows the difference plot between the two apertures. The RTM execution time and memory requirements for 500 shots are shown in Figure 9.

For each shot gather, the memory/storage requirement was 28 GB with a shot-centric aperture (aperture length: 10025 m) and 24 GB with a fold-centric aperture (aperture distance: 8500 m), resulting in a 1.16x reduction in memory/storage usage per shot gather when applying the fold-centric technique.
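As a quick sanity check on these numbers, per-shot memory in 2D grows roughly linearly with the lateral aperture extent when depth, sampling, and record length are fixed, so the reported saving can be anticipated from the two aperture lengths alone. The assumption of roughly linear scaling is ours; the values are the ones reported above.

```python
# Reported synthetic-case values from the text
shot_len_m, fold_len_m = 10025.0, 8500.0   # shot-centric vs fold-centric aperture
shot_mem_gb, fold_mem_gb = 28.0, 24.0      # per-shot memory/storage

aperture_ratio = shot_len_m / fold_len_m   # ratio of aperture lengths
memory_ratio = shot_mem_gb / fold_mem_gb   # reported memory ratio (the '1.16x')
```

The two ratios agree to within about one percent, consistent with memory scaling near-linearly with aperture length in this 2D case.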

2D marine: A real data case

Figure 5(a-d) presents the parameter models, and Table 2 shows the data parameters used for RTM. Figure 6(a) shows a single shot gather, and Figures 6(b) and (c) show the shot-centric and fold-centric RTM shot-image gathers, respectively. The fold-centric RTM uses a shorter aperture length than the shot-centric RTM, which reduces the migration region; the shot-centric image gather in Figure 6(b) therefore covers a larger imaging area than the fold-centric image gather in Figure 6(c).

Figures 7 (a-c) and 8 (a-c) display the outcomes of shot-centric and fold-centric imaging apertures across two subsections of a marine line and the difference between the two sections. The RTM memory requirements and execution time for 500 shots are summarised in Figure 9(a) and 9(b) respectively.

For each shot gather, the memory/storage requirement was 170 GB with a shot-centric aperture (aperture length: 8300 m) and 129 GB with a fold-centric aperture (aperture length: 6220 m), resulting in a 1.37x reduction in memory/storage usage per shot gather with the fold-centric technique.

The computational time for processing 500 shots on 15 CPU nodes was 9 hours for the shot-centric aperture and 6.8 hours for the fold-centric aperture in TTI RTM, achieving a 1.32x reduction in compute time with the fold-centric approach. The results produced by both shot-centric and fold-centric apertures are comparable, indicating that the fold-centric aperture is a computationally efficient alternative without sacrificing image quality.

The computational time for processing 500 shots on 12 CPU nodes was 8.86 hours for the shot-centric aperture and 6.5 hours for the fold-centric aperture in TTI RTM, demonstrating a 1.36x reduction in compute time with the fold-centric approach. As shown in Figure 6(c), the results of both shot-centric and fold-centric apertures are comparable, with minimal differences between them, further validating the efficiency of the fold-centric technique.

Table 2 Data parameters for 2D TTI RTM for field data.
Figure 5 Real field TTI model parameters (a) P-wave velocity (b) Epsilon (c) Theta (d) Delta.
Figure 6 Real field TTI data: (a) Shot gather, (b) Shot-centric shot image gather, (c) Fold-centric shot image gather.


Table 3 Model parameters for 3D ISO RTM for field data.

Figure 7 Comparison of 2D TTI RTM (25 Hz) field data using different apertures for first subsection of marine line: (a) Shot-centric aperture, (b) Fold-centric aperture, and (c) Difference plot between (a) and (b).

Figure 8 Comparison of 2D TTI RTM (25 Hz) field data using different apertures for second subsection of marine line: (a) Shot-centric aperture, (b) Fold-centric aperture, and (c) Difference plot between (a) and (b).

Figure 9 (a) Memory usage and (b) Compute time vs. Aperture selection of 2D field and synthetic data using TTI RTM.

Figure 10 3D ISO real field data: (a) 3D velocity model, (b) Shot gathers.

Table 4 Data parameters for 3D ISO RTM for field data (in-line range: 900 to 1850; x-line range: 850 to 3810).


3D marine: a real data case

Figures 10(a) and (b) show an in-line of the 3D velocity model and a 3D shot gather, respectively. Table 3 presents the model parameters, and Table 4 displays the data parameters used for 3D isotropic RTM. Figures 11(a) and (b) show the results for the shot-centric and fold-centric imaging apertures. The RTM memory requirements and execution time for 1154 shots are summarised in Figures 12(a) and (b), respectively.

For each shot gather, the memory requirement was 600 GB using the shot-centric aperture (aperture length: 5000 m along the in-line and 1000 m along the x-line) and 451 GB using the fold-centric aperture (aperture length: 3750 m along the in-line and 1000 m along the x-line), resulting in a 1.33x reduction in memory usage with the fold-centric method.

The compute time for processing 1154 shots on 30 CPU nodes was 59 hours for the shot-centric aperture and 42 hours for the fold-centric aperture, achieving a 1.4x reduction in compute time with the fold-centric approach. Figure 11 illustrates that the imaging results obtained using both shot-centric and fold-centric apertures are comparable, validating the effectiveness of the fold-centric aperture in reducing computational demands without compromising image quality.

Overall, the experiments confirm that the fold-centric aperture achieves substantial computational savings over the shot-centric approach, reducing both memory and compute time requirements while maintaining comparable image quality. These results highlight the potential of fold-centric apertures as an efficient choice for large-scale RTM applications, especially in complex environments.

Conclusions

This study has demonstrated the effectiveness of optimised imaging apertures in RTM, specifically the advantages of fold-centric over shot-centric apertures in reducing computational costs without compromising image quality. Using the in-house developed SeisRTM software suite, we conducted comprehensive tests on synthetic and real data in both 2D and 3D cases, highlighting the critical role of aperture selection in RTM processes.

Our findings show that the fold-centric aperture consistently reduces memory and compute time requirements by up to 1.4x compared to the shot-centric approach. For example, in the 3D marine case, memory usage per shot was reduced from 600 GB to 451 GB, and compute time was shortened from 59 to 42 hours, confirming the computational efficiency of the fold-centric approach. Despite these reductions, image quality was comparable across both aperture methods, making the fold-centric aperture a valuable choice for large-scale seismic imaging, especially in resource-intensive environments.

The capabilities of SeisRTM were instrumental in this analysis. Designed for high-performance computing environments, the software’s support for target-oriented migration, high-order finite difference formulations, and customisable imaging conditions allowed for precise testing of aperture effects on imaging accuracy and computational efficiency.

Figure 11 3D ISO RTM (50 Hz) Field data using different apertures: (a) Shot-centric aperture, (b) Fold-centric aperture, and (c) Difference plot between (a) and (b).
Figure 12 (a) Memory and (b) Compute time vs Aperture selection for 3D Field data RTM.


A time-lapse, multi-component (9C4D) seismic data processing flow, Vacuum Field, New Mexico, USA

Steven L. Roche1* presents a specific processing flow for an onshore 9C4D seismic project to monitor CO2 injection in a carbonate reservoir.

Abstract

Processing time-lapse, multicomponent seismic data requires steps to ensure repeatability. Maintaining vector fidelity in the shear wave processing is required for accurate S-wave birefringence analysis. We have developed a specific processing flow for an onshore 9C4D seismic project to monitor CO2 injection in a carbonate reservoir. An element of the processing flow is ‘Common Trace Pair’ (CTP) processing to derive and apply surface-consistent corrections between repeated surveys to remove differences in P-wave and S-wave images due to near-surface and overburden changes between survey acquisitions.

Introduction

In 1995, Texaco conducted a CO2 injection test in a carbonate reservoir. The Reservoir Characterization Project (RCP) at the Colorado School of Mines participated by acquiring, processing, and interpreting the time-lapse, multi-component (9C4D) seismic data. The objective was to use repeated P-wave and S-wave images for reservoir characterisation of the CO2 injection processes. The Vacuum Field project was RCP study Phase VI. Within RCP, we developed seismic data processing flows to increase repeatability and maintain resolution for both the P-wave and S-wave data. A key innovation was Common Trace Pair (CTP) processing, utilising the common raypath from co-located pairs of source and receiver locations. CTP cross-correlations of all trace pairs can be decomposed into prestack, surface-consistent terms, then combined with surface-consistent deconvolution, statics and amplitude corrections. Prestack application increases the repeatability of the time-lapse data, both P-wave and S-wave, allowing improved resolution of subsurface bulk rock properties associated with reservoir processes. A second innovation is the strict attention given to surface-consistent processes to maintain vector fidelity in the shear wave processing.
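The core CTP measurement, cross-correlating a co-located trace pair and reading off the peak lag and normalised peak amplitude, can be sketched in a few lines. This is an illustrative Python sketch on synthetic traces; the function name, wavelet and parameters are assumptions, not the RCP implementation:

```python
import numpy as np

def ctp_xcorr(initial, repeat, dt):
    """Cross-correlate a co-located trace pair; return the peak lag (s)
    and the normalised peak amplitude as a repeatability measure."""
    xc = np.correlate(repeat, initial, mode="full")
    lags = np.arange(-len(initial) + 1, len(repeat)) * dt
    k = np.argmax(np.abs(xc))
    norm = np.sqrt(np.dot(initial, initial) * np.dot(repeat, repeat))
    return lags[k], xc[k] / norm

# Synthetic pair: the repeat trace is the initial trace delayed by 4 ms.
dt = 0.002                               # 2 ms sample rate, as acquired
t = np.arange(0, 1.0, dt)
initial = np.exp(-((t - 0.5) / 0.02) ** 2) * np.cos(2 * np.pi * 30 * (t - 0.5))
repeat = np.roll(initial, 2)             # 2 samples = 4 ms delay
lag, coeff = ctp_xcorr(initial, repeat, dt)
print(lag, coeff)                        # lag = 0.004 s, coeff ≈ 1
```

The peak lag carries the time (phase) difference between the surveys at that source-receiver pair, and the normalised peak is a per-pair similarity measure; the full CTP wavelet carries the amplitude and frequency differences as well.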

Vacuum Field – Geologic setting

Vacuum Field is located in Lea County, New Mexico, on the northwest shelf of the Delaware Basin. The Northwestern Shelf and Delaware Basin lie within the Permian Basin region of West Texas and southeastern New Mexico. The sedimentary section is primarily composed of Paleozoic carbonates and evaporites (Ordovician through Permian Age).

1 Geophysical Consultant

* Corresponding author, E-mail: slroche314@gmail.com

DOI: 10.3997/1365-2397.fb2024105

The lithology is primarily carbonate, with periodic siliciclastic and evaporite deposition. The general depositional pattern is cyclic carbonate deposition in a shallowing-upward environment, with progressively greater amounts of evaporites forming as the basin desiccated (Wilson, 1975). There are multiple hydrocarbon-producing intervals in the Vacuum Field, ranging from shallow Permian to deep Devonian and Ordovician horizons. In the Central Vacuum Unit, the shallowest producing reservoir is the San Andres – Grayburg at a depth of 1300 m. The thickness of the San Andres is 425 to 460 m in the study area.

Vacuum Field – Reservoir characteristics / CO2 injection

The Vacuum Field was discovered in 1929 with the drilling of the Socony Vacuum State No. 1 well in Section 13-T17S-R34E, Lea County, New Mexico. Typical rock and fluid properties of the San Andres Formation are tabulated below (Wehner, pers. comm.):

• Gross pay: maximum of 180 m

• Net to gross ratio: average of 40%

• Porosity: 0 %-24 %, average of 11.6 %

• Permeability: 0 md-530 md, average of 22.3 md

• Initial reservoir pressure: 1628 psia at 1370 m

• Producing interval: 1300-1460 m

• Reservoir temperature: 105°F

• Stock tank oil gravity: 38 API

• Bubble point pressure: 764 psia

• Initial solution gas-oil ratio: 400 SCF/STB (differential data)

• Reservoir oil viscosity: 0.96 cp at the bubble point pressure

• Reservoir drive mechanism: water and solution gas drive in the southern and southeastern portions of the field; solution gas drive in the northern portion of the field

The field operator, Texaco, initiated a waterflood in 1978 as a secondary recovery effort. Infill drilling reduced the well spacing to 10 acres, with approximately 200 m separation between wells. In 1995, the time of this study, Texaco performed a CO2 injection test into a single production wellbore, CVU #97. The Reservoir Characterization Project (RCP) at the Colorado School of Mines participated in the CO2 injection test under the direction of Dr Tom Davis and Dr Bob Benson.

The CO2 injection and soak period in well Texaco CVU 97 at Vacuum Field, and the two 3D, 9-C seismic surveys (initial and repeat), took place from 30 October, 1995, to 27 December, 1995. The initial 3D, 9-C survey was recorded from 28 October through to 13 November. CO2 injection began on 13 November, 1995, and lasted until 8 December, 1995. Fifty million cubic feet of CO2 were injected at rates ranging from 1 to 2.6 mmscfpd; tubing injection pressure ranged from 400 to 900 psig, and the injection temperature was 105°F. The 'soak' period extended from 8 December through to 28 December, after which Texaco CVU 97 was returned to production. The repeat 3D, 9-C survey was acquired during the 'soak' period, from 21 December to 28 December.

During the CO2 injection and soak period, the reservoir pore pressures and fluid compositions were altered. In this project, changes were not confined to the CO2 injection well CVU 97, since the injection pressures of the six offset water injection wells were altered in an attempt to confine the CO2 bank. The offset water injection wells were 'shut in' prior to CO2 injection (reducing reservoir pressure). After CO2 injection, the offset water injection wells were restarted, with the exception of CVU #200, in an attempt to increase reservoir pressure and confine the CO2 bank.

In summary, there were several dynamic changes to the reservoir properties between the initial and repeat 9-C, 3D surveys. Reservoir pressure was increased, lowering the effective stress; the fluid properties (density, viscosity and compressibility) were altered; and the reservoir fluids became segregated, forming different areas of varying fluid composition and properties.

These reservoir processes altered the bulk rock properties, thus forming the motivation for reservoir monitoring using time-lapse multicomponent methods.

Vacuum field – 9C4D seismic survey data acquisition

The objectives for monitoring reservoir processes associated with CO2 injection require time-lapse, multi-component seismic data methods that provide resolution and repeatable measurements of the elastic wavefield. Horizontal and vertical resolution, or volume resolution, of the spatial variations in reservoir architecture provides the means for reservoir characterisation. The objective of data acquisition is to design and implement an acquisition program that provides a consistent framework of observations.

Type survey: 3-D, 9-C (time-lapse)
Subsurface bin size: 16.8 m x 16.8 m
Number of receiver locations: 836
Number of channels: 2508
Number of source locations: 616
Total source records: 1848 (616 vertical, 616 N-S horz., 616 E-W horz.)
Type spread: stationary; circular form, ~2500 m diameter
Instrumentation: I/O System II (MRX); 2508 channels, 2 ms sample; 4 s correlated record length
Receiver interval / line spacing: 33 m / 150.1 m
Source interval / line spacing: 33 m / 184.5 m
Receiver array: 3 elements, 1 m spacing inline, 3-C geophones
Source array (P-wave): vertical vibrators, 2 units; 8-120 Hz linear sweep, 10 s duration; 4 sweeps per location, no moveup; ground force phase lock
Source array (S-wave): horizontal vibrators, 1 unit per orientation (1 oriented N-S, 1 oriented E-W); 6-60 Hz sweep, 10 s duration; 4 sweeps per location, no moveup; ground force phase lock

Table 1 Multi-component seismic data acquisition parameters.

Figure 1 9C4D seismic data acquisition grid. Diameter of the source and receiver grid was 2500 m, centered on CVU 97. Key wells and type of well are annotated.

For the design of the Vacuum Field 9C4D seismic survey, the focus area is the San Andres reservoir at the CO2 injection well CVU 97. The RCP Phase VI data acquisition parameters produce an approximate image area at the reservoir zone after migration of a 600 m radius from the centre of the survey, well CVU 97, with a 17 by 17 m subsurface sample. The time-lapse, multi-component seismic surveys were acquired immediately before the CO2 injection and during the injection soak period, before resuming fluid withdrawal from CVU 97. The 9C4D data acquisition parameters are listed in Table 1. The source and receiver surface grid is shown in Figure 1, along with key wells for the CO2 injection experiment. Well CVU 97 (yellow symbol) is the CO2 injection well. Two offset production wells, CVU 197 and 196 (red symbols), are located east and west of CVU 97. Two rows of water injection wells (blue circles) are positioned north and south of CVU 97: the northern row comprises CVU 93, 194 and 94; the southern row, CVU 99, 200 and 100. A downhole 3C sensor was positioned in CVU 200 at a depth of 1000 m.

An Input/Output System II recording system was utilised, recording 2508 channels at a 2 ms sample rate for vibrator operations. The recording spread was stationary, with 836 receiver locations recording 3 channels at each receiver station. A downhole 3-C phone was recorded in the CVU 200 wellbore at a depth of 1000 m using the I/O System II. The resulting subsurface bin size is 17 by 17 m. The spatial sampling of this 1995-vintage RCP 9C4D survey resulted in a trace density of 82,788 traces per square kilometre.

Sources used were P-wave vibrators (Mertz Model 18 Buggies, 2 units per vibrator point, spaced 10 m pad to pad) and S-wave vibrators (Mertz Model M13 shear-wave units). The geophones were OYO three-component (orthogonal) phones with a 10 Hz resonance frequency. For multi-component seismic data acquisition, the orientation of the horizontal vibrator units and receiver transducers was defined in an acquisition coordinate system oriented north, east, downwards (right-hand rule, thumb down). The S-wave, or horizontally actuated, vibrators were the Mertz Model M13 shear-wave units. These vibrators have fixed (non-rotating) baseplate assemblies, such that the force applied to the earth's surface is perpendicular to the direction the truck unit is facing. To maintain consistency and ease of operations, one unit was designated as the 'north-facing' unit to provide east-west baseplate motion. The second unit was either 'west-facing' or 'east-facing', depending upon the direction of traverse. In data processing, the polarity of all field records from the 'east-facing' shear unit was reversed in order to establish one coordinate system of north-south baseplate motion. The repeatability of the seismic data is an important issue, and efforts were undertaken by RCP and the acquisition crew to ensure repeatability. The repeatability of the source units was addressed by the following actions:

Figure 2 Data acquisition grid superimposed on a map of surface topography and Vacuum Field infrastructure. Topography is very gentle, with depressions associated with playa lakebeds.

Figure 3 East-west source line profile across a playa lakebed observed by the downhole 3C sensor at a depth of 1000 m in CVU 200. It represents the downgoing compressional wavefield initiated by the vertical vibrator source, recorded by the vertical component of the downhole sensor. The pair of traces at each source point are the initial and repeat survey source instances. Cross-correlation of these trace pairs yields amplitude and phase information about the P-wave common raypath and source characteristics at each source position.

• P-wave and S-wave vibrators were phase locked to ground force

• Source lines were traversed in the same order and direction for both surveys

• Pad locations were marked with six-inch nails and flagging

• Offset and quadrant from the source stake were measured and noted in the observer's logs

• Shear vibrator pad locations were co-located (unless obstructed)

Data processing

Within the RCP Phase VI 9C4D project at the Vacuum Field, we developed a multi-component data processing sequence designed to provide high resolution for reservoir characterisation while maintaining repeatability for reservoir monitoring. The presence of noise is one barrier to maintaining both resolution and repeatability. Since our onshore project included occupying the same source and receiver locations for the initial and repeat surveys, we analysed 'common trace pairs' to derive surface-consistent corrections between the surveys for both P-wave (PP) and S-wave (SS) imaging. 'Common Trace Pairs' (CTP) in this context represent a pair of raypaths defined by a source location and a receiver location that were co-located for the initial and repeat seismic data acquisition efforts. A CTP raypath can be either pure P-wave (P down, P up) or pure S-wave (S down, S up). Cross-correlation of a common trace pair produces amplitude, phase and frequency information unique to that CTP. Time window selection can constrain the information to above, below or encompassing the reservoir zone. The set of all CTP wavelets can be decomposed into different domains, such as source, receiver, CMP, offset and azimuth.

At Vacuum field, we decomposed the CTP wavelets into source and receiver domains to derive surface consistent corrections between the initial and repeat surveys. The objective was to remove differences between the surveys due to near-surface variations, both spatially and temporally. Decomposition into source and receiver terms provided corrections between the surveys and a set of common source and receiver amplitude spectra for surface consistent deconvolution operator design and application.
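Decomposing the per-pair CTP measurements into surface-consistent source and receiver terms is, at its simplest, a linear least-squares problem: each observed value (here, a time lag) is modelled as the sum of a source term and a receiver term. The following is a minimal sketch on noise-free synthetic lags, not the RCP production algorithm; the station counts and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_src, n_rec = 6, 8
true_src = rng.normal(0.0, 0.004, n_src)   # per-source time terms (s)
true_rec = rng.normal(0.0, 0.004, n_rec)   # per-receiver time terms (s)

# One CTP lag per (source, receiver) pair: lag = source term + receiver term.
pairs = [(s, r) for s in range(n_src) for r in range(n_rec)]
lags = np.array([true_src[s] + true_rec[r] for s, r in pairs])

# Design matrix: one row per pair, a 1 in that pair's source column
# and a 1 in its receiver column.
A = np.zeros((len(pairs), n_src + n_rec))
for i, (s, r) in enumerate(pairs):
    A[i, s] = 1.0
    A[i, n_src + r] = 1.0

# The source/receiver split has a one-parameter ambiguity (a constant can
# move between the two sets); lstsq returns the minimum-norm solution.
x, *_ = np.linalg.lstsq(A, lags, rcond=None)
est_src, est_rec = x[:n_src], x[n_src:]

# The predicted lags reproduce the observations exactly.
pred = np.array([est_src[s] + est_rec[r] for s, r in pairs])
print(np.max(np.abs(pred - lags)))   # effectively zero for this consistent system
```

The same decomposition applies per frequency to the CTP amplitude spectra, yielding the common source and receiver spectra passed to surface-consistent deconvolution.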

The downhole 3C sensor positioned in well CVU 200 provided a measure of the downgoing P-wave and S-wave energy for both the initial and repeat surveys. These downhole data demonstrated that the downgoing wavefield character was dependent on source location, and that the data needed surface-consistent corrections, both spatially and temporally, to remove near-surface distortion. Examination of the downgoing P-wave and S-wave energy recorded by the downhole 3C sensor served to guide the development of the CTP processing flow. Figure 2 shows the data acquisition grid superimposed on the surface topography and oilfield infrastructure. Minor topographic depressions in the project area, or dry 'playa lakes', are variations in the near surface. An east-west profile of downgoing source energy

Figure 4 East-west source line profile across a playa lakebed observed by the downhole 3C sensor at a depth of 1000 m in CVU 200. It represents the downgoing shear wavefield initiated by the horizontal vibrator source (north-south baseplate motion), recorded by the north-south component of the downhole sensor. The pair of traces at each source point are the initial and repeat survey source instances. Cross-correlation of these trace pairs yields amplitude and phase information about the S-wave common raypath and source characteristics at each source position.

Figure 5 East-west source line profile across a playa lakebed observed by the downhole 3C sensor at a depth of 1000 m in CVU 200. It represents the downgoing shear wavefield initiated by the horizontal vibrator source (east-west baseplate motion), recorded by the east-west component of the downhole sensor. The pair of traces at each source point are the initial and repeat survey source instances. Cross-correlation of these trace pairs yields amplitude and phase information about the S-wave common raypath and source characteristics at each source position.

P-wave data processing flow

1 Geometry Description – Build database incorporating both initial and repeat surveys.

2 Source/Receiver Trace Edit – Surface-consistent trace edit based on trace amplitude rms levels decomposed to source and receiver components. Union of trace edit set applied to both initial and repeat surveys.

3 Refraction Static Estimation – Derived from repeat survey and applied to both initial and repeat surveys.

4 Minimum Phase Conversion – Based on vibrator sweep spectrum

5 Q Compensation – Q model derived from near-offset p-wave VSP survey

6 Surface-Consistent Deconvolution (source/receiver domain) – Operators derived from repeat survey and applied to both surveys. Data preconditioning for operator design included first arrival mute, airwave mute and prestack spatial deconvolution to attenuate ambient noise.

7 Cross-correlation of common-trace pairs (p-wave source/vertical component) – Cross-correlation of each unique source – receiver pair between initial and repeat survey. Time – space window constrained to reflection/refraction data above San Andres reservoir zone. Individual trace cross-correlations decomposed into source and receiver components.

8 Apply Source and Receiver Phase Corrections to Match Surveys – Derive and apply correction to match initial survey to repeat survey. For these data the phase correction was a linear phase shift or time lag derived from peak of decomposed cross-correlation wavelets.

9 Statics/Velocity Estimation – Derived from repeat survey, applied to both surveys. Stacking velocities compared to Vrms function from p-wave VSP survey.

10 Individual Trace Edit – Trace edit based on gated individual trace rms amplitude levels. Computed on initial and repeat surveys, the union of the trace edits is applied to both surveys.

11 Surface Consistent Amplitude Corrections – Computed from a TX window containing reflected data above the San Andres reservoir zone. Initial source estimate from downhole 3-C geophone data (sum of squares estimate)

12 CMP Stack – Simple summation (mean stack)

13 FX Filter (Spatial Deconvolution: inline-crossline)

14 Spectral Shaping

15 Bandpass Filter

16 3D Migration – Phase shift algorithm with single Vint velocity model from stacking velocities and VSP survey.

17 Cross-correlation between initial and repeat survey – Cross-correlation of each CMP pair between initial and repeat survey. Time – space window constrained to reflection data above San Andres reservoir zone. Individual trace cross-correlations applied to initial volume prior to interpretation.
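Step 8 of the flow above applies the derived correction as a linear phase shift, i.e. a time lag. Such a shift is conveniently applied as a phase ramp in the frequency domain, which also accommodates fractional-sample lags. A minimal sketch, with an illustrative function name and synthetic trace:

```python
import numpy as np

def apply_time_shift(trace, shift_s, dt):
    """Apply a surface-consistent time shift as a linear phase ramp in the
    frequency domain (handles fractional-sample shifts)."""
    n = len(trace)
    freqs = np.fft.rfftfreq(n, dt)
    spectrum = np.fft.rfft(trace)
    spectrum *= np.exp(-2j * np.pi * freqs * shift_s)  # delay by shift_s
    return np.fft.irfft(spectrum, n)

dt = 0.002
t = np.arange(0, 1.0, dt)
trace = np.exp(-((t - 0.5) / 0.03) ** 2)
shifted = apply_time_shift(trace, 0.004, dt)   # delay by 4 ms (2 samples)
# The shifted peak lands 2 samples later than the original peak.
print(np.argmax(trace), np.argmax(shifted))    # → 250 252
```

In practice the shift applied to each trace is the sum of that trace's decomposed source and receiver lag terms.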

across one of the surface depressions is shown in Figure 3. These traces, recorded by the downhole 3C sensor in CVU 200, are a measure of the downgoing wavefield observed 1000 m below the earth's surface. This profile is sourced by the set of vertical vibrators, and each source point has a pair of traces representing the initial and repeat surveys. Similarity between trace pairs is a qualitative measure of P-wave source repeatability. Wavelet character change and time delay from the general offset trend are an expression of near-surface distortion, and of the need for surface-consistent deconvolution and near-surface static corrections.

Figure 4 shows the same east-west source profile using the horizontally actuated vibrators operating with north-south baseplate motion recorded by the north-south downhole sensor component. In an isotropic medium, this would represent the downgoing SH shear wave polarisation. Again, similarity is observed within a given trace pair and wavelet character distortion is seen associated with the change in near-surface properties due to the dry playa lakebed.

Figure 5 shows the east-west source profile using the horizontally actuated vibrators operating with east-west baseplate

motion, recorded by the east-west downhole sensor component. In an isotropic medium, this would represent the downgoing SV shear wave polarisation. As in Figures 3 and 4, similarity is observed within a given trace pair and wavelet character distortion is seen associated with the change in near-surface properties.

The time-lapse, multi-component data processing flows are shown in Table 2 (P-wave) and Table 3 (S-wave). Specific to time-lapse processing, the general flow is as follows: the initial survey is acquired and immediately processed using surface-consistent algorithms. The second, or repeat, survey is acquired, and its data acquisition geometry is merged with that of the initial survey. All common trace pairs between the two surveys are cross-correlated and decomposed into source and receiver components to derive phase-correction filters to match the two surveys. The surface-consistent operators from the first survey (static corrections, deconvolutions) are applied to the repeat survey. Low signal-to-noise ratio (S/N), or 'bad', traces are identified within each survey, and the union of the set of bad traces is removed from both the initial and repeat survey volumes. Any aberration

Table 2 P-wave data processing flow.

between the recording geometries of the multiple surveys is reconciled such that the recording geometry is held constant. This processing sequence is designed under the following conditions:

• Source and receiver locations between the repeated surveys are identical (±1 m).

• Near-surface elastic properties may vary, temporally and spatially. Variations will be removed using multi-component surface-consistent deconvolution and multicomponent common-trace pair analysis, complex filter design and complex filter application.

• Any variation in the elastic properties of the subsurface above the reservoir zone of interest is to be removed using multi-component common-trace pair analysis, complex filter design and complex filter application.

• Instrumentation (recording system, source characteristics and receiver characteristics) are identical. If not, amplitude and phase differences can be deterministically or statistically derived and applied using the common trace pair analysis.

P-wave and S-wave data volumes were processed using the processing methodology referenced in Table 2 and Table 3. We processed the vertical source – vertical receiver component volume (P-wave or '1C') and the horizontal source – horizontal receiver component volumes (shear-wave or '4C'). The four horizontal-component S-wave volumes (two diagonal and two off-diagonal components) were rotated, using Alford rotation (Alford, 1986), from the data acquisition coordinate system (north and east) to S1 ('fast' shear wave polarisation) and S2 ('slow' shear wave polarisation) components, along with the two off-diagonal volumes (D12 and D21). The implied assumption is

S-wave data processing flow

1 Geometry description – Build database incorporating both initial and repeat surveys.

2 Source/Receiver Trace Edit – Surface-consistent trace edit based on trace amplitude rms levels decomposed to source and receiver components. Union of trace edit set applied to both initial and repeat surveys.

3 Initial Static Estimation – Derived from repeat survey and applied to both initial and repeat surveys. Source and receiver statics are computed from source and receiver stacks on a shallow reflector.

4 Minimum Phase Conversion based on vibrator sweep spectrum

5 Q-Compensation – Q model derived from near-offset s-wave VSP surveys (S1, S2)

6 Four-Component Rotation (Alford Method) – Data are rotated from the data acquisition coordinate system (V1-NS, V2-EW, H1-NS and H2-EW) into S1 (N118°E), S2 (N028°E), D12 and D21 components.

7 Cross-correlation of common-trace pairs (S1, S2) – Cross-correlation of each unique source-receiver pair between initial and repeat survey. Time-space window constrained to reflection/refraction data above San Andres reservoir zone.

8 Individual trace cross-correlations decomposed into source and receiver components for both S1 and S2 data volumes.

9 Estimate of Surface Consistent Amplitude Spectra – Derived from each source and receiver cross-correlation wavelet for both S1 and S2 datasets. Pass source/receiver spectra to surface consistent deconvolution algorithm.

10 Surface Consistent Deconvolution (source/receiver domain) – Operators based on corrected amplitude spectra from common trace pair cross-correlations. Operators designed from both S1 and S2 data volumes and applied to both initial and repeat surveys. Data preconditioning for operator design included first arrival mute and the ambient noise reduction inherent in the cross-correlation and trace-summing process.

11 Cross-correlation of common-trace pairs (S1, S2) – Cross-correlation of each unique source-receiver pair between initial and repeat survey for both S1 and S2 volumes. Time-space window constrained to reflection/refraction data above San Andres reservoir zone. Individual trace cross-correlations decomposed into source and receiver components.

12 Apply Source and Receiver Phase Corrections to Match Surveys – Derive and apply correction to match initial survey to repeat survey. For these data the phase correction was a linear phase shift or time lag derived from peak of decomposed cross-correlation wavelets.

13 Statics/Velocity Estimation – Derived from repeat survey, applied to both surveys. Stacking velocities compared to Vrms function from s-wave VSP surveys.

14 Individual Trace Edit – Trace edit based on gated individual trace rms amplitude levels. Computed on initial and repeat surveys, the union of the trace edits is applied to both surveys.

15 Surface-Consistent Amplitude Corrections – Computed from a TX window containing reflected data above the San Andres reservoir zone. Initial source estimate from downhole 3-C geophone data (SH component)

16 CMP Stack – Simple summation (mean stack)

17 FX Filter (Spatial Deconvolution): TX Filter (inline-crossline)

18 Spectral Shaping

19 Bandpass Filter – Lowcut 8 Hz, 18 dB/octave; Highcut 24 Hz, 72 dB/octave

20 3D Migration – Phase shift algorithm with single Vint velocity model from stacking velocities and VSP survey.

21 Cross-correlation between initial and repeat survey (S1 and S2 volumes) – Cross-correlation of each CMP pair between initial and repeat survey. Time-space window constrained to reflection data above San Andres reservoir zone. Individual trace cross-correlations applied to initial volume prior to interpretation.

Table 3 S-wave data processing flow.

that these data are propagating within an incidence angle range of approximately 0° to 30° in an azimuthally anisotropic (horizontal transverse isotropy, or HTI) coordinate system. Project VSP data, downhole 3C sensor analysis, borehole breakouts, and Alford rotation of the surface 4C data were used to determine the S1 direction (N118°E, or approximately east-southeast) and S2 direction (N028°E) for the entire 9C4D survey. The observed S1 polarisation direction is in excellent agreement with the present-day maximum horizontal stress field.

The shear wave processing included the off-diagonal components to allow 4C ‘Alford’ rotation analysis (Alford, 1986). We maintained vector fidelity of the S1, S2, D12 and D21 components in data processing.
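Alford rotation treats the four horizontal components as a 2x2 data matrix and rotates it on both the source and receiver sides, D' = R D R^T, so that the S1 and S2 modes land on the diagonal and the off-diagonal energy is minimised. A minimal sketch on a synthetic two-mode wavefield; the angle, wavelets and function name are illustrative assumptions, not the Vacuum Field data:

```python
import numpy as np

def alford_rotate(xx, xy, yx, yy, theta_deg):
    """Rotate the 2x2 multicomponent data matrix [[xx, xy], [yx, yy]]
    by theta degrees on both source and receiver sides: D' = R D R^T.
    Returns the rotated (s1, d12, d21, s2) components."""
    th = np.radians(theta_deg)
    c, s = np.cos(th), np.sin(th)
    s1  =  c * c * xx + c * s * xy + c * s * yx + s * s * yy
    d12 = -c * s * xx + c * c * xy - s * s * yx + c * s * yy
    d21 = -c * s * xx - s * s * xy + c * c * yx + c * s * yy
    s2  =  s * s * xx - c * s * xy - c * s * yx + c * c * yy
    return s1, d12, d21, s2

# Synthetic check: fast and slow wavelets polarised along natural axes
# rotated 28 degrees from the acquisition frame (angle is illustrative).
t = np.arange(0, 0.5, 0.002)
fast = np.sin(2 * np.pi * 15 * t) * np.exp(-30 * t)
slow = np.roll(fast, 10)                 # slow mode delayed by birefringence
zero = np.zeros_like(t)
xx, xy, yx, yy = alford_rotate(fast, zero, zero, slow, -28.0)  # to acquisition frame
s1, d12, d21, s2 = alford_rotate(xx, xy, yx, yy, 28.0)         # back to natural frame
print(np.max(np.abs(d12)), np.max(np.abs(d21)))  # off-diagonal energy effectively zero
```

Scanning theta for the angle that minimises off-diagonal energy is one way the S1 azimuth can be estimated from surface 4C data.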

In both the P-wave and S-wave data processing flows, we sought to minimise any data-dependent processes between the surveys in an effort to improve repeatability. Our approach is to

Figure 6 Migrated stack images. Profiles are north-south through wells CVU 97 and CVU 200. P-wave – initial survey (top left), P-wave – repeat survey (top right), S-wave (S1) – initial (middle left), S-wave (S1) – repeat (middle right), S-wave (S2) – initial (lower left), and S-wave (S2) – repeat (lower right).

derive a set of surface-consistent corrections from one survey, apply to both surveys, then use elements of the CTP method to correct one survey to the other in a surface-consistent manner. For the P-wave data (Table 2), the surface-consistent deconvolution was derived from the repeat survey and applied to both surveys (step 6), followed by CTP cross-correlation and decomposition into source and receiver terms (step 7), and CTP application (step 8). Note that the near-surface static corrections followed the same concept (steps 3 and 9). A post-migration, post-stack CTP application (step 17) in the CMP domain is essentially a match filter, commonly implemented in time-lapse processing.

The S-wave data processing flow (Table 3) is conceptually the same, but with elements related to the vector, as opposed to scalar, processing of shear wave data. Our flow is designed to maintain vector fidelity, meaning shear wave vector magnitude and orientation are not compromised by incorrect amplitude and phase corrections between the four components (S1, S2, D12 and D21). Since the S-wave observations from the downhole sensor (Figures 4 and 5) indicated the need for significant near-surface corrections, we combined Alford rotation (Alford, 1986), Common Trace Pair (CTP) processing and surface-consistent deconvolution. Specific elements of this combination address vector fidelity. The surface S-wave data were rotated from the acquisition coordinate system (N-S, E-W) into the general S1 'fast' polarisation orientation (N118°E) and S2 'slow' polarisation (N028°E); see Table 3, step 6. Next is CTP cross-correlation and decomposition into source and receiver terms for both the S1 and S2 data (steps 7 and 8). Amplitude spectra derived from the source and receiver terms are then passed to a surface-consistent deconvolution algorithm to compute source and receiver corrections based on the minimum phase assumption (steps 9 and 10).

To maintain vector fidelity, we apply the S1 and S2 source and receiver deconvolution corrections as follows:

• S1 'fast' volume: source term S1; receiver term S1

• D12 off-diagonal volume: source term S1; receiver term S2

• D21 off-diagonal volume: source term S2; receiver term S1

• S2 'slow' volume: source term S2; receiver term S2
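The component-to-operator mapping above can be expressed directly in code. The operators below are illustrative placeholder filters, not the operators actually derived at Vacuum Field; the point is that each volume is convolved with the source and receiver terms of the polarisations it carries:

```python
import numpy as np

def apply_sc_decon(trace, src_op, rec_op):
    """Apply surface-consistent source and receiver operators by
    convolution, truncated to the input trace length."""
    out = np.convolve(trace, src_op)[:len(trace)]
    return np.convolve(out, rec_op)[:len(trace)]

# Placeholder 3-point operators standing in for the terms derived from
# the S1 ('fast') and S2 ('slow') decompositions.
ops = {"S1": np.array([1.0, -0.3, 0.05]),
       "S2": np.array([1.0, -0.4, 0.10])}

# Vector-fidelity mapping from the list above: each volume gets the
# source term and receiver term of the polarisation it carries.
mapping = {"S1": ("S1", "S1"), "D12": ("S1", "S2"),
           "D21": ("S2", "S1"), "S2": ("S2", "S2")}

rng = np.random.default_rng(1)
volumes = {name: rng.normal(size=200) for name in mapping}   # stand-in traces
corrected = {name: apply_sc_decon(tr, ops[mapping[name][0]], ops[mapping[name][1]])
             for name, tr in volumes.items()}
print(sorted(corrected))   # → ['D12', 'D21', 'S1', 'S2']
```

Keying the operator choice to the component, rather than applying one operator to all four volumes, is what preserves the relative amplitudes and phases on which the Alford rotation and birefringence analysis depend.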

After deconvolution, we repeat the CTP processing to derive corrections between the surveys (steps 11 and 12). As in the previous step, corrections are derived from the S1 and S2 volumes, then applied across the four components (S1, D12, D21 and S2).

Steps for surface-consistent statics (steps 3 and 13) and amplitude corrections (step 15) follow the same procedure: derivation from S1 and S2, application to S1, D12, D21 and S2 to maintain vector fidelity. All four prestack volumes are then processed through CMP stack, FXY filtering (optional), spectral shaping and filtering (steps 16-19). The final process is post-stack migration. The migrated S1, D12, D21 and S2 shear wave volumes are available for interpretation and S-wave birefringence analysis.

The data processing flow (geometry, preprocessing, CMP stack, post-stack migration) is primitive by today's (2024) standards, and these data would certainly benefit from reprocessing using current state-of-the-art multi-component vector processing techniques. In this 1995 effort, we simply rotated the shear wave data into the S1 'fast' and S2 'slow' coordinate system, then applied scalar processes to form the migrated shear wave volumes.

Results

Vertical seismic sections from the post-stack migrated data volumes are displayed in Figure 6. The vertical time sections are north-south profiles through wells CVU 97 and CVU 200. Figure 6a is a P-wave profile from the initial, or pre-CO2 injection, survey. Significant horizons are annotated. The reflection time of the top of the San Andres is 0.68 seconds at CVU 200, based on the VSP data and synthetic trace ties. Approximately 150 m of reservoir, encompassing 48 m of Upper San Andres and 102 m of Lower San Andres, represents approximately 52 ms of two-way travel time. The same north-south traverse from the repeat, or post-CO2 injection, P-wave survey is included

for comparison (Figure 6b). A bandpass filter, 10-72 Hz., is applied to both data volumes. The P-wave data are visually very similar between the initial and repeat surveys. Lithologic boundaries with higher reflectivity. Rustler, Yates and Glorietta, are essentially identical.

Figures 6c and 6d show the S1 (‘fast’) polarisation for the S-wave data from the initial and repeat surveys. The Rustler, Yates, Queen and San Andres are annotated. Comparison with the initial survey P-wave data (Figures 6a and 6b) yields observations about bandwidth, reflectivity and reflection continuity. The bandwidth of the P-wave migrated data is approximately 10-72 Hz, while the S-wave bandwidth is much lower, approximately 10-24 Hz. Bandwidth, in this context, is the portion of the signal spectrum that is repeatable between the initial and repeat surveys using the processing flow described previously. A time delay between the S1 and S2 shear volumes due to shear-wave birefringence can be seen at the Rustler and Yates horizons.
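Bandwidth in this repeatability sense can be quantified with a segment-averaged magnitude-squared coherence between co-located traces from the two surveys. The sketch below is an illustration only; the segment count and the 0.8 coherence threshold are assumed values, not parameters from the original study.

```python
import numpy as np

def repeatable_band(tr1, tr2, dt, n_seg=8, thresh=0.8):
    """Estimate the frequency band (Hz) over which two traces are
    repeatable, using Welch-style segment-averaged coherence."""
    seg = len(tr1) // n_seg
    p11 = np.zeros(seg // 2 + 1)
    p22 = np.zeros(seg // 2 + 1)
    p12 = np.zeros(seg // 2 + 1, dtype=complex)
    for i in range(n_seg):
        a = np.fft.rfft(tr1[i * seg:(i + 1) * seg])
        b = np.fft.rfft(tr2[i * seg:(i + 1) * seg])
        p11 += np.abs(a) ** 2
        p22 += np.abs(b) ** 2
        p12 += a * np.conj(b)
    coh = np.abs(p12) ** 2 / (p11 * p22 + 1e-20)
    freqs = np.fft.rfftfreq(seg, dt)
    band = freqs[coh > thresh]
    return (band.min(), band.max()) if band.size else (0.0, 0.0)

# Identical initial/repeat traces are coherent out to Nyquist (250 Hz at 2 ms).
rng = np.random.default_rng(0)
trace = rng.standard_normal(512)
f_lo, f_hi = repeatable_band(trace, trace.copy(), dt=0.002)
```

On real time-lapse pairs the band where coherence stays above the threshold shrinks to the repeatable part of the spectrum, which is the quantity described in the text.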

Conclusions

A comprehensive time-lapse, multi-component data processing flow produced excellent P-wave and S-wave image volumes for reservoir monitoring of a CO2 injection test at Vacuum Field. Common Trace Pair (CTP) processing improved repeatability by deriving and applying surface-consistent, prestack corrections between the initial and repeat surveys. Vector fidelity is maintained in the S-wave data processing flow. Although this study at Vacuum Field took place 30 years ago, this data processing methodology is relevant today and is applicable to hydrocarbon EOR, CCUS, geothermal and brine extraction projects: any project where subsurface rock properties vary spatially and temporally due to reservoir processes. CTP processing can be adapted to modern data processing algorithms, including prestack time migration, prestack depth migration, and 5D trace interpolation and regularisation techniques. CTP methods can also be employed for combinations of dense and sparse spatial sampling, including downhole sensor and fibre-optic configurations.

Acknowledgements

I would like to thank the Colorado School of Mines Reservoir Characterization Project (RCP), Dr Tom Davis, Dr Bob Benson, fellow RCP students and RCP industry partners. Data-driven research within RCP developed time-lapse multi-component methods that continue to be applicable today. Particular thanks go to Scott Wehner and Michail Raines with Texaco, operator of Vacuum Field and the CO2 injection test in 1995.

References

Alford, R.M. [1986]. Shear data in the presence of azimuthal anisotropy: Dilly, Texas. 56th Annual International SEG Meeting, Expanded Abstracts, 476-479.

Roche, S.L. [1997]. Time-lapse, multicomponent, three-dimensional seismic characterization of a San Andres shallow shelf carbonate reservoir, Vacuum Field, Lea County, New Mexico. PhD Thesis, Colorado School of Mines, Department of Geophysics.

Wilson, J.L. [1975]. Carbonate Facies in Geologic History. Springer. ISBN 0-387-07236-5.

Attenuation of non-compressional energy in ocean bottom node data

Tim Seher1*, Hassan Masoomzadeh1, Yong Ren1, Fons ten Kroode1, Mark Roberts1, Alexander Kritski2, Harald Westerdahl2, Mark Thompson2 and Åsmund Sjøen Pedersen2 use rotational measurements from a new type of ocean bottom node for the attenuation of non-compressional energy.

Abstract

Attenuation of non-compressional energy, such as shear body waves and scattered surface waves, is a critical step in the processing of ocean bottom node seismic data, because this unwanted energy interferes with the desired compressional signals. In this paper, we first review conventional methods for attenuating non-compressional energy in vertical geophone recordings, including cooperative denoising procedures and adaptive subtraction techniques. We then introduce the use of rotational measurements from a new type of ocean bottom node for the attenuation of non-compressional energy. Our results demonstrate that rotational data improve the attenuation of the non-compressional energy, particularly for flat events at small offsets where conventional methods struggle. Finally, we explore the potential of machine learning to reduce the computational cost and human effort involved in the denoising workflow. Overall, the combination of conventional denoise techniques and rotational data delivers robust results. The introduction of machine learning provides a way forward that leverages the strengths of existing methods and reduces the cost of seismic data processing.

Introduction

The attenuation of undesired signals is a continuing issue in the processing of ocean bottom node (OBN) seismic data. An example is the presence of non-compressional energy on vertical geophone records (Paffenholz et al. 2006a, 2006b), which limits our ability to jointly process hydrophone and geophone data. This energy is linked to the presence of both shear body waves and scattered surface waves. It has been called interchangeably shear-wave noise, Vz noise (referring to noise on the vertical geophone), or more recently, non-compressional energy, a term that we will adopt here. Over the years, a variety of processing techniques have been developed to attenuate non-compressional energy in vertical geophone recordings. The most common approach for attenuating non-compressional energy involves a cooperative denoising procedure between the vertical geophone and the hydrophone in different transform domains such as the Tau-p transform domain (Craft and Paffenholz 2007; Poole et al. 2012), the wavelet transform domain (Yu et al. 2011; Peng et al. 2013; Ren et al. 2020), and the curvelet transform domain (Yang et al. 2020; Kumar et al. 2021; Ren et al. 2022; Kumar et al. 2024). Recently, machine learning (ML) has been applied to accelerate non-compressional energy attenuation in vertical geophone recordings (Sun et al. 2023; Seher et al. 2024). However, a successful application of ML-based noise attenuation requires high-quality training data. New rotational sensors on the seafloor (Pedersen et al. 2023; Kritski et al. 2024; Masoomzadeh et al. 2024) are expected to provide appropriate training data for this purpose.

1 TGS | 2 Equinor

* Corresponding author, E-mail: tim.seher@tgs.com

DOI: 10.3997/1365-2397.fb2024106

In this paper, we first review four solutions for the attenuation of non-compressional energy in OBN recordings, then demonstrate that rotational records from a prototype receiver can be used to improve the denoising process, and finally show how ML can be utilised to reduce the computational cost and the human effort involved in the noise attenuation workflow.

Conventional methods for the attenuation of non-compressional energy

A major problem with separating hydrophone and geophone recordings into upgoing and downgoing wavefields is the presence of strong non-compressional energy in the vertical geophone records. Different methods have been developed to attenuate this unwanted energy in OBN recordings. These methods commonly exploit two different properties of multi-component data. The first property is that non-compressional energy is not detected by the hydrophone. This property allows the attenuation of non-compressional energy in the vertical geophone data by using the hydrophone data as a guide in a cooperative denoising process. The second property is that horizontal geophones are more sensitive to non-compressional motions. Using the horizontal recordings as noise models allows for the attenuation of this noise by simultaneous adaptive subtraction. Here, we will be comparing four methods for the attenuation of non-compressional energy based on the 2D dual-tree complex wavelet transform, both 2D and 3D curvelet transforms, and the simultaneous adaptive subtraction of data recorded by horizontal geophones. We start by showing results for one of these methods based on the 3D curvelet transform (Figure 1). The curvelet domain is attractive for noise attenuation because it allows a detailed frequency- and orientation-dependent representation of the data, where noise and signal can be distinguished. To exploit this property, we first transform both hydrophone and geophone records of a receiver gather from a survey in the Gulf of Mexico into the 3D curvelet domain. In this domain, thresholding based on the ratio of the curvelet coefficients allows the conservation of events that are present in both hydrophone and vertical geophone data and then the attenuation of those events that are only present in the vertical geophone records. In the curvelet domain, the algorithm naturally honours the dip and frequency content of the input data. The thresholding step yields a noise model that can be inverse transformed back to the offset domain. Subtraction of the noise model from the input data removes non-compressional energy and gives the desired compressional signal. In addition, we would like to point out that the curvelet domain is not only suitable for the attenuation of non-compressional energy, but also facilitates additional processing steps including deblending, data-dependent obliquity correction, and calibration of hydrophone and geophone measurements.
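The ratio-based cooperative thresholding can be illustrated in a simpler transform domain. The sketch below substitutes a 2D FFT for the curvelet transform and uses an assumed threshold ratio; it is a toy illustration of the cooperative principle, not the production algorithm.

```python
import numpy as np

def cooperative_denoise(hyd, vz, ratio=3.0):
    """Cooperative denoising sketch: transform both records, flag
    coefficients present on the geophone but absent on the hydrophone
    as non-compressional noise, invert the noise model and subtract.
    A 2D FFT stands in for the curvelet transform used in production."""
    h_coef = np.fft.fft2(hyd)
    v_coef = np.fft.fft2(vz)
    noise_mask = np.abs(v_coef) > ratio * np.abs(h_coef)
    noise = np.real(np.fft.ifft2(np.where(noise_mask, v_coef, 0.0)))
    return vz - noise, noise

# Synthetic gather: one dipping "compressional" event seen by both sensors,
# plus a stronger event on the geophone only (the non-compressional noise).
n = 64
x, t = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
signal = np.cos(2 * np.pi * (2 * x + 3 * t) / n)
noise_in = 5.0 * np.cos(2 * np.pi * (7 * x + 10 * t) / n)
hyd = signal
vz = signal + noise_in
denoised, noise_est = cooperative_denoise(hyd, vz)
```

Because the synthetic noise occupies transform coefficients absent from the hydrophone, the mask isolates it exactly; real curvelet implementations add soft thresholding and dip- and frequency-dependent parameters on top of this principle.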

To better understand the benefits and drawbacks of the methods described above, we apply these methods to the data from one receiver and compare the resulting noise models (Figure 2). We start by comparing methods based on both 2D wavelet and 2D curvelet transforms, which internally use a similar thresholding scheme. Two passes of the 2D curvelet method (in both inline and crossline directions) yield a more coherent noise model than two passes of the 2D wavelet transform method (Figures 2a&b). Furthermore, our 2D curvelet-based method better preserves the desired signals. This may be due to a finer dip sampling in the curvelet domain. A comparison of two passes of the 2D curvelet approach with a full 3D curvelet approach (Figures 2b&c) shows that the 3D method better preserves event continuity and leaves fewer artifacts and less noise. Finally, we compare the 3D curvelet method with simultaneous adaptive subtraction. In this approach, we adaptively subtract the two horizontal geophones from the vertical geophone while using the hydrophone data to protect the desired signal. This comparison shows that both approaches yield noise models with coherent shear events (Figures 2c&d). However, the 3D curvelet noise model has captured additional incoherent energy.

Attenuation of non-compressional energy using rotational measurements

Conventional OBN data processing involves the separation of upcoming and downgoing wavefields. A successful separation requires high-quality compressional signals captured in both pressure and vertical motion records. A frequent problem is that the vertical sensor captures non-compressional energy corresponding to both shear and surface waves. This energy needs to be attenuated prior to acoustic wavefield separation. The conventional approaches described in the previous section are successful for dipping events at large offsets but are less effective in the near offset zone for flat events. That is because around the apex both compressional and non-compressional events appear flat, and therefore become less distinguishable in the transform domain.

A conventional OBN survey contains data from four sensors: a hydrophone to record pressure, and three orthogonal geophones to record particle velocity. During the Amendment 2 OBN survey, in addition to the conventional four-component nodes, we acquired seismic data using two test nodes equipped with a new sensor type (Pedersen et al. 2023; Masoomzadeh et al. 2024). This new generation node contains a hydrophone and six additional sensors. These six sensors are made of piezoelectric crystals mounted on the faces of a solid cubic frame, holding a metal sphere acting as an inertial mass. The crystals and the electronic circuit attached to them are designed to record voltages proportional to particle acceleration in a specific direction. Crucially, a summation of output voltages from two parallel crystals on opposite faces of the cubic frame provides one component of the translational acceleration vector, while a subtraction of those outputs provides a component of the rotational acceleration vector in a perpendicular direction. In combination, this new generation node provides measurements of pressure, the full translational acceleration vector, and the full rotational acceleration vector.

Figure 1 3D curvelet denoising uses the hydrophone record (a) to attenuate converted body waves and surface waves in the vertical geophone record (b). This process decomposes the input geophone data into a signal model (d) and a noise model (c). The figures shown here come from a conventional four-component OBN and were created by averaging traces for a given offset within a 45° azimuth angle range.
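The sum-and-difference readout of a crystal pair can be written down directly. In this sketch the voltages are assumed to be already calibrated to acceleration units, and the lever arm between opposite faces is a hypothetical value; real scale factors depend on the instrument.

```python
def decode_crystal_pair(v_a, v_b, lever_arm):
    """Decode one crystal pair: the sum (here averaged) gives translational
    acceleration along the crystal axis; the difference gives rotational
    acceleration about a perpendicular axis (scale factors are assumed)."""
    translational = 0.5 * (v_a + v_b)
    rotational = (v_a - v_b) / (2.0 * lever_arm)
    return translational, rotational

# A motion with translational acceleration 2.0 m/s^2 and rotational
# acceleration 0.3 rad/s^2 seen through a hypothetical 0.05 m lever arm:
lever = 0.05
v_a = 2.0 + lever * 0.3      # crystal on one face
v_b = 2.0 - lever * 0.3      # crystal on the opposite face
trans, rot = decode_crystal_pair(v_a, v_b, lever)
```

Pure translation produces identical voltages on both crystals (zero difference), while pure rotation produces equal and opposite voltages (zero sum), which is how one pair separates the two motion types.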

An interesting property of the acquired rotational data is the absence of compressional energy in these recordings. The reason is that a pressure signal propagates by displacing particles in the propagation direction, meaning that those particles do not experience any rotations. For a better understanding, imagine holding a sponge ball in the palm of your hand, and an imaginary multi-sensor receiver is present inside the ball. You may try the following actions to simulate compressional, shear and surface waves. First, squeeze the ball while maintaining its position and orientation. A hydrophone can tell you how firmly you are squeezing. Second, move the ball around without squeezing or changing its orientation. A set of orthogonal geophones can tell the speed and direction of those translational motions. Finally, roll the ball without squeezing or changing its location. A conventional node is oblivious to those rotational motions. Hence, the new sensor has been constructed to measure both translational and rotational acceleration vectors.

In the above experiment, we may notice that squeezing the ball does not change its orientation. This implies that pressure signals should not be detected by a sensor that is designed to detect only rotational motions. Therefore, acquiring rotational data is particularly helpful for the purpose of separating compressional and non-compressional events appearing in the vertical translational component. Although the rotational sensors have successfully captured non-compressional energy, the recorded rotational signals are not identical to the non-compressional signals captured in the vertical translational record. Rotational records may be considered as spatial derivatives of translational records. In theory, we need to convert horizontal rotational data (i.e., rotations around the two horizontal axes) to simulate vertical translational data. We derived a formula for this conversion (Masoomzadeh et al. 2024) based on a flat earth assumption. Because of this assumption, it is only kinematically correct. To address the dynamic variations, we used simultaneous adaptive subtraction to optimally match the two horizontal rotational components to the vertical translational component and subtracted them from it.

Figure 2 Noise models estimated using a two-pass 2D wavelet transform (a), a two-pass 2D curvelet transform (b), a single pass 3D curvelet transform (c), and simultaneous adaptive subtraction of horizontal geophones from the vertical geophone (d). The arrows highlight areas where the 3D approach has removed more coherent energy than the two-pass 2D approach. The noise models correspond to the data shown in Figure 1, and traces were averaged for a given offset within a 45° azimuth angle range.
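The simultaneous adaptive subtraction step can be sketched as a joint least-squares design of short matching filters, one per rotational component. The filter length is an assumption, and the hydrophone-guided signal protection used in practice is omitted for brevity.

```python
import numpy as np

def adaptive_subtract(data, noise_refs, flen=11):
    """Jointly design short matching filters (one per noise reference,
    implemented here as lagged copies) that best fit the references to
    the data in a least-squares sense, then subtract the fitted noise."""
    cols = []
    for ref in noise_refs:
        for lag in range(-(flen // 2), flen // 2 + 1):
            cols.append(np.roll(ref, lag))
    design = np.array(cols).T                 # (n_samples, n_refs * flen)
    coef, *_ = np.linalg.lstsq(design, data, rcond=None)
    fitted = design @ coef
    return data - fitted, fitted

# Vertical component built from two rotational references with unknown
# scalings and a small time shift; adaptive subtraction removes it.
rng = np.random.default_rng(1)
rot_x = rng.standard_normal(512)
rot_y = rng.standard_normal(512)
vz = 0.7 * np.roll(rot_x, 2) + 0.4 * rot_y
residual, fitted = adaptive_subtract(vz, [rot_x, rot_y])
```

Solving for both components jointly, rather than one after the other, is what makes the subtraction "simultaneous": correlated energy shared by the two references is partitioned consistently between their filters.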

To illustrate the advantages of the new data, we first present a data example together with the results from the proposed processing sequence (Figures 3&4). Comparing the hydrophone and vertical acceleration (Figures 3a&b) shows the presence of significant non-compressional energy on the vertical receiver in addition to the desired compressional energy. Comparing the vertical acceleration and the horizontal rotation shows that the rotational sensors capture non-compressional energy exclusively. This property makes the rotational data an ideal noise model for the adaptive subtraction (Figure 4b).

Comparing the results of the adaptive subtraction of the horizontal rotational components with the results of the previously presented 3D curvelet method allows interesting insights (Figures 4a&b). The adaptive subtraction method successfully attenuates non-compressional energy in the near-offset zone, where the curvelet method is less successful. The curvelet method successfully attenuates dipping non-compressional events, but it is less successful at attenuating flat parts of those undesired events at near offsets. A combination of the two methods (Figures 4c&d) removes more noise than each individual method.

Figure 3 Four of the seven recorded components from the novel OBN receiver deployed in the Gulf of Mexico. The hydrophone, vertical accelerometer, and the sensors measuring rotation around the x- and y-axis of the instrument were used in our noise attenuation processing sequence. The data correspond to a 45° stack for positive or negative offsets, respectively. Hydrophone, acceleration and rotation data have different ranges and are scaled independently.

Figure 4 Vertical acceleration after application of 3D curvelet denoise only (a), after simultaneous adaptive subtraction of the rotational measurements only (b), and the combined application of both methods (c). The difference between only applying 3D curvelet denoise and applying both adaptive subtraction and curvelet denoise demonstrates the improvement possible by combining the two techniques (d). The differences (d) are scaled up by a factor of 5. The results correspond to the data shown in Figure 3.

Role of machine learning in non-compressional energy attenuation

The conventional methods of non-compressional energy attenuation presented above allow separating compressional and non-compressional signals in OBN data. However, these methods can be hard to parameterise for the user and costly to apply in terms of computational resources. In recent years, ML solutions have been deployed for different seismic processing tasks including the attenuation of non-compressional energy in OBN data (Sun et al. 2023; Seher et al. 2024). These ML solutions allow significant reduction of computational cost, if the up-front cost of training the ML network is ignored. In addition, pre-trained networks minimise the human effort during processing.

To illustrate the potential of ML in the attenuation of non-compressional energy, we trained a network using seismic data from two regions in the North Sea including the area North of Alvheim Krafla Askja (NOAKA). We then applied the denoising network to a line of receivers from the NOAKA region that was not included as part of the training data. Comparing the ML results with the curvelet results for a single holdout receiver (Figure 5) allows a few interesting observations. First, the ML solution has not learned the transform artifacts apparent in the curvelet results. Second, the ML results are more conservative than the curvelet results: we observe less primary damage but more high-frequency noise in the signal display. Furthermore, the brute stack results for an entire receiver line (Figures 6a&b) demonstrate that the ML solution has successfully attenuated a significant amount of noise. Comparing the stacks of noise models for the curvelet and ML solutions (Figures 6c&d) demonstrates again that the ML solution is more conservative than the training data. Interestingly, the ML solution has removed additional noise at the beginning of the profile that was not removed by the curvelet method. This behaviour is due to the diversity of the training data and would be difficult to imitate by a human.

The conventional noise attenuation methods presented here can be used to create high-quality training data, which allows progress both in the short and long term. In the short term, training data can be created for each experiment separately, using a subset of receivers. Application of this customised network to all receivers for a single experiment has the potential to bring down the computational cost. In the long term, with the availability of sufficient training data, we may be able to utilise a generalised network, at least as part of a fast-track processing workflow. Finally, we have demonstrated that non-compressional energy attenuation is improved when the nodes are equipped with rotational sensors. The extra measurements of particle motions at the seabed can help to improve noise attenuation, and thereby provide superior training data for ML applications. This is of particular interest because we may only have to deploy the rotational sensors on a sparse grid to achieve the goal of improved denoising. The use of rotational data for ML applications will be a topic of future research.

Figure 5 Comparison of 2D curvelet non-compressional energy attenuation and ML-based noise attenuation for a hold-out receiver. The curvelet and ML noise models (b&c) are qualitatively similar and a comparison of the signal models (e&f) to the vertical geophone (d) shows significant noise reduction for both methods.

Figure 6 Comparison of a brute stack before and after ML denoise (a&b). The differences between curvelet and ML denoise (c&d) show that ML denoise is overall more conservative but can be more flexible with respect to variable noise content. The noise stacks are scaled by a factor of 5 compared to the signal stacks.

Conclusions

Removing unwanted non-compressional energy such as surface waves and shear body waves from vertical motion recordings in OBN data is a critical step in acoustic wavefield processing, because this energy often interferes with the desired compressional signals. In this paper, we first reviewed conventional methods of attenuating this energy while highlighting their strengths and weaknesses. We then presented two significant advances: the use of rotational data and the application of ML methods. While the conventional methods are effective overall in attenuating non-compressional energy, they struggle with the attenuation of flat events at near offsets. This shortcoming can be addressed by using a new type of receiver that measures rotational accelerations. These rotational measurements can be treated as a noise model and adaptively subtracted from the translational data. This method allows the attenuation of non-compressional events that are not distinguishable based on dip or frequency. Furthermore, combining conventional techniques with adaptive subtraction of rotational data provides superior denoising results. Finally, we explored the potential of ML for the attenuation of non-compressional energy and argued that ML can significantly reduce the computational cost and human effort. Crucially, we demonstrated the applicability of ML solutions for non-compressional energy attenuation as part of fast-track processing and showed its potential to outperform a human data processor. In general, many ML techniques require the availability of high-quality and diverse training data. Relying on conventional processing solutions to generate these training data introduces subjectivity and error sources. Directly measuring the desired noise labels using rotational measurements has the promise of delivering superior training data and hence improving the noise attenuation results.

Acknowledgements

We are grateful to the many colleagues who contributed to this research. We would also like to thank TGS management and our joint venture partner SLB for permission to show the data.

References

Craft, K.L. and Paffenholz, J. [2007]. Geophone noise attenuation and wavefield separation using a multidimensional decomposition technique. SEG Technical Program, Expanded Abstracts, 2630-2634. https://doi.org/10.1190/1.2793013

Kritski, A., Westerdahl, H., Pedersen, Å.S., Thompson, M., Kroode, F.T., Seher, T. and Bernitsas, N. [2024]. Efficient Vz Noise suppression by seismic polarization analysis of 6C seabed data. Fourth EAGE Marine Acquisition Workshop, Extended Abstracts. https://doi.org/10.3997/2214-4609.202436003

Kumar, A., Hampson, G. and Rayment, T. [2021]. Simultaneous up-down separation and Vz denoise using joint sparsity recovery. 82nd EAGE Annual Conference & Exhibition, Extended Abstracts. https://doi.org/10.3997/2214-4609.202112695

Kumar, R., Amin, Y.I.K., Leake, S., Vassallo, M., Bagaini, C., Sonika, S., Scapin, D. and De Melo, F.X. [2024]. Shear noise attenuation using multidimensional optimum Bayesian weighting. 85th EAGE Annual Conference & Exhibition, Extended Abstracts. https://doi.org/10.3997/2214-4609.2024101142

Masoomzadeh, H., ten Kroode, F., Seher, T., Zuberi, M.A.H., Kritski, A., Westerdahl, H., Pedersen, Å.S., Thompson, M., Bernitsas, N., Behn, P. and Baltz, P. [2024]. Attenuating non-compressional energy on vertical motion sensors using rotational data. Fourth International Meeting for Applied Geoscience & Energy, Expanded Abstracts.

Paffenholz, J., Shurtleff, R., Hays, D. and Docherty, P. [2006a]. Shear wave noise on OBS Vz data – Part I: Evidence from field data. 68th EAGE Annual Conference and Exhibition incorporating SPE EUROPEC 2006, cp-2-00235. https://doi.org/10.3997/2214-4609.201402227

Paffenholz, J., Docherty, P., Shurtleff, R. and Hays, D. [2006b]. Shear wave noise on OBS Vz data – Part II: Elastic modeling of scatterers in the seabed. 68th EAGE Conference and Exhibition incorporating SPE EUROPEC 2006, cp-2-00236. https://doi.org/10.3997/2214-4609.201402228

Pedersen, Å.S., Kritski, A., Westerdahl, H., Thompson, M., David, S., Kroode, F.T., Bernitsas, N., Olivier, A., Behn, P., Baltz, P., Faber, K., Tatarata, A., Barry, R. and Greco, M. [2023]. Towards a next generation ocean bottom node: Incorporating a 6C motion sensor. 84th EAGE Annual Conference & Exhibition, Extended Abstracts. https://doi.org/10.3997/2214-4609.202310476

Peng, C., Huang, R. and Asmerom, B. [2013]. Shear noise attenuation and PZ matching for OBN data with a new scheme of complex wavelet transform. SEG Technical Program, Expanded Abstracts, 4251-4255. https://doi.org/10.1190/segam2013-0634.1

Poole, G., Casasanta, L. and Grion, S. [2012]. Sparse τ-p Z-noise attenuation for ocean-bottom data. SEG Technical Program, Expanded Abstracts, 1-5. https://doi.org/10.1190/segam2012-0679.1

Ren, Y., Yang, C., Degel, T. and Liu, Z. [2020]. Vz noise attenuation using dual-tree complex wavelet transform. SEG Technical Program, Expanded Abstracts, 1830-1834. https://doi.org/10.1190/segam2020-3426428.1

Ren, Y., Yang, C., Seher, T., Hawke, M. and Cho, E. [2022]. Vz denoise and P/Z matching in 3D curvelet domain. SEG Technical Program, Expanded Abstracts, 2862-2866. https://doi.org/10.1190/image2022-3745669.1

Seher, T., Yalcin, G., Roberts, M. and Ren, Y. [2024]. Shear wave noise attenuation in ocean bottom node data using machine learning. 85th EAGE Annual Conference & Exhibition, Extended Abstracts. https://doi.org/10.3997/2214-4609.202410498

Sun, J., Jafargandomi, A. and Holden, J. [2023]. Deep learning-based Vz-noise attenuation for OBS data. SEG Technical Program, Expanded Abstracts, 1480-1484. https://doi.org/10.1190/image2023-3910189.1

Yang, C., Huang, Y., Liu, Z., Sheng, J. and Camarda, E. [2020]. Shear wave noise attenuation and wavefield separation in curvelet domain. SEG Technical Program, Expanded Abstracts, 1805-1809. https://doi.org/10.1190/segam2020-3426418.1

Yu, Z., Kumar, C. and Ahmed, I. [2011]. Ocean bottom seismic noise attenuation using local attribute matching filter. SEG Technical Program, Expanded Abstracts, 3586-3590. https://doi.org/10.1190/1.3627945

Data management transformation to drive subsurface autonomy

Richard Mohan1* and Dr Sumeet Gupta1 explore key components in transforming data management to enable AI-ready subsurface data.

Abstract

The digital transformation of subsurface processes, powered by AI and ML, will enable a deeper understanding of reservoir dynamics and more efficient subsurface decisions. Subsurface autonomy has significant potential to transform the way upstream companies conduct reservoir characterisation, field development planning, well planning, and production management, with limited human intervention.

Realising the full potential of AI in the subsurface domain requires a paradigm shift in current data management practices. This paper explores key components in transforming data management to enable AI-ready subsurface data. It addresses key challenges in facilitating AI-assisted subsurface interpretation, data-driven reservoir modelling, and the creation of subsurface digital twins. The paper highlights the emergence of industry standards such as the Open Subsurface Data Universe (OSDU) in liberating data from proprietary, multivendor petrotechnical applications, ensuring seamless integration across disciplines for better decision-making. Finally, we discuss strategic actions that upstream companies must take to prepare their data to power AI and subsurface autonomy, enabling a more agile, efficient, and sustainable future for the energy industry.

Introduction

The exploration, drilling, reservoir management and production optimisation processes are continuously being transformed by AI and ML, enabling a higher level of automation, improved decision-making and optimised resource recovery. The ability to harness these technologies for the subsurface discipline depends on how the data are managed, contextualised and integrated across the subsurface ecosystem.

Various subsurface data such as seismic surveys, well logs, core samples, fluid properties, reservoir characteristics, and more are often stored across siloed legacy systems and proprietary databases with varying degrees of data quality. This poses a challenge for companies implementing AI-driven insights. For reservoir characterisation, it would be ideal to incorporate all geologic and petrophysical data at the scale at which the data are available. However, computing time, costs, and capabilities all limit our ability to build a data-rich characterisation that can be used for reservoir fluid-flow simulation (Slatt, 2013).

1 University of Petroleum & Energy Studies

* Corresponding author, E-mail: richardm.geo@gmail.com

DOI: 10.3997/1365-2397.fb2024107

Big data analytics comprises the four Vs of data: volume, velocity, variety, and veracity. The complexity of managing the data has increased substantially and is expected to become even more of a deterrent to performing analytics. The strategy for collecting, streaming, storing, transporting, cleansing, and securing the data is as important as the analytic methods (Brule, 2015).

Typical data-related challenges in the upstream industry include data incompleteness, lack of clear data ownership, multiple copies of data scattered across different source systems, data held in numerous spreadsheets, time-consuming manual data loading and validation processes, data integrity issues between systems, and real-time data integration and its associated quality issues (Mohan, 2017).

Subsurface autonomy refers to the seamless execution of integrated subsurface processes and workflows, and the driving of decisions with minimal human intervention, leveraging a subsurface digital twin and AI. The journey to autonomy is a continuous process with the objective of achieving a higher level of automation and integration incrementally from the current state. A subsurface digital twin is a digital representation of a subsurface system that integrates a variety of data, petrotechnical applications, static and dynamic models, and data-driven models, and enables real-time updates to models to mirror the dynamic behaviour of the physical reservoir for adaptive management of reservoir operations. An AI-powered digital twin can assist geoscientists, reservoir engineers, and drilling teams in making faster, more informed decisions, from well planning to optimising field development, enhancing reservoir characterisation and fine-tuning production management. The figure below presents the digital architecture framework representing digital twin and AI/ML solutions to enable autonomy in subsurface and field operations.

Data is the fundamental building block for constructing a subsurface digital twin and achieving subsurface autonomy. Companies must transform their data management practices to ensure subsurface data are standardised, contextualised, integrated, and seamlessly accessible to various subsurface workflows. The sections below elaborate how AI and ML can transform subsurface operations, focusing on the critical role of data management in creating AI-ready datasets that can be used across various subsurface applications. Additionally, the paper delves into the challenges associated with data integration, contextualisation, and governance, and highlights how industry standards like the Open Subsurface Data Universe (OSDU) can unlock the full potential of subsurface autonomy. Though the scope of the paper is limited to the subsurface, the approach is equally applicable to production and facilities autonomy. Full autonomy of the field is achieved when the subsurface, production, and facilities digital twins are seamlessly integrated to realise the full field value.

Data management transformation

AI technologies have the potential to significantly improve decision-making across multiple subsurface domains, from AI-assisted interpretation and predictive reservoir modelling to real-time geosteering during drilling operations.

ML techniques have been used successfully to create surrogates for reservoir simulation models (Amini and Mohaghegh, 2019). They help to accelerate complex reservoir studies that typically require many reservoir simulation runs. However, access to cleaned and standardised reservoir data, which is managed across different databases and applications, remains a challenge for enabling AI/ML workflows.
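The surrogate idea can be sketched in a few lines: a cheap statistical model is fitted to the input/output pairs of simulator runs and then answers queries in microseconds. Everything below, the input variables, the response, and the linear form, is an illustrative assumption standing in for real simulation output; actual studies use full simulator runs and richer models such as Gaussian processes or neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for expensive reservoir simulation runs: inputs are
# (permeability [mD], porosity [frac]); the output is a hypothetical
# cumulative-production response. In practice each row would come from
# one full simulator run.
X = rng.uniform([10.0, 0.10], [500.0, 0.30], size=(200, 2))
y = 0.8 * X[:, 0] + 4000.0 * X[:, 1] + rng.normal(0.0, 5.0, 200)

# Fit a linear surrogate by least squares.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def surrogate(perm, poro):
    """Cheap proxy that replaces a simulator call."""
    return coef[0] * perm + coef[1] * poro + coef[2]

print(round(surrogate(250.0, 0.20), 1))
```

Once trained, the proxy can be evaluated thousands of times inside an optimisation or uncertainty loop at negligible cost.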

According to an IDC survey, data centralisation for easier analysis, data mastering to ensure consistency, data quality improvements, data aggregation, and data governance are identified as top priorities by leading upstream companies (Venkataraman, 2021).

To realise the full potential of AI, the availability of high-quality, structured, and contextualised data is fundamental to ensuring that AI algorithms can generate reliable and actionable insights. Subsurface data from seismic surveys, well logs, core samples, static and dynamic reservoir data, and production histories must be processed, standardised, and integrated efficiently to make it suitable for analysis by AI algorithms, refining the understanding of reservoir behaviour, improving model accuracy, and supporting more efficient decision-making throughout the reservoir life cycle.

Typically, subsurface data is fragmented and siloed in nature. Legacy petrotechnical applications and databases often store seismic data, petrophysical logs, core data, static models, and reservoir simulations in proprietary formats that are difficult to integrate or standardise. These systems often lack interoperability, making it challenging to bring data from different sources, multivendor products, and departments together into a unified platform where AI models can extract meaningful insights.
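As a minimal illustration of the standardisation step described above, the sketch below maps well records exported from two hypothetical source systems, with different field names and units, onto one canonical schema. All system names, fields, and values are invented for the example.

```python
# Records for different wells, as exported from two hypothetical source
# systems with inconsistent field names and units.
system_a = [{"WELL_ID": "W-001", "DEPTH_FT": 8200.0, "GR_API": 45.2}]
system_b = [{"uwi": "W-002", "depth_m": 2100.0, "gamma": 61.7}]

FT_PER_M = 3.28084

def normalise_a(rec):
    # System A reports depth in feet; convert to metres.
    return {"well_id": rec["WELL_ID"],
            "depth_m": rec["DEPTH_FT"] / FT_PER_M,
            "gamma_ray_api": rec["GR_API"],
            "source": "system_a"}

def normalise_b(rec):
    # System B already uses metres but different key names.
    return {"well_id": rec["uwi"],
            "depth_m": rec["depth_m"],
            "gamma_ray_api": rec["gamma"],
            "source": "system_b"}

# One canonical schema that downstream AI/ML workflows can consume,
# with provenance retained in the 'source' field.
unified = [normalise_a(r) for r in system_a] + [normalise_b(r) for r in system_b]
print(len(unified), unified[0]["well_id"])
```

Keeping a per-record source tag preserves lineage, which matters later for the governance and audit steps discussed below.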

To effectively leverage AI and ML for subsurface autonomy, data must be accessible, consistent, and contextualised. As the complexity of subsurface operations increases, data contextualisation becomes essential. Contextualisation refers to understanding the meaning behind raw data and making it relevant to the geological, operational, and economic context.

Traditional data management approaches, which were often siloed and static in nature, need to be transformed into integrated, real-time data ecosystems that seamlessly capture, store, and deliver data to integrated subsurface workflows. Hence, it is vital to move towards a unified data architecture that can handle the increasing volume, variety, and velocity of data generated by advanced subsurface monitoring technologies, enable seamless integration of multiple data sources including real-time data, and provide rich context for interpretation. AI models need to be able to interpret subsurface data in a way that reflects the real-world complexities of geology, reservoir engineering, and operational conditions. This requires the adoption of cloud-based platforms, big data analytics, and machine learning to automate the collection, processing, analysis, and interpretation of vast amounts of complex data in near real-time.

It is essential to take an integrated approach to data management, encompassing data collection, protection, visibility, integration, lineage, metadata management, and data quality. For upstream companies to enable AI-driven subsurface autonomy, several strategic actions are required.

Data quality impact on AI results

AI and ML models rely heavily on high-quality, clean, and complete data. Data quality issues, such as missing or incomplete well logs, noisy seismic data, or errors in core sampling, can lead to inaccurate models and flawed decision-making. A lack of standardised quality controls across different data sets compounds this problem, leading to discrepancies between data sources. Data quality remains a major challenge for many companies seeking to produce sustainable solutions, resulting from a variety of data management issues that need to be addressed. Outliers, missing, duplicate, obsolete, and unstructured data, and multiple sources of disparate data need to be integrated into a single consistent version with contextual information (Sankaran et al., 2020).

Figure 1 Conceptual digital architecture framework representing digital twin and AI/ML solutions.

For AI-driven subsurface solutions to be effective, companies must implement stringent data quality control measures. This involves adopting standardised data formats, verifying the accuracy of data sources, and ensuring that data is consistent across all platforms. Data quality must be managed through robust validation processes, and upstream companies must continuously monitor and audit their data to ensure its integrity. An automated process for identifying, reporting, and tracking data issues, involving all data stakeholders, will enable continuous improvement of data quality.
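An automated quality screen of the kind described might, in its simplest form, look like the following sketch. The field names, thresholds, and issue labels are assumptions chosen for illustration, not industry standards; a production system would draw its rules from an agreed data-quality policy.

```python
import math

# Toy batch of well records exhibiting the issue types named in the text:
# duplicates, missing values, and physically implausible values.
records = [
    {"well_id": "W-001", "porosity": 0.21},
    {"well_id": "W-001", "porosity": 0.21},   # duplicate
    {"well_id": "W-002", "porosity": None},   # missing value
    {"well_id": "W-003", "porosity": 1.8},    # implausible porosity
    {"well_id": "W-004", "porosity": 0.18},
]

def audit(recs):
    """Return (index, issue) pairs for records failing basic checks."""
    issues, seen = [], set()
    for i, r in enumerate(recs):
        key = (r["well_id"], r["porosity"])
        if key in seen:
            issues.append((i, "duplicate"))
        seen.add(key)
        p = r["porosity"]
        if p is None or (isinstance(p, float) and math.isnan(p)):
            issues.append((i, "missing"))
        elif not 0.0 <= p <= 0.5:   # plausible porosity range (assumed)
            issues.append((i, "out_of_range"))
    return issues

print(audit(records))
```

Running such a screen on every data load, and routing the resulting issue list to the responsible data stewards, is one concrete form the continuous monitoring described above can take.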

Data integration and interoperability

The data integration process helps subsurface engineers create more realistic and consistent models that capture the complexity and heterogeneity of the reservoir by combining and harmonising data from different sources. Data integration enables us to identify data gaps, inconsistencies, and errors, and to improve data quality and reliability. Typically, seismic data, geological interpretations, well logs, production data, and reservoir simulation results are stored in silos in proprietary formats. This fragmentation makes it difficult to unify the variety of data and hinders AI algorithms from analysing data holistically. Data interoperability is key to unifying data across the geoscience and engineering domains, ensuring that all stakeholders, from geophysicists to reservoir engineers, have access to a single, up-to-date source of truth.

Achieving data integration requires not only the adoption of standardised data formats but also the development of robust data architectures that facilitate seamless access to data from multiple sources. Open data foundations such as the Open Subsurface Data Universe (OSDU) are crucial in breaking down these silos by offering an open, cloud-based platform for storing and sharing data across the subsurface value chain.

By enabling interoperability between different applications and vendors, OSDU allows for a more integrated, AI-ready data environment, with relevant measures by operating companies to ensure data quality. OSDU enables data liberation by making it easier to access and share data across multi-vendor systems, and it ensures that data is structured in a way that is compatible with AI models. By providing a unified data structure, OSDU allows AI and ML algorithms to easily integrate and analyse diverse data sets, leading to more accurate reservoir models, better field development plans, and optimised production strategies. OSDU, along with other standards, can accelerate the development of AI-powered applications, for example predictive modelling of reservoir behaviour, by enabling seamless access to high-quality, well-contextualised subsurface data.
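To make the idea of programmatic, standardised access concrete, the sketch below assembles the payload for a record search against an OSDU-style search service. The host placeholder, partition identifier, and `kind` string are illustrative assumptions based on the general shape of the platform, not a definitive OSDU contract; consult the OSDU Data Platform documentation for the actual API.

```python
import json

# Illustrative only: the shape of a record-search request against an
# OSDU-style search API. No network call is made here; the function just
# builds the request so the structure is visible.
def build_search_request(partition_id, kind, query, limit=10):
    return {
        "url": "https://{host}/api/search/v2/query",   # placeholder host
        "headers": {"data-partition-id": partition_id,
                    "Content-Type": "application/json"},
        "body": {"kind": kind, "query": query, "limit": limit},
    }

req = build_search_request(
    partition_id="opendes",                              # assumed partition
    kind="osdu:wks:master-data--Wellbore:1.0.0",         # assumed kind
    query='data.FacilityName:"W-001"',
)
print(json.dumps(req["body"]))
```

The point of the sketch is that every vendor application issues the same structured query against the same schema, which is what makes the data environment AI-ready.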

Data contextualisation

Data contextualisation is essential for AI models to interpret subsurface data correctly. Without understanding the geological, engineering, and operational context in which data was collected, AI models risk making inaccurate predictions and missing the full complexity of the subsurface. Data contextualisation requires a sophisticated understanding of the relationships between various types of subsurface data. This includes linking geological interpretations with well log data, core analysis with fluid properties, and seismic data with reservoir models. To achieve contextualised data, upstream companies must ensure that their data systems can capture and store this metadata, creating a rich, integrated environment where AI algorithms can access and interpret data with the proper context.
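One lightweight way to carry such context alongside raw measurements is to bind it to each record explicitly, as in the sketch below; the class and field names and all values are illustrative assumptions, not a schema from the text.

```python
from dataclasses import dataclass

# Sketch of attaching geological and operational context to a raw
# measurement so downstream models see more than a bare number.
@dataclass
class Context:
    formation: str          # geological context
    acquisition_tool: str   # operational context
    units: str

@dataclass
class LogSample:
    well_id: str
    depth_m: float
    value: float
    context: Context

sample = LogSample(
    well_id="W-001",
    depth_m=2450.0,
    value=62.3,
    context=Context(formation="Upper Shale",
                    acquisition_tool="LWD-GR",
                    units="API"),
)

# An AI pipeline can now filter or group by context rather than guessing
# what a value of 62.3 means.
print(sample.context.formation, sample.context.units)
```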

Data and AI governance

Data is at the core of artificial intelligence and machine learning; it enables AI/ML models to learn and evolve, allowing them to solve classification, prediction, and anomaly detection tasks. As AI technologies become increasingly embedded in subsurface operations, data governance must evolve to address new challenges. AI/ML models depend on high-quality data to perform well. It is critical to understand how data are collected, transmitted, stored, processed, and exploited by AI/ML-powered systems.

In the AI context, data governance refers to the management of data quality, security, privacy, and access rights, ensuring that AI algorithms are using reliable and trustworthy data. Since AI systems are often decision-making tools, it is essential that the data feeding these models is accurate, traceable, and compliant with regulatory standards. Mature data management companies have multiple stakeholders, including data science teams, IT, and line-of-business managers, who work collaboratively to ensure positive outcomes for data usage (Venkataraman, 2021). Companies must implement robust governance practices covering the life cycle of the AI model, involving key data stakeholders (data owners, data stewards, data custodians, and data administrators) in collecting, preparing, and managing the data across the assets, alongside the AI development teams that develop, train, and deploy AI models and solutions. Data stewards ensure that data standards and policies are scaled, upheld, and kept up to date. Data assets are categorised into raw data, pre-processed data, reduced data, labelled data, augmented data, and validation data across the AI life cycle, and data governance should be implemented across all of these categories. Figure 2 presents a conceptual integrated data architecture.

Figure 2 Integrated data architecture (Mohan, 2017).
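The categorisation of data assets across the AI life cycle can be made operational with even a very simple registry, as sketched below. Only the category names come from the text; the asset identifiers, owners, and enforcement rule are illustrative assumptions.

```python
# Toy registry that tags data assets with their AI-life-cycle category so
# governance rules can be applied per category.
CATEGORIES = {"raw", "pre-processed", "reduced", "labelled",
              "augmented", "validation"}

registry = {}

def register_asset(asset_id, category, owner):
    """Refuse any asset whose category is outside the governed set."""
    if category not in CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    registry[asset_id] = {"category": category, "owner": owner}

register_asset("seis_2024_block7", "raw", "data-steward-geo")
register_asset("logs_cleaned_v2", "pre-processed", "data-steward-petro")

# Governance query: list the assets currently held in the 'raw' category.
raw_assets = [a for a, m in registry.items() if m["category"] == "raw"]
print(raw_assets)
```

Even this minimal structure gives auditors two things the text calls for: a named owner for every asset and a guarantee that every asset sits in exactly one governed category.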

Conclusion

Achieving subsurface autonomy through digitalisation, AI, and ML requires a transformation of how data is managed, integrated, and contextualised. Upstream companies need to prioritise data management as a critical foundation for successful digital transformation and for maximising the value of technology and data. By addressing the challenges of data fragmentation, quality, and governance, upstream companies can create an AI-ready data environment that supports smarter decision-making, faster well planning, and more efficient reservoir management and field development plans. Industry standards such as OSDU play a critical role in enabling the seamless integration and sharing of subsurface data, while AI technologies have the potential to revolutionise how the industry explores, develops, and manages oil and gas reservoirs. To fully realise the benefits of AI, upstream companies must take strategic actions that ensure their data is of high quality, structured, accessible, and ready for AI-driven analysis, with the required storage and compute infrastructure. This includes an effective transition to next-generation data platforms, data integration, data liberation, metadata management, data governance, and close collaboration across multiple business disciplines.

References

Amini, S. and Mohaghegh, S. [2019]. Application of Machine Learning and Artificial Intelligence in Proxy Modeling for Fluid Flow in Porous Media. Fluids, 4(3), 126. https://doi.org/10.3390/fluids4030126.

Brulé, M. R. [2015]. The Data Reservoir: How Big Data Technologies Advance Data Management and Analytics in E&P. SPE Digital Energy Conference and Exhibition, held in The Woodlands, Texas, USA, Expanded Abstracts. SPE-173445-MS.

Melo, D. [2019]. Data management evolves, but challenges persist. Data Science and Digital Engineering in Upstream Oil and Gas.

Mohan, R. [2017]. Upstream Data Architecture and Data Governance Framework for Efficient Integrated Upstream Workflows and Operations. SPE ADIPEC, held in Abu Dhabi, UAE. SPE-185440188962-MS.

Perrons, R.K. and Jensen, J.W. [2015]. Data as an Asset: What the Oil and Gas Sector Can Learn From Other Industries about Big Data. Energy Policy, 81, 117–121.

Popa, A. and Cassidy, S. [2012]. Implementing i-Field-Integrated Solutions for Reservoir Management: A San Joaquin Valley Case Study. SPE Economics & Management, 4(1), 58-65. SPE-143950-PA.

Slatt, R.M. [2013]. Stratigraphic Reservoir Characterization for Petroleum Geologists, Geophysicists, and Engineers. Developments in Petroleum Science, 61, 229-281.

Sankaran, S., Matringe, S., Sidahmed, M., Saputelli, L., Wen, X.H., Popa, A. and Dursun, S. [2020]. Data Analytics in Reservoir Engineering. Richardson, Texas: Society of Petroleum Engineers.

Venkataraman, A. [2021]. Data Management: Establishing a foundation for digital transformation in the upstream energy industry. IHS Markit, IDC White Paper.

The bees of processing intelligence

Neil Hodgson1*, Karyna Rodriguez1, Lauren Found1 and Helen Debenham1 reflect on how geoscientists are the bees of processing intelligence, cross-fertilising solution-strategies and benefiting from the diversity of practitioners they interact with as they explore the diversity of the world’s geology.

Introduction

Julius Caesar told us that ‘Experience is the best teacher’. Searcher has put this maxim at the centre of its business model. Whilst we outsource processing to the best technology providers on the planet, seeking the ‘right technology for the geology’, we take Caesar’s message to heart and learn from our experience, using this to drive imaging.

Modern seismic processing is so fast and so algorithm-dependent that one can be forgiven for believing it could be a fully automated process in which the experience and skill of human operatives add little value. Although AI may one day drive us that way, in the practical world of today we at Searcher beg to differ. Indeed, our strategy is to focus on learning, not machine learning but human learning, using the most complex machine in the universe, the human brain, to leverage our pan-global exposure to seismic processing problem resolution.

Let’s look at a simple example. In Namibia’s Orange Basin, Searcher undertook an experiment with a difference, which began in 2022 with the acquisition of the 1800 km2 ‘Bridge’ survey in PEL 85, operated by Rhino Resources (Figure 1).

One of the biggest and most efficient acquisition vessels in the world, Shearwater’s Empress, acquired the data using 12 conventional 8.1 km-long streamers, spaced 150 m apart in a ‘wide-tow’ configuration. The data were acquired in an east-west orientation, approximating the dip direction, as conventional wisdom holds that this takes some of the stress off the migration in putting data in the right place.

The data were processed using a conventional modern processing sequence in Shearwater’s Reveal software. Even the fast-track data that came straight off the vessel in real time allowed us to image the prospectivity under the Gravity Driven Fold and Thrust Belt (GDFTB) in much greater detail than the previous generation of 2D ever had, revealing a series of large stratigraphic and four-way trapped plays below the GDFTB. On the fast-track data, available one month after the last shot, we were able to map AVO characteristics that significantly reduced target risk. Final mapping on the pre-stack depth migrated dataset then constrained the closures in depth. The final results of the acquisition and processing are stunning (Figures 2 and 3, right-hand side).

The second part of our experiment was undertaken in 2023 with the acquisition of the 3700 km2 ‘Link’ survey, which covered part of the adjacent PEL 83, crossed into PEL 85, and partly overlapped the Bridge survey deep in PEL 85 (Figure 1). PEL 83 is operated by Galp Energia, which was drilling the Mopane discovery wells very close to this survey at the time. The equipment and parameters used for the acquisition were exactly the same as for the Bridge survey; however, to make the survey more efficient, it was acquired on an almost north-south azimuth, orthogonal to the Bridge survey.

The Link survey was processed using the same software, the same algorithms and, crucially, the same processing team that had processed Bridge. Comparing the two surveys, which abut each other in Figures 2 and 3, the differences are clear. The Link PSTM data, acquired in strike orientation, shows better multiple attenuation and a better velocity application in the migration, despite the fact that Bridge was acquired in dip orientation (clearly unimportant to migration quality in this case) and processed to PSDM.

1 Searcher Seismic

* Corresponding author, E-mail: n.hodgson@searcherseismic.com

DOI: 10.3997/1365-2397.fb2024108

Figure 1 Bridge (2022) and Link (2023) 3D seismic images in Namibia’s Orange Basin. Yellow lines are the sections shown in Figures 2 and 3.

The differences are subtle, yet the PSTM Link survey already improves on the final PSDM from Bridge. This is not a mistake but unavoidable: since the acquisition equipment, techniques, and conditions, and the processing algorithms, methodology, and software were exactly the same in each case, the only difference between the two, and therefore the most important one, is the experience of the processing team, learning from Bridge to forge the Link. Learning, we see, can be more valuable than tech.

The primary learnings were to remove the linear noise from far offsets more effectively, allowing for cleaner sections and better velocity discrimination, and, most importantly, to create stable angle stacks out to 50 degrees to illuminate the AVO response.

In recent years, Searcher has processed data with SLB, CGG (now Viridien), Shearwater, PGS, DUG, RTS and EIF. We adopt a technology-agnostic approach that allows us to use the right acquisition technology for the geology and the right processing technology to put the data in the right place. Our acquisition and processing projects have spanned deepwater basins, thrust belts, salt basins, extensional basins and more, and it is in this diversity that the power lies; the number and variety of problems match the diversity and number of solutions. In this world of big data solutions, the biggest factor is the variety of the problems.

Processors find inspiration for their craft in all kinds of places. Dana Plato appeared in The Exorcist II, and though her classical Greek namesake was less famous for de-ghosting, he surely had some useful tips for processors when it comes to learning. Plato considered that the most effective kind of education is to ‘play amongst lovely things’, such as Namibian 3Ds, where multi-billion-barrel discoveries are being made on a frequent basis. To that end, Galp is busy drilling Mopane appraisal wells on PEL 83 as we write in mid Q4, and Rhino will operate a well later in Q4 on its PEL 85 acreage, where, if there is any justice in exploration, they too will be making a very significant discovery in 2024. Plato also suggested that ‘experience gives a soul to the universe’, which is the key to Searcher’s outsourced processing strategy. More importantly, we become the bees of processing intelligence, cross-fertilising solution-strategies and benefiting from the diversity of experts we interact with as we explore the diversity of the world’s geology. Whilst Caesar’s words ring true and drive our processing strategy, one has to remember Oscar Wilde, who said that ‘Experience is the hardest kind of teacher. It gives you the test first and the lesson afterwards.’

Figure 2 Line A: RHS Bridge 2022 3D Final PSDM. LHS Link 2023 3D fast-track PSTM.

Figure 3 Line B: RHS Bridge 2022 3D Final PSDM. LHS Link 2023 3D fast-track PSTM.


CALENDAR OF EVENTS

3-5 EAGE Workshop on Geothermal Energy in Latin America (part of the First EAGE Symposium & Exhibition on Geosciences for New Energies in America) www.eage.org

3-5 EAGE Workshop on Water Footprint (part of the First EAGE Symposium & Exhibition on Geosciences for New Energies in America) www.eage.org

7-11 Sep Near Surface Geoscience Conference and Exhibition www.eagensg.org

7-12 Sep 32nd International Meeting on Organic Geochemistry (IMOG) www.imogconference.org

9-11 Sep Second EAGE Conference and Exhibition on Guyana-Suriname Basin www.eage.org

14-18 Sep Seventh International Conference on Fault and Top Seals www.eage.org

16-18 Sep The Middle East Oil, Gas and Geosciences Show (MEOS GEO) www.meos-geo.com

22-24 Sep Sixth EAGE Borehole Geology Workshop www.eage.org

23-24 Sep EAGE Workshop on Upstream Excellence: Pioneering Success in Oil and Gas Projects www.eage.org

24-25 Sep 4th EAGE/SUT Workshop on Integrated Site Characterization for Offshore Renewable Energy www.eage.org

29 Sep-1 Oct Second AAPG/EAGE Mediterranean and North African Conference (MEDiNA) medinace.aapg.org

29 Sep-1 Oct Eighth EAGE Borehole Geophysics Workshop www.eage.org

2025 First EAGE Conference on Challenges and Opportunities in the Future of Mineral Exploration www.eage.org

6-8 Oct Second EAGE Data Processing Workshop www.eage.org

6-8 Oct Ninth EAGE High Performance Computing Workshop www.eage.org

Oct EAGE/AAPG/SEG CCUS Workshop www.eage.org

2-5 Nov EAGE/AAPG Workshop on Tectonostratigraphy of the Arabian Plate: Structural Evolution of the Arabian Basins www.eage.org

3-7 Nov 6th EAGE Global Energy Transition Conference & Exhibition (GET 2025) www.eageget.org

Nov Seventh EAGE Rock Physics Workshop www.eage.org

Nov Third EAGE Workshop on Geothermal Energy in Latin America www.eage.org

12-14 Nov EAGE/Aqua Foundation Third Indian Near Surface Geophysics Conference & Exhibition www.eage.org

24-26 Nov Third EAGE Seabed Seismic Today Workshop www.eage.org

1-3 Dec Fifth EAGE Eastern Mediterranean Workshop www.eage.org
