EAGE NEWS How your membership is aiding Energy Transition
CROSSTALK Fracking and the US election
TECHNICAL ARTICLE Determining time-distance rules for Vibroseis acquisition
CHAIR EDITORIAL BOARD
Clément Kostov (cvkostov@icloud.com)
EDITOR
Damian Arnold (arnolddamian@googlemail.com)
MEMBERS, EDITORIAL BOARD
• Lodve Berre, Norwegian University of Science and Technology (lodve.berre@ntnu.no)
• Philippe Caprioli, SLB (caprioli0@slb.com)
• Satinder Chopra, SamiGeo (satinder.chopra@samigeo.com)
• Anthony Day, PGS (anthony.day@pgs.com)
• Peter Dromgoole, Retired Geophysicist (peterdromgoole@gmail.com)
• Kara English, University College Dublin (kara.english@ucd.ie)
• Stephen Hallinan, Viridien (Stephen.Hallinan@viridiengroup.com)
• Hamidreza Hamdi, University of Calgary (hhamdi@ucalgary.ca)
Saskia Nota (firstbreakproduction@eage.org)
Ivana Geurts (firstbreakproduction@eage.org)
ADVERTISING INQUIRIES corporaterelations@eage.org
EAGE EUROPE OFFICE
Kosterijland 48
3981 AJ Bunnik
The Netherlands
• +31 88 995 5055
• eage@eage.org
• www.eage.org
EAGE MIDDLE EAST OFFICE
EAGE Middle East FZ-LLC
Dubai Knowledge Village
PO Box 501711
Dubai, United Arab Emirates
• +971 4 369 3897
• middle_east@eage.org
• www.eage.org
EAGE ASIA PACIFIC OFFICE
EAGE Asia Pacific Sdn. Bhd.
UOA Centre Office Suite 19-15-3A
No. 19, Jalan Pinang
50450 Kuala Lumpur
Malaysia
• +60 3 272 201 40
• asiapacific@eage.org
• www.eage.org
EAGE LATIN AMERICA OFFICE
EAGE Americas SAS
Av. 19 #114-65 - Office 205
Bogotá, Colombia
• +57 310 8610709
• +57 (601) 4232948
• americas@eage.org
• www.eage.org
EAGE MEMBERS’ CHANGE OF ADDRESS
Update via your MyEAGE account, or contact the EAGE Membership Dept at membership@eage.org
FIRST BREAK ON THE WEB
www.firstbreak.org
ISSN 0263-5046 (print) / ISSN 1365-2397 (online)
Editorial Contents
29 A practical method to determine time-distance rules for efficient Vibroseis acquisition
Tim Dean, Richard Barnwell and Damien Barry
37 A proposed standard seismic frequency nomenclature for geophysical site investigation surveys in the offshore energy sector
Andy W. Hill, Gary Nicol, and Mick R. Cook
Special Topic: Marine Acquisition
43 Seismic rock properties and their significance for the interpretation of seismic amplitude variation with angle (AVA), offshore Liberia and Sierra Leone
David Went, Jon Rogers and Felicia Winter
49 Short streamers and sparse OBN acquisition: Potential for CCS monitoring?
S. David, F. Ten Kroode, E. Cho, M.A.H. Zuberi, G. Stock, S. Baldock, J. Mispel, H. Westerdahl, M. Thompson and Å. Sjøen Pedersen
59 Industry-first deployment of simultaneous source acquisition using a Dispersed Source Array with Tuned Pulse Source and conventional airgun for a shallow water seismic survey offshore Malaysia
Maxime Benaniba, Jeremy Aznar, Stephane Laroche, Philippe Herrmann, Shuki Ronen, Julien Large, Shamsul B Shukri, Tasvir Kaur Rajasansi, Sukhveender Singh, Law Chung Teck, Faizan Akasyah, Chetan Anand Karri, Ewan Neill and Craig Walker
67 OBN for the energy transition era
Robert Basilli
71 Drone-borne marine magnetic surveys for UXO and cables/pipelines tracing
Alexey Dobrovolskiy
77 Transforming seismic surveys with OBN and autonomous technology
Lerish Boshoff
81 Trusting the process
Neil Hodgson, Lauren Found and Karyna Rodriguez
85 CO2 storage capacity classification and compliance
Ruud Weijermars
94 Calendar
cover: An Ocean Infinity vessel, A78-01, which offers the latest advances in OBN technology. See page 67.
European Association of Geoscientists & Engineers Board 2024-2025
Near Surface Geoscience Circle
Andreas Aspmo Pfaffhuber Chair
Florina Tuluca Vice-Chair
Esther Bloem Immediate Past Chair
Micki Allen Contact Officer EEGS/North America
Hongzhu Cai Liaison China
Deyan Draganov Technical Programme Officer
Eduardo Rodrigues Liaison First Break
Hamdan Ali Hamdan Liaison Middle East
Vladimir Ignatev Liaison CIS / North America
Musa Manzi Liaison Africa
Myrto Papadopoulou Young Professional Liaison
Catherine Truffert Industry Liaison
Mark Vardy Editor-in-Chief Near Surface Geophysics
Jonathan Redfern Editor-in-Chief Petroleum Geoscience
Xavier Troussaut EAGE Observer at SPE-OGRC
Robert Tugume Member
Timothy Tylor-Jones Committee Member
Anke Wendt Member
Martin Widmaier Technical Programme Officer
Sustainable Energy Circle
Carla Martín-Clavé Chair
Giovanni Sosio Vice-Chair
SUBSCRIPTIONS
First Break is published monthly. It is free to EAGE members. The membership fee of EAGE is € 80.00 a year including First Break, EarthDoc (EAGE’s geoscience database), Learning Geoscience (EAGE’s Education website) and online access to a scientific journal.
Companies can subscribe to First Break via an institutional subscription. Every subscription includes a monthly hard copy and online access to the full First Break archive for the requested number of online users.
Orders for current subscriptions and back issues should be sent to First Break B.V., Journal Subscriptions, Kosterijland 48, 3981 AJ Bunnik, The Netherlands. Tel: +31 (0)88 9955055, E-mail: subscriptions@eage.org, www.firstbreak.org.
First Break is published by First Break B.V., The Netherlands. However, responsibility for the opinions given and the statements made rests with the authors.
All rights reserved. First Break or any part thereof may not be reproduced, stored in a retrieval system, or transcribed in any form or by any means, electronically or mechanically, including photocopying and recording, without the prior written permission of the publisher.
PAPER
The publisher’s policy is to use acid-free permanent paper (TCF), to the draft standard ISO/DIS/9706, made from sustainable forests using chlorine-free pulp (Nordic-Swan standard).
Sanjeev Rajput Vice-President
Laura Valentina Socco President
Martin Widmaier Technical Programme Officer
Andreas Aspmo Pfaffhuber Chair Near Surface Geoscience Circle
Maren Kleemeyer Education Officer
Yohaney Gomez Galarza Chair Oil & Gas Geoscience Circle
Carla Martín-Clavé Chair Sustainable Energy Circle
Diego Rovetta Membership and Cooperation Officer
Peter Rowbotham Publications Officer
Christian Henke Secretary-Treasurer
Why your membership counts in progressing energy transition
As geoscientists and engineers, EAGE members are at the forefront of transformative change in the energy sector, playing a pivotal role in addressing growing energy demands, diversifying energy sources, and developing more efficient and cleaner methods of energy production. Our members are catalysts of change – essential to the energy transition, driving innovation and sustainability in a critical sector.
Within the Association, the journey towards a more sustainable energy future celebrated a milestone in 2019 when a community dedicated to Decarbonisation & Energy Transition was established to offer a permanent space for members to connect. This quickly became a reference point for everyone at EAGE interested in knowing more about relevant projects, ideas and career opportunities.
As the Community grew and its scope expanded, it branched out into five more specialised groups earlier this year. These continue the important work in various areas, each focusing on specific aspects of the challenge: Carbon Capture and Storage, Geothermal Energy, Hydrogen and Energy Storage, Critical Minerals, and Wind Energy. This evolution reflects the dynamic nature of our professional association and our commitment to adapting and empowering innovation through multi-disciplinary connections.
Matthias Imhof, chair of the Technical Community on Carbon Capture and Storage, says: ‘The energy transition operates with new entrants, stakeholders, and market participants. While skills and workflows may be similar to the ones of the traditional O&G business, they require adaptation to the novel value chains and demands. Our tasks involve training a fresh workforce, integrating skilled experts from oil, gas, and related industries, and creating practices. To facilitate deployment at scale, our communities play a crucial role in networking, collaboration, sharing, and learning.’
EAGE is not new to change; we thrive on it. Our network represents a wealth of knowledge that can significantly boost each member’s journey. Through opportunities to present and distribute their work to an international audience, share and gain knowledge, discuss ideas, and find new resources and colleagues, our members are continually advancing their careers and contributing to the broader goals of the geoscientific community. For example, Sanket Bhattacharya, co-chair of the GET 2024 Offshore Wind Energy Conference, notes: ‘EAGE GET this November will play a significant role in bringing minds together for achieving a unified global success.’
For Gehrig Schultz, co-chair of the GET 2024 Geothermal Energy Conference, EAGE brings together people from different sectors and ‘we start having people talking between different technical silos. With that interaction, we can surpass barriers faster.’
The energy transition is a collective effort, and every member’s contribution is vital. By renewing your membership, you are committing to continued professional development and joining a global community dedicated to making a difference.
Join us in 2025 as we continue to drive innovation, share knowledge, and support each other in all the fields covered by the Association.
Through opportunities to share and gain knowledge, EAGE members drive innovation and sustainability.
Latest short courses added to Education Catalogue
We are pleased to announce the expansion of our Short Course Catalogue with the addition of three new courses aimed at enhancing members’ expertise in key areas of geoscience: data visualisation, seismic processing, and compressive sensing. The courses, developed by leading experts in their respective fields, offer a combination of theoretical foundations and practical applications, tailored for geoscientists seeking to expand their skill sets and keep up with industry advances.
The new offering, ‘Data visualisation principles for scientists’, is designed to equip participants with the knowledge to effectively communicate complex data through clear, impactful visual representations. The course, led by Dr Steve Horne, covers the core principles of data visualisation, including appropriate chart selection, colour usage, and design strategies that cater to human visual perception. With a blend of historical context, scientific insights, and practical exercises, participants will learn how to identify and avoid common visualisation pitfalls and ensure their presentations are both accurate and engaging. This course should be particularly beneficial for anyone working with scientific data and is delivered in both online and classroom formats.
‘Seismic multiple processing techniques: concepts, applications, and trends’, taught by Dr Clément Kostov, focuses on the complex world of seismic data processing, specifically addressing the challenges of dealing with multiples in seismic datasets. Participants will be taken through key concepts such as multiple prediction, adaptive subtraction, and redatuming. The course includes case studies from land and marine environments, as well as a review of the latest research and technological developments in seismic multiple processing. This intermediate-level course is ideal for geoscientists working with surface seismic data who want to deepen their understanding of how multiples affect velocity model-building and imaging workflows. Both online and classroom versions are available to accommodate various learning preferences. The first delivery of this course is scheduled for 20-21 November 2024.
‘Compressive sensing: explained and challenged’, presented by Jan de Bruin, offers a comprehensive overview of the increasingly relevant technique of compressive sensing (CS) in seismic acquisition and processing. Without diving into the complex mathematical theory, the course provides participants with the necessary understanding to make informed decisions about when and how to apply CS in seismic projects. With a focus on real-world applications, participants will explore the benefits and limitations of CS, ensuring they gain a balanced view of this innovative technology. The course is available in a classroom format and is highly recommended for those involved in seismic survey design or geophysical data acquisition.
These new courses reflect EAGE’s commitment to providing members with training opportunities that meet the evolving demands of the geoscience industry. Whether your interest lies in improving data presentation, mastering seismic processing techniques, or understanding the potential of compressive sensing, these courses offer valuable tools for professional growth. With the flexibility of online and classroom options, geoscientists at all stages of their careers can now enhance their expertise and apply new knowledge to their work.
Learn more at learninggeoscience.org.
4 NOV: SHORT COURSES AT GET 2024 (IN-PERSON, 1 DAY)
• RESERVOIR ENGINEERING OF GEOTHERMAL ENERGY PRODUCTION, BY D. VOSKOV
• AN INTRODUCTION TO OFFSHORE WIND, BY J. GODTSCHALK
• CO2 STORAGE PROJECT DESIGN AND OPTIMIZATION (SALINE AQUIFERS), BY P. RINGROSE
5-8 NOV: BOREHOLE SEISMIC FUNDAMENTALS AND INTRODUCTION TO ADVANCED TECHNIQUES, BY A. CAMPBELL (INTERACTIVE ONLINE SHORT COURSE, 4 HRS/DAY, 6 MODULES)
8 NOV: SHORT COURSES AT GET 2024 (IN-PERSON, 1 DAY)
• EXPLORATION OF SUBSURFACE NATURAL GEOLOGIC HYDROGEN AND STIMULATION FOR ITS ENHANCED PRODUCTION, BY D. STRĄPOĆ
• GEOPHYSICAL MONITORING OF CO2 STORAGE, BY M. LANDRØ
• GEOSCIENCE COMMUNICATION AND PUBLIC ENGAGEMENT, BY I. STEWART
12 NOV - 12 DEC: RESERVOIR ENGINEERING OF GEOTHERMAL ENERGY PRODUCTION, BY D. VOSKOV (EXTENSIVE ONLINE COURSE*, 3 LIVE WEBINARS OF 2 HRS)
20-21 NOV: SEISMIC MULTIPLE PROCESSING TECHNIQUES: CONCEPTS, APPLICATIONS, AND TRENDS, BY C. KOSTOV (INTERACTIVE ONLINE SHORT COURSE, 4 HRS/DAY)
* EXTENSIVE SELF-PACED MATERIALS AND INTERACTIVE SESSIONS WITH THE INSTRUCTORS: CHECK SCHEDULE OF EACH COURSE FOR DATES AND TIMES OF LIVE SESSIONS. FOR THE FULL CALENDAR, MORE INFORMATION AND REGISTRATION PLEASE VISIT WWW.EAGE.ORG AND WWW.LEARNINGGEOSCIENCE.ORG.
Alternative petroleum systems of NW Europe discussed at Annual dedicated session
Session conveners Jean-Jacques Biteau (TotalEnergies, retired) and Axel Wenke (Neptune Energy) report on the dedicated session ‘Anything but Kimmeridge! Alternative Petroleum Systems of North West Europe’ held at the 2024 EAGE Annual Meeting in Oslo.
The session highlighted source rocks other than the classical and major Kimmeridge Clay Formation (KCF) and its BCU-related reservoirs, the most prolific system in the North Sea provinces in terms of generated, expelled and trapped hydrocarbon volumes.
Central Graben alternatives
David Gardiner introduced the general framework of generative systems in the North Sea Central Graben. Although a great proportion of hydrocarbon fields in this area are correlated to the predominantly dysoxic to anoxic marine shales of Upper Jurassic age, the presence of these organic-rich intervals is not ubiquitous, owing to non-deposition, subsequent erosion, or simply because they sit at shallower stratigraphic levels than neighbouring discoveries, with no viable migration mechanism. As proven generative sections, Gardiner cited Middle Jurassic coals in the Danish Søgne Graben, Upper Permian Kupferschiefer sapropelic carbonates in Denmark, the Zechstein of the Mid North Sea High, as well as older Westphalian gas-prone coals and marine Namurian shales.
He concluded from analytical geochemical data and an understanding of the Central Graben regional geology that there is evidence for alternative source rocks to the Kimmeridge Clay Fm (and equivalents) outside the main kitchens: localised lacustrine to restricted marine Jurassic source rocks may be present throughout the platform in mini-basins caused by salt withdrawal adjacent to salt diapirs. In addition, higher-maturity fluids and Paleozoic age-diagnostic biomarkers indicate that deeper source rocks may underlie the Central Graben and environs; these are generally associated with deep listric faults, which provide a vertical migration route for high-maturity oil and gas.
Paleozoic generative systems
Barry Bennett cited the Southern North Sea as a prime example where exploration has mainly targeted gas fields sourced from Carboniferous-age deposits. Recent findings along the Mid North Sea High, such as the Ossian Darach and Pensacola discoveries and related prospects, have stimulated further interest in the area. He explained that biomarkers can help differentiate the KCF from older sections. For that purpose, the saturated and aromatic hydrocarbon compositions of potential source rock samples and oils from the Palaeozoic-related petroleum systems were analysed by gas chromatography-mass spectrometry (GC-MS). The sample suite consisted of source rock samples from the Devonian, Carboniferous and Permian, along with samples from the Jurassic KCF for reference.
A suite of oils from onshore and offshore well locations in the Southern North Sea basin was analysed by GC-MS for classical molecular markers. The aim of the investigation was to perform oil-source correlation and establish the use of age-diagnostic markers for assigning oils to the various source rock units. Aromatic steroid hydrocarbon compositions carry age-diagnostic information that primarily resolves Palaeozoic from Mesozoic and younger source contributions. Meanwhile, aromatic 8,14-secohopanes (and benzohopanes), which reflect the increased carbonate content (e.g. Jurassic Marl; Athena) and restricted conditions associated with the Zechstein, provide markers to recognise Zechstein contributions. The absence of aromatic 8,14-secohopanes is a compositional feature assigned to clastic-sourced oils from the Carboniferous and the Jurassic KCF.
Haltenbanken area example
Based on a biomarker-specific case study of a selected field in the Wintershall Dea portfolio, Thorsten Garlichs discussed the potential of the Åre Fm and associated humic and coaly materials to provide gas contributions in the Haltenbanken area, while oils were mainly generated by the Spekk Fm (KCF equivalent).
Early-Mid Jurassic in North Sea
The study presented by Kiara Gomez from the University of Texas in Austin was mainly focused on representative Jurassic stratigraphic sections of a selected zone of the Viking Corridor (VC) and revealed a high-resolution, multi-proxy approach to understanding the evolution of a sub-boreal locality within the VC during the Early-Middle Jurassic.
TOC and organic carbon isotope composition (δ13Corg) data point to potential alternative source rocks within the Toarcian and the Aalenian-Bajocian (Middle Jurassic). High Hg content documented within the study area suggests that the intervals associated with the Early Jurassic Karoo-Ferrar Large Igneous Province and the Middle Jurassic North Sea doming uplift/eruption are the likely sources of mercury.
‘Anything but Kimmeridge!’ continues a successful series of dedicated sessions held at the EAGE Annual since 2021.
The Alien: Upper Rhine graben
Far away from the North Sea examples, Johannes Böcker from Neptune Energy addressed the Upper Rhine Graben (URG) area, which is at a high level of exploration maturity. An extended URG study of four oil families (Jurassic and Tertiary source rocks) and their corresponding migration systems emphasises the role of the Liassic shales as the main source interval, as already demonstrated in Alsace and the Paris Basin, among others. He concluded with a comprehensive overview of the distribution of oil families in the URG, the evaluation of source quality, and migration pathways.
Cretaceous and Paleogene deepwater offshore Norway
Ivy Becker from Equinor focused on the Cretaceous intervals of the Norwegian Sea. Several source rock layers of Jurassic, Cretaceous and Cenozoic age have been widely discussed as being present in the Norwegian Sea deepwater area, but only some of them are proven.
In 2015, the 6706/12-2 (Snefrid Nord) well in the Norwegian Sea deep water triggered renewed interest in understanding the petroleum system of the Møre-Vøring basins. The geochemical composition of the discovered hydrocarbons pointed to a source rock other than the known Jurassic Spekk Fm: age-specific biomarkers indicate a Cretaceous or Cenozoic origin for the hydrocarbons.
Those findings reduced uncertainties regarding source rock presence and were integrated into conventional basin modelling to define new working prospects. At the end of 2016, drilling of exploration well 6608/10-17 S confirmed a discovery, which contributed to the planning of the Verdande field. The hydrocarbon charge history of the Verdande field demonstrates an alternative model, without Jurassic source rocks, at work on the Norwegian Continental Shelf.
Sverre Planke presented the Cretaceous and Paleogene source potential of the Norwegian Atlantic margin, highlighting the possible role of sill intrusions. This area represents one of the last frontier petroleum provinces in Europe, and may also hold vast reservoir potential for CO2 sequestration. The rich Jurassic source rocks are typically deeply buried and overmature in the outer part of the margin. Organic-rich Cretaceous, Paleogene and Neogene sediments are rare, but slightly enhanced TOC levels are present. Igneous sills emplaced at approximately 56 Ma represent a critical moment, modelled to have generated large amounts of hydrocarbons from low-TOC Cretaceous sediments. Enhanced TOC is found in immature Early Eocene and Miocene sediments cored on the Skoll High at IODP Site U1572. The measured temperature gradient at the site is around 75°C/km, so hydrocarbon maturation would require burial of approximately 1-3 km, achievable below the North Sea and Bear Island Trough mouth fans.
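As a back-of-the-envelope check on those last figures, the burial required for maturation follows directly from the gradient: depth is the needed temperature rise divided by the gradient. The short sketch below uses the ~75°C/km value quoted for Site U1572; the oil-window temperatures (roughly 60-150°C) and the ~4°C seabed temperature are generic textbook assumptions, not values from the talk.

```python
# Rough burial-depth estimate for a linear geothermal gradient.
# Gradient from the article (~75 degC/km at IODP Site U1572);
# oil-window limits and seabed temperature are generic assumptions.

def burial_depth_km(target_temp_c, gradient_c_per_km=75.0, seabed_temp_c=4.0):
    """Depth (km) at which sediment reaches target_temp_c, assuming a
    linear gradient below a seabed at seabed_temp_c."""
    return (target_temp_c - seabed_temp_c) / gradient_c_per_km

top = burial_depth_km(60.0)     # top of an assumed oil window, ~0.75 km
base = burial_depth_km(150.0)   # base of an assumed oil window, ~1.95 km
print(f"Assumed oil window reached at roughly {top:.1f}-{base:.1f} km burial")
```

With this steep gradient the estimate lands within the 1-3 km range quoted for maturation below the trough-mouth fans; a more typical ~30°C/km gradient would push the same window two to three times deeper.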
Neogene generative system?
The last talk, by Rune Mattingsdal from the NOD, addressed a possible Mid-Miocene generative system in the Svalbard province as a proxy for the West Barents Sea. In 2021 and 2022, two different active oil seeps were sampled west of Svalbard. Geochemical analysis indicates that both oils come from the same source rock, deposited in a deltaic environment, and are very young, most likely Tertiary in age. The characteristics of these oils correlate quite well with a young Early Miocene deltaic-derived source rock previously proven and sampled at ODP Site 909 west of Svalbard.
Existing paleogeographic models indicate that at this time large parts of the Barents Sea shelf were exposed, most likely with large rivers and delta systems, both from Svalbard and the southwest Barents Sea, supplying terrestrial organic matter to the ocean. In addition, the Early Miocene Norwegian-Greenland Sea was probably still an oceanographically isolated basin, a setting likely favourable for source rock deposition. SAR satellite observations on the slope west of the Barents Sea shelf show abundant oil slicks at the sea surface, most likely originating from an active young petroleum system underneath the many-kilometres-thick Quaternary Bear Island Trough Mouth Fan. These new results and observations can form the basis for a new play model in the westernmost Barents Sea, sourced from a young Early Miocene source rock.
Conclusion
Generative systems other than the KCF can be found, and can be of economic interest, mainly in the frontier parts of the oil and gas provinces (both laterally and in depth), e.g. the Paleozoic and Permian of the southern Central Graben, the Toarcian to Bajocian of the Viking Graben, and the Cretaceous and Tertiary of the Norwegian Atlantic margin.
This session, organised by the EAGE Technical Community on Basin and Petroleum Systems Analysis, continues a successful series of petroleum system-related dedicated sessions that were introduced at the EAGE Annual in Amsterdam (2021), and followed by Madrid (2022) and Vienna (2023). The 2024 extended abstracts are now available in EarthDoc.
Join the Basin and Petroleum Systems Analysis community
Special issue submission deadline reminder
Don’t miss the chance to contribute to these special issues.
• Geoenergy thematic collection ‘The sustainable future of geoenergy in the hands of early career researchers’: Expressions of interest, with commitment to publishing a paper, should be sent by 30 November 2024
• Geophysical Prospecting special issue ‘Advances in geophysical modelling and interpretation for mineral exploration’: Submit your manuscript by 31 December 2024.
Advancing offshore safety discussed at EAGE LC London
Safety in various offshore contexts was the topic for an evening lecture in May organised by the EAGE London Local Chapter (LC) at Imperial College and streamed online to allow a broader audience to join.
Cai Glyn Ferguson, principal marine geoscientist at AtkinsRéalis, gave an insightful presentation about the GEOCAST-GO research project, underway in collaboration with the University of Florida for the Ocean Energy Safety Institute (OESI). The main purpose of the project is to support safety and environmental advances in offshore energy, in both the renewable and traditional sectors. It aims to deliver openly accessible reports for cable designers, installers, and maintenance personnel in the offshore industry. The initial stage involves a forensic examination of cable failures in the EU/UK, linking geotechnical and metocean conditions to cable failure. Databases defining globally relevant morphodynamic provinces based on seabed conditions will be integrated with the forensic analysis to enable the assignment of georisk to provinces localised around the American continental shelf. This will facilitate knowledge transfer between regions and help predict the key risks associated with those provinces.
Ferguson explained the modelling of seabed morphodynamics and hydrodynamic forces, the investigation of both primary and secondary scour, and the interaction of bedforms with cables and subsea structures. These numerical models will be refined by hindcasting against analogous data and by forward modelling to predict future behaviour.
EAGE London LC thanks Cai Glyn Ferguson for sharing the research results and looks forward to seeing the outcome of this project. We also thank Dan Wagner, Tiexing Wang and Celina Giersz for hosting and moderating the event. The presentation was recorded and is available on the EAGE YouTube channel.
EAGE launches Rosemary Hutton Award for Best Paper in Geoenergy
EAGE is introducing the Rosemary Hutton Award, honouring the Best Paper published annually in Geoenergy. Named after pioneering geophysicist Rosemary Hutton, it joins EAGE’s five prestigious Best Paper awards, celebrating outstanding geoscience and engineering research in our multi-disciplinary community.
Hutton’s groundbreaking work in electromagnetic geophysical methods to study the Earth’s crust and upper mantle continues to inspire modern research, and this award reflects her enduring impact on the field.
Launched in collaboration with the Geological Society of London, Geoenergy publishes timely research on key topics like energy storage, geothermal energy, subsurface disposal, hydrogen energy, critical and raw materials, and sustainability – critical areas in the era of sustainable energy.
Peter Rowbotham, EAGE Publications Officer, explains the award’s selection process: ‘The selection of the Best Paper author(s) will be conducted by the EAGE Awards Committee, based on a rigorous review by the journal’s editors, and put forward to the EAGE Board for approval each year.’
Sebastian Geiger, Geoenergy editor-in-chief, invites members to take advantage of the journal’s benefits: ‘A digital subscription to Geoenergy is included in EAGE membership, offering access to cutting-edge research. We look forward to celebrating the research that will drive our community forward in the years to come.’
Geoenergy invites submissions that advance fundamental research and case studies in subsurface and near-surface analysis. The journal currently has several thematic collections open for submissions. Read Geoenergy published papers and explore its thematic collections directly on EarthDoc.
Modelling for cable and other seabed damage.
Submit Your Abstract for the EAGE Annual 2025 Technical Programme
We are excited to invite you to submit your abstracts for the upcoming EAGE Annual Conference and Exhibition, set to take place in Toulouse from 2 to 5 June 2025. Under the theme ‘Navigating Change: Geosciences Shaping a Sustainable Transition’, the event once again promises to gather leading experts and innovations in geosciences, engineering, and technologies driving the global energy transition.
The 2025 Technical Programme will be carefully balanced across the core fields of oil and gas, carbon capture and storage, renewables, and infrastructure geosciences. According to EAGE Technical Programme Officer Martin Widmaier, the focus will be on representing the three key Circles of the Association while maintaining technical quality and fostering discussions on crossover technologies that have the potential to bridge sectors.
Widmaier says: ‘The EAGE Annual Conference 2025 in Toulouse promises a compelling Technical Programme and high-quality papers. The themes will reflect the evolving focus and interests of the EAGE and its three Circles: Oil and Gas, Sustainable Energy, and Near Surface Geoscience, especially in the context of the energy transition. Topics will range from geophysics and geology to integrated subsurface, reservoir engineering, mining, and data & computer science. Energy transition-related topics will once again be highlighted as a key theme.’
The EAGE Annual values diverse voices in the field and anticipates strong participation from young professionals and students, broadening perspectives and creating space for inter-generational knowledge exchange. This is the perfect opportunity for those who wish to showcase their research, enhance inter-disciplinary collaboration, and advance their careers at a highly visible event.
The Call for Abstracts is now open. We welcome submissions across a wide range of topics, including Geophysics, Geology, Reservoir Engineering, Integrated Subsurface, Mining, Renewables, Infrastructure, and Data Science. Submit your work by 15 January 2025 and become a part of the Technical Programme that defines the future of engineering and geosciences.
For more details on the topics, visit www.eageannual.org. We look forward to receiving your submissions and working together to build a Technical Programme that reflects how our geoscience and engineering community can advance our disciplines and be a leader in the energy transition.
A rewarding week for Near Surface Geoscience 2024 in Helsinki
Helsinki was the gathering point for over 500 geoscientists at Near Surface Geoscience 2024 (NSG2024), offering a dynamic mix of technical sessions, practical workshops, insightful panels and memorable field trips. Across three parallel conferences, attendees explored the recent progress in near-surface geophysics while enjoying opportunities to network and collaborate.
The event started with a series of workshops developed to provide in-depth learning in specialised areas of geophysics. Participants could explore Digital Outcrop Modelling, Transient Electromagnetic Methods, or Hard Rock Physics, each offering a mix of theory and interactive sessions. These workshops allowed geoscientists to engage with topics of professional interest, while also providing a space for practical application and discussion.
Following the workshops, the conference proper began with the Opening Session led by Andi A. Pfaffhuber, EAGE NSG Circle chair, followed by welcome remarks from Valentina Socco, EAGE president, and Suvi Heinonen, chair of the Local Advisory Committee. Their remarks set a collaborative tone for the week, highlighting the importance of innovation and knowledge sharing within the geoscience community.
The European Meeting of Environmental and Engineering Geophysics, celebrating its milestone 30th edition, focused on the application of geophysical methods to solve environmental and engineering issues. Presentations covered a variety of topics, from novel approaches to mapping soil contamination to geotechnical monitoring techniques in urban environments. Among the highlights was the application of remote sensing technologies for infrastructure stability monitoring in risk-prone areas. Several case studies showcased how geophysics is being integrated with other disciplines to address complex problems such as industrial waste management and natural hazard mitigation.
Running alongside, the 5th Conference on Geophysics for Mineral Exploration and Mining centred on geophysical methods applied to mineral exploration. Experts shared new methodologies for mapping mineral deposits at greater depths and in remote locations. One notable area of focus was on innovations in airborne and ground-based geophysics, which are revolutionising the exploration of critical metals for renewable energy and tech industries. Techniques like magnetometry and electrical resistivity were examined in detail, emphasising improvements in accuracy and operational efficiency.
The 4th Conference on Airborne, Drone, and Robotic Geophysics highlighted the increasing use of drones and robotics in geophysical surveys. These technologies are enhancing data collection in challenging environments, such as mountainous or otherwise difficult-to-access regions. Participants explored how these emerging tools are influencing the future of geophysics, with applications ranging from mineral exploration to archaeological surveys.
The Exhibition was one of the central features, running alongside the conferences and bringing together 35 exhibitors. The space allowed participants to explore new technologies and services related to near-surface geoscience, while also providing a key area for interactions, including the Icebreaker Reception, which encouraged informal discussions among attendees.
As ever, one of the event highlights was the field demonstration of geophysical equipment in action, always popular with participants. Whether observing electromagnetic or drone survey technologies, these sessions provided a practical look at how innovations are being applied in the field, bridging the gap between theory and practice.
Another standout moment was the special session ‘Rocky Road to Success’, organised by the EAGE Women in Geoscience and Engineering community. The session focused on the challenges and opportunities in the profession, featuring inspiring stories from industry leaders. It was a moment of reflection and motivation, resonating with many attendees as a call to action for greater diversity and support within the profession.
The event also offered participants the chance to step outside the conference setting and explore Finland’s unique geology through the field trips. From walking through the historic Suomenlinna Fortress to descending into the Tytyri Mine, these excursions combined geological learning with cultural experiences, allowing attendees to see up close some of the region’s most iconic geological formations.
As NSG2024 came to a close, delegates left with strengthened professional relationships, fresh perspectives and an appetite to convene again at Near Surface Geoscience 2025 in Naples, Italy. This next edition will introduce two new conferences – one on Geohazard Assessment and Risk Mitigation, and the other on UXO & Object Detection – broadening the technical programme and fostering innovative synergies. Next year’s event promises to further expand the horizons of geoscience, so make sure to mark Naples, 7-11 September 2025, in your calendar.
Opening Session in Helsinki.
Field trip to the Tytyri Mine.
Hackathons among challenges planned for EAGE Digital 2025
The highly anticipated Fifth EAGE Digitalization Conference & Exhibition (EAGE Digital) in Edinburgh on 24-26 March 2025 has more than ever to offer, especially if you register with the All Access Pass.
Activities available will include short courses on topics ranging from data visualisation to Python-based AI for reservoir modelling. Hackathons will focus on innovating geoscience with co-pilot systems, GenAI systems, and a geosteering challenge. These activities will provide unique engagement opportunities to solve important technological and business challenges in an interactive environment.
This year’s conference is hosted with BP as lead sponsor under the theme ‘Enhancing predictions and investments with digital technologies’. The wide range of short courses, workshops, and hackathons are just part of an extensive technical and strategic programme providing unrivalled opportunities to address the technological and business challenges in the digital sphere.
We are welcoming your contributions as we seek to build resilience for the energy industry through innovation and technology. Are you working on a project that boosts digital efficiency in our industry? Share your breakthrough ideas and pioneering progress! Submit your abstracts by 24 December for a chance to showcase your insights and inspire change.
For more details visit eagedigital.org.
LC Germany hosts event on UAV and satellite data
EAGE Local Chapter (LC) Germany met in June at the Technische Hochschule Georg Agricola University (THGA) in Bochum for the first time. The occasion was a hybrid event, titled ‘Innovative environmental monitoring using multispectral UAV and satellite data’, attracting professionals and researchers from the oil and gas, environmental monitoring, geoscience, and remote sensing communities. The focus was on how advanced technologies can be used to protect ecosystems and manage risks, marking an exciting step for the Chapter as it expands its activities in Germany.
In the keynote presentation, Benjamin Haske, Tobias Rudolph, and Marcin Pawlik from THGA introduced a new multi-level monitoring approach that uses satellite data for large-scale analysis alongside UAV (unmanned aerial vehicle) data for more detailed monitoring. To ensure accuracy, this approach is supported by on-the-ground measurements, providing a comprehensive view of environmental changes.
The presenters showcased how these monitoring systems can be applied in real-world scenarios, especially in post-mining areas and for managing risks in the oil and gas industry. Using vegetation indices from satellite and UAV data, they demonstrated how changes such as subsidence and product leaks can be quickly and accurately detected, helping to make better decisions in environmental management.
The event also looked ahead to future projects and discussed ongoing work at the Research Centre of Post-Mining, including 3D modelling, multi-sensory drone flights for disaster protection, and drone-based inspections. The insights gained from these projects are expected to be valuable across different industries, showing the flexibility and importance of the multi-level monitoring system.
The event wrapped up with a networking session where participants could chat and exchange ideas over snacks and drinks provided by EAGE. This relaxed atmosphere helped foster new connections and collaborations. Overall, the event in Bochum was a great opportunity for the EAGE LC Germany to build its presence and foster collaboration within the geoscience community. The chapter is looking forward to hosting more events in different parts of Germany, continuing to tackle important environmental challenges.
Connect with EAGE LC Germany
Keynote speakers (from left to right): Marcin Pawlik, Tobias Rudolph, and Benjamin Haske.
Panel of experts at EAGE Digital 2024.
University of Manchester is latest EAGE Student Chapter
A new Student Chapter at the University of Manchester, UK, has joined the EAGE community. This is what the students say.
We are a strong group of PhD researchers and Masters students across the Departments of Earth and Environmental Sciences and Chemical Engineering that are excited to join and network with the global EAGE community. Within our Chapter, we encompass active research in all aspects of energy systems including geothermal, carbon capture and storage, hydrogen, deep geological disposal of radioactive waste, and oil and gas. These research areas integrate geological, geophysical, and engineering approaches at multiple scales to understand current world problems. Within our research, we apply observational, experimental, and modelling approaches to understand geological problems.
Our Chapter’s mission is to apply our geological and engineering research expertise to address the challenges of the energy transition. We intend to hold
seminars and lectures from external speakers from academic and industry knowledge bases to share novel research and the latest updates in earth sciences and engineering disciplines. We will also be organising field trips for the Chapter to understand the larger context of subsurface research projects.
We are also proud that a team of students from the University of Manchester won the Laurie Dake Challenge in 2023, and that a team came second in the 2024 competition. To contact the Manchester Chapter, please email our president, Holly Mills, at holly.mills@manchester.ac.uk.
Renew your EAGE Student Chapter for 2025
As 2025 approaches, exciting opportunities await for all EAGE student chapters to grow and thrive once again. If you’re passionate about geoscience and engineering, now is the perfect time to reactivate your student chapter and become part of a dynamic international network of like-minded individuals.
Reactivating your chapter offers numerous benefits that can enhance your academic and professional journey, including networking opportunities with industry leaders and professionals; access to support programmes that help you develop both technical and soft skills; exclusive activities like competitions, conferences, and workshops with top experts; and mentoring programmes that guide you through your career path in the energy and geoscience sectors.
To make this even more rewarding, we’re providing 15 free memberships, supported by the EAGE Student Fund, for each reactivated student chapter. This is a fantastic opportunity to bring more students into the fold and allow them to enjoy all the benefits of EAGE membership, ranging from access to exclusive events and academic resources to professional growth opportunities.
We want each of you to tap into the potential of being part of a vibrant, global community where innovation, energy, and geoscience meet. Don’t miss the chance to elevate your chapter and engage your academic community with new members and exciting activities.
To reactivate your chapter, gather a motivated team of students passionate about geoscience and engineering; reach out to the EAGE Students Community Manager for guidance and support in revitalising your chapter; and then use the 15 free memberships to encourage your peers to join and get involved, taking advantage of EAGE’s global student initiatives to enhance your chapter’s presence and engagement.
If your university doesn’t yet have an EAGE student chapter, this is the perfect opportunity to establish one and lead the way in creating a dynamic, forward-thinking academic community. For more information on reactivating your chapter, don’t hesitate to get in touch. We’re here to support you every step of the way.
Group photo of the ‘Basins’ research group at the University of Manchester at a field trip to see the Sherwood Sandstone Group. Some of the attendees are now members of Manchester’s new Student Chapter.
Time to team up for the next Laurie Dake Challenge
The Laurie Dake Challenge is back for the 2024-2025 edition, and we invite you to be part of the competition. If you’re studying with a diverse group of peers who love problem-solving and collaboration, and have a passion for geosciences, this is the perfect opportunity to showcase your skills.
As one of EAGE’s premier student competitions, the Laurie Dake Challenge encourages inter-disciplinary teamwork, data integration, and the refinement of project management and presentation skills. Competing teams will be tested not only on their technical expertise but also on their creativity, innovation, and ability to think critically. This year’s Challenge, sponsored by TotalEnergies, focuses on a crucial global issue: CO2 capture by mineralisation in basalts.
The Challenge unfolds in three rounds, with each round bringing new hurdles to negotiate. Participants will be able to work on real assignments, gaining invaluable hands-on experience while learning directly from industry experts. If your team makes it to the final round, you’ll present your findings at the EAGE Annual Conference & Exhibition in Toulouse, France, in June 2025.
How to apply
Gather a multi-disciplinary team of three to five students (including no more than one PhD student), submit your application, and demonstrate your passion for tackling one of the most pressing climate challenges of our time. Your submission must include a ‘Declaration of Academic Integrity’ and a Motivational Letter explaining your team’s goals, why you want to join the Challenge, and the importance of developing CO2 capture by mineralisation technologies.
Key dates
The competition comprises three selection rounds, each explained below.
Application deadline — 15 November 2024
Each team signing up for the competition will receive an assignment provided by TotalEnergies. The earlier you register, the more time you will have to work on the assignment and prepare your submission, so register your team today.
First round submission — 3 January 2025
At the end of the first round, the ten best teams will be selected for the next round of the competition.
Second round submission — 7 February 2025
The ten teams will be asked to create a three-minute video pitch and a maximum 3000-word report, including images, to describe and provide evidence of their proposed development plan for the area. The jury will select the six teams to compete in the final round.
Final competition — 1 June 2025 in Toulouse, France
The six finalist teams will be invited to present their findings in Toulouse, France at the 86th EAGE Annual Conference and Exhibition.
Don’t miss the chance to be part of the Laurie Dake Challenge 2024-2025. Apply now and begin your journey toward becoming a leader in the energy transition and geoscience innovation. For any questions, feel free to reach out to us at students@eage.org.
The Laurie Dake Challenge awaits, ready to empower you with knowledge, experience, and the chance to leave a lasting impact. We look forward to seeing your teams in the competition!
The EAGE Student Fund supports student activities that help students bridge the gap between university and professional environments. This is only possible with the support from the EAGE community. If you want to support the next generation of geoscientists and engineers, go to donate.eagestudentfund.org or simply scan the QR code. Many thanks for your donation in advance!
Finalist teams at the Laurie Dake Challenge 2024, along with the committee jury, in Oslo, Norway.
Personal Record Interview
Aiming for the end of the rainbow
Lindsey Smith has spent her entire career so far (18 years) with bp as a geophysicist based in Aberdeen, working on North Sea and international projects including early experience with ocean bottom node seismic acquisition. She is now becoming known in the geoscience community for her advocacy of scientific colourmaps over the typical ‘rainbow’ software defaults for geophysical data visualisation. She presented a paper on the topic at this year’s Annual in Oslo.
How was growing up on a farm?
I grew up on a mixed farm in Perthshire, Scotland. My dad had suckler cows, grew cereal crops, potatoes and probably the best fruit, raspberries (although we didn’t think that at the time, we were always sick of raspberries by the end of the summer). I am the eldest of four and my mum once told me that if we weren’t bothering her, she didn’t mind what we were up to, so we had a lot of freedom, particularly when the weather was good. If it was bad, we watched a lot of TV.
Geology v geophysics dilemma
My favourite subject at school was geography. I liked learning about the physical processes that formed the landscapes around me and the art of drawing figures and diagrams, rather than writing essays. I chose geophysics rather than geology for two reasons: images of the earth’s interior and sea floor age (ironically in a rainbow colour map) inspired me to study the physical processes and properties of the Earth, and secondly, we had the 1990 film Tremors on VHS — I wanted to be a seismologist like Rhonda LeBeck.
Year off experience
After my undergraduate degree at the University of Edinburgh, which included a year abroad at Queen’s University in Kingston, Canada, I wanted to stay in academia, but had a year off including some time in New Zealand. I managed to do some voluntary geophysical field work on a project studying the Alpine fault in South Island, as well as working on a fruit farm picking pears and agricultural labouring on the back of a potato harvester. I cut my trip short to come back to the UK for a seismic acquisition cruise in the North Atlantic to collect data for my PhD.
Career highlights so far
I worked in bp’s North Sea seismic delivery team in the late 2000s/early 2010s. This was a time when we started to acquire more ocean bottom cable surveys across our producing fields, balancing imaging requirements with survey costs. Latterly, I worked on Clair for four years after bp acquired an ultra-high-density node survey in 2010, looking at the multi-component data and testing FWI technology. These roles were great, but the real highlights for me were the people I worked with.
Current role at bp
With the advent of remote working, I’ve had the opportunity to work on some fields outside the North Sea including Azeri-Chirag-Gunashli (ACG), Azerbaijan, again with interesting imaging challenges and new OBN seismic. I am currently back in the North Sea, working on site characterisation of potential saline aquifers for CCS as part of the North Endurance Project.
Colourmap mission?
In one sentence: change the default colourmap! In a bit more detail: we’ve been talking in bp about improving data visualisation with colour since 2017, after we read Matt Hall’s Agile Scientific blog titled ‘No more rainbows!’. We have seen from tests on our data the improvements to be gained in interpretation, analysis and communication by swapping out bright colourmaps like rainbows for those designed specifically for scientific data visualisation. I want to help close the knowledge gap around the science of data visualisation with colour in the wider applied geosciences community, and encourage people both to explore how the choice of colourmap influences the ease and accuracy of interpretation and to challenge the default colourmaps that are available in their visualisation toolkits. There are many open-source scientific colourmaps which are accurate, intuitive and accessible for those with colour vision deficiency. Search for ‘scientific colourmaps’ to find out more.
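The property Smith alludes to can be demonstrated in a few lines. This sketch (not from the interview, and using the standard matplotlib colormaps rather than any bp-specific toolkit) approximates perceptual lightness (CIE L*) along a colormap: a rainbow map such as 'jet' brightens and then darkens again, so equal data steps do not look like equal steps, while a map designed for scientific visualisation such as 'viridis' increases steadily.

```python
# Sketch: compare perceptual lightness along 'jet' (rainbow) and 'viridis'.
import numpy as np
from matplotlib import colormaps

def lightness(cmap_name, n=256):
    """Approximate CIE L* lightness at n points along a named colormap."""
    rgb = colormaps[cmap_name](np.linspace(0, 1, n))[:, :3]
    # undo sRGB gamma, take relative luminance Y, convert Y to L*
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    y = lin @ np.array([0.2126, 0.7152, 0.0722])
    return np.where(y > 0.008856, 116 * np.cbrt(y) - 16, 903.3 * y)

for name in ("jet", "viridis"):
    dl = np.diff(lightness(name))
    steady = bool(np.all(dl > -0.1))  # non-decreasing, within a small tolerance
    print(f"{name}: lightness increases steadily: {steady}")
```

Running this reports that 'viridis' is monotonic in lightness while 'jet' is not, which is one concrete way to audit the default colourmaps in a visualisation toolkit.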
Mid career: what next?
I like what I do. There are always new things to learn, new ways of working and new technologies to help us understand the subsurface geology better, all of which will keep me in a technical role.
How do you spend time off?
My main hobby is cycling, either commuting to work or out at the weekend with family or friends. For the past 15 years bp North Sea has organised a three-day coast to coast charity bike ride from the west coast of Scotland back to Aberdeen. I’ve done six of those rides and I’m planning to do it again next year.
Lindsey Smith
MORE TO EXPLORE
CROSSTALK
BY ANDREW M c BARNET
BUSINESS
Drilling down on the fracking issue
In such a contentious US presidential election (due to conclude in early November, after this issue of First Break goes to press), energy policy may have a surprisingly significant bearing on the result whichever way it turns out. Not because energy ranks among the key issues the polls say are on the minds of most of the country’s voters, but because of the Electoral College system enshrined in the Constitution and, in such a divided nation, unlikely ever to be reformed.
No country’s election is perfect. Arguments for first-past-the-post versus some form of proportional representation haunt all democracies. But the US elections operate under special rules created by the founding fathers in 1787. They are so manifestly inequitable today that the voting in a handful of so-called swing states frequently determines who becomes president. Essentially the result is decided by an Electoral College of 538 electors from all the US states plus the District of Columbia (DC). Each state is guaranteed two senators and at least one congressional district, with the 435 congressional districts distributed among the 50 states based on the population census. This arrangement favours the smaller states, giving them more electoral votes per person. The fear in the early days was that without this power, politicians would completely ignore small states and only focus on big population centres. Over time this has become absurd, with people in, say, Wyoming having nearly four times the clout of people in California in presidential elections; if California were represented on the same scale as Wyoming it would have 205, rather than 55, electors.
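The Wyoming/California comparison is back-of-envelope arithmetic, and checks out. A quick sketch, assuming rounded 2020 census populations (the figures are my assumption, not stated in the column):

```python
# Check the article's Wyoming vs California Electoral College figures.
# Populations are rounded 2020 census values (assumed, not from the text);
# electors: Wyoming 3, California 55 (the count the article uses).
wy_pop, wy_electors = 577_000, 3
ca_pop, ca_electors = 39_500_000, 55

wy_per_person = wy_electors / wy_pop   # electors per resident, Wyoming
ca_per_person = ca_electors / ca_pop   # electors per resident, California

clout_ratio = wy_per_person / ca_per_person   # roughly 3.7: "nearly four times"
ca_at_wy_rate = ca_pop * wy_per_person        # roughly 205 electors

print(f"Wyoming voter clout vs California: {clout_ratio:.1f}x")
print(f"California electors at Wyoming's rate: {ca_at_wy_rate:.0f}")
```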
‘New equivalent of ‘drill baby drill’’
Under this current electoral system, the winner of the popular vote may not secure enough Electoral College votes to win the presidency. This happened in both the 2000 and 2016 elections: George W. Bush and Donald Trump won clear Electoral College victories, but did not win the most votes nationwide.
With the country so divided, the outcome of this year’s presidential election will be decided by the results in a few battleground states such as Arizona, Georgia, Michigan, Nevada, North Carolina, Pennsylvania and Wisconsin. Hence candidates and surrogates have been focusing more or less all their attention on these key constituencies and the local issues which concern them. Which is where fracking maybe comes into play, specifically in Pennsylvania (with 19 Electoral College votes at stake) and to a lesser extent in Michigan (with 15 votes). During the campaign Democratic candidate Vice-President Kamala Harris has been on the back foot. She had to reverse a 2019 pledge to ban fracking which she made during that year’s Democratic primary in her first attempt to win the presidential nomination. This headline issue has been successfully obfuscated by the intervention of ex-president Donald Trump, the Republican contender. He would know that no president has the authority to ban fracking, as it would require Congressional approval, an unlikely outcome. Yet he seized on what is seen as a vulnerability (Harris ‘flip-flopping’) to berate the Democrats with a familiar cocktail of lies, half-truths and fear-mongering, which would not pass first base with a fact checker, but nonetheless makes a powerful message difficult to unpick in the heat of an election campaign.
His artful message has been based on conflating fracking with US energy policy generally, in the process somewhat muting well-known environmental and health objections to fracking, for example in Michigan, where there is growing public concern over the increase of high-volume hydraulic fracturing technology. At his rallies Trump has claimed Democrats are anti-fossil fuel, endangering US oil and gas production (which has actually been at a record high during the Biden-Harris administration), and linked this to his hostility to renewables and climate change policies, highlighted by his notorious antipathy to wind farms. The latter is often attributed to the trauma of witnessing the building of an 11-turbine wind farm some 10 years ago right next door to his newly opened prestige golf course in Aberdeenshire, UK.
Politics has a way of grossly over-simplifying some issues, as is the case with Pennsylvania. Even in that state, not to mention nationwide, it is doubtful whether many voters would know that fracking has been an industry option since the 1940s, or the details of how fracking and horizontal drilling made the shale boom possible. Nor would they necessarily understand that fracking already accounts for the bulk of America’s domestic oil and gas production, with 95% of new wells being hydraulically fractured, supplying two-thirds of the total US gas market and about half of US crude oil production, according to the US Department of Energy (DoE).
At the federal level, the original bill to phase out fracking was introduced in January 2020 by, among others, left-leaning Democrats Senator Bernie Sanders and Congress Representative Alexandria Ocasio-Cortez. It went nowhere but obviously caused some alarm in Pennsylvania. The state is the second largest US gas producer after Texas with natural gas output totalling 7.5 trillion cubic feet in 2022, according to the US Energy Information Administration. Any threat to the future of fracking (unlocking the Marcellus and Utica shale reserves) could be viewed as a challenge to the state’s economy.
In 2019 when president, Trump commissioned a report from the DoE’s Office of Fossil Energy and Carbon Management on ‘Economic and national security impacts under a hydraulic fracturing ban’. The conclusion echoes his current public thoughts on the topic. The report stated that an ‘ill-conceived’ ban would have far-reaching and severe consequences, including the loss of millions of jobs, price spikes at the gasoline pump, higher electricity costs for all Americans, and the likelihood of increased CO2, SO2, and NOx emissions. Furthermore, a ban would end the US role as the world’s largest oil and natural gas producer and would force the United States to become a net importer of oil and gas once again, thereby weakening the nation’s geopolitical influence and putting national security at risk.
‘Fracking boom … curse of overabundance’
The stats tell a more nuanced story. In March 2024, the state reported 16,831 direct jobs in the industry, less than one half of 1% of all jobs. Some claim an improbable 120,000 if indirect employment is taken into account. Half of Pennsylvania households use natural gas as their primary home heating fuel, although the price to the consumer is nothing like the cheapest in the US. Adding to a complex picture, in the rural areas of Pennsylvania where most production occurs, the fracking boom has come with the curse of overabundance. Natural gas prices in southwest Pennsylvania have plunged some 80% over the past two years, after surging to $9 per million BTU in August 2022 following Russia’s invasion of Ukraine, according to S&P Global Commodity Insights. As a result, royalties paid to local landowners have shrunk. In these mainly rural areas the key question is said to be not how to produce more natural gas, but how to build the infrastructure necessary to get it to places like New England and the Gulf Coast where it is likely to fetch higher prices. Yet any investment in cross-state pipeline infrastructure would be highly problematic, especially with all the uncertainty of energy transition.
The most recent poll indicated that fracking remains a divisive issue in the state, with support highly partisan. Fifty-one per cent of all Pennsylvania voters say they support fracking and 30% say they are opposed, but a call for more regulation enjoys significant majority backing. Only 42% of respondents said they would support an outright ban.
Trump has therefore cleverly converted his advocacy of fracking into a catch-all call for unrestricted production of oil and gas, the new equivalent of ‘drill baby drill’ expressed by Republican vice-presidential candidate Sarah Palin in the 2008 election won by Barack Obama.
A second Trump term energy policy can therefore be expected to expand offshore drilling, ease regulations on fracking, and open federal lands for expanded energy exploration, all in pursuit of US energy dominance and self-sufficiency. Renewables such as solar and wind power may come under scrutiny, as the former president has often complained that they drive up electricity prices for the consumer. He has also spoken against carbon capture and storage initiatives. His previous scepticism about the value of rebates for electric vehicles, however, has evaporated since billionaire Tesla CEO Elon Musk became an enthusiastic Trump booster and donor. Also no doubt noted: nearly four-fifths of credits are flowing to Republican districts, which explains why 18 Republican congressmen declared in August that they opposed repeal of the climate-friendly Inflation Reduction Act (IRA).
Harris can be expected to continue on the path pursued by President Biden, with all its ambiguities, satisfying simultaneously the oil and gas business and demands for climate change mitigation measures. Biden began his term by cancelling the planned Keystone XL tar sands oil pipeline from Canada to the US but presided over a record increase in oil production, the third president in a row to do so. At the same time offshore drilling permits are at a record low, a consequence of tougher regulation but maybe also reflecting the high cost of viable prospects in the Gulf of Mexico. Harris’s green credentials include inheriting the IRA, the CHIPS and Science Act and the Bipartisan Infrastructure Law, all designed to help decarbonise the economy. She would also cooperate on international climate policy, in contrast to Trump’s withdrawal from the Paris climate accord.
The choice is between an insular pro-industry player that will slow the US path to Net Zero and a candidate already committed to change but feeling the constraints of enacting a green agenda while maintaining oil and gas output for energy security and revenue. The candidates’ stance on fracking may be a factor in those crucial swing states.
Views expressed in Crosstalk are solely those of the author, who can be contacted at andrew@andrewmcbarnet.com.
INDUSTRY NEWS
TGS launches big CCS assessment on US Gulf Coast
TGS is making strides in its Energy Transition offer in America with two big surveys to assess carbon capture and storage in the US and a major offshore wind survey. Firstly, the company has launched the US Gulf Coast CO2 Storage Assessment package.
Spanning 70 million acres across the Texas and Louisiana Gulf Coast, the assessment offers insights that streamline storage evaluations and guide strategic decisions. With data from 9000 wells and evaluations of 22 key geologic formations, TGS’s assessment is providing comprehensive formation top and petrophysical interpretations. These findings are being integrated into advanced workstation environments, equipping decision-makers with actionable information for informed strategic planning, said TGS.
Carel Hooijkaas, EVP of New Energy Solutions at TGS, said: ‘This assessment package is a game-changer for the industry, providing unparalleled data coverage and expert analysis to identify the most viable reservoir and seal formations for CO2 storage. Leveraging our extensive well log library and deep subsurface expertise, we are uniquely positioned to support carbon sequestration efforts across the Gulf Coast and beyond.’
The Gulf Coast CO2 Storage Assessment is strategically aligned with the recent Texas General Land Office CCS Lease Sale along the Texas coast, providing data opportunities for carbon storage evaluation.
The assessment features a stratigraphic framework, comprehensive petrophysical analysis, and visual log curves, alongside extensive regional mapping of storage properties and volumetric visualisations.
‘By extending its coverage to the Gulf Coast and neighbouring regions such as Arkansas, Mississippi and Alabama, the assessment equips stakeholders with the data necessary for planning and executing carbon sequestration strategies in the rapidly evolving landscape,’ said TGS.
Meanwhile, TGS has announced the availability of its CO2 Storage Assessment for the Michigan Basin. Spanning 50 million acres, the study provides detailed insights into the region’s geological formations, capacity estimates, and site suitability for carbon sequestration, offering strategic guidance for optimising carbon storage projects in the Midwest.
The study leverages data from 1650 wells in Michigan, alongside core data from Western Michigan University, to conduct an in-depth analysis of key geologic formations and their potential for CO2 storage.
The assessment features a fully integrated stratigraphic framework, comprehensive petrophysical analysis, and advanced log curve interpretations. Both reservoir quality and sealing integrity are examined.
Finally, TGS has completed acquisition of an ultra-high-resolution 3D (UHR3D) survey on behalf of Community Offshore Wind in the New York Bight.
The survey characterised soil conditions at the site, which is critical for the design of the offshore wind project. The survey commenced in October 2023 and ran for more than eight months.
Community Offshore Wind, a joint venture between RWE and National Grid
Ventures, won the largest parcel in the competitive lease sale off the coasts of New York and New Jersey, held in February 2022 by the Bureau of Ocean Energy Management (BOEM). The lease area has the potential to host 3 gigawatts (GW) of capacity.
Starting its second offshore wind site characterisation project, TGS was contracted to acquire UHR3D data over the entire lease, providing significantly more detailed information about the near-surface geology than traditional site surveys, which have historically used 2D technology. The survey was conducted by the TGS vessel Sanco Swift.
Hooijkaas said: ‘The acquisition of UHR3D seismic over the full lease will provide Community Offshore Wind with a solid foundation to develop their offshore wind farm site in the New York Bight. Our geophysical approach to mapping and understanding the shallow subsurface layers will reduce uncertainty and speed up site characterisation. We have received considerable interest in our solution from pioneering developers, and this high demand has driven our decision to invest in an additional UHR3D streamer set, which will be in operation in a few weeks’ time.’
Doug Perkins, president and project director of Community Offshore Wind, said the company is using the latest technology to study how the soil behaves in its lease area, so that it can engineer the project in an environmentally friendly way. ‘We are building a project that is going to generate clean energy for communities in the US Northeast for roughly 30 years, so precision is key for building a project of this scale. The data acquired from the survey allows us to plan with more precision and certainty than ever before.’
Shearwater wins Angola OBN project
Shearwater Geoservices has won a contract from TotalEnergies for an OBN project offshore Angola.
The deepwater OBN survey will be conducted by the dual ROV-equipped SW Tasman, the seismic node-laying vessel which was converted in 2023, with SW Gallien as source vessel. The survey will commence in January 2025 and run for two and a half months. Shearwater will utilise its compact high-endurance Pearl node for the project.
The work will be performed in Block 32 over the Louro and Mostarda fields where Shearwater previously completed a 4D streamer survey for TotalEnergies.
Shearwater CEO Irene Waage Basili said: ‘We are very satisfied with the performance of our SW Tasman/Pearl OBN platform which has been in continuous operation since its introduction last year. We are pleased to see one more key client use this unique platform to introduce deepwater OBN to new areas of their operations.’
Meanwhile, Shearwater has won the Petrobras Best Suppliers award for its offshore exploration programmes. Since 2020, Shearwater and Petrobras have collaborated on several big projects, including advanced 3D and 4D seismic programmes. This award covers the period from January 2023 to July 2024.
TGS and Aker BP sign deal to digitalise Yggdrasil oil and gas area
TGS has signed a strategic partnership with Aker BP to digitalise the operations of the Yggdrasil oil and gas area in the North Sea.
Aker BP will integrate TGS’ Prediktor Data Gateway into its digital ecosystem for Yggdrasil as a process data server. At Yggdrasil, Aker BP has developed remotely controlled operations, unmanned production platforms, new technology and data-driven decisions and work processes. The Prediktor Data Gateway will help to optimise work processes and enhance operational efficiency through real-time, high-quality, reliable data.
‘The Prediktor Data Gateway offers unique flexibility and efficiency, representing a new frontier in field operations,’ said Carel Hooijkaas, EVP of New Energy Solutions at TGS. ‘With the Prediktor Data Gateway, Aker BP can achieve greater control, leveraging real-time data from OT systems through an OPC-UA standard information model for improved decision-making.’
Meanwhile, TGS has won a baseline 4D streamer contract in the Southern Atlantic region with an independent energy company. The contract spans approximately 90 days.
Finally, TGS has completed the PGS24M04NWS project in the Outer Vøring area of the Norwegian Sea, carried out by the crew on board the Ramform Hyperion. Despite periods of rough weather, the vessel covered more than 1500 km2 in one month.
Norway bids to unlock Norwegian Sea discovery with hydraulic fracturing
The Norwegian Offshore Directorate believes innovations within hydraulic fracturing will solve one of the largest puzzles of them all – the Victoria discovery in the Norwegian Sea.
Victoria was proven by Exxon exploration well 6506/6-1 in 2000. Total took over the operatorship six years later, and an appraisal well was drilled in 2009. Work to mature the discovery, given the state of technology at the time, indicated that profitable production would be challenging, and the acreage was relinquished in 2018.
‘We want to inspire the companies to take another look at the Victoria discovery. Yes, it’s a tight reservoir at significant depths, with both high temperature and high pressure, as well as a high content of CO2. Nevertheless, a study indicates that current well technology, combined with hydraulic fracturing, could allow production of substantial gas volumes,’ said Per Valvatne, senior reservoir engineer at the Norwegian Offshore Directorate.
‘This is one of the largest remaining gas discoveries on the Norwegian Continental Shelf (NCS) that is still not covered under a production licence. This is acreage that the companies can apply for in the next APA (Awards in Predefined Areas). Previous work on the discovery has shown around 140 billion standard m3 of gas in place, and the study reveals that four wells could yield production of 29 billion m3,’ said Arne Jacobsen, assistant director of technology, analyses and coexistence at the Norwegian Offshore Directorate.
Opecs, a British consulting company, has assisted the directorate in this study, which also includes a geomechanical study of the discovery. Opecs’ report shows that use of modern hydraulic fracturing technology can improve well productivity enough to ensure profitable production from Victoria.
The discovery lies in a water depth of 400 m, with the reservoir approximately 4800 m below sea level. The Dvalin field (10 km away), and Heidrun and Åsgard (50 km away), are potential tie-back points.
The study shows that the aforementioned four wells could produce 10 million m3 of gas per day; a level that can be maintained for nearly two years. This would mean a total production of 29 billion m3 over a production lifetime of 30 years.
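The quoted figures can be checked with simple arithmetic. The sketch below assumes a plateau-then-decline profile (the study's actual profile is not spelled out in the article); it computes the plateau volume and the average rate implied for the remaining field life.

```python
# Back-of-the-envelope check of the quoted Victoria production figures.
# Assumed profile (not from the study): ~2-year plateau at 10 million m3/day,
# then decline, totalling 29 billion m3 over a 30-year life.

PLATEAU_RATE_M3_PER_DAY = 10e6   # 10 million m3/day from four wells
PLATEAU_YEARS = 2                # 'nearly two years'
TOTAL_M3 = 29e9                  # 29 billion m3 over the field life
LIFE_YEARS = 30

plateau_volume = PLATEAU_RATE_M3_PER_DAY * 365 * PLATEAU_YEARS  # ~7.3e9 m3
decline_volume = TOTAL_M3 - plateau_volume                      # ~21.7e9 m3
avg_decline_rate = decline_volume / ((LIFE_YEARS - PLATEAU_YEARS) * 365)

print(f"Plateau volume: {plateau_volume / 1e9:.1f} billion m3")
print(f"Average rate over remaining {LIFE_YEARS - PLATEAU_YEARS} years: "
      f"{avg_decline_rate / 1e6:.1f} million m3/day")
```

On these assumptions, the plateau accounts for roughly a quarter of the total, with the balance produced at a declining rate averaging around 2 million m3/day.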
Another finding in the study indicates that there may even be room for additional wells, which would further increase recovery.
Jacobsen said: ‘It’s important to get development started while there is existing infrastructure to tie into: that’s why we believe efforts to unlock these reservoirs must be increased now.’
Hydraulic fracturing has mainly taken place in chalk reservoirs on the NCS. The Victoria discovery is situated in a sandstone reservoir, where fracking experience is limited on the NCS. ‘This was the reason for bringing in UK-based Opecs, as the Brits are more experienced when it comes to these types of fields,’ said the Norwegian Offshore Directorate in a statement.
Jacobsen said that both the technology and expertise are better than was the case 10-15 years ago: ‘Fluids and propping agents used in hydraulic fracturing have improved, the vessels used are better, and we’re also able to deal with higher reservoir pressure and temperature. These operations can also be done faster and with greater reliability, thus reducing costs and uncertainty.’
He added that the study shows that producing Victoria may now be commercially profitable: ‘Our hope now is that this will spark someone’s interest, and a willingness to commit to this huge proven gas resource. Moreover, we also have several discoveries on the NCS in tight reservoirs. The study shows that using today’s technology could mean that hydraulic fracturing can unlock substantial values on the challenging Victoria discovery. If we can solve the challenge there, then we can also set our sights on other challenges. As resource managers, we believe that hydraulic fracturing will become a key technology for realising the resource potential in tight reservoirs.’
SLB launches AI to enhance workflows
SLB has launched the Lumi data and AI platform, which integrates advanced artificial intelligence (AI) capabilities, including generative AI, with workflows across the energy value chain.
The open, secure and modular platform unlocks access to data across subsurface, surface, planning and operations, increasing cross-domain collaboration and releasing new intelligence and insights to improve the quality and speed of decision-making. The latest large language models (LLMs) as well as domain foundation models from SLB will be embedded in the platform, enabling customers to accelerate AI adoption at scale.
‘AI is fundamentally altering the dynamics of our industry, but its transformational potential is hindered by the complexity of our industry’s data ecosystems,’ said Rakesh Jaggi, president, Digital and Integration, SLB. ‘Through the Lumi data and AI platform, we will liberate and contextualise data for our global customers across domains – enabling them to scale advanced AI workflows and accelerate their ongoing digital transformation.’
The Lumi platform is built on the latest industry standards and will be available on all major cloud service providers as well as on-premises. SLB’s customers can train and deploy industry-specific traditional and generative AI models, including foundational models for exploration and production (E&P) by SLB.
SLB’s Delfi digital platform will be enhanced by leveraging the data foundations and machine learning capabilities of the Lumi platform. This will enable
more powerful and agile reservoir modelling, seismic and wellbore interpretation, directional drilling and geosteering workflows. It will also enable new capabilities for automation and operational efficiencies.
The Lumi platform integrates technologies from leading technology partners with SLB’s digital and domain expertise to facilitate access to data and AI capabilities across the energy production cycle. The open architecture of the platform liberates data from structured and unstructured sources using standard and open protocols, including the Open Group’s OSDU Technical Standard, an open data standard for the energy industry. It leverages Cognite Data Fusion to connect and analyse production data to optimise operations.
STRYDE provides seismic solutions to six academic institutions
STRYDE has secured six contracts with leading academic institutions to provide its solutions, including its newest seismic system, ‘The STRYDE Mini System’, across the US, Europe, and Africa.
The Mini System is a complete nodal seismic system specifically designed to enable small-scale seismic projects, including research projects for the academic sector.
‘Our investment in the development of the Mini System, coupled with recent contract wins, represents a continuation of the company’s commitment to making seismic data accessible for any industry,’ said STRYDE in a statement. ‘This initiative also underscores our continued dedication to fostering collaborations between industry and academia, driving innovation, and nurturing talent for a sustainable future.’
Rice University in Houston, Texas; the University of Exeter in the UK; and Uppsala University in Sweden are some of the academic institutions leveraging STRYDE’s technology to advance a variety of subsurface research initiatives in 2024.
These projects encompass geothermal, well monitoring, geohazard identification for civil engineering, agritech, and mine development.
Mike Popham, STRYDE CEO, said: ‘Our agile and lightweight seismic system offers a rapid, cost-effective solution for seismic data acquisition.
‘At STRYDE, we understand that high costs can hinder fundamental research and development, and I am proud that by further miniaturising our system, we’ve removed this barrier to innovation and can fulfil our mission of delivering high-density seismic capabilities across various
industries. This advancement supports the next generation of geoscientists by equipping them with essential tools for conducting crucial research.’
Rice University is using STRYDE’s seismic system to perform subsurface monitoring of geothermal well stimulation at the Utah FORGE site with high-density seismic.
A STRYDE user at Rice University said: ‘This survey was the highest in density and channel count ever conducted by our team. Achieving our desired trace density within our time and labour constraints would not have been possible without STRYDE’s agile nodes. The lightweight nature of these nodes marked a significant advancement for us, allowing for high-density deployments on foot, even in rough terrain.’
STRYDE has deployed more than 760,000 unique nodes, supporting more than 260 projects in over 50 countries, across multiple sectors, including oil and gas, mining, civil engineering, and the emerging renewables market.
Norway allocates $3.5 billion for floating offshore wind
Norway is poised to announce the next project areas for development of offshore wind on the Norwegian continental shelf in 2025 and is proposing $3.3 billion of state support towards the first floating offshore wind tender within the Vestavind F and Vestavind B areas.
‘Norway has an enormous potential for floating offshore wind on its continental shelf, but because the technology remains immature and costly, state support is required to accelerate its development. That is why we are proposing an ambitious support scheme,’ said Minister of Energy, Terje Aasland.
As part of its plan to allocate project areas for 30 GW of offshore wind by 2040, the government has recently held a public consultation on the proposed support scheme models for the areas Vestavind B and Vestavind F, supported by state funding capped at $3.2 billion.
Norway aims to conduct the next tendering round for offshore wind in 2025. Thereafter, the government intends to hold regularly scheduled tendering rounds and state aid competitions up to 2040.
The Norwegian Water Resources and Energy Directorate (NVE) is conducting a strategic impact assessment of 20 areas suitable for the development of offshore wind. The strategic impact assessment of the areas Vestavind F, Vestavind B, and Sørvest F will be completed by November. The strategic impact assessment for the remaining 17 areas will be completed in June 2025.
Viridien wins contract to support mineral exploration in Oman
Viridien has been awarded a comprehensive remote sensing programme from Minerals Development Oman (MDO) to identify, map and rank mineralisation prospectivity potential across seven concessions, covering a total area of 16,000 km².
Viridien’s multi-disciplinary team of experts will use machine learning, high-performance computing, advanced processing algorithms and archives of multi- and hyperspectral satellite imagery to create their innovative Bare Earth Plus models. These will be integrated
with structural, airborne geophysical, and gamma-ray spectrometry data, to support MDO in identifying the mineral potential in the Samail and Masirah Ophiolites.
Peter Whiting, EVP, Geoscience, Viridien, said, ‘Following our 50-year track record supporting mineral exploration projects, Viridien is pleased to assist MDO with our advanced remote sensing, AI and integrated Minerals & Mining solutions. Our aim is to support more efficient and accurate decision-making for mine operators throughout the mining lifecycle.’
BRIEFS
bp has abandoned a target to cut oil and gas output 25% by 2030 as CEO Murray Auchincloss scales back the firm’s energy transition strategy to regain investor confidence, according to a Reuters report.
ExxonMobil is proposing a $10 billion investment in offshore oil operations in a new investment push in Nigeria. The investment was announced during talks between Nigeria’s vice-president Kashim Shettima and the CEO of Exxon’s Nigeria operations Shane Harris at the UN General Assembly in New York.
The UK North Sea Transition Authority has published guidance and recommended principles which will encourage buyers, sellers and interested third parties to work together to ensure that transactions go through quickly. There are currently more than 100 transactions per year on the North Sea ranging from multi-million-pound transfers of field ownerships to smaller changes of Joint Venture (JV) partners. The guidance addresses the role each can play in helping these deals proceed efficiently.
Chevron has agreed to sell its 20% non-operated interest in the Athabasca Oil Sands Project in Alberta, Canada, to Canadian Natural Resources Limited. The $6.5 billion all-cash transaction is expected to close during the fourth quarter of 2024.
Equinor has reported that it is operating 19 projects in Norway with a total investment value of $18.5 billion. Costs on projects have increased by $610 million over the past year, around 3%. Two of the projects, Johan Castberg and Oseberg gas compression and partial electrification, have experienced a post-PDO increase of more than 20%.
TGS has been assigned a Ba3 rating with a stable outlook from financial services company Moody’s. The $450 million-backed senior secured notes (originally issued by PGS – now a fully owned subsidiary of TGS) are upgraded two notches from B2 to Ba3 with a stable outlook.
Denmark starts tendering for $4.2 billion CCS fund
The Danish Energy Agency is publishing the final tendering materials for the $4.2 billion CCS Fund, which will cover the costs of capture, transportation and geological storage of fossil, biogenic or atmospheric CO2 over a 15-year contract period. The subsidies are tied to a requirement for the commissioning of capture facilities by 1 December 2029 and a minimum requirement for full capture and storage from 2030.
The tendering materials are available on the CCS pages on the Danish Energy Agency website and are being published on the EU’s electronic tendering platform: Tender Electronic Daily (TED).
The European Commission has recommended capturing and storing approximately 50 million tonnes of CO2 by 2030 to achieve a 90% reduction by 2040 towards climate neutrality by 2050. Similarly, the Intergovernmental Panel on Climate Change (IPCC) estimates that 730 billion tonnes of CO2 will have to be stored globally by 2100 to meet the Paris Agreement.
The CCS Fund will secure carbon reductions or negative emissions that contribute to meeting Denmark’s climate goals. Overall, it has been estimated that the fund will reduce Denmark’s annual carbon emissions by 2.3 million tonnes from 2030. This corresponds to around 5% of Denmark’s total current emissions over a year.
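The stated share can be sanity-checked in one line: if 2.3 million tonnes is "around 5%" of annual emissions, the implied national total follows directly (the implied total is an inference, not a figure from the article).

```python
# Check: 2.3 Mt/year described as 'around 5%' of Denmark's annual
# emissions implies a national total of roughly 46 Mt CO2 per year.
fund_reduction_mt = 2.3   # Mt CO2 per year from 2030 (from the article)
stated_share = 0.05       # 'around 5%' (from the article)

implied_national_total = fund_reduction_mt / stated_share
print(f"Implied Danish annual emissions: ~{implied_national_total:.0f} Mt CO2")
```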
The fund has been designed to ensure maximum competition for funding to achieve the highest possible carbon reductions at the lowest possible cost. The subsidy will be paid per tonne of CO2 stored. The tendering procedure will be carried out by negotiation, in which market players bid a fixed amount of CO2 per year and a price per tonne they will capture and store. Funding can be awarded to multiple tenderers.
The most important terms of the tendering materials for the CCS Fund were announced in June, in connection with a market consultation. The Danish Energy Agency has also held two market dialogues about the fund, most recently from June to August 2024.
The deadline for applications from potential tenderers to be pre-qualified to participate in the tendering procedure is 25 March 2025. The Danish Energy Agency plans to hold a follow-up information meeting about the tendering materials on 21 November 2024.
The CCS Fund is the third fund administered by the Danish Energy Agency for carbon capture and storage. In total, approximately $5.6 billion has been set aside.
The first fund, the CCUS pool, worth approximately $180 million, was won by Ørsted, which will capture and store 430,000 tonnes of CO2 annually from 2026 and for the following 20 years. Ørsted expects to capture and store the first CO2 from as early as 2025.
The NECCS pool was completed in May 2024, when the Danish Energy Agency contracted three companies to capture and store 160,350 tonnes of biogenic CO2 annually from 2026 to 2032.
According to the Danish Energy Agency’s latest point source analysis, the full capture potential of all Danish point sources amounts to 6.9-13.7 million tonnes CO2 in 2030. Denmark has granted six licences for exploration for CO2 storage and has political agreements with several countries for cross-border transportation of CO2 for geological storage under the seabed.
Implementation of the new CCS Fund requires state-aid approval from the European Commission. Meanwhile, DNV, the independent energy assurance provider, has certified the first CO2 storage site for Project Greensand in Denmark. Building on prior certifications issued in June 2023, DNV has granted the Certificate of Conformity – Site Endorsement and Storage Site, confirming that the project operator has developed plans for the safe and effective geological storage of CO2.
Denmark’s first offshore CO2 storage site, Project Greensand, is a collaboration of 23 partners and serves as a key example of the potential for safely and permanently storing CO2 to mitigate climate change.
‘We now have independent evidence, backed by DNV’s certification, that our site can safely and permanently store large volumes of CO2 that would otherwise have been emitted into the atmosphere in the North Sea subsoil,’ said Mads Gade, Head of INEOS Energy Denmark, the leading partner behind Project Greensand.
TGS studies electrical interconnection of Cape Verde
TGS is conducting a pre-feasibility study for the electric interconnection of the Cape Verde islands off the West Africa coast in collaboration with RTE International and Consultores de Engenharia e Ambiente S.A. (COBA).
The analysis will assess the technical, economic, and environmental feasibility of interconnecting the islands, a vital step towards optimising renewable energy resources such as wind, solar, and potentially green hydrogen.
Key elements of the study include: evaluating the potential for offshore and onshore renewable energy integration; exploring storage solutions such as pump storage systems and battery storage; addressing the environmental impact of interconnection; and assessing long-term economic benefits for the nation, particularly in reducing reliance on fossil fuels.
This study will be a cornerstone of Cape Verde’s Energy Master Plan (2018-2040), which aims for 50% of its electricity to be generated from renewable sources by 2030.
TGS will leverage its 4C Offshore power intelligence.
‘With its unique geographic and climatic conditions, Cape Verde presents opportunities to harness renewable energy effectively, and the island interconnection would further enhance energy efficiency, reduce costs, and increase the potential for offshore renewable energy projects,’ said TGS.
Meanwhile, TGS has announced the availability of fast-track data from the Nefertiti Multi-Client 3D seismic survey conducted in partnership with the Egyptian Natural Gas Holding Company (EGAS).
The survey aims to assess the potential of the shelf and transform margins in the far west of Egypt’s offshore Mediterranean. It is situated near the Sidi Barrani-1 well and covers shallow-to-medium water depths within the shelf and transform margin domains. The proximity to the well will add control to the full-integrity imaging. Targets are likely to be Lower Cretaceous and Middle Jurassic structures with clastic reservoirs.
Norway to invest $24 billion in exploration this year
Norway’s net cash flow from petroleum activities is estimated to be $60 billion in 2024 with $24 billion invested in petroleum activities.
The expected revenues from the petroleum industry in 2024 are lower than in 2023, when the state’s net cash flow was $91 billion. The decline is mainly because of lower estimates for gas prices compared to last year.
Oil and gas production is expected to remain relatively stable towards 2030. In the national budget, the government expects total Norwegian petroleum production in 2024 to be 239 million Sm3 of oil equivalents, while production is expected to be 243 million Sm3 in 2025.
‘Production from the Norwegian Continental Shelf contributes large amounts of energy and is significant for the energy supply to Europe. We will continue to develop the petroleum industry and remain a stable and long-term supplier of energy to Europe’, said Norway’s minister of energy Terje Aasland.
Investments in the petroleum industry are estimated at $24 billion in 2024, including both new field developments and investments in producing fields.
SLB and NVIDIA use AI solutions to enhance seismic imaging for energy industry
SLB and NVIDIA are developing generative AI solutions for the energy industry.
The collaboration accelerates deployment of industry-specific generative AI foundation models across SLB’s global platforms, including its Delfi digital platform and Lumi data and AI platform, by leveraging NVIDIA NeMo, part of the NVIDIA AI Enterprise software platform, to develop AI that can be run in the data centre, in any cloud or at the edge.
The companies will build and optimise models for the specific needs of the data-intensive energy industry, including subsurface exploration, production operations and data management. This will help to unlock the full potential of generative AI for energy domain experts, including researchers, scientists and engineers, enabling them to interact with complex technical processes in new ways to drive higher-value and lower-carbon outcomes, said SLB.
‘As we navigate the delicate balance between energy production and decarbonisation, generative AI is emerging as a crucial catalyst for change,’ said Olivier Le Peuch, chief executive officer, SLB.
‘Our collaboration with NVIDIA will accelerate the creation of tailored generative AI solutions, enabling our customers to optimise operations, enhance efficiency and minimise their overall footprint.’
SLB and NVIDIA’s collaboration first began in 2008 with the innovative use of graphics processing units (GPUs) for subsurface imaging and geoscience interpretation. The companies have continued to work closely to optimise every generation of SLB’s high-performance compute and visualisation technologies available on its Delfi platform.
ENERGY TRANSITION BRIEFS
The US Bureau of Ocean Energy Management (BOEM) has approved the Atlantic Shores South project plan to construct and operate two wind energy facilities. The original lease was divided into two separate leases, both approx. 8.7 miles offshore New Jersey. The approved construction and operations plan includes up to 197 total locations for wind turbine generators providing clean electricity to the New Jersey grid.
The world’s first cross-border CO2 transport and storage facility, the Northern Lights CO2 transport and storage project in Øygarden, near Bergen, is ready to receive and store CO2. The Northern Lights facility is a joint venture between Equinor, Shell and TotalEnergies.
ExxonMobil has executed the largest offshore carbon dioxide (CO2) storage lease in the US with the Texas General Land Office (GLO). The 271,000-acre site complements the onshore CO2 storage portfolio ExxonMobil is developing, and further solidifies the US Gulf Coast as a carbon capture and storage (CCS) leader.
Equinor has acquired 9.8% of shares of the offshore wind company Ørsted. The transaction establishes Equinor as the second largest shareholder in Ørsted, after the Danish State, which holds a controlling stake in the company. Ørsted has a net renewable generation capacity of around 10.4 GW, and a gross portfolio of offshore wind projects in execution of around 7 GW. The company’s ambition is to achieve a gross installed renewable capacity of around 35 to 38 GW by 2030.
Statera Energy has submitted plans for Europe’s largest green hydrogen project in Aberdeenshire, UK. Kintore Hydrogen will help balance a renewables-led power system by using surplus renewable energy to produce green hydrogen. The first 500 MW of operational capacity is expected to be online by 2028, and when operating at its full, 3 GW capacity Kintore Hydrogen could save up to 1.4 million tonnes of CO2 per year.
Carbon emissions from energy sector to peak this year, says DNV
The year 2024 will go down as the year of peak energy emissions, according to DNV’s Energy Transition Outlook. Energy-related emissions are at the cusp of a prolonged period of decline for the first time since the industrial revolution. Emissions are set to almost halve by 2050, but this falls a long way short of the requirements of the Paris Agreement. The Outlook forecasts that the planet will warm by 2.2 °C by the end of the century.
The peaking of emissions is largely due to the plunging costs of solar and batteries, which are accelerating the exit of coal from the energy mix and stunting the growth of oil. Annual solar installations increased 80% last year as solar beat coal on cost in many regions. Cheaper batteries, which dropped 14% in cost last year, are also making the 24-hour delivery of solar power and electric vehicles more affordable. The uptake of oil was limited as electric vehicle sales grew by 50%. In China, where both of these trends were especially pronounced, peak gasoline is now in the past.
China dominates much of the global action on decarbonisation at present, particularly in the production and export of clean technology. It accounted for 58% of global solar installations and 63% of new electric vehicle purchases last year. And while it remains the world’s largest consumer of coal and emitter of CO2, its dependence on fossil fuels is set to fall rapidly as it continues to install solar and wind. China is the dominant exporter of green technologies, although international tariffs are making its goods more expensive in some territories.
‘Solar PV and batteries are driving the energy transition, growing even faster than we previously forecast,’ said Remi Eriksen, group president and CEO of DNV. ‘Emissions peaking is a milestone for humanity. But we must now focus on how quickly emissions decline and use the available tools to accelerate the energy transition. Worryingly, our forecast decline is very far from the trajectory required to meet the Paris Agreement targets. In particular, the hard-to-electrify sectors need a renewed policy push.’
The success of solar and batteries is not replicated in the hard-to-abate sectors, where essential technologies are scaling slowly. DNV has revised its long-term forecast for hydrogen and its derivatives down by 20% (from 5% to 4% of final energy demand in 2050) since last year. And although DNV has revised up its carbon capture and storage forecast, only 2% of global emissions will be captured by CCS in 2040 and 6% in 2050. A global carbon price would accelerate the uptake of these technologies.
Wind remains an important driver of the energy transition, contributing 28% of electricity generation by 2050. In the same timeframe, offshore wind will experience a 12% annual growth rate, although the current headwinds impacting the industry are weighing on growth.
Despite these challenges, the peaking of emissions is a sign that the energy transition is progressing, said DNV. The energy mix is moving from a roughly 80/20 split in favour of fossil fuels today to one split equally between fossil and non-fossil fuels by 2050. In the same timeframe, electricity use will double, while overall energy demand increases by only 10%.
‘There is a compelling green dividend on offer which should give policymakers the courage to not only double down on renewable technologies, but to tackle the expensive and difficult hard-to-electrify sectors with firm resolve,’ added Eriksen.
Solar energy is becoming cheaper than coal.
A practical method to determine time-distance rules for efficient Vibroseis acquisition
Tim Dean1*, Richard Barnwell2 and Damien Barry 2
Since its development by Conoco in the mid-1950s Vibroseis has become the predominant source for onshore seismic surveys. Coupled with its increase in popularity has been a need to increase its productivity. Over the years a number of acquisition techniques have been introduced, culminating in the ISS/blended/unconstrained acquisition technique where no restrictions are placed on when sweeps can be initiated. This, unfortunately, can result in unacceptable levels of noise contamination. A better solution, therefore, which balances productivity and noise contamination, is to apply a set of slip-time/distance separation rules. In this paper we show how to acquire and analyse test data that allow such rules to be defined in the most efficient manner.
Since its development by Conoco in the mid-1950s Vibroseis has become the predominant source for onshore seismic surveys. The concept of Vibroseis is outstanding in its elegance: rather than transmitting all the frequencies contained within the source wavelet simultaneously over a very short period using, for example, explosives (high energy/high power), the frequencies are transmitted distributed over a much longer period (high energy/low power). The signals recorded in the data can then be compressed into a wavelet using the correlation process.
Despite the ability of Vibroseis to acquire data rapidly, the limitation of conventional (‘flip-flop’) acquisition was rapidly reached. Flip-flop acquisition requires the sweep and the listen-time (the correlated record length) to expire before another sweep can be initiated and given that sweeps are typically at least 12 s long, this can limit acquisition significantly. The first attempt to overcome this limit was the slip-sweep method (Rozemond 1996), which involves overlapping sweeps in time to a theoretical minimum separation equal to the listen-time (although in practice the time between sweeps, the slip-time, is larger due to harmonic noise contamination). Since then, there have been a number of methods developed to either avoid or minimise interference between sweeps that overlap in time and/or space:
• Distance separated simultaneous sweeping or DS3 (Bouska 2010, Stone and Bouska 2013), which relies on avoiding interference by keeping the vibrators a large distance apart. This method can also be combined with slip-sweep.
• Pseudorandom sweeps that are specially designed to minimise crosstalk (Dean 2013). Use of these sweeps has been limited as they tend to be difficult for the vibrator to transmit and have inherently low energy (Dean et al. 2017), which requires them to be longer, thus negating their potential to improve productivity. They may also require sweeps to be transmitted simultaneously, forcing sources to wait for other sources to become ready, although methods have been suggested that avoid this by selecting sweeps in real-time (Dean et al. 2016).
1 Anglo American Steel Making Coal | 2 Terrex Seismic
• Independent simultaneous source, ISS, or ‘blended acquisition’ (Abma et al. 2015, Berkhout 2008, Berkhout et al. 2008, Howe et al. 2009, Howe et al. 2008). No restrictions are placed on the vibrators sweeping with any interference being removed in processing.
• The managed spread and source (MSS) technique (Quigley et al. 2013) – this is similar to ISS/blended but instead relies on restrictions in the slip-times that vary based on the separation between sources to avoid high levels of noise contamination. Despite these restrictions, the reduction in productivity can be as low as 10% when compared to ISS/blended (Dean et al. 2021). More recent implementations (Dean 2012, Dean 2015, Tellier et al. 2022) allow the vibrators to apply the acquisition rules independently rather than being controlled by the central acquisition system, removing the delays introduced by communications between the two.
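As a rough illustration of the productivity trade-off between these methods, the theoretical per-shot cycle times can be compared. This is a minimal sketch; the parameter values are assumptions chosen for the example, not figures taken from the text.

```python
# Illustrative comparison of theoretical per-shot cycle times for the
# acquisition modes discussed above. The parameter values are assumptions
# chosen for the example, not figures taken from the text.

def flip_flop_cycle(sweep_s: float, listen_s: float) -> float:
    """Flip-flop: the sweep and the listen time must expire before the
    next sweep can be initiated."""
    return sweep_s + listen_s


def overlapped_cycle(slip_s: float) -> float:
    """Slip-sweep / rules-based: successive sweeps may overlap down to
    the slip-time separation."""
    return slip_s


sweep, listen = 12.0, 6.0  # assumed sweep and listen times (s)
for name, cycle in [
    ("flip-flop", flip_flop_cycle(sweep, listen)),
    ("slip-sweep (slip = listen time)", overlapped_cycle(listen)),
    ("rules-based (2 s slip permitted)", overlapped_cycle(2.0)),
]:
    print(f"{name}: {cycle:.0f} s per shot, {3600.0 / cycle:.0f} shots/hour")
```

Even this simplified model shows why unconstrained or rules-based overlapping can multiply the shot rate several-fold over flip-flop acquisition.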
A diagram which demonstrates such a set of rules is shown in Figure 1. This ‘rules plot’ summarises the relationship between the slip-time and source separations; combinations of sources that lie within the green area can be acquired whereas those within the red zone must either wait for a longer slip-time to expire or move further apart. It should be noted that during the application of the rules the positions of all vibrators that have swept within the time window equal to the longest slip-time
Figure 2 Example diagram showing how the positions of all preceding shots that have occurred within the maximum slip-time limit (3 s in this case) need to be taken into account. Source B is outside the limiting area of source A; Source C is outside the limiting area of its immediate predecessor, source B, but within that of source A and thus must be delayed.
must be considered. For example, in the simple example shown in Figure 2 source B is outside the red zone associated with the preceding source A and is thus safe to acquire. Source C is also outside the red zone associated with its preceding source (B) but is within the red zone associated with source A and thus must be delayed.
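The clearance logic just described, checking a candidate source against every vibrator that has swept within the maximum slip-time window rather than only its immediate predecessor, can be sketched as follows. The rules table here is hypothetical, standing in for the green/red boundary of Figure 1.

```python
# Minimal sketch of the rules check described above. The rules table is
# hypothetical: each entry gives the minimum slip-time (s) permitted at or
# beyond a given source separation (m), cf. the green/red zones of Figure 1.
RULES = [(0.0, 8.0), (300.0, 4.0), (600.0, 2.0), (1000.0, 0.0)]
MAX_SLIP = max(slip for _, slip in RULES)  # look-back window for past shots


def min_slip_time(separation_m: float) -> float:
    """Minimum permitted slip-time for a given separation (hypothetical rules)."""
    return min(slip for sep, slip in RULES if separation_m >= sep)


def earliest_start(candidate_pos: float, history: list) -> float:
    """Earliest time a candidate source may sweep, checking every
    (start_time, position) in the look-back window, not just the most
    recent shot."""
    t = 0.0
    for t_prev, x_prev in history:
        t = max(t, t_prev + min_slip_time(abs(candidate_pos - x_prev)))
    return t


# As in Figure 2: C is clear of its immediate predecessor B but is
# constrained by the earlier, closer source A.
history = [(0.0, 0.0), (2.0, 1000.0)]  # sources A and B
print(earliest_start(200.0, history))  # 8.0: delayed by A, not B
```

The key design point is the loop over the whole look-back window: dropping all but the latest shot from `history` would wrongly clear source C.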
Obviously, before acquisition can begin the acquisition rules need to be established. Previously, efforts have concentrated on simply plotting the theoretical positions of the harmonics in time (Bagaini 2010, Bagaini et al. 2012, Dean 2024, Dean et al. 2010, Meunier and Bianchi 2002, Ras et al. 1999, Ziolkowski 2010), which does not allow for the effects of separation; or on simulating overlapping sweep techniques by shifting individual records in time and space and then summing them (Dean, Kristiansen and Vermeer 2010). Such techniques have an obvious drawback in that the ambient noise level in the overlapping section is unrealistically boosted; a less obvious drawback relates to the limited extent of the noise contamination in the frequency-time domain. For example, Figure 3 shows a trace from a single record in the frequency-time domain before and after correlation. In the latter, note the areas in the frequency-time domain that are vacant and that the boundaries are not parallel with the frequency axis. If we sum records such that the frequency-time regions that are populated do not overlap, then we will underestimate the resulting noise contamination. For example, Figure 4 shows the boundary of the populated frequency-time region seen in Figure 3b for two sweeps, one
starting at 0 s (green) and one at 14 s (red). If we look at the region which represents the record of interest (the white box) then we can see that there is no noise resulting from the second sweep below ~95 Hz. Building records in this manner also requires data to be recorded with large offset ranges, the maximum offset being equal to the maximum source separation of interest.
In this paper, we detail a method to both acquire and analyse the data required to determine time-distance acquisition rules for Vibroseis acquisition. We begin by describing the data required for the analysis, before showing how such datasets can be efficiently acquired. We then demonstrate a simple approach to analyse the acquired data before discussing possible variations to the process.
Acquisition
To determine the acquisition rules, we need to acquire a series of records into a spread with the vibrators having different delay times and separations, as indicated by the points shown on Figure 1. Once we have acquired this data we can then determine the minimum acceptable slip-time at each separation distance (shown by the green points on Figure 1).
To do so we could simply acquire sources with different slip-times and separations into a full receiver line, but if the maximum
Figure 1 Example diagram showing the slip-time, separation rules for an MSS survey.
Figure 3 Frequency-time plots of a single (a) uncorrelated and (b) correlated near-offset trace. In this case the sweep was non-linear 1-220 Hz over 24 s.
Figure 4 Diagrammatic representation of two sweeps whose correlated frequency-time boundary matches that shown in Figure 3b separated by 14 s. The white box indicates the time region of interest.
separation is likely to be quite large then this could be a very long line. If using a cabled system (assuming that the test line is part of the survey spread), then this approach could be worthwhile, but for nodal systems (Dean et al. 2018) the nodes would then need to be collected and downloaded, which could be prohibitively time consuming. Instead, we prefer to lay out a shorter receiver spread and acquire additional shots into it. We will now describe such a geometry.
If we consider the positions of two sources ‘A’ and ‘B’ in time and space, then there are four quadrants that source ‘B’ can occur in relation to source ‘A’, numbered 1 to 4 in Figure 5. If we assume that the geology does not vary significantly laterally then the noise at positive offsets will not differ significantly from that at negative offsets (and the noise at different azimuths will also be consistent). Thus, the noise from Source B will be the same irrespective of whether it is on the left- or right-hand side of Source A, as long as their separation is the same. The first simplification we can make, therefore, is to consider only quadrants 2 and 3. As well as the interference in A from B we also need to consider the interference noise in B from A, whose relative position to source B, when it has been relocated to the centre of the plane, is also shown in Figure 5. The single record shown in Figure 5 therefore contains two different tests: the noise in A from B for a positive slip-time value (i.e., A occurs after B) and the noise in B from A with a negative slip-time value (i.e., A occurs before B).
Using these assumptions, we can reduce the required receiver line length considerably by adopting a walk-away approach where we deploy a shorter line and acquire the longer offsets by moving the source increasingly further away. Such a design is shown in Figure 6; for the first record (Figure 6a), source A begins at the centre of the geophone line (shown in green) and source B at the end of the line; the first arrivals are shown in black. Both sources then move incrementally away from the geophone spread, keeping their spacing constant (Figure 6b to d), allowing larger offsets
Figure 6 (a) to (d) Four records acquired using two sources (‘A’ and ‘B’) into a static receiver spread shown in green. The black lines represent the recorded first arrivals. (e) the four records after the data has been merged and sorted by offset.
Table 1 Summary of the combinations of the four different positions of sources A and B shown in Figure 6, coloured by their separation.

         A2       A3       A4
B2    A2, B2   A3, B2   A4, B2
B3    A2, B3   A3, B3   A4, B3
to be recorded. The maximum move-up is equal to the line length, but in this case the move-up is half the spread length to acquire additional data. Once all the records have been acquired they can be merged and sorted by offset (Figure 6e) to create a complete record. Note that depending on how we crop the record we can get both negative (t0 aligned with B) and positive slip-times (t0 aligned with A) for the same source separation. It should also be noted that although it may be tempting to start source A at the end of the receiver line to avoid acquiring the near offsets twice, if source B does not start at the end of the receiver line then we do not get the near offsets for source B.
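The merge-and-sort step of Figure 6e can be sketched as follows. This is an illustrative sketch only: the line length, station spacing and source positions are assumed example values, and the trace data are random placeholders.

```python
import numpy as np

# Sketch of merging the walk-away records into a single offset-sorted
# record, as in Figure 6e. All geometry values here are illustrative.


def merge_by_offset(records):
    """records: iterable of (source_x, receiver_xs, traces) tuples."""
    offsets, traces = [], []
    for src_x, rcv_xs, data in records:
        offsets.extend(np.abs(rcv_xs - src_x))
        traces.extend(data)  # one row per trace
    order = np.argsort(offsets)
    return np.asarray(offsets)[order], np.asarray(traces)[order]


rcv = np.arange(0.0, 600.0, 5.0)  # 600 m line at 5 m station spacing
rng = np.random.default_rng(0)
records = [
    (x, rcv, rng.normal(size=(rcv.size, 100)))  # dummy trace data
    for x in (300.0, 600.0, 900.0, 1200.0)  # successive source move-ups
]
offsets, gather = merge_by_offset(records)
print(offsets.min(), offsets.max())  # merged offsets span 0 to 1200 m
```

Sorting the pooled traces by absolute offset is what turns the four short-spread records into the single long-offset record of Figure 6e.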
Table 1 shows all possible combinations of the four different source positions shown in Figure 6, coloured by their separation. The scenario shown in Figure 6 represents the acquisition of all the records with a single-spacing separation (light-blue colour). Alternatively, we could acquire the data
Figure 7 Diagram showing the spread dimensions utilised in equations 1 and 2.
Figure 5 Diagram showing the four quadrants of the location and timing space when Source A is placed at the centre.
in different orders. For example, we could keep the position of source B constant while moving A (i.e., acquiring a row at a time), or we could keep source A constant while moving B (i.e., acquiring a column at a time), depending on which is the most efficient. Whatever the movement of the sources, it is generally more efficient to acquire all the different slip-times at the same time.
Test design
In this section we will show how the test design process works using a simple example. We begin by determining the range of values we need to investigate. In this example the receiver lines span the full width of the 4000 m survey and we acquire source points in 200 m long salvos. The maximum source-receiver offset (Xmax) will therefore be (1)
(where Xmax,i and Xmax,c are the maximum inline and crossline offsets respectively and Ssalvo is the salvo length) and the maximum possible separation between the vibrators Dmax is (2)
The calculated values for this example are given in Table 2. In terms of the slip-times that we wish to test, the maximum time is that which corresponds to flip-flop acquisition, i.e., the sum of the sweep and record lengths, in this case 18 s, but we are unlikely to impose such a high limit. Most likely we will not acquire with anything more conservative than ‘noise-free’ slip-sweep (Dean, Kristiansen and Vermeer 2010) whose maximum slip-time for linear sweeps is given by (Ras, Daly and Baeten 1999) (3)
where fmax is the maximum frequency of a sweep with bandwidth W and length T, and k is the maximum order of the cross-harmonic noise we wish to avoid, which is typically the 3rd (Dean, Kristiansen and Vermeer 2010).
We now need to establish the ranges of values that we wish to test. In this case, we will test slip-times between -18 and 18 s at 2 s increments, separations between 0 and 10,400 m at 200 m increments, and offsets from 0 to 1600 m at 5 m intervals (the station spacing)
using a 600 m-long receiver line. The source points were generated by placing the first pair of positions with Source A at a distance equal to half the spread length minus the separation value, and Source B at the Source A position plus the separation value. Both source points were then moved to the right at intervals equal to the spread length until the maximum offset condition had been reached. The resulting 52 pairs of source positions required are shown in Figure 8. Note that to ensure we have the near offsets for Source B we need to move Source A beyond the start of the receiver line. If acquiring these points, and thus the near offsets for Source B, is not possible then we would need to acquire the negative slip-times as a separate set of tests. This reduces the number of source-point pairs to 40 but increases the total number of records (number of sources × number of slip-times) from 520 to 760.
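The pair-generation recipe just described can be sketched for a single separation value. This is a simplified sketch: the receiver line is assumed to span 0 to 600 m, and the stopping test (on Source B's near offset) is an assumption; it also omits the extra positions, mentioned above, obtained by moving Source A beyond the start of the line.

```python
# Sketch of the source-pair generation recipe described above, for a
# single separation value. The receiver line is assumed to span 0 to LINE
# metres, and the stopping test on Source B's near offset is an assumption.
LINE = 600.0         # receiver line length (m)
MAX_OFFSET = 1600.0  # maximum source-receiver offset required (m)


def pairs_for_separation(sep: float):
    a = LINE / 2.0 - sep  # first Source A position
    b = a + sep           # first Source B position
    pairs = []
    while b - LINE <= MAX_OFFSET:  # Source B's near offset still needed
        pairs.append((a, b))
        a += LINE  # both sources step right by the spread length
        b += LINE
    return pairs


print(pairs_for_separation(200.0))
```

Running this for each separation in the test plan, and multiplying the pair count by the number of slip-times, gives the total record count discussed in the text.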
Figure 9 shows the trade-off between the total number of records required vs. the receiver line length for when positive and negative slip-times are acquired separately (by limiting the movement of Source A beyond the start of the receiver line), and when they are acquired simultaneously. These results clearly show that increasing the line length has little impact on the number of records required beyond ~400 m.
Figure 8 (a) source and receiver positions for the 52 records required to obtain the full set of offsets and separation values for the survey in Table 2. (b) Trace offset values for Source A. (c) Trace offset values for Source B.
Table 2 Calculated offset and separation values for the example survey described in the text.
Figure 9 The total number of records required vs. the receiver line length.
Data analysis
Once we have acquired our test dataset we need to analyse it and determine the acquisition rules. For the simultaneous data we can follow the guidelines of Bouska (2010) and ensure that the first arrivals of the neighbouring shot do not interfere
with the signal zone. For the data shown in Figure 10 and a desired noise-free offset range of ±150 m (shown by the green boxes), this occurs at a separation of 300 m.
For the other separation values we prefer looking at the data displayed with constant slip-times, although it can be displayed with constant offsets as well. We like to display the records with the different separations side-by-side and then pick the separation at which we are happy with the noise level. Figure 11 shows records acquired with a slip-time of 2 s at separations of between 0 and 700 m. The data has been normalised and is displayed as the amplitude envelope (the absolute value of the sample values). The major source of noise is the ‘chimney noise’ (Bagaini et al. 2014) associated with the vibrator itself. Where it overlaps the preceding sweep it obscures the first-break energy (indicated by the white arrows). Strong harmonic noise, mostly associated with the ground-roll, is also evident (compare the noise between 1.5 and 2 s for the coincident and 700 m separated shots). Figure 12 shows the data with a slip-time of -2 s (i.e., the interfering shot is before the current shot); there is a small amount of chimney noise at the near offsets of the interfering source but nothing significant. In this case, we considered that a 2 s slip-time was acceptable for separations of 600 m and larger. Looking at the 4 s slip-time data (Figure 13), the chimney and harmonic noise levels are considerably lower and this slip-time was considered appropriate for all the separations.
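The display preparation described above can be sketched in a few lines. This follows the text's simple definition of the envelope as the absolute value of the sample values; the demo array is an invented placeholder.

```python
import numpy as np

# Sketch of the display preparation described above: each trace is
# normalised and shown as its amplitude envelope, here taken (as in the
# text) to be simply the absolute value of the sample values.


def envelope_display(traces: np.ndarray) -> np.ndarray:
    """traces: (n_traces, n_samples) array -> trace-normalised envelope."""
    env = np.abs(traces)
    peak = env.max(axis=1, keepdims=True)
    peak[peak == 0] = 1.0  # guard against dead traces
    return env / peak


demo = np.array([[0.5, -1.0, 0.25], [0.0, 0.0, 0.0]])
print(envelope_display(demo))
```

Per-trace normalisation is what makes the side-by-side comparison of different separations meaningful, since absolute amplitudes vary with offset.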
Discussion
In the example shown here we have detailed how the analysis of the records can be conducted visually. We have tried various analytical techniques but have found they tend to suffer from difficulty in distinguishing between signal and noise. We have also ignored the effects of noise processing on the resulting data quality. This is partly because noise removal tends to be applied in the cross-spread domain, and we therefore lack sufficient data to be able to apply it, but also because accurate first-break picking is particularly important for calculating static solutions and thus it is important that they be interference-free. We have also found that the rules established using the method
Figure 11 Consecutive shots with 2 s slip-times and separations of 0 to 700 m.
Figure 10 Shot records where a second sweep is being transmitted at a distance given in the plot title. The green lines indicate offsets of ±150 m.
described here do not tend to negatively affect productivity as the slip-times tend to be quite small (Figure 1).
Well-designed sweeps and well-maintained vibrators are also important as the main noise contamination from subsequent sweeps is harmonic and chimney noise, both of which can be minimised. The main noise contamination from preceding sweeps is ground-roll, which processing may be effective at removing. The acquisition system also needs to be capable of supporting rules-based acquisition, particularly the ability to start sweeps without system-related delays and to communicate effectively with large numbers of source groups.
Conclusion
As Vibroseis acquisition has increased in popularity the need to increase its productivity has become ever more important. The highest-productivity technique is ISS/blended/unconstrained acquisition, where no restrictions are placed on the sources, but this can result in unacceptable noise levels. Slip-sweep and distance-separated techniques avoid these noise problems but at the cost of decreased productivity. The best solution, therefore, which balances productivity and noise contamination, is to apply a set of slip-time/distance separation rules, a technique that is becoming increasingly popular. In this paper we have shown how to acquire and analyse test data that allow such rules to be defined in the most efficient manner.
References
Abma, R., Howe, D., Foster, M., Ahmed, I., Tanis, M., Zhang, Q., Arogunmati, A. and Alexander, G. [2015]. Independent simultaneous source acquisition and processing. Geophysics, 80(6), WD37-WD44.
Bagaini, C. [2010]. Acquisition and processing of simultaneous vibroseis data. Geophysical Prospecting, 58, 81-99.
Bagaini, C., Daly, M. and Moore, I. [2012]. The acquisition and processing of dithered slip-sweep vibroseis data. Geophysical Prospecting, 60(4), 618-639.
Bagaini, C., Laycock, M., Readman, C., Coste, E. and Anderson, C. [2014]. Seismo-acoustic characterization of a seismic vibrator. SEG Technical Program, Expanded Abstracts.
Berkhout, A.J.G. [2008]. Changing the mindset in seismic data acquisition. The Leading Edge, 27, 924-938.
Berkhout, A.J.G., Blacquière, G. and Verschuur, E. [2008]. From simultaneous shooting to blended acquisition. SEG Technical Program, Expanded Abstracts, 2831-2838.
Bouska, J. [2010]. Distance separated simultaneous sweeping, for fast, clean, vibroseis acquisition. Geophysical Prospecting, 58(1), 123-153.
Dean, T. [2012]. Efficient seismic source operation in connection with a seismic survey. US Patent 20120075955A1.
Dean, T. [2013]. The use of pseudorandom sweeps for vibroseis surveys. Geophysical Prospecting, 62(1), 50-74.
Dean, T. [2015]. Source start time determination. WO Patent 2015/0153466.
Figure 12 Consecutive shots with -2 s slip-times (i.e. the interfering shot appears before the shot shown) and separations of 0 to 700 m.
Figure 13 Consecutive shots with 4 s slip-times and separations of 0 to 600 m.
Dean, T. [2024]. An Integrated Approach to Vibroseis Sweep Design. First Break, 42(1), 59-64.
Dean, T., Grant, M. and Pavlova, M. [2021]. The Efficient Acquisition of High-Resolution 3D Seismic Surveys for Shallow Open-Cut Mining. First Break, 39(8), 43-49.
Dean, T., Iranpour, K., Vermeer, P. and Bagaini, C. [2016]. Sweep sequence determination for overlapping sweeps. US Patent 9,341,724.
Dean, T., Kristiansen, P. and Vermeer, P.L. [2010]. High productivity without compromise – The relationship between productivity, quality and vibroseis group size. 72nd EAGE Conference & Exhibition.
Dean, T., Tulett, J. and Barnwell, R. [2018]. Nodal land seismic acquisition: The next generation. First Break, 36, 47-52.
Dean, T., Tulett, J. and Lane, D. [2017]. The use of pseudorandom sweeps for vibroseis acquisition. First Break, 35, 107-112.
Howe, D., Foster, M., Allen, T., Jack, I., Buddery, D., Choi, A., Abma, R., Manning, T. and Pfister, M. [2009]. Independent simultaneous sweeping in Libya-full scale implementation and new developments. SEG Technical Program, Expanded Abstracts, 109-111.
Howe, D., Foster, M., Allen, T., Taylor, B. and Jack, I. [2008]. Independent simultaneous sweeping – a method to increase the productivity of land seismic crews. SEG Technical Program, Expanded Abstracts, 2826-2830.
Meunier, J. and Bianchi, T. [2002]. Harmonic noise reduction opens the way for array size reduction in vibroseis operations. SEG Technical Program, Expanded Abstracts, 70-73.
Quigley, J., Holmes, D. and O’Connell, K. [2013]. Putting It All Together – Broadband, High-density, Point-receiver Seismic in Practice. 75th EAGE Conference & Exhibition.
Ras, P., Daly, M. and Baeten, G. [1999]. Harmonic distortion in slip sweep records. SEG Technical Program.
Stone, J.A. and Bouska, J. [2013]. Distance separated simultaneous sweeping, providing record-breaking productivity and a step-change in data quality in BP Jordan’s Risha seismic survey. First Break, 31(12), 53-60.
Tellier, N., Ollivrin, G., Laroche, S. and Donval, C. [2022]. Mastering the Highest Vibroseis Productivity While Preserving Seismic Data Quality. First Break, 40(1), 81-86.
Ziolkowski, A. [2010]. Review of vibroseis data acquisition and processing for better amplitudes: adjusting the sweep and deconvolving for the time-derivative of the true groundforce. Geophysical Prospecting, 58(1), 41-54.
A proposed standard seismic frequency nomenclature for geophysical site investigation surveys in the offshore energy sector
Andy W. Hill1*, Gary Nicol2, and Mick R. Cook3
Abstract
This paper sets out to provide a suggested standardised set of nomenclature for seismic interpretation data of different frequency (Hz) spectra in use in the offshore energy sector. Historically, a large number of acronyms have been used to describe seismic interpretation data of different frequency content across the oil and gas industry and now in the rapidly growing offshore wind and carbon storage sectors, but the terms in use have not always been consistent or logical.
This paper sets out to understand the history and background behind the evolution of these terms and to remove the confusion from current usage of MR, HR, UHR, UUHR, VHR, or SUHR by definition of a standardised nomenclature for seismic interpretation data ranging from 4Hz up to 5000Hz in frequency content. It is believed application of such a standard throughout the global offshore energy industry will ensure consistency and a common understanding going forward.
Introduction
In the last few years, with the rapid growth in offshore wind farm activity and a similar expansion in marine CCUS (Carbon Capture Usage & Storage) projects, a number of acronyms to describe seismic data (in this paper taken to mean seismic reflection data acquired by a variety of means) of different frequency (Hz) content have been in increasing use, including MR, HR, UHR, UUHR, VHR and SUHR. These terms have been bandied about somewhat nonchalantly and inconsistently. Yet, what do these terms actually mean, in terms of seismic frequency content, and are they consistent and logical? This paper sets out to understand the history and background to these terms and attempts to set a standardised nomenclature for the industry to ensure consistency from this point forward.
There has been a surge in demand for high-resolution seismic reflection data acquisition, in support of offshore wind projects in particular. At the same time, technology for this specific application has developed. This has resulted in the reappearance of the sparker source used in broadband form, reconsideration of the use of large boomer sources, and the use of seismic streamers with mixed group lengths and allied recording systems allowing recovery of frequencies up to 4000 to 5000Hz. The question of what these data should be termed (UHR, UUHR, VHR, or SUHR) has produced confusion about what is being discussed. So, how did we get to this situation and what is the way forward?
In 1964, the drilling rig CP Baker blew out on shallow gas in the Gulf of Mexico at a depth of only 647 feet below mean sea level (BMSL) in 186 feet of water and sank in less than an hour with the loss of 22 lives. In the US Coast Guard (USCG) inquiry that followed, the United States Geological Survey stated that there was no means with which to image the shallow geological section at that time to identify drilling hazards. The USCG investigation, not surprisingly, found that something needed to be done to improve imagery of the shallow section to prevent repeat events.
Fortunately, the uptake in the use of digital seismic recording systems started at about the same time, with the introduction of the first valve-based digital recording system in 1962, followed by a transistorised system, the Texas Instruments 9000 and 10,000 Digital Field Systems (DFS-1 and -2), in 1965, the DFS-3 in 1968 and eventually the DFS-5 in 1975 (competing systems came from Sercel, Geosource, etc.). These systems enabled the parallel application of powerful emerging digital signal processing techniques.
Mostly, these systems acquired field data at a 4ms sample interval, thus a 125Hz Nyquist frequency, with recording filters ahead of the Analog to Digital (A/D) converters which began to cut off steeply from around 62.5Hz (½ Nyquist level). However, even the DFS-2 could operate at a 1ms sample interval, thus a 500Hz Nyquist with high-frequency attenuation commencing at 250Hz. Subsequent improvements in digital conversion
electronics allowed the use of higher frequency filters ahead of the A/D conversion. These were known as ‘3/4 Nyquist filters’, so around 90Hz for 4ms sampling and 180 and 375Hz for 2 and 1ms respectively. By early in the 1970s systems were available on both sides of the Atlantic to successfully acquire data at 1ms sample interval (e.g. Moore (1973) and Lucas (1974)).
Around 1980 not only was exploration seismic data (here taken to comprise the target frequency band of 4 to 90Hz) being recorded digitally but systems were commonly available to record multi-channel High Resolution (HR) data (HR, here taken to comprise the 20 to 375Hz frequency band) to image the shallow section. So, we had exploration seismic and HR seismic. Simple. Systems became available at this time to also record at 0.5 and 0.25ms sample intervals.
Existing nomenclature
Now a conundrum arose: what to call these additional types of data recorded at a 0.5ms or 0.25ms sample interval? To record these faster sample interval data, for ‘Goldilocks’ reasons everything had to be just right: the source (in terms of its frequency output, necessitating very shallow tow), the streamer tow depth (as shallow as 1m), the group lengths (short) and the weather (near perfect), while processing of the low-fold >1000Hz frequency data at the time required extreme patience. As a result, acquisition of 0.25ms sample interval data did not catch on in the 1980s and was forgotten about for several decades. Acquisition of 0.5ms sample interval data was, however, considered a happy compromise. Repeatable sources outputting in the appropriate bandwidth of up to 500Hz, or just above, were available from mini-airguns and mini-waterguns (the mini-sleeve exploder having come and gone due to the difficulty of operating the system). Streamers of appropriately short group lengths (6.25 m) were available, while the recording systems allowed an adequate number of channels to be acquired for good fold coverage. However, there were some issues. A misunderstanding of the need for very short near-trace offsets, and the resultant phasing out of the higher frequencies across groups, was a common ‘where did my high frequencies go?’ concern. It should be noted that similar issues were also seen with exploration seismic in shelf sea areas, with the seafloor and shallow
section often looking like a foggy day in London – especially if, in processing, the mute was incorrectly applied for the shallow section. Yet, generally, good-quality data across the upper bandwidth of 50-600Hz were achieved. But the question was what to call these data as, evidently, they were as different from HR data as HR data were from exploration data.
At this stage let us take a detour back to the cathode-ray-tube, valve-laden TVs of the 1950s-70s, which received either very high frequency (VHF) or ultra high frequency (UHF) signals. The nomenclature of the time is recalled in Figure 1, highlighting that VHF was broadcast at a lower frequency than UHF, with UHF more sensitive to interference and signal-strength attenuation despite providing a theoretically better picture.
Then came a problem, c.1990, when the industry set out to codify the conduct of site surveys. What to call surveys being undertaken at a 0.5ms sample interval? It was agreed by UKOOA (1992) to call them ultra-high resolution surveys (UHR, here taken to be the 50-600Hz frequency band). Thus we now had exploration seismic, HR seismic for drilling hazard surveys and UHR seismic for foundation studies. Beyond that, the industry also acquired single-channel seismic data from 500Hz to 1500Hz using multi-tip sparkers or mini-guns, and a variety of sources, including Boomers, Pingers and Chirp systems, imaging with frequencies from 500Hz all the way up to and in excess of 10kHz. All these higher frequency single-channel data were recorded in analogue mode, although by the mid-1990s that was beginning to change.
For the best part of two decades all was seen to be good. Yet, as flat-screen TVs and cable began to appear and the relevance of VHF and UHF faded, issues in site investigation began to arise.
The nascent offshore wind industry was primarily interested in the shallow stratigraphy below the seabed, which was largely
Figure 2 As it was: seismic imaging in the Central North Sea over the same diapir. A. Conventional exploration seismic data acquired with towed streamer, dominant frequency ~50Hz. B. HR2D data, dominant frequency ~150Hz. C. UHR2D data, dominant frequency ~300Hz. Lines are not quite coincident.
Figure 1 Electromagnetic Frequency Spectrum and Standardized Nomenclature.
confined to the upper 100 m, or that part of the shallow geology critical for foundation design. A significant number of new entrants (developers) came from onshore wind, with limited understanding of marine seismic data acquisition. Their well-founded desire was to image their zone of interest at frequencies in excess of 1kHz in relatively shallow water. The traditional single-channel systems of various types that had been used in offshore oil and gas site investigation surveys were seized upon as the solution, but met with mixed success. These data have been labelled with the generalised descriptor SUHR seismic – S standing for single channel – yet the imaging frequencies desired were well above 600Hz, i.e. much higher than the UHR bandwidth.
With imaging issues (i.e. poor signal-to-noise, lack of penetration and multiple interference), and therefore the desire to apply advanced digital processing to improve results, it was realised that multi-fold coverage, using more repeatable sources and better-designed receiver arrays, was the only way to progress – recording the data at very small sample intervals (i.e. 0.125 and even 0.0625ms) to allow recovery of frequencies up to and beyond 5000Hz.
As in 1964, when digital recording systems appeared just in time for HR acquisition, this issue happened to arise as the technical opportunities of broadband acquisition appeared in the industry. Fortuitously, multi-level sparker and super-boomer sources appeared together with streamers of even shorter group intervals (as little as 1 m group length), while adoption of other approaches such as sloped-streamer tow geometries helped make the vision a reality – when acquisition parameters were applied correctly!
So, how should we label these data? Very high resolution (VHR) was proffered as an appropriate term, as these data were evidently different from, and of better quality than, the UHR data which had gone before. Good. Yet looking back at Figure 1, we recall that VHF refers to a lower-frequency spectrum than UHF. So VHR is, at best, misleading or, more fundamentally, wrong. Some therefore suggested the term UUHR as, evidently, these data are more ultra'ly UHR than, well, UHR data.
Figure 3 As it can be now: imaging in 30 m water depth off Indonesia. A. HR2D data, dominant frequency ~150Hz. B. EHR2D data acquired with a multi-level sparker system, dominant frequency ~1500Hz (after Putri et al., 2017). Lines are coincident.
Numerous publications have tried to define appropriate terms for these different data types, but there is much contradiction or disparity in these within the industry. For example, UHR seismic is defined differently by Arthur et al. (2017) as 1.5-13kHz, by IOGP (2017) as 60-500Hz, by ISO (2021) as 250-800Hz and by SUT (2022) as 250-2500Hz.
Marine CCUS is now adding another dimension for consideration. As far back as 1993, 3D seismic data volumes have been acquired over relatively shallow oil and gas fields and denominated as Medium Resolution, or MR, surveys (defined here as the 10 to 150Hz frequency band). This term went somewhat into abeyance for three decades. Indeed, reservoir geophysicists started acquiring 3D surveys which they proudly declared to be 'HR' surveys over shallow reservoirs – although their high end was ~150Hz. Again, therefore, terms were being used in an inconsistent and confusing manner.
For marine CCUS projects there is a sweet spot for subsea injection at a depth of around 900 m below seabed. In most settings, at these depths, HR data are beginning to fade in the quality of their imagery. As a result, baseline surveys are now being designed, and called, MR surveys (e.g. Cooper, 2023). This allows not only quality initial images to be acquired of the reservoir intended for injection, but also affords the opportunity for image repeatability, to undertake monitoring surveys in a 4D sense to verify that stored CO2 is remaining where it was intended. Again, this makes it relatively simple to differentiate between exploration and HR data.
Figure 4 Data types and application (Widmaier, 2023).
So the exploration, MR, HR and UHR bandwidth definitions, we suggest, are clear. However, we still have the problem of what to call data at the higher end of the frequency spectrum – beyond UHR.
A number of authors have published useful graphical images highlighting the purpose of different data types and their relative resolutions: for example, Widmaier (2023; Figure 4), and one of the authors of this paper has presented (and continually adjusted) another over the past 15 years, of which Figure 5 is the latest version.
Again, while these are useful, they still do not provide a consistent structure to define seismic data acquired at the higher end of the useful frequency spectrum. For example, Widmaier uses the term ‘HD3D’ for ‘high density 3D’; another term that
has come into vogue in recent years but without any standardised definition. This paper, however, focuses on vertical resolution relating to frequency rather than trace density in acquisition.1
All this confusion and inconsistency of terminology prompted a discussion between the three authors across three continents, who realised that nothing currently in use was clear and that some contradiction existed. Thus consideration returned to Figure 1, in an attempt to provide an internationally accepted set of nomenclature for temporal resolution that could be applied across the whole seismic data acquisition industry.
Proposed nomenclature
It is suggested that using VHR to denote seismic data with higher frequency content than UHR data is unacceptable. And swapping the two terms around would leave a large amount of legacy data that includes 'UHR' in line-numbering databases, which would confuse data users. It is therefore suggested here that the term VHR no longer be used.
Applying a work-around such as the UUHR term also seems open to discussion. Therefore, it is suggested here that the term Extremely High Resolution (EHR, here taken to be the 250 to 5000Hz frequency band) should be adopted.
Why not SHR – aligning with SHF? Well, it has already been seen that the term SUHR has been in use when referred to ‘Single’ channel UHR data. Thus, simply adding ‘S’ in front of ‘HR’ will introduce further confusion rather than clarification. Therefore, a proposed standardised nomenclature for the seismic bandwidth of different data types is defined in Table 1.
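As a rough sketch of how the proposed nomenclature might be applied programmatically, the band edges quoted in the running text (the 'here taken to be' definitions) can be captured as a lookup table. Table 1 itself is not reproduced here, so the edges below are assumptions drawn from the text, and the bands deliberately overlap:

```python
# Proposed seismic band nomenclature as a lookup table (edges taken
# from the definitions in the running text; Table 1 not reproduced).

SEISMIC_BANDS_HZ = {
    "Exploration": (4, 90),
    "MR":  (10, 150),    # Medium Resolution
    "HR":  (20, 375),    # High Resolution
    "UHR": (50, 600),    # Ultra High Resolution
    "EHR": (250, 5000),  # Extremely High Resolution (proposed here)
}

def labels_for(freq_hz: float) -> list[str]:
    """Return every band whose range contains the given frequency.

    The bands overlap by design, so more than one label may apply."""
    return [name for name, (lo, hi) in SEISMIC_BANDS_HZ.items() if lo <= freq_hz <= hi]

print(labels_for(300))   # a 300Hz dominant frequency sits in HR, UHR and EHR
print(labels_for(1500))  # 1500Hz falls only in the proposed EHR band
```

The overlap is the point: a dominant frequency alone does not pick a unique label, which is why the survey objective (and Table 1) must drive the choice of term.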
The historic generalisation is that when we acquire new seismic data, we often fall into the trap of trying to differentiate them from existing data through use of a subjective prefix or suffix. Whether these subjective terms have been innocent differentiators or deliberate marketing ones (who wouldn't want every survey to be a physics-defying ultra-high resolution / ultra-dense one, to convince management to pay for it – or a client to buy it?), this convention should be eliminated by practitioners through adoption of the proposed standardisation of terms, defined here, for consistent application into the future.
1 For Ocean Bottom Cable or Node surveys the 'D' stands for trace 'Density' and relates to the shot and receiver density. HDOBC/N appeared as a term around 2010, where carpet shooting on a 50 m source grid was acquired over a standard 300 x 50 m receiver-spacing survey design. UHDOBC/N is a more recent term where the receiver density was also increased to 50 x 50 m with either a matching shot-point spacing or an increased density of 25 x 25 m, each leading to a significant increase in delivered trace density.
Figure 5 Graphic representation of imaging system frequencies and resolutions – both vertical and spatial.
Figure 6 EHR2D data acquired with a multi-level sparker source in the Central North Sea, dominant frequency ~1500Hz. Below a surface sand unit, a complex series of Quaternary deposits overlies an unconformity above a structured Cretaceous sub-crop.
However, this proposal still does not address what to do about naming single-channel seismic data, which may be recorded across the frequency bandwidth from 300Hz all the way up to 20,000+Hz, depending on the source(s) in use. Simply adding 'S' to any of the frequency bands discussed above could be considered. However, a return to the original descriptor of continuous profiling (CP) is proposed for acquisition using these systems, with shots fired on a time interval rather than a distance basis. Different data could be distinguished by simply denoting the source type after CP (e.g. Sparker: CP-SPK; Boomer: CP-BMR; Pinger: CP-PNG; Chirp: CP-CHP; etc.), especially as pairs of these sources are often acquired simultaneously on a single survey line.
Summary
This paper has set out a consistent nomenclature for seismic data of differing frequency bandwidths acquired for differing objectives. It should be noted that in designing surveys through the desired spectrum, one needs to complement the increasing temporal frequency with an appropriate spatial sampling to preserve those frequencies in the final interpretable product.
Although this paper references some of the parameters that are critical to retaining the frequencies required (e.g. typical group length, bin size, etc.), it does not attempt to discuss in detail the different acquisition equipment – source/receiver/recording parameters – or the processing approach (e.g. application of broadband processing techniques) that will deliver a product in the target bandwidth. That would be a worthy topic for another paper. Avoiding subjective prefixes and suffixes would also be applicable in that domain.
As Shakespeare may have suggested, 'quick bright things [in this industry] have come to confusion' through casual use of terminology and an absence of past thought to future technology developments. The Bard would also have to accept that tomorrow's technology, whilst it may well 'creep on' at more than a 'petty pace', will not be able to change the physics of absorption of seismic signals. Therefore, in all but the most amenable of near-seabed, normally consolidated, fine-grained sediments, one cannot currently foresee a future ability to achieve multi-channel imaging beyond the frequency spectra set out here for EHR seismic – without significant operational effort over small operational footprints. Thus, going back to Shakespeare, it is believed that defining industry nomenclature in the way outlined here will provide ongoing industry clarity for tomorrow and tomorrow and tomorrow.
Acknowledgements
The authors would like to thank the following for review and constructive comments to this paper at various stages of its development: Ian Jack, Andrew Long, David Monk, Walter Rietveld and Nick Woodburn. We also thank bp and its partners for permission to use their data in the figures.
References
Arthur, J., Cauquil, E., Hill, A. and Wardell, N. [2017]. Foreword: Special Issue on Applied Marine Geophysics. Near Surface Geophysics, 15, 333-334.
Cooper, C. [2023]. 3DHR Seismic Acquisition for CCS and considerations for Windfarm UHR. Seismic 2023: The Evolving Role of Seismic in the Energy Landscape, SPE, Aberdeen, April 2023.
IOGP [2017]. Guidelines for the conduct of offshore drilling hazard site surveys. International Oil and Gas Producers Association, London, Version 2, Revised October 2017.
ISO [2021]. Petroleum and natural gas industries – Specific requirements for offshore structures – Part 10: Marine geophysical investigations. International Organization for Standardization, ISO 19901-10:2021(E), 2021.
Lucas, A.L. [1974]. A High Resolution Marine seismic survey. Geophysical Prospecting, 22, 667-682.
Moore, N.A. [1973]. Multi-Pak: a New Geophysical Method for Marine Engineers. Offshore Technology Conference, Houston, Texas, 29 April - 2 May 1973. OTC 1798.
Putri, S.A., Birt, C.S., Galanes-Alvarez, H., Apriani, R., Eloni, R., Manning, T. and Higson, M. [2017]. Tangguh Geotechnics and Geohazards Mitigation Part 1: The big picture. EAGE-HAGI 1st Asia Pacific Meeting on Near Surface Geoscience & Engineering, extended abstract.
SUT [2022]. Guidance Notes for the Planning and Execution of Geophysical and Geotechnical Ground Investigations for Offshore Renewable Energy Developments. ISBN 0 906940 59 1, Society of Underwater Technology, Offshore Site Investigation and Geotechnics Committee, M.R. Cook Ed. London. Revised September 2022.
UKOOA [1992]. Guidelines for the Conduct of Mobile Drilling Rig Site Surveys, Volumes 1 and 2. United Kingdom Offshore Operators Association (UKOOA).
USCG [1965]. Marine Board of Investigation; explosion, fire and sinking of the Drilling Barge C.P. Baker in the Gulf of Mexico, 30 June 1964. United States Coast Guard, 5943/C.P. Baker, 23 April 1965.
Widmaier, M. [2023]. PGS Live, Wind. PGS online webinar, 1 July 2023.
Table 1 Proposed standardised seismic data definitions.
Special Topic
MARINE ACQUISITION
Submit an article
Geoscience companies are continuing to innovate in the marine seismic acquisition sector.
David Went et al demonstrate that lithology has a very strong impact on the amplitude variation with offset or angle (AVO/AVA) response and that shale and brine sand responses need to be identified with confidence before any prognosis of a hydrocarbon signature is made.
S. David et al demonstrate the added value of using the multiples in FWI, deriving a velocity model to migrate the short streamer data, and suggest ways to address limitations encountered when trying to push FWI on sparse nodal data to higher frequencies.
Maxime Benaniba et al present a shallow water case study where a Tuned Pulse Source (TPS) and Conventional AirGuns (CAG) were deployed simultaneously to ensure optimum low-frequency signal from the emission stage to 4C fidelity sensor recording.
Robert Basilli outlines a fresh approach to OBN survey work, introducing a new class of vessels and utilising advances in remote OBN operations including ROVs and robotics.
Alexey Dobrovolskiy explains why drone-born marine magnetic surveys are coming into their own, especially to aid the construction of offshore wind farms.
Lerish Boshoff considers the evolving landscape of OBN seismic operations.
Neil Hodgson et al demonstrate how discarded legacy seismic data can be reprocessed to make the invisible unmissable and let the prospectivity shine out brighter than it ever did before.
Ruud Weijermars clarifies the status of the proposed classification methodology and how new tech companies are developing a separate code of practice for carbon removal claims.
First Break Special Topics are covered by a mix of original articles dealing with case studies and the latest technology. Contributions to a Special Topic in First Break can be sent directly to the editorial office (firstbreak@eage.org). Submissions will be considered for publication by the editor.
It is also possible to submit a Technical Article to First Break. Technical Articles are subject to a peer review process and should be submitted via EAGE’s ScholarOne website: http://mc.manuscriptcentral.com/fb
You can find the First Break author guidelines online at www.firstbreak.org/guidelines.
Special Topic overview
January Land Seismic
February Digitalization / Machine Learning
March Reservoir Monitoring
April Underground Storage and Passive Seismic
May Global Exploration
June Technology and Talent for a Secure and Sustainable Energy Future
July Modelling / Interpretation
August Near Surface Geo & Mining
September Reservoir Engineering & Geoscience
October Energy Transition
November Marine Acquisition
December Data Management and Processing
More Special Topics may be added during the course of the year.
Seismic rock properties and their significance for the interpretation of seismic amplitude variation with angle (AVA), offshore Liberia and Sierra Leone
David Went1*, Jon Rogers1 and Felicia Winter1 demonstrate that lithology has a very strong impact on the amplitude variation with offset or angle (AVO/AVA) response and that shale and brine sand responses need to be identified with confidence before any prognosis of a hydrocarbon signature is made.
Abstract
Well and seismic data from the Leonian and Liberian basins confirm the presence of a working petroleum system, with syn and post-rift Cretaceous intervals containing excellent source rocks, reservoir sands and sealing shales. Rock property studies conducted on well log data indicate that lithology (sand versus shale) has a strong impact on amplitude variations with offset or angle (AVO/AVA).
Figure 1 a) Location and geological setting of Sierra Leone and Liberia in the transform margin of the Atlantic Ocean. Examples of recently established major petroleum provinces in Guyana, Brazil, Ghana and Namibia are marked with yellow stars (Granot and Dyment, 2015); b) seismic and well data coverage offshore Sierra Leone and Liberia.
Elastic seismic inversions confirm the observations made from well logs. The key learning is that shale and brine sand responses need to be clearly identified before any prognosis of an AVO anomaly resulting from the presence of hydrocarbons is made.
Introduction
The deep waters of the Atlantic margin have proven to be a good place to explore for hydrocarbons in recent years, with large commercial discoveries having been made, for example, in Brazil, Guyana, Ghana and Namibia (Figure 1a). Most discoveries have been made in siliciclastic plays in stratigraphic traps (e.g. Daily et al 2013; Hedley et al 2022). In these plays there is typically a heavy reliance on seismic interpretation to define traps and on amplitude analysis, including amplitude variation with offset or angle (AVO/AVA), to suggest hydrocarbon presence. Well data typically play an important part in calibrating seismic responses. However, in newly emerging plays well data are commonly scarce, so it is important to maximize the learnings from the existing wells to improve subsurface interpretation and increase the chances of commercial success.
The purpose of this article is to show the results of a rock property study and seismic inversion workflow conducted on the offshore West African Leonian and Liberian Basins (Figure 1b). Seismic rock properties determined from well logs provide an important link to interpretation of seismic responses in the subsurface (Castagna et al 1993). Rock properties determined from well logs in the Leonian and Liberian basins highlight that lithology has a very strong impact on the amplitude variation with offset or angle (AVO/AVA) response and a key learning is that shale and brine sand responses need to be identified with confidence before any prognosis of a hydrocarbon signature is made. Armed with this understanding, material exploration opportunities may be identified, each of which has the potential to transform this region from a frontier system to a commercial hydrocarbon province.
Geological setting and exploration history
The Leonian, Liberian and Harper Basins occur offshore Sierra Leone and Liberia, form part of the transform margin of West
Africa (McGregor et al 2003) and are the conjugate margin to the Guyana-Amazon basins in South America (Figure 1a). Atlantic rifting of these segments mainly occurred in the Aptian-Albian, and syn-rift sequences have been penetrated by wells in both the Leonian and Liberian basins (Brownfield and Charpentier 2006). However, it is the overlying Late Cretaceous interval which has been the main focus of more recent exploration drilling, with thick post-rift sequences proven to contain excellent source rocks, reservoir sands and sealing shales (Winter et al 2020). Minor non-commercial discoveries have been made that confirm a working petroleum system but, frustratingly, many dry wells have also been drilled. The targeted traps are typically stratigraphic and rely on pinch-out on the continental slope. Prognosing which traps contain oil typically relies heavily on AVA analysis and the identification of a suitable AVA (AVO) anomaly. Understanding the AVA behaviour and what constitutes a suitable anomaly is therefore critical to successful exploration.
Database and method
The well and seismic database used in this study is shown in Figure 1b. Fourteen wells from the deep-water Liberian Basin and nine wells from the Leonian Basin were analysed petrophysically to establish lithology, porosity, fluid type and saturation. Seismic rock property curves – acoustic impedance, Vp/Vs and EEIχ27 – were generated from P-sonic, S-sonic and density logs. An example of the petrophysical output is displayed in Figure 2a.
The relationships of seismic rock properties to this petrophysical interpretation of lithology, porosity and fluid type/saturation were established using multi-well cross plots and by generating forward models of intercept and gradient from the log data using Shuey's equation (Shuey 1985; Figures 2b and c). Gassmann fluid substitution was performed to determine the magnitude of hydrocarbon effects (Smith et al 2003). Models for 38 API oil with low (200 scf/bbl), medium (800 scf/bbl) and high (1200 scf/bbl) gas-oil ratios were constructed. The predicted velocity and density data for oils of these types use the pressure and temperature data from the wells and the equations in Batzle and Wang (1992).
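The two-term Shuey approximation used for such intercept-gradient forward models can be sketched as follows. This is a generic implementation of Shuey (1985), not the authors' code, and the interface values are hypothetical rather than taken from the wells in the study:

```python
# Two-term Shuey (1985) forward model: R(theta) ~ A + B*sin^2(theta),
# where A is the intercept and B the gradient of a single interface.

def shuey_intercept_gradient(vp1, vs1, rho1, vp2, vs2, rho2):
    """Return (A, B) for the interface between medium 1 (above) and 2 (below)."""
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    A = 0.5 * (dvp / vp + drho / rho)
    B = 0.5 * dvp / vp - 2.0 * (vs / vp) ** 2 * (drho / rho + 2.0 * dvs / vs)
    return A, B

# Hypothetical shale over brine-sand interface (velocities m/s, density g/cc):
A, B = shuey_intercept_gradient(2700, 1200, 2.40, 2900, 1500, 2.25)
print(f"intercept A = {A:+.3f}, gradient B = {B:+.3f}")
```

Repeating this for many logged interfaces, and again after fluid substitution, is what populates an intercept-gradient cross plot of the kind shown in Figure 5a.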
Seismic data was inverted for relative extended elastic impedance (rEEI) using the method described in Went et al (2023) and well logs were used to verify the fidelity of the inversions. The method generates intercept impedance (AI) and gradient impedance (GI) from band limited impedance inversions of the near and far angle stacks. AI and GI are combined to generate an attribute rEEI at a cross plot rotation angle (chi) of 27°, an optimal angle to highlight changes in siliciclastic lithology and fluid (cf. Whitcombe et al 2004).
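A minimal sketch of the chi-angle combination described above, assuming the common relative formulation rEEI(χ) = rAI·cosχ + rGI·sinχ (cf. Whitcombe et al 2004); the trace samples below are invented for illustration:

```python
import math

# Combine band-limited intercept impedance (rAI) and gradient impedance
# (rGI) into a relative extended elastic impedance attribute at a chosen
# chi angle (27 degrees here, following the text). Sample values are
# hypothetical, not from the inversions in the study.

def reei(rai, rgi, chi_deg=27.0):
    """Project paired relative AI/GI samples onto the chi rotation angle."""
    c, s = math.cos(math.radians(chi_deg)), math.sin(math.radians(chi_deg))
    return [a * c + g * s for a, g in zip(rai, rgi)]

rai = [0.02, -0.01, 0.05]   # hypothetical band-limited AI samples
rgi = [0.01, -0.04, -0.08]  # hypothetical band-limited GI samples
print([round(v, 4) for v in reei(rai, rgi)])
```

At χ = 0° the attribute reduces to the intercept impedance alone; the 27° rotation is what makes it 'look across' the intercept-gradient cross plot perpendicular to the background trend.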
Petrophysical results and seismic inversion
Figure 2 a) Example petrophysical analysis (CPI) of a deep-water offshore well. Note the close correlation of the Vp/Vs curve to the gamma ray curve and volume of shale. Acoustic impedance in sandstone is locally higher or lower than that of adjacent shale; b and c) angle-dependent forward reflectivity models for the shale-over-sand interfaces arrowed in the CPI.
Figure 3 Depth of burial-related seismic rock property trends colour-coded by lithology and fluid type: a) acoustic impedance (AI) versus TVDBSB coded by volume of clay (VCL); b) acoustic impedance (AI) versus TVDBSB coded by water saturation (Sw); c) Vp/Vs versus TVDBSB coded by volume of clay (VCL); d) Vp/Vs versus TVDBSB coded by water saturation (Sw); e) extended elastic impedance (EEIχ27) versus TVDBSB coded by volume of clay (VCL); f) extended elastic impedance (EEIχ27) versus TVDBSB coded by water saturation (Sw). At a given depth, lithology and fluid are better discriminated by Vp/Vs or EEIχ27 than by acoustic impedance.
The depth-related rock property trends for the well data used in this study are summarised in Figure 3. Acoustic impedance increases with depth below sea bed (TVDBSB) as a natural response to increased burial. At any given depth, sands may show lower, similar or, more commonly, higher impedance than shales (Figure 3a). Hydrocarbon-bearing sandstones are not typically anomalous in terms of their acoustic impedance when compared to shales and brine sands at an equivalent depth (Figure 3b) and, hence, are unlikely to form bright spots on the stack. Background Vp/Vs typically reduces from around 2 to 1.8 between 1500 m and 4000 m below sea bed. It is, however, lower in sandstones (reducing from 1.8 to 1.6) and higher in shales (reducing from 2.1 to 1.9) over this interval (Figure 3c). Hydrocarbon-bearing sandstones, at any one depth, tend to exhibit Vp/Vs values at the low end of the sandstone spectrum (Figure 3d), suggesting this would be a better attribute than acoustic impedance for identifying hydrocarbons directly. Extended elastic impedance at χ27 (EEIχ27) increases with burial depth, with porous sandstones typically showing lower values than shales at any given depth (Figure 3e). Porous hydrocarbon-bearing sandstones typically show among the lowest EEIχ27 values at any given depth (Figure 3f). Hence, EEIχ27 is considered to be the optimal attribute for identifying fluid type directly from the seismic data. The above burial-related trends are broadly comparable to those in other normally compacting sedimentary basins (e.g. Mur and Vernik 2022).
The P-velocity–density and Vp–Vs relationships are similar in both basins. They are displayed in Figure 4 and show characteristics broadly comparable to those established globally (e.g. Gardner et al 1974; Greenberg and Castagna 1993; Went 2021). These trends are instrumental in determining offset-dependent reflectivity behaviour (Figure 2b and c). They also relate to porosity, which mostly decreases systematically with depth of burial (Figure 4c) (Ehrenberg and Nadeau 2005).
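The global trends cited here can be approximated with the classic empirical fits. The coefficients below are the widely quoted defaults for Gardner's rule and the Castagna 'mudrock line' (which Greenberg and Castagna 1993 generalise to mixed lithologies), not fits to the Leonian or Liberian well data:

```python
# Classic empirical rock-property fits, with the commonly quoted default
# coefficients (illustrative only; not calibrated to the study's wells).

def gardner_density(vp_ms: float) -> float:
    """Bulk density (g/cc) from P-velocity (m/s), Gardner's rule."""
    return 0.31 * vp_ms ** 0.25

def mudrock_vs(vp_ms: float) -> float:
    """Shear velocity (m/s) from P-velocity (m/s), Castagna mudrock line."""
    return 0.8621 * vp_ms - 1172.4

vp = 3000.0
print(f"Vp = {vp:.0f} m/s -> rho ~ {gardner_density(vp):.2f} g/cc, "
      f"Vs ~ {mudrock_vs(vp):.0f} m/s, Vp/Vs ~ {vp / mudrock_vs(vp):.2f}")
```

For a 3000 m/s shale-like velocity this gives a Vp/Vs just above 2, consistent with the background trends described in Figure 3.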
Intercept and gradient values determined from forward models of angle-dependent reflectivity from between 1800 and 3000 m below seabed in multiple wells are summarised in the well-data intercept – gradient cross plot in Figure 5a. This plot demonstrates
the control of both lithology and fluid on the intercept-gradient relationships. The modelled shale-on-shale interfaces generate an intercept-gradient trend that goes through the origin and is oriented at a rotation angle (χ) of approximately 27°. This is the background trend. The modelled shale-over-brine-sand interfaces define a trend that does not go through the origin. The trend lies roughly parallel to the shale-shale (background) trend but is offset either to the SW of the plot (shale over sand) or to the NE of the plot (sand over shale). This is the brine sand trend. Hydrocarbon presence lowers EEIχ27 and shifts the data points further to the SW of the plot (shale over sand). The compressibility of the hydrocarbon phase determines the magnitude of the modelled hydrocarbon effect. A light oil (38 API) with a low GOR shows a small departure from the brine trend. Increasing gas content results in greater separation of the hydrocarbon-bearing sands from the brine case and thereby increases the likelihood of direct detection from seismic (Figure 5a).
Seismic inversions for relative extended elastic impedance (rEEI) have been performed on selected 2D lines and 3D data sets to test for the detectability of lithology and fluid changes. Figure 5b shows an AI-GI cross plot from the Liberia Basin over a brine well with oil shows. The cross plot is colour-coded by EEIχ27 and mimics the well-data cross plot. The challenge is to identify the shale, brine sand and hydrocarbon sand trends. The rEEIχ27 attribute effectively represents a look across the intercept-gradient cross plot perpendicular to the background trend (in the direction of the arrow in Figure 5b). Hence, seismic sections of rEEIχ27, or map-form displays extracted from horizons, allow the geometry of different types of anomaly to be observed.
These displays may define anomalies related to geomorphological features, such as submarine channels, or anomalies which conform with depth-structure. As such they can be useful additional aids to estimating the rEEIχ27 thresholds for shale, brine sand and hydrocarbon sand. Figure 6 shows an rEEIχ27 inversion from a strike line in the Liberia basin. The section is dominated by red colours which point to low contrasts (mid-point values) in rEEIχ27, typical of the background trend or shale on shale reflectivity. The yellow and blue-dominated horizons (somewhat lower and higher values of rEEIχ27) are anomalous relative to the background trend and form a coherent interval in the lower part of the section. Calibration to the well confirms this is a sand-prone interval and that these responses relate to the brine sand trend. It
may therefore be used to map the reservoir fairway in the seismic volume. The traces of hydrocarbon present in this well do not correlate with any changes in the elastic response; hence there is no calibrated hydrocarbon signal in this section.
Figure 4 a) P-velocity (Vp) versus density colour-coded by volume of clay (VCL); b) shear velocity (Vs) versus P-velocity (Vp) colour-coded by volume of clay (VCL); c) porosity (phi) versus burial depth (TVDBSB) colour-coded by volume of clay (VCL).
Figure 5 Intercept versus gradient cross plots: a) from well data in the Liberia Basin; b) from seismic data over a dry hole with shows in the Liberia Basin. The well data cross plot shows the shale, brine sand and hydrocarbon sand trends. The anomaly present in the seismic data cross plot is a brine sand anomaly, distinctly different from the background shale trend.
Application and discussion
The results above indicate that when appropriately evaluated, either through calibration to wells or through geological reasoning, the rEEIχ27 attribute displays can be used to determine the location of sand fairways and potentially hydrocarbon anomalies. On the right of Figure 6 an interval of younger strata shows moderately bright rEEIχ27, pinching out up-dip. The strength of the rEEI response is similar to that of the sand fairway penetrated by the well to the left. Hence, a brine sand interpretation is considered most likely. However, given the presence of oil in sands nearby, this younger sand fairway may also have been a path for hydrocarbon migration and local traps on this fairway may hold some oil which could be responsible for some of the slightly more anomalous rEEIχ27 values.
Figure 7a shows the full stack display of an oblique dip line from the Leonian basin. A prominent antiformal feature is interpreted as a sandstone-dominated submarine fan mound draped over a basement high. The stack amplitudes over this feature
Figure 6 Seismic inversion for extended elastic impedance (rEEI χ27) in the Liberia Basin. The line shows a predominance of red, indicating low levels of impedance contrast, consistent with an interpretation of background shale. The lower part of the section shows strata coloured yellow and blue, indicative of a greater contrast in elastic impedance, interpreted as a predominantly brine-filled, sandstone-rich horizon, as confirmed by the well. The yellow and blue responses in the top-right of the line highlight the presence of a younger sand fairway.
Figure 7 a) Oblique dip line showing rifted basement overlain by post-rift strata containing an antiformal structure, showing high seismic amplitudes (arrowed) that are interpreted as a mounded submarine fan sandstone complex; b) seismic inversion for rEEI χ27 co-rendered with the stack reveals low rEEI through four potential reservoirs within the sand complex; c) the rEEI χ27 anomalies broadly conform with the structure, suggesting a possible hydrocarbon response.
are strong. Seismic inversions for rEEIχ27 through this feature are illustrated in Figures 7b and c. In Figure 7b, stack reflectivity is combined with the rEEIχ27 attribute which highlights the presence of multiple stacked AVO anomalies. In Figure 7c the predominantly red colours are indicative of the background AVO trend, formed from shale on shale reflections. The less common discontinuous yellow layers are, by comparison, anomalous. Moderately low values of rEEIχ27 (weak yellow) are identified as brine sand anomalies whereas the stronger low values (strong yellow to green) present over the large sand mound show a broad conformance with structure and, collectively, may be considered a candidate fluid anomaly.
The premise of this article is that both brine sand and hydrocarbon sand can display as AVO anomalies. That is, changes in both lithology and fluid can shift the I-G trend away from the background. However, the causes of the shifts are somewhat different for lithology and fluid. The lithology control stems from the difference in the susceptibility of sand and shale to shear, which promotes a low Vp/Vs for sand and a high Vp/Vs for shale. The fluid effect, on the other hand, stems from the increased compressibility of sand in the presence of hydrocarbons (with little change in shear), which results in a lower AI and a lower Vp/Vs in hydrocarbon-filled sands compared to brine sand or shale. This is significant because it may be possible, in some cases, to detect these differing effects by comparing AVO attributes that are known to be sensitive to fluid effects with those that are not (Went, 2025). In some basin plays the effect of lithology on AVO is negligible (e.g. Went et al., 2025). In others, such as in this study area, the lithology effect is pronounced and should be clearly identified prior to making any prognosis of anomalies resulting from a lighter hydrocarbon fluid.
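The lithology and fluid shifts in the I-G domain described above can be explored with the two-term Shuey (1985) approximation. The interface properties in the sketch below are hypothetical, chosen only to illustrate a shale-over-sand contrast, and are not taken from the study wells.

```python
import numpy as np

def intercept_gradient(vp1, vs1, rho1, vp2, vs2, rho2):
    """Two-term Shuey (1985) intercept I and gradient G for an interface
    between an upper layer (1) and a lower layer (2):
    R(theta) ~ I + G * sin^2(theta)."""
    vp, vs, rho = (vp1 + vp2) / 2.0, (vs1 + vs2) / 2.0, (rho1 + rho2) / 2.0
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    i = 0.5 * (dvp / vp + drho / rho)
    g = 0.5 * dvp / vp - 2.0 * (vs / vp) ** 2 * (drho / rho + 2.0 * dvs / vs)
    return i, g

# Hypothetical shale over brine sand: a low Vp/Vs sand below a shale
# pushes the point away from the background shale trend in I-G space.
i_bs, g_bs = intercept_gradient(2700.0, 1100.0, 2.45, 3000.0, 1600.0, 2.30)
```

A sanity check on such a routine is that an interface with no contrast returns a zero intercept and zero gradient.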
Conclusion
The Leonian and Liberian basins have well and seismic data which confirm the presence of a significant working petroleum system, with syn- and post-rift Cretaceous intervals containing excellent source rocks, reservoir sands and sealing shales. Traps are typically stratigraphic and prognosing which traps contain oil typically relies on analysis of amplitude variation with angle (AVA). Evaluation of seismic rock properties from well logs and seismic data in the Leonian and Liberian basins reveals a strong lithology control on AVA. Hence, the distinctive brine sand trend should be isolated from the background shale trend prior to prognosing an anomaly resulting from the presence of hydrocarbons.
Acknowledgements
We would like to thank the data owners NOCAL and PDSL for permission to publish this article.
References
Batzle, M. and Wang, Z. [1992]. Seismic properties of pore fluids. Geophysics, 57(11), 1396-1408.
Brownfield, M.E. and Charpentier, R.R. [2006]. Geology and Total Petroleum Systems of the Gulf of Guinea Province of West Africa. USGS, U.S. Geological Survey Bulletin, 2207-C, July 2006.
Castagna, J.P., Batzle, M.L. and Kan, T.K. [1993]. Rock physics: The link between rock properties and AVO response, in J.P. Castagna and M.M. Backus, eds., Offset-dependent reflectivity — Theory and practice of AVO analysis: SEG Investigations in Geophysics, 8, 135-171.
Dailly, P., Henderson, T., Hudgens, E., Kanschat, K. and Lowry, P. [2013]. Exploration for Cretaceous stratigraphic traps in the Gulf of Guinea, West Africa and the discovery of the Jubilee Field: a play opening discovery in the Tano Basin, Offshore Ghana. Geological Society, London, Special Publications, 369(1), pp.235-248.
Ehrenberg, S.N., and P.H. Nadeau [2005]. Sandstone vs. carbonate petroleum reservoirs: A global perspective on porosity-depth and porosity-permeability relationships: AAPG Bulletin, 89(4), 435-445, https://doi.org/10.1306/11230404071.
Gardner, G.H.F., Gardner, L.W. and Gregory, A.R. [1974]. Formation velocity and density — The diagnostic basics for stratigraphic traps. Geophysics, 39(6), 770-780, https://doi.org/10.1190/1.1440465.
Granot, R. and Dyment, J. [2015]. The Cretaceous opening of the South Atlantic Ocean. Earth and Planetary Science Letters, 414, 15 March 2015, 156-163.
Greenberg, M.L., and Castagna, J.P. [1992]. Shear-wave velocity estimation in porous rocks: Theoretical formulation, preliminary verification and applications. Geophysical Prospecting, 40(2), 195-209, https://doi.org/10.1111/j.1365-2478.1992.tb00371.x.
Hedley, R., Intawong, A., Winter, F. and Sibeya, V. [2022]. Hydrocarbon play concepts in the Orange Basin in light of the Venus and Graff oil discoveries. First Break, 40(5), pp.91-95.
Macgregor, D., Robinson, J. and Spear, G. [2003]. Play Fairways of the Gulf of Guinea Transform Margin. From: Arthur, T.J., Macgregor, D.S. and Cameron, N.R. (eds) Petroleum Geology of Africa: New Themes and Developing Technologies. Geological Society, London, Special Publications, 207, 131-150.
Mur, A. and Vernik, L. [2019]. Testing popular rock-physics models. The Leading Edge, 38, 350-357, https://doi.org/10.1190/tle38050350.1
Shuey, R.T. [1985]. A simplification of the Zoeppritz equations. Geophysics, 50(4), 609-614, https://doi.org/10.1190/1.1441936.
Smith, T.M., Sondergeld, C.H. and Rai, C.S. [2003]. Gassmann fluid substitution: A tutorial. Geophysics, 68(2), 430-440, https://doi.org/10.1190/1.1567211.
Went, D.J. [2021]. Practical application of global siliciclastic rock-property trends to AVA interpretation in frontier basins. The Leading Edge, 40, 454-459, https://doi.org/10.1190/tle40060454.1.
Went, D.J. [2025]. A graphical approach to determine the relationship between intercept, gradient and the common seismic rock properties: global model and application. The Leading Edge (in press).
Went, D.J., Hedley, R. and Rogers, J. [2023]. Screening for AVA Anomalies in Siliciclastic Basins: Testing a Seismic Inversion Method in the Mississippi Canyon, Gulf of Mexico. First Break, 41, 75-81, https://doi.org/10.3997/1365-2397.fb2023076
Went, D.J., Bamford, M., Rogers, J., Brown, S., and Turner, G. [2025]. Characterising hydrocarbon discoveries and prospects in the Tay Sandstone using relative elastic inversion: Greater Pilot area, Central North Sea. Powering the Energy transition through subsurface collaboration. Geological Society Book Series, EGC, 1 (in press), https://doi.org/10.1144/egc1-2023-37
Whitcombe, D.N., Connolly, P.A., Reagan, R.L. and Redshaw, T.C. [2002]. Extended elastic impedance for fluid and lithology prediction. Geophysics, 67(1), 63-67.
Winter, F., Esestime, P., Masotti, R., Tibocha, E., Deighton, I., Went, D. and Sayers, B. [2021]. Revealing the Hydrocarbon Potential and Quantifying the Prospectivity of the Harper Basin, Liberia, West Africa. 82nd EAGE Annual Conference & Exhibition (extended abstract).
Short streamers and sparse OBN acquisition: Potential for CCS monitoring?
S. David1*, F. Ten Kroode1, E. Cho1, M.A.H. Zuberi1, G. Stock1, S. Baldock1, J. Mispel2, H. Westerdahl2, M. Thompson2 and Å. Sjøen Pedersen2 demonstrate the added value of using the multiples in FWI, deriving a velocity model to migrate the short streamer data, and suggest ways to address limitations encountered when trying to push FWI on sparse nodal data to higher frequencies.
Introduction
The Paris Agreement aims to limit global warming, with efforts to restrict the temperature increase to 1.5°C above the pre-industrial level. Achieving these ambitious targets requires a significant reduction in greenhouse gas emissions, with Carbon Capture and Storage (CCS) emerging as a critical technology. CCS involves capturing CO₂ emissions from industrial sources, transporting the CO₂ to a storage site, and injecting it into geological formations for long-term storage. For CCS to be effective and gain public and regulatory acceptance, robust monitoring technologies are essential to ensure that the captured CO₂ remains securely stored.
Subsurface monitoring plays a significant role in verifying containment and conformance. It involves tracking the movement of CO₂ within the storage formation to confirm that it stays within that formation and within the licence, and that it behaves as predicted by the reservoir model. Various techniques are available (pressure and temperature measurements, well logging, …), each with its own benefits and drawbacks. Monitoring strategies must be tailored to the specific characteristics of each storage site and technologies need to be adapted accordingly.
As the energy transition progresses, overlaps and conflicts between oil and gas projects, offshore windfarm developments and carbon capture and storage projects will increasingly emerge. In these future congested areas, CCS monitoring will be particularly challenging due to limited space and restricted access for deploying monitoring equipment. Potential interference from other activities will also affect the monitoring solutions (Quirk et al., 2021).
Considering these challenges and starting from technologies that are traditionally used in the oil and gas industry, we wanted to evaluate the potential of using short streamers and free-fall, self-recovering Ocean Bottom Nodes (OBNs) for CCS monitoring in an innovative and cost-effective way. To do this, a field test – financially supported by Equinor and CLIMIT – was carried out at the Sleipner CCS field in the North Sea, where CO₂ has been sequestered and monitored since 1996 (Furre et al., 2016; Ringrose, 2018; Mispel et al., 2019; Dehghan-Niri et al., 2022). Carbon dioxide has migrated into nine thin sand layers,
which need to be imaged at a high enough resolution in order to confirm containment and conformance.
Forty-seven OBNs were deployed on a sparse 500 by 525 m grid (with densification to 100 m along one line) to derive a velocity model to migrate the high-resolution data acquired with the short streamers. Sparse node acquisition faces challenges with high-frequency FWI-based velocity updates due to limited illumination of the subsurface with reflected waves. The approach used to overcome this limitation is to include multiples in the Full Waveform Inversion (FWI), in addition to primary reflections, to constrain the velocity estimation and to provide a higher-resolution FWI-derived velocity model.
In this paper, we show the added value of using the multiples in FWI in order to derive a velocity model to migrate the short streamer data. We also show the limitations encountered when trying to push FWI on sparse nodal data to higher frequencies and suggest future work to address these.
Acquisition set-up
A hybrid streamer and OBN survey was conducted to demonstrate the benefits of two recently developed acquisition technologies: high-resolution mini-streamer acquisition (eXtended
Figure 1 Pictures showing the self-recovery device attached to the OBN. Nodes are simply thrown overboard (left) and recovered by an inflatable balloon lifting the node to the sea surface (right).
High Resolution – XHR) (Dehghan-Niri et al., 2023) and free-fall, self-recovering OBNs. Two independent datasets were acquired simultaneously, with the OBN dataset complementing the short-streamer dataset by providing long offsets and good S/N at low frequencies for an FWI-based velocity model update.
Minimising the cost of monitoring is obviously very important for CCS projects. To meet this objective, a free-fall approach was used to deploy the nodes. The OBNs were also equipped with a ‘pop-up’ self-recovery mechanism to optimise retrieval. This has the advantage of accelerating node recovery and eliminating the need for more expensive Remotely Operated Vehicle (ROV) or Node-on-a-Rope (NOAR) solutions. The self-recovery device is made of an inflatable balloon, a pinger and an air reservoir. The system is activated by sending an acoustic signal to the pinger, which triggers a valve on the air reservoir to open, after which the balloon is filled and lifts the node to the sea surface (Figure 1).
To optimise the cost further, a very limited number of nodes was used and deployed on a sparse grid (500 m x 525 m) with denser node spacing of 100 m along one receiver line for testing purposes. Shots recorded by the nodes are the shots fired from the short streamer XHR acquisition, with a shot grid of 6.25-m flip-flop and 75-m sail-line spacing (Figure 2).
Objectives
The aim of the study was two-fold. First, we wanted to evaluate the capabilities of FWI in shallow water, 80 m at Sleipner, and, with sparse nodal data, to produce a velocity model suitable for imaging the short streamer data. Second, we wanted to compare the resolution of FWI images against that of short streamer images.
Sparse node acquisition has limitations, notably in the sampling of the very shallow subsurface, which can compromise high-frequency velocity updates. At low frequencies, FWI relies predominantly on diving-wave energy, for which node sparsity is not a problem. Estimating a high-resolution velocity model with FWI, however, relies on small-angle reflection energy, which is highly sensitive to node spacing. To address these challenges, our approach leverages multiples instead of relying solely on primary reflection data to stabilise and enhance the accuracy of the FWI-derived velocity model. Multiples contain more near-angle information and provide better illumination than primary reflections, which could ultimately enable robust and reliable monitoring of CCS sites.
Building a low-frequency velocity model
A 3D initial velocity model volume was derived from 2D data spanning the survey area. The velocity model was calibrated to match the velocity profile of the closest available well and smoothed. Two temperature and salinity profiles (TS-dips) were acquired, converted to velocity, and evaluated as a velocity function for the water column. Given the shallow water depth, a single water velocity value was considered sufficient, and 1492 m/s was selected. The anisotropy model used constant values of delta = 3% and epsilon = 4.5%.
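The choice of a constant water velocity can be cross-checked against a standard seawater sound-speed formula. The sketch below uses the nine-term Mackenzie (1981) equation (not cited in this article) with illustrative, assumed North Sea temperature and salinity values.

```python
def water_velocity(t, s, d):
    """Sound speed in seawater (m/s) from the nine-term Mackenzie (1981)
    equation; t in degC, s in practical salinity units, d depth in metres."""
    return (1448.96 + 4.591 * t - 5.304e-2 * t**2 + 2.374e-4 * t**3
            + 1.340 * (s - 35.0) + 1.630e-2 * d + 1.675e-7 * d**2
            - 1.025e-2 * t * (s - 35.0) - 7.139e-13 * t * d**3)

# Illustrative values: 10 degC, salinity 35, mid water column in ~80 m of water
v = water_velocity(10.0, 35.0, 40.0)  # close to the 1492 m/s picked for the survey
```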
During acquisition, near field hydrophone (NFH) data were recorded. Applying a single global debubble operator based on the NFH data showed some jitter and left some residual bubble
Figure 2 Acquisition layout. The blue lines are the sail lines for the streamer survey. The yellow dots are the Ocean Bottom Node locations. Grey outlines indicate the CO2 extension layers.
Figure 3 Hydrophone data examples: raw (a); debubble using a globally derived operator (b); debubble using shot-by-shot derived operators (c).
Figure 4 Image gathers of Up-Down deconvolved OBN data: in the initial velocity model (a) and in the 8Hz FWI velocity model (b).
energy, so the NFH data were used to derive individual shot-to-shot operators which provided an improved debubble result (Figure 3).
The hydrophone node data were used to perform FWI utilising both diving wave and reflection energy, in 0-4 Hz, 0-6 Hz and 0-8 Hz frequency bands. An acoustic Dynamic Matching FWI (DM FWI) was used. The DM FWI algorithm addresses the inversion problem by maximising the cross-correlation between the recorded and synthetic data. By dynamically matching these datasets (i.e., scaling the amplitudes of the synthetic data to those of the field data, either by normalisation or by direct amplitude matching), the algorithm reduces the influence of amplitude, allowing it to minimise kinematic differences during the data-fitting process (Huang et al., 2020, 2023; Mao et al., 2020). The FWI results were reviewed carefully through data-domain QC and pre-stack depth Kirchhoff migration of Up-Down deconvolved OBN data. FWI successfully captured the general background velocity, with common image gathers showing good flatness (Figure 4).
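The amplitude-insensitive objective behind DM FWI can be illustrated with a zero-lag normalised cross-correlation between observed and synthetic traces. This is a minimal sketch of the principle only, not the published algorithms of Huang et al. or Mao et al.

```python
import numpy as np

def normalised_correlation(obs, syn, eps=1e-12):
    """Zero-lag normalised cross-correlation: insensitive to an overall
    amplitude scale, so maximising it targets kinematic (traveltime)
    agreement rather than amplitude fit."""
    return np.dot(obs, syn) / (np.linalg.norm(obs) * np.linalg.norm(syn) + eps)

t = np.linspace(0.0, 1.0, 500)
obs = np.sin(2.0 * np.pi * 8.0 * t) * np.exp(-3.0 * t)
scaled = 0.3 * obs          # same kinematics, different amplitude
shifted = np.roll(obs, 25)  # a 50 ms time shift
```

A scaled trace correlates perfectly while a shifted one does not, which is exactly the behaviour a kinematics-driven misfit requires.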
Higher-frequency updates: Two approaches
In the past decade, an increasing number of projects have treated multiple reflections as signal for imaging the subsurface, rather than as noise. Imaging with multiples provides improved near-angle illumination, which goes hand-in-hand with enhanced resolution and a wider illuminated area. This is particularly true in shallow areas, where multiples illuminate the subsurface at smaller reflection angles than the primaries.
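The smaller reflection angles offered by multiples can be seen with a straight-ray, mirror-source argument: a first-order surface multiple behaves like a primary from a source mirrored about the sea surface, increasing the effective reflector depth. The geometry below is schematic and assumed, not the actual survey parameters.

```python
import numpy as np

def reflection_angle(offset, depth):
    """Straight-ray incidence angle (degrees) at a flat reflector."""
    return np.degrees(np.arctan2(offset / 2.0, depth))

# Schematic geometry: 80 m water depth, target at 900 m, 500 m offset
wd, target, offset = 80.0, 900.0, 500.0
theta_primary = reflection_angle(offset, target)
# Mirror-source view of a first-order multiple: the effective depth grows
# by one extra two-way water-column leg, shrinking the reflection angle.
theta_multiple = reflection_angle(offset, target + 2.0 * wd)
```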
Using the full wavefield in velocity model building provides similar benefits to those described above for imaging, and it is increasingly used in FWI flows. Using multiples in FWI increases the non-linearity of the inversion problem, making it easier for the inversion to get stuck in a local minimum, resulting in cycle-skipping artefacts in the final model. However, use of the full wavefield has the potential to accelerate the full imaging flow by removing the need for time-consuming pre-processing steps (Figure 5).
Following the two pre-processing workflows, FWI was run in increasing frequency bands of 0-12 Hz, 0-15 Hz and 0-25 Hz using two different input datasets, namely the Up-Down deconvolved dataset and the hydrophone dataset with minimal pre-processing (Figure 6).
In the first approach, the node data were processed through to Up-Down Deconvolution (UDD), utilising both the hydrophone and vertical geophone components. This process yielded a dataset with surface reflection events, such as ghosts and surface multiples, removed (Figure 7b). Subsequent FWI can then be run with an absorbing boundary condition in the numerical scheme for forward and backward modelling. It is important to note that UDD assumes a horizontally layered earth, an assumption which will be violated in complex geological settings. However, for the Sleipner case, this assumption is generally valid.
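Up-down deconvolution can be illustrated in one dimension as a stabilised spectral division of the up-going by the down-going wavefield. The sketch below is a schematic toy (a delta-function down-going wave), not the multichannel implementation used on the survey.

```python
import numpy as np

def updown_decon(up, down, eps=1e-3):
    """Stabilised spectral division U/D, which removes the receiver-side
    ghost and free-surface multiples under a horizontally layered earth
    assumption. up, down: traces of equal length."""
    U, D = np.fft.rfft(up), np.fft.rfft(down)
    stab = eps * np.max(np.abs(D)) ** 2
    refl = U * np.conj(D) / (np.abs(D) ** 2 + stab)
    return np.fft.irfft(refl, n=len(up))

# Toy example: a delta down-going wave, one primary in the up-going wave
n = 256
down = np.zeros(n); down[0] = 1.0
up = np.zeros(n); up[40] = 0.8   # primary at sample 40
r = updown_decon(up, down)       # recovers a spike near sample 40
```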
A second approach was based on running FWI on the full-wavefield hydrophone (P) data, with debubble only applied (Figure 7c). In this method, the free-surface reflectivity was included in the inversion, and a data reconstruction method (Zuberi et al., 2023) was utilised to mitigate the effects of guided waves and ocean-bottom waves, which cannot be explained properly by acoustic modelling.
Figure 7 Example of a nodal gather and input data to FWI for the two different routes. Raw Hydrophone data (a), UDD data (b), hydrophone data with debubble applied (c).
Data reconstruction methodology
Elastic effects in the near surface can cause a mismatch between the acoustic modelled data and the observed data. In principle, elastic modelling should allow a better match between the modelled and observed data for FWI. In practice, however, the increased computational cost and parameter uncertainty associated with taking more physics into account (several parameters to invert for) can pose serious challenges to FWI. Moreover, since the multiples depend non-linearly on velocity, any discrepancy due to unexplained elastic effects would make inversion with multiples more challenging. We have therefore opted for acoustic FWI in this study.
To mitigate the near-surface elastic effects in a computationally efficient manner, we use the data reconstruction method proposed by Zuberi et al. (2023). This method reconstructs an acoustic equivalent of the observed data, which is subsequently used in FWI as the observed data. The acoustic equivalent is obtained by matching the observed source gathers (in this study sources and receivers are interchanged for computational efficiency) to the modelled gathers and applying the resulting matching/reconstruction filter to the observed data (Figure 8). As the reconstruction filter is a single filter applied to the whole source gather, it does not alter the slopes or curvatures of the events in the data. In other words, by using data reconstruction we can mitigate the elastic effects by absorbing them in the reconstruction filter instead of leaving them in the residual, where they can cause spurious updates. This may be more important for multiples due to their non-linear dependence on elastic perturbations. Data reconstruction is an approximate technique that allows all data acquired in an elastic earth, including multiples, to be used in an acoustic FWI scheme.
In general, the reconstruction filter can absorb features in the data that are not explained by the modelling, similar to a conventional source inversion. Zuberi et al. (2023) also show that data reconstruction performs source inversion implicitly; that is, the data reconstruction filter is an inverse of the matching filter for conventional source inversion. Therefore, in this study we did not have to perform explicit source inversion or wavelet estimation.
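The single-filter-per-gather idea can be sketched as a frequency-domain Wiener matching filter estimated over all traces of a gather and then applied to the observed data. This is a minimal illustration of the principle under assumed toy data, not the published method of Zuberi et al. (2023).

```python
import numpy as np

def reconstruct(obs, mod, eps=1e-6):
    """Estimate one matching filter f per gather such that f * obs ~ mod
    (least squares over all traces), then apply it to the observed data.
    obs, mod: (ntraces, nsamples) arrays."""
    O = np.fft.rfft(obs, axis=1)
    M = np.fft.rfft(mod, axis=1)
    num = np.sum(M * np.conj(O), axis=0)   # cross-spectrum summed over traces
    den = np.sum(np.abs(O) ** 2, axis=0)   # observed power summed over traces
    F = num / (den + eps * den.max())      # stabilised Wiener matching filter
    return np.fft.irfft(F[None, :] * O, n=obs.shape[1], axis=1)

rng = np.random.default_rng(0)
obs = rng.standard_normal((8, 256))
recon = reconstruct(obs, 0.5 * obs)  # the filter absorbs the overall 0.5 scale
```

Because a single filter is applied to the whole gather, trace-to-trace slopes and curvatures are preserved, consistent with the property noted in the text.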
Comparison of FWI results on UDD and ‘raw’ hydrophone data
Figures 9c and 9d show the velocity models obtained from the two approaches after the 25 Hz update. Figures 9a and 9b are shown for comparison and contain the initial velocity model used and the 8Hz FWI velocity model from the low-frequency
update phase. FWI images derived from these velocity models and KPSDM sections derived using these velocity models are also included to help with the comparison and the interpretation of the results. The KPSDM results used for QC and comparisons were derived using the UDD data from the dense node line (inline sections) and the XHR streamer data (depth slices).
A first glance at the results shows that the CO2 plume has a distinct slow velocity and can be clearly observed in both approaches. However, the UDD result provides less definition
Figure 8 Example of hydrophone nodal gather after the data reconstruction method application. 0-12Hz observed nodal gather (a), 0-12Hz nodal gather after data reconstruction (b).
Figure 9 Velocity model (left), FWI image (middle) and KPSDM image (right) obtained in: initial model (a), 8Hz FWI velocity model (b), 25Hz FWI full hydrophone data velocity model (c) and 25Hz FWI UDD data velocity model (d).
and less-focused edges of the CO2 plume. This is particularly evident on the depth slices (Figure 10). In the shallow area, the full hydrophone FWI results appear sharper than the UDD FWI results, with less noise and with velocity updates that better follow the geology when compared to the KPSDM stacks.
A second look shows that the UDD version exhibits fast velocity bands around the CO2 plume, which are less dominant in the full-wavefield results using hydrophone data. Common image gathers based on the UDD FWI model curve downward, indicating that the velocity estimate in this area is too fast. In contrast, gather flatness using the FWI model based on the full hydrophone data is significantly better (Figure 11).
Increasing frequency and wavenumbers: 60 Hz FWI image compared to 60 Hz KPSDM image of short streamer data
The primary objective of the OBN data acquisition was to derive a velocity model to migrate the XHR data.
This was achieved by the 0-25Hz FWI-based velocity model derived on the full hydrophone data, as evidenced by the flat
Figure 10 Depth slices through KPSDM volumes of XHR data at 900 m depth: KPSDM using the 25Hz FWI-derived velocity model based on full hydrophone data (a), KPSDM using the 25Hz FWI-derived velocity model based on UDD data (b). Low velocities indicate the CO2 plume.
Figure 11 Image gathers: in the 25Hz FWI full wavefield hydrophone data derived velocity model (a) and in the 25Hz FWI UDD reflectivity data-derived velocity model (b).
image gathers in Figure 12 a. However, we also wanted to investigate the maximum resolution achievable with FWI on this sparse nodal data set, so we carried on with the FWI work, beyond what is normally needed for a migration velocity model.
We continued to use the full wavefield and the data reconstruction method in the following FWI updates.
By replacing the field signature with a synthetic source, we shaped the wavelet in the data for each FWI frequency band. This can help in multi-scale FWI strategies by making the transition to higher frequencies smoother. By absorbing the near surface effects, the data reconstruction method prevents noise from getting imprinted in the shallow updates. Having cleaner updates in the shallow subsurface improves the results for deeper targets.
The frequency bands used for high-frequency FWI were 0-35 Hz, 0-45 Hz and 0-60 Hz. In addition to scaling up the frequency, we also scaled up the wavenumbers by increasing the contribution of smaller angles in the inversion; at the 60 Hz stage, all angles were used.
The 60 Hz FWI-derived velocity model and its FWI image were compared with a depth-migrated UDD section and
depth-migrated XHR data. To assess and validate the velocity model, the raw migrated XHR data are also included in the comparisons (Figure 13). The raw XHR image contains clear and strong multiples at the injectite level (see the red box in Figure 13); they can be observed on the processed XHR dataset as weaker residuals. As expected, the UDD image does not contain these multiples. More interestingly, the multiples generated by the injectites are only vaguely visible in the FWI model and FWI image, and far less evident than in the XHR images. This reduced multiple contamination demonstrates the added value of using multiples in FWI-based velocity estimation. That said, the small residuals observed in the FWI results need further investigation; they are likely to depend on the water velocity and the details of the water bottom.
FWI experiment on primaries-only dataset (without Free Surface)
To further evaluate the impact of free surface multiples in a sparse node and shallow-water setting, we conducted an additional test. A synthetic ‘primaries only’ dataset was created using the 60 Hz velocity model through acoustic finite difference modelling. An absorbing boundary was used at the free surface in order to obtain
Figure 12 KPSDM stacks and image gather comparisons: 25Hz FWI velocity model based on full hydrophone data (a) and 35Hz FWI velocity model based on full hydrophone data (b) show hardly any differences, as expected.
Figure 13 Migrated stacks showing multiple contamination on XHR images, which are far less prominent on the FWI velocity model and its derivative.
data with primaries only. Figure 14c shows the dataset obtained in this manner. For comparison, the observed data are shown in Figure 14a and the modelled synthetic primaries-plus-multiples data in the 60 Hz FWI model in Figure 14b. The main events correlate well with the observed data. The strong low-frequency content in the primaries-only synthetic data is due to the absence of a ghost effect.
A QC done on stacks confirms the feasibility of using the primaries-only modelled synthetic data. Figure 15 shows a comparison between the UDD stack and the primaries-only modelled synthetic stack. Multiplying the two stacks produces a QC section that highlights phase differences and any mismatch between the two. Again, the shallow area shows the largest differences; the rest of the section, including the CO₂ plume, remains quite similar.
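The stack-multiplication QC is simple to reproduce: the sample-by-sample product of two stacks is positive where they agree in phase and negative where the polarities disagree. A schematic sketch with synthetic traces (not survey data):

```python
import numpy as np

def stack_product_qc(stack_a, stack_b):
    """Sample-by-sample product of two stacks: positive where events are
    in phase, negative where polarities disagree."""
    return stack_a * stack_b

t = np.linspace(0.0, 1.0, 200)
trace = np.sin(2.0 * np.pi * 5.0 * t)
qc_match = stack_product_qc(trace, trace)   # non-negative everywhere
qc_flip = stack_product_qc(trace, -trace)   # non-positive: phase mismatch
```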
This primaries-only synthetic dataset went through an FWI sequence up to 8 Hz with an absorbing free-surface condition. The results in Figure 16 show the velocity models, UDD images and UDD image gathers at 8 Hz for both the full-wavefield and the synthetic primaries-only data.
The velocity model from the primaries-only data showed regions of higher than expected velocity in the shallow section,
highlighting the sparse node sampling imprint. Additionally, high velocities above the plume led to poor gather flatness in common image gathers (indicated by red arrows) and poor imaging with erroneous reflection events in the migrated stack. This underscores the advantage of using the full wavefield, which enhances shallow subsurface illumination through multiple energy, a benefit not achievable with primaries-only data.
Figure 14 Observed hydrophone data (a); modelled synthetic data obtained with the free surface on in the forward modelling (b); modelled synthetic data obtained with the free surface off in the forward modelling (c); low-cut filtered modelled synthetic data obtained with the free surface off in the forward modelling (d); wavelet used and its amplitude spectrum (e).
Figure 15 Up-Down Deconvolution stack (a), modelled synthetic stack obtained with the free surface off in the forward modelling (c), and the multiplication of the two, (a) and (c), in (b).
Figure 16 8Hz FWI Velocity model and common image gathers for the primary-only dataset route (a) and the full-wavefield hydrophone route (b).
Remaining challenges
In this section we discuss two remaining challenges with FWI on the sparse nodal data set.
The first one is that, although the resolution of the 60 Hz FWI image at the CO₂ plume is sufficient to interpret the nine sand layers, the overall resolution, particularly in the shallow area, is lower than that of the Kirchhoff PSDM image (Figure 17). This is mainly due to the regularisation (preconditioning that weighs the higher wavenumbers down) of the acoustic FWI gradient, which was used to avoid overfitting the data at high frequencies.
A second challenge is a questionable high-velocity update just below the seabed, which is probably an FWI artefact (Figure 18). To verify whether this is a valid update, travel-time tomography was run on the XHR data (Figure 19). A single pass of tomography was run, using the 8 Hz FWI velocity model shown in Figure 9b. Due to the short offsets, limited pre-conditioning was applied to the gathers prior to RMO picking. Although the moveout is quite limited, the curvature-based picking worked successfully. However, updates are restricted to shallow depths because of the limited offset range of the short streamer data; tomography was run down to 250 m.
FWI Image derivative of the 60Hz FWI velocity model and amplitude spectra (a) and UDD KPSDM stack and amplitude spectra (b).
The tomography run on the XHR data does not show the faster velocity in the shallow part and shows a different result than the FWI velocity model, with a decrease in velocity. Additionally, this is also observed on the XHR CDP gathers where events at this depth are dipping down slightly when the XHR data is migrated with the 60 Hz FWI velocity model. The CDP gathers are flatter when using the velocity derived by tomography on XHR data. PSDM QC on the XHR dataset shows stack improvements. This FWI artefact can be either due to elastic effects that were not fully mitigated by the data reconstruction method or to a lack of near-angle illumination, leading to unstable shallow velocity updates. Additional tests will be needed in order to accurately understand the origin of this FWI artefact.
Figure 18 Zoom in the very shallow area showing the OBN 60Hz FWI velocity model overlaid on the XHR stack data and the corresponding XHR CDP gathers.
Figure 19 Zooming into the very shallow area showing the XHR travel time tomography-derived velocity model overlaid on the XHR stack data and the corresponding XHR CDP gathers.
Conclusions and way forward
Physical and cost constraints limit seismic acquisition configurations in CCS environments. Our study demonstrates the potential of overcoming these limitations by using short streamers and a cost-effective OBN operation together with the incorporation of multiples into the FWI process. The primary objective of this acquisition was to treat the datasets independently: XHR data provided a high-resolution image of the shallow subsurface and CO₂ plume, while OBN data provided a velocity model to depth migrate the XHR data. In addition, we have performed tests and assessments on the feasibility of FWI as a tool to provide a high-resolution velocity model and then to derive an interpretable volume from it.
By leveraging the full wavefield, we achieved a high-resolution velocity model update with a minimal number of nodes. This experiment confirmed that incorporating multiples enhances the accuracy and reliability of FWI-derived velocity models. We demonstrated that with FWI we could construct a kinematically correct velocity model and generate a high-resolution depth image from the XHR data. The resolution of the FWI velocity model was enhanced, and its derivative indicates that the FWI algorithm handles multiples well. However, the resolution of the final 60 Hz FWI image is lower than that of the 60 Hz KPSDM image from the XHR data, due to the lack of short offsets or, equivalently, small reflection angles.
An obvious way to try to increase resolution in FWI models and images is to include the XHR data in the inversion workflow. Ryan et al. (2024) performed FWI on the XHR dataset only, using a regional velocity model as input. This led to high-resolution FWI models and images, but it could not update the kinematic content of the regional model because of the lack of offsets in the XHR data. In a CCS context, updating the model is crucial to provide correct imaging for each survey and hence accurate monitoring. This underlines the need for a joint workflow: using the OBN data to update the model, and the XHR data, rich in high-frequency reflections, to improve the resolution of the FWI result, especially in shallow water.
Another way to increase the resolution of FWI models and images based on node data only would be to switch from acoustic to elastic FWI. This would allow us to move away from the kinematic approach adopted in DM FWI towards a dynamic one based on a least-squares objective function. Modelling and using the amplitudes more correctly is likely to help increase resolution. Elastic FWI may also avoid the shallow velocity artefact discussed in the previous section.
On the acquisition side, future operations can be made more cost-effective by using smaller sources and smaller compressors, thereby reducing the equipment footprint. In a subsequent phase, decimation tests can also be performed to evaluate the impact of node and shot density on FWI results, potentially allowing a reduced source and/or receiver effort to lower the cost even further.
These advancements will unlock the potential for integrating OBN data into CCS projects, offering improved efficiency and accuracy in subsurface imaging and characterisation.
Acknowledgements
The authors would like to thank Equinor ASA and the CLIMIT program for their support and collaboration in the XHR/OBN Sleipner acquisition, as well as our colleagues at TGS and Equinor ASA for fruitful discussions and their support.
References
David, S., Zuberi, M.A.H., Cho, E., Baldock, S., Stokes, S. and Stock, G. [2023]. Unlocking Advanced Imaging for CCS Using Cost Effective Acquisition Solutions. 4th EAGE Global Energy Transition Conference and Exhibition, Extended Abstracts.
Dehghan-Niri, R., Pedersen, Å.S., David, S., Westerdahl, H., Thompson, M., Furre, A-K., West, P. and Holm-Trudeng, T. [2023]. Proving the potential; monitoring Sleipner-CO2 plume with mini-streamers. 84th EAGE Annual Conference & Exhibition.
Dehghan-Niri, R., Thompson, M., Mispel, J., Zarifi, Z., Gram, C., Olsen, P.A., Pedersen, Å.S., Ringrose, P., Furre, A.-K. and Westerdahl, H. [2022]. Optimizing a Geophysical Monitoring Toolbox for Offshore CO2 Storage. 16th International Conference on Greenhouse Gas Control Technologies, GHGT16.
Furre, A-K., Eiken, O., Alnes, H., Nesland Vevatne, J. and Kiær, A. [2016]. 20 Years of Monitoring CO2-injection at Sleipner. 13th International Conference on Greenhouse Gas Control Technologies, GHGT13.
Huang, Y., Mao, J., Sheng, J., Perz, M., He, Y., Hao, F., Liu, F., Wang, B., Yong, S.L., Chaikin, D., Citlali, Ramirez A., Hart, M. and Roende, H. [2023]. Toward high-fidelity imaging: Dynamic matching FWI and its applications. The Leading Edge.
Huang, Y., Mao, J., Xing, H. and Chiang, C. [2020]. Noise strikes, but signal wins in Full Waveform Inversion. SEG 90th Annual Meeting.
Mao, J., Sheng, J., Huang, Y., Hao, F. and Liu, F. [2020]. Multi-Channel dynamic matching full-waveform inversion. SEG 90th Annual Meeting.
Mispel, J., Furre, A-K., Sollid, A. and Maaø, F.A. [2019]. High Frequency 3D FWI at Sleipner: A Closer Look at the CO2 Plume. 81st EAGE Annual Conference & Exhibition.
Quirk, D., Underhill, J., Gluyas, J., Wilson, H., Hiwe, M. and Anderson, S. [2021]. The North Sea through the energy transition. First Break, technical article.
Ringrose, P. [2018]. The CCS hub in Norway: some insights from 22 years of saline aquifer storage. International Carbon Conference 2018.
Ryan, C., Liao, K., Moore, H., Westerdahl, H., Thompson, M., Pedersen, Å.S., Mispel, J., Wierzchowska, M., Dehghan-Niri, R. and Biryaltseva, Y. [2024]. Short Streamer Acquisition – the Potential and the Challenges. 3rd EAGE Geoscience Technologies and Applications Conference.
Zuberi, M.A.H., Cho, E., Seher, T. and Myklebust, R. [2023]. Mitigating the effects of guided waves in OBN data for acoustic FWI using data reconstruction: A data example from the Yggdrasil area. IMAGE 2023, Extended Abstracts.
Zuberi, M.A.H., and Pratt, R.G. [2018]. Mitigating nonlinearity in full waveform inversion using scaled-Sobolev pre-conditioning. Geophysical Journal International, 213(1), 706-725.
Industry-first deployment of simultaneous source acquisition using a Dispersed Source Array with Tuned Pulse Source and conventional airgun for a shallow water seismic survey offshore Malaysia
Maxime Benaniba1*, Jeremy Aznar1, Stephane Laroche1, Philippe Herrmann1, Shuki Ronen1, Julien Large1, Shamsul B Shukri2, Tasvir Kaur Rajasansi2, Sukhveender Singh2, Law Chung Teck2, Faizan Akasyah2, Chetan Anand Karri3, Ewan Neill3 and Craig Walker3 present a shallow water case study where a Tuned Pulse Source and conventional airguns were deployed simultaneously, in Dispersed Source Array mode, to acquire seismic data with Ocean Bottom Node and borehole distributed acoustic sensor receivers to ensure optimum low-frequency signal from the emission stage to 4C fidelity sensor recording.
Abstract
We present a shallow-water case study offshore Malaysia where a Tuned Pulse Source (TPS) and Conventional AirGuns (CAG) were deployed simultaneously, in Dispersed Source Array (DSA) mode, to acquire seismic data with Ocean Bottom Node (OBN) and borehole distributed acoustic sensor (DAS) receivers. Both TPS and CAG were deployed from a single source vessel, using a single compressor package, in a single pass. This industry-first deployment was also the first time the TPS source and MEMS-based OBNs had been deployed in tandem, ensuring optimum low-frequency signal from the emission stage to 4C fidelity sensor recording.
Introduction
The first successful seismic acquisition and imaging project (Meritt et al., 2024) conducted with the Sercel TPS, a low-frequency and environmentally friendly seismic source, in the Gulf of Mexico validated the value of its pre-survey analysis tools (source configuration and planning), and the efficiency and reliability of its deployment in a production environment (full azimuth out to 40 km offset, and 60 km offset in the inline direction). Six weeks after retrieval of the last OBN, an unprecedented sub-salt full-waveform inversion (FWI) image was delivered, revealing deep geological features that had previously been invisible. This case study took place in deep water, with OBN recording and using only the TPS as a source, over an area already covered by legacy full-azimuth (FAZ) towed-streamer data acquired with a CAG source.
Why deploy the TPS in DSA mode with a CAG array?
Some of the most challenging E&P projects are conducted in complex geological settings, involving chalks, basalt, carbonate
or salt bodies, which are responsible for wavefield scattering above targets of interest. In such environments, the lack of low frequencies, combined with limited acquisition offsets and azimuths, is the main factor that can reduce penetration of the useful seismic signal. When current data-driven inversion algorithms, such as FWI, are applied in these complex environments, they are subject to local minima, also known as cycle skipping, and depend on subjective a priori information. Inverted velocity and reflectivity models therefore do not accurately depict the subsurface. Limited seismic signal penetration impacts project life cycles, prospect identification, and well placement.
Seismic surveys that combine broadband sources starting from a very low frequency (1 Hz), covering at least 7 octaves, recorded over wide-azimuth and long-offset geometries, are required for a robust inversion process with minimum a priori information. To efficiently address the long-offset, wide-azimuth requirements, OBN acquisition is the solution, and, in this Malaysia case study, Sercel GPR300 MEMS-based OBNs were used. To generate the valuable low frequencies, for a faster and more robust FWI (Meritt et al., 2024), a TPS source was deployed in addition to a CAG array. The TPS added one and a half additional octaves compared to the CAG alone. By enhancing the signal-to-noise ratio (SNR) in the low-frequency part of the seismic spectrum, TPS enables clear identification of subsurface structures at longer offsets. It also leads to a higher peak-to-sidelobe ratio of the seismic wavelet, which results in higher resolution, mitigating interference caused by wavelet sidelobes. Here, we present a survey acquired with an efficient operational solution that combines the benefits of a CAG array (5 to 100 Hz) with the lower-frequency information from the TPS source (below 5 Hz) in a single pass with a single source vessel.
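The octave counts quoted above follow from the definition octaves = log2(f_max / f_min); a quick check with the numbers in the text (our arithmetic, not from the paper):

```python
import math

# Bandwidth in octaves between two frequencies.
def octaves(f_min_hz: float, f_max_hz: float) -> float:
    return math.log2(f_max_hz / f_min_hz)

print(octaves(1.0, 128.0))                    # 7.0: "at least 7 octaves" from 1 Hz
print(round(octaves(5.0 / 2**1.5, 5.0), 1))   # 1.5 octaves added below 5 Hz
```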
DSA principles
Historically, marine seismic acquisition has used large arrays of impulsive sources with volumes ranging from a few tens to a few hundreds of cubic inches (cu.in). In most seismic surveys, all the sources in an array are shot at the same time in a synchronised mode, or with subtle delays between the sources to enable bubble tuning. Another approach, known as Dispersed Source Array (DSA) (Berkhout, 2012), is to shoot sources with different characteristics at intervals of a few seconds. Each source type is called a voice, and such surveys are referred to as multi-voice acquisitions. There are various geophysical and operational benefits to operating the different sources as a DSA as opposed to deploying all sources simultaneously. The first advantage of the DSA approach is the use of more compact arrays, which can be reduced to point sources, as is the case for the TPS. Point sources emit an azimuthally isotropic source wavelet free of array-induced effects such as multiple arrivals affecting near and mid offsets. More importantly, single sources reduce the risk of overdrive or amplitude saturation, often observed on near-offset sensors, especially in shallow-water environments. Another important advantage of the DSA, covered in this paper, is the more efficient use of compressed air enabled by different shot intervals. Low-frequency sources, which use a lot of compressed air and require longer listening times for deep targets, can be deployed more sparsely than high-frequency sources, which use small amounts of compressed air and require shorter listening times for shallow targets.
The joint de-signature process involved in DSA has already been described in detail (Berkhout, 2012; Ronen et al., 2022; Allemand et al., 2023). De-signature can be performed either in the time or in the frequency domain, as well as in the data or image space. In the data frequency domain, it can be written as

r = s̄d / (|s|² + ε),   (1a)

where r is the reflectivity, d is the data, s is the source signature (the upper bar stands for complex conjugate) and ε is the pre-whitening factor that prevents division by zero. ε is typically a constant value, or frequency dependent with ε = λN, where N is the average instrument and ambient noise and λ a constant factor. If we have two sources, then joint de-signature reads

r = (s̄₁d₁ + s̄₂d₂) / (|s₁|² + |s₂|² + ε),   (1b)

with two input data, d₁ and d₂, related to two source signatures, s₁ and s₂. From the above expression, we define the equivalent DSA source spectrum as

|s_DSA|² = |s₁|² + |s₂|².   (1c)

The scheme can be extended to any number of sources.

Figure 1 Spectrum of the pre-survey simulations (TPS spectrum, CAG array spectrum, combined DSA spectrum).

Figure 2 Pressure wavefield modelling. On the left, the CAG array with 14 airguns of different volumes; on the right, the TPS point source. We can see the time delay for the different pressure waves generated by the array, leading to a multi-arrival source wavefield with a strong azimuthal dependency. For the point source, the pressure map is simpler: single arrival, azimuthally invariant.

TPS and CAG pre-survey source modelling
Our proprietary source modelling capabilities were used to model the 3D wavefield generated by a single TPS (28,000 cu.in at 1000 psi) and the CAG array historically used in this area (Figure 2).
From the simulation, the CAG array and TPS amplitude spectra are available to derive the DSA amplitude spectrum (1c), as shown in Figure 1. The CAG array of ~3500 cu.in at 2000 psi emits a signal with a relatively flat spectrum between 6 Hz and 100 Hz. The TPS of 28,000 cu.in at 1000 psi provides a signal increase of 18 dB below 5 Hz. We can also note some peak and notch complementarity between the two sources within the 5-12 Hz bandwidth. DSA combines the best of the TPS and CAG array sources with a robust handling of the point-source notches. DSA is a flexible digital array offering an alternative to conventional hard-wired CAG arrays. Pre-survey modelling is also useful to predict any signal saturation of the hydrophones and accelerometers. This is of particular interest in shallow water with OBN deployment close to the source. Accurate modelling can also be of use for QC or processing purposes, such as the reconstruction of saturated samples and near-field hydrophone (NFH) modelling. One can see the good correlation between the simulation and the real data recorded by the node for different angles of incidence (Figure 3). This highlights the ability of the simulations to model accurately the directivity effect for a point source.

How to configure a source vessel for DSA surveys?
1/ Hybrid survey operations
The main survey objective was to demonstrate that TPS and CAG sources can be deployed simultaneously from a single seismic vessel on a commercial survey (Figure 4). CAG arrays and low-frequency TPS sources have complementary signatures, which means that their combination delivers the widest bandwidth possible.
However, the two sources operate at different pressures: 2000 psi for the CAG array and 1000 psi for the low-frequency TPS, due to the mechanical limitations of the firing chamber. This poses a number of challenges to accommodate the requirements of both systems and allow safe operations using both seismic sources without the risk of overpressure.
The TPS solution was integrated within the existing vessel setup, and a pressure regulation system was added to the vessel's high-pressure lines. The pressure regulation system ensures that pressure inside the TPS remains below 1000 psi while the conventional sources are fed at 2000 psi. The system has regulators to decrease the air pressure and a control valve as a safety device to bleed the umbilical if needed (Figure 5).

Figure 3 Signal from the TPS received by a node, plotted in the time domain (left) and in the frequency domain (right).

Figure 4 TPS ready to be deployed from the source vessel back deck (full TPS installation video).

Figure 5 Schematic of the pressure regulation system installed on the seismic vessel to conduct a hybrid survey.
This survey has successfully demonstrated that it is possible to deploy and operate CAG arrays in parallel with the low-frequency TPS to get the best out of both sources. Figures 6a and 6b illustrate the TPS firing pressure and the manifold pressure, where we can observe the ability of the pressure regulation system to keep the TPS below its maximum operating pressure (1000 psi). During this survey, the firing pressure was set at 900 psi to safely absorb excess pressure related to slowdowns in navigation speed, which imply more charging time to reach the next predefined firing position. This margin would not be needed with a pressure-based firing strategy.
2/ Simulation and model
In addition to this operational deployment of a hybrid source from a single vessel, Sercel has developed the tools and expertise to accurately simulate the airflow rate from the compressors to the source (Figure 7). This knowledge has been used to build a model that can predict minimum refilling times for hybrid surveys (Table 1). Our tool is key to designing the shot grid with complex source setups such as hybrid configurations. Figure 7 compares field data with a simulation for a triple-source design consisting of one 28,000 cu.in TPS source and two CAG arrays of 3460 cu.in.

Figure 6 A. TPS firing pressure distribution (+99% within a firing window of +/- 50 psi). B. Manifold and TPS firing pressure graphs during the hybrid survey.

Figure 7 Comparison between field data and the simulation model prediction (source: TPS 28,000 cu.in + CAG 2 x 3460 cu.in). Note the good match between the modelled TPS refilling curves (purple) and the TPS chamber pressure data (blue dots).

Figure 8 DSA shot carpet illustration for the hybrid survey (TPS 100 m x 100 m in blue, CAG 50 m x 50 m in red).
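The refill-time budgeting behind such a prediction model can be roughed out with a back-of-envelope estimate (our own toy calculation under stated assumptions; the compressor capacity below is hypothetical, and Sercel's actual model is far more detailed):

```python
# Toy lower bound on refill time: free-air demand of one shot
# (chamber volume x pressure ratio) divided by compressor delivery.
ATM_PSI = 14.7
CUIN_PER_CUFT = 1728.0

def refill_time_s(chamber_cuin: float, firing_psi: float,
                  compressor_scfm: float) -> float:
    free_air_cuft = (chamber_cuin / CUIN_PER_CUFT) * (firing_psi / ATM_PSI)
    return 60.0 * free_air_cuft / compressor_scfm

# 28,000 cu.in TPS at 1000 psi, hypothetical 2000 scfm compressor capacity
print(round(refill_time_s(28_000, 1000.0, 2000.0), 1))  # ~33 s
```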
From a more general perspective, our prediction model can be used to build a table for a range of configurations, with umbilical lengths from 100 m to 400 m and diameters of 1″ and 1¼″.
Table 1 shows the refilling times and related SPIs (shot point intervals) for source vessel speeds of 3 knots and 4 knots for various umbilical configurations. The source used to build this table is a triple source with:
• TPS 28,000 cu.in (@ 1000 psi)
• 2 x CAG 3460 cu.in (@ 2000 psi) firing every 12 s
The impact of a bigger conventional source is quite marginal: about +1 s per additional 1000 cu.in. Note: 1 knot = 0.514444 m/s or 1.852 km/h.
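The relationship between refill time, vessel speed and achievable shot-point interval used in such tables is simple arithmetic (illustrative numbers, not Table 1 values):

```python
KNOT_M_PER_S = 0.514444  # 1 knot = 0.514444 m/s

def min_spi_m(vessel_speed_knots: float, refill_time_s: float) -> float:
    """Smallest along-line shot-point interval (m) a refill time allows."""
    return vessel_speed_knots * KNOT_M_PER_S * refill_time_s

# e.g. a voice firing every 12 s at 4 knots
print(round(min_spi_m(4.0, 12.0), 1))  # 24.7 m
```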
Field data recorded by ocean bottom nodes
From the above analysis, and taking into account imaging sampling requirements (denser for mid to high octaves, sparser for low octaves), a DSA shot carpet was designed and acquired with a 100 x 100 m TPS grid and a 50 x 50 m CAG grid (see Figure 8). To address the need for long-offset and full-azimuth acquisition, GPR300 OBN nodes were deployed on the seafloor at a depth of 50 m. The 3C MEMS (micro-electromechanical systems) sensors were selected for their 3C sensing fidelity, especially at low frequencies. Figures 9 and 11 nicely illustrate how effective the TPS source is at generating the low-frequency seismic Earth response: the hydrophone data, in a common receiver gather (CRG), for two source lines, are chunked according to TPS firing times over an extended time window [-25 s, +20 s]; low-velocity sea-bottom surface waves clearly stand out (high SNR) with the TPS source compared to the CAG source.
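The dual-density shot carpet described above (TPS on 100 m x 100 m, CAG on 50 m x 50 m) can be sketched for a hypothetical 1 km x 1 km patch; the dense voice fires roughly four times as often per unit area:

```python
import numpy as np

# Regular shot grid over a square patch (illustrative layout only).
def shot_grid(extent_m: float, spacing_m: float) -> np.ndarray:
    xs = np.arange(0.0, extent_m + spacing_m, spacing_m)
    return np.array([(x, y) for x in xs for y in xs])

tps = shot_grid(1000.0, 100.0)   # sparse low-frequency voice
cag = shot_grid(1000.0, 50.0)    # dense mid/high-frequency voice
print(len(tps), len(cag))        # 121 441
```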
The additional robust subsurface information brought by the TPS source can also be illustrated on the first-break (FB) data. Figure 10 displays low-frequency (1.5 Hz and 5 Hz) phase rings (for each source position (xs, ys), the phase at a given frequency is displayed). The phase continuity around the rings is related to the SNR of the information: continuity indicates that the signal is dominant, while discontinuity indicates that the noise is dominant. As expected, the FB phase rings show a significantly higher SNR with the TPS source compared to the CAG array. The final purpose of the DSA approach is to combine the best of the TPS and CAG sources to emulate a rich multi-octave source, from very low up to very high frequencies. Figure 11 displays the TX CRG for one source line, and Figure 12 displays the FX amplitude spectra for the TPS, the CAG source and the combination (TPS+CAG).

Figure 9 Hydrophone continuous record aligned on TPS firing time [-25 s:20 s] for one node and two source lines. The data are chunked with the TPS shot times; hence the TPS appears continuous, while the four CAG shots do not, due to variations in vessel velocity. Note the stronger Scholte waves that are induced by the longer wavelength of the TPS compared to the depth of the seabed.

Table 1 TPS refilling times and shot point interval simulation for a hybrid survey with triple source.
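The phase-continuity reading of the rings can be mimicked with a toy score (our own construction, not the QC actually used): at a fixed frequency, signal-dominated first breaks give phases that vary smoothly from shot to shot, noise gives random phases, and the magnitude of the mean phasor of the phase increments separates the two:

```python
import numpy as np

# Toy phase-continuity score: ~1 for smoothly varying phase (signal
# dominant), near 0 for random phase (noise dominant).
def phase_continuity(phases_rad: np.ndarray) -> float:
    d = np.diff(np.unwrap(phases_rad))
    return abs(np.mean(np.exp(1j * d)))

rng = np.random.default_rng(0)
smooth = np.linspace(0.0, 40.0 * np.pi, 400)   # signal-like phase ring
noisy = rng.uniform(-np.pi, np.pi, 400)        # noise-like phase ring
print(phase_continuity(smooth) > 0.9, phase_continuity(noisy) < 0.5)  # True True
```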
Conclusions and way forward
This case study validates the possibility of efficiently operating the TPS source with a CAG array source simultaneously using a single source boat with existing compressor capacity in DSA mode. This combination efficiently produces an effective broadband seismic source starting from low frequencies over several octaves (+25 dB at 3 Hz compared to the CAG array alone). In the pre-survey analysis phase, simulation tools were essential for the design of the activation sequence for each source to safely optimise productivity. The post-survey analysis confirmed the accuracy of Sercel modelling solutions.
Figure 11 Common receiver gather in the TX (time, offset) domain for one source line: TPS point source (left), CAG array (centre), and TPS+CAG (right). Note the lower-frequency content of the TPS and the stronger Scholte waves that are induced by the longer wavelength of the TPS compared to the depth of the seabed. The Scholte waves are aliased with 100 m shot intervals.
Figure 10 For a single node, hydrophone (P) and vertical acceleration (Z) phase rings at 1.5 Hz and 5 Hz for TPS and CAG over a 7 x 7 km source carpet. Inner, small spatial wavelengths correspond to the slow-velocity surface waves; outer, longer spatial wavelengths correspond to the high-velocity first breaks.
The next step could involve the deployment of multiple source boats, with simultaneous firing on irregular grids and with some level of source encoding.
Acknowledgements
We would like to thank Petronas, SAExploration and Sercel for permission to publish this work. We also thank the SAExploration acquisition team and all vessel crew members for safely and successfully executing operations with this new source setup.
References
Allemand, T., Herrmann, P., Laroche, S., Ronen, S., Aznar, J., Large, J., Baeten, G., Kryvohuz, M., Perkins, C., Shang, X., Tang, Z., Theriot, C. and Wang, K. [2023]. Joint deblending and designature of multi-voice data: a marine example. 84th EAGE Annual Conference & Exhibition.
Figure 12 Common receiver gather in the FX (frequency, offset) domain for one source line: TPS point source (left), CAG array (centre), and TPS+CAG spectrum (right). The variation in signal level as a function of frequency relate to the respective TPS, CAG and DSA source spectra, while the offset variations relate to the offset-dependent reflectivity. Note that below 5 Hz the TPS source is driving the signal. The DSA brings together the best of the two sources.
Aznar, J., Kuvshinov, B., Baeten, G., Macintyre, H., Large, J. and Ronen, S. [2022]. Successful modelling and sea trial of new low frequency sources using standard onboard air supply. First Break, 40(11).
Caldwell, J., and Dragoset, W. [2000]. A brief overview of seismic air-gun arrays, The Leading Edge, 19, 898-902.
Meritt, M., Baeten, G., Rambaran, V., Godfrey, K., Bianchini, K., Brothers, T., Chelminski, F., Hao, A., Gao, H., Su, Y. and Wei, Z. [2024]. Revealing the subsalt in Garden Banks with a sparsely-shot TPS OBN and FWI. IMAGE24 Annual Meeting.
Ronen, S. and Chelminski, S. [2017]. Tuned Pulse Source – A New Low Frequency Seismic Source. SEG 87th Annual Meeting.
Ronen, S., Allemand, T., Laroche, S., Herrmann, P., Macintyre, H., Kryvohuz, M. and Baeten, G. [2022]. Joint designature of data with a diversity of sources. SEG/AAPG International Meeting for Applied Geoscience & Energy.
OBN for the energy transition era
Robert Basilli1* outlines a fresh approach to OBN survey work, introducing a new class of vessels and utilising advances in remote OBN operations including ROVs and robotics.
Introduction
The marine seismic industry has a remarkable history of being a leader in embracing the possibilities of new technology as it has emerged. It is a phenomenon that dates back to transformative changes such as the move from analogue to digital recording, the shift to 3D multi-streamer acquisition, advanced navigation and positioning, and onboard data processing, now superseded by satellite communication.
In this regard, we believe the evolution of ocean bottom node (OBN) seismic acquisition is poised to benefit from refocusing on marine operations with a new concept vessel and ROV set-up that implements step-change advances in remote and robotic technology. This not only improves the cost, safety and efficiency of surveys but also meets the imperative of a reduced carbon footprint, a needed alternative to conventional, decades-old, high-emission operations. It is probably 15 years since oil and gas companies were persuaded that the imaging benefits provided by recording nodes on the seabed were overwhelmingly superior to ocean bottom cable and towed-streamer options in certain contexts, and therefore worth the investment, although cost and logistics were still not optimal. The arrival of commercially viable OBN coincided with changing oil company E&P strategies to adapt to the uncertainties of oil and gas supply and demand in a period of energy transition. OBN has proven to be the technology of choice for oil companies now targeting optimisation of existing resources via 4D seismic monitoring and a focus on infrastructure-led exploration (ILX) and so-called 'advantaged oil'. Seabed seismic operations are now closing in on a 50% share of the total offshore seismic acquisition market in an extraordinarily short space of time.
That is where the good news ends. It is no secret that oil company customers are frustrated by the cost and duration of commissioned OBN surveys, from survey plan to final processing results, plus the need to lessen their environmental impact. To take one example, which indicates in general terms some of the key issues, our analysis of several OBN surveys conducted in the GoM over the last few years suggests the average deployment speed of OBNs has been between 0 and 1 knot, depending on the node spacing, water depth and environmental conditions. The short battery duration of the nodes employed has on occasion meant deploying multiple source vessels in order to acquire the data in the survey area before the batteries run out.
In order to meet client concerns (and achieve a technology edge over competitors), service suppliers continue to address the efficiency of launch and retrieval systems for nodes, improved performance of the nodes themselves, data delivery, etc. But change has been gradual and in small increments.
In this article, we outline a fresh approach, offering a comprehensive solution to many of the current shortcomings in OBN survey work. It introduces a new class of lean-crewed vessels and advances in remote OBN operations including the latest developments in ROVs and robotics. The focus has been on improved efficiency, safety, and data quality plus a significant reduction in environmental impact.
Figure 1 Eight of the Armada 78 m vessels have been delivered and are in commercial operation.
Vessel operation step change
Lean-crewed
Ocean Infinity’s revolutionary vessel design philosophy prioritises the strategic onshoring of data processing and payload control while allowing for measured, regulated onshoring of marine crews. This approach, among other things, reduces offshore exposure days as well as flight and accommodation requirements.
This vision is embodied in Ocean Infinity’s new class of purpose-built lean-crewed vessels first launched in 2022. The company’s fleet order includes both A78m and A86m designs. The A86s, particularly suited to OBN operations, feature two new-build electric work-class ROVs providing exceptional power and flexibility while enabling significant onshore control.
Some may already know that Ocean Infinity's headline entry into the marine industry began in 2016 with the launch of the world's largest fleet of autonomous underwater vehicles, equipment used for various applications including high-profile seabed searches such as that for missing Malaysia Airlines flight 370.
The Armada A78 vessel fleet, already active globally, showcases the company's commitment to onshoring operations. The vessels were specifically engineered to allow data processing and payload control to be conducted from global onshore operation centres. This shift to onshore activities marks a significant step towards potentially crewless vessels for certain vessel operations in the future. In this respect, Ocean Infinity has received a Statement of Compliance with the DNV-CG-0264 'Autonomous and remotely operated ships' guideline from certifying authority DNV, the first in the industry.
Ocean Infinity has developed proprietary dynamic payload control software that enables remote operation of vessel equipment. This software system allows onshore teams to monitor and control multiple payload systems simultaneously, adapting to real time conditions while maintaining optimal performance. The software’s intuitive interface and automated decision-support features are designed to ensure that remote operators can effectively manage complex offshore operations, further reducing the need for onboard personnel.
For marine seismic projects, each A86m vessel maintains a dedicated maritime crew for real-time ship supervision and navigation while moving ROV and geotechnical teams to remote operational centres. This design choice enables a true alternative to conventional OBN survey systems. From purpose-built workspaces, onshore teams can pilot vehicles, control payloads, view live data, and oversee processing and data management to meet survey requirements while delivering the highest-quality results. While the operational procedure for data acquisition and processing remains similar to conventional approaches, the vessel's design still facilitates the onshore location of some staff, thereby reducing offshore presence.
In summer 2024, as operator and supplier of nodes, node handling and data recording/management, Ocean Infinity mobilised the A78-01 vessel to carry out OBN seismic acquisition for Equinor in the North Sea. This first successful commercial project adopted a basic approach, with a single ROV being deployed and reloaded with nodes on deck between dives. It provided a stringent test for a lean-crewed operating philosophy, with only 16 people performing the operation offshore and the remaining crew working from the control centre in Gothenburg, Sweden.
The prospect of fewer people needed offshore obviously reduces HSE exposure, the safety of crew always being the priority. Visits to the Armada vessels will still be required to conduct routine maintenance, equipment changes to support specific operations, or other high-intervention work scopes. However, the duration of personnel exposure will be minimised and will potentially reduce further over time with experience and further developments.
An important additional benefit comes from the incidental costs saved through less international travel and fewer hotel stays, which also have a bearing on environmental impact and carbon footprint.
Figure 2 The world’s largest fleet of Hugin AUVs – fly-away or permanently installed.
Figure 3 At the heart of the Armada fleet concept are Ocean Infinity’s remote control centres. The Southampton centre houses 22 operator control bridges.
Fuel efficiency
The Armada fleet is being constructed with fuel efficiency and environmental requirements in mind. Vessels are equipped with the latest diesel-electric battery hybrid propulsion systems to improve efficiency and reduce emissions. In addition, pre-installed systems allow a seamless transition to alternative fuels such as methanol and ammonia when this is deemed feasible.
ROV innovation
A major disruptive application in the Ocean Infinity operational concept is likely to be the introduction of the all-electric work-class ROV (eWROV), in development with Saab, with first deliveries scheduled for 2025.
The electric thrusters are the key to the performance of the eWROVs. Generating 560 kgf of thrust, the units should unquestionably be more power-efficient than hydraulic systems, offering better acceleration, braking and reversal. The state-of-the-art DC power transmission system is expected to enable significantly smaller and lighter units, both topside and subsea, allowing for the use of smaller-diameter umbilicals as well as existing client-owned umbilicals.
Speed and acceleration tests are being planned to provide input to and refinement of the concept development. Crucially, the speed through water of a fully laden eWROV with payload (skids fully loaded with nodes) will be assessed. We believe that speeds of greater than 3.5 knots can be achieved, which would be a huge improvement on current industry standards. Initial tests are planned for the North Sea and will therefore not be representative of the ~3000 m water depths of the Gulf of Mexico. Nor will they include the electric tether management system (TMS) which is under development.
To fully realise the environmental and efficiency benefits of electric ROVs, Saab has developed the Seaeye eM1-7 Electric Manipulator, thought to be the world’s most advanced electric manipulator yet, planned for incorporation in the new OBN operational solution, with sea trials ongoing. The manipulator’s lift capacity and range of motion are being designed to exceed hydraulic equivalents, particularly in achieving more precise positioning with force feedback, increased dexterity (seven dual-function configurations), lower weight in water and greater reliability. For accuracy, units are based on modular electric joints to enable enhanced arm control, path-planning solutions and actuator reuse. Aluminium construction should deliver the necessary reliability.
Operational deployment of nodes will require a bespoke skid or basket, with efficiency focused on packing, deployment, recovery and reloading. The eWROV itself will be equipped with a range of positioning, survey and visual equipment. This can provide precise deployment positioning data, while video/HD images can identify any benthic habitats and confirm seabed conditions during deployment and recovery.
Adding to existing technology
The new vessel/ROV concept has the potential to add an extra dimension to current state-of-the-art technology and best practice. We take some examples from existing and emerging technology developments by inApril, soon to merge with seabed seismic contractor SAExploration, which is committed to operating Ocean Infinity lean-crewed automated vessels.
Node design
There is not much differentiation in the current generation of recording nodes available on the market; failure rates are negligible, such is their reliability. The emphasis these days is on longer battery life to sustain longer acquisition periods, continued reduction in size and weight (allowing ROVs to load more units before having to refill the skid), and depth ratings beyond the current 3000 m standard. Operating water depths of 4000 m and much longer battery durations are within reach.
Battery duration
Recent advances in battery technology mean nodes can be fully recharged in about 12 hours, while data is downloaded and the node is prepared for redeployment. The current 150-day battery life gives the freedom to deploy the full node spread before the source is mobilised, and to use only one source vessel where two were previously necessary, an important saving in vessel days and mobilisation. It also reduces the risk of a full fleet being idle because one vessel has technical challenges.
Figure 4 Twelve eWROVs will join the Armada fleet with Ocean Infinity remote technology in 2025.
The next step may well be extending the battery life beyond 150 days, for example to 240 days, which would open the possibility of acquiring two or three surveys in one project. It could mean acquiring long-offset data over one field at the same time as near- or zero-offset data over another. We already know that OBN surveys with offsets up to 70 km can contribute to data uplift, and that operating the source over separate node patches at the same time can greatly increase efficiency and the possibility of acquiring long-offset data. In all cases, mobilisation and demobilisation costs may be reduced by up to 50% if surveys can be acquired in tandem.
Back deck solution
All data handling systems aim for the same goal of maximum automation and elimination of manual handling of nodes, to ensure the whole process is as safe and efficient as possible. We envisage a solution based on experience to date, also informed by existing Ocean Infinity designs for geotechnical sample skidding and handling. At the moment our nodes are launched by a fully automated system from container storage to the staging area before deployment via the moonpool, either by the vessel crane or by the modular handling tower over the moonpool. In solutions with two eWROVs in operation, the units will be deployed by the vessel crane. All such operations will be controlled from the remote control centre, with sufficient monitoring in place to ensure the safety of personnel onboard the vessel.
The staging container will use existing systems for node handling using robotic arms to position the nodes into the racks which provide the charging, health checks and data downloads of each node. In this way a safe, environmentally controlled
location is provided for the storage of thousands of nodes on the back deck.
AI and machine learning
During OBN deployment, ROV operations may take place over the horizon, where latency affects control commands. AI and machine learning support is seen as the best way to maintain fine control of the ROV in these conditions, and to improve through-water efficiency generally. During a project, minimising contact with the seabed, to avoid disturbing sediments and restricting visibility, is also important, and AI can again play a role. The same applies in the recovery phase: with sufficient training, the ROV and its systems can recognise the location, position and orientation of a node and deduce the appropriate action to take. All such automation alleviates the workload of the ROV pilots, in the process mitigating the impact of control latency between the vessel and the control centre.
Source solutions
OBN survey operations as described here are flexible as far as source configuration and shot density are concerned. For example, we are aware of an Aker BP initiative in which moving away from repeating shot positions in 4D was tested in favour of shooting denser and wider, creating a dense shot carpet over the OBN grid (see Time-lapse dual-source and hexa-source OBS imaging for the Edvard Grieg Field by F. Twynam et al., 2nd EAGE Seismic Today workshop, January 2023). In addition to the claimed increase in operational efficiency and reduction in survey duration, using a smaller source volume presented several additional operational, environmental and data quality benefits.
Data management
Every company has its own data management system in place, and inApril has been no different, so any new concept can lean on existing knowledge and experience. The system’s role is to provide the main interface with the node, handling data download and merging with navigation data to produce the data deliverables. It is scalable to any node count and comes with a number of modules: a node interface module automatically starts data download, clock-drift measurement, battery charging and self-tests without operator interference as soon as a node is docked; a data quality module automatically monitors the data download process and, if specified, provides further QC through a GIS-based user-friendly interface; and a transcription module generates final deliverables in SEG formats, with data ready within a few hours after retrieval.
OBN: the way forward
In conclusion, it may not be too much to say that OBN seismic acquisition has reached a pivotal moment in its technology evolution. As we see it, the benefits of fully automated systems using robotics and digitalisation advances are within our grasp to significantly improve the cost, efficiency, safety and environmental impact of operations. The introduction of the lean-crewed vessel with remote and robotic technology applications for ocean bed surveys may not be the only solution to emerge in the years to come. But for now, it seems to offer an extremely promising route to provide what the industry needs in the energy transition era.
Figure 5 Containerised automated node handling and staging.
Drone-borne marine magnetic surveys for UXO and cable/pipeline tracing
Alexey Dobrovolskiy1* explains why drone-borne marine magnetic surveys are coming into their own, especially to aid the construction of offshore wind farms.
Introduction
Increased economic activity in coastal areas, especially the construction of wind farms around the world, has raised the demand for magnetic data acquisition in construction areas. The main purpose of these magnetic surveys is to search for UXO in coastal areas (across Europe and some other regions) and to precisely locate previously laid cables and pipelines to avoid damaging them during new construction work.
The most popular and standard tools for these surveys are towed magnetometers, either in the form of a tow-fish with a single magnetometer or multiple sensors mounted on a frame to collect data with higher resolution and cover wider lines in a single pass.
No matter what sensor or frame is used, it will require a boat or ship, a trained crew, and water deep enough to safely navigate the boat and to maintain the required elevation of the sensor above the bottom (usually achieved by manipulating the speed of the boat). Also, during the survey, the sensor (tow-fish, frame) should be deep enough to avoid disturbance from surface waves.
Another option for marine magnetic surveys, quite exotic at the moment, is a magnetometer mounted on an underwater remotely operated vehicle (ROV). The benefit of these systems is that they can survey from very close distances, but they can cover only very limited areas. They are mostly instruments for close-up investigation of suspected targets detected using other systems, especially because an ROV can be equipped with multiple sensors (cameras first of all) and manipulators.
Therefore, the capabilities of existing systems create natural ‘blind’ areas for traditional methods — shallow waters, surf zones, and foreshore — that should be surveyed before laying
pipelines or installing wind farm export cables. Here, drone-borne magnetometer systems come to the scene.
Drone-borne magnetometer systems
Drone-mounted magnetometers are already widely accepted tools for UXO and utility detection in surveys over land, especially for large unobstructed areas, where they bring productivity gains of up to 6-10 times compared with terrestrial methods. Sometimes, a magnetometer on a UAV is the only option if the job must be done without entering the survey area.
Methodology and application workflows for UXO detection are well developed, and there are sources of information on the detection ranges for typical objects of interest, such as Magnetic models of unexploded ordnance (Billings et al., 2006) and Estimates of various UXO (unexploded ordnance) maximum detection distances using magnetometers (Dobrovolskiy and Brants, 2024). However, the use of drone-borne magnetometer systems in marine environments is still new, and not many cases and publications about that scenario are available in the public domain. This is especially true for UXO surveys because of the high sensitivity of the topic: no customer of these surveys wants to disclose anything except very high-level information. As a standard rule, data, precise locations and findings are not disclosed. Below, we share a real-world case study for a pipeline localisation survey.
Figure 1 Geometrics G-882 marine magnetometer, which can be mounted on a TVG (Transverse Gradiometer) frame. Photo courtesy of Geometrics, Inc.
Figure 2 SENSYS SEARACK ROV-mounted magnetometer array with 4x fluxgate sensors. Photo courtesy of SENSYS GmbH.
Use of towed and UAV-mounted magnetometer systems
Datasets, photos and project descriptions were provided by Shore Monitoring & Research, the Netherlands. The purpose of the survey was to precisely locate previously laid cables and pipelines and to find all ‘point’ targets for subsequent identification. The survey had to cover the area starting from the shoreline, so the decision was made to utilise a towed magnetometer system (2x Geometrics G-882 marine magnetometers mounted on a TVG frame) for the ‘deep’ part and a UAV-borne magnetometer (SPH Engineering’s MagNIMBUS magnetometer in gradiometer configuration) for the ‘shallow’ part.
Data collected using the towed submersible magnetometer system covered the planned area where it was safe to navigate the survey boat (Figure 5). The long linear anomaly is the export cable from an offshore wind farm.
The ‘gap’ between the shoreline and the area covered with the towed magnetometer was surveyed using a drone-mounted MagNIMBUS gradiometer system deployed from the shore (dike). Drone flights were planned to extend approximately 100 m from shore so that the two data sets overlapped for comparison and cross-checking.
The survey made it possible to trace the centreline of the offshore wind farm export cable, which was the primary target. It also revealed one more AC cable and multiple point anomalies in both the deep and shallow parts of the scanned area. A strong magnetic anomaly over the dike is caused by reinforcement in the concrete.
Sean Zandbergen, Shore Monitoring & Research, said: ‘We were happy with the results based on the overlap of the two datasets. Using a UAV magnetometer in this industry is a bit new, so we and our client were also curious about the result. It’s easier to process the drone data since the positioning is so much better/more accurate. The line keeping while towing a submersible magnetometer system is challenging, while the drone autonomously flies an accurate line plan based on RTK-GNSS. Therefore, it’s easier to line up consecutive lines of the survey. The position accuracy of towed equipment is a bit lower since it is all based on the position of the vessel. Either by calculating the lay-back position (in the shallow) or acoustic/USBL (in >5 m water depth), the position of the sensor is determined.’
Figure 3 Left: UAV-mounted magnetometer system during a survey over the field; right: 76 mm WW2 artillery shell detected using this system near the town of Bauska in Latvia. The cigarette pack is used as a scale. Photos courtesy of SPH Engineering.
Figure 4 Left: The drone with the MagNIMBUS magnetometer in front of the survey boat. Right: The operator monitoring the drone flight.
Planning considerations
Here, we will assume that a survey using a UAV-mounted magnetometer will be combined with a more traditional survey using a towed magnetometer system, primarily because the detection probability of drone-borne systems decreases quickly with depth. The only feasible exception is when it is known that the airborne system can detect all anticipated targets across the entire depth range of the survey area. For combined surveys, there are two possible approaches to planning:
• Survey all accessible areas using towed magnetometers and use UAV-borne systems only to fill the gaps in shallow/dangerous areas.
• Based on the maximum detection range for the smallest anticipated targets, maximise the coverage of UAV-borne systems and minimise the use of towed magnetometers. That approach makes sense if the cost of a drone survey per unit of area is less than that of a towed system. Costs of different types of surveys depend highly on particular project conditions and are outside the scope of this article.
Method 1: Use UAV-borne systems only to fill the gaps in shallow/dangerous areas
That approach requires knowledge of the planned survey area for the towed magnetometer or, if data from the towed magnetometer is already available and processed, a gridded magnetic map to use as a reference for drone survey planning.
Almost all modern ground station software allows geotagged images to be used as map layers, and georeferenced magnetic maps can be used in the same way. The workflow is very simple:
1. Generate a magnetic map using magnetometer data processing software and export it as a georeferenced TIFF (GeoTIFF) file.
2. Import this file as a layer into ground control software.
3. Plan survey grids for the drone survey using the magnetic map as a reference, ensuring the required overlap between data sets for cross-validation.
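Under the hood, the georeferencing that makes step 1 work is just an affine transform mapping grid cells to map coordinates, which a GeoTIFF stores alongside the raster. A minimal pure-Python sketch of that arithmetic for a north-up grid (the origin coordinates and pixel size below are illustrative, not from this survey):

```python
def pixel_to_map(col, row, origin_x, origin_y, pixel_size):
    """Map a grid cell (col, row) to map coordinates, mimicking the
    affine geotransform a GeoTIFF carries (north-up grid assumed)."""
    x = origin_x + col * pixel_size
    y = origin_y - row * pixel_size  # rows count downwards from the top edge
    return x, y

# Top-left corner of a 1 m grid anchored at easting 580000, northing 5760000:
print(pixel_to_map(0, 0, 580000.0, 5760000.0, 1.0))      # (580000.0, 5760000.0)
print(pixel_to_map(250, 100, 580000.0, 5760000.0, 1.0))  # (580250.0, 5759900.0)
```

This is what lets flight-planning software overlay the magnetic map in the right place: each pixel of the exported grid resolves to a unique map position.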
Figure 7 illustrates this approach. Data collected using the G-882 marine magnetometers was processed in Geosoft Oasis montaj. The analytic signal grid was exported to a GeoTIFF file and imported into UgCS flight planning software. After that, planning the drone missions is straightforward.
Figure 5 An analytic signal map generated from towed magnetometer data.
Figure 6 Analytic signal map generated from drone-mounted magnetometer system combined with data from towed magnetometers.
Method 2: Maximise the coverage for UAV-borne systems
The cost of a survey using a towed magnetometer system may be very high, as it requires a survey boat, expensive survey equipment and a highly skilled team of at least two people. Depending on particular conditions, the cost per unit of area may be considerably lower for a drone-borne magnetometer system, and that can be a strong motivation to maximise the use of the airborne system.
That approach requires three additional pieces of information:
• The smallest anticipated targets list should be part of job requirements or agreed upon with the customer,
• The detection range for the smallest anticipated targets should be known from a trustworthy source or measured at the planning stage of the project,
• Bathymetric data for the survey area should be available, or a bathymetric survey should be conducted before magnetic surveys are planned.
The absence of any of these inputs means ‘no go’ for this approach, as it would make drone-borne surveys unpredictable in terms of the probability of detecting targets of interest. With the maximum detection range for the smallest anticipated targets and bathymetric data in hand, it is straightforward to determine the areas where a drone-borne survey makes sense. The only trick is that the sensor’s elevation above the water must be taken into account. In our experience, for drones equipped with terrain-following systems utilising a radar altimeter, sensor-water clearance can be as low as 0.5 m, especially if the sensor has a protection rating sufficient to prevent damage from water splashes.
Example estimate: for a British 500 lb MC WW2 aerial bomb, the maximum detection distance using a magnetometer exceeds 7 m, as mentioned by Dobrovolskiy and Brants (2024) in Estimates of various UXO (unexploded ordnance) maximum detection distances using magnetometers. Subtracting 0.5 m for the sensor’s elevation above the water means that drone-borne systems can be used to detect these and similar targets in areas with a maximum water depth of 6.5 m.
Our recommendation is to be conservative in these estimates and keep quite a high safety margin, never using the maximum detection range for planning purposes. If the planned elevation of the sensor above the bottom (depth plus sensor elevation above water) is reduced by 25%, the anomaly amplitude at the sensor more than doubles (by the rule of the third power of the distance); this is not a very high cost for better confidence in survey results. In the example above, the search area would be limited to a maximum depth of 4.75 m (7 m × 0.75 − 0.5 m). If we are considering targets buried below the bottom, we must also subtract the estimated maximum burial depth from the maximum water depth.
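The planning arithmetic above is easy to script against a bathymetry grid. A minimal sketch (the function name is ours; the 7 m detection range, 25% margin and 0.5 m clearance are the example values from the text):

```python
def max_survey_depth(detection_range_m, sensor_clearance_m=0.5,
                     safety_margin=0.25, burial_depth_m=0.0):
    """Maximum water depth where a drone-borne magnetometer survey still
    makes sense, following the conservative rule described above:
    derate the detection range, then subtract the sensor's clearance
    above the water and any expected burial depth below the bottom."""
    usable_range = detection_range_m * (1.0 - safety_margin)
    return usable_range - sensor_clearance_m - burial_depth_m

# British 500 lb MC aerial bomb example from the text:
print(max_survey_depth(7.0))                      # 4.75 (m)
# Same target assumed buried up to 1 m below the seabed:
print(max_survey_depth(7.0, burial_depth_m=1.0))  # 3.75 (m)
# Reducing the object-sensor distance by 25% raises the anomaly
# amplitude by (1/0.75)**3, i.e. more than two times:
print((1 / 0.75) ** 3)                            # ~2.37
```

Comparing the returned depth against each bathymetry cell then splits the survey area into drone-accessible and towed-only zones.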
Common considerations and recommendations
Drawing on extensive combined experience from multiple customers and our own surveys, we can suggest additional important points to take into account when planning and conducting surveys using UAV-borne magnetometer systems over water.
Mind the tide
For some coastal areas, the difference in water surface level between low and high tide can be up to a few metres. That may critically influence the probability of detecting targets, as the magnetic anomaly decreases as the third power of the distance between object and sensor. Therefore, the general rule is to conduct surveys with drone-borne magnetometers at low tide.
Figure 7 UAV-mounted magnetometer system survey planning using data from towed magnetometer for reference.
Figure 8 Example of difference between low and high tide. Source: www.tideforecast.com
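The third-power rule makes the tide effect easy to quantify. A minimal sketch, with illustrative numbers only (not from the survey described):

```python
def anomaly_ratio(distance_low_tide_m, distance_high_tide_m):
    """Factor by which the anomaly amplitude at low tide exceeds that at
    high tide, assuming the ~1/r**3 dipole-like decay mentioned above."""
    return (distance_high_tide_m / distance_low_tide_m) ** 3

# A target 3 m below the sensor at low tide sits 5 m away after a
# 2 m tidal rise: the anomaly weakens by a factor of ~4.6.
print(anomaly_ratio(3.0, 5.0))  # ~4.63
```

For shallow targets, a couple of metres of tide can therefore make the difference between a clear detection and no detection at all.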
Safety procedures, checklists and insurance
Magnetic surveys for UXO, utilities and similar targets over land are usually pretty safe because of the low flight altitude: even drone crashes, in most cases, will not lead to expensive damage. In contrast, any crash of a drone system into water will lead to the total loss of the equipment. Even if the system is recovered, water (especially salt water) will damage the drone and payload to the point where repair is not possible. Therefore, measures to prevent the drone from falling into the water should be treated with the utmost importance.
In addition to the standard rules, we would recommend additional measures and checks for flights over the water:
• The planned flight time is less than 70% of the drone’s maximum flight time with the used payload.
• Start a survey from the furthest survey lines.
• Monitor the whole flight, ready to issue an RTH (return to home) command to the drone in case of any doubt.
• Insurance is mandatory. Check with the insurer that the drone and payload are covered, and that providing logs from the drone is not mandatory for a claim (in case of total loss or damage to equipment, you may not be able to download logs).
Additional issues with low-level flights over the water can be swimmers and local navigation traffic. A drone flying with a water-sensor clearance of 0.5 m will not be a direct danger to swimmers, but it is better to avoid close proximity, of course. A speedboat approaching suddenly, however, can collide with the flying drone, and the consequences may be very serious. The only feasible measure is constant monitoring of the survey area and, if a boat or ship approaches, immediate interruption of the flight and ascent to a safe altitude.
Importance of the terrain-following system
Technically, flight over water is very simple because of the practically flat surface under the drone. One may be tempted to plan the drone flight using the drone’s built-in altimeters. Standard commercial off-the-shelf drones use fused barometric altimeter and GPS data to measure the altitude of the drone relative to the take-off position elevation. This approach has many drawbacks:
• Barometric altimeters have significant drift, which can be up to a few metres during a single flight. That’s acceptable for normal photogrammetry or other missions with altitudes starting from a few dozen metres above the surface, but it is not acceptable at all if we need sensor-water clearance of just 0.5 m.
• If the drone has a built-in GNSS RTK receiver and can use it for navigation, including altitude estimation (as modern drones like the DJI M350/300 RTK do), it may follow a pre-planned altitude with high precision. However, if the drone loses the RTK fix for any reason, it will fall back to the barometric altimeter alone and may descend into the water.
• Even if we are 100% sure of the reliability of the data link between the drone and the ground station (to supply a non-interrupted stream of RTK corrections), the method requires knowledge of the elevation of the take-off position and the water level.
• Water level elevation may change considerably during the flight because of tides.
Therefore, the only recommended solution is to use a drone equipped with a true terrain-following system utilising a radar altimeter. Laser altimeters can also be used, but they do not work reliably over water surfaces.
Importance of the obstacle detection systems
Although drone operators are required to monitor the entire flight, in some situations, they may miss the sudden appearance of low-profile obstacles such as kayaks or canoes, especially at long distances (more than a few hundred metres), in waves, and if the operator is approximately level with the water. To prevent collisions, it is highly recommended to use drones equipped with obstacle sensors. Some modern drones are equipped with built-in obstacle sensors utilising multiple small cameras, but they are absolutely not effective in marine environments. It is recommended to use drones with radar obstacle detectors, capable of detecting objects at distances up to a few dozen metres in any environmental conditions.
Conclusion
The use of drone-borne magnetometers to detect UXO, pipelines, cables and similar targets in marine environments is a new application at the moment, but as it has strong motivation in terms of economy and safety, we expect this approach to grow. Helpfully, it is based entirely on well-developed technologies and workflows for similar tasks over land and requires only adaptation to the new environment.
References
Billings et al. [2006]. Magnetic models of unexploded ordnance. IEEE Transactions on Geoscience and Remote Sensing. DOI: https://doi.org/10.1109/TGRS.2006.872905. Link: https://ieeexplore.ieee.org/document/1661800
Dobrovolskiy, A. [2024]. Detection of underground power cables using drone-carried magnetometers. The Dangers Below, 2(2), 40-41. https://www.locatingunlimited.com.au/media/#dangers-below-volume-2-edition-2/40/
Dobrovolskiy, A. and Brants, M. [2024]. UAV-based magnetometer comparison: UXO test. Online publication. Link: https://www.sphengineering.com/news/uav-based-magnetometer-comparison-uxo-test
Dobrovolskiy, A. and Brants, M. [2024]. Estimates of various UXO (unexploded ordnance) maximum detection distances using magnetometers. Online publication. Link: https://www.sphengineering.com/news/estimates-of-various-uxo-unexploded-ordnance-maximum-detection-distances-using-magnetometers
Dobrovolskiy, A. [2021]. Detection and Localization of Ferrous Underground Objects and Buried Utilities Using Airborne Magnetometers and Metal Detectors. First Break, 39(8), 79-80.
Transforming seismic surveys with OBN and autonomous technology
Lerish Boshoff1* considers the evolving landscape of OBN seismic operations.
The seismic industry has long been a leader in adopting innovations, continually adapting to the complex challenges of exploration and production. One of the most important developments in recent years has been the rise of ocean bottom node (OBN) seismic operations, which deliver significantly higher-resolution data and broader azimuth coverage compared to traditional towed-streamer surveys. OBN technology has enabled energy companies to successfully explore in geologically complex areas, delivering more accurate data where conventional methods fall short. As companies seek to maximise returns from both mature and newly discovered fields, OBN has emerged as the go-to solution for obtaining deeper insights into subsurface formations.
This detailed data allows for more precise planning and decision-making, reducing drilling risks and supporting enhanced recovery techniques by revealing subtle structural traps, fault networks, and variations in reservoir quality that traditional seismic methods might miss. The growing demand for high-quality seismic data underscores the critical role OBN plays in unlocking these complex geological structures and optimising field production, making it a preferred tool for exploration and production companies alike.
However, despite the clear advantages, widespread adoption of OBN has been hindered by the operational complexity and cost associated with its deployment. Traditional OBN methods, which rely on work-class Remotely Operated Vehicles (ROVs) to deploy and retrieve nodes, are time-consuming and costly. While these costs have historically made OBN surveys challenging for
smaller companies or those working in more remote or complex environments, it’s important to note that costs are decreasing as automation, particularly through AUVs, becomes more integrated into OBN operations. This shift is gradually making OBN technology more accessible and viable for a wider range of projects.
The challenge, then, is clear: how can the seismic industry continue to deliver the high-quality data that OBN provides while reducing the time, cost, and environmental impact of operations? The answer lies in automation, digitalisation, and innovative cross-industry partnerships that bring fresh solutions to the challenges of subsea exploration. By integrating these advancements into OBN operations, companies can reduce operational bottlenecks, increase efficiency, and make OBN surveys more accessible to a broader range of operators and a larger proportion of the marine seismic market.
Overcoming the challenges of ROV operations
Subsea operations present unique challenges, whether in shallow or deep waters. The often harsh environments encountered during OBN operations — ranging from rugged seabeds to areas with complex subsea infrastructure — make precision and efficiency challenging to achieve. ROVs face difficulties such as navigating, communicating, and managing power systems under high pressure and low-visibility conditions, all of which can lead to costly delays or operational failures if not managed effectively.
Traditional OBN deployment methods involve work-class ROVs tethered to Tether Management Systems (TMS), which are
connected to surface vessels via umbilicals. The umbilical, typically 4000 to 5000 m long, connects the TMS to the ship, while the tether, typically around 1000 m long, connects the ROV to the TMS. This process, while reliable, is slow, labour-intensive, and costly. The ROVs must carefully navigate the seabed to ensure accurate node placement, and managing the long umbilical and tether adds additional complexity. In ultra-deepwater environments, the length of these connections increases drag, slowing down the vehicle and limiting its range. The tether further restricts the manoeuvrability of the ROV, making it difficult to operate efficiently in areas with rugged or uneven topography.
The risk of tether entanglement is an ever-present challenge, especially in areas with subsea pipelines, wellheads, and other infrastructure. This issue not only slows operations but also heightens health, safety, and environmental (HSE) risks. In the event of entanglement, recovery operations can significantly extend downtime and operational costs.
The seismic industry is increasingly adopting autonomous underwater vehicles (AUVs) to mitigate these challenges. These untethered systems eliminate the limitations imposed by ROV tethers, allowing for faster, more efficient node deployment and retrieval. AUVs reduce HSE risks and offer greater operational flexibility by relying on digitalisation and AI technologies to optimise subsea operations. This shift promises to enhance the precision, safety, and cost-effectiveness of seismic surveys across various subsea environments.
The role of digitalisation and intelligent automation in subsea operations
Digitalisation and automation are transforming industries globally, and the seismic sector is no different. The integration of artificial intelligence (AI), machine learning (ML), and robotics is showing strong potential to enhance subsea operations, paving the way for more precise, efficient, and safer OBN deployment and retrieval. One of the most promising developments is the use of AUVs equipped with AI-driven systems. These advancements have the potential to drastically improve how nodes are deployed on the ocean floor, allowing for faster, more accurate surveys and significantly cutting the time and costs typically associated with traditional ROV-based methods.
As noted earlier, traditional ROVs operate at speeds of between about 0.75 and 1 knot, and each node placement is piloted manually, making the entire process slow. The slow pace of ROV operations is largely due to the tether, which creates significant drag and limits manoeuvrability. These factors, combined with the need for precision in complex tasks, require ROVs to operate at lower speeds to ensure stability and accuracy. By contrast, AUVs are faster, more hydrodynamic, and capable of untethered operation. These vehicles are equipped with advanced obstacle avoidance systems, near real-time decision-making capabilities, and AI-driven navigation, allowing them to operate in complex, unpredictable underwater environments. This level of autonomy not only increases operational efficiency but also improves the repeatability of node placement, as autonomous systems are less prone to human error. While the positioning systems
used by AUVs and ROVs are identical, the automated, precise movements of AUVs ensure better node-to-node consistency, resulting in more reliable data collection over time and enhancing the overall quality of seismic operations.
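As a rough back-of-the-envelope illustration of why vehicle speed and per-node handling time dominate deployment duration, the sketch below compares total deployment time for a tethered ROV and an untethered AUV on the same node line. All node counts, spacings, speeds and handling times are invented, plausible-looking assumptions for the sketch, not specifications of any actual system.

```python
# Illustrative comparison of node-deployment time for a tethered ROV
# versus an untethered AUV. All numbers are assumptions, not vendor data.

KNOT_MS = 0.514  # metres per second per knot

def deployment_hours(n_nodes, spacing_m, speed_knots, handling_min_per_node):
    """Transit time between nodes plus a fixed handling time per node."""
    transit_s = (n_nodes - 1) * spacing_m / (speed_knots * KNOT_MS)
    handling_s = n_nodes * handling_min_per_node * 60
    return (transit_s + handling_s) / 3600

rov = deployment_hours(n_nodes=500, spacing_m=300, speed_knots=0.875,
                       handling_min_per_node=5)   # mid-range ROV speed
auv = deployment_hours(n_nodes=500, spacing_m=300, speed_knots=3.0,
                       handling_min_per_node=1)   # assumed AUV figures

print(f"ROV: {rov:.0f} h, AUV: {auv:.0f} h")
```

Even with these crude assumptions, the untethered vehicle finishes the line several times faster, which is the essence of the efficiency argument above.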
While AI and ML algorithms have made significant strides in improving efficiency, particularly in processing the vast amounts of data generated by OBN surveys, it is important to note that real-time data analysis and decision-making during subsea operations are still evolving. The technology continues to be refined to handle the growing volumes of data more efficiently, providing even faster and more accurate insights for operators.
Another critical benefit of digitalisation is the reduction of human presence in offshore environments. AUVs can operate independently by automating node deployment and retrieval, significantly reducing the need for crewed vessels and personnel in hazardous offshore conditions. This has a direct impact on HSE risk, as fewer people are exposed to the dangers of subsea operations, including extreme weather, mechanical failures, and other unforeseen challenges. In addition to improving safety, reducing the number of people and vessels involved in OBN operations also contributes to lower carbon emissions, aligning with the industry’s broader sustainability goals.
Cross-industry collaboration as a catalyst for innovation
One of the most exciting developments in recent years has been the increasing collaboration between industries to drive technological innovation in subsea operations. The energy sector, traditionally focused on oil and gas exploration, has begun to look to other industries — particularly defence, robotics, and AI — for solutions to the complex challenges of OBN seismic surveys. These partnerships have proven to be highly effective, as they allow the seismic industry to benefit from technological advancements that were originally developed for other purposes.
Defence technologies, for example, have long been at the forefront of subsea operations, with military applications often requiring the same precision, reliability, and autonomy needed in seismic surveys. Initially developed for defence purposes, autonomous vehicles are now being adapted for use in commercial OBN operations, bringing with them decades of experience and technological advancements. Defence contractors have perfected the use of AUVs in challenging environments, and their expertise
Figure 3 A bottom-up view of MantaRay hovering autonomously.
is now being applied to OBN operations, where precision and reliability are paramount.
With the exponential growth in data generated by seismic surveys, particularly with the rise of OBN technology, the integration of AI and ML is more crucial than ever. These technologies are enabling companies to handle increasingly large datasets with greater efficiency, allowing them to process and analyse more data in less time. As seismic surveys generate more detailed and expansive datasets, AI-driven algorithms can quickly interpret the data, identifying patterns and anomalies that provide deeper insights into subsurface formations. By streamlining the data processing pipeline, companies not only speed up decision-making but also have access to a broader and richer dataset, leading to more informed and strategic decisions. This cross-industry application of AI and ML is accelerating the development of new tools for the seismic sector and fostering greater collaboration and innovation.

These collaborations are essential for driving the future of OBN operations. The challenges of subsea exploration are too complex for any one company or industry to solve alone. By working together, energy companies, defence contractors, robotics firms, and AI developers can create the next generation of subsea technologies that will improve the efficiency, safety, and sustainability of seismic operations. Cross-industry partnerships also help to mitigate the risks associated with developing new technologies, as they allow companies to share resources and expertise.
Spotlight on the partnership with Manta
A good example of cross-industry collaboration is the partnership between PXGEO and Manta, a company dedicated to enhancing subsea operations through digitalisation and intelligent automation. Manta’s expertise in AUV and node technology, combined with PXGEO’s experience in OBN seismic, is driving a new era of efficiency and sustainability in the industry. By leveraging Manta’s technological capabilities, PXGEO is able to offer its clients more efficient, cost-effective OBN solutions that meet the industry’s evolving needs.
Manta’s mission is clear: to optimise productivity and reduce risk in subsea operations while maintaining harmony with the oceans. This aligns perfectly with PXGEO’s goals of increasing the accessibility and efficiency of OBN surveys. Through their partnership, PXGEO and Manta are developing cutting-edge AUVs capable of deploying and retrieving nodes autonomously without the need for a tether or manual intervention. These AUVs are designed to operate in challenging subsea environments, providing greater operational flexibility and efficiency than traditional ROVs.
These AUVs, designed for operation at depths of up to 3000 m, are equipped with advanced propulsion systems and variable buoyancy controls, allowing for faster, more precise node placement. They are also node-agnostic, meaning they can handle any type of node available on the market, providing flexibility and adaptability to a wide range of seismic operations. By driving down the cost of OBN through increased efficiency, the opportunity for higher node density, or densification, becomes more feasible, especially in deepwater environments. This densification improves the resolution of subsurface imaging, delivering more detailed data and clearer insights into geological formations. The ability to operate at greater depths and in more complex environments is a crucial advantage, as it allows operators to conduct OBN surveys in areas that were previously inaccessible or too costly to explore.
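The densification argument can be made concrete with simple arithmetic: over a fixed survey area, the node count grows with the inverse square of the node spacing, so halving the spacing quadruples the number of nodes to deploy. The area and spacings below are illustrative round numbers, not taken from any actual survey.

```python
# Node count over a fixed survey area scales with 1/spacing^2,
# so halving the spacing quadruples the deployment effort.
# Area and spacings are illustrative only.

def node_count(area_km2, spacing_m):
    """Nodes on a square grid with the given spacing over the area."""
    nodes_per_km2 = (1000.0 / spacing_m) ** 2
    return round(area_km2 * nodes_per_km2)

coarse = node_count(area_km2=400, spacing_m=400)  # sparse grid
dense = node_count(area_km2=400, spacing_m=200)   # halved spacing
print(coarse, dense, dense / coarse)
```

This quadratic scaling is why driving down the per-node cost of deployment matters so much: the imaging benefit of denser grids is only affordable once each node placement is cheap.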
Manta's commitment to digitalisation drives advancements in real-time data collection, enabling the seamless gathering of operational data during subsea activities. By integrating AI and ML algorithms into the data collection process, Manta's systems efficiently gather vast datasets in near real-time, ensuring data quality is maintained throughout operations. This near real-time data collection is critical in dynamic environments where
Figure 4 Close-up of MantaRay being submerged into the water.
Figure 5 The Manta team in action.
conditions can change rapidly, allowing operators to monitor operations closely and make timely adjustments based on the data being gathered in the field.
In addition to seismic operations, Manta’s AUV technology is also being adapted for subsea inspection and long-term residency, opening up new possibilities for continuous monitoring of subsea assets. This capability is particularly valuable for the oil and gas industry, where the ability to monitor and inspect infrastructure remotely can significantly reduce the cost and risk of maintenance and repairs. By integrating AUV technology into their operations, companies can improve their subsea assets’ reliability and safety while reducing operational costs.
The future of OBN: Seismic efficiency and sustainability
As the energy industry continues to evolve, the dual goals of increasing efficiency and reducing environmental impact are becoming more closely intertwined. The adoption of autonomous, digitalised systems in OBN operations is not only improving productivity but also providing a pathway towards more sustainable practices. As regulatory pressures around carbon emissions and environmental protection intensify, companies are increasingly required to enhance operational efficiency in ways that also support sustainability. The ability to streamline operations not only reduces resource consumption and costs but also aligns with long-term environmental goals. Autonomous systems such as AUVs offer a solution to this challenge by reducing the need for large crewed vessels and minimising the environmental footprint of seismic surveys.
Reducing the environmental footprint of seismic operations is a critical concern for the industry, especially as regulations around carbon emissions and environmental protection become more stringent. By minimising the need for crewed vessels and reducing the overall time spent on seismic surveys, autonomous AUV systems are helping to lower the carbon footprint of OBN operations. Additionally, the repeatability of AUVs in node placement reduces the number of repeat operations required as their automated systems ensure consistent and precise node-to-node
positioning. This consistency decreases the likelihood of human error, further minimising the environmental impact. The ability to conduct operations more efficiently means that fewer resources are used, which is beneficial not only from a cost perspective but also in terms of sustainability.
The future of OBN operations lies in the continued development of these autonomous systems, coupled with advances in AI and data analytics. As these technologies mature, they will enable the industry to conduct seismic surveys more efficiently, safely, and sustainably. The ability to collect high-quality data while minimising the environmental impact of operations will be a crucial driver of the industry’s growth in the coming years. Companies that can successfully integrate these technologies into their operations will be well-positioned to lead the seismic sector into the future.
Conclusion: Building a future-proof seismic industry
The seismic industry is on the cusp of a new era, one defined by autonomy, digitalisation, and sustainability. Through strategic partnerships like the collaboration between PXGEO and Manta, the industry is well-positioned to overcome the challenges of ultra-deepwater OBN operations and continue delivering the high-quality data that energy companies need to drive exploration and production efforts.
The seismic sector is building a future where operations are more efficient, safer, and environmentally responsible by embracing the latest advances in AUV technology, AI-driven data analysis, and cross-industry collaboration.
Figure 6 MantaRay moving gracefully through the water.
Figure 7 A top-view of MantaRay.
Trusting the process
Neil Hodgson1*, Lauren Found1 and Karyna Rodriguez1 demonstrate how discarded legacy seismic data can be reprocessed to make the invisible unmissable and let the prospectivity shine out brighter than it ever did before.
Introduction
Were Dylan Thomas a geophysicist, he might have written that 'Though wise men know that dark is right, because their seismic had forked no lightning they do not let it go gentle into that good night'. Indeed, all geophysicists appreciate that new information can be extracted by reprocessing the flaws out of legacy seismic data. Old seismic is never done with being useful, as the rate of progress in imaging technologies races ahead with the growth of compute, which has never been more available or cheaper than it is today. So legacy seismic data that has had its day in the sun but failed to show prospectivity can now be reprocessed to make the invisible unmissable and let the prospectivity shine out brighter than it ever did before.
Just better imaging
In some instances, the removal of multiples and noise makes unimaged horizons appear, which yields new prospectivity. Figure 1 is an example of this from Block 18, offshore Oman (Hodgson et al., 2022). This Block is now in the 2024/5 Offshore Licensing round, announced in October 2024 and closing in January 2025.
While legacy data from the late 1990s showed no consistent reflectivity below the late Tertiary to recent gravity-driven fold-and-thrust-belt decollement, Searcher's reprocessing now images Block 18 offshore Oman so beautifully one can hardly believe that this is the same data (Hodgson et al., 2023). Reprocessing is not only fantastic in the full PSDM stacked
Figure 1 Comparison of 1990s legacy data from offshore Oman Block 18 above, with Searcher’s 2021 PSDM Reprocessed data below. Key information on source rock, reservoir and trapped hydrocarbons are all revealed on this reprocessed data.
image, but it also allows us to work with a full range of offset stacks and flat gathers. Not only does the section below the decollement have a consistent type IV AVO response, indicative of the presence of an organic-rich source rock (Løseth et al., 2011), but the decollement itself is seen to have topography that has created structure and closures.
Reprocessing the data allows us to correlate away from the scant well data in the Sea of Oman, so that the thick Cretaceous section containing sands and source rocks (Racey, 2024) can be correlated into the basin (Figure 2). This allows us, for the first time, to determine the age of the sediments in the deep basin, which perhaps holds potential for carbonate build-ups, adjacent to a continental-crust micro-continent, that are fuelling
the new Licensing Round and will be a prized exploration target offshore Oman.
Hands across the ocean
As Shakespeare would have undoubtedly put it, Figure 3 shows ‘Two seismic lines alike in dignity, In Pelotas and Orange basins where we set our scene, To ancient bias they bring new mutiny, from Orange glory unseen Pelotas shining seems.’
The two near conjugate sections matched at equal oceanic crust age in Figure 3 are from Pelotas (Brazil/Uruguay) in the west and Orange (Namibia/South Africa) in the east. To the right of the figure is oil exploration’s most successful and well tested basin in recent years (Orange) and to the left there is the
Figure 3 Left-hand side (LHS): West-East line from 2024 3D acquisition 'quick look', Pelotas Basin, Brazil. Right-hand side (RHS): West-East composite of 2D legacy data from the Orange Basin on the South Africa/Namibia border. Both lines are in depth and have not been adjusted to make them tie.
Figure 2 Reprocessed 1990s seismic line running South-North through the BMB-1 well on the shelf. This line not only facilitates correlation of the thick Cretaceous section in the BMB-1 well, but also reveals the extraordinary geology of that deeper basin.
surprisingly similar yet innocently unexplored largest clastic wedge on Earth (Pelotas).
It is clear that Aptian source rock sits on volcanic oceanic crust buried under an identical thickness of sediment in Pelotas to that which is proven as mature and prolifically generative, with expulsion of oil, in Orange. With counter-regionally dipping, draped, soft-kick clastic basin-floor fans just above the source on both sections, all the key risk elements of these two basins appear identical, except that one is a 'hotspot' (where super-major IOCs and NOCs are dividing and negotiating the spoils and most of the key acreage is held) and the other is blushingly unexplored, still open for suitors.
One could get really excited about what we can see using 20-25-year-old 2D seismic data in Orange and 2024 3D data from Pelotas. Legacy seismic has its place, and lets us make this prospectivity jump across the pond. Yet we are comparing a 'quick look' section from Pelotas, migrated in '2.5D' directly off the Shearwater Duchess vessel, processed by Shearwater's excellent team while still acquiring data. It is very provisional data, but look at what modern processing sequences in skilled hands can give you, even off the boat! Although it is perhaps nine months away from being 'fully processed' through a full Pre-Stack Depth Migration, it is already (a) remarkably free from multiples and noise, (b) revealing which of the open blocks in the next licensing round one should target for the 'Venus' and 'Late Cretaceous' plays, and indeed (c) showing a large and extensive flat event in the Upper Cretaceous channel sequence.
On the other side of Figure 3 is a composite legacy section from the proven part of the Orange Basin close to the Namibia/ South Africa border. While the Venus-style Lower Cretaceous fan on source rock in a counter-regional trap oil play looks identical to Pelotas, the Upper Cretaceous play is missing, as
the orderly prograding clastic slope of Pelotas is replaced by the cacophony of a 2-km-thick mass transport system. The data from the Orange Basin is a confection of legacy 1990s and 2000s 2D lines and is disappointingly vague about the details of the section. So, to chase the Mopane discovery (inner basin) equivalents from Namibia into South Africa, Searcher is about to begin reprocessing a grid of 2D data from the Orange Basin in South Africa. Until that is done, to give an indication of the potential uplift, Figure 4 shows a fast-track PSTM line in depth from Searcher's and Shearwater's 2023 Gap survey, showing the uplift that even fast-track modern processing can bring to these sections.
A brave new world is upon us where seismic reprocessing adds more value to the explorer’s seismic assets than ever before. Searcher is here to help you rage against the dying of the seismic light. As the American poet Emma Lazarus might have said: ‘Give us your tired, your poor, your huddled seismic yearning to breathe free, and by reprocessing it, we will lift the lamp to show where the black-gold be.’
References
Racey, A. [2024]. Onshore Evaluation of Post-Ophiolite, Campanian-Lower Miocene Geology, Eastern Oman. Proprietary report, Andy Racey Geoscience Ltd.
Hodgson, N., Rodriguez, K., Debenham, H. and Found, L. [2023]. Affordably making the invisible unmissable. First Break, 41.
Hodgson, N., Rodriguez, K., Davies, J., Hoiles, P. and Al Albani, S. [2022]. Offshore Oman: Stunning hydrocarbon geology from a closing ocean. GeoExpro, 19(2), March 2022.
Løseth, H., Wensaas, L., Gading, M., Duffaut, K. and Springer, M. [2011]. Can hydrocarbon source rocks be identified on seismic data? Geology, 39(12), 1167-1170. doi: https://doi.org/10.1130/G32328.1
Figure 4 Fast Track PSTM Gap 3D in depth from Namibia indicating the potential uplift obtainable from reprocessing the 2D legacy data shown on Figure 3 RHS.
CO2 storage capacity classification and compliance
SPE’s CO2 storage resources management system is under revision. Ruud Weijermars1* clarifies the status of the proposed classification methodology and how new tech companies are developing a separate code of practice for carbon removal claims.
Abstract
The Society of Petroleum Engineers (SPE) has proposed a framework for the classification of CO2 storage resources and storage capacity, known as the Storage Resources Management System (SRMS, 2017). The SRMS framework aims to provide guidelines for the classification and reporting of CO2 storage assets. It is similar to SPE's well-established framework for hydrocarbon resources and reserves classification, the Petroleum Resources Management System (PRMS, 2018). Meanwhile, revisions of both the 2018 version of PRMS and the 2017 version of SRMS are underway, with public consultations of practitioners completed in 2024 (PRMS, 2024; SRMS, 2024).
While the PRMS is actually used industry-wide, the same cannot be said of the SRMS. There is a nascent CO2 storage business segment, with mushrooming carbon removal startups. This fledgling industry is now proposing a carbon removal quality assurance Code of Practice at the same time as SPE is spending effort on revising its carbon resources management system. The absence of a track record, weak incentives, and a lack of case studies on how to apply SRMS in practice are lurking in the background. Additionally, the groundswell of arguably opportunistic providers of storage capacity has created an unprecedented situation where carbon removal companies offer mostly unclassified storage capacity (in the sense of the SRMS classification), as will be detailed below. The lack of validated storage capacity is an important hurdle for startup credibility, and investors should be wary.
Introduction
The estimation of proved reserves is a highly technical skill, which requires modelling of the production behaviour of a well system and flow conformance in the reservoir to estimate the anticipated recoverable hydrocarbon stream for detailed cash flow analysis at P90 (90%) certainty. This skill set is taken very seriously by petroleum engineers working as reserves estimators, because misjudging reserves and overlooking impending impairments when commodity prices are volatile (as is the case for oil and gas prices, and similarly for CO2 credits) can result in significant losses of company asset value. A practical example of how misjudging reserves can result in rapid loss of company asset value was observed first hand by the
author. In 2011, a small US-based shale operator (Petrohawk) sold out for $12 billion, 1.65 times the company's market capitalisation (stock price times the number of outstanding shares). The acquiring company, BHP Billiton, an Australian mining company with little or no experience in the evaluation of shale assets, bought the Petrohawk acreage comprising shale gas leases in the Haynesville-Bossier play. However, within six months of the Petrohawk acquisition, BHP had to impair a major portion of the newly bought 'proved' shale gas reserves, resulting in an almost instantaneous evaporation of over $6 billion in asset value.
Was this impairment of gas reserves unforeseen? Certainly not! An independent appraisal was completed in spring 2011, presented at a major SPE forum (Weijermars, 2011), and an SPE journal paper was submitted in June of the same year. What we did not know was that Petrohawk was already up for sale, and its management was apparently alerted to the impending publication of our study by an SPE journal reviewer. He turned out also to be an advising consultant for Petrohawk's reserves classification, and informed the company about our devastating asset-value appraisal paper. The pre-publication leakage of the information in the forthcoming SPE paper led to mysterious invitations: first the son of the Petrohawk CFO, then the CFO himself, invited my consultancy company to come over to Petrohawk's HQ to tour their assets.
We never saw the need to tour the Haynesville asset, because we knew our study was based on sound production-forecasting principles, paired with a robust cash flow analysis. Understandably, Petrohawk was very nervous that our negative appraisal, if published too soon, might throw a spoke in the wheel of its impending deal with BHP. Needless to say, publication of our SPE paper was very much delayed. Finally, a summary of the study was published a year later (Weijermars, 2012) and more details followed in First Break (Weijermars and Van der Linden, 2012).
Our assertion that the Petrohawk assets were vastly oversold (repeated in Weijermars and Hulbert, 2012) proved correct: BHP Billiton duly impaired its gas reserves and took the hit on its balance sheet, halving the asset value under pressure of compliance with the strict SEC reserves-reporting guidelines, and booked a loss of $6 billion on its shale assets.
The message here is that investors may get quickly duped by companies using deficient resource classification practices, and
likewise when asset-acquiring companies have no or only poor expertise in the validation of reserves targeted for acquisition. Claiming to possess proved reserves is one matter. Having assets hedged against impairment scrutiny is another matter.
Likewise, in the case of CO2 storage capacity claims, there is an important difference between commercially motivated claims of possessing capacity (with a certain asset value), and actually mastering the technology and engineering skills required for delivery of the stated storage capacity (with validated asset value). The remainder of this article points out what the SPE SRMS entails and then highlights why there is reason for concern.
Storage capacity maturation process
Prior to SRMS classification of subsurface resources, any operator aiming to store CO2 in the subsurface must first obtain permits from environmental regulators. In the US, the EPA is authorised by the Safe Drinking Water Act (SDWA) to regulate geological carbon storage (GCS), for which its Class VI injection well guidelines apply (EPA, 2010); similar rules exist in the EU (EU, 2009), Australia, and many other nations. The principal purpose is to protect public health and Underground Sources of Drinking Water (USDW), considering the mechanisms involved in CO2 injection. For EPA approval of Class VI CO2 injection wells, the following principal obligations must be met (EPA, 2010):
1. Pre-development
• Site characterisation: to ensure the geology in the project area can receive and contain the CO2 within the zone where it will be injected, including that the area is free of faults and fractures and that induced seismicity is not a concern.
• Computer modelling: of the predicted extent of the injected CO2 plume and associated pressure front for flow conformance control, and to identify and address any deficiencies of existing wells through corrective action regarding the regions where the injected plume and its associated pressure front may impact pore fluids.
2. Operational Requirements
• Well construction: to ensure the Class VI injection well is constructed in a manner that will prevent any CO2 from leaking outside of the injection zone.
• Testing and monitoring: to monitor the integrity of the injection well, groundwater quality, and the movement of the CO2 plume and pressure front throughout the life of the project, including after CO2 injection has ended, until the permitting authority determines no additional monitoring is needed to ensure that the GCS project does not pose a danger to USDWs.
• Operational Soundness: to ensure the injection activity is appropriate to the well’s construction and geologic characteristics so that it will not endanger USDWs or human health.
• Plug and abandonment: the injection well must be plugged in a manner that will not allow fluid movement that endangers USDWs.
3. Administrative requirements
• Financial instruments: sufficient to cover the cost of corrective action, plugging the injection well, post-injection site care, and emergency and remedial response for the GCS project (i.e., financial responsibility).
• Emergency and remedial response plan: maintain a site-specific plan.
• Reporting: to report all testing and monitoring results to the permitting authority (EPA) to ensure the project is operating in compliance with all permit and regulatory requirements.
However, getting approval to drill for claimed storage capacity utilisation from an environmental regulator is not the same as possessing a specific storage capacity (with validated asset value). In SRMS-jargon, the asset value and capacity to store a mass of CO2 is only possessed by the company when it is ascertained that four criteria are satisfied: (1) the target geologic formation must be discovered and characterised (including containment), (2) it must be technically possible to (safely)
Figure 1 (a) SRMS-classification terms and criteria for distinguishing storage resources and capacity as the GCS project storage resources mature. (b) Subcategories based on probabilistic appraisal of storage resources. From SRMS (2024).
inject at the required rate, (3) the development project must be commercial, and (4) the source must be available for the project. SPE should almost certainly add in its impending SRMS revision that (5) the operator should own, or have a lease for, storage in the reservoir (or sequestration at the land surface) valid for the life of the project, as well as fulfilment of the regulatory requirements outlined above.
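The maturation logic of Figure 1a can be sketched as a simple decision rule: a claim only graduates to "storage capacity" when every criterion listed above is satisfied, and an undiscovered formation stays in the prospective class regardless of the other criteria. The criterion names below are paraphrases for illustration, not official SRMS field names.

```python
# Hedged sketch of the SRMS maturation classes described in the text.
# Criterion names are paraphrased; class labels follow the narrative above.

CRITERIA = (
    "formation_discovered_and_characterised",
    "injection_technically_feasible",
    "project_commercial",
    "co2_source_available",
)

def classify(status):
    """status: dict mapping each criterion name to True/False."""
    if not status["formation_discovered_and_characterised"]:
        return "prospective storage resources (undiscovered)"
    if all(status[c] for c in CRITERIA):
        return "storage capacity (commercial)"
    return "contingent storage resources (discovered, sub-commercial)"

claim = dict.fromkeys(CRITERIA, True)
claim["project_commercial"] = False   # discovered but not yet commercial
print(classify(claim))
```

The point of the sketch is the asymmetry: a single unmet criterion is enough to keep a claim out of the commercial "storage capacity" class.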
Storage capacity (Figure 1a) must be further categorised in accordance with the level of certainty (Figure 1b), and proved storage capacity can only be claimed when achievable with 90% certainty (P90 capacity). Note that the word ‘inject’ in the second criterion suggests SRMS only applies to sequestration in the subsurface; if ‘inject’ were to be replaced by ‘sequester’, this limitation would be relaxed.
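The probabilistic subcategories of Figure 1b can be illustrated with a small Monte Carlo sketch: given many simulated realisations of storable mass, P90 is the value exceeded by 90% of realisations (the conservative, "proved" end), while P50 and P10 are progressively less certain estimates. The distribution and its parameters below are invented purely for illustration.

```python
# Toy Monte Carlo illustration of P90/P50/P10 storage-capacity estimates.
# The lognormal parameters are invented, not from any real appraisal.
import random

random.seed(42)
# 10,000 realisations of storage capacity in Mt CO2
capacities = sorted(random.lognormvariate(mu=3.0, sigma=0.4)
                    for _ in range(10_000))

def exceedance(values, p):
    """Capacity exceeded with probability p (values sorted ascending)."""
    return values[int(len(values) * (1 - p))]

p90 = exceedance(capacities, 0.90)
p50 = exceedance(capacities, 0.50)
p10 = exceedance(capacities, 0.10)
assert p90 <= p50 <= p10  # higher certainty -> smaller claimable capacity
print(f"P90 {p90:.1f} Mt, P50 {p50:.1f} Mt, P10 {p10:.1f} Mt")
```

Note the ordering: the proved (P90) figure is necessarily the smallest, which is why claiming capacity at P90 is a far stronger statement than quoting a mid-case estimate.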
In any case, to validate the above SRMS requirements, a so-called resource maturation and appraisal workflow must be completed, which is a multi-year iterative process, involving extensive modelling to reduce uncertainty before a final investment decision can be taken based on validated storage capacity estimations. The modelling does not stop after project implementation, but must continue to sync the plume migration model and the monitored plume advance to ensure the environmental regulation requirement for safe storage is still met. Importantly, one should realise storage capacity would be impaired if the regulator revokes the injection permit or paralyses the injection process (as happened several times in the world’s largest GCS project associated with the Gorgon Gas Field development, Australia; see Weijermars, 2024).
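The model/monitor loop described above can be caricatured in a few lines: after each monitor survey, compare the modelled plume extent with the extent imaged seismically, and flag the model for recalibration when the mismatch exceeds a tolerance. The survey years, radii, and 10% tolerance below are invented numbers for illustration only.

```python
# Toy illustration of syncing a plume-migration model with monitoring data.
# All radii and the tolerance are invented for the sketch.

SURVEYS = {            # year -> (modelled radius m, imaged radius m)
    2001: (900, 880),
    2004: (1400, 1290),
    2008: (1900, 1650),
}

def needs_recalibration(modelled, imaged, tol=0.10):
    """True when the relative mismatch exceeds tol (10% by default)."""
    return abs(modelled - imaged) / imaged > tol

for year, (mod, obs) in SURVEYS.items():
    flag = "recalibrate" if needs_recalibration(mod, obs) else "ok"
    print(f"{year}: model {mod} m vs imaged {obs} m -> {flag}")
```

In practice the comparison is against full 4D seismic images rather than a single radius, but the principle is the same: the model is only trusted while it tracks the monitored plume within tolerance.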
Figure 2 shows an example of the plume advance modelling outcome and its comparison with seismic field diagnostics of plume advance, as periodically conducted for the Sleipner GCS project. The seismic monitoring helps calibrate the rates in the TOUGH2 model predictions for plume migration (Chadwick and Noy, 2015). The asset value in the Sleipner GCS project was created by the Norwegian government's decision in 1991 to introduce a carbon tax valued at $60/tCO2.
Figure 2 Comparison of plume advance as imaged with 4D seismic (top row) and as modelled with the TOUGH2 computer code (bottom row). After Chadwick and Noy (2015).
If the P90 certainty of storage capacity over the life of the GCS project cannot be ascertained by advanced modelling, impairment of the asset and its value follows. Such storage capacity impairment would have applied to the Gorgon GCS project, where delays were caused by technical complications and by regulator-mandated injection rate ceilings imposed to keep pressures safely below the cap rock fracturing limit.
However, SEC-style mandatory reporting of hydrocarbon reserves in sync with PRMS does not currently exist for CO2 storage capacity claims. The business segment apparently is still too nascent to have attracted regulation from the Securities and Exchange Commission (SEC), the compliance watchdog of the New York Stock Exchange (NYSE). Nonetheless, the importance of hedging against the risk of capacity impairment in the carbon removal appraisal process is highlighted in the next section.
Hedging risk
There are many sources of risk in the CO2 storage resource maturation process, ranging from technical setbacks to volatility in storage revenues. Typical technical setbacks have surfaced in prior GCS projects by Equinor (Sleipner, Snøhvit, In Salah) and in the Gorgon GCS project, currently executed by a joint venture with Chevron, ExxonMobil and Shell as majority shareholders.
The number one risk is premature pressure escalation, with the attendant risk of cap rock failure, due to which injection rates and estimated ultimate storage end up lower than originally planned (Hauber, 2023). At Gorgon, the pressure escalation was further exacerbated by wells drilled in relatively close proximity, leading to faster pressure build-up than if the wells had been spaced farther apart (Afagwu and Weijermars, 2024).
A second technical hurdle is the occurrence of water blockage at the bottom of the injection wells. In the Gorgon GCS project, it proved necessary to remove moisture from the CO2 injection stream, requiring extra fractionation. By knocking out water at the surface, injection of a purified CO2 stream prevents water blockage from occurring at the bottom of the well system at reservoir level, given the prevailing temperature and pressure (Trupp et al., 2013, 2021).
A third technical hurdle, first encountered at Sleipner, recurred at Gorgon: sand production in the injection wells, leading to a 2023 workover decision for wells drilled in 2015 (Weijermars, 2024). Figure 3 sketches the Sleipner injection well, with its perforation section in gravel packs and zonal separation screens to prevent backflow and formation sand loss due to differential pressure circulation. The original perforation, over a 100 m interval completed in September 1996, should have corresponded to a well injectivity of 100 m3/day/bar (Hansen et al., 2005). However, sand influx lowered the well's injectivity and, in August 1997, the well was re-perforated over a 36 m interval at the lower end of the wellbore. Gravel packs and 200-micron sand screens were installed (Figure 3), which stabilised the well's injectivity.
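The quoted injectivity figure implies a simple linear rate-pressure relation that allows a back-of-envelope check of how sand influx degrades deliverability. The overpressure value below is an assumed illustration, not a reported Sleipner operating figure:

```python
# Illustration of the injectivity index relation q = II * dp (inputs assumed):
def injection_rate(ii_m3_per_day_bar: float, dp_bar: float) -> float:
    """Injection rate [m^3/day] from injectivity index and downhole overpressure."""
    return ii_m3_per_day_bar * dp_bar

ii_design = 100.0  # m^3/day/bar, design injectivity quoted by Hansen et al. (2005)
dp = 15.0          # bar, assumed downhole overpressure (hypothetical)

q_design = injection_rate(ii_design, dp)
# If sand influx halves the injectivity, the rate halves at the same overpressure:
q_impaired = injection_rate(0.5 * ii_design, dp)
print(q_design, q_impaired)  # 1500.0 750.0
```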
In addition to the technical risks in GCS projects, a financial risk of project cash flows turning negative resides in the pricing of CO2 credits, which may fluctuate over time. This CO2 credit volatility can be mitigated by financial hedging using derivatives and by negotiating long-term price guarantees before project execution. In summary, the SPE's Storage Resources Management System (SRMS, 2017, 2024) is under revision and could lead to mandatory reporting of proved storage capacity, similar to what the SEC mandated for PRMS-based reserves classification. Meanwhile, a booming carbon removal trade is emerging while the means of verifying carbon storage claims are still being studied (VCMI, 2024).
Startup claims of storage capacity (carbon removal)
The globally emergent commercial CO2 market is now teeming with very small startup entities. Such startups are typically not listed on any stock exchange, as an Initial Public Offering (IPO) usually does not occur until a company has created sufficient asset value. Moreover, listing on the New York Stock Exchange (NYSE) requires that a company be willing and able to fulfil certain reporting requirements. For example, even a mature company like Saudi Aramco, when taking its first steps towards privatisation in 2019, did not IPO on the NYSE but instead opted for a listing on Riyadh's Tadawul (Weijermars and Moeller, 2020). A NYSE listing was forgone because the SEC reserves reporting requirements were allegedly deemed too strict.
Appendix A lists a selection of startup companies offering CCS solutions. None of these corporate entities employs SRMS-style CO2 storage capacity classification. The businesses are typically funded subject to a requirement of life-cycle analysis (LCA), techno-economic analysis (TEA), or a generic measurement, reporting and verification (MRV) workflow.
For example, Vaulted, a Houston-based company, claims it will inject carbon-rich biomass deep underground (Figure 4). The biomass is not incinerated but turned into a slurry, then injected via disposal wells for permanent GCS. Wait a minute – now we inject biomass slurries? That means the EPA's strict Class VI injection well regulation does not apply, and the more lenient permits for Class V waste disposal wells suffice. The costly plume advance monitoring requirement does not apply to Class V wells. But CO2 gas will almost certainly form and accumulate in the stored slurry, and venting may be needed to avoid pressure escalation.
Figure 3 Gravel packs with sand screens occupy the perforation interval of the Sleipner injection well 15/9-A16 landed in the Utsira storage reservoir (after Hansen et al., 2005).
Figure 4 Concept for carbon removal by Vaulted, which does not provide any compelling details about how a biomass slurry can be technically, safely and cost-effectively stored in the subsurface, yet obtained an advance market commitment of $58.3 million.
And what quantity of slurry needs to be stored to meet the 152,480 contracted tons of carbon removal for which Vaulted secured an advance market commitment (AMC) upscaled to $58.3 million? Its upscaled AMC contract stipulates that the carbon removal must occur between 2024 and 2027. Technical issues, such as how the biomass-slurry storage quantities of a salt cavern are computed and whether injection pumps can handle biomass slurry, are not clarified by Vaulted. Such conceptual claims will almost certainly not be classifiable as storage capacity according to SRMS, yet Vaulted has secured an AMC contract.
The corporate reporting of companies operating in the startup field is very opaque. Take, for example, Carbfix, operating in Iceland as a subsidiary of the Reykjavik Energy Group (Orkuveita Reykjavíkur; OR). The parent company OR provides Carbfix with procurement, accounting, financial and risk management services. Carbfix claims its storage capacity for CO2 in basaltic rocks is immense, with the possibility of storing more than 100 kgCO2/m3. It has injected approximately 10,000 tons CO2/y at the Hellisheiði site since 2014, and claims less than 0.01% of the storage capacity has been utilised.
Note that the aforementioned term ‘storage capacity’ is not used in compliance with the SRMS classification usage. Storage capacity in SRMS jargon is the commercially and technically accessible capacity (proved plus probable plus possible) in the reservoir, which must be validated by sophisticated modelling. No such validated model is known to exist for quantifying the mineral trapping rate in basalts, and storage resources in place are different from proved (90% certainty) storage capacity – subtle, but important differences in wording with huge implications for what represents validated asset values. What is relevant here is that neither Carbfix nor OR need to classify proved storage capacity according to the SRMS framework; there are no SEC/NYSE-style oil and gas reserves reporting guidelines to adhere to, as would apply to NYSE-listed oil and gas companies when reporting proved hydrocarbon reserves.
The asset values of Carbfix are not separately detailed in any of OR's corporate reporting. This is partly because OR is listed on the very insular Icelandic stock exchange, which happens to be part of Nasdaq's Nordic electronic trading platform (i.e., a subsidiary of Nasdaq), but should not be confused with the US trading platform of the parent company Nasdaq. OR has been approved for listing on the US-based Nasdaq Sustainable Bond Network (NSBN) since 2020, a global platform for sustainable, green and social bond issuers. However, the NSBN is not part of the Nasdaq Stock Market LLC ('Nasdaq'), so OR is not stock-listed on Nasdaq. Carbfix's corporate governance and liquidity are entirely in the hands of its parent company OR, which is credit-rated BBB- by Fitch and Baa3 by Moody's; just one notch above non-investment grade.
Next, take Lithos Carbon, a company founded in 2022 by two professors at Yale University and Georgia Tech, who claim they can accelerate the natural process of rocks capturing CO2 from the air by enhanced rock weathering. The removal of CO2 from the atmosphere happens in open fields, where piles of crushed basalt from quarries are strewn across cropland. The basalt is assumed to react with rainwater to form bicarbonates, which removes the CO2 from the atmosphere. The bicarbonate ultimately makes its way to the ocean, thus storing the CO2 for millennia. So we would not even need to go to the subsurface to mitigate anthropogenic CO2 emissions.
A company like Lithos Carbon cannot classify its storage capacity using SPE's SRMS, unless the revised SRMS replaces 'inject' by 'sequester', as mentioned earlier in this paper; and there is no regulation that forces it to apply any storage capacity classification framework. Besides, the company is privately held, so no public reporting exists. Carbon removal companies can thus access financing without SRMS-style validation of storage capacity. The rigour applied in reserves-based financing, routinely required of smaller oil companies, which must follow PRMS and SEC guidelines even when not listed on the NYSE, is not (yet) seen in carbon removal financing, as explained below.
Leveraged financing: Advance market commitment
The financing for carbon storage startups is now facilitated by Frontier Climate, an advance market commitment (AMC) initiative that aims to accelerate the development of carbon removal technologies by matching demand and supply (Figure 5). Founded in April 2022 by major tech companies led by Stripe (Table 1), Frontier announced a $925 million advance market commitment (until 2030) for carbon dioxide removal (CDR) from companies developing CDR technology. Frontier is privately held, which means corporate accountability is due only to the founding partners (Stripe, Alphabet, Meta, Shopify and McKinsey). Frontier calls itself a first-mover coalition, poised to jumpstart the carbon removal marketplace. It envisages that carbon removal will cost $100/tCO2 (or less) by 2050.
Figure 5 Frontier acts as facilitator for matching suppliers and buyers of carbon removal capacity. The facilitator was founded by four major US and Canadian tech companies (Alphabet, Meta, Stripe and Shopify) and one US-based consultancy (McKinsey).
Parent | Commercial success | Founder owners
Alphabet | Google | Larry Page (51); Sergey Brin (51)
Meta | Facebook | Mark Zuckerberg (40)
Shopify | Online retail | Tobias Lütke (43)
Stripe | Payment processing | John Collison (34); Patrick Collison (36)
McKinsey | Consultancy | Corporate branch
Table 1 Frontier founders.
Technical and commercial consultants from academia and industry (listed on the Frontier website) vet, on behalf of buyers, the purchases of carbon storage solutions from high-potential carbon removal companies. There are 60 technical reviewers (individuals with a wide range of expertise) who review Frontier's carbon removal projects, and one can imagine they attempt to look into the rigour and transparency of the methods for monitoring and verification of the storage. The Frontier AMC agreements include a requirement of third-party life-cycle analysis (LCA), techno-economic analysis (TEA), or a generic measurement, reporting and verification (MRV) workflow, which, it seems, must also serve as proxies for independent storage validation. However, MRV based on US EPA guidelines and regulatory thresholds for the application of solid waste to agricultural land, including products containing trace metals, is not designed to verify quantities stored; it only provides safety thresholds for usage. Yet a carbon removal company like Mati (see Appendix A) uses it as a validation argument.
The criteria used by Frontier to vet carbon removal suppliers are listed in Table 2. In the present context, the verifiability requirement is relevant. Scientific rigour and transparency in monitoring and verification of the storage is expected. However, further details are wanting, and none of the 41 Frontier-sponsored companies inspected by the author (Appendix A) provide any open source details of substance.
However, without an objective set of specific guidelines on which methods to use for verification of storage capacity claims, the auditing and verification of such claims becomes a subjective process, dependent on choices made by the participating individuals. In contrast, a cornerstone of SPE-style verifiability of reserves and storage capacity is that the guidelines must be rigorous enough that, no matter which expert(s) execute(s) the resource estimation, the outcome is the same (or within a few percent error margin). This objectivity is achieved when the SEC/PRMS reserves reporting rules are applied, and the same level of objective verifiability is striven for in the SPE's SRMS.
Instead, it seems the new entrant businesses in carbon removal are reinventing the wheel and still pondering how to develop carbon removal and storage reporting guidelines. Another major problem with after-the-fact verification of AMC payment for quantities stored (using LCA, TEA, or MRV) is that no long-term business commitment is possible, as would be required for multi-year forward projections of storage capacity coupled with a discounted cash flow model.
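The coupling of forward storage projections with discounted cash flow can be sketched in a few lines; every input below (tonnage, credit price, costs, discount rate) is a hypothetical placeholder, not data from any project discussed here:

```python
# Minimal discounted cash flow sketch for a storage project (all inputs assumed):
def npv(cash_flows, rate):
    """Net present value of year-end cash flows at a constant discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

years = 5
tons_per_year = 250_000   # tCO2 injected annually (assumed)
price = 80.0              # $/tCO2 credit price (assumed)
opex = 12_000_000         # $/y operating cost (assumed)
capex = 60_000_000        # $ upfront investment (assumed)

annual_cf = tons_per_year * price - opex  # $8M/y net cash flow
project_npv = -capex + npv([annual_cf] * years, 0.10)
print(round(project_npv))
```

With these placeholder inputs the NPV is negative, which is exactly the kind of outcome a forward projection must expose before capital is committed; without multi-year delivery certainty, no such projection can be made.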
For example, Frontier signed a carbon removal agreement with Lithos Carbon in December 2023, worth $57.1 million for the removal of 154,240 tons of CO2 in the period between 2024 and 2028. One should realise that an AMC is a promissory agreement to buy a product, assuming it will be successfully developed; non-delivery would mean non-payment. Note again that the 154,240 tons of carbon dioxide to be stored by Lithos does not constitute proved storage capacity as intended by SRMS. There presently exists no regulation to enforce audits of the unprecedented containment of CO2, which makes it very difficult to verify whether the claimed amounts of storage capacity actually exist with P90 (90%) certainty. Lithos Carbon is a startup company with private investors only, which means no public reporting exists on the asset value in the balance sheet, or on the cash flow and profit/loss achieved in its storage endeavour. How the actual amounts of storage will be verified remains largely unclear.
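A quick arithmetic check on the Lithos agreement gives the implied unit price of the committed removal (contract figures as quoted in the text; the even split across years is an assumption for illustration):

```python
# Implied unit price of the Frontier-Lithos AMC, using the figures quoted in the text
amc_value = 57_100_000     # $ committed
tons_contracted = 154_240  # tCO2 to be removed between 2024 and 2028

price_per_ton = amc_value / tons_contracted
annual_tons = tons_contracted / 5  # assumes an even split over the five years

print(round(price_per_ton), round(annual_tons))  # ~370 $/tCO2, ~30848 tCO2/y
```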
Offering carbon removal solutions is now so in vogue that even Direct Air Capture (DAC) devices have gained advance market commitments from Frontier. DAC units are steel boxes with innards that can sequester 1000 tons of atmospheric carbon dioxide annually. The cost of carbon removal using DAC currently stands at between $600 and $1,950/tonCO2. Frontier has agreed to pay DAC producer Skyscraper $1 million for one DAC unit to remove 1000 tons of atmospheric carbon dioxide.
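The Skyscraper commitment can be checked against the quoted DAC cost range in one step:

```python
# Unit price implied by the quoted Frontier-Skyscraper commitment (figures from the text)
amc_payment = 1_000_000  # $ for one DAC unit
tons_removed = 1_000     # tCO2 of removal purchased

price_per_ton = amc_payment / tons_removed
print(price_per_ton)  # 1000.0 $/tCO2, inside the quoted $600-$1,950/tCO2 range
```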
Physical footprint | Takes advantage of carbon sinks and sources that do not compete for arable land
Cost | Has a path to being affordable at scale (<$100 per ton)
Capacity | Has a path to being a meaningful part of the carbon removal solution portfolio (>0.5 gigatons per year)
Net negativity | Maximises net removal of atmospheric carbon dioxide
Additionality | Results in net new carbon removed, rather than taking credit for removal that was already going to occur
Verifiability | Has a path to using scientifically rigorous and transparent methods for monitoring and verification
Safety and legality | Is working towards the highest standards of safety, compliance, and local environmental outcomes; actively mitigating risks and negative environmental and other externalities on an ongoing basis
Table 2 Criteria for durable carbon removal solutions applied to Frontier's portfolio.
Just to keep a sanity check on the numbers: a tree can grow 25 kg/y in dry mass and thereby captures 0.025 ton/y (Carbon Plan, 2024). A hundred trees capture 2.5 tons per year, and to capture 1000 tons of atmospheric carbon dioxide we would need to plant 40,000 trees. So should we instead pay $1 million for a DAC unit, even with the poor total life-cycle analysis outcome of DAC units? And how long will DAC units last?
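The tree comparison above can be reproduced in a couple of lines (figures as quoted in the text):

```python
# Back-of-envelope tree vs. DAC comparison, using the figures quoted in the text
capture_per_tree = 0.025  # tCO2/y per tree (Carbon Plan, 2024, as quoted)
dac_unit_capacity = 1000  # tCO2/y removed by one DAC unit

trees_equivalent = dac_unit_capacity / capture_per_tree
print(int(trees_equivalent))  # 40000 trees match one DAC unit
```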
Now wait a minute… was not Chesapeake nearly driven to bankruptcy, a decade ago, when it aggressively financed its business via advance production payments on gas resources it had yet to produce (Weijermars, 2012)? In fact, it did not possess the claimed forward production capacity, because its production forecasts were based on overoptimistic well deliverability. Arguably, the promissory notes of Frontier's advance market commitments have a status similar to advance production payments (APP), except that the credit payment of an AMC is not made upfront, as is the case with an APP.
Speculative financial engineering must now help solve the world's carbon storage problem. Meanwhile, the IPCC keeps studying: it recently commissioned a Report on Carbon Dioxide Removal Technologies, Carbon Capture Utilisation and Storage Activities, to be completed by 2027. The first step along the path towards the preparation of this report was an expert meeting held in Vienna, Austria, in July 2024. A 2015 Financial Times article was titled 'Carbon capture: Miracle machine or white elephant?' Ten years on, we still do not know the answer.
Discussion and recommendations
SRMS modifications. The SPE was soliciting input until 1 October 2024 on how to improve the SRMS. One suggestion could be to change the somewhat cryptic term 'capacity' (see Figures 1a,b) into 'storage capacity'. Another issue is that SPE does not specify the reporting unit for storage resources; it seems advisable to recommend the metric ton, whereas volumetric units are used for oil and gas resources. Then there is the suggested replacement of 'inject' by 'sequester' to broaden the SRMS application potential. Also, a fifth criterion needs to be added: (5) the operator should own, or hold a lease for, storage in the reservoir (or sequestration at the land surface) valid for the life of the project, in addition to fulfilling the EPA regulatory requirements.
Storage capacity reporting. Regulation of reserves reporting is a sovereign responsibility. In the US, this responsibility is delegated to the SEC for stock-listed entities; even foreign companies listed on the NYSE must comply with SEC hydrocarbon reserves reporting guidelines (closely aligned with, but not identical to, PRMS; for example, the SEC mandates additional constraints on the current-year commodity price assumption used to report the fair value of proved reserves). Other countries have similar reserves reporting guidelines and, instead of resorting to PRMS-based resource classification (SRMS, 2017, 2024), use the comparable but independent UNFC classification framework (UNECE, 2019).
Figure 6 Blue hydrogen production from natural gas with GCS and green hydrogen from renewable energy sources prevent atmospheric pollution by combustion engines in the transportation system. Image courtesy energyinst.org.
Organisations like the International Energy Agency put much emphasis on the need to adopt an SRMS-type CO2 capacity classification (IEA, 2022). However, no regulations exist for carbon storage capacity reporting. Only environmental regulation is binding, with the EPA required to give approval for Class VI injection wells in GCS projects. EU-based projects need to comply with similar environmental regulation for carbon sequestration. It seems high time, however, that carbon storage resource reporting (based on either an SRMS- or UNFC-type classification) be mandated by governments. Strict guidelines are needed for validation of carbon storage capacity claims, both to protect investors and to avoid private and public funds being squandered on poorly studied pilot projects of privately owned companies with little incentive to share data and lessons learnt.
Blue hydrogen and carbon storage. Blue hydrogen is produced by splitting natural gas molecules and storing the carbon dioxide molecules in the subsurface (Renssen, 2020). It is an effective way to reduce atmospheric pollution caused by fossil-fuelled combustion engines, because using blue hydrogen produces no carbon emissions at the point of use (Figure 6). Permanent GCS is necessary for the produced CO2, and adoption of SRMS-style capacity classification and reporting would therefore help make asset values explicit. Additionally, temporary storage capacity for hydrogen will be needed to buffer demand fluctuations.
Conclusions
There is a disconcerting gap between the regulatory compliance and corporate diligence in oil and gas reserves reporting by traditional oil and gas companies and the compliance likewise expected in the nascent carbon capture and storage (CCS) business. A brief overview was given of the startup companies active in the fledgling and highly speculative business of carbon removal technology solutions. By sketching how the new industry spreads responsibility for reporting compliance across different corporate entities, it appears there is room for much more aggressive regulation of the sector, to help investors separate the 'would-be' from the 'competent' providers of carbon removal and storage solutions.
Acknowledgement
The author acknowledges the generous support provided by the College of Petroleum Engineering & Geosciences (CPG) at King Fahd University of Petroleum & Minerals (KFUPM).
Disclaimer
The assertions made in this paper are based on the research and expertise of the author; EAGE and KFUPM are not liable for the findings and opinions expressed.
Appendix A — Carbon removal startups and capture technology choices of tech companies
Tech companies have large carbon footprints due to the enormous power consumption of their server stations around the world. This leads them to invest in carbon removal credit purchases. For example, Microsoft announced in July 2024 that it will purchase 500,000 tons of carbon dioxide removal (CDR) credits from 1PointFive, a carbon capture, utilisation and sequestration (CCUS) subsidiary of Occidental Petroleum. The CO2 will be stored at the Stratos DAC facility in Ector County, Texas, which aims to pump up to 500 kton of DAC-sourced CO2 per year into the subsurface. Stratos is expected to be commercially operational in 2025. There will be an EPA-approved monitoring program for the Class VI injection wells, but there is no mention of SRMS-style capacity classification and reporting.
Other tech companies (Alphabet, Meta, Shopify and Stripe) have elected to purchase credits via carbon removal startup companies they help jumpstart in their Frontier initiative. Table A1 lists 41 projects contracted under a combination of Carbon Removal Purchase Agreements and Advance Market Commitments (2022-2024). The total carbon removal quantity under contract until 2030 is 26,031 tCO2. Thus far, participants come from the US (26), Canada (4), UK (3), Germany (2), Israel (2), Australia (2), Mexico (1), and Singapore (1).
The projects are focused on sequestration of carbon through (1) weathering-enhanced mineralisation on crop fields, with naturally present or artificially added micro-organisms, (2) mineralisation in the subsurface, (3) leaching of mine tailings, (4) turning slag into carbonatite and road pavement, (5) liming of river and/or ocean water, (6) biowaste disposal via injection into salt caverns, (7) dumping of biowaste in anoxic deep ocean basins, (8) surface vaulting of biowaste, and (9) DAC units.
From a research perspective, no new technologies are proposed, because all of the concepts have been around for decades. But perhaps Frontier's funding and AMCs may help to turn some ideas into practically implementable storage solutions. However, the pricing of the pilot studies of the 41 companies listed in Table A1 is still exceptionally high, reaching $1,960/tCO2, with a mean price of $911/tCO2. Considering that current carbon taxation averages $48.56/tCO2 in Europe, the project portfolio of Frontier (Table A1) only underscores how far we still are from economically viable solutions for durable carbon capture.
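To put the portfolio pricing in perspective, the ratio of the mean pilot price to the average European carbon tax (both figures as quoted above) is easily computed:

```python
# Ratio of mean Frontier pilot pricing to the average European carbon tax
mean_pilot_price = 911.0  # $/tCO2, portfolio mean quoted in the text
avg_carbon_tax = 48.56    # $/tCO2, average European carbon tax quoted in the text

multiple = mean_pilot_price / avg_carbon_tax
print(round(multiple, 1))  # pilot pricing is roughly 19x the average carbon tax
```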
References
Afagwu, C. and Weijermars, R. [2024]. Sensitivity Analysis of CO2-Migration Paths in Geological Carbon-Dioxide Sequestration: Case Study of the Gorgon GCS Project. https://dx.doi.org/10.2139/ssrn.4985775.
Carbon Plan [2024]. https://carbonplan.org/research/dac-calculator.
Chadwick, A.J. and Noy, A.D.J. [2015]. Underground CO2 storage: demonstrating regulatory conformance by convergence of history-matched modeled and observed CO2 plume behavior using Sleipner time-lapse seismics. Greenhouse Gases Science and Technology, 5, 305-322. https://doi.org/10.1002/ghg.1488.
EPA [2010]. Federal Requirements Under the Underground Injection Control (UIC) Program for Carbon Dioxide (CO2) Geologic Sequestration (GS) Wells; Final Rule. https://www.epa.gov/uic/federal-requirements-under-underground-injection-control-uic-program-carbon-dioxide-co2.
EU [2009]. Directive 2009/31/EC of the EU on the geological storage of carbon dioxide. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32009L0031.
Hansen, H., Eiken, O. and Aasum, T.O. [2005]. Tracing the path of carbon dioxide from a gas-condensate reservoir, through an amine plant and back into a subsurface aquifer. Case study: The Sleipner area, Norwegian North Sea. SPE 96742. https://doi.org/10.2118/96742-MS.
Hauber, G. [2023]. Norway's Sleipner and Snøhvit CCS: Industry models or cautionary tales. Institute for Energy Economics and Financial Analysis. https://ieefa.org/resources/norways-sleipner-and-snohvit-ccs-industry-models-or-cautionary-tales.
[Table A1 fragment: column headers '# | Carbon removal solution | Company name'; row 1: Weathering and mineralisation on crop fields or algal mass (Andes Ag, InPlanet, Nitricity, Lithos, Living Carbon, ...); remainder of the table not recovered.]
IEAGHG [2023]. Classification of Total Storage Resources and Storage Coefficients. https://ieaghg-publications.s3.eu-north-1.amazonaws.com/Technical+Reports/2023-05+Classification+of+Total+Storage+Resources+and+Storage+Coefficients.pdf.
PRMS [2018]. Petroleum Resources Management System. Society of Petroleum Engineers. https://www.spe.org/media/filer_public/0c/83/0c835db9-501f-4ce7-97f1-a1d6bb4e3331/prmgmtsystem_v103.pdf.
PRMS [2024]. Call for public input for possible revisions of PRMS 2018 Petroleum Resources Management System. Society of Petroleum Engineers. https://jpt.spe.org/petroleum-resources-management-system-public-comments-period-open.
Renssen, S. van [2020]. The hydrogen solution? Nature Climate Change, 10, 799–801. https://doi.org/10.1038/s41558-020-0891-0.
SRMS [2017]. CO2 Storage Resources Management System. https://www.spe.org/en/industry/co2-storage-resources-management-system/.
SRMS [2024]. Call for public input on SRMS Draft 6 of 2024 revisions of SRMS 2017. https://www.spe.org/media/filer_public/6f/2f/6f2ff44b-abae-4894-821a-f17c35df22ea/srms_draft6_public_review.pdf.
Trupp, M., Frontczak, J., and Torkington, J. [2013]. The Gorgon CO2 Injection Project – 2012 Update. Energy Procedia, 37, 6237-6247. https://doi.org/10.1016/j.egypro.2013.06.552.
Trupp, M., Ryan, S., Barranco, I., Leon, D., Scoby-Smith, L. [2021]. Developing the world’s largest CO2 Injection System – a history of the Gorgon Carbon Dioxide Injection System. Proceedings of the 15th Greenhouse Gas Control Technologies Conference, 15-18 March 2021. http://dx.doi.org/10.2139/ssrn.3815492.
UNECE [2019]. United Nations Framework Classification for Resources. https://unece.org/sites/default/files/2023-10/UNFC_ES61_ Update_2019.pdf.
VCMI [2024]. Claims Code of Practice: Building integrity in voluntary carbon markets. August 2024, version 2.1. https://vcmintegrity.org/document/vcmi-claims-code-of-practice-aug-2024-version-2-1/.
Weijermars, R. [2011]. Security of Supply - Operational Margins at the Wellhead and Natural Gas Reserve Maturation. AAPG Search and Discovery Article #70106 of Forum Presentation, AAPG General Assembly A, 12 April 2011. http://www.searchanddiscovery.com/ documents/2011/70106weijermars/ndx_wei...
Weijermars, R. [2012]. Credit Ratings and Cash Flow Analysis of Oil & Gas companies: Competitive disadvantage in financing costs for smaller companies in tight capital markets. SPE Economics & Management, 3(2), 54-67 (SPE paper 144489). https://doi.org/10.2118/144489-PA.
Weijermars, R. [2024]. Concurrent Challenges in Practical Operations and Modeling of Geological Carbon-dioxide Sequestration: Review of the Gorgon Project and FluidFlower Benchmark Study. Energy Strategy Reviews, in press.
Weijermars, R. and Hulbert, M. [2012]. Shale gas assets – overpriced or a liquid turn for Mining Giant BHP? The Oil Drum, Online, 14 August 2012, 4 pages. http://www.theoildrum.com/node/9397.
Weijermars, R. and Moeller, J. [2020]. Saudi Aramco Privatization in Perspective: Financial Analysis and Future Implications. Journal of Finance and Economics, 8(4), 161-170. doi: 10.12691/jfe-8-4-2.
Weijermars, R. and Van der Linden, J. [2012]. Assessing the Economic Margins of Sweet Spots in Shale Gas Plays. First Break, 30(12), 99-106. https://doi.org/10.3997/1365-2397.30.12.65623.
To serve the interests of our members and the wider multidisciplinary geoscience and engineering community, EAGE publishes a range of books and scientific journals in-house. Our extensive, professional marketing network is used to ensure publications get the attention they deserve.
EAGE is continually seeking submissions for both book publishing and articles for our journals.
A dedicated and qualified publishing team is available to support your publication at EAGE.
CALENDAR OF EVENTS
20-21 Nov Asia Petroleum Geoscience Conference and Exhibition (APGCE) icep.com.my/apgce
25-26 Nov First EAGE/SBGf Conference on The Roadmap to Low Carbon Emissions in Brazil www.eage.org
December 2024
3-5 Dec First EAGE Symposium & Exhibition on Geosciences for New Energies in America www.eagenewenergies.org
EAGE Workshop on Near Surface Geoscience & Mineral Exploration in Latin America
Part of First EAGE Symposium & Exhibition on Geosciences for New Energies in America
EAGE Workshop on Geothermal Energy in Latin America
Part of First EAGE Symposium & Exhibition on Geosciences for New Energies in America
EAGE Workshop on Water Footprint
Part of First EAGE Symposium & Exhibition on Geosciences for New Energies in America
3-5 Dec 3rd SEG/EAGE Workshop on Geophysical Aspects of Smart Cities www.eage.org
11 Dec 1st CEEC London Roadshow www.ceecsg.org/1st-ceec-london-roadshow/
Mexico City, Mexico
February 2025
3-4 Feb EAGE Workshop on Carbon Capture and Storage (CCS) in Basalts www.eage.org
17-19 Feb EGYPES 2025 www.egypes.com
18-20 Feb International Petroleum Technology Conference (IPTC) 2025 www.iptcnet.org
20-21 Feb GeoTHERM Expo & Congress 2025 www.geotherm-offenburg.de/en
27-28 Feb First EAGE Workshop on the Triassic and Jurassic Plays in Northwest Europe www.eage.org
March 2025
19-21 Mar 2nd AAPG/EAGE Papua New Guinea Petroleum Geoscience Conference & Exhibition www.eage.org
24-26 Mar 5th EAGE Digitalization Conference and Exhibition www.eagedigital.org
April 2025
2-4 Apr 23rd European IOR+ Symposium www.ior2025.org
14-16 Apr Fifth EAGE Well Injectivity & Productivity in Carbonates Workshop (WIPIC) www.eage.org
14-17 Apr International Scientific Conference on Monitoring of Geological Processes and Environmental Conditions www.eage.org
29-30 Apr EAGE Workshop on Advanced Seismic Solutions for Complex Reservoir Challenges www.eage.org
May 2025
5-7 May First EAGE Atlantic Geoscience Resource Exploration & Development Symposium www.eage.org
8-9 May First EAGE Workshop on Land Seismic Acquisition www.eage.org
August 2025
26-27 Aug EAGE Workshop on Machine Learning for Geoscience www.eage.org
September 2025
Sep Second EAGE Workshop on Hydrogen (Exploration and Development) www.eage.org
1-4 Sep World CCUS Conference www.wccus.org
7-11 Sep Near Surface Geoscience Conference and Exhibition www.eagensg.org
7-12 Sep 32nd International Meeting on Organic Geochemistry (IMOG) www.imogconference.org
9-11 Sep Second EAGE Conference and Exhibition on Guyana-Suriname Basin www.eage.org
14-18 Sep Seventh International Conference on Fault and Top Seals www.eage.org
16-18 Sep The Middle East Oil, Gas and Geosciences Show (MEOS GEO) www.meos-geo.com