

Introduction to Exploration Geophysics with Recent Advances

Martin Landrø and Lasse Amundsen

Editor: Jane Whaley

Dedications

To my wife Eli Reisæter, without whose faith and support this book could not have been written. — Lasse Amundsen

To my wife Ingjerd Landrø, for warm and thoughtful support throughout the writing of this book – and in life. — Martin Landrø

Introduction to Exploration Geophysics with Recent Advances
© 2018 Martin Landrø and Lasse Amundsen. All rights reserved. No portion of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, scanning, or by any information storage and retrieval system, without the express permission of the authors and the publisher.
First Published 2018.
Design and Layout: Bruce Winslade, Stroud, United Kingdom. Email: [email protected].
Front cover image: Håkon Gullvåg.
Printed and bound in Riga, Latvia.
Publisher: Bivrost. Email: [email protected]
ISBN: 978-82-303-3764-6

GEOSCIENCE & TECHNOLOGY EXPLAINED


Acknowledgements

Our cooperation with GEO ExPro started when Halfdan Carstens, who was the editor at that time, approached Martin and asked for a short technical article about the new GeoStreamer®, which had been launched by PGS that same year. Martin accepted, under the condition that a co-author could be found. The person who accepted, under mild pressure, was Lasse, who has continued in that role. Without this initiative from Halfdan, this book would never have been written, and we are therefore greatly indebted to him for approaching us and pushing us on to write many more stories. The first article was published in September 2007, and since then we have contributed to every issue; a total of 62 articles by the end of 2017.

The cooperation with both the first and second editors of GEO ExPro, Halfdan Carstens and Jane Whaley, has been excellent over all these years. Jane has done a tremendous job of carefully reviewing all the contributions. Without her continuous efforts and innovative suggestions for improvements, we would not have been able to accomplish this book project. We are grateful for the good collaboration which we have had with GEO ExPro and the support that we have received from GEO ExPro’s Directors Kirsti and Tore Karlsson.

We are grateful to many colleagues who have contributed to sections of the book, including: Kent Andorsen, Børge Arntsen, Olav Barkved, Vidar Danielsen, Per Eivind Dhelie, Rigmor Mette Elde, Jo Firth, Christine Fischler, Per Gunnar Folstad, Eivind Frømyr, Kjetil Eik Haavik, Nirina Haller, Arild Jørstad, Einar Kjos, Tron Kristiansen, Jan Langhammer, Jan Erik Lie, Lars Ole Løseth, Jürgen Mienert, Odleiv Olesen, Johan O. A. Robertsson, Hans Christen Rønnevik, Thomas Røste, Mark Thompson, Dirk-Jan van Manen and Vetle Vinje. We thank Per Arne Bjørkum, Kenneth Duffaut, Svein Ellingsrud, Christine Fischler, Stig-Kyrre Foss and Odleiv Olesen for reading and commenting on draft chapters and for specific suggestions to improve the text.

Without the help of many individuals providing figures and illustrations, this book would have been much inferior, or even impossible. We thank Aker BP, Chevron, ConocoPhillips, Earthmoves, CGG, EMGS, Institute of Marine Research (IMR), IOGP, Lunar and Planetary Institute, Lundin, Magseis, MicroSeismic Inc., NASA, NGU, Norwegian Petroleum Directorate (NPD), Open University (OU) UK, Optoplan, Pemex, PGS, Polarcus, WesternGeco/Schlumberger, SeaBird Technologies, Sercel, Statoil and the Directorate General of Oil and Natural Gas, Ministry of Energy and Mineral Resources of the Republic of Indonesia for providing illustrations. We thank Christine and Birgitte Reisæter Amundsen for making wonderful drawings of fish at age 10 and, a few years later, for contributing the chapter on the mysteries of space.

We are very grateful to Per Høiem, the owner of Galleri Ismene in Trondheim, for helping us find the book cover, which is Noah’s Ark by Håkon Gullvåg (1959–). Gullvåg is one of Norway’s foremost artists. Since 1995, he has received several large commissions for liturgical artworks, while also receiving acclaim for his portrait project of well-known Norwegians, including in recent years his portraits of the Norwegian king and queen. We sincerely thank Gullvåg for allowing us to use his famous graphic work on the book’s cover. Noah’s Ark is one of the few biblical stories that most people know. “Make for yourself an ark of gopher wood; you shall make the ark with rooms, and shall cover it inside and out with pitch.” (Genesis 6:14). Traditionally, the pitch that was used to waterproof the Ark has been understood to be bitumen, a petroleum-based product. We do not, however, enter into a discussion of how the young-earth biblical age perspective of about 6,000 years fits with the old-earth perspective required for oil and bitumen to form.

We sincerely thank Inge Grødum and Geco for making available Inge’s illustrations, which were made for Geco around 1980. Grødum (1943–) is a Norwegian illustrator. He worked for the newspaper Nationen from 1973 to 1987 and for Aftenposten since 1987. He has published several books of collections of his editorial cartoons.

The preparation of this book would not have been possible without the wholehearted cooperation and assistance of the Editor-in-Chief of GEO ExPro, Jane Whaley, who edited the book. We thank Jane for her many ideas and helpful guidance in preparing the book. We sincerely thank Bruce Winslade for the layout and production, and Dominique Shead for painstakingly proof-reading the whole book.

We are forever grateful to Lundin Norway for sponsoring this NTNU project and making the book available to Norwegian universities and high schools.


Contents

Acknowledgements ... v
Preface ... viii

Chapter 1: Looking into the Earth
1.1 Introduction ... 3
1.2 Geophysical Exploration Methods… and Some History ... 6
1.3 Seismic Methods ... 11
1.4 Electromagnetic (EM) Methods ... 36
1.5 Gravimetry and Magnetometry ... 45
1.6 Satellite Data ... 50
1.7 Rock Physics ... 52

Chapter 2: Elements of Seismic Surveying
2.1 A Brief History ... 58
2.2 4C – Ocean-Bottom Surveys ... 66
2.3 4C-OBS Opportunities ... 70
2.4 4C-OBS: Superior Imaging ... 75
2.5 Towards Full Azimuths ... 78
2.6 The Ultimate Marine Seismic Acquisition Technique? ... 83
2.7 Codes and Ciphers: Decoding Nature’s Disorder ... 84
2.8 Simultaneous Source Separation ... 88

Chapter 3: Marine Seismic Sources and Sounds in the Sea
3.1 Airguns for Non-Experts ... 94
3.2 Airgun Arrays for Non-Experts ... 100
3.3 A Brief History of Marine Seismic Sources ... 103
3.4 High Frequency Signals from Airguns ... 107
3.5 The Far Field Response of an Airgun Array ... 111
3.6 Sound in the Sea ... 116
3.7 A Feeling for Decibels ... 125
3.8 Marine Mammals and Seismic Surveys ... 128
3.9 The Hearing of Marine Mammals ... 132
3.10 Fish are Big Talkers ... 135
3.11 Fish Hear a Great Deal ... 138
3.12 Seismic Surveys and Fish ... 142
3.13 Effect of Seismic on Crabs ... 149
3.14 A New Airgun: eSource ... 152

Chapter 4: Reservoir Monitoring Technology
4.1 An Introduction to 4D Seismic ... 156
4.2 Time-Lapse Seismic and Geomechanics ... 166
4.3 Time-Lapse Refraction Seismic ... 170
4.4 Measuring Seismic with Light ... 174
4.5 The Valhall LoFS Project ... 177
4.6 The Ekofisk LoFS Project ... 183
4.7 The Snorre PRM Project ... 187
4.8 The Grane PRM Project ... 190

Chapter 5: Broadband Seismic Technology and Beyond
5.1 The Drive for Better Bandwidth and Resolution ... 194
5.2 Exorcising Seismic Ghosts ... 198
5.3 The Interaction between Ocean Waves and Recorded Data ... 202
5.4 PGS’s GeoStreamer – Mission Impossible ... 206
5.5 GeoStreamer – A Double Win ... 210
5.6 CGG’s BroadSeis – A Change of Thinking ... 214
5.7 BroadSeis in the Search for Oil ... 218
5.8 IsoMetrix – Isometric Sampling ... 224
5.9 Which Technology to Choose? ... 228

Chapter 6: Gravity and Magnetics for Hydrocarbon Exploration
6.1 Gravity for Hydrocarbon Exploration ... 234
6.2 Magnetics for Hydrocarbon Exploration ... 238

Chapter 7: Supercomputers for Beginners
7.1 Introducing Supercomputers ... 244
7.2 Parallel Processing ... 248
7.3 GPU-Accelerated Computing ... 251
7.4 Quantum Computers ... 255

Chapter 8: Gas Hydrates
8.1 Burning Ice ... 260
8.2 Rock Physics: An Introduction ... 264
8.3 Where are Gas Hydrates Found? ... 270
8.4 Gas Hydrates: The Resource Potential ... 274
8.5 Hydrates in the Arctic ... 278
8.6 Gas Hydrate Plumes and Vents ... 282
8.7 Hydrates in Outer Space ... 285

Chapter 9: Dwelling on the Mysteries of Space
The Universe Through Time ... 292
9.1 From Big Bang to Our Solar System and Beyond ... 294
9.2 Planets on the Move ... 312
9.3 Is Anybody Out There? ... 321
9.4 Gravitational Waves ... 325

References ... 328
Bibliography ... 338
Epilogue ... 341


Preface

The purpose of this book is to give an introduction to exploration geophysics.

Geophysicists explore the earth by making physical measurements at the surface. They use these measurements to map subsurface rocks and their fluids at all scales, and to describe the subsurface rocks in physical terms – velocity, density, electrical resistivity, magnetism, and so on. In this book we explain how geophysics is used to ‘look into’ the earth and how geophysical technologies help us to understand the solid subsurface beneath our feet.

The book is derived from articles in the column ‘Recent Advances in Technology’ in the interdisciplinary magazine GEO ExPro, which the authors have been writing since 2007. Each article is based on our research and experience, with help from internet sources as well as from several colleagues from industry and academia who have contributed as guest authors. Each technical subject has its own chapter, and can be read independently of other chapters, but to make it easier to read, we have prepared an introductory chapter called ‘Looking into the Earth’, which outlines the scope of geophysical methods and gives an overview demonstrating how everything in petroleum geophysics is interrelated. We focus on basic principles and from there explain the recent advances within various exploration techniques, hopefully in an accessible way with limited use of equations but with many examples. We acknowledge that the area of application of geophysical methods is very wide, and much has had to be omitted. Our selection of topics inevitably reflects personal interest and experience, which has led to a bias towards descriptions and examples based on our extensive expertise in the marine seismic industry in north-west Europe.

The chapters are:

1. Looking into the Earth: an introduction to geophysics and geophysical exploration methods, including seismics, electromagnetics, gravimetry and magnetometry and the use of satellite data, followed by illustrative data examples of how geophysics is used to look into the earth. We show how rock physics can translate geophysical observations into reservoir properties.

2. Elements of Seismic Surveying: a brief history of seismic and, in particular, seismic surveying in the North Sea. We introduce four-component ocean-bottom seismic surveying and imaging, as well as wide and full azimuth seismic surveying. In Codes and Ciphers we look at ways of decoding nature’s disorder, and use that to discuss simultaneous sources as one avenue to improving the way geophysicists deliver high-quality data while maintaining viability in a cost-focused market.


3. Marine Seismic Sources and Sounds in the Sea: we summarise salient points for geoscientists who need to sharpen their rusty skills in seismic source technology. We give you a feeling for decibels in air and water; discuss the effect seismic sources may have on marine life; and present sound and sound modelling in the sea.

4. Reservoir Monitoring Technology: a review of the use of time-lapse (4D) seismic for reservoir monitoring and an introduction to the noble art of analysing data. We discuss typical monitoring parameters: fluid saturation, reservoir pressure, and reservoir compaction. Rapid implementation of 4D technology has led to increased hydrocarbon production through infill drilling. It is also effective in the early detection of unwanted and unforeseen reservoir developments, such as gas breakthrough and sudden pressure increases. Finally, we present several field examples from Life of Field Seismic (LoFS) / Permanent Reservoir Monitoring (PRM) over Valhall, Ekofisk, Snorre and Grane.

5. Broadband Seismic Technology and Beyond: an introduction to what broadband seismic is, what this technology has to offer, and the various broadband technologies that have recently been commercialised; and we describe how seismic ghosts can be exorcised.

6. Gravity and Magnetics for Hydrocarbon Exploration: how unexplored sedimentary basins can be unravelled by gravity, and how gravity and magnetics are used in cross-disciplinary workflows for hydrocarbon exploration.

7. Supercomputers for Beginners: an introduction to supercomputers: how to design parallel software, what GPU-accelerated processing is, supercomputers and seismic imaging, and quantum computers.

8. Gas Hydrates: gas hydrates have often been mentioned as an important possible source of natural gas. We give simple explanations of what gas hydrates are, where they can be found on Earth and what their physical properties are. We also discuss the possibilities of gas hydrates on Mars and elsewhere in outer space.

9. Dwelling on the Mysteries of Space: this last chapter looks at the monsters, marvels and machinery of the universe, and some important discoveries that have led the way to mind-boggling and fascinating questions at the cutting edge of physics today: from the Big Bang to our solar system; observations of hot Jupiters, which have opened the possibility that our solar system was not born in the configuration that we see today; and gravitational waves, which give us a new window on the universe, back to the beginning of time. Final answers cannot be given, but the chosen topics show where the physical sciences are making contributions and are affecting our thinking and understanding of the world.

Martin Landrø

has been a professor at the Norwegian University of Science and Technology (NTNU) since 1998. He received an M.Sc. (1983) and a Ph.D. (1986) in physics from the Norwegian University of Science and Technology. From 1986 to 1989 he worked at SERES A/S and from 1989 to 1996 he was employed at IKU Petroleum Research as a research geophysicist and manager. From 1996 to 1998 he worked as a specialist at Statoil’s research centre in Trondheim, before joining NTNU. He received the Norman Falcon award from EAGE in 2000 and the award for best paper in GEOPHYSICS in 2001. In 2004 he received the Norwegian Geophysical award, and in 2007 Statoil’s researcher prize. He received the SINTEF award for outstanding pedagogical activity in 2009. In 2010 he received the Louis Cagniard award from EAGE and in 2011 the Eni award (New Frontiers in Hydrocarbons), followed by the Conrad Schlumberger award from EAGE in 2012. In 2014 he received the IOR award from the Norwegian Petroleum Directorate. Landrø’s research interests include seismic inversion, marine seismic acquisition, and 4D and 4C seismic, including geophysical monitoring of CO2 storage. He is a member of EAGE, SEG, The Norwegian Academy of Technological Sciences and The Royal Norwegian Society of Sciences and Letters.

The book is designed primarily to reach students of all academic levels, including those at high school, college and university, as well as geo-students who plan to enrol in exploration geophysics graduate programmes. The authors also hope that the book will be a resource for those just entering the industry as well as for experienced professionals who are interested in learning more.

Lasse Amundsen and Martin Landrø
Trondheim, 1 April 2017

Lasse Amundsen

is Senior Advisor at Statoil and since 1995 has been Adjunct Professor at the Norwegian University of Science and Technology (NTNU). He received an M.Sc. in astrophysics in 1983 and a Ph.D. in physics in 1991 from NTNU. From 1983 to 1985 Amundsen worked with Geophysical Company of Norway (GECO) as a geophysicist and from 1985 to 1991 with IKU Petroleum Research as a research geophysicist. He was Adjunct Professor, University of Houston, Texas, from 2005 to 2008, and Chief Researcher at Statoil from 2008 to 2014. He received the Norwegian Geophysical Prize in 2002 and EAGE’s Conrad Schlumberger Award in 2010. He is co-author of the SEG book Introduction to Petroleum Seismology (2005 and 2018). Amundsen’s current research deals with the theory of wave propagation in acoustic, elastodynamic, and electromagnetic media, with application to petroleum-related geophysical problems. Amundsen is a member of SEG, EAGE, the Norwegian Physical Society, and The Royal Norwegian Society of Sciences and Letters.


Chapter 1

Looking into the Earth

“Live as if you were to die tomorrow. Learn as if you were to live forever.”
Mahatma Gandhi

Courtesy WesternGeco

Marine seismic survey geometry superimposed on top of the city of Oslo. This picture serves to illustrate the main components and the size of a conventional 3D marine seismic acquisition system, referred to as the largest moving man-made object on Earth. The vessel tows two airgun source arrays with three sub-array strings in each (white lines). In 2010, seismic vessels typically towed in the order of 10 streamers (yellow lines), 6,000m long, with 100m separation between the streamers. Today, we are seeing a range of streamer spreads, formatted to optimise survey productivity. The most advanced vessels can deploy up to 24 streamers, 10 km in length. As the streamers get longer, the recording time for any one shot must also increase in order to capture the returning signal at the longer source-receiver offsets.


Inge Grødum/Geco

Figure 1.1: Petroleum exploration is a very old activity. Every petroleum-bearing basin in the world has numerous oil seeps where oil leaks up to the surface. From the time of Noah, pitch (the black glue-like substance left behind when coal tar is heated) has been used to make boats waterproof. Early uses of oil also included medication and warfare. People once believed that oil collected in large underground pools of liquid; all we needed to do was to find the pool and tap into it. Nothing could be further from the truth. Hydrocarbons are found in the pores of the reservoir rock, usually sandstones or coarse-grained limestones. Illustration by Inge Grødum, made for Geco around 1980, showing the Viking ‘Leiv’ investigating oil rock.


1.1 Introduction

The world’s oil and gas – collectively known as petroleum, which is Latin for ‘rock oil’ – is formed from countless dead microscopic marine organisms like plankton, algae and bacteria. They piled up on the seabed as thick sludge and, together with plants and dead animals (including occasionally turtles, crocodiles and dinosaurs), were gradually buried by the minerals and sediments that accumulated on top of them. Bacteria then rotted the organisms into a substance called kerogen (from Greek kero, κηρός ‘wax’ and -gen, γένεση ‘birth’). Over millions of years, as this kerogen was buried beneath several kilometres of sediments, heat and pressure cooked it into oil and gas. The ‘oil window’ is found in the 60–120°C interval (approximately 2–4 km depth), while the ‘gas window’ is found in the 100–200+°C interval (about 3–6 km depth).

In north-west Europe, the Kimmeridge Clay Formation, deposited around 150 million years ago, is the most economically important unit of such rocks, being the major source rock for oil fields in the North Sea hydrocarbon province. The Kimmeridge Clay, named after the village of Kimmeridge on the Dorset coast of England, is also exposed as grey cliffs on the Dorset coast, forming part of the Jurassic Coast World Heritage Site.

Figure 1.2: Petroleum… before it becomes petroleum: plankton, derived from the Greek word planktos, which means wanderer or drifter, are free-floating organisms riding the ocean currents. Individual zooplankton or animal plankton (a) are usually microscopic, but some (such as jellyfish) are larger and visible to the naked eye. Petroleum can be traced to the burial of marine organisms, primarily prehistoric zooplankton and algae. Zooplankton eat by ingesting objects they encounter in the water, including phytoplankton (b) – plant plankton that make their own food by transforming energy from the sun via photosynthesis – as well as other zooplankton and detritus.


Jane Whaley

Figure 1.3: Kimmeridge Clay exposed at Kimmeridge Bay on the southern coast of England. The yellowy-brown, more resistant layers are composed of dolomite.


Although there is evidence for the non-organic origin of methane gas, the majority of geoscientists consider oil and gas found in sedimentary basins to originate from source rocks. Without these, all the other components and processes needed to form hydrocarbons become irrelevant. The source rock’s hydrocarbon-generating potential is directly related to its volume (thickness times areal extent), organic richness and thermal maturity (exposure to heat over time).
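To put rough numbers on the ‘oil window’ and ‘gas window’ intervals quoted at the start of this section, one can assume a simple linear geothermal gradient. The short Python sketch below does just that; the 10°C surface temperature and 30°C/km gradient are typical round values of our own choosing for illustration, not figures from the book.

```python
# Rough depths of the oil and gas windows from an assumed linear
# geothermal gradient: T(z) = T_surface + gradient * z.
# The 10 degC surface temperature and 30 degC/km gradient are
# illustrative averages; real basins vary considerably.

SURFACE_TEMP_C = 10.0
GRADIENT_C_PER_KM = 30.0

def depth_km(temperature_c):
    """Depth (km) at which the assumed gradient reaches a given temperature."""
    return (temperature_c - SURFACE_TEMP_C) / GRADIENT_C_PER_KM

print(f"Oil window (60-120 degC): {depth_km(60):.1f}-{depth_km(120):.1f} km")
print(f"Gas window (100-200 degC): {depth_km(100):.1f}-{depth_km(200):.1f} km")
# Output: roughly 1.7-3.7 km and 3.0-6.3 km, consistent with the
# approximate 2-4 km and 3-6 km intervals quoted in the text.
```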

In the Earth’s history, most rich source rocks were deposited during six time periods that favoured their deposition and preservation, starting with the Silurian (444–416 million years ago, or Ma) and including the above-mentioned Late Jurassic (165–145 Ma), up to the Oligocene–Miocene period (34–5 Ma) (see Figure 1.4).

Figure 1.4: The geological time scale is used by earth scientists to describe the timing and relationships between events that have occurred throughout the 4.6 billion years of Earth’s history. It is organised into intervals based on important changes that have been seen in the geologic record. The largest defined unit of time is the eon. Eons then are divided into eras, which in turn are divided into periods, epochs and ages. Eras, periods and epochs are usually named after places on Earth where the rocks of those times were first discovered. More specifically, eras or ‘chapters’ in Earth’s history are separated according to the nature of life they contained. Eras begin and end with dramatic changes in the types of animals and plants living on Earth. Periods are based upon the nature of the rocks and fossils found there. Life on Earth started 3.5 billion years ago. During the Cambrian period of the Palaeozoic era, shellfishes and corals developed. In the Silurian period, around 435 million years ago, the first land plants, swamp trees and primitive reptiles evolved, and around 225 million years ago during the Triassic period the dinosaurs came on the scene. The Jurassic saw the first mammals and birds. Around 2 million years ago, in the Pliocene, Homo habilis appeared – the first human species to be given the genus name Homo, meaning ‘man’. Because of the geological record we know that continental drift – the movement of the plates on the earth’s crust – resulted in the current position of the continents. Further, we know that the geological record contains evidence of several extinctions, including the Cretaceous–Paleogene (K–Pg) extinction event that marks the end of the Cretaceous and the beginning of the Paleogene 65 million years ago, when more than half of the earth’s species were obliterated, including the dinosaurs. Scientists have two main hypotheses to possibly explain this extinction: an extra-terrestrial impact, such as a huge meteorite or an asteroid, or a massive bout of volcanism. A layer of rock rich in the metal iridium dated to the extinction event is found all over the world, on land and in the oceans. Iridium is rare on Earth but is commonly found in meteorites. Later in this chapter you will read the story of the Chicxulub crater, found during geophysical surveying over Mexico’s Yucatán Peninsula, and dated to 65 million years ago.

[Figure 1.4, timeline graphic: the geological periods from the Cambrian (beginning 542 Ma) through the Pleistocene/Holocene, annotated with milestones such as the first shellfish and corals, first fish, first land plants, first insects, first tetrapods, first reptiles, first mammal-like reptiles, first dinosaurs, first mammals, first birds, first flowering plants, last dinosaurs, first whales, first horses, first monkeys, first apes, first hominids and first modern humans.]

Since oil and gas are relatively low density compared to water, they may escape their source rock, seeping upwards through fractures in the water-filled rocks. This ‘migration’ takes the easiest route, and can cover considerable distances, both vertically and laterally, but where the hydrocarbons meet impermeable rocks (‘cap’ rocks), such as shale or salt, they are trapped in the underlying ‘reservoir’ rock and cannot migrate further. A reservoir rock is a porous sedimentary rock, such as sandstone, which is composed of sand grains about 0.1–1 mm in diameter, or limestone that was deposited as shell beds or reefs. Sand grains are like spheres (although the actual shape of a grain might differ significantly from a perfect sphere), and in the pores – the spaces between the grains – a lot of oil and gas can be stored. In limestones, there are pores between the shells and corals, in addition to fractures, both of which can store oil and gas.

Figure 1.5: A schematic cross-section of part of a petroleum-bearing basin identifying the key elements. Four phases of petroleum migration are considered during resource assessment: primary migration, or expulsion from the source rock; secondary migration from source to reservoir along a simple or complex carrier system; tertiary migration, which is migration to the surface; and re-migration between reservoirs. Geologic studies have indicated that migration can continue over hundreds of kilometres. The mechanisms and physics of primary migration are still poorly understood and are a subject of debate in the scientific community.

The pores in the reservoir rock were originally filled with salt water, which occurs naturally in subsurface sedimentary rocks. In the hydrocarbon trap, the fluids separate according to their density, with the gas moving to the top of the trap to form the free gas cap. The oil settles in the middle to form the oil reservoir, and water is forced to stay at the bottom of the reservoir. The interfaces between the fluids are mostly flat; therefore, in seismic sections, geophysicists search for ‘flat spots’, which are reflections of gas-oil and oil-water contacts (when all fluids are present), or gas-water contacts (when there is no oil), or oil-water contacts (when there is no gas).

Rocks are divided into igneous rocks, which, like granite and basalt, are the result of volcanic or plutonic processes; metamorphic rocks, such as gneiss, which have undergone a physical change due to extreme heat and pressure; and sedimentary rocks, composed of sediments which were deposited on the surface of the earth or the bottom of the sea. About 99% of the sedimentary rocks that make up the Earth’s crust are shales, sandstones and carbonates (limestones and dolomites).

Sediments consist either of particles such as sand grains formed from mechanical weathering debris, or seashells, or rock salt that precipitated from evaporating water, all of which are deposited over many years as sand along beaches, sand and mud on the seabed, or beds of seashells, for example. These ancient deposits, piled layer upon layer in sedimentary basins, today form the sedimentary rocks in the uppermost crust of the earth that we explore and drill, looking for areas where the geological conditions for commercial oil and gas deposits are met: i.e. the presence of source rock, reservoir rock, trap and seal.

Petroleum geoscience concerns the disciplines of geology and geophysics as applied to understanding the origin, distribution and properties of petroleum and petroleum-bearing rocks, and to the production of oil and gas. Over the years, geologists and geophysicists have steadily developed new exploration techniques to improve our understanding of petroleum geology and to increase the efficiency of exploration and the accuracy of well siting, to reduce costs, and to enhance the chance of success. An additional important task is to predict the likely recoverable volumes that may be present. When a field has been found in the rock formations, reservoir management, applying elements of geology, geophysics and petroleum engineering, is used to predict and manage the recovery of oil and gas.

By definition, geophysics is the scientific study of the whole Earth using methods of physics and mathematics, and so involves many disciplines, including seismic, electromagnetism, gravimetry, geomagnetism, geothermometry and tectonophysics. From measurements made mostly at its surface, we can ‘look into’ the Earth. These measurements are sensitive to the physical properties of the subsurface rocks and fluids and therefore we can describe the subsurface in physical terms such as acoustic velocities, density, electrical resistivity, magnetism, and so on.

Petroleum geophysics has a more restricted meaning as it applies methods of physics to understand how commercial accumulations of petroleum have developed and can be found within sedimentary basins, and how they can be produced.


1.2 Geophysical Exploration Methods… and Some History

Figure 1.6: Robert Mallet (1810–1881), John Clarence Karcher (1894–1978), Ludger Mintrop (1880–1956) and Carl August von Schmidt (1840–1929). (Credit: Gerhard Keppner)

Geophysicists commonly use four surface methods to explore the subsurface: seismics, electromagnetics, gravity and magnetics. Most of the money spent on exploration today is used for seismic acquisition. However, there has been an increased focus on electromagnetic methods over the past decade, and also on how to combine the four methods in more efficient ways.

1.2.1 Four Basic Methods

One of the first studies within seismology was undertaken by the ‘father’ of seismology, Robert Mallet, who in 1846 studied the damage after an earthquake in Italy by visual inspection and the use of photos. In 1851, he used dynamite explosions to measure subsurface sound propagation velocities. In 1888 August Schmidt estimated seismic velocities from plots showing seismic travel time versus distance. In the 1920s both John Clarence Karcher and Ludger Mintrop made significant contributions to seismology by using reflection and refraction seismic for exploration purposes, and by the end of that decade seismic exploration had become a common tool.

The first magnetometer (an instrument used to measure the variation in the Earth’s magnetic field as a function of position) was developed in 1833 by Carl Friedrich Gauss, who is mostly known for his significant contributions within mathematics. Together with Wilhelm Weber, he spent time studying and measuring the Earth’s magnetic field. The unit for the magnetic field was therefore named the ‘gauss’ to honour his achievements. In 1960 the international committee for units (SI) incorporated the unit into a new metric system, and the unit for magnetic field strength was changed to the tesla (1 tesla = 10,000 gauss).


This was in honour of Nikola Tesla (1856–1943), a Serbian physicist who worked for Thomas Edison and lived in the US for most of his life, making significant contributions to the development of alternating currents (AC). He patented an AC induction motor – and today Tesla is possibly best known as a brand of electric car. Magnetometry is commonly used both for mineral and hydrocarbon exploration. In physics, the weber (Wb) is the SI unit of magnetic flux. A flux density of one Wb/m² is one tesla.

Gravity is a force of nature that we experience every day. It is produced by all matter in the universe and attracts all bodies of matter, regardless of type. The principle of the gravitational force of attraction F between two mass bodies (m and M) separated by a distance r was introduced by Isaac Newton (1642–1726) in his Principia from 1687:

F = GmM/r²

Here, G = 6.673 × 10⁻¹¹ m³ kg⁻¹ s⁻² is the universal gravity constant. However, what we today refer to as Newton’s gravitational law has a somewhat turbulent history. Robert Hooke (1635–1703) claimed that he had suggested the idea of the gravitational behaviour in 1660. Historians are still discussing this issue, and a common understanding seems to be that Newton’s contribution of demonstrating the accuracy of the law is the important one. At the time that Newton was working on his masterpiece, the Principia, several scientists had suggested a 1/r² law, and hence Newton acknowledges Wren, Hooke and Halley in his famous book.

Today, physicists are struggling to unify the four forces known in the universe: the strong force (acting between quarks), the weak force (responsible for natural radioactivity), the electromagnetic force (acting between electrically charged particles) and gravity (acting between mass particles). It is evident that gravity is the weakest force, but it is also the only force that still lacks the interaction particle associated with it.
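A quick numerical illustration of just how weak gravity is: evaluating Newton’s law for everyday masses gives forces far below anything we can feel. The masses and distances in this Python sketch are our own illustrative choices.

```python
# Newton's law of gravitation: F = G * m * M / r**2
G = 6.673e-11  # universal gravity constant, m^3 kg^-1 s^-2

def gravitational_force(m_kg, M_kg, r_m):
    """Attractive force in newtons between two point masses."""
    return G * m_kg * M_kg / r_m**2

# Two 70 kg people one metre apart: about 3e-7 N, far too small to notice.
print(gravitational_force(70, 70, 1.0))            # ~3.3e-7 N

# The same 70 kg person attracted by the whole Earth
# (M ~ 5.97e24 kg, r ~ 6,371 km): about 690 N, i.e. normal body weight.
print(gravitational_force(70, 5.97e24, 6.371e6))   # ~6.9e2 N
```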

Figure 1.7: Isaac Newton (1642–1726), Carl Friedrich Gauss (1777–1855), Wilhelm Eduard Weber (1804–1891) and Nikola Tesla (1856–1943).

Lasse Amundsen

Figure 1.9: Henry Cavendish (1731–1810), Lucien J. B. LaCoste (1908–1995) and Arnold Romberg (1882–1974).

Figure 1.8: Newton’s law of gravitation gives the force of gravity between two masses m and M separated by a distance r, measured between their centres.

For the time being these particles are denoted as gravitons, but they have never been established nor seen in a controlled experiment. A major step forward was, however, achieved in 2016 when physicists were able to measure gravitational waves for the first time. The other interaction particles for the three remaining forces are known as gluons (strong interactions), W and Z⁰ (weak interactions) and photons (electromagnetism).

In 1797 Henry Cavendish (1731–1810) performed an experiment in which he measured the average density of the Earth, which he found to be 5,448 kg/m³. This corresponds to G = 6.74 × 10⁻¹¹ m³ kg⁻¹ s⁻², a deviation of only 1% from today’s universal constant. Cavendish did not, however, calculate the value of G from his experiment.

In the century after this initial, ground-breaking development in our knowledge of gravity, it was realised that the acceleration of gravity g at the Earth’s surface could be measured using a pendulum of period T and length L:

g = 4π²L/T²

Various types of pendulum equipment were developed and used from 1890 to 1930, until 1934, when Lucien LaCoste (1908–1995) suggested using a spring system instead of a pendulum to measure g. Together with his physics teacher, Arnold Romberg (1882–1974), he designed several high precision gravimeters in the next decade.

It was further realised that the value of g varied not only as a function of latitude, but also more locally, and that these variations might be used to map subsurface variations in geology. To appreciate this, we can combine Newton’s famous law of universal gravitation with his second law of motion, F = ma, where a is acceleration, which is valid everywhere. In Figure 1.8, consider the sphere to represent the earth with mass M and radius R. Place the mass m on the surface of the earth, where it represents the gravimeter with mass m. Setting the two laws equal, the mass m miraculously cancels, and a = g is the gravitational acceleration on the Earth’s surface, related to its mass and radius as:

g = GM/R²

If you now put the mass and radius of the earth into the equation, g comes out to 9.8 m/s². If the Earth were a true sphere of uniform density, then g would be constant anywhere on its surface. However, g varies from place to place due to the effects of latitude, altitude, the Earth’s rotation, topography and geology. The Earth is compressed at the poles, where the radius is 21 km less than the equatorial radius. By removing the non-geological variations of measured gravity, its residual variation can be interpreted as density variations of the subsurface rocks (see Section 1.5, on Gravimetry and Magnetometry). Today, gravimeters are used on land, on ships and in aeroplanes to measure gravity, for mineral and petroleum prospecting, seismology, geodesy and metrology.
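Both relations above are easy to check numerically. In the sketch below, G is the value quoted in the text, while the Earth’s mass and the 1 m pendulum length are round figures we have inserted for illustration.

```python
import math

G = 6.673e-11        # m^3 kg^-1 s^-2 (value quoted in the text)
M_EARTH = 5.97e24    # kg, approximate mass of the Earth
R_EARTH = 6.371e6    # m, mean radius of the Earth

# Surface gravity from g = G*M/R**2
g = G * M_EARTH / R_EARTH**2
print(f"g = {g:.2f} m/s^2")          # ~9.81 m/s^2, as stated in the text

# Pendulum relation g = 4*pi^2*L/T^2: a 1 m pendulum swings with a
# period of about two seconds, so timing T and measuring L gives g.
L = 1.0
T = 2 * math.pi * math.sqrt(L / g)
print(f"T = {T:.2f} s")              # ~2.0 s
print(f"g from (T, L): {4 * math.pi**2 * L / T**2:.2f} m/s^2")
```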


In 1912, Conrad Schlumberger (1878–1936) (Figure 1.10) mapped electric equipotential curves (curves having the same potential value, similar to height curves, or contours, on a topographic map) to detect metal ores. He also showed that this method could be used to map the shallow subsurface structure. Together with his brother, Marcel, he founded his first company in 1919, and they went on to found the Schlumberger Company in 1926, under the name ‘Société de Prospection Électrique’.

Figure 1.10: In 1912 Conrad Schlumberger, using very basic equipment, recorded the first map of equipotential curves at his Normandy estate near Caen. (Image courtesy of Schlumberger)

In 1915 Frank Wenner published a paper entitled ‘A method of measuring earth resistivity’, in which he suggested using four electrodes: two to generate an electric current and two to measure the potential difference, from which the earth’s resistivity follows (a short calculation sketch is given below). Although the principle behind Schlumberger and Wenner’s methods is the same, they differ with respect to how the experiment is carried out. Traditionally, these resistivity methods have been used for measuring soil resistivity, as they still are. However, the focus on mapping deeper earth resistivity for hydrocarbon exploration has increased (see Section 1.4, EM Methods).

Seismic is a technology-dependent business. The history of exploration geophysics is a mixture of incremental improvements combined with major breakthroughs. An example of the former is the gradual increase seen in computer power, since computing is obviously a key enabler for the changes that have taken place in the seismic industry. Modern seismic surveys can cover thousands of square kilometres of the sea surface, take months to complete, and produce petabytes of data that must be processed to determine if it will be profitable to drill an exploration well. Oil and gas companies are participants in the exascale race – towards computers capable of executing 1 quintillion (10¹⁸) operations per second. The U.S., Japan, China and Europe have all established programmes to enable exascale capabilities by around 2020, and the seismic industry is the fastest follower. (See Chapter 7: Supercomputers for Beginners.)
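Here is the calculation sketch promised above for the Wenner arrangement. With four equally spaced electrodes in a line, current I injected through the outer pair and voltage ΔV measured across the inner pair, the standard relation for the apparent resistivity of a homogeneous half-space is ρa = 2πa·ΔV/I, where a is the electrode spacing. The numerical values in this Python sketch are invented purely for illustration.

```python
import math

def wenner_apparent_resistivity(spacing_m, voltage_v, current_a):
    """Apparent resistivity (ohm-m) of a Wenner array: four collinear,
    equally spaced electrodes, current through the outer pair and
    potential difference across the inner pair: rho_a = 2*pi*a*dV/I."""
    return 2.0 * math.pi * spacing_m * voltage_v / current_a

# Example: 10 m electrode spacing, 0.5 A injected, 0.8 V measured
# -> apparent resistivity of ~100 ohm-m, a plausible soil value.
print(wenner_apparent_resistivity(10.0, 0.8, 0.5))  # ~100.5 ohm-m
```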

It is worth mentioning that many of the seismic data processing and imaging techniques that are heavily used today were actually developed in the 1970s and 1980s, but, due to the lack of computing power at that time, the techniques could not be applied at as large a scale as they are used today. Examples of more sudden step-change developments are the transition from 2D to 3D seismic in the 1980s, the introduction of four-component (4C) and time-lapse (4D) seismic, and the controlled source electromagnetic (CSEM) method; these developments are discussed later.

One purpose of this chapter is to show that the search for oil and gas has come a long way since its early days. Those early days were nevertheless fascinating, as oil explorers travelled on foot or horseback mapping large areas, searching for the tell-tale oil seeps from rock formations that might hold the promise of hydrocarbons. The early drillers located their wells on such seeps.

Figure 1.11: Rudesindo Cantarell Jiménez (1914–1997) discovered the world’s second-largest oil field, named Cantarell in his honour. Since the Mexican Constitution grants the state the ownership rights to all natural resources, he had no legal claim to it. He died ‘penniless’ – but rich with friends, since his discovery transformed the small fishing city Ciudad del Carmen, nicknamed ‘The Pearl of the Gulf’, into an oil boomtown.

Earthmoves Ltd. GEO ExPro Vol. 12, No. 2

Figure 1.12: An artist’s illustration of an asteroid hitting Earth. Over billions of years the Earth has been bombarded by comets, asteroids and meteorites. The Chicxulub crater in the Yucatán peninsula, Mexico, is today buried partly underneath the peninsula and partly beneath the adjacent sea. The impact is believed to have caused the extinction of more than 50% of species on Earth, including dinosaurs.



Next, as the understanding of how oil and gas is trapped beneath the surface increased, geologists were hired to project the sedimentary rock layers outcropping on the surface of the ground back into the subsurface, to map traps that could hold oil. Later, the seismic method was developed to detect these traps in the subsurface.

1.2.2 The Cantarell Story

Oil fields have even sometimes been found by accident. Out in his little fishing boat in the Campeche Sound one day in 1961, roughly 100 km off the small Mexican town of Ciudad del Carmen at the western edge of the Yucatán peninsula, the fisherman Rudesindo Cantarell Jiménez saw something shiny floating on the sea surface: oil stains. Initially, he believed the slicks came from leaks spilling from ships passing through the area, but the oil continued reappearing. What he had discovered were natural seeps rising to the surface through 40m of sea water from the oil-filled formations beneath. In 1968, when he visited the city of Coatzacoalcos, Veracruz, to sell huachinango (red snapper), he told the state-owned oil company Petróleos Mexicanos, or Pemex, about his findings. They did not react immediately, but three years later, in 1971, samples were taken and analysed, leading to the discovery of the supergiant oilfield Cantarell, named after the finder. Pemex began producing oil at Cantarell in 1979, and at its peak, in 2004, it produced 2.1 MMbopd.

But the story does not end here. Following the Cantarell discovery, Pemex started an oil exploration programme, including geophysical surveys in south-eastern Mexico, which yielded surprising results. In 1978 a consultant geophysicist with Pemex, Glen Penfield, set out to acquire an airborne magnetic survey of the Gulf of Mexico north of the Yucatán peninsula. The magnetometer measures the magnetic field of the rocks in order to map out their composition beneath the surface and thus determine the likelihood of finding oil. In the data, however, he observed a huge underwater arc with extraordinary symmetry in a ring 70 km across. This observation was inconsistent with what he knew about the region’s geology. Feeling curious, he looked into old gravity data acquired in the 1960s, where he found another arc on the peninsula itself. Comparing the magnetic and gravity maps, he saw that the separate arcs neatly formed a circle, 180 km wide, centred near the village of Chicxulub (pronounced cheek-shoe-lube). Together with Pemex geologist Antonio Camargo-Zanoguera, he concluded that the huge ring, half on land, half under the Gulf of Mexico, must be an impact crater. They felt certain that the ring shape had been created by a cataclysmic event in geological history, such as a meteorite hitting the earth.

The two Pemex geoscientists were allowed to present their findings at the 1981 conference of the Society of Exploration Geophysicists. Their talk attracted scant attention, and the ‘exciting news’ of an impact crater that could be the clue to the great mass extinction of the dinosaurs at the end of the Cretaceous received little notice. Although they had a lot of geophysical data to present, they had no rock cores or other physical evidence of a meteor impact, so they reluctantly went back to their Pemex work.

However, a Houston Chronicle reporter who was present at the convention put a story on the front page of the paper on New Year’s Eve, 1981, and he kept telling the story to whoever might listen. Finally, one day in 1990, at a science meeting in Houston, a graduate geology student, Alan Hildebrand, did listen. He had published on earth impact theories and was searching for candidate craters, so he contacted Penfield, who was able to find drill samples from old Pemex wells in the area. Hildebrand analysed the samples, which clearly showed shock-metamorphic materials, providing the first proof that Chicxulub crater in the Yucatán peninsula was the impact site of a 10–15 km diameter meteorite which hit Earth 65 million years ago, at the end of the Mesozoic Era.

We now know that when the huge Chicxulub meteoroid collided with the earth, a massive crater 300 km wide and many kilometres deep was formed. The explosion was equal to that caused by 100 million megatons of TNT, producing shock waves with a pressure of 660 gigapascals – a pressure greater than that found in the earth’s centre. At that time, the Yucatán peninsula was on the continental shelf, which consisted mainly of coral reefs. The bedrock fractured, the uppermost rocks were instantly vaporised, and the underlying rock rapidly melted. The shock of the meteorite caused a destabilisation of the platform edge in the form of an underwater landslide, and thick carbonate breccias were thrown into deeper waters, followed by several rebounding mega-tsunami deposits, the tsunami waves being up to 100m high. Debris was thrown high into the atmosphere, with serious global environmental consequences: periods of darkness, low temperatures and acid rains. An iridium-rich ash was deposited on top of the breccia. Today, the crater is buried beneath up to 1 km of carbonate sediments.

Most scientists believe that the Chicxulub impact caused or contributed to the extinction of the dinosaurs, while oil explorers know that the meteorite was responsible for forming the Cantarell field’s complex reservoirs from the carbonate breccia of Cretaceous age that was rubble from the impact. The seal of the reservoir is the iridium-rich ash, and the main source rocks in the area are the organic-rich calcareous shales of Tithonian age (the youngest age of the Late Jurassic epoch).

Figure 1.13a: Gravity map revealing the Chicxulub crater, which formed 65 million years ago in one of the largest collisions in the inner solar system to have occurred in the last four billion years. The crater is believed to be a multi-ring basin with an outer ring about 300 km in diameter (the black circle outlines the ~180 km diameter crater). The colours represent variations in the magnitude of the gravity field at sea level. Positive gravity anomalies (blue is maximum) indicate dense rocks created by melting of the crust, forming a thick pool of impact melt that eventually solidified into a dense mass of igneous rock. Negative gravity anomalies (lower than normal gravity; red) indicate that the near-surface crustal rocks, created by impact-generated shattering of the rock, have relatively low densities. The Chicxulub basin appears as a circular region in which gravity values are usually lower than the regional values. Observe the ring-like variations in the gravity field. The irregular black line marks the shoreline of the Yucatán peninsula, and the straight lines mark province borders. The original petroleum exploration borehole locations (C1, S1, and Y6) are shown where a drill core was recovered. That core was sufficient to prove the crater had an impact origin. A scientific borehole, Yax-1, was drilled in 2001–2002 and produced a continuous rock core sequence through the impact melt-bearing rocks in the crater. A borehole was drilled at sea (Chicx-03A) in 2016. (Credit: David A. Kring)


David A. Kring, NASA/University of Arizona Space Imagery Center

Figure 1.13b: Schematic diagram of how the Chicxulub impact crater may be structured beneath the overlying sediments, with impact breccias overlying impact melt and impact-fractured rock. The Chicxulub Crater has a peak-ring structure. This means there is a circular uplifted mountainous ring within the crater. As shown schematically here, that peak ring (beige-colored peaks) was covered by impactites (purple). The peak ring also bounded the central impact melt sheet (red) and an overlying layer of impact breccias (purple). Additional impact melt and breccias occurred between the peak ring and crater rim. Impact breccias were also ejected from the crater. The mantle beneath the crater (brown) was uplifted about one kilometer. The entire crater was subsequently buried by Tertiary sediments (grey). In 2001–2002, the Chicxulub Scientific Drilling Project drilled through those sediments and the impact sequence in the trough between the peak ring and zone of slumped blocks (called a modification zone) in the outer portion of the crater. That borehole is called Yaxcopoil-1.

Figure 1.14: Chicxulub impact breccia: the discovery of shocked quartz, shocked feldspar and impact melts in the Yucatán-6 exploration borehole from the interior of the Chicxulub structure proved it was an impact crater and that it was produced at the Cretaceous–Paleogene (K-Pg) boundary. This is a photograph of one of the two samples behind that discovery. It is a polymict breccia recovered from a depth of ~1.2 km in the Yucatán-6 borehole. Beneath the polymict breccia is a melt rock sample that was recovered from a depth of ~1.3 km. The discovery was initially reported by David A. Kring, Alan R. Hildebrand, and William V. Boynton at the 1991 Lunar and Planetary Science Conference. Additional details about this sample were subsequently published by Hildebrand et al. (Geology 19, 867–871, 1991) and Kring and Boynton (Nature 358, 141–144, 1992).

David A. Kring


1.3 Seismic Methods

Seismic is the most commonly used geophysical method in the oil and gas industry. Using seismic methods, we can make images of layers and structures down to depths of many thousands of metres, with a resolution of tens of metres. In addition to telling us about structures, seismic carries information about stratigraphy, subsurface rock properties, velocity, stress and reservoir fluid changes in time and space. It is important to discriminate between seismology and seismic: the former involves natural sources while the latter uses man-made sources to produce acoustic or elastic waves.

Seismic methods evolved from the classical earthquake research that started in the early 1900s. During the First World War they were used to locate artillery by detecting the seismic waves which they created. The use of seismic data for oil exploration, however, began in the 1930s, when the primary concern was to determine the structure of the subsurface. The basic principle is very simple: low-frequency acoustic (seismic) waves are generated at the surface by high-energy man-made sources. The waves propagate through rocks, and refract and reflect at any layer boundaries which they meet, before they return to the surface of the earth, where they are recorded by receivers resembling microphones. The time it takes for these ‘echoes’ to return to the surface is then analysed to determine the internal structure of the subsurface.

A major step forward was the introduction of computers in the 1960s, which enabled geophysicists to process and image large amounts of data. Until the 1970–80s petroleum-related seismic was primarily used in oil and gas exploration, but by the end of the 1980s it had become more and more common to actively use seismic data in the production phase as well.

Around 1960 it was usual to collect seismic data along two-dimensional (2D) profiles, both on land and at sea, but in the 1970s the early ideas on how to obtain a 3D subsurface image began to emerge, gradually evolving into what we now know as 3D seismic. This proved to be another significant development, as both exploration and production found 3D seismic data to be a major game changer, and today it is the key tool in understanding the subsurface. In addition to providing structural images, 3D surveys can help provide maps indicating reservoir quality and the distribution of hydrocarbons.

Although 3D seismic data has given geoscientists a great tool to map reservoirs and identify whether they are filled with oil, gas or merely water, it is not a silver bullet, even when combined with information from other geophysical tools and all available geological information. Exploration remains a complex business and success is never certain, which is why oil and gas companies are continuously investing in research and technology development – from supercomputers to the latest advanced geophysical techniques on offer. The industry integrates a broad array of geoscience disciplines and geo-technologies to reduce exploration risk – but drilling remains the only sure way to find out whether oil or gas is present down there.
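For a single flat reflector, the echo principle described above reduces to simple arithmetic: a reflection recorded after a two-way travel time t in material of average velocity v comes from a depth d = vt/2. The picks and velocities in this Python sketch are illustrative numbers of our own, not data from the book.

```python
# Convert two-way reflection travel times to reflector depths, d = v*t/2.
# Using a single average velocity per reflector is a crude first
# approximation; real seismic processing builds full velocity models.

def reflector_depth_m(two_way_time_s, avg_velocity_m_per_s):
    """Depth to a flat reflector from its two-way travel time."""
    return avg_velocity_m_per_s * two_way_time_s / 2.0

# Illustrative picks: (two-way time in s, average velocity in m/s)
picks = [
    (0.40, 1500.0),  # sea floor beneath ~300 m of water
    (2.00, 2500.0),  # base of a shallow sediment package
    (3.20, 3000.0),  # deeper target horizon
]

for t, v in picks:
    d = reflector_depth_m(t, v)
    print(f"t = {t:.2f} s, v = {v:.0f} m/s  ->  depth ~ {d:.0f} m")
```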

1.3.1  3D Seismic Examples
Geomorphology is the scientific study of landforms and the processes that shape them over time. Geomorphologists seek to understand why landscapes look the way they do, to understand their history and dynamics and to predict future changes. With the advent of 3D seismic, seismic geomorphologists were given the opportunity to study the 3D landforms beneath the sea surface. 3D seismic, combined with new image technologies, enabled geologists to see and understand in more geological detail than ever before how landscapes had evolved through time, and to identify lithologies, stratigraphic architecture and geological processes. Obviously, this had significant commercial value, since geoscientists could better predict the spatial and temporal distribution of subsurface horizons that could potentially host and seal hydrocarbons, as well as gain an idea of compartmentalisation. Let us look at three examples of the use of 3D seismic.

1.3.1.1  Underworld Middle Jurassic Delta
Some 200 million years ago, in the late Triassic period, in part of the area that is the present-day northern North Sea, a rift valley called the Viking Graben was formed as a result of major tectonic activity. During this time, the eastern coast of present-day North America (Laurentia) and Southern Europe (Baltica) were still joined as one large landmass and the Atlantic Ocean did not yet exist. Ignoring the minor detail that Homo sapiens and cars had not yet appeared on Earth, you could have jumped into your car in the morning in North America and driven to Southern Europe for an afternoon tea. Dinosaurs occupied the land, and wide rivers ran across it.

In the Middle Jurassic, the great Brent river delta encroached upon today’s northern North Sea area, laying down in the Viking rift valley the thick sandstones with occasional mudstone lenses today known as the Brent Group. The continental shelf, consisting of shallow seas and coastal and delta plains, was surrounded by swamp vegetation and marsh. As time passed, the whole area subsided beneath the sea and thick deposits of mud settled on the sea bed. As the Jurassic rocks were buried, sand became sandstone and the muds became clay deposits.

Millions of years passed. The continents moved into their present positions. Humans evolved and settled in all parts of the earth. Oil explorers acquired seismic data and found large numbers of oil and gas fields in the sedimentary rocks that had filled the Viking Graben, including, for example, the main reservoirs of the Gullfaks, Oseberg and Statfjord fields, all of which were found in the Brent Group. There are also large reservoirs in rocks formed from the sand that was deposited on alluvial plains during the Triassic Period (e.g. the Snorre field), in shallow seas in the Late Jurassic (the Troll field) and as subsea fans during the Palaeogene Period (the Balder field).

The Brent Group consists of five units, known as the Broom, Rannoch, Etive, Ness and Tarbert Formations, the initial letters combining to form the name of the group. The Broom Formation is only encountered on the British flank of the North Sea, and is replaced by the contemporaneous Oseberg Formation on the Norwegian side. The seismic display in Figure 1.16 shows what the ancient Brent delta looks like on seismic



Figure 1.15: Seismic waves generated by natural or man-made sources are ubiquitous in the Earth’s crust. Recording this ‘voice’ of nature by employing arrays of hydrophones in the sea or geophones on the ground gives information about the geology. Seismic listening holds potential for oil and gas prospecting. The million dollar question is: do oil and gas reservoirs produce a unique signature or a kind of ‘music’ that can be measured to provide direct information about their locations and characteristics? Illustration by Inge Grødum, made for Geco around 1980.


Figure 1.16a: Amplitude patterns, topography and geometries in seismic data showing the mid-Jurassic transition between the Etive and Ness Formations of the Brent Group beneath the Troll field can be compared to the scale and geometries of modern wave-dominated coastal systems with beach ridge complexes (‘ocean’, beach system, floodplain/river system and marsh, at a scale of about 1 km). The Brent Group comprises the most significant hydrocarbon-bearing reservoir in the northern North Sea, and is also one of the most important hydrocarbon reservoir systems from a global perspective. (Statoil.)

acquired over the giant Troll gas field, although note that in Troll the hydrocarbons are found in the Jurassic sandstones of the Viking Group, which were deposited after the Brent Group. A modern analogue to the ancient Brent delta as mapped from seismic is the Paraiba do Sul delta in Brazil, which features a wave-dominated shoreline supplied with sediments from two rivers that cross the low-lying coastal plain. Sandy beach ridges are oriented roughly parallel to the shoreline, with river systems and marsh further inland. The modern beach ridge system shows very similar geological features and striking geometrical similarity to the forms mapped from the Brent seismic. Such modern analogues can be used to understand the reservoir properties and quality which could be expected in the ancient delta sediments we map from seismic in the search for oil.

Figure 1.16b: The modern analogue to the Brent delta is the Paraiba do Sul delta on the northern coast of Rio de Janeiro State in Brazil. The river is the main fluvial system entering the Campos Sedimentary Basin, where it develops a large coastal plain covering 30,000 km² as a result of deltaic progradation. Extensive areas of beach ridges are present on both sides of the river mouth, attesting to the succession of accretionary beaches during the course of coastline migration. The river channel width varies from about 500m to 1 km, which corresponds to the width of the channels observed in the Brent seismic. (Photo: Eiji Matsumoto.)



1.3.1.2  Unveiling the Secrets of Mysterious Johan Sverdrup
Late in 2007 Lundin Petroleum made an oil discovery in the Luno prospect, today known as the Edvard Grieg field, located on the Utsira High in the central North Sea. The Utsira High is a large crystalline basement horst, flanked by the South Viking Graben to the west and the Stord Basin to the east. The present structural configuration has been inherited from extensional tectonism which occurred in the late Palaeozoic, 250–200 million years ago. It is believed that deep weathering of the granitic landscape during the late Triassic, around 200 Ma, was followed by a marine transgression and deposition of sedimentary strata. A detailed and comprehensive exploration study was initiated in the area around this discovery, as a result of which, in September 2010, Lundin drilled an exciting new well, 16/2-6, testing the Avaldsnes prospect, which also proved a great success.

Figure 1.17: The two structural highs, Avaldsnes and Aldous Major South, on Block 16/2 in a water depth of 115m, about 140 km west of Stavanger, Norway, were explored as two separate prospects with two different wells, 16/2-6 and 16/2-8, respectively. Well 16/2-8 met a 65m oil column, whereas well 16/2-6 proved a 17m oil column. The wells proved to belong to the same accumulation, with the same oil-water contact, pressure regime, oil type and reservoir, and the field was renamed Johan Sverdrup. The reservoir depth is about 1,900m. Aldous Major North is separated from the southern part by a major fault and does not contain a commercial discovery. (Well number 16/2-6 means that the well was drilled in Norwegian quadrant 16. Each quadrant is divided into 12 blocks, and this particular well was drilled in block 2 as the 6th well on that block.) (Statoil.)

In the summer of 2011 Statoil announced that their Aldous Major South 16/2-8 well had found oil on a nearby structural high, proving that the Lundin and Statoil wells had drilled into the same oil accumulation, with a saddle between the two highs. Since these discoveries constituted one single field, it was renamed Johan Sverdrup after the father of Norwegian parliamentarism.

By 2015 more than 30 wells had been drilled to appraise the size and reservoir quality of the field. In August 2015, the plan for development and production was approved by the Norwegian government. Statoil is the operator, and the partners are Lundin Norway, Petoro, Det Norske and Maersk Oil. The Johan Sverdrup field is ranked as one of the top five discoveries ever made in Norway, with estimated recoverable reserves of between 1.7 and 3 Bbo.

Figure 1.18: Johan Sverdrup geosection between selected wells, with different geological formations. The oil resources are located in a reservoir that consists of interconnected Late Triassic to Early Cretaceous sandstones. The majority of the resources occur in the intra-Draupne sandstones of the Viking Group, deposited in the Late Jurassic. In the eastern part of the field, the Viking Group contains Draupne shale, with the remaining oil resources found in sandstones in the Statfjord and Vestland Groups. In Norse mythology, Draupne is a gold ring owned by the Viking God Odin, which has the ability to multiply itself; every ninth night, eight new rings ‘drip’ from Draupne, each one of the same size and weight as the original. In this way, Draupne is an endless source of richness. The name is considered particularly appropriate in view of the Draupne Formation’s role as a prolific hydrocarbon source in the northern North Sea. In both age and depositional environment the Draupne Formation is equivalent to the UK Kimmeridge Clay Formation. To the south-east, the location of well 16/3-2, drilled in 1976 by Norsk Hydro some hundred metres off the oil column, is shown. Observe the scale of the geosections. The upper one has no vertical exaggeration, while the lower one has vertical exaggeration 33 times. Sequences of sedimentary rocks are subdivided on the basis of their lithology. Going from smaller to larger in scale, the main units recognised are bed, member, formation and group. For instance, the Viking Group is subdivided into five formations: the Heather, Draupne, Krossfjord, Fensfjord and Sognefjord Formations. The Draupne sandstone consists of unconsolidated very coarse to medium grained sands, with very few sedimentary structures. (Courtesy Statoil and licence partners Lundin Petroleum, Maersk Oil, Petoro, and Det Norske Oljeselskap.)


Figure 1.20: An oily core sample from the Johan Sverdrup field. What sets the reservoir on Johan Sverdrup apart from many others is the coarse grain size, which results in large pores and exceptional flow properties. (Anette Westgard/Statoil.)


But how could such a giant field in the mature part of the North Sea remain undiscovered for so long, especially considering it lies in the area of the first ever exploration licence awarded on the Norwegian Continental Shelf (NCS)? It evaded discovery until 2010, even then coming as a big surprise to the majority of the oil industry. Four blocks on the Utsira High, including Block 16/2, which contains Johan Sverdrup, were included in the first exploration licence, PL 001, awarded to Esso Exploration and Production Norway in 1965. Exxon was interested in this area since its geologists found it natural to start searching for oil and gas where hydrocarbons could have migrated from deeper source rocks up towards a ‘high’, before eventually being trapped in structural and stratigraphic traps. A number of fields have been discovered in this region, including Balder, Sleipner, Grane, Jotun and Ringhorne.

Why did Exxon, and subsequently other oil companies, not recognise the huge potential of the Utsira High on the spot where Johan Sverdrup is located? In 1976, Norsk Hydro and Elf drilled their first well on the NCS, 16/3-2 in PL 007, only 400m off the Johan Sverdrup structure, targeting both Palaeocene and Jurassic sandstone reservoirs of good quality in a presumed stratigraphic trap – but without hitting oil. The web page of the Norwegian Petroleum Directorate explains, with reference to this well: “There were no sands in the Palaeocene and the Cretaceous chalk was tight. A 20m-thick immature Draupne shale was encountered at 1,955m. The well then encountered a 31m-thick late Jurassic sandstone from 1,975m to 2,006m. Below this sandstone was a 9m-thick layer of weathered basement overlying the solid granite. The well proved to be water wet all through, and no shows were recorded. Three cores were cut. Core 1 gave no recovery, while core 2 recovered 3.5m core from the interval 1,998m to 2,000.6m in the Late Jurassic sand. Core 3 was cut from 2,017.5m to 2,019m in basement rock… The well was permanently abandoned on 8 March 1976 as a dry well.”

Two simple reasons lie behind this apparent oversight. Firstly, the Johan Sverdrup field is located almost 40 km from the ‘kitchen’ of the oil – the closest mature source rocks are in the Viking Graben – from which it is separated by the solid granite of the Utsira basement ridge, seen as an oil migration barrier. Most geologists therefore did not seriously believe that oil could have migrated from the deep source rocks into sandstone reservoirs of Jurassic age on the basement high.


Figure 1.19: Cores from the Edvard Grieg and Johan Sverdrup fields. Wells are cored through the reservoir section for direct analysis of reservoir properties, including age dating, log calibrations, and analysis of the depositional system of the reservoir sands. The upper right figure displays three possible migration routes for oil. The closest mature source rocks are located in the Viking Graben. The green arrows represent possible migration of oil from the Viking Graben around the Utsira High on the north or south side or through weathered basement, via Edvard Grieg South. A production licence (PL) is a concession which grants exclusive rights to conduct exploration drilling and production of oil and gas within a delimited area on the NCS. (Rønnevik et al., 2017. Lundin.)

The oil either had to take the long journey around the ridge, where it potentially could migrate into reservoir structures, or find a shortcut through the granite. Both routes were considered extraordinarily challenging journeys for migrating oil.

Secondly, seismic data quality in the area had been limited, as shown in the example from the Luno discovery in Figure 1.21. It is very hard to identify the structure above the basement, and also the top of the reservoir. On new broadband seismic data we clearly see a triangle-shaped sedimentary infill on top of the basement; the Edvard Grieg field is located in this triangle. Let us discuss this in more detail.

The Edvard Grieg reservoir is a Triassic aeolian and alluvial sandstone and conglomerate reservoir. In Edvard Grieg South, Lundin discovered oil in the weathered basement rock. The Edvard Grieg discovery opened up an undersaturated oil system on the flank of a saturated, older hydrocarbon system proven on the crestal part of the southern Utsira High. The discovery of the relatively recent migration of the undersaturated oil system also increased exploration focus on the eastern side of the Utsira High.


Figure 1.21: Old (a) and newer (b, c, d) versions of 3D seismic through the discovery well 16/1-8 for the Luno prospect: 1995 conventional streamer data (survey ST9511), 2008 OBC data (LN0803), 2009 GeoStreamer data (LN0902) and 2012 BroadSeis/BroadSource data (LN12M02), with amplitude spectra (dB versus frequency, 0–250 Hz) shown for each section across the Luno Graben. Notice the improvements in the definition of the Luno Graben with the new kinds of seismic, discussed further in Chapter 5: Broadband Seismic Technology and Beyond (Rønnevik et al., 2017).

In addition to the limited quality of the seismic data, there were significant concerns related to late migration of oil into both the Edvard Grieg and Johan Sverdrup fields from the known mature source rock located in the Viking Graben. Lundin re-examined the results from the 1976 well 16/3-2, which showed excellent quality Jurassic sandstones. Their analysis was important when the Avaldsnes prospect was generated by the Lundin team. The three paths shown by green arrows in Figure 1.19 represent possible migration routes of oil from the Viking Graben around the Utsira High on the north or south side, or through weathered basement via Edvard Grieg South.

Lundin’s 16/2-6 well, drilled into the Avaldsnes prospect in 2010, confirmed high quality Upper Jurassic sand. A full-scale production test confirmed world-class flow properties and unconstrained flow of oil from a radius of at least three kilometres around the well bore. The dawn of a new giant oil discovery was in sight.

Figure 1.22 shows a 3D seismic line covering the Edvard Grieg and Johan Sverdrup fields. Intensive work and testing of various seismic acquisition methods and data processing techniques have revealed further secrets in the Utsira High area.


The chalk layer above the reservoir rocks (coloured light blue in the figure) thins towards the west, and is relatively easy to interpret on the seismic, since the acoustic impedance (density times velocity) contrast between the chalk and the surrounding sediments is strong. On the other hand, this strong impedance contrast represents a challenge in seismic data processing and imaging, since multiple reflections and other ‘shadowing’ effects make it hard to identify the top reservoir seismic reflection event. The high velocity in the chalk makes it a ‘blocking layer’, preventing much of the acoustic signal from reaching the layers below; hence the seismic image below the chalk is weaker and contains more noise. The relatively large uncertainty in the recoverable volume estimate is also related to the challenges of precise, high-resolution seismic imaging. The reservoir depth is approximately 1,900m, the porosity of the reservoir rock is 28%, and the permeability is extremely good (10–40 Darcy). Future production and analysis will show whether it is possible and economically feasible to produce oil from the weathered basement in this ‘golden’ area for Norwegian oil and gas exploration.
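The ‘blocking’ effect of such a strong impedance contrast can be illustrated with the normal-incidence reflection coefficient. In this sketch (Python) the impedance values are assumed, order-of-magnitude numbers, not measured Utsira High properties:

```python
def reflection_coefficient(z1, z2):
    """Normal-incidence reflection coefficient between two layers with
    acoustic impedances z1 (above) and z2 (below); z = velocity * density."""
    return (z2 - z1) / (z2 + z1)

# Assumed values: soft sediments over a high-velocity chalk layer
z_sediment = 2200.0 * 2100.0   # ~4.6e6 kg/(m^2 s)
z_chalk = 4500.0 * 2500.0      # ~1.1e7 kg/(m^2 s)
r = reflection_coefficient(z_sediment, z_chalk)
print(f"R = {r:.2f}")  # a strong reflection: much energy never reaches deeper layers
```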


Figure 1.22: Discoveries on the southern Utsira High: Edvard Grieg (Luno) and Johan Sverdrup (west – south-east line). The mature source rock in this area is located in the Viking Graben. The seismic section illustrates well the challenge for oil to have migrated into the Johan Sverdrup field, which is overshadowed by the basement high (Rønnevik et al., 2017).

1.3.1.3  The Johan Castberg Field
A spectacular example of the information available through seismic data is the Barents Sea Johan Castberg (previously Skrugard) field, found in 2011. Interest in the Skrugard block was low until 3D seismic was acquired in connection with the Norwegian 20th licence round in 2009. These new data showed ‘direct hydrocarbon indicators’ (DHIs) more clearly, particularly two ‘flat spots’ interpreted as reflections of the gas-oil and oil-water interfaces (see Figure 1.24). Combined with positive results from electromagnetic (EM) surveys (see Section 1.4), this attracted several companies. The discovery in the Barents Sea opened up a new oil province, with significant additional exploration resource growth potential.

Figure 1.23: Perspective view of the Barents Sea seen from the northwest, based on 3D seismic, showing the Top Realgrunnen Formation/ Base Cretaceous Unconformity (BCU). This is a major erosional surface visible over the entire North Atlantic, which developed at the transition from late Jurassic to early Cretaceous. Large regional geological elements define the structural picture: the Nordkapp Basin, Loppa High, Hammerfest Basin, and Bjørnøya Basin. The location of the Skrugard, Havis, Goliat and Snøhvit fields are shown. One of the biggest challenges in the Barents Sea is the sealing potential of reservoirs; this applies particularly to possible Jurassic reservoirs. The Skrugard structure consists of rotated fault blocks of Jurassic age (145–200 Ma). Wergeland et al., 2013 3P Arctic, Stavanger. (Courtesy Statoil and licence partners Eni Norge and Petoro. Data courtesy of Schlumberger Multiclient.)


Figure 1.24: 3D view of top reservoir depth map showing the setting of the Skrugard structure. Well 7220/8-1 in 2011 proved hydrocarbons in the porous sandstone rocks. This discovery was a significant milestone for hydrocarbon exploration in the Barents Sea, and positively confirmed the geoscientists’ play model and prospect evaluation. The two flat spots seen on the seismic map (left) were good hydrocarbon indicators and, as expected, they proved to represent a gas-oil contact (GOC) and an oil-water contact (OWC). The well, in approximately 1,250m of water, encountered a 33m gas column and an 83m oil column. The first flat spot results from the increase in acoustic impedance where the gas-filled porous rock (with a lower acoustic impedance) overlies the oil-filled porous rock (with a higher acoustic impedance). The second flat spot likewise results from the increase in acoustic impedance between the oil-filled porous rock and the brine-filled porous rock beneath. They stand out on the seismic image because they are relatively flat and thus contrast with the surrounding dipping strata reflections. Generally, however, there are a number of other possible causes for a flat spot on a seismic image, so that flat spots do not necessarily indicate hydrocarbons. In the seismic section, the flat spots have been enhanced by a data processing technique called optical stacking, where multiple seismic sections are summed together to see flattish reflectors clearly. (Courtesy Statoil and licence partners Eni Norge and Petoro. Data courtesy of Schlumberger Multiclient.) Wergeland et al., 2013, 3P Arctic, Stavanger.


Figure 1.25: Information from seismic data is used to make geosections for pre-well interpretation. In this geosection of Skrugard (a), the different geological formations, faults, compartmentalisations and fluids are indicated, along with an interpretation of the seismic line (b). Orange, green, blue and purple denote Paleogene, Cretaceous, Jurassic and Triassic formations, respectively. The Realgrunnen Subgroup is subdivided into four formations: Fruholmen, Tubåen, Nordmela and Stø. The red and green colours overlying the reservoir formations represent gas over oil. The well 7220/8-1 was drilled in 2011. (Courtesy Statoil and licence partners Eni Norge and Petoro. Data courtesy of Schlumberger Multiclient.) Wergeland et al., 2013, 3P Arctic, Stavanger.



1.3.2  Seismic Acquisition


Seismic surveying technology is the key tool helping us to ‘see’ underground both on land and beneath the sea.

Figure 1.26: Inge Grødum’s artistic view of seismic acquisition and processing. In seismic surveying, a sound wave is generated and sent into the earth. When the wave reaches boundaries between geologic layers having different impedances (product of velocity and density) part of the wave will be reflected back to the surface where it is recorded by sensors. The strength (amplitude) of the returning wavefield and the time it has taken to travel through the various layers in the subsurface is used in specialised seismic data processing to make images of the rock layers in the survey area. In the 1980s Geco opened several high technology processing services around the world.



Marine seismic is presented in Chapters 2 and 3 (Elements of Seismic Surveying; Sounds in the Sea and Marine Seismic Sources), and the reader is referred to those chapters for a detailed discussion. Here, we give a brief introduction. In a modern marine seismic acquisition campaign a large seismic vessel with the capacity to tow two large airgun source arrays and up to 24 streamers or cables (the maximum practical number today), each 6–10 km long, traverses the sea surface along pre-determined sail lines. The illustration on the front page of this chapter shows the typical layout of a marine seismic acquisition system, referred to as the largest moving man-made object on Earth! Until the mid-1980s, however, most seismic vessels towed only a single, relatively short streamer.

In an isotropic rock, the P-wave and S-wave velocities are given by \(V_P = \sqrt{(\kappa + \tfrac{4}{3}\mu)/\rho}\) and \(V_S = \sqrt{\mu/\rho}\), where \(\rho\) is the density, \(\kappa > 0\) is the bulk modulus and \(\mu\) is the shear modulus (which is zero for air and water). From these equations you see why the P-wave velocity for a given rock is always larger than the corresponding S-wave velocity.
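As a quick numerical illustration of these expressions, the following sketch (Python; the moduli and density are assumed, sandstone-like values) computes both velocities:

```python
import math

def velocities(kappa, mu, rho):
    """P- and S-wave velocities (m/s) from bulk modulus kappa (Pa),
    shear modulus mu (Pa) and density rho (kg/m^3)."""
    vp = math.sqrt((kappa + 4.0 * mu / 3.0) / rho)
    vs = math.sqrt(mu / rho)
    return vp, vs

# Assumed moduli for a water-saturated sandstone (order-of-magnitude values)
vp, vs = velocities(kappa=20e9, mu=9e9, rho=2300.0)
print(f"Vp = {vp:.0f} m/s, Vs = {vs:.0f} m/s, Vp/Vs = {vp / vs:.2f}")

# Water: the shear modulus is zero, so only a P-wave propagates
vp_w, vs_w = velocities(kappa=2.25e9, mu=0.0, rho=1000.0)
print(f"Water: Vp = {vp_w:.0f} m/s, Vs = {vs_w:.0f} m/s")
```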

Anisotropy is defined as the property of being directionally dependent, as opposed to isotropy, which implies identical properties in all directions. An example of anisotropy is the light coming through a polariser. Another is wood, which is easier to split along its cells than against it. Seismic anisotropy refers to the variation of wave velocities with the direction of propagation and is widely observed in the Earth’s subsurface. Seismic anisotropy has a variety of practical applications, and is particularly important in seismic imaging as we move towards larger azimuthal coverage and larger offsets in 3D surveys. Several seismic processing and inversion methods (inversion is the mathematical process of estimating parameters in a medium, such as velocities and density, from seismic reflection data) utilise anisotropic models to reduce uncertainty in seismic imaging of reservoir volumes and to precisely image internal and bounding-fault positions. Shale, which comprises much of the clastic fill of sedimentary basins, is known to produce transverse isotropy (TI), which is the commonest type of anisotropy.

Figure 1.39: P-wave velocity, density and S-wave velocity from a well from the Gullfaks field. The top of the reservoir is about 1,830m, where the S-wave velocity increases abruptly.


As a first approximation, shales are assumed to be made up of horizontal layering, where the rock matrix is stiffer in the direction parallel to the layering. The horizontal velocity can be up to some 10% faster than that in the vertical direction. Where the layering is flat, the anisotropy is constant with azimuth; hence the terms transverse isotropy with vertical symmetry axis (VTI) and polar anisotropy. Where the layering dips, or is fractured, the situation becomes more complicated.
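Such directional velocity variation in VTI media is commonly described with the Thomsen parameters ε and δ; the sketch below (Python) evaluates the standard weak-anisotropy approximation for the P-wave phase velocity, using assumed shale-like parameter values:

```python
import math

def vp_vti(theta_deg, vp0, epsilon, delta):
    """Thomsen's weak-anisotropy approximation for the P-wave phase velocity
    in a VTI medium: v(theta) = vp0*(1 + delta*sin^2*cos^2 + epsilon*sin^4),
    with theta measured from the vertical symmetry axis."""
    t = math.radians(theta_deg)
    s, c = math.sin(t), math.cos(t)
    return vp0 * (1.0 + delta * s**2 * c**2 + epsilon * s**4)

vp0, eps, dlt = 3000.0, 0.10, 0.05   # assumed shale-like values
for angle in (0, 30, 60, 90):
    print(f"theta = {angle:2d} deg: Vp = {vp_vti(angle, vp0, eps, dlt):.0f} m/s")
# With epsilon = 0.10, the horizontal velocity is 10% faster than the vertical.
```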

Many deep hydrocarbon reservoirs may have undergone tremendous overburden forces as well as local stresses. Such forces exceed the strength of the rocks and open up fractures in them, resulting in stress-induced anisotropy. Depending on the number of fracture sets and the fracture orientation, the anisotropy varies from HTI (transverse isotropy with horizontal symmetry axis) and orthorhombic to the even more complicated monoclinic or triclinic types. From the rock physics point of view, seismic anisotropy can result from preferred alignment of mineral grains and pore spaces, from a preferred direction of fluid flow among pore spaces, and from the intrinsic seismic anisotropy of minerals.

Figure 1.40: Typical rock P-wave velocities.

Figure 1.41: Typical rock S-wave velocities.


1.3.5  Analysis and Interpretation of Seismic Data
The objective of seismic interpretation is to extract subsurface information from the processed seismic data. This information includes structure, stratigraphy, subsurface rock properties, velocities, stress distributions and reservoir fluid changes in time and space. In a virgin area, without well control, it is more challenging to find potential reservoir rocks through interpreting seismic data than it is in an area where several wells have been drilled prior to the interpretation.

If there is a well in the area, one usually starts by making a well tie, using well logs to link the seismic data to the information gained from them. An example is shown in Figure 1.42, where the velocity, density and gamma logs from the well have been used to identify the top of a sand layer; low gamma values indicate sand, higher values indicate shale. It is important to notice the scale differences on this plot. In the well we measure the seismic velocity and the density every 30 cm; however, because the seismic wave has a relatively long wavelength (from 20m to several hundred metres), the seismic wavefield has a resolution of a similar order (20–200m), depending on depth. It is also important to note that the seismic wave is reflected at interfaces where there is a contrast in acoustic impedance (velocity multiplied by density, as shown in the fourth column in the well-tie figure).

By tying, for instance, the maximum peak of the seismic signal (or, alternatively, the zero crossing, where the amplitude value is zero) to the synthetic seismic signal that is based solely on the well log information, one can identify where the top sand reflection is on the seismic. Once this is done, one can follow the same part of the seismic wavelet (maximum or zero crossing) away from the well. At some distance from the well, the signal might change abruptly, indicating the existence of a fault (Figure 1.43). Based on geological knowledge, the interpreter has to decide what type of fault this might be, and identify the same interface or layer on the other side of the fault. Several computer systems for automatic fault detection are available, and such methods may be used to aid the interpretation process, especially in heavily faulted areas.

Seismic interpretation requires skills in both geology and geophysics, and is often regarded as an art. Often there are several possible interpretations, and to limit the number of different earth models, seismic modelling is used to constrain and reduce the number of interpretations. After interpreting a given interface, it is common to produce amplitude maps, showing the strength of the seismic reflection. We observe that for the top reservoir interface at Gullfaks (Figure 1.44), there is a strong correlation between high amplitudes and the presence of hydrocarbons. In such cases, amplitude maps are extremely valuable and can be used directly in reservoir management for well planning.
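In its simplest form, the synthetic trace used in a well tie is the convolution of the log-derived reflection coefficients with a wavelet; a minimal sketch (Python with NumPy; the blocky three-layer ‘logs’ are invented for illustration, not taken from any real well) is:

```python
import numpy as np

dt = 0.002                          # sample interval (s); 500 samples = 1 s trace
n = 500

# Invented blocky 'logs': acoustic impedance = velocity * density
vel = np.full(n, 2200.0); rho = np.full(n, 2100.0)   # soft overburden
vel[200:350], rho[200:350] = 2600.0, 2250.0          # shale
vel[350:], rho[350:] = 3000.0, 2300.0                # top sand at sample 350
ai = vel * rho

# Reflection coefficients at the impedance contrasts
rc = np.zeros(n)
rc[1:] = (ai[1:] - ai[:-1]) / (ai[1:] + ai[:-1])

# Zero-phase Ricker wavelet with 30 Hz peak frequency
f0 = 30.0
tw = np.arange(-0.1, 0.1, dt)
ricker = (1.0 - 2.0 * (np.pi * f0 * tw) ** 2) * np.exp(-((np.pi * f0 * tw) ** 2))

# Synthetic seismogram: reflectivity convolved with the wavelet
synthetic = np.convolve(rc, ricker, mode="same")
for k in np.nonzero(rc)[0]:
    print(f"Interface at {k * dt:.3f} s: rc = {rc[k]:+.3f}, "
          f"synthetic amplitude = {synthetic[k]:+.3f}")
```

Shifting and stretching this synthetic trace until it matches the real trace at the well location is exactly the calibration step described above.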

Figure 1.42: Well calibration: the figure shows, from left to right: gamma log, P-wave velocity, density, acoustic impedance, reflection coefficients, synthetic seismic (same trace plotted 10 times), real seismic data from the well point (also plotted 10 times) and real seismic data around the location (the red line shows the well location). All logs are plotted against depth, and the green dotted line shows an example of how the top sand can be identified on seismic data; this procedure is often referred to as a well tie.


Figure 1.43: a. Faulted sand layers from an outcrop in La Jolla, California; thickness of layers is 0.2–0.3m (photo: Martin Landrø). b. Seismic interpretation and well calibration: the blue seismic trace shows the synthetic seismic data based on well logs, and we see that there is a certain shift between the blue and the grey seismic event, which is corrected so that the seismic data is calibrated at the right seismic event. After this is done, we can interpret the top reservoir (light green line), and when we come to a discontinuity, it will often (but not always!) indicate a fault, as shown by the black sloping line on the left of the well location. Note that the fault throw in the seismic data is of the order of 50–70m while the fault throw in the outcrop picture is 10–20 cm.

1.3.6  AVO Analysis
In seismic, Amplitude Versus Offset (AVO) is the term used to refer to the relationship between the reflection amplitude and the source-receiver distance, or offset. The term AVO indicates that we are studying the reflection amplitude as a function of offset, but it is a slight misnomer since we tend to look at seismic reflection amplitude as a function of the incidence angle at an interface. Amplitude versus angle (AVA) is thus a better term. AVA is based on the relationship between the amplitude reflection coefficient and the angle of incidence, published in 1919 by Karl B. Zoeppritz. These equations are known as the Knott-Zoeppritz equations, since Knott derived the corresponding equations for displacement potentials 20 years earlier.

AVO/AVA analysis of seismic data is applied by the geophysicist to predict a rock’s fluid content, porosity, density and seismic velocity. AVO analysis often includes the use of rock physics models, inversion procedures and detailed calibration to existing wells. A simple, qualitative use of the method is shown in Figure 1.47. From the ‘full stack’ (seismic data where all data acquired along the streamer are used), prospects A, B and C appear similar with respect to amplitude. However, when the data are separated into near (first third of the streamer), mid (second third) and far (last third) offset sections, we observe that prospect A is very different from B and C: for both B and C the amplitude increases with offset, while the opposite is observed for A. Hence, it was decided not to drill prospect A; B and C proved to be oil-filled rocks.

Figure 1.45: a. Cargill Gilston Knott (1856–1922); b. Karl Bernhard Zoeppritz (1881–1908).
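The exact Knott-Zoeppritz expressions are cumbersome, so AVO work commonly uses linearised approximations. The sketch below (Python) evaluates Shuey’s well-known two-term approximation, R(θ) ≈ R0 + G sin²θ – not the full equations – for an assumed, invented shale-over-sand interface:

```python
import math

def shuey(vp1, vs1, rho1, vp2, vs2, rho2, theta_deg):
    """Two-term Shuey approximation to the Knott-Zoeppritz PP reflection
    coefficient: R(theta) ~ R0 + G * sin^2(theta)."""
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    r0 = 0.5 * (dvp / vp + drho / rho)                # intercept
    g = (0.5 * dvp / vp
         - 2.0 * (vs / vp) ** 2 * (drho / rho + 2.0 * dvs / vs))  # gradient
    return r0 + g * math.sin(math.radians(theta_deg)) ** 2

# Assumed shale over gas-sand interface (illustrative values only)
for angle in (0, 10, 20, 30):
    r = shuey(2400, 1100, 2300, 2250, 1300, 2050, angle)
    print(f"theta = {angle:2d} deg: R = {r:+.3f}")
```

For these invented values the reflection brightens (becomes more strongly negative) with angle, the kind of behaviour seen for prospects B and C in Figure 1.47; a sign of dimming with angle would resemble prospect A.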


Figure 1.44: Seismic amplitude map for the top reservoir at the Gullfaks field. The purple line shows the oil-water contact. We see that there is a good correlation between the oil-filled reservoir and strong amplitudes, shown in red. The blue circles show well locations.

Figure 1.46: The Knott-Zoeppritz equations describe the relationship between the amplitudes of P-waves and S-waves on each side of an interface between two elastic media as a function of the angle of incidence θ: an incident P-wave gives rise to reflected P- and S-waves and refracted P- and S-waves.


Figure 1.47: Full stack data (top) and three substacks (bottom) for the near, middle and far offset sections of the streamer. Notice that prospect A has a different amplitude behaviour with respect to offset than B and C (dimming versus brightening).

1.3.7  4C Seismic
For marine streamer data, information related to the S-wave velocity is hidden: it must be extracted indirectly using AVO analysis, exploiting the fact that the reflection coefficient for P-waves depends on the S-wave velocity when the incidence angle is not zero. This indirect method has limitations, which led to the development of 4C seismic by Statoil in the early 1990s (Berg et al., 1994). 4C is an acronym for four components: x, y and z (geophone measurements) and a hydrophone measuring the pressure component. In order to measure these four components, special receiver units were developed that could be placed on the sea bottom to achieve good coupling. Today there are various ways of acquiring 4C seismic data, ranging from single nodes and 4C receivers chained together, to cables that are either deployed directly on the seabed or buried a metre into the seabed (if they are used as permanent arrays).

Why do we want to measure the S-waves directly? Their analysis yields a more precise measurement of the S-wave velocity for a given subsurface layer, enabling us to reduce the uncertainty related to whether the rock is shale or sand. In most cases sands have a lower ratio between P-wave and S-wave velocity, and hence knowledge of the S-wave velocity is important. Furthermore, the extra information from recording the S-waves can be useful to determine whether a flat spot observed on the seismic data is caused by fluid effects (for instance oil-water) or is lithology related. Since S-waves do not see fluid effects, a ‘real’ flat spot should not be visible as a shear reflection, while a lithology contrast will show up.

A nice example demonstrating this, from the Alba field in the North Sea, is shown in Figure 1.48. The PP-wave (P down to the reflector and back up again) section on the left shows a clear flat spot that terminates at the end of the turbidite sand channel, while the PS-wave (P down and S up again) section (right) shows the top and base of the sand channel. The combination of these two images tells a more complete story than either of the two alone, and helps to identify the sand channel and the oil-bearing sands. In this case the presence of the oil-water contact was confirmed by the well penetrating the central part of the channel. The contrast in S-wave velocity between the surrounding shale and the sand channel was also confirmed by the well.
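Where the same interface can be picked on both PP and PS sections, a quick estimate of the average Vp/Vs ratio of the overburden follows directly from the two travel times; a small sketch (Python; the travel-time values are invented for illustration) is:

```python
def vpvs_from_traveltimes(t_pp, t_ps):
    """Average Vp/Vs ratio (gamma) above a reflector, from the two-way PP
    time and the PS (converted-wave) time to the same interface.
    For depth z: t_pp = 2z/Vp and t_ps = z/Vp + z/Vs, hence
    gamma = Vp/Vs = 2 * t_ps / t_pp - 1."""
    return 2.0 * t_ps / t_pp - 1.0

# Invented travel times (seconds) to the same reflector
t_pp, t_ps = 2.00, 3.10
print(f"Vp/Vs = {vpvs_from_traveltimes(t_pp, t_ps):.2f}")
# Lower Vp/Vs values would favour sand over shale.
```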

1.3.7.1  Imaging of Complex Structures with Seabed Seismic
Despite convincing examples of using 4C seismic to ‘see through’ gas clouds and to confirm the existence of fluid-related flat spots, the major practical use has been to image complex reservoirs, where the acquisition layout used in the 4C survey is important.

Figure 1.48: Comparison between normal PP-seismic (left) and PS-seismic data (right) from the Alba field. Note that the oil-water contact (dashed blue line) appears clearly on the PP seismic data, while the circumference of the sand channel (solid yellow line) is best viewed on the right. In this way we can say that the PP- and PS-seismic data complement each other (MacLeod, Hanson, Bell and Hugo, SPE 56977, 1999).



Figure 1.49: Seismic data are colourful, not to make data pretty, but to convey information. Variable-intensity colour is an essential tool for interpretation of stratigraphy, hydrocarbons, porosity and reservoir properties. Variations in amplitude, phase, frequency, velocity, or any other trace-related attribute can be represented by different colours or shades of grey to black. Inge Grødum’s 1980 view is: ‘Leiv’ has fired a shot with his seismic gun; to his surprise, the sea looks like the colourful seismic displays which geophysicists produce in seismic data processing.


Normally, the receivers are spread in a grid on the seabed, and the shooting grid (using a single source vessel) is typically 50m by 50m. This means that each receiver will be illuminated from a wide variety of azimuthal angles, unlike in a conventional streamer survey, where the subsurface is illuminated using a single azimuth angle. For a complex subsurface object this is similar to using several torches to reveal the structure of an object compared to using one single torch: the object will have fewer shadow zones the more torches you use. Figure 1.50, from the East Flank of the Statfjord field, illustrates this: the staircase shape of the fault blocks is more precisely imaged on the OBS (Ocean-Bottom Seismic) PP data compared to the conventional streamer data, and more information about the internal structure of the geology is seen in the OBS data. 4C seismic and its benefits are discussed in greater detail in Chapter 2: Elements of Seismic Surveying.

Figure 1.50: Comparison of OBS seismic (PP) data acquired in 2002 (bottom) and conventional streamer data acquired in 1997 (top) for the East Flank of the Statfjord field. (Statoil.)

1.3.8  4D Seismic
4D seismic is time-repeated seismic. The phrase 4D refers to the fact that time is the fourth dimension, and it is the calendar time between the first and second acquisition that represents the time axis. 4D seismic probably started in the US in the 1980s, when several fields undergoing thermal stimulation were monitored. For such fields, it was common either to ignite the oil by adding oxygen, or to inject hot steam into the reservoir, so that the oil became less viscous and flowed more easily into the producer wells. Greaves and Fulp showed in 1987 that such thermal recovery methods could be monitored using 3D seismic, and this was probably one of the first practical examples of the use of 4D seismic data.

The first large-scale commercial 4D survey on the Norwegian continental shelf took place over the Gullfaks field, where in 1995 Statoil decided to cover the whole northern part of the field with repeated surveys. The first 3D seismic data over Gullfaks had been collected in 1985, just before the field was put into production. The main objective of using 4D seismic was to identify remaining pockets of untouched oil. By comparing data acquired over the same area before and after production, it is possible to detect changes, and in this example (see Figure 1.51) we clearly see changes at the top of the reservoir, especially at the flank. We also notice that the seismic signal at the original oil-water contact is not visible on the monitor survey, another indication that water has replaced oil in this area.

By taking the difference between data of two different vintages, we observe that some areas have large amplitude changes while others have minor ones or none. By finding segments that have seen little change, one can use this information to drill wells into those segments. At Gullfaks, the use of 4D seismic data has been a great success, and more than 20 wells have been drilled using 4D seismic information. A more comprehensive introduction to 4D seismic can be found in Landrø (2015) and Chapter 4: Reservoir Monitoring Technology.

Figure 1.51: Time lapse seismic data (middle) from the Gullfaks field in 1985 and 1999, and the corresponding interpretation of changes in the oil column (right). The 1999 survey clearly shows the effect of production when compared with the baseline survey of 1985. The change in the seismic reflection strength of the top of the reservoir is related not only to the saturation change but also to the original oil-column height. When water replaces oil, the acoustic impedance in the reservoir increases, causing a dimming effect on what used to be a strong response from the top of the reservoir. The strong seismic response from the oil-water contact in 1985 has also been dimmed due to production. Red and yellow colours represent a decrease in acoustic impedance, while blue represents an increase. Structure and fluid content are shown in the cross-sectional models on the right. Interpretation shows that the smaller oil accumulation (to the left of the fault) has been drained by 1999, while much oil is still to be recovered from the main accumulation (to the right of the fault). (Statoil.)

Figure 1.52: Comparison of two single traces (left) – notice that the top reservoir is unchanged, while the oil-water contact is substantially changed. Top right is the 4D difference (1985–1995) along a vertical profile, and lower right shows amplitude differences in map view along the oil-water contact after nine years of oil production: red indicates high production between the seismic surveys, ‘light’ colours indicate minimal drainage in the production period, and the amplitude change at the OWC is caused by water replacing oil. (Statoil.)

1.3.9  Passive Seismic Sensing
Since the beginning of seismology, passive seismic methods have been developed and used to determine the hypocentres of earthquakes. Furthermore, rupture microseismic events are used in order to image seismically active faults or locate fracture systems in the subsurface. The techniques are called ‘passive’ because the geophysicist does not activate any seismic source; what is exploited are vibrations from natural earthquakes or rupture processes (e.g., due to hydraulic stimulation).

Today, passive seismic is a rapidly growing technology used to monitor oilfield completion and production processes. Stress changes induced by man-made activities in the reservoir, such as hydraulic fracturing, water injection or fluid extraction, may result in failure of the rocks with a concurrent release of seismic energy in the form of P- and S-waves. The events are recorded and their arrival times used to estimate the location of the failure events. Specifically, passive seismic is used to map fracture growth during hydraulic fracture stimulations.

Passive seismic may also be an alternative for exploration in areas where conventional seismic acquisition is not an option, such as tropical jungles or thrust-belt zones and harsh mountainous areas. For the passive sensing methodology to be applied, the area of interest has to be tectonically active. The vast amount of natural seismicity from microearthquakes of magnitude −1 up to 2.0 Richter is recorded continuously for a period of a few months on specially designed networks of seismometers at the surface. Special data processing methods and tomographic inversion are then used to obtain 3D volumes of P-wave and S-wave velocities, and attenuation properties of the subsurface. Together, this information may be used to predict the presence of fluids in potential reservoirs.

Figure 1.53: In shale gas development, an array of geophones deployed across a field enables the monitoring of multiple wells simultaneously over the life of the field, ensuring operators understand the variability in shale over space and time. (MicroSeismic Inc.)
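To illustrate the arrival-time location principle described above – only the principle, not any particular commercial workflow – the sketch below (Python) recovers an invented event position from P-wave arrival times at four invented surface stations by grid search, assuming a uniform velocity and a known origin time:

```python
import itertools
import math

V_P = 3000.0  # assumed uniform P-wave velocity (m/s)

# Invented surface stations (x, y, z = 0) and an invented true event
stations = [(0, 0, 0), (2000, 0, 0), (0, 2000, 0), (2000, 2000, 0)]
true_event = (900.0, 1200.0, 1500.0)

def travel_time(src, sta):
    return math.dist(src, sta) / V_P

# 'Observed' arrival times generated from the invented event
observed = [travel_time(true_event, s) for s in stations]

# Grid search: pick the trial hypocentre that minimises the misfit
best, best_misfit = None, float("inf")
xy_grid = range(0, 2001, 100)
for x, y, z in itertools.product(xy_grid, xy_grid, range(500, 2501, 100)):
    predicted = [travel_time((x, y, z), s) for s in stations]
    misfit = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    if misfit < best_misfit:
        best, best_misfit = (x, y, z), misfit

print("Estimated hypocentre:", best)  # close to (900, 1200, 1500)
```

Real microseismic processing must in addition solve for the origin time and use a 3D velocity model, but the principle is the same.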


1.4  Electromagnetic (EM) Methods
1.4.1  Introduction to EM
Electromagnetic (EM) methods are used to map and interpret subsurface resistivity – how strongly a rock opposes the flow of electrical current. (The reciprocal of resistivity is conductivity.) The resistivity of geologic units ranges over many orders of magnitude and is primarily controlled by porosity, pore fluid saturation and pore fluid salinity, but also by temperature, mineralogy (the presence and proportion of quartz, feldspar, clay, carbonate, etc.) and grain fabric (e.g., grain shape and alignment). EM methods include a wide variety of techniques, each involving an EM source (natural or artificial) and a way of measuring one or more of the electric or magnetic field components. In Sections 1.4.2 and 1.4.3 we discuss the basics of the controlled source electromagnetic (CSEM) and magnetotelluric (MT) methods, but first we present some introductory comments.

EM waves can travel indefinitely through space or the atmosphere, but when they pass into rocks, which are conductors, they are progressively absorbed; the more rapidly a rock absorbs EM waves, the higher its conductivity, or the lower its resistivity. In rocks, the wave’s amplitude decays exponentially with distance \(r\) from the source as \(e^{-r\sqrt{\pi f \mu_0/\rho}}\), where \(f\) is frequency, \(\mu_0\) is the magnetic permeability in vacuum (4π × 10⁻⁷ Hm⁻¹) and \(\rho\) is resistivity (measured in ohm-metres: Ωm). Note that attenuation is greater for higher frequencies; therefore, lower frequencies penetrate deeper than higher ones. Furthermore, attenuation is lower for larger resistivities; therefore the EM field survives longer in highly resistive bodies than in less resistive materials. Because of attenuation, lower frequencies are needed to reach deeper targets, as well as powerful sources and sensitive receiver antennas.

In the subsurface, EM wavelengths are much longer than seismic wavelengths, so the vertical EM resolution is significantly lower than the vertical seismic resolution. A consequence of the low resolution is that EM inversion algorithms, unless constrained, cannot produce resistivity images with sharp boundaries, and the images appear vertically defocused. Furthermore, since the resolution is low, the true values of the subsurface resistivity cannot be recovered (see Figure 1.54). Typically, if inversion is used to analyse CSEM data from a thin

(e.g. tens of metres thick), moderately resistive (e.g. 10–100 Ωm) layer, the inverted resistivity image of the layer appears vertically diffuse or spread out (hundreds of metres thick) and less resistive (1–10 Ωm). Fortunately, when the thickness of the geologic body is known (from seismic), the true subsurface resistivity can be estimated by scaling the resistivity-thickness product, called the transverse resistance \(T = \rho h\), of the low-resolution resistive body to that of an equivalent thin body, using the principle of conservation of the product \(\rho h\), where \(h\) denotes thickness.
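To get a feel for these numbers, the sketch below (Python) evaluates the exponential decay and the associated penetration scale (the skin depth, discussed further in Section 1.4.1.2) for a few frequencies and resistivities; the chosen values are merely illustrative:

```python
import math

MU0 = 4.0e-7 * math.pi   # magnetic permeability of vacuum (H/m)

def skin_depth(rho, f):
    """Skin depth (m): the distance over which the amplitude falls by 1/e."""
    return math.sqrt(rho / (math.pi * f * MU0))

def decay(rho, f, r):
    """Amplitude factor exp(-r*sqrt(pi*f*mu0/rho)) after travelling r metres."""
    return math.exp(-r / skin_depth(rho, f))

# Lower frequency and higher resistivity both increase penetration
for f in (0.25, 1.0, 10.0):                  # Hz
    for rho in (0.3, 2.5, 50.0):             # ohm-m: sea water, sediments, resistive layer
        print(f"f = {f:5.2f} Hz, rho = {rho:4.1f} ohm-m: "
              f"skin depth = {skin_depth(rho, f):7.0f} m")

print(f"1 km through sea water at 0.25 Hz leaves "
      f"{decay(0.3, 0.25, 1000.0):.1%} of the amplitude")
```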

1.4.1.1  Resistivity and Anisotropies
Electrical resistivity describes how a material allows electric currents to flow through it. Of all the geophysical properties of rocks, electrical resistivity is by far the most variable, and the resistivity of individual rock types can vary by several orders of magnitude. An electric current can pass through a subsurface formation if the rock contains water with enough dissolved ions to be conductive. Nearly all rock formations have finite, measurable resistivities because of the water in their pores, adsorbed on their grain surfaces, or absorbed into a clay structure. As stated above, the resistivity of geological units is largely dependent upon their fluid content, pore-volume porosity, interconnected fracture porosity, and conductive mineral content.

Formation resistivities are usually in the range of 0.2 to 1,000 Ωm. Resistivities higher than 1,000 Ωm are uncommon in most permeable formations but are observed in formations that are impervious to water and have low porosity, such as evaporites. Fluids within pore spaces and fracture openings, especially if saline, can reduce resistivities in what would otherwise be a resistive rock matrix. Resistivities can also be lowered by the presence of electrically conductive clay minerals and metallic mineralisation. Fine-grained sediments such as clay-rich marine shales are normally conductive, from a few Ωm to a few tens of Ωm, while non-graphitic (non-graphitisable carbon) metamorphic rocks and unaltered, unfractured igneous rocks are normally moderately to highly resistive (a few hundreds to thousands of Ωm). Carbonate rocks have high resistivities depending on their fluid content, porosity and impurities. Higher subsurface temperatures cause higher ionic mobility, reducing rock resistivities.

A rock formation containing hydrocarbons often has

Figure 1.54: Inversion of noise-free synthetic CSEM data from a model (left) with three resistive anomalies (red) gives estimated resistivities that spread out vertically (Causse et al., 2015, EAGE).


significantly higher resistivity than its surroundings. Therefore, hydrocarbon-saturated reservoir rock, unless its resistivity is significantly reduced by, for example, the presence of clay matrix or a salty water component, can be directly detected by transmitting CSEM fields into the subsurface and recording the returning signal that has been ‘guided’ in the thin resistive hydrocarbon layer. (Guiding means that the energy’s direction of propagation is effectively parallel to the layer boundaries.)

Similar to the way seismic velocity depends on direction, electric resistivity depends on the direction of current flow in the rock. The earliest observations of electrical anisotropy were noted in the 1930s, when surface measurements using electrodes laid in different directions were seen to give different results depending on whether layers were dipping or flat. In most situations the resistivity is quite uniform in the horizontal directions, but larger in the vertical direction.

Today, surprisingly large anisotropies have been observed from log measurements – 10:1 or greater – and not just in shales and laminated sand-shale sequences. Some of the largest anisotropies are found in what in the past had been assumed to be clean, homogeneous sands. In these formations, the anisotropy is believed to be caused by variations in grain size and irreducible water saturation.

Electrical anisotropic resistivity must therefore be taken into account in EM exploration, where the goal is to investigate the nature of sedimentary formations; it is especially relevant to understanding water saturation in hydrocarbon reservoirs.

Figure 1.55: Approximate range of electrical resistivity of rocks, taken from several sources. From petrophysical measurements in oil wells, it is well known that the resistivity increases when hydrocarbons are present in sandstones. The challenge for the CSEM interpreter is to discriminate between resistive layers caused by the presence of hydrocarbons and other layers that also might have high resistivity.

1.4.1.2  Velocity and Skin Depth
In a vacuum, EM waves travel at the speed of light, 299,792,458 m/s, often rounded to 300,000,000 m/s. In air, they travel slightly slower at 299,702,534 m/s. In rocks, however, the velocity is much slower, and depends strongly on frequency and resistivity. The phase velocity – the velocity at which the phase of any one frequency component of the wave travels – is \(v = \sqrt{4\pi f \rho/\mu_0}\). At a frequency of 1/4 Hz, the phase velocity in sea water, where resistivity is 0.3 Ωm, is 922 m/s. In a layer with resistivity 2.5 Ωm, the phase velocity is 2,500 m/s, whereas in a layer with resistivity 50 Ωm, it becomes as high as 11,180 m/s.

Based on electromagnetic theory, the EM amplitude is, as we have noted, exponentially damped with distance. Skin depth is a measure of how far into a material an EM wave can go. It has quite practical applications in geophysics, where low-frequency EM waves are sent through water and rocks. The skin depth is given as \(\delta = \sqrt{\rho/(\pi f \mu_0)}\), showing that, for fixed resistivity, skin depth, and thus EM depth penetration, increases with decreasing frequency. In travelling one wavelength \(\lambda = 2\pi\delta\), the amplitude drops by the factor \(e^{-2\pi} \approx 1/535\), or 55 dB.

1.4.2  Controlled Source Electromagnetics (CSEM)
Marine controlled-source electromagnetics (CSEM) uses electrical resistivity contrasts to investigate the subsurface. CSEM techniques play an important role in oil and gas exploration since the physical property used, resistivity, is higher in hydrocarbon-bearing sediments than in water-bearing ones. As such, CSEM is a complementary tool to seismic – it provides another key parameter which characterises the subsurface. CSEM is sensitive to resistivity and can thus be indicative of hydrocarbon saturation.

The first commercial application of CSEM for hydrocarbon exploration was performed by Statoil in 2002 (Ellingsrud et al., 2002; Røsten et al., 2003), and spurred a period of extensive data acquisition and development of the technology. It became clear that successful use of CSEM in exploration requires a thorough understanding of the CSEM method. Different resistivity models can explain the same measured CSEM data, so one has to select a model that fits the seismic interpretation and geological understanding from among the resistivity models that explain the data. It helps to have well calibration data, which enables the selection of resistivity models that have the best match to well-log resistivities in the surveyed area.

In Section 1.3.1.3 we presented the spectacular seismic example from the Barents Sea, the Johan Castberg (previously Skrugard) field, found in 2011. The history of mapping Skrugard by CSEM is reviewed in Løseth et al., 2014. CSEM multiclient data were acquired over the area in 2008 for the 20th Norwegian Offshore licensing round. In 2010, before the first exploration well was drilled at the Skrugard prospect, these data were revisited, together with a variety of other data analysis and preparation work. New data analysis software and workflows had recently been developed to estimate the probability of discovery, and also to predict pre-well reservoir resistivity and hydrocarbon saturation using CSEM data. It was concluded that a hydrocarbon-filled reservoir with high resistivity was highly likely.


Figure 1.56: Vertical resistivity section from constrained CSEM inversion over Skrugard, co-rendered with seismic data. The structure interpreted from seismic was used to constrain the resistivity anomaly to the Skrugard reservoir. The white squares denote receivers and the white polygon illustrates the identified reservoir container down to the oil-water flat spot. (Løseth et al., 2014). (Statoil and licence partners Eni Norge and Petoro. Data courtesy of EMGS.)

and workflows had recently been developed to estimate the probability of discovery, and also to predict pre-well reservoir resistivity and hydrocarbon saturation using CSEM data. It was concluded that a hydrocarbon-filled reservoir with high resistivity was highly likely. Early in 2011, the Skrugard exploration well 7220/8-1 was drilled with an advanced logging programme. In particular,

Figure 1.57: Skrugard exploration well 7220/8-1: Comparison of resistivity from well log measurements and constrained inversion. The red curve is the logged horizontal resistivity and the blue curve is the logged vertical resistivity. Observe reservoir resistivities in the 1,000 Ωm range. Black lines represent resistivity extracted from CSEM inversion. The red colours emphasise the difference between CSEM resistivities outside and inside the reservoir geometry, for 2.5D inversion (light red colour) and 3D inversion. For layers that are thin relative to the CSEM wavelength, CSEM is mainly sensitive to the vertical component of the resistivity, and less sensitive to the horizontal component. Furthermore, observe that CSEM inversion gives no information about the vertical resistivity below the highly resistive reservoir. (Løseth et al., 2014). (Statoil and licence partners Eni Norge and Petoro.)

the horizontal and vertical resistivity was logged from the seabed down to total well depth. The match between the estimated CSEM resistivities at the well location and the corresponding measured well resistivities was very good, in particular throughout the overburden above the reservoir zone. The transverse resistance computed from the CSEM inversion result matched the logged transverse resistance well. The hydrocarbon saturation estimated from the well logs is approximately 95%. Also in 2011, a dedicated CSEM survey supported the pre-well CSEM conclusions. With new high-quality CSEM data a clearer anomaly at Skrugard was achieved by using newly developed 3D high-resolution inversion (Nguyen et al., 2013; see Figure 1.58).

1.4.2.1  CSEM Acquisition
In CSEM surveying, electric and magnetic field recorders are deployed on the seafloor, or electric recorders are towed in a streamer. A powerful horizontal electric dipole, 200–300m long, is towed at a height of 25–100m above the seafloor, emitting 1,000–10,000A of current into the seawater. Alternatively, the source is towed at about 10m depth. The transmission currents are typically binary waveforms with 0.1 to 0.25 Hz fundamental and higher harmonics. Today’s trend is to shape the waveform to have desirable frequency content in the range 0.05–10 Hz, depending on the geophysical objective.
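To make these numbers concrete, the short sketch below (our own illustrative code, not from any survey software) evaluates the skin depth and phase velocity formulas of Section 1.4.1.2, reproducing the value quoted there for a 2.5 Ωm layer at 1/4 Hz and showing why such low source frequencies are needed to reach deep targets.

```python
import math

MU0 = 4.0 * math.pi * 1e-7  # magnetic permeability of free space (H/m)

def skin_depth(rho_ohm_m, f_hz):
    """Skin depth delta = sqrt(2*rho/(omega*mu0)), approx. 503*sqrt(rho/f) metres."""
    omega = 2.0 * math.pi * f_hz
    return math.sqrt(2.0 * rho_ohm_m / (omega * MU0))

def phase_velocity(rho_ohm_m, f_hz):
    """Phase velocity of the diffusive EM field: v = omega * delta."""
    return 2.0 * math.pi * f_hz * skin_depth(rho_ohm_m, f_hz)

print(round(skin_depth(2.5, 0.25)))      # ~1592 m
print(round(phase_velocity(2.5, 0.25)))  # ~2500 m/s, as quoted in the text
# Over one wavelength (2*pi skin depths) the amplitude falls by e**(-2*pi):
print(round(20.0 * math.log10(math.exp(2.0 * math.pi))))  # ~55 dB
```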

1.4.2.2  Principle of Marine CSEM
The principle behind marine CSEM for hydrocarbon mapping in sedimentary basins is relatively simple. Its basis is that EM energy is rapidly attenuated in conductive sediments, but less so in more electrically resistive layers such as hydrocarbon-filled reservoirs. More specifically, as depicted in Figure 1.63 (events I to V), the major pathways for propagation of the CSEM signal are through seawater (event II), through the subsurface (events III, IV and V), and through the air (event I). The direct field through the seawater (event II) is the signal that is transmitted directly from the horizontal electric dipole source to the receiver. This field dominates in amplitude at short source-receiver separations, but is strongly damped due to the skin depth-related exponential

Figure 1.58: Resistivity anomaly across Skrugard obtained by high-resolution 3D inversion of 2011 CSEM data (Nguyen et al., 2013). (Statoil and licence partners Eni Norge and Petoro. Data courtesy of EMGS.)

Figure 1.59: Principle of CSEM surveying. The vessel is towing an electromagnetic source such as a horizontal electrical dipole after receivers have been placed on the seafloor. The source emits a low frequency current signal that penetrates below the water bottom. Hydrocarbon-bearing layers, characterised by elevated electrical resistivity, can be detected by the receivers. Mathematical inversion is used to estimate subsurface resistivity values from the CSEM data. (Figure from S. Constable, Geophysics, 2010.)

(Annotations in Figure 1.59: resistive air; very conductive seawater; CSEM transmitter – a 100–300 m dipole emitting 100–1,000 A – towed 25–100 m above electric and magnetic field recorders on a seafloor of variable conductivity; natural-source magnetotelluric fields; resistive oil and gas; resistive salt, carbonates and volcanics.)

Figure 1.60: Frequency spectrum of square-wave signal of fundamental frequency f =0.25 Hz. The signal has harmonics (‘overtones’ or multiples of the fundamental frequency) of various amplitudes.
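The harmonic content shown in Figure 1.60 follows directly from the Fourier series of a symmetric square wave: only odd multiples of the fundamental appear, with amplitudes falling off as 1/n. A minimal sketch (function name and amplitude normalisation are our own choices):

```python
import math

def square_wave_harmonics(f0_hz, n_max=9):
    """Fourier amplitudes of a symmetric square wave of unit amplitude:
    only odd harmonics n = 1, 3, 5, ... appear, with amplitude 4/(n*pi)."""
    return [(n * f0_hz, 4.0 / (n * math.pi)) for n in range(1, n_max + 1, 2)]

for freq, amp in square_wave_harmonics(0.25):
    print(f"{freq:5.2f} Hz  amplitude {amp:.3f}")
# 0.25 Hz fundamental, then 0.75, 1.25, 1.75, 2.25 Hz with decreasing amplitude
```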


Figure 1.61. Left: Source towfish (yellow frame) with source dipole antenna (horizontal electric dipole, or HED). Typical distance between antenna electrodes is up to several hundred metres. Right: CSEM receiver, consisting of electric antennas and magnetic coils measuring both electric and magnetic fields in two horizontal, orthogonal directions. The receiver sinks freely to the seabed to a pre-planned location. The orientation of the antennas is arbitrary, but is estimated and rotated into inline and broadside components during data processing. (Photos: EMGS.)

attenuation in addition to the geometrical spreading associated with the source dipole geometry. Subsurface structures are normally more resistive than seawater. As a result, skin depths in the subsurface are longer than those of seawater, and thus the EM fields propagating in the subsurface are less attenuated than the direct field. A hydrocarbon-saturated reservoir may have relatively high resistivity compared to the shales and water-filled sandstones of the subsurface. Field propagation in hydrocarbon-saturated reservoirs will then experience lower attenuation than field propagation in water-bearing sediments. Thus, when the EM field propagates a relatively long distance in hydrocarbon reservoirs (see event V, Figure 1.63), the detected

signals can dominate in amplitude those signals that have propagated in the sediments. It is the enhancement in EM field amplitudes at long source-receiver separations (compared to the depth of the reservoir) that is the main basis for detecting hydrocarbon reservoirs. In shallow water, the lateral field propagation in the air half-space (event I) becomes significant, and may dominate the recorded signals at the receivers. The signal that travels through air is often referred to as the source-induced ‘airwave’. During the survey, the EM receivers measure the energy that has propagated through the sea and the subsurface. Data processing and advanced inversion is performed to produce

Figure 1.62: Principle of simultaneous CSEM and seismic surveying with both EM and seismic sources and receivers in towed streamers. (Courtesy of PGS.)


Figure 1.63: Sketch of the signal propagation in an ideal layered model. The conductivity is σw in seawater (thickness dw), σ1 in sediments (thickness d1), and σ2 in the thin resistive layer (thickness d2). In the water column the source (receiver) is at height hs (hr) above the seabed; depth is measured along the z-axis. The signal paths sketched (the most important ones) are: the lateral ‘airwave’ on the sea-surface interface (I), the direct wave between the source and receiver (II), the lateral wave on the seabed interface (III), the reflected wave from the thin layer interface (IV), and the guided wave in the thin layer (V).

3D resistivity volumes of the subsurface. These datasets are integrated with other subsurface information to enable the geoscientist to make a model of the subsurface for prospectivity evaluation.

CSEM surveying has been in commercial use for hydrocarbon exploration since 2004. The physics of the method dictates, firstly, that CSEM is very sensitive to high electrical resistivity, which, although not an unambiguous indicator of hydrocarbons, is an important property of economically viable hydrocarbon reservoirs. Secondly, the physics shows that CSEM lacks the resolution of seismic, since it is a low-resolution method due to the low frequencies and hence large wavelengths involved in the investigations. However, CSEM has a much better intrinsic resolution than potential field methods such as gravity and magnetic surveying.

Figure 1.64: Lightning strike.


1.4.3  Marine Magnetotellurics (MT)
The abbreviation MT does not mean the ‘empty’ method, but ‘magnetotellurics’, derived from ‘magneto’ for magnetic and ‘telluric’ for earth currents. MT is a passive-surface electromagnetic geophysical exploration technique that measures variations in the earth’s naturally occurring, time-varying magnetic and electric fields to investigate the electrical resistivity structure of the subsurface from depths of tens of metres to hundreds of kilometres. MT is one of the few techniques capable of sensing through the earth’s crust to the upper mantle. Silicate minerals comprise 95% of the crust, and they are very resistive. In the crust, MT has been used to locate underthrust sediments, regions of metamorphism and partial melting, and fault zones (fractured, fluid-filled rock), while in the mantle it has been used to identify partial melt and variations in lithospheric temperature.

The source of MT fields is mainly world-wide thunderstorm activity, usually near the equator, generating frequencies of 10,000 to 1 Hz, and geomagnetic micro-pulsations at frequencies of 1 to 0.001 Hz. This source wavefield propagates vertically into the earth due to the large resistivity contrast between the air and the earth, causing a vertical refraction of the electromagnetic wave transmitted into the earth. The natural fields are recorded in the x-y-z directions for the magnetic field and the x-y directions for the electric field at the earth’s surface. The fields are sensitive to both vertical and horizontal variations in resistivity of the underlying earth, which can be determined from the relationship between the components of the measured electric (E) and magnetic field (H) variations. The ratio of electric and magnetic field intensity (E/H), termed the impedance, is a characteristic measure of the EM properties of the subsurface medium, and constitutes the basic MT response function. The apparent resistivity, which is an average resistivity of the volume of the subsurface sounded by a particular frequency, is given in terms of the ratio of the electric-field strength to the magnetic-field strength as ρa = (1/(ωμ₀)) |E/H|².
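In code, the apparent-resistivity estimate is a one-liner. The sketch below assumes E is measured in V/m and H in A/m; the field values in the example are made up purely for illustration.

```python
import math

MU0 = 4.0 * math.pi * 1e-7  # magnetic permeability of free space (H/m)

def apparent_resistivity(e_v_per_m, h_a_per_m, f_hz):
    """Apparent resistivity rho_a = |E/H|**2 / (omega*mu0) at one frequency.
    The E/H ratio (impedance) may be complex; only its magnitude is used."""
    omega = 2.0 * math.pi * f_hz
    return abs(e_v_per_m / h_a_per_m) ** 2 / (omega * MU0)

# Illustrative (assumed) field amplitudes at 0.01 Hz:
print(apparent_resistivity(2.8e-6, 1e-3, 0.01))  # ~100 ohm-m
```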



Furthermore, the MT method is capable of establishing whether the electromagnetic fields are responding to subsurface rock bodies of effectively 1, 2, or 3 dimensions by computing skew and ‘tipper’, which are used to infer lateral variations in resistivity. If the measured resistivity response to the geology beneath an MT station truly is 1D or 2D, then the skew will be zero (if there is no noise in the data). Higher skews are an indication of the resistivity response to 3D structures. The ‘tipper’ is calculated from the vertical component of the magnetic field and is a measure of the ‘tipping’ of the magnetic field out of the horizontal plane. For 1D structures, the tipper will equal zero. The tipper responds primarily to vertical and sub-vertical structures.

In marine MT, the electric and magnetic fields are measured at seafloor receiver stations. The fields are small, and vary in strength over hours, days and weeks. Geophysicists exploiting MT for greater depths have to take measurements for many hours at each station in order to get a good signal to ensure high-quality data. To record low frequencies, say 0.001 Hz, or 1 cycle per 1,000 seconds, we need to record for about 17 minutes (1,000 s) to get a single sample of data. To get enough samples for a decent statistical average of the data we need to record for several hours.

Unfortunately, the MT method lacks the sensitivity required to resolve the details of thin resistive structures such as hydrocarbon reservoirs, but it is sensitive to the bulk (large-scale) background resistivity structure, and so is a natural complement to the CSEM method in many situations.

MT data can be acquired as part of CSEM surveys when the controlled source is inactive. The low-frequency and thereby deep-sensing nature of MT surveying makes the technique useful for mapping regional geology.

Figure 1.65: Magnetotelluric signals from lightning strikes. The electrical discharges from thunderstorms radiate powerful electromagnetic fields that propagate in a wave guide between the Earth’s surface and the ionosphere, with a small part of the energy penetrating into the Earth. Figure from Martyn Unsworth’s course notes (GEOP424, University of Alberta).

Figure 1.66: Below ~1 Hz, the MT source originates from the interaction of the solar wind with the Earth’s magnetic field; strong interactions are known as geomagnetic storms. The interactions create small variations in the magnetic field (dH/dt), which induce electric currents in the Earth. Because of the large conductivity contrast between space and the ionosphere, EM waves are bent to vertical incidence. These vertically incident waves impinge on the surface of the Earth. Part of the energy is reflected back and the remaining part, which may have frequencies down to 0.00001 Hz, penetrates deep into the Earth. It induces currents in the Earth called telluric (earth) currents, which tend to flow in the more conductive rocks, in turn producing a secondary magnetic field.

1.4.4  Induced Polarisation Geoelectric Method
While recording conventional resistivity measurements in 1920, Conrad Schlumberger noted that the potential difference between electrodes did not drop instantaneously to zero when the current was turned off. Instead, the potential difference dropped sharply at first, and then gradually decayed to zero after a given interval of time. The study of the decaying potential difference as a function of time is the study of induced polarisation (IP).

The IP method is extensively used in the search for disseminated metallic ores and, to a lesser extent, in ground water and geothermal exploration. More recently, it has been applied to hydrocarbon exploration.

There are two main mechanisms through which the IP effect can be detected (Kearey et al., 2002; Sharma, 1986). When an external current is applied, the electrical conduction in most rocks (other than ores) is electrolytic, by positive and negative ion transport through water in interconnecting pores. However, when metallic minerals such as pyrite or magnetite are present, blocking the pores, the ionic conduction is hindered by these mineral grains, in which the current flow becomes electronic. More specifically, negative ions reaching the mineral grain blockage will lose electrons and become neutral. The electrons pass through the mineral grain to the other side, where they combine with positive ions on the


opposite side of the blockage. Since the exchange of electrons to and from ions is a relatively slow process, there is a queue or accumulation of ions on each side of the grain, causing a build-up of charge – as in a battery when energised with an electric current. When the external current is turned off, ions slowly diffuse back to their original locations and cause a decaying voltage; the layer gradually discharges, but does not drop instantaneously to zero, as observed by Schlumberger. The form of the decay curve is used as a measure of the concentration of metallic minerals in the rock. This IP effect – known as electrode polarisation or overvoltage – is most pronounced when the minerals are disseminated throughout the host rock. The effect decreases with increasing porosity as alternative paths become available for the more efficient ionic conduction.

There is a second mechanism that produces IP, independent of any minerals in the rock, which is evident particularly in clay-bearing sediments. The surface of a clay particle has a net negative charge and will attract positive ions to it when an electric current flows past in the electrolyte present in the pores. The distribution of ions becomes polarised, and the current flow through pore throats is impeded. Negative and positive ions build up on either side of the blockage. When the current is turned off, the ions redistribute themselves to an equilibrium position over a finite period of time, causing a gradually decaying voltage, manifesting itself as an IP effect. This IP effect – known as membrane or electrolytic polarisation – is most pronounced in the presence of clay minerals where the pores are particularly small. The effect decreases with increasing salinity of the pore fluid. It also decreases with very high (>10%) clay content, due to few pores and low conductivity.

Figure 1.67: East-west resistivity transect through Mt. St. Helens and Mt. Adams showing the resolution possible with a broadband MT survey. Note the regional conductor in the lower crust and discrete conductors beneath each volcano. (From Hill et al., 2009.)

Figure 1.68: The IP phenomenon is a potential difference that sometimes exists shortly after the current in an electric survey is switched off at a time, say, t = 0. The measured potential difference, after an initial large drop from the steady-state value ΔU, decays gradually to zero. The chargeability is M = A/ΔU, where A is the area under the decay curve over the time interval t₁ to t₂. Pyrite has M = 13.4 ms over a time interval of 1 s, and magnetite M = 2.2 ms over the same interval. (Modified from Kearey et al., 2002.)

Figure 1.69: (a) Electrical conduction in most rocks is electrolytic, with ions (atoms with net positive or negative electrical charge) in the pore fluids being the dominant charge carriers. The positive and negative ions migrate to different points when an electric current flows. (b)-(c) IP mechanisms: electrode polarisation or overvoltage in the case of a conductive mineral grain blocking the pore (b), and membrane or electrolytic polarisation associated with clay (c). (Modified from Kearey et al., 2002.)

Two different survey methods are available to investigate IP effects. The first is time domain IP, in which the voltage decay over a specified time interval is recorded after the induced voltage is removed. To make the measurements, the current is switched on for a couple of seconds. This is sufficient to allow the charge to build up to a steady-state value, where the current and potential difference at the potential electrodes become constant. The current is then switched off. The most commonly measured parameter is chargeability, defined as the area beneath the decay curve over a certain time interval, normalised by the steady-state potential difference. Different minerals are distinguished by characteristic chargeabilities.
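As a sketch of the time-domain measurement, the chargeability can be computed numerically from a sampled decay curve. The synthetic exponential decay below is purely illustrative; only the definition M = A/ΔU is taken from the text.

```python
import numpy as np

def chargeability(t_s, v_decay_mv, v_steady_mv, t1, t2):
    """Chargeability M = A / V_c: area A under the decay curve between t1 and
    t2 (trapezoidal rule), normalised by the steady-state voltage V_c."""
    mask = (t_s >= t1) & (t_s <= t2)
    return np.trapz(v_decay_mv[mask], t_s[mask]) / v_steady_mv

t = np.linspace(0.0, 2.0, 2001)   # time after current switch-off (s)
v = 50.0 * np.exp(-t / 0.3)       # synthetic decay (mV), steady state 500 mV
print(chargeability(t, v, 500.0, 0.0, 1.0))  # ~0.029 s, i.e. ~29 ms
```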

The second method is frequency domain IP, using alternating currents (AC) to induce electric charges in the subsurface, where the apparent resistivity is measured at two or more AC frequencies.

IP depends on a small amount of electrical charge being stored in the exploration target when a current is passed through it, which can be measured when the current is switched off. A material is classified as ‘dielectric’ if it has the ability to store energy when an external electric field is applied, so IP is in effect a measurement of the dielectric properties of formations. The dielectric relaxation response is usually expressed in terms of the complex resistivity ρ(ω) of the rock as a function of the applied field’s frequency ω. Pelton (1977) demonstrated that the Cole-Cole relaxation model (Cole and Cole, 1941) represents the typical complex conductivity of polarised rock formations. In this model, the complex resistivity is given in terms of frequency as:

ρ(ω) = ρ₀ [1 − m (1 − 1/(1 + (iωτ)^c))].

The model depends on four fundamental parameters: the DC-resistivity ρ₀, the chargeability m of the rock, the mean relaxation (decay) time τ of the IP potential, and the exponent c. In general, in mineralised rocks, m and τ depend on the quantity of the polarisable elements and their size, respectively. The exponent c depends on the size distribution of the polarisable elements.

From IP measurements, the Cole-Cole parameters can be inverted. IP inversion is an ill-posed problem, and many combinations of IP parameters and resistivity can give the same data. Classic methods minimise a least-squares cost function to determine each of the parameters, with an estimate of the uncertainty associated with the values of the inverted parameters. Bayesian approaches have also been developed.

In recent years it has been proposed that IP measurements could be used as an indirect indication of potentially hydrocarbon-filled reservoirs. It has been suggested that significant changes in the IP response could be attributed to hydrocarbon reservoirs. Pyrite beneath the seabed is one among several explanations that have been proposed for generating IP anomalies. The mineral pyrite, or iron pyrite, also known as ‘fool’s gold’ due to its golden glittering, is an iron sulphide with the chemical formula FeS₂.

By using an active electric dipole source separated by several hundred metres, the electric voltage can be measured at separate receivers, as explained by Veeken et al., 2009. The acquisition setup for the marine case is somewhat similar to seismic, with two powerful electrodes that emit currents of typically 500A, separated by several hundred metres, used to transmit the electric signal into the subsurface. Normally the sources are towed at a depth of 1–3m. The receiver array is towed approximately 1–3 km behind the vessel, and the vessel speed is adjusted so that gas bubbles in the water close to the receiver array are avoided. By analysing the received signals, the decay properties, and the amplitude and phase information, it is possible to extract the induced polarisation signal and map the chargeability of the near subsurface.

Figure 1.70: Pyrite’s name comes from the Greek, pyrites lithos, ‘the stone which strikes fire.’ Known as ‘fool’s gold’, it glitters brighter than true gold. The expression is said to refer to Queen Elizabeth I of England. During colonial times, groups of merchant adventurers in London were encouraged by the Queen’s government to look for gold in the Americas. Many of the American colonies were founded in the pursuit of finding gold. Some British explorers wanted to establish a new colony on the coast of Labrador. However, they were denied funding because they had found no gold there. Not to be deterred, they collected and sent back pyrite samples as proof of a gold discovery. The Queen is said to have taken a brief look at the samples before she then approved the building of the colony. It took a long time before the trick was discovered. In the section on source rock mapping (Section 1.7.1.4) the significance of pyrite for oil-finders is discussed further. (Photo: JJ Harrison/Wikipedia.)
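The Cole-Cole model above is straightforward to evaluate. A minimal sketch, with parameter values chosen only for illustration:

```python
import numpy as np

def cole_cole(omega, rho0, m, tau, c):
    """Complex resistivity rho(omega) = rho0*(1 - m*(1 - 1/(1 + (i*omega*tau)**c)))."""
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

omega = 2.0 * np.pi * np.array([0.01, 0.1, 1.0, 10.0])  # rad/s
rho = cole_cole(omega, rho0=100.0, m=0.3, tau=1.0, c=0.5)
print(np.abs(rho))    # amplitude falls from ~100 towards rho0*(1 - m) = 70
print(np.angle(rho))  # the negative phase shift carries the IP information
```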

Figure 1.71: Acquisition of marine IP-data: A and B are the source electrodes and M1, M2 and M3 are receiver arrays (a) and corresponding currents (b). (Figure from Veeken et al., 2009.)


1.5  Gravimetry and Magnetometry
Gravimetry and magnetometry are potential field methods with a long history of use in the oil and gas industry, dating back to the 1920s. They are even more common in bedrock and mineral resource mapping. With the rapid advances in seismic in the 1990s, the petroleum industry partially lost interest in these techniques. Today, with the availability of global maps, gravity and magnetics are enjoying a resurgence, and are typically used in frontier exploration areas, in the search for new basins, and to investigate large, potentially prospective areas before undertaking more detailed and expensive seismic work. In addition, gravity and magnetic methods are extensively used in areas where seismic acquisition is difficult or impossible, such as inaccessible land areas and exploration underneath volcanic rocks.

Gravimetry and magnetometry are used to determine the geometry and depth of geological structures expressed by lateral changes in density and magnetic parameters. Common applications include distinguishing between crystalline crust and sediments and the mapping of crustal highs and basins, faults, igneous intrusions and basalts. In particular, gravity and magnetics are used in combination with seismic for exploring sub-salt plays. Seismic is effective for imaging the top of the salt but has difficulty imaging below it due to the high velocity of the salt. In this case, gravimetry integrated with seismic and electromagnetic data can be used to find a model that images the base of the salt.

Figure 1.72: Normalised gravity anomaly across a sphere.

1.5.1  Gravimetry
The major objective of gravimetry is to measure and map density variations in the subsurface. As such it is used to derive the shape and density of buried bodies and structures, like igneous intrusions, which form a high density contrast to the surrounding rocks. Gravity can also indicate the rock types that form and fill a sedimentary basin, based on their density contrasts.

Gravity is a low-resolution exploration tool when compared to seismic. The resolution is highest near to the surface and decays rapidly with increasing depth. Without constraints, a gravity model will be coarse and will have an uncertainty in shape and density, meaning it can theoretically be modified in an infinite number of ways in order to match the observed gravity signal. Therefore seismic, magnetic, well log and geological data will need to be used to constrain gravity-derived models.

The simplest model to compute is the gravitational anomaly at the surface due to a buried homogeneous sphere of radius R and density ρ₁ in a layer of density ρ₀; the density contrast is then Δρ = ρ₁ − ρ₀ and its anomalous mass is M = (4/3)πR³Δρ. The acceleration at the surface towards the centre of the sphere acts as if the whole mass of the sphere were concentrated into a point mass at the centre. The gravity anomaly at the surface is defined as the vertical component of the attraction. Introducing the notation in Figure 1.72, the gravity anomaly produced by the sphere with centre at depth z becomes a function of horizontal distance x:

Δg(x) = GMz/(x² + z²)^(3/2),

where G is the gravitational constant; the maximum value, Δg_max = GM/z², is reached directly above the centre of the sphere, at x = 0.

It should be noted that even if the geophysicist knows that the cause of an anomaly is a spherical body, its radius and its density contrast to its surroundings cannot be separately determined from the measurement of the gravity anomaly alone. This is because the formula above depends on the product R³Δρ. All spheres with the same centre and having the same value for this product will produce the same gravity anomaly at the surface. In practice, more complex models are used to model and invert field gravity data. However, due to the non-uniqueness of gravity measurements, the geophysicist has to choose some plausible simple structure and adjust its parameters until its calculated gravity anomaly fits satisfactorily with the observed anomaly at all locations.
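The sphere formula is easily coded. The example below, with numbers of our own choosing, computes the anomaly of a salt sphere (negative density contrast) and illustrates the non-uniqueness just described.

```python
import math

G = 6.674e-11  # gravitational constant (m^3 kg^-1 s^-2)

def sphere_anomaly_mgal(x_m, z_m, radius_m, drho_kg_m3):
    """Vertical gravity anomaly of a buried sphere:
    dg(x) = G*M*z/(x**2 + z**2)**1.5, with M = (4/3)*pi*R**3*drho.
    Returned in mGal (1 mGal = 1e-5 m/s^2)."""
    mass = 4.0 / 3.0 * math.pi * radius_m ** 3 * drho_kg_m3
    return 1e5 * G * mass * z_m / (x_m ** 2 + z_m ** 2) ** 1.5

# Assumed salt sphere: R = 1,000 m, drho = -200 kg/m3, centre depth 3,000 m
for x in (0.0, 2000.0, 5000.0):
    print(x, round(sphere_anomaly_mgal(x, 3000.0, 1000.0, -200.0), 3))
# Doubling R while dividing drho by 8 leaves the anomaly unchanged (non-uniqueness)
```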


Gravity data are acquired by very accurate instruments placed either on the land surface or seabed, or on boats (seismic vessels), aeroplanes and satellites. These gravimeters measure the acceleration force of the earth’s gravity field at their location, in a unit called a milligal (mGal), which equals 10⁻⁵ m/s². The aim of gravimetric measurements is to obtain a geological interpretation of how the Earth’s crust is composed. The spatial change in gravity can also be measured, and this study is called Full Tensor Gravity (FTG) gradiometry.

We notice from the rock density figure (Figure 1.73) that the potential for discriminating between sand and shale (clay) is not huge using gravimetry. However, there is a significant difference in density between limestones, granite and basalt on the one hand, and sand and shale on the other. Therefore, a mass of relatively dense rocks near the surface of the earth can be detected by increased gravity values, a so-called positive gravity anomaly. Similarly, a mass of relatively light rocks, such as a salt dome, can be detected as a negative gravity anomaly.

Figure 1.73: Typical rock densities.

Prior to interpreting gravimetric data, several processing steps need to be taken. For instance, the radius of the Earth decreases as we go from the equator to the poles (the difference is approximately 21 km). This leads to a significant change in g (the gravitational acceleration on Earth) as a function of latitude. The acceleration caused by the Earth’s rotation also changes with latitude, and the actual shape of the Earth’s surface or terrain also needs to be corrected for prior to interpretation.

Figure 1.74 shows gravity anomaly maps of Norway and the oceans to the west. The left map shows anomalies where the terrain effect has been corrected for, assuming an average density of 2,670 kg/m³. The right map shows the same data after correction for isostasy effects. Isostasy refers to the equilibrium state between the Earth’s crust and the mantle, and isostasy correction means to correct for the effect caused by the fact that the mantle is denser than the crust. Under large mountains the interface between the crust and the mantle is generally deeper, and hence we need to correct for these depth variations if we want to estimate lateral changes in crust densities. The offshore data were acquired by ships. We notice that the variation in g onshore Norway is small. The gravity anomaly (prior to isostasy correction) is approximately -70 mGal onshore, and up to 70 mGal for some of the offshore areas. 1 mGal is equal to 10⁻⁵ m/s², meaning that the range of the colour bar in Figure 1.74 represents approximately 10⁻⁴ times g. Furthermore, we notice that the negative gravity anomalies observed over mainland Norway (Figure 1.74, left) have disappeared after isostasy correction, as expected.

Figure 1.74: Estimated gravity anomalies (blue) over Norway and the sea offshore Norway. (a): Terrain-corrected data (free-air anomalies are used for marine areas, and Bouguer anomalies on the mainland). (b): Isostasy-corrected data. (Data courtesy of the Geological Survey of Norway, NGU.)

Since the spatial resolution of gravimetry is limited, it is used in the early stages of hydrocarbon exploration to map basin structure and composition, here by using satellite and ship survey gravity data. High resolution gravity and FTG data can also be used for reservoir mapping, for instance in combination with seismics to map subtle hydrocarbon traps close to complex salt structures. For a good quality sandstone reservoir the porosity (the percentage of rock volume comprising open pores, which can be filled with gas, oil or water) is around 20–30%. If the pore fluid is changed from, for example, gas to water, this can lead to significant density changes which can be measured or monitored by repeated gravity measurements during production. This is called repeated gravimetry, or 4D gravity.
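The size of a 4D gravity signal can be sketched with the infinite-slab formula Δg = 2πGΔρh. The numbers below are illustrative assumptions in the spirit of the Sleipner example discussed next (CO2 at 700 kg/m³ replacing brine at 1,000 kg/m³ in an assumed 20 m thick zone of 30% porosity); they are not the measured values.

```python
import math

G = 6.674e-11  # gravitational constant (m^3 kg^-1 s^-2)

def slab_anomaly_ugal(drho_bulk_kg_m3, thickness_m):
    """Time-lapse gravity change of a thin horizontal slab:
    dg = 2*pi*G*drho*h, returned in microGal (1 uGal = 1e-8 m/s^2)."""
    return 1e8 * 2.0 * math.pi * G * drho_bulk_kg_m3 * thickness_m

# Assumed: fluid density change of -300 kg/m3 in 30% porosity -> -90 kg/m3 bulk
print(round(slab_anomaly_ugal(-90.0, 20.0)))  # about -75 microGal at the seabed
```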


Figure 1.75: Time-lapse gravity measurements before and after injection of CO2 in the Utsira formation at Sleipner (a) and time-lapse seismic difference data over the same area (b). There is a good correlation between the two time-lapse geophysical methods. (Statoil.)

Figure 1.75 shows how this can be used to monitor the storage of CO2 in sandstone layers in the subsurface. In the shallow subsurface near to the Sleipner field in the North Sea, Statoil has stored approximately 1 million tons of CO2 annually in the Utsira formation since 1996, and this can be observed both on repeat gravity data and seismic data. The Utsira sand is situated at 850m depth and is approximately 200m thick. The density of the injected CO2 is around 700 kg/m³, which is significantly lower than the water (1,000 kg/m³) that was originally in the pore space. The figure shows very accurate measurements of changes in density estimated from repeated gravity measurements at the seabed above the injection site at Sleipner. It can be clearly observed that the density has changed close to the injection point. This is also confirmed by the time-lapse seismic data shown in Figure 1.75 (b).

Both gravimeters and seismometers are based on the same geophysical principle – the displacement of a mass hanging on a spring, which moves in response either to seismic waves or to a change in gravity attraction. The main difference is that a seismometer is designed to measure relatively rapid movements of the Earth’s surface, while gravimeters measure over longer time windows.

When Apollo 11 landed on the moon, a seismometer was placed close to the Eagle, and the corresponding seismogram is shown below. Recording of seismic activity on the moon lasted for several years, and brought interesting information about the strength of moonquakes. The seismogram shown below includes the signatures of Armstrong, Aldrin and Collins, and the signal caused by Aldrin’s footstep is indicated by the arrow to the upper right. The seismogram recordings on the moon were made from 1969 to 1977, and the largest moonquake measured 5.5 on the Richter scale. Four types of moonquakes have been observed: deep quakes probably caused by tidal effects, quakes caused by meteorites, thermally induced quakes, and shallow quakes occurring 20–30 km below the moon’s surface, with the shallow quakes being the strongest and most harmful. It was found that moonquakes last longer than comparable earthquakes, typically more than 10 minutes.

Figure 1.76: First lunar seismogram, Apollo 11, 21 July 1969. Aldrin’s footfalls are shown by the arrow to the upper right. The signatures of Armstrong, Aldrin and Collins are between the first two arrows. The document was found in an Australian storage site, and shows clear indications of a coffee spill. (Figure provided by Yosio Nakamura, University of Texas at Austin.)


1.5.2  Magnetometry
In the same way that density varies for different rocks, there are also variations in the degree of magnetisation (characterised by magnetic susceptibility and remanence) between rock types. Susceptibility is a measure of the degree of magnetisation when a material is exposed to an external magnetic field (a dimensionless constant). Magnetic remanence denotes the residual magnetism in a rock after the external magnetic field is removed. The magnetised rocks cause a magnetic field which locally modifies the Earth’s magnetic field. For exploration purposes such data are commonly acquired from planes or helicopters by using magnetometers that measure the strength of the Earth’s magnetic field at the location, in units called gauss or nanoTesla (nTesla). Like gravimetry, magnetometry is often used to map sedimentary basins at an early stage in hydrocarbon exploration. It is primarily applied to detect variations in crystalline basement, depth and crustal rock type. If a large mass of magnetic rock occurs near the surface, it is detected by a larger magnetic force than the normal, regional value. However, magnetic fields are dipole fields and as such more complex than gravity fields.

Figure 1.77: (a) Illustration of how the mid-Atlantic rift zone might have developed. (b) A world magnetic map showing a ringing pattern from Iceland extending all the way to Africa along the mid-Atlantic rift zone. Credit: Magnetic Anomaly Map of the World at 1:50 000 000 scale, 1st edition. Authors: Juha V. Korhonen, J. Derek Fairhead, Mohamed Hamoudi, Kumar Hemant, Vincent Lesur, Mioara Mandea, Stefan Maus, Michael Purucker, Dhananjay Ravat, Tatiana Sazonova and Erwan Thébault. © CGMW-CCGM 2007, Paris. The WDMAM (World Digital Magnetic Anomaly Map) is an international scientific project under the auspices of IAGA (International Association of Geomagnetism and Aeronomy) and CGMW (Commission for the Geological Map of the World), aiming to compile and make available magnetic anomalies caused by the Earth’s lithosphere, on continental and oceanic areas, in a comprehensive way, all over the world.



A very important scientific result of analysis of magnetic measurements was obtained at Cambridge University by Jan Hospers (later professor at NTNU) in 1950, when he gathered 22 rock samples from Iceland and measured the direction of the remanent magnetic field in them. Approximately half of the samples showed a direction opposite to the present direction of the Earth’s magnetic field. Based on his analysis he concluded that the Earth’s magnetic field has gone through multiple reversals, since neighbouring samples had opposite directions of the permanent magnetic field. This work was later an important contribution to the development of plate tectonics. Figure 1.77 (a) shows the development of the mid-Atlantic ridge, demonstrating how rocks have been magnetised by reversals of the earth’s magnetic field. Figure 1.77 (b) is a map of measured magnetic anomaly, showing a ringing pattern that is somewhat similar to the cartoon in (a).


Magnetometry is used for more detailed mapping, particularly near the surface, including in the fields of mineral exploration, archaeology, waste management and other environmental studies.

The magnetic field of the Earth is very different from the gravity field. Its strength is approximately double at the poles (60,000 nTesla) compared to the equator (30,000 nTesla), so it is common to describe the Earth’s magnetic field as a dipole. Exactly how the magnetic field is created is still a research topic, but there is general agreement that it is a product of circulating currents in the iron-rich outer core of the Earth.

The earth’s magnetic field is not quite parallel to its rotation axis; there is at present a difference of approximately 12 degrees. This means that the earth’s magnetic poles do not match the earth’s geographic north and south poles. The magnetic poles are not fixed, and the magnetic north pole is gradually changing its position, as can be seen in Figure 1.78. This movement is getting faster (note that the pole changed a lot between 1994 and 2001 when compared with previous periods). The strength of the Earth’s magnetic field is also decreasing; over the past 100 years, this decrease has been measured to be approximately 6%. If one extrapolates this trend, the earth’s magnetic dipole moment will be zero about 1,500 years from now. Geological observations suggest that such an extrapolation is not necessarily very realistic, since the earth has had many minima in the magnetic field without a corresponding reversal. The last reversal occurred approximately 780,000 years ago. By measuring the magnetisation of clay minerals from Roman pottery, it is estimated that the earth’s magnetic dipole moment was twice as strong in Roman times as it is today.

The area around the earth that makes up the dominant part of the earth’s magnetic field is called the magnetosphere. It protects us from charged particles from the sun, but not completely. Many high-energy particles from the sun are hardly influenced by the magnetic field, and therefore it is a common view that the radiation risk on Earth will not be affected greatly as the Earth’s magnetic field weakens over time. The most important shield against high-energy particles from space is the atmosphere, equivalent to a 4m-thick concrete wall.

Figure 1.78: Estimated locations of the north magnetic pole from 1831 to 2001. (Source: Wikipedia.)

Figure 1.79: Simplified sketch of Earth showing the inner and outer core, mantle, crust and atmosphere. Unlike a classic bar magnet, the matter governing Earth’s magnetic field moves around. Geophysicists are pretty sure that the reason Earth has a magnetic field is that its solid iron core is surrounded by a fluid ocean of hot, liquid metal. The flow of liquid iron in Earth’s core creates electric currents, which in turn create the magnetic field. Kelvinsong (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons.


1.6  Satellite Data

Figure 1.80: A satellite orbiting Earth. (NASA/Wikipedia.)

Figure 1.81: Estimated global sea level changes from 1993 to 2015 using InSAR data from various satellites. (Courtesy of University of Colorado, Boulder.)


Interferometric synthetic aperture radar (InSAR) is today commonly used to measure minor movements on the surface of the earth or on the oceans. Movements of the order of millimetres can be detected by using radar waves emitted from earth-orbiting satellites. One application is to measure changes in the ocean level over time. Since radar signals from satellites cannot penetrate deeply into the earth, the method is limited to studying movements of the earth’s surface (or the ocean surface). Areas without vegetation are easier to characterise than heavily vegetated ones. The University of Colorado, Boulder has been using such data to monitor average sea level changes for several decades now, and a recent illustration showing the results since 1980 is shown in Figure 1.81. For the time period 1990 to 2015 (25 years) the InSAR measurements give a total sea level change of approximately 8.3 cm, which is in agreement with the modelled sea level changes presented by the IPCC in 2000. Satellites used for this purpose orbit at a typical altitude of approximately 800 km above the earth.

Another application for this type of geophysical surveying is the monitoring of various types of geohazards, for instance to follow movements close to fractures or faults. Still another application is to monitor production of hydrocarbon fields or storage of CO2 in the subsurface. In Figure 1.83 the surface uplift caused by injection of CO2 into a porous sandstone layer at the In-Salah field in Algeria is shown. Red colours indicate an uplift of 20mm. The drawback with such surface monitoring measurements is that they cannot be directly linked to changes within a deeper reservoir. However, by coupling the surface strain changes to a geomechanical model, it is possible to obtain a better understanding of reservoir changes as well. This has been done for the CO2 example from In-Salah and, together with conventional time-lapse seismic, this has enabled a good understanding of subsurface changes caused by the CO2 injection. Several hydrocarbon reservoirs have also been monitored by satellites.

Figure 1.82: Modelled sea level changes using various climate models from 1990 to 2100. (Source: Climate Change 2001: The Scientific Basis, IPCC report, 2001.)

Figure 1.83: InSAR measurements of uplift caused by CO2 injection at the In-Salah field, Algeria. Wells KB-502 and KB-503 are CO2 injectors, while KB-5 is an appraisal well and KB-11 and KB-14 are producers. Blue colours indicate subsidence and yellow-red indicate uplift. (Ringrose et al., 2013.) The In-Salah Gas Joint Venture partners, BP, Statoil, and Sonatrach are acknowledged for permission to show this figure.


1.7  Rock Physics
Simply stated, the purpose of rock physics is to translate geophysical observables into reservoir properties. The focus for all exploration is to achieve a better understanding of the rock itself, and a crucial discipline within all exploration is therefore rock physics. It provides the links between the rock’s elastic properties, measured at the surface of the earth, within the borehole environment or in the laboratory, and the rock’s intrinsic properties such as mineralogy, porosity, pore shapes, pore fluids, pore pressures, permeability, viscosity, stresses and overall architecture like laminations and fractures. The elastic properties are Young’s modulus, shear modulus, bulk modulus, and Poisson’s ratio. Furthermore, rock physics is used to couple and connect the different geophysical disciplines based on interrelationships between their diagnostic physical properties (see the table below).

A basic element in rock physics involves the understanding of how grains are squeezed together, whether there is any kind of ‘glue’ or cement between them, and what the composition of the open space between the mineral grains is: water, oil, gas or a mixture of the three. Figure 1.84 shows thin sections from Gullfaks and Statfjord, two large oil fields offshore Norway. Both have sand reservoirs with porosities around 30%, and both fields are excellent oil producers. Both examples represent unconsolidated rocks, meaning that most of the grains are in direct contact, without any cement or glue between them. Rocks that are buried deeper are often more consolidated, due to the fact that both temperature and pressure increase with depth, meaning that the likelihood of the rock undergoing a chemical process that changes it is larger.

Geophysical Method(s)             Physical Properties
Seismic                           Acoustic velocities with anisotropies, density, attenuation
Electromagnetic (CSEM, MT)        Electric resistivity (or conductivity) with anisotropies
Gravimetry                        Density
Magnetometry                      Magnetic permeability
Geothermal                        Thermal conductivity
Altimetric methods (satellite)    Surface displacements

In Section 1.3.4 we gave the relationships between P-wave velocity (Vp), S-wave velocity (Vs), the bulk modulus (k) and the shear modulus (μ). The bulk and shear moduli tell us how easy it is to compress a rock sample and how easy it is to change the shape of the sample, respectively. From these equations we clearly observe two important issues: first, the seismic velocities increase with increasing moduli. Second, since both moduli are positive numbers, the P-wave velocity is always larger than the S-wave velocity.

Figure 1.84: Two thin section images from two fields offshore Norway: Gullfaks (left) and Statfjord (right). The red bar is 0.5 mm, the quartz grains are white, while open space (porosity) is shown in blue. Other minerals have different colours. (Figure from Duffaut and Landrø, Geophysics 2007.)

A major property that is closely linked to the bulk modulus of a rock sample is the porosity (the relative volume fraction of open pore space). The Hashin-Shtrikman upper and lower bounds are often used to show this strong link between bulk modulus (and also P-wave velocity) and porosity, as shown in Figure 1.85. Most rock samples follow the average curves shown in this figure, and hence we observe a general trend: the bulk modulus, and hence the P-wave velocity, decreases as the porosity increases. This is regarded as a first-order effect in rock physics.

Both for reservoir engineers and geophysicists it is important to understand as much of the microstructure of the rock as possible, and therefore rock physics and petrophysics are crucial disciplines and enablers in achieving this micro-cosmos perspective. Typical parameters that might be derived from rock physics models or measurements include seismic velocities (P and S), density, resistivity and anisotropic parameters. Last but not least, the effect of replacing the pore fluid from oil to water, for instance, can also be estimated using rock physics models. An example of P-wave velocity versus water saturation using a calibrated Gassmann (1951) model is shown in Figure 1.86. This model is widely used to calculate seismic velocity changes caused by different fluid saturations in reservoirs. This method is increasingly used for reservoir monitoring.
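A minimal sketch of Gassmann fluid substitution, with assumed (illustrative) moduli and densities rather than calibrated field values, shows the effect pictured in Figure 1.86: replacing oil with water stiffens the rock and raises the P-wave velocity. The shear modulus is unaffected by the fluid.

```python
import math

def gassmann_k_sat(k_dry, k_min, k_fl, phi):
    """Gassmann (1951) saturated bulk modulus from the dry-rock frame modulus."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

def vp(k, mu, rho):
    """P-wave velocity Vp = sqrt((k + 4*mu/3)/rho)."""
    return math.sqrt((k + 4.0 * mu / 3.0) / rho)

# Assumed sandstone: k_dry = 12 GPa, mu = 10 GPa, quartz k = 37 GPa,
# phi = 0.25, grain density 2,650 kg/m3; oil and brine properties also assumed.
phi, k_min, rho_grain, mu = 0.25, 37e9, 2650.0, 10e9
for fluid, k_fl, rho_fl in (("oil", 1.0e9, 800.0), ("brine", 2.5e9, 1000.0)):
    k_sat = gassmann_k_sat(12e9, k_min, k_fl, phi)
    rho = (1.0 - phi) * rho_grain + phi * rho_fl
    print(fluid, round(vp(k_sat, mu, rho)))  # oil ~3518 m/s, brine ~3626 m/s
```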


If the fluid pore pressure in a rock is increasing, it will try to push the grains further apart, leading to a somewhat looser contact between them. This leads to a decrease in both P- and S-wave velocities for the rock. It is practical to use the effective or net pressure to model the pressure dependency of a rock. The effective pressure is simply the overburden pressure (the weight of the rocks above) minus the pore pressure. If the effective pressure approaches zero, the rock will start to fracture; the corresponding pore pressure is referred to as the fracking pressure. In Figure 1.87 two slightly different curves using the Hertz-Mindlin model are shown. This model predicts that the P- and S-wave velocities increase as the effective pressure raised to the power 1/6 (V ∝ P_eff^(1/6)). However, it is often found from laboratory measurements that the exponent might be less than 1/6.

When electromagnetic data are used, it is important to know the relation between the formation resistivity and the hydrocarbon saturation. One such example using Archie’s law is shown in Figure 1.88, where a distinct increase in resistivity is observed when the hydrocarbon saturation is above 60–80%. The modelled resistivity curve is based on Archie’s law, which in its original form reads:

σ = σw φ^m Sw^n,

where σw is the conductivity of the water, brine, or fluid filling the pores, φ is the porosity, Sw is the water saturation, m is the cementation or form factor, and n is the saturation exponent. This equation is designed for clean sands. In the presence of clay minerals, which are good conductors, the law has been modified.

This means that it is possible to discriminate between a water-filled and an oil-filled rock by, for instance, measuring the P-wave velocity and/or the resistivity of the rock. The challenge, however, is to perform such measurements prior to drilling a well. We need efficient ways to map and measure important parameters that can reveal the inner secrets of subsurface rocks from a large distance – typically, 2–5 km below the surface. This is possible today, and it is indeed one of the main focal points of this book: to illustrate recent advances, possibilities and limitations in mapping and characterising the subsurface prior to drilling wells. An introduction to rock physics and references for a more advanced understanding are contained in Avseth (2015).
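Written in resistivity form (resistivity is the reciprocal of conductivity), Archie’s law reproduces the behaviour in Figure 1.88. The parameter values below are typical textbook choices, not calibrated values:

```python
def archie_resistivity(rho_w, phi, s_w, m=2.0, n=2.0):
    """Archie's law in resistivity form: rho = rho_w * phi**(-m) * s_w**(-n)."""
    return rho_w * phi ** (-m) * s_w ** (-n)

# Resistivity rises steeply once hydrocarbon saturation exceeds ~60-80%:
for s_hc in (0.0, 0.4, 0.6, 0.8, 0.95):
    rho = archie_resistivity(rho_w=0.3, phi=0.25, s_w=1.0 - s_hc)
    print(f"S_hc = {s_hc:.2f}  rho = {rho:8.1f} ohm-m")
```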

Figure 1.85: Upper and lower bounds for the relation between bulk modulus (k) and porosity. Most rock samples are close to the average curves shown as dashed lines. (Avseth et al., Geophysics 2010.)

1.7.1  Rock Physics Examples
We present four examples of the use of rock physics. Rock physics can be used to develop relationships between seismic velocity and electrical resistivity in shales and sandstones. The key to this is porosity, since both resistivity and velocity are functions of porosity. Also, porosity is a common parameter controlling velocity, density and heat conductivity. Therefore, rock physics can be used to develop relationships between seismic velocities and heat conductivity.

Figure 1.86: Gassmann model: the seismic P-wave velocity increases if the pore fluid is gradually changed from oil (left) to water (right), depending upon the porosity and the stiffness of the rock.

Figure 1.87: Typical change in P-wave velocity versus effective pressure (overburden minus pore pressure) using the Hertz-Mindlin model. Here it is assumed that the effective pressure is initially 6 MPa. The black line corresponds to a Mindlin exponent of 1/6 and the red line to an exponent of 1/10.

Figure 1.88: Resistivity increases as the hydrocarbon saturation is increasing.


Finally, we show that an understanding of rock physics has led to the discovery of how to map source rocks from seismic.

1.7.1.1  Seismic Velocity and Porosity
The control that porosity exercises on acoustic velocity was already reported in the 1950s (Biot, 1956; Gassmann, 1951). Since then, numerous studies have been conducted in order to determine a relationship between seismic velocity (V) and porosity (φ) for a variety of sediments and rock types. One of the earliest and most widely used transforms (Wyllie et al., 1956) is:

1/V = φ/V_fl + (1 − φ)/V_ma.

Here, V_fl is the velocity of the fluid in the pores and V_ma is the velocity of the matrix, or the solid grains constituting the porous rock. Wyllie changed this equation into transit times (which are what is actually measured):

Δt = φ Δt_fl + (1 − φ) Δt_ma,

which is referred to as the Wyllie time-average equation. It is mostly used for consolidated rocks, while for unconsolidated rocks a correction factor is applied, or empirical relations are used. Han (1986) presented empirical relations for sandstones with various degrees of shale content.
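The time-average equation is trivially inverted for porosity. The fluid and matrix velocities in the sketch below are assumed values roughly representative of brine and quartz:

```python
def wyllie_porosity(v, v_fl=1500.0, v_ma=5500.0):
    """Porosity from the Wyllie time-average equation
    1/V = phi/V_fl + (1 - phi)/V_ma, solved for phi."""
    return (1.0 / v - 1.0 / v_ma) / (1.0 / v_fl - 1.0 / v_ma)

print(round(wyllie_porosity(3500.0), 2))  # ~0.21 for the assumed end members
```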

1.7.1.2  Seismic Velocity and Electric Resistivity
It is of interest to relate seismic velocity and electrical conductivity. This is important because if one property, e.g. seismic velocity, is measured, the electrical conductivity can be predicted by using a cross-property relation (see, e.g., Carcione et al., 2007). A common property between seismic data and electromagnetic data is porosity, which can be used to derive electrical conductivity/seismic velocity relations for overburden shales and the reservoir sandstones. Assume that the conductivity and velocity have the form σ = f(φ) and V = g(φ); then the cross-property relation is given by σ = f(g⁻¹(V)), obtained by eliminating the porosity. A simplistic example, which illustrates the principle, is to combine Archie’s law with the time-average equation. First, the porosity is derived from the time-average equation, and then the porosity is substituted into Archie’s law. For a fully water-saturated rock this gives:

σ = σw [ V_fl (V_ma − V) / (V (V_ma − V_fl)) ]^m.
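Chaining the two relations gives a velocity-to-conductivity transform, here as a sketch for a fully water-saturated clean sand; the end-member velocities, brine conductivity and cementation factor are assumed values:

```python
def conductivity_from_velocity(v, sigma_w, v_fl=1500.0, v_ma=5500.0, m=2.0):
    """Cross-property relation: porosity from the time-average equation,
    substituted into Archie's law sigma = sigma_w * phi**m (water-saturated)."""
    phi = (1.0 / v - 1.0 / v_ma) / (1.0 / v_fl - 1.0 / v_ma)
    return sigma_w * phi ** m

print(conductivity_from_velocity(3500.0, sigma_w=3.3))  # ~0.15 S/m, illustrative
```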

The reader may consult Carcione et al. (2007) to find more advanced models.

1.7.1.3  Seismic Velocity and Thermal Conductivity
Generally, temperature increases with depth in the earth. The rate of increase, the temperature gradient measured in boreholes, is typically 25–30°C per kilometre, a rate restricted to the upper part of the crust. If the gradient continued at this rate from the surface, the temperature would be greater than 2,000°C within the lithosphere, well above the melting point of rocks. Since the crust and upper mantle are solid and brittle, this gradient cannot extend to these depths, and it is probably about 1°C/km at these greater depths. Recent studies indicate that the temperature is about 4,800°C at the base of the lower mantle and about 7,000°C in the inner core. The temperature is controlled by how much heat is coming


out of the hot interior, just as heat comes out of a hot oven. In addition, the heat depends on how much heat is produced by radioactive processes. Heat tends to move from higher to lower temperature locations.

The spatial distribution of subsurface temperature through geological and present time is of major importance for the petroleum prospectivity of a sedimentary basin. The thermal history controls the maturation of source rocks, as it has been established that crude oil generation occurs between approximately 60°C and 120°C, and gas generation between 120°C and 225°C. In addition, temperature influences the quality of reservoir rocks, in particular sandstones. The porosity loss is due to compaction, and to later quartz cementation which is related to time and temperature (onset at approximately 70°C).

In the subsurface, temperatures cannot be directly measured, apart from in wells, so to predict temperature, numerical thermal basin models have been developed (Allen and Allen, 2005). The temperature in these models is a function of boundary conditions, including surface temperature and basal heat flow, and thermal properties such as thermal conductivity and radiogenic heat production. Thermal conductivity is the property of a material’s ability to conduct heat. In frontier areas, where one has mainly seismic information, and few, if any, wells, rock physics can contribute to heat and temperature prediction.

In a landmark study from 1982 that led to the discovery of the Norwegian Tyrihans field, Conoco showed that the geothermal gradient (dT/dz) or heat flow (q) could be estimated from seismic interval velocity (V). Heat flow is the amount of heat that flows out of the earth in a unit of time; it describes the rock’s ability to transport thermal energy by conduction. Several physical parameters affect the thermal conductivity (k) of rocks, of which the most important are porosity, density and fluid saturation. The factors that affect the thermal conductivity of the rock similarly affect the velocity (V) of seismic waves in the rock. By using the knowledge that thermal conductivity has a linear relationship to velocity for clastic rocks, k = aV, or more generally k = aV + b, Conoco suggested a way to solve the heat flow equation (Fourier’s law),

q = k dT/dz,

for either heat flow (q) or geothermal gradient (dT/dz), depending on which is known (Leadholm et al., 1985). The equation straightforwardly leads to the calculation of q given the rock physics relation k(V) and dT/dz. Assuming that q is given and constant over time (steady-state approximation), it is straightforward to find dT/dz = q/k; furthermore, when the surface temperature T₀ is known, the present-day temperature with depth is:

T(z) = T₀ + q ∫₀^z dz′/k(z′).

The reason that thermal conductivity and seismic velocity are nearly linearly related is that they are both highly sensitive to porosity. For clastic rocks, Duffaut et al. (2018) presented a simple linear relationship between thermal conductivity and seismic interval velocity that also accounts for lithological variability. For carbonates, Ozkahraman and Isik (2003) found a nonlinear relationship.
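A sketch of the steady-state calculation: with an assumed linear relation k = aV + b (the coefficients below are illustrative, not a published calibration), Fourier’s law is integrated downward from the surface temperature:

```python
import numpy as np

def temperature_profile(z_m, v_m_s, q_w_m2, t0_c, a=3.0e-4, b=0.7):
    """T(z) = T0 + q * integral of dz/k(z), with k = a*V + b in W/(m K)
    (assumed coefficients). z_m: depths (m, increasing); v_m_s: interval
    velocity of the layer ending at each depth; q_w_m2: basal heat flow."""
    k = a * np.asarray(v_m_s) + b
    dz = np.diff(z_m, prepend=0.0)
    return t0_c + q_w_m2 * np.cumsum(dz / k)

z = np.array([500.0, 1500.0, 2500.0, 3500.0])
v = np.array([1800.0, 2500.0, 3200.0, 4000.0])
print(temperature_profile(z, v, q_w_m2=0.05, t0_c=5.0))
# ~25, 60, 90, 116 C: oil-window temperatures reached at 2.5-3.5 km depth
```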

Figure 1.89: (a) Velocities from seismic (black curve) and well (red curve). (b)-(d) Comparison of thermal properties (thermal conductivity, thermal gradient and temperature vs depth) computed from seismic velocity (black curves) and borehole data (red curves). The green circles are static BHT measurements. (Statoil/Duffaut et al., 2018.)

For thermal modelling in frontier settings, Hokstad et al. (2014) presented an integrated approach where emphasis is put on the use of geophysical observations, typically 2D regional

For thermal modelling in frontier settings, Hokstad et al. (2014) presented an integrated approach where emphasis is put on the use of geophysical observations, typically 2D regional seismic lines and stacking velocities, gravity and magnetics, and preferably one or more wells for local calibration. They use the fact that geophysical parameters like seismic velocity, density and heat conductivity are controlled by a few underlying parameters, the most important being porosity. This allows them to establish rock-physics relations between heat conductivity, seismic interval velocity and density. Hence, using seismic velocity fields, they assess the lateral variation of heat conductivity along seismic lines, and compute present-day temperature and maximum paleo-temperature. Furthermore, Hokstad and co-workers suggest the extrapolation of seismic velocity and heat conductivity backward through geological time, by an extension of standard geological back-stripping methodology. Porosity versus depth trends for decompaction can be computed from seismic velocities, gravimetric data or a combination of both, using the rock physics relations mentioned above. Finally, given basal heat flow and conductivity through time, they forward-model thermal history by a numerical solution to the time-dependent diffusion equation.

This methodology can be demonstrated with an example from the North Sea. Figure 1.89 shows a comparison of thermal properties (thermal conductivity, thermal gradient and temperature) computed from well logs, and corresponding properties computed from a seismic velocity function extracted from a velocity model at the well location. The computed temperatures are in agreement with static bottom-hole temperature (BHT) measurements.
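The last step, forward modelling of thermal history, can be illustrated with a schematic sketch (not the implementation of Hokstad et al.): the 1D time-dependent heat diffusion equation ∂T/∂t = κ ∂²T/∂z², solved by an explicit finite-difference scheme with a fixed surface temperature at the top and a constant basal heat-flow gradient q/k at the base. All parameter values are assumptions for illustration only.

    import numpy as np

    kappa = 1.0e-6             # thermal diffusivity, m^2/s (assumed constant)
    q, k = 0.06, 2.5           # basal heat flow, W/m^2, and conductivity, W/(m K)
    dz = 100.0                 # grid spacing, m
    nz = 51                    # a 5 km sediment column
    dt = 0.4 * dz**2 / kappa   # time step inside the explicit stability limit
    T = np.full(nz, 5.0)       # initial temperature, deg C

    steps = int(1.0e6 * 3.15e7 / dt)   # about one million years
    for _ in range(steps):
        T[1:-1] += kappa * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        T[0] = 5.0                     # fixed surface temperature
        T[-1] = T[-2] + (q / k) * dz   # basal heat-flow boundary condition
    print(T[::10])                     # temperature at every kilometre

In a real basin model the diffusivity, conductivity and boundary conditions would of course vary with depth and through geological time, as reconstructed by the back-stripping described above.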

1.7.1.4  Source Rock Mapping from Seismic and/or CSEM and MT There is always one magic ingredient that must be present for conventional oil to exist: the source rock. A source rock is any rock that has generated or generates hydrocarbons. Most

source rocks are grey or black shales that, when originally formed, were full of organic material like algae or marine planktonic organisms and bacteria, which generate oil when heated. The higher the percentage of organic material in the rock, the more oil it is capable of forming. At substantially higher temperatures, these rocks will eventually generate gas. Geologists assess the quantity of organic material in source rocks in terms of the total organic carbon (TOC) content, which is measured by laboratory techniques. TOC is reported in weight percent (wt.%) carbon; for example, 1.0 wt.% carbon means that in 100 grams of rock sample there is one gram of organic carbon. TOC is useful as a qualitative measure of petroleum potential. Most geologists consider rocks with less than 1.0 wt.% TOC as organically lean and unlikely sources for commercial hydrocarbon accumulations. The average TOC of source rock shales is 2.2 wt.%, a much higher value than the minimum of 1.0 wt.%.

Traditionally, the oil hunter’s approach to identifying and quantifying these rocks has been to use well or seepage data, but well data can be sparse or non-existent in deep basins or frontier areas. Imagine, therefore, having a remote sensing tool that examines the subsurface and potentially maps the extent, thickness and richness of source rocks basin-wide, worldwide. The ‘Source Rock from Seismic’ technology originally developed by Statoil aims to do exactly this. A complete account of the method is published in Løseth et al. (2011). It is noted that elements of this principle were published in Carcione (2001), but were never put into practice at that time. This technology improves our ability to predict source rock presence and quality in frontier exploration regions, which is needed in order to identify new exploration terrain as well as to analyse existing terrain. Source rock properties extracted from seismic data can be used directly in basin and hydrocarbon generation modelling, thereby improving the ranking of petroleum basins and prospects.



‘Source Rock from Seismic’ technology is based on simple observations: thick, organically rich source rocks (TOC > 3–4%) have a strong negative seismic amplitude response at the top, at least in subsiding mud-rich basins like the North Sea and the Gulf of Mexico. The amplitude dims with source-receiver offset because of velocity anisotropy effects, giving what experts call a class 4 amplitude versus offset (AVO) response. Their acoustic impedance (AI) is inversely and nonlinearly correlated to total organic carbon (TOC) in shales. Thus, an AI inversion can be transformed to TOC%, given that the local relationship is known. When the source rock is very heterogeneous, AVO analysis will be more challenging.

In the future, CSEM may also be applied, provided that the EM field is able to penetrate sufficiently deeply to where the source rock is located. CSEM measures both horizontal and vertical subsurface resistivities and may be combined with seismic to determine the source rock properties. Increased TOC due to kerogen in mature source rocks may change the horizontal electrical resistivity of the rock. In addition, the vertical resistivity is sensitive to the source rock’s maturity, and as such may become a diagnostic technique for maturity. Magnetotellurics (MT) is another potential tool to map source rocks since MT, which has much deeper penetration than CSEM, reveals the large-scale distribution of the electrical resistivity in the subsurface. In organic-rich source rocks, the presence of conductive minerals such as pyrite and other conductive materials such as kerogen will affect the electrical resistivity of the rock. It is believed that highly mature, organic-rich rocks might be quite electrically conductive due

Figure 1.90: Seismic sections: the significant drop in acoustic impedance together with the clear dimming of amplitude with offset from near offset stack (A) to far offset stack (B) is a characteristic top source rock behaviour (i.e., AVO class 4 seismic response). Dark grey line shows position of well and red curve is gamma ray log. Source rock has high gamma ray readings, while sand has low. (Courtesy Statoil/Løseth et al., 2011.)

to formation water, clay, and pyrite and other mineral phases being present. The pyrite-sulphur content correlates directly with the TOC content of the rock, which, as discussed, is a measure of petroleum source rock richness.
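A minimal sketch of the seismic-to-TOC step discussed above might look as follows. The power-law form and its coefficients are hypothetical stand-ins for the local AI–TOC relationship that, as Løseth et al. (2011) stress, must be established from well data before the transform can be applied.

    import numpy as np

    def toc_from_ai(ai, a=100.0, n=1.5, toc_background=1.0):
        """Map inverted acoustic impedance (in 10^6 kg/m^2/s) to TOC (wt.%)
        with an assumed inverse power law: TOC = background + a * AI**(-n)."""
        return toc_background + a * ai ** (-n)

    ai = np.array([5.0, 6.0, 7.0, 8.0])  # inverted AI along a horizon (toy values)
    print(np.round(toc_from_ai(ai), 1))  # higher TOC where impedance is lower

The essential point is the inverse, nonlinear character of the mapping: the lower the acoustic impedance of the shale, the richer the source rock it predicts.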

Chapter 2

Elements of Seismic Surveying

Whoever is ignorant of the past remains forever a child. For what is the worth of human life, unless it is woven into the life of our ancestors by the records of history?


Marcus Tullius Cicero (106–43 BCE, Roman statesman, lawyer, scholar and writer)

An artist’s impression of a modern marine seismic survey seen from below. The seismic vessel is towing several long streamers. The two sources, fired in a flip-flop manner, consist of three subarrays each. A deflector (door) is visible on the right-hand side of the picture. Such a door is shown in more detail in Figure 1.27. (Image courtesy CGG.)


Figure 2.2: All marine seismic surveys use a source (S) to generate seismic sound and a sensor system. Here, ‘1’ illustrates cables or streamers filled with sensors, ‘2’ ocean-bottom sensors, ‘3’ an array of sensors buried in the seabed, and ‘4’ a vertical seismic array positioned in a well. In reflection seismology, subsurface layers and formations can be mapped by measuring the time it takes for the acoustic reflections to return to the sensors. The amplitudes and frequencies of the reflections also yield information about the Earth and its subsurface structures. Courtesy Jack Caldwell: OGP Report No. 448 An Overview of Marine Seismic Operations; www.ogp.org.uk.


Figure 2.1: A model of the seismoscope invented by the Chinese scientist Zhang Heng (78–139) in AD 132. The instrument was two metres in diameter, with metal balls balanced in each of the eight dragons’ mouths. Strong seismic tremors caused a ball to be released from the mouth of a dragon into the mouth of the frog below, sounding an alarm. The position of the ball falling indicated the direction of the shock wave. (Photo: Kowloonese/Wikipedia.)

2.1  A Brief History

Here we give a brief history of seismic surveying. Readers with an appetite for more on the history of seismic technologies are referred to Sheriff and Geldart (1995) and Lawyer et al. (2001). The remaining sections in this chapter address the link between seismic acquisition geometry and seismic imaging: we concentrate on imaging from ocean-bottom seismic surveys and on subsalt imaging developments in the Gulf of Mexico, and suggest new surveying geometries that could significantly improve seismic imaging.


The origin of the term ‘seismic’ is the Greek word ‘seismos’, which means shaking; in Greek, earthquakes were called ‘seismos tes ges’, literally ‘shaking of the Earth’. The first known use of the word seismic is from 1858, and the terms seismic and seismology came into use around the middle of the 19th century. In a broad sense, seismology is defined as the science of the study of earthquakes: their generation (source), the vibration and propagation of seismic waves inside the Earth, and the various types of instruments used to measure seismic motion.

The earliest known seismic instrument to detect earthquake motion was the seismoscope, invented by the Chinese scholar Zhang Heng as early as AD 132. The seismoscope was cylindrical in shape with eight dragon heads arranged around its upper circumference, each with a ball in its mouth. Below were eight frogs, each directly under a dragon’s head. When an earthquake occurred, its motion would cause a pendulum fastened to the base of the vase to swing. The pendulum would knock a ball, which dropped and was caught in a frog’s mouth, generating a sound. Which mouth it dropped into indicated the direction from which the earthquake came.

Exploration seismic deals with the use of man-generated acoustic or elastic waves to locate mineral resources, water, geothermal reservoirs and archaeological sites, as well as hydrocarbons. The methods of seismology have been applied and refined to serve these special purposes. This is especially true for seismic exploration for oil and gas, where technology has developed rapidly. Broadly speaking, the major elements of seismic exploration technology that enable geoscientists to ‘see’ underground many kilometres below the surface in order to explore, develop and produce hydrocarbons are:
1. Seismic acquisition technologies: a variety of survey techniques are available to address specific geophysical and geological objectives.
2. Seismic imaging technologies: complex algorithms are used to process and analyse the vast amounts of acquired data utilising massive computing power (supercomputers).
3. Visualisation and integration with other data.




2.1.1 1920–1980

During World War I, Ludger Mintrop had invented a portable seismograph for the German army to use to locate Allied artillery. In 1921 he founded the company Seismos to do geophysical exploration. Seismos was hired to conduct seismic exploration in Texas and Louisiana. One of the crews pinpointed the Orchard Dome in Texas in 1924, making this the first discovery of commercial quantities of oil using seismic. Large-scale marine seismic surveying did not appear until 1944, when Superior and Mobil began refraction shooting for salt domes offshore Louisiana with geophones planted on the sea bottom. The floating streamer was first used in 1949–50. In the marine environment, dynamite was used as source until 1965, when alternatives were developed, in particular the airgun seismic source. In the 1960s the digital revolution took analogue data to digital recording, followed by large-scale application of the computer in the processing and interpretation of seismic data.

Prior to the 1970s geophysicists believed that noise in the seismic recordings was so much stronger than the seismic signal that only structural information could be extracted. However, in the early 1970s, oil companies recognised ‘bright spots’ – strong seismic reflection amplitudes associated with hydrocarbons, predominantly gas. Furthermore, in the mid-1970s it was shown that depositional patterns could be interpreted in the seismic data. Some of what had previously been regarded as noise was actually geological signal!

In 1967, Exxon conducted the first 3D seismic survey in the Friendswood Field near Houston. In 1972, Geophysical Services International (GSI), supported by six oil companies, launched a major research project to evaluate 3D seismic at the Bell Lake field in south-eastern New Mexico. The first marine 3D survey was acquired by Exxon in 1974.

Figure 2.3: Illustration of the basic difference between the 2D and 3D seismic survey geometries. The dashed lines suggest subsurface structure contour lines. For 2D surveys, the spacing between adjacent sail lines will typically be 1 km or greater. The reflections from the subsurface are assumed to lie directly below the sail line, providing an image in two dimensions (horizontal and vertical) – hence the name ‘2D’. A subsurface picture of the geology thus has to be painstakingly reconstructed by interpreting and intelligently guessing what goes on in between the lines. 3D surveys in the beginning were acquired by towing one cable and making many parallel passes with short spacing through the survey area. Next, multiple streamers, separated by 25–200m, were used while shooting closely spaced lines. Because of this close spacing, it is possible to represent the data as 3D seismic cubes. Cubes can be viewed as they are or analysed in greater detail by computer-generating vertical, horizontal (time-slices), or inclined sections through them, as well as sections along interpreted horizons. (Courtesy Jack Caldwell: OGP Report No. 448, An Overview of Marine Seismic Operations; www.ogp.org.uk.)

Figure 2.4: With 3D seismic acquisition the geophysicist can image, in detail, the faults, layering and sedimentary structures that can trap hydrocarbons. (Ship image from B. Dragoset, 2005, The Leading Edge, 24:S46–S71, and Western Geophysical. 3D seismic data and analysis from F. Aminzadeh and P. de Groot, 2006, Neural Networks and Other Soft Computing Techniques with Applications in the Oil Industry, EAGE Book Series, ISBN 90–73781–50–7. Visualisation by J. Louie.)


2.1.2  The Early Days of North Sea Seismic Surveying

In April 1965, Norway announced the country’s first concession round for oil and gas exploration on the shelf: Norway’s oil age had started! The first exploration well was Esso’s well 8/3-1, drilled in 1966. The well confirmed reservoirs, cap rocks and potential source rock, but no hydrocarbons were found and it was abandoned as a dry hole. The second well, 25/11-1, drilled in 1966–67 by Esso, was located on the Utsira High in the northern North Sea. It encountered gas and live oil in early Eocene sands, and the first litres of oil from the Norwegian continental shelf were brought to the surface by a formation interval test. However, the reservoir rocks were too thin and the geology too complex, so the well was abandoned as a non-commercial oil discovery. Seismic data at the time was inadequate to investigate the lithology of this portion of the North Sea Basin. Five years later, in 1974, well 25/11-5 discovered the Balder Field when it encountered a 25m oil column in the Paleocene, demonstrating the potential for a commercial accumulation, but it took a total of 18 exploration and appraisal wells and two seismic 3D surveys (1979 and 1988) before Esso decided to develop the field. With the completion of the 10th well, 7/11-1, in June 1968, the first Norwegian commercial discovery was made – the Cod gas-condensate field – but further drilling disappointments led to a reduction in drilling in 1969, as by then companies had already drilled 33 dry holes and spent NOK 750 million on the Norwegian shelf. But then, in August 1969, Phillips started drilling well 2/4-1, which eventually led to the discovery of Ekofisk – the first billion-barrel oil field in Western Europe. This was also the first commercial discovery of oil in the whole of the North Sea, despite the drilling of more than 200 exploration wells.

Drilling for oil requires acquiring seismic, which was mainly done by specialist service companies like Geophysical Services International (GSI), Seismograph Service Ltd (SSL), Prakla, Compagnie Générale Géophysique (CGG), and Western Geophysical Co. In the North Sea the first commercial seismic surveys took place in 1962. GSI acquired data offshore eastern Great Britain and Scotland, whereas Prakla undertook the first ones on the German shelf. From 1963 to 1965 almost 20,000 km of seismic lines were shot over the Norwegian sector of the North Sea for the oil companies Shell, Phillips, BP, American Overseas Petroleum Limited (Norsk Caltex Oil), Superior Oil, Petronord (a joint venture between French oil companies and Norsk Hydro), and The Norwegian Oil Consortium (a group of Norwegian industrial companies). More than 50 vessels were involved. In the whole of the North Sea, it has been estimated that around 210,000 km of seismic lines were acquired during the same time span (Johansen, 2012).

Figure 2.5: Photos from a 1966 article by Birger Rasmussen (Institute of Marine Research) published in the journal Norsk Natur (Norwegian Society for the Conservation of Nature). (a) The recording vessel is to the left and the shooting vessel in the middle. The high spurt of the water column (typically 15–45m) is a result of the explosive being detonated at quite a shallow depth (1–1.5m). (b) The explosive was contained in a pail which hung from a plastic balloon with electric cord to the vessel. It was fired approximately one to two vessel lengths behind the ship. The explosives varied from 2–25 kg, and were detonated at shallow depths to avoid bubble pulses. When high-explosive dynamite and TNT were forbidden, due to the steep and high pressure front of the pressure wave, they were replaced by ammonium nitrate, which burns less rapidly and produces a smoother pressure front with smaller amplitude than dynamite.

Figure 2.6: Havbraut I sailing from Middlesbrough Dock in August 1966. She was completed in Flensburg in 1949 as a steam trawler. In 1942 she was bought by Austevoll Shipping and rigged as a purse seiner fishing boat. Her length was 157 feet and she had cabins for 20 men. Havbraut I can be considered the first cultural meeting place between American oil explorers and Norwegian seamen. Onboard, Norwegian waffles with brown cheese and coffee would be served. (Photo: Teesside branch of the World Ship Society – Albert Weller Slide Collection.)



Figure 2.7: The trawler MV Petrel, here called Malta, was one of the first Norwegian fishing boats to operate as a seismic vessel in the world. From 1967–71 she was operated by Shell International, The Hague. The geophysical systems set up by Shell during this period were sold to SSL in 1971 when SSL operated the vessel exclusively for Shell. (Photo: Michael Cassar.)

Figure 2.8: Dynamite ready for use as seismic source in 1966. The dynamite inside the yellow boxes was mounted on a balloon filled with compressed air. One of the deck crew let it go from the poop deck on command from the vessel’s operation room. The balloon floated in the sea until it was detonated (Figure 2.9) by the operator.
Figure 2.9: In two-boat operations with one source boat in front of the recording vessel, the people on the latter vessel could catch stunned cod for dinner using long bamboo rods equipped with a hook. (Photos: Kjell Laugtug.)

The effects of the explosives on fish and fisheries in the fish-rich North Sea were of considerable interest to fishermen and marine biologists. To avoid conflicts between seismic surveys and fishermen, the Norwegian and British authorities kept in close contact and issued instructions: seismic surveys needed permission from the authorities, and all seismic surveys had to be reported to the Norwegian Directorate of Fisheries. Rasmussen discussed at length the effects of underwater explosions on marine life in an unpublished article written in 1964.

The service companies, however, had insufficient seismic vessel capacity in the North Sea. In October 1964 GSI contacted the Norwegian shipbroker Mikkel R. Forland to find shipowners who could charter fishing boats for seismic exploration. Together with his wife Ellen M. Forland, he established a shipbroking firm and, in the mid-1960s, they were the first in Norway to operate vessels for the seismic industry. During the winter of 1964/65 several charter parties were established for the Norwegian exploration season. The first vessel to be prepared for oil exploration at GSI’s base in Middlesbrough, where she was equipped with GSI’s instruments, was the purse seiner Havbraut I.

Norwegian ship owners chartered a number of vessels to the seismic companies from 1965 onwards. The vessels operated in the North Sea, but some were sent to Canadian waters, Venezuela and West Africa; others to the Persian Gulf and the Red Sea. Some sailed even longer distances – all the way to Australia and South America. As the seismic companies asked for vessels that could operate year round and both shoot and record, towards the end of the 1960s ship owners remodelled some of their boats for this purpose. In 1973 the biggest investment to date was made by Ellen Forland when she assumed ownership of her first vessel with the conversion of the MV Ara into the MV Seis Mariner for SSL. In 1973, the Geophysical Company of Norway AS (Geco) was formed. Its first seismic vessels were the rebuilt trawlers Longva and Longva II. In 1978 Geco established its own shipping department and for a short period of time the company ran a large fleet of special purpose vessels.

During the 1970s, seismic technology developed rapidly,


Figure 2.10: In marine seismic exploration, cables called streamers are towed or ‘streamed’ behind a moving vessel. Inside the streamer are hydrophones that detect the pressure fluctuations in the water caused by reflected sound waves returning from the subsurface. The signals are converted into electric signals, which are digitised and transmitted along the streamer to a recording system on board the vessel. Instruments nicknamed ‘birds’ are used in streamer steering to control the depth and any lateral cable movement of streamers. It is important for the success of a seismic survey to know precisely the location of sources and receivers.


Figure 2.12: MV Seis Mariner in Immingham on the east coast of England in March 1981. The banner on the stern says ‘Two mile cable in tow’ (but not in the harbour, we believe!).

imposing special requirements both on the vessels and the crew. Drilling in the North Sea became more expensive, and the oil companies needed the best seismic data possible for de-risking prospects. As more and more exploration licences were offered, the ship owners saw that the seismic market was a way of making money. They invested in more advanced vessels, and around 1980 the first contracts were made for purpose-built seismic vessels.

2.1.3 1980–2000

In the early 1980s, most 3D surveys were acquired by towing one cable and making many parallel passes through the survey area. However, it soon became common to use two cables and two sources, with the benefit of shorter acquisition time and reduced cost. Capabilities steadily increased, and multiple streamers and multiple sources made 3D more effective. In 1991, Petroleum Geo-Services (PGS) was founded in Norway. The company had two seismic vessels, and its vision was to provide the most efficient acquisition of 3D marine seismic data. Its Ramform seismic vessel technology was introduced with the Ramform Explorer, which in 1995 became the first vessel to tow eight streamers, and in 1997 the first to tow 12. In the early nineties, Shell stated that they would no longer

Figure 2.13: The trawler Longva II was hired by Geco in 1975 and rigged for seismic acquisition. Later it was bought by Geco and renamed Geco Kappa. (Photo: www.skipet.no.)


Figure 2.11: MV Ara, a cargo ship, sailing along the coast of Norway in the 1950–60s. She was rebuilt as the Seis Mariner in 1973 (see Figure 2.12). In 1993 she was converted into a fish-processing barge under the name Cruz Del Sur, and caught fire and sank in Chile in 1995.

acquire 2D seismic data, and that they would use 3D for both exploration and production. During that decade, 3D surveys took off, helped by workstations that provided new ways to interpret and visualise seismic data in the 3D world.

4D seismic is 3D seismic repeated over time over the same oil or gas field in order to monitor production. 4D seismic started in the early 1980s, but only became commercial in the late 1990s. In the North Sea, 4D seismic was first investigated on a full field scale at the Gullfaks field in a joint Statoil–Schlumberger project around 1995. Detectable time-lapse signals were indeed measured, and soon proved to be of economic value in identifying drained and undrained areas (see Figure 1.51). Soon afterwards, 4D surveys were acquired over several Norwegian and UK fields, including Schiehallion, Foinaven, Draugen, Troll, Oseberg, Norne, Statfjord, Forties and Gannet.

In the early 1990s, Berg (see Berg et al., 1994) at Statoil led the development of the SUMIC (SUbsea seisMIC) system, whereby both shear and pressure waves were recorded by sensors implanted in the seabed. The 1993 pilot survey over the Tommeliten structure in the North Sea demonstrated that the SUMIC technique could successfully image subsurface structures through and below gas chimneys by the use of shear waves (Figures 2.14 and 2.19). Shortly after the introduction of the SUMIC technique, it was discovered that ocean-bottom surveying (OBS) was useful for detailed seismic imaging of complex geology due to sampling of all azimuths, analogous to taking photographs of an object from all directions to image all sides of it.

To use a football analogy: the problem of collecting seismic data is like attending a football match. Your view of the game depends not only on the lighting system of the stadium but also on where you are sitting. For example, a journalist may prefer to be in the stands where he or she will have a good view of the entire game, which is necessary for analysing and reporting all of the moves and tactics. A photographer, however, may prefer to be near the touchline where he or she can immortalise the goals, even at the expense of not seeing the rest of the game. The ticket prices for these special positions may be more than that of a standard seat, but the extra cost will pay off handsomely. As in football matches, the view of the subsurface is determined by the location of the sound sources for ‘illuminating’ the area of interest. Equally important is


the location and types of sensors used to capture the ground motion caused by the passage of seismic waves. Standard 3D towed-streamer seismic surveys may, in fact, be unsuitable for obtaining the very best reservoir images, especially in geologically complex areas.

In the ‘old days’ acquisition wisdom suggested that it was preferable to shoot a survey in the dip direction of the main structural trend. Several examples other than OBS, however, showed that a new seismic survey in a different orientation improved the image in some areas of complex structure, while the image in other areas was better in the original data. These reports suggested to the geophysicist that the survey direction is an important part of the imaging equation. Ideally, it may be necessary to shoot from ‘all’ directions.

In the late 1980s, Texaco attempted to develop vertical hydrophone cables for full-azimuth illumination of subsalt targets in the Gulf of Mexico. The vertical cable method did not survive, maybe because the horizontal separation between cables was too large – a cost issue. In 1996 Elf carried out a test in Gabon in which it acquired four surveys at different azimuths across a salt body. The result of this experiment showed the incomplete but complementary information extracted from the various acquisition directions. Between 1997 and 1999 an extensive research programme led by Statoil concluded that detailed structural imaging of complex geology requires the acquisition of high-fold seismic data from all directions (full-azimuth, or FAZ). Careful planning of the 1997 3D Statfjord field ocean-bottom seismic (OBS) cable survey offshore Norway rendered possible an evaluation of image quality versus acquisition geometry by emulating both OBS and streamer surveys. Since 2000, Statoil has acquired high-fold, FAZ-OBS data over most of its fields offshore Norway. Nearly all surveys have given superior image quality compared to conventional streamer seismic (see Figure 1.50).

Figure 2.14: SUMIC nodes ready for deployment with the help of ROVs over the Tommeliten field in 1993. (Photo: Åsmund Sjøen Pedersen.)

Figure 2.15: Seismic acquisition 1980–2000: a few breakthroughs. The timeline runs from a single source and one 3-km streamer in 1980, via two sources and two streamers (around 1985) and two sources and four streamers (around 1990), to the milestone surveys: Tommeliten SUMIC 4C (1993), Gullfaks 4D (1995), Gabon MAZ (1996) and Statfjord 4C-OBS (1997).

Figure 2.16: Illustration of a permanent seismic installation at the Valhall Field. The receiver system is trenched into the seafloor and tied back to the platform for power, control and data transmission. (Image courtesy BP, van Gestel et al. 2008 TLE 27 1616–1621; OGP Report No. 448 An Overview of Marine Seismic Operations; www. ogp.org.uk)

2.1.4 2000–2015

Since OBS surveys over large areas (200 km² or more) were prohibitively expensive at that time, E&P companies started to investigate whether new streamer configurations could provide a cheaper alternative. Two well-documented surveys are those acquired over the North Sea Varg and Hild fields. The Varg reservoir has a combination of relatively thin sands and complex faulting due to salt-related tectonics, making the reservoir challenging to interpret and produce. PGS created a new image of the Varg field in 2002 by making passes in two different directions with one conventional cable vessel and combining the data with existing data acquired in a third direction. The survey demonstrated the benefits of multi-azimuth (MAZ) surveying on the seismic image, in particular by improving the signal-to-noise ratio.

The Total-operated Hild discovery, located along the border of the UK and Norway, turned out to be difficult to interpret due to strong reservoir compartmentalisation and the presence of gas in the overlying Cretaceous series. As the optimal imaging solution, ocean-bottom cable seismic, proved too expensive, a streamer-vessel multi-azimuth solution was chosen. In 2003 the Hild structure was covered by two surveys designed so that the two datasets, together with the existing survey from 1991, made an angle of 60 degrees with respect to each other. The combined surveys resulted in an improved seismic image with better continuity of the reflectors in the reservoir section. The poor signal-to-noise ratio due to the gas cloud was partially compensated for.

In the Gulf of Mexico, from 1998–2002 BHP, BP, Chevron and Texaco participated in the SMAART joint venture that addressed routes to full-azimuth marine seismic acquisition.

Modelling studies showed that all source-receiver azimuths were needed for the best possible illumination below complex salt. Early wide-azimuth survey developments in the Gulf of Mexico will be discussed in Section 5. The examples above demonstrate that seismic illumination and imaging of complex structures clearly depend on the acquisition geometry. High-fold and multi-azimuth seismic is key to the high-quality, detailed seismic imaging required to improve our understanding of complex fields.

OBS seismic systems were transformed into permanent seismic installations by trenching cables into the first metre of the seabed. They are often referred to as Life of Field Seismic (LoFS) or Permanent Reservoir Monitoring (PRM) systems, as they can be used for 4D measurements at short time intervals. The first version of a permanent seafloor seismic array was installed by BP as the Foinaven Active Reservoir Management (FARM) system in 1995, west of Shetland on the UK continental shelf. Later, during the summer of 2003, BP trenched 120 km of seismic cables into the seafloor covering an area of 45 km² above the Valhall field. In 2010, ConocoPhillips installed a recording system for permanent seismic monitoring at the Ekofisk Field, and in 2014, Statoil installed two PRM systems at their Snorre and Grane fields. In 2015, the Norwegian government decided that the recently discovered Johan Sverdrup field should also use a PRM system for monitoring.


2.2  4C – Ocean-Bottom Surveys

Do not seek to follow in the footsteps of the men of old; seek what they sought.

Matsuo Basho (1644–1694, Japanese poet)

The arguments in favour of ocean-bottom seismic (OBS) surveying are well known. OBS produces superior seismic images compared to those from conventional 3D streamer seismic. It offers the prospect of combined interpretation of pressure and shear waves, full illumination and high multiplicity of signals from the same subsurface points (high data fold). The history of OBS to date has not been especially happy, partly due to the high cost, and partly due to a lack of interest in shear wave exploitation. But it seems that the OBS market is expanding, driven by the need for higher quality data to make better reservoir development decisions. This growth has been accelerated by a step-change improvement in OBS acquisition efficiency. Although conventional streamer seismic data serves exploration purposes well in many cases, the quality may not be sufficient to support an adequate model for reservoir development, in particular below complex overburden. As mentioned in Section 1, this experience led to the development of innovative ways of acquiring seismic data. Here, we address the four-component (4C) OBS technique, present its benefits, and show a few of the early successful results. The history

is in part based on the SEG book Introduction to Petroleum Seismology by Ikelle and Amundsen (2005).

2.2.1  A Brief History of Marine 4C-OBS Experiments

In the 1970s and 1980s attempts were made to extract shear-wave information from marine seismic data acquired in surveys with standard towed-streamer experiments. The attempts relied on double-mode conversions at or just below the water bottom,

Figure 2.17: Illustration of an OBS survey. The sensors are stationary on the seafloor in cables or nodes, measuring both pressure and shear wave data, while the source vessel shoots on the sea surface. The uses of 4C-OBS data can be divided into three broad categories: firstly, lithology and fluid prediction by the combined analysis of pressure and shear waves; secondly, imaging in geologically complex areas; and thirdly, time-lapse (4D) seismic monitoring. (Courtesy Magseis.)



giving so-called PSSP reflections. For large angles of incidence, mode conversion from P-wave to S-wave is quite efficient in ‘hard’ water-bottom environments. The water-bottom shear velocity is the most critical parameter affecting the generation of observable PSSP reflections. Their amplitudes can be comparable to normal P-wave reflections when the S-wave velocity is greater than one-third of the water-bottom P-wave velocity. In most areas, however, the sea-floor shear velocity is much lower. Therefore, the use of PSSP reflection data is not a viable technique to record high-quality shear-wave data in the marine environment. Other solutions were thus investigated. Unless shear waves were generated by source devices on the sea floor, the methods necessarily had to rely on mode conversion by reflection from P-wave to S-wave at reflectors in the subsurface, so-called PS-reflections. In the late 1980s, Eivind Berg at Statoil led the development of the SUMIC (SUbsea seisMIC) system, whereby both shear and pressure waves were recorded by sensors implanted in the seabed. The sensor system was called 4C, with one hydrophone to measure P-waves and a three-component geophone to measure the particle-velocity vector. In 1992, after the development of the prototype SUMIC sensor array, several extensive field equipment tests were carried out. The data quality from the SUMIC sensor layouts was judged to be remarkably good, and demonstrated that SUMIC was a viable system for acquisition of high-fidelity 4C data. The first full-scale SUMIC data acquisition of a multifold 2D seismic line was conducted in late 1993 over Statoil’s Tommeliten Alpha structure in Block 1/9 in the southern part of the Norwegian sector of the North Sea. The principal objective of the survey was to demonstrate the potential of the SUMIC technique for imaging subsurface structures through and below gas chimneys (see Figure 2.19). The chosen exploration target had a reservoir which lay

Figure 2.18: The original SUMIC sensor used in the 1993 Tommeliten 2D-4C survey. It consists of a 30cm-long, 6cm-diameter ‘spike’ which was planted into the sea bottom by an ROV (remotely operated vehicle) to achieve good coupling. Above the spike is the geophone housing, approximately 40 cm long. On the top left is the hydrophone housing. The top right device is a cable connector. An array of 16 SUMIC sensors was used in the acquisition. During the Tommeliten survey, 375 common receiver gathers were recorded. The ROV crew took approximately half an hour to plant each SUMIC detector (see Figure 2.14). (Courtesy Statoil.)


beneath gas rising in a chimney within the overlying shales. Previous conventional seismic surveys, which rely on P-wave propagation only, produced unusable images in some regions because of the distortion and misfocusing introduced as the P-waves passed through the gas chimney. A small percentage gas saturation in the chimney introduces strong attenuation and heavily distorts the P-ray paths. Because shear waves are much less affected by fluids than compressional waves, it was expected that the 4C technology would be suited to ‘seeing through’ the distorting gas chimney, enabling a reliable image of the target to be produced from shear waves. A continuous and regular 2D-4C profile of 12 km length passing over two wells was acquired in late 1993. In general, the quality of the 4C data was excellent at all locations along the line as the sea bottom, geological conditions, and water depth varied. The processed PS-wave time section showed a good-quality image of the

Figure 2.19: (a) The structure of the Tommeliten Alpha structure is obscured by escaping gas (gas chimney) on the conventional pressure wave section. The top of the structure over a horizontal distance of 3 km between well 1/9-1 (left vertical line) and well 1/9-3R (right vertical line) cannot be mapped due to the gaseous overburden. The reflector disruption is so severe that no stratigraphic or structural interpretation can be made between the wells. The targets of interest are the reflections between 3 and 3.5s in the Top Ekofisk chalk interval and possible Jurassic prospects below 3.7s in the mid-part of the section. In the flank areas, the seismic data quality is considered to be very good. The PS-mode-converted shear-wave section (b) considerably reduces the area of uncertain structural interpretation, especially in the deeper part of the section (for the Top Ekofisk Fm and the Top Lower Cretaceous Fm). The reservoir zone lies between 5.5 and 6s on the PS-wave section, and the faulted pattern can be partly followed across the crest of the dome. (Adapted from Ikelle and Amundsen, 2005.)



Figure 2.20: OBS acquisition with one vessel. Data are recorded in buoys. (Courtesy Kim Gunn Maver.)

Alpha structure, with a minimum of distortion from the gas chimney. Interpretation supported the suggestion that the Tommeliten Alpha structure is a faulted dome. The Tommeliten field study demonstrated the viability of the 4C technology to image below shallow gas. For this achievement, Berg and his colleagues received the prestigious Kauffman Gold Medal from the Society of Exploration Geophysicists. Since 1996, four-component marine seismic data acquisition has been a commercial service offered mainly by niche seismic players, which have developed different acquisition systems to acquire P- and S-wave data. Today, the trend seems to be moving from ocean-bottom cables (OBC) to nodes.

2.2.2  Acquisition Geometries and Sensing Systems

The towed-streamer experiment records P-waves, but no S-waves are directly recorded, although the wavepath below the sea floor may include some S-wave paths. The S-waves are not directly recorded because the receivers are in seawater, and water, like all non-viscous fluids, supports only P-waves, not S-waves. In a marine 4C-OBS experiment the receivers are located at the sea floor. Every receiver station is a 4C sensing system: three components of the particle velocity field are recorded by a three-component geophone, and the pressure field is recorded


from a hydrophone. One geophone component is oriented vertically, and two are oriented horizontally, perpendicular to each other. With the sensing system stationary on the seabed, a source vessel towing a marine source array shoots on a predetermined grid on the sea surface.

Several 4C sensing systems are in use at present. The first system consists of cables, with sensors inside at a spacing of 25 or 50m. The cables, deployed on the seabed 300–500m apart, are connected to a vessel that records the seismic data as the source vessel traverses the surface. A third vessel can be involved in retrieving and moving sensor cables and setting up the next receiver lines ahead of the source vessel. This three-vessel configuration allows the source boat to run a nearly continuous operation. Another solution to reduce the acquisition costs is to operate with one seismic vessel, which then serves as both receiver-deployment and shooting vessel. For every cable, data are recorded into recording buoys.

The second system is the ‘node’ system, which is ideal in deep water, where nodes are planted into the seabed by remotely operated vehicles (ROVs). Here, the 4C sensors are housed in special units. Recording takes place locally, inside the sensor unit. Node surveys typically require both a node-handling vessel and a source vessel. When a sufficient number of nodes have been deployed, the seismic crew can begin shooting.

Figure 2.21: An artist’s impression of nodes on the seabed. (Courtesy Sonardyne International Ltd.)
Figure 2.22: Magseis offers small and light nodes inserted into a steel cable to enhance the efficiency of OBS operations. (Courtesy Magseis.)

The third system is a combination of the first two: ‘nodes on a rope’, or steel wire, to increase the speed of deployment. The nodes can be accurately placed on the seafloor anywhere from the shoreline to deeper waters, with any desired interval.

One of the major requirements in 4C-OBS experiments is that the geophones are well coupled to the sea floor in order to record both high-quality P-waves and S-waves. Because shear waves do not travel through water, the geophones must be in direct contact with the seabed to capture the motion of the seabed and not the change of pressure in seawater. The process of ensuring direct contact between the geophones and the seabed or any other solid material is called coupling. Another important requirement of OBS surveys is so-called vector fidelity. This is the property of a three-component geophone sensor system whereby a particle-motion impulse applied parallel to one of the sensor components registers only on that component, and an identical impulse applied parallel to any of the other components gives the same response, so that the various components can be combined according to the rules of vector algebra.
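To illustrate what the vector-algebra property buys, here is a minimal sketch in which the two horizontal geophone recordings are rotated into radial and transverse components for a given source-receiver azimuth, a routine step in 4C processing. The azimuth and the toy traces are made up for illustration; with perfect vector fidelity this rotation is exact.

    import numpy as np

    def rotate_horizontals(x, y, azimuth_deg):
        """Rotate x/y horizontal traces to radial/transverse components for
        a source-receiver azimuth measured in degrees from the x-axis."""
        a = np.radians(azimuth_deg)
        radial = np.cos(a) * x + np.sin(a) * y
        transverse = -np.sin(a) * x + np.cos(a) * y
        return radial, transverse

    t = np.linspace(0.0, 1.0, 101)
    x = np.sin(2.0 * np.pi * 5.0 * t)        # toy x-component trace
    y = 0.5 * np.sin(2.0 * np.pi * 5.0 * t)  # toy y-component trace
    radial, transverse = rotate_horizontals(x, y, azimuth_deg=30.0)

If the components lack vector fidelity, with cross-talk between them or unequal responses, the rotation is no longer exact, energy leaks between the radial and transverse traces, and the subsequent shear-wave analysis suffers.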


2.3  4C-OBS Opportunities

We are just an advanced breed of monkeys on a minor planet of a very average star. But we can understand the Universe. That makes us something very special.

Stephen Hawking (1988)

The initial interest in 4C-OBS surveying was the combined use of pressure and shear data to bring new insights into seismic exploration and reservoir characterisation. The principal investigators wanted to improve seismic lithology and fluid prediction, as shear waves carry additional information about lithology, pore fluids, and fractures in the subsurface. To date, this opportunity has not really been exploited. It is our hope that fresh students starting on a career in this industry will get up from their seats, open the door, and grab the opportunity when it knocks. So in this section we tell you what you may want to know. PP reflections can be quite similar for a number of different rocks with different saturants. From PP reflection data (so-called P-wave data) alone, it may be difficult to quantify lithology and pore saturants. However, since P- and S-waves are affected differently by variations in rock properties and pore fluids, marine multi-component seismic should become a key tool for lithology and fluid prediction in both exploration and reservoir evaluation. In addition, pressure and shear data reduce the ambiguity of structural and stratigraphic interpretation. Although 4C-OBS was commercially available from 1996, it took a decade before everyone in our industry realised that shear waves were not the major feature of interest in 4C-OBS surveying. In a summary from the 2005 EAGE/SEG research workshop on multi-component seismic it was stated: ‘Surprisingly, the driver behind the multi-component business was not shear waves but better pressure wave data’.

Figure 2.23: The incident P-wave reflects into a P-wave (called PP) and S-wave (called PS). Geophysicists have known for almost 100 years that combining the information from P-waves and S-waves can provide significant insight into the Earth’s properties. The P- or primary wave is the fastest travelling wave. It can move through solid rock and fluids. It pushes and pulls the rock and hence is known also as a compressional wave. The rock particles move in the same direction as the wave is moving in. The S-wave or secondary wave moves slower than the P-wave and only through solid rock. It is this property of the S-wave which led seismologists to conclude that the Earth’s outer core is a liquid. The S-wave moves rock particles perpendicular to the direction of the travelling wave. (© Oilfield Review, used with permission.)

Why are pressure recordings from 4C-OBS surveying better than pressure recordings from conventional streamer surveying? The key point that was realised by a few major players is that seabed seismic data recording offers the generic advantage of flexibility of the acquisition geometry. As virtually any pattern of shots and receivers is possible, data acquisition can be optimised to provide the most revealing subsurface image. The major benefit of OBS thus turned out to be detailed structural imaging by the exploitation of pressure reflections with all azimuths.

2.3.1  Applications of 4C-OBS

Applications of 4C-OBS technology include:
• Imaging below gas-invaded sediments;
• Imaging of reservoirs with low P-wave reflectivity but high PS-wave reflectivity;
• Revealing reservoirs obscured by multiples on conventional seismic data;
• Discrimination between various lithologies (e.g., sand and shale) by VP/VS estimation;
• Quantification of amplitude anomalies, such as P-wave ‘bright spots’ and ‘flat spots’;
• Mapping of overpressure zones;
• Mapping of stress-field orientation;
• Mapping of fractured reservoirs (fracture orientation, density, and fluid content);
• Reservoir characterisation and monitoring;
• Imaging of complex structures by multi-azimuth true 3D surveys.
We discuss the most important of these applications in more detail below.

Imaging below gas-invaded sediments: Today, imaging in gas-affected areas is well proven by using shear waves to image reservoirs where gas saturation in the overburden can distort imaging based on conventional seismic data. In Section 2.2, we presented the Tommeliten study, which was the first application of 4C-OBS.

Imaging of reservoirs with low P-wave reflectivity but high PS-wave reflectivity: The Alba field in the central UK North Sea consists of channel sands sealed by shales. The channel




sand is roughly 9 km long, 1.5 km wide, and up to 100m thick. On conventional P-wave seismic data one observes a weak, inconsistent reflector at the top of the reservoir and intra-reservoir shales, but a strong oil-water contact response. The reason that the top of the Alba reservoir is almost seismically invisible is that there is little or no contrast in P-wave reflectivity between the reservoir sands and the shale cap rock. However, wireline sonic logs show a significant increase in shear-wave impedance at the shale-sand interface; hence it was expected that PS converted reflection data would properly image the Alba reservoir (see Chapter 1, Figure 1.48). In early 1998, Chevron acquired a 67 km² 3D-4C survey at Alba. The PS data provided a clear and high-quality image of the reservoir body. By comparing the OBS data with streamer data, reservoir fluid changes after four years of production and water injection could be directly imaged. Chevron’s study improved the reservoir characterisation of Alba, allowing better placing of new development wells, and it furthermore documented the usefulness of PS converted data to image low P-wave impedance contrast reservoirs displaying large shear-wave impedance contrast.

Even though reservoirs with low P-wave reflectivity due to the small contrast in P-wave impedance are well imaged by PS waves, due to the large contrast in S-wave impedance, large-angle stacks of conventional P-wave towed-streamer data can potentially provide a satisfactory image. P-wave AVO predicts that shale-sand interfaces with small P-wave impedance contrast but high S-wave impedance contrast should have significant P-wave reflectivity for large angles of incidence. Therefore, before deciding on 4C-OBS acquisition for imaging of low P-wave impedance contrast reservoirs, the petroleum seismologist should evaluate whether towed-streamer AVO sections can solve the problem at hand. In some cases, however, the reservoir may lie beneath an obstructed area, limiting access for towed-streamer operations. In this case, 4C-OBS is definitely the best solution.
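The large-angle argument can be checked with a two-term Shuey AVO sketch. The shale and sand properties below are assumed, generic values mimicking an Alba-type interface (almost no P-impedance contrast, large S-velocity increase), not the actual Alba logs.

    import numpy as np

    def shuey_reflectivity(vp1, vs1, rho1, vp2, vs2, rho2, angles_deg):
        """Two-term Shuey approximation R(theta) = R0 + G*sin^2(theta)."""
        dvp, vp = vp2 - vp1, 0.5 * (vp1 + vp2)
        dvs, vs = vs2 - vs1, 0.5 * (vs1 + vs2)
        drho, rho = rho2 - rho1, 0.5 * (rho1 + rho2)
        r0 = 0.5 * (dvp / vp + drho / rho)  # normal-incidence reflectivity
        g = 0.5 * dvp / vp - 2.0 * (vs / vp) ** 2 * (drho / rho + 2.0 * dvs / vs)
        theta = np.radians(angles_deg)
        return r0 + g * np.sin(theta) ** 2

    angles = np.arange(0, 45, 5)
    r = shuey_reflectivity(vp1=2440, vs1=1020, rho1=2.25,  # shale (assumed)
                           vp2=2460, vs2=1310, rho2=2.20,  # sand (assumed)
                           angles_deg=angles)
    print(np.round(r, 3))

For these numbers the normal-incidence reflectivity is essentially zero (about -0.007), while at 40 degrees its magnitude approaches 0.1: exactly the behaviour that makes a large-angle stack a possible alternative to 4C-OBS for such reservoirs.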

Figure 2.24: (a) Conventional streamer P-wave image of the Alba channel. Note the weak top sand event in the middle of the section at around 2s travel time. The converted-wave PS image shows dramatically improved imaging of the sand channel due to the high PS reflectivity between shale and sand. (b) The dipole sonic log through the Alba reservoir sand shows a large contrast in shear-wave velocity (left) and a small contrast in P-wave velocity (right) with the surrounding shales. The green curves represent velocities in sand. The red and blue curves represent velocities in shales above and below the sand channel, respectively. (Courtesy Chevron.)

Quantification of amplitude anomalies: Marine 4C seismic – the use of concurrent, combined pressure and shear wave seismic data – unfortunately has not gained true acceptance in the E&P industry as a tool that can reduce risk by providing information about subsurface rocks and distributions of pore fluids. This is partly a cost issue. Nevertheless, 4C seismic technology can solve many seismic and geological challenges which cannot reliably be solved by the use of P-wave data information alone. When interpreting P-wave data from towed-streamer surveys, as is common industry practice, it is extremely difficult to find out whether P-wave amplitude anomalies such

as ‘bright spots’ and ‘flat spots’ (discordant events) are related to hydrocarbons or related to lithology. P-waves are not only influenced by rock types but also by fluids and it is difficult to discriminate between these effects. By including information on S-waves from ocean-bottom seismic surveys as well, it is possible to say whether bright spots and flat spots are most likely to be related to lithology or fluid effects since, unlike their P-wave counterparts, S-waves are almost insensitive to a rock’s fluid content, seeing mainly the rock’s matrix. Seismic flat spots are caused by the interface between two


types of fluids in a reservoir. They are recognisable if the reservoirs are more than twice the seismic tuning thickness and also relatively soft. This phenomenon is frequently used as a direct hydrocarbon indicator (DHI) in conjunction with seismic amplitudes and AVO techniques in exploring for hydrocarbons. Flat-spot recognition has been particularly successful in the North Sea, where it has been applied to both exploration and reservoir monitoring (e.g., the Gannet-C Field). Sometimes, however, seismic reflections caused by other phenomena, such as remnant multiples and lithology variations, have been misinterpreted as fluid contacts simply because they look flat seismically and show up in the downdip location, which has led to the drilling of unsuccessful prospects.

Through integrating the P- and S-wave data information, petroleum geoscientists can distinguish between rock and fluid property effects, thus making a significant, predictive contribution to decreasing exploration risk. We give two examples of the potential of integrating P-wave and S-wave information below.

In the marine exploration context, one of the first applications of the joint interpretation of P-wave and S-wave data was to verify the P-wave bright-spot amplitude anomaly seen in the P-wave section in Figure 2.25 as a direct indication of gas saturation. At a reservoir that changes laterally from water-saturated to gas-saturated, the bright spot represents a strong increase in P-wave reflectivity resulting from a significant drop in P-wave velocity in the presence of the gas. Since the S-wave velocities are not very sensitive to differences in water and gas saturation, there should be no S-wave bright spot associated with gas saturation. When this is observed on the S-wave section, it suggests that the reservoir is gas-saturated; had there been an S-wave amplitude anomaly coinciding with the P-wave amplitude anomaly, the bright spot would not have been associated with gas saturation. The rock-physics sketch at the end of this passage illustrates the effect.

Figure 2.25: Comparison of P-wave and converted PS-wave images through the same geological section. The promising amplitude anomaly (or bright spot) on the P-wave section may be caused either by hydrocarbons or by rock effects (lithology). The disappearance of the bright spot from the PS section strongly indicates that it is caused by hydrocarbons alone. (Courtesy Statoil. Adapted from Ikelle and Amundsen, 2005.)

The second example is related to the characterisation of gas hydrates (see Chapter 8 for an introduction to this topic). The bottom simulating reflection (BSR), as observed on conventional P-wave records, such as towed-streamer data, is the most commonly used indicator of the presence of gas hydrate accumulations below the sea floor. P-wave data alone, however, seem to fail to detect gas hydrates when the BSR is absent. On the other hand, PS data integrated with P-wave data enable a better interpretation of the nature, structure, distribution and quantification of gas hydrates, regardless of the existence of a BSR. In order to test what information PS data can provide about gas hydrate sediments and their characteristics, PGS acquired a 4C line profile over a location in the Norwegian Sea, where a BSR had been identified on conventional streamer P-wave data. Parts of the P- and PS-wave migrated stacks from this multi-component line are shown in Figure 2.26. By comparing the migrated stacks, we observe that events at the BSR area and below are quite different on the two data sections. The BSR is clearly visible on the P-wave data but is not observable on the PS-wave data.

PS reflections are not masked by the gas effects, as PP reflections are, thus providing better stratigraphic and structural information, and PS reflections seem to follow the sediment layers. In this area, the fact that the BSR is not detected on the PS-wave section suggests that the gas hydrates in the sediments above the BSR have not stiffened the sediment framework, indicating that the hydrate is not cementing the grain contacts. By contrast, if the hydrate had formed at grain contacts, it could have acted as a cementing agent. The sediment framework would then have become stiffer, resulting in increased P-wave and S-wave velocities above the BSR. PS-wave data in this situation could potentially show the BSR and could potentially give more detailed information about the stiffness of the sediment and gas hydrate concentration (Andreassen et al., 2003).
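The underlying rock physics of the bright-spot argument can be sketched with Gassmann fluid substitution: replacing brine with gas lowers VP markedly, while VS barely moves, changing only through the density. All moduli, densities and the porosity below are assumed, generic sandstone values, not data from Figure 2.25.

    import numpy as np

    def gassmann_vp_vs(k_dry, g_dry, k_min, phi, k_fl, rho_fl, rho_grain):
        """Saturated VP and VS from Gassmann's equation; the shear modulus
        g_dry is unaffected by the pore fluid."""
        k_sat = k_dry + (1.0 - k_dry / k_min) ** 2 / (
            phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2)
        rho = (1.0 - phi) * rho_grain + phi * rho_fl
        return np.sqrt((k_sat + 4.0 * g_dry / 3.0) / rho), np.sqrt(g_dry / rho)

    common = dict(k_dry=3e9, g_dry=3e9, k_min=37e9, phi=0.3, rho_grain=2650.0)
    print('brine:', gassmann_vp_vs(**common, k_fl=2.5e9, rho_fl=1050.0))
    print('gas:  ', gassmann_vp_vs(**common, k_fl=0.05e9, rho_fl=150.0))

With these values VP drops by roughly 20% on going from brine to gas, while VS increases only slightly (the rock simply gets lighter), which is why the P-wave bright spot has no S-wave counterpart.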

PP-data

PS-data

72

Figure 2.26: Comparison of P-wave and converted PSwave (right) images through the same geological section. The BSR is clearly visible on the P-wave data (left) but is not observable on the PS-wave data. Note that the PP and PS data are plotted in opposite direction, to ease the comparison. (Courtesy PGS).

PP which principally changes in response to variations in lithology, porosity, pore fluid and stress state, can be estimated by prestack P-wave AVO analysis. However, the record of hydrocarbon detection by AVO analysis is mixed, and AVO analysis is used nowadays with great care. The VP/VS ratio can also be estimated from post-stack P- and PS-wave reflections. A key question is whether VP/VS can be found with higher accuracy and reliability using 4C data. Can it be quantitatively estimated from pressure and shear records? Theory predicts that this can be done. However, we need several case studies emphasising the quantitative aspects of measuring elastic subsurface parameters before this question can be fully answered. Overpressure zones: Except near the surface (typically the first kilometre), the values of VP, VS and therefore VP/VS are quite insensitive to changes in differential pressure. (Differential pressure is the difference between overburden pressure and pore fluid pressure.) However, when an overpressured zone with anomalously high pore fluid pressure is encountered in the deeper section, anomalies in the VP/VS ratio can be measured. Overpressure implies a decrease in differential pressure, which tends to decrease both P- and S-wave velocities but increases the VP/VS ratio. Hence, from VP/VS analysis from pressure and from shear-wave sections, overpressure zones may be identified. Duffaut and Landrø (2007) found that the VP/VS ratio increased from 1.9 to 7 using time-lapse seismic data acquired at the Gullfaks field. Due to water injection, the pore pressure in the reservoir had increased by approximately 5–7 MPa, and the increase in VP/VS ratio is interpreted as being caused by this pore pressure increase. For the Statfjord field, where the sand is more consolidated, the corresponding change in VP/VS ratio was estimated to be 1.9 to 2.0, significantly less. This difference is not only attributed to the difference in rock consolidation, but also to the fact that for the Gullfaks case the pore pressure in 1996 was close to the fracking pressure. This means that the contact between the rock grains weakens, and the shear wave velocity drops essentially to zero, while the P-wave velocity approaches the fluid velocity (1,500 m/s), and hence the VP/VS ratio can be abnormally high. If we assume

PS that the P-wave velocity is close to the water velocity then it means that the shear velocity is close to 215 m/s (assuming Vp/Vs=7), which is extremely low. It should be noted that the uncertainty in this estimate is significant. Anisotropy and fractured reservoirs: Any presence of oriented fractures and/or directional horizontal stress fields can create azimuthal anisotropy in the subsurface. Predicting directions of oriented fractures and fracture density and possibly obtaining quantitative information about stress state underground (stress orientation and relative magnitude) may be critical for understanding how fluids and gases flow through a reservoir, and for determining drilling locations and optimising reservoir productivity. Two effective medium models which describe azimuthal anisotropy are transverse isotropy with a horizontal axis of symmetry (HTI), and orthorhombic anisotropy. HTI can be caused by a system of parallel penny-shaped vertical cracks embedded in an isotropic matrix. Orthorhombic anisotropy, which is believed to be a more realistic model of fractured reservoirs, may result from a combination of thin horizontal layering and vertically aligned cracks. From P-wave data, azimuthal anisotropy is often subtle and quite difficult to recover. Shear waves, however, are more sensitive to azimuthal anisotropy. Therefore, by estimating azimuthal anisotropy from an analysis of mode-converted shear waves, fracture parameters and/or unequal stress fields can be predicted. Two methods have been proposed to extract this information from marine multi-component data: • the study of shear-wave splitting for near-vertical propagation; • the study of reflection amplitude variations as a function of offsets and azimuths. Both methods have the potential to determine the fracture orientation and density of the vertical fractures. The second method has attracted particular interest, as it is also sensitive to the fluid content of the fracture system. From amplitude analysis it may be possible to find whether the fracture network is fluid-filled or gas-filled.
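As a quick numerical check of the overpressure example above, the link between the VP/VS ratio and Poisson's ratio, and the implied shear velocity at Gullfaks, can be evaluated in a few lines. The sketch below is our illustration in Python; the velocities are the round numbers quoted in the text, not measured values.

import numpy as np

def poisson_ratio(vp_vs):
    # Poisson's ratio from the VP/VS ratio (isotropic elasticity)
    r2 = vp_vs ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

vp = 1500.0  # m/s; P-wave velocity close to the fluid (water) velocity
for vp_vs in (1.9, 7.0):
    vs = vp / vp_vs  # implied shear-wave velocity
    print(f"VP/VS = {vp_vs}: VS = {vs:.0f} m/s, Poisson's ratio = {poisson_ratio(vp_vs):.2f}")

With VP/VS = 7 this returns VS of approximately 214 m/s, matching the 'close to 215 m/s' estimate above, and a Poisson's ratio approaching 0.5, the fluid limit.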


Reservoir monitoring (4D): To date, most marine monitoring studies have been based on seismic pressure recordings, from which it is possible to attempt to map fluid distribution changes, predominantly from changes in P-wave impedance between surveys. The combination of pressure and shear-wave data should theoretically provide better reservoir characterisation and monitoring of hydrocarbon-bearing reservoirs during production. One of the concrete benefits which can be derived from 4D-4C analysis is that bypassed reservoir zones can be more reliably detected by time-differencing the VP/VS ratio obtained from the correlation of P- and PS-wave data. A second benefit is that the bulk-rock property changes due to fluid variations and those due to effective stress (pressure) variations can potentially be separated by time-differencing two elastic parameters (e.g., the VP/VS velocity ratio and the P-wave impedance, or the S-wave and P-wave impedances) calibrated at wells in a multi-disciplinary approach. The separation of these two effects can be used to obtain reservoir permeability information. The advantage of using multi-component data is that the two elastic parameters can be directly estimated from post-stack P- and PS-data.

Figure 2.27: Near, mid and far offset stacks from the Gullfaks time-lapse seismic data (1985 and 1996 surveys). Notice the clear brightening at top reservoir caused by the pore pressure increase, and that the amplitudes dim slightly as a function of offset (right panel). (Duffaut and Landrø, 2007.)

Imaging complex structures by full-azimuth 3D surveys: In many geologically complex areas, towed-streamer seismic surveys may not be the best geophysical solution for delivering optimal reservoir imaging. A stationary acquisition system, like an OBS experiment, permits true 3D acquisition, meaning that there will be complete offset and azimuth distributions within the data. This opportunity will be addressed in Section 2.4.
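Picking up the 4D-4C idea above: time-differencing VP/VS first requires estimating it by registering PP and PS events. For a single layer at vertical incidence the traveltimes give the ratio directly, since t_PP = 2z/VP and t_PS = z/VP + z/VS. A minimal sketch (our illustration, in Python):

def vpvs_from_traveltimes(t_pp, t_ps):
    # Zero-offset PP and PS times for one layer imply VP/VS = 2*t_ps/t_pp - 1
    return 2.0 * t_ps / t_pp - 1.0

# Example: a 1,000 m layer with VP = 2,000 m/s and VS = 1,000 m/s gives
# t_pp = 1.0 s and t_ps = 1.5 s, from which the ratio is recovered:
print(vpvs_from_traveltimes(1.0, 1.5))  # -> 2.0

In practice the PP and PS sections are registered event by event, and interval VP/VS values are computed from interval times in the same way.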


2.4  4C-OBS: Superior Imaging

The real voyage of discovery consists not in seeking new lands but seeing with new eyes. Marcel Proust (1871–1922, French novelist)

4C-OBS surveying offers the generic advantage of flexibility of the acquisition geometry. As virtually any pattern of shots and receivers is possible, data acquisition can be optimised to provide the most revealing subsurface image. In 1999 an extensive research programme led by Statoil's R&D group concluded that detailed structural seismic imaging of complex geology required the acquisition of high-fold seismic data from all directions around a reservoir. This result was obtained by careful planning of the world's first dedicated 3D imaging OBS cable survey over the Statfjord field offshore Norway, followed by thorough and consistent evaluation of image quality versus acquisition geometry (Amundsen, 1997; Thompson et al., 2002).

2.4.1  Elucidating the Statfjord Field, Norway
Discovered in 1974 in approximately 150m water depth, Statfjord is one of the oldest producing fields on the Norwegian continental shelf. The reservoir units are sandstones located in the Brent Group and in the Cook and Statfjord Formations. Structurally, the field is dominated by a single rotated fault block dipping towards the west, with a more structurally complex area on the East Flank characterised by small rotated fault blocks and slump features. Previously, imaging from conventional 3D streamer acquisition had been difficult in this field, due to gas in the overburden and multiples in the lower reservoir zones. Therefore, an OBS cable pilot survey was shot in late 1997 with the main objective of improving the seismic imaging of the structurally complex East Flank. Once the 3D OBS survey was processed, it was possible to see that the definition of the Base Cretaceous unconformity and the Base Slope of Failure had improved over a large portion of the survey area. More accurate definition of faults

and improved resolution of small-scale structural elements were also achieved. The new interpretation resulted in more confident mapping of intact rotated fault blocks with a better understanding of the areal extension and the internal stratigraphic dip within the East Flank area. After the success of the 1997 pilot survey, a 120 km2 3D OBS survey was commissioned in 2002, which showed a consistent uplift in image quality compared to the existing conventional 3D marine seismic, as can be seen in Figure 1.50, Chapter 1, Section 1.3.7.

2.4.2  Imaging Cantarell's Daughter
The Cantarell Complex is an aging supergiant oil field in Mexico, located in 40m of water. It was discovered in 1976 after oil stains were noticed by a fisherman, Rudesindo Cantarell Jimenez, in 1961 (see Chapter 1, Section 1.2). The field has five blocks bounded by faults, Akal being the most important. Geologically, it is one of the most interesting oil fields in the world, because the reservoirs are formed from carbonate breccia of Upper Cretaceous age – the rubble from the enormous meteorite impact that created the Chicxulub Crater. In addition, the region has a complicated tectonic history including major compressional and extensional episodes. As Cantarell's production is dwindling, Pemex is dedicating billions of dollars to finding more oil and keeping up the production level. Cantarell is believed to hide a huge secret: a daughter reservoir, the Sihil field, lying immediately underneath the giant Akal field. But the imaging difficulties are numerous – in addition to being a fractured

Figure 2.28: PP (a) and PS (b) images from the Cantarell Field. PS time is compressed to PP time. Note the topsets now visible in the upper right of the PS image, and the difference between PP and PS, indicating the possibility of seismic hydrocarbon indicators below the overthrust (not visible on the PS data). (© Pemex/SeaBird Technologies.)


carbonate reservoir with salt and an overthrust structure, Cantarell is populated by dozens of platforms, with lots of vessel activity, so that traditional seismic is difficult to acquire due to the risk of tangling streamers. Therefore, to better image the Akal field and map Cantarell's daughter, Pemex turned to ocean-bottom seismic nodal acquisition. SeaBird Geophysical was awarded the contract in 2003/2004. More than 1,500 node sensor units were deployed in seven swaths in a 400m by 400m grid, covering a total of 230 km², at the time the largest survey of this type ever conducted. Compared with the existing 1996 OBS cable data, SeaBird's 2004 OBS node data demonstrated higher resolution, excellent reflector continuity, and improved structural definition of both the top Akal level and the Sihil level below. The new data mapped structural compartments at the targeted Sihil level. The Sihil field is believed to contain between 1 and 15 billion barrels of oil.

2.4.3  Deepwater Gulf of Mexico Survey
Discovered in 1998, the reservoir structure of BP's Atlantis field, the third-largest in the Gulf of Mexico, has posed particular imaging problems which called for major innovations. Regular streamer seismic had not allowed its northern flank, where the crest of the field is partially obscured below a thick sheet of salt, to be mapped. As a result, poor seismic imaging has hindered development.

Figure 2.29: Atlantis field cross-section. (Courtesy BP.)

Figure 2.30: Imaging comparison at Atlantis of 3D conventional marine seismic (A) and 3D OBS (B). Subsalt reflectors are clearly improved (Howie et al., 2008). (Courtesy BP.)

To meet the Atlantis challenge, and obtain higher-quality seismic images, in 2005 BP contracted Fairfield Industries to deploy 900 autonomous recording units in a grid pattern on the seafloor. The acquisition programme encompassed 240 km² in water depths between 1,300 and 2,200m. Completed in 2006, the Atlantis project is the deepest OBS survey acquired


by the industry to date. The OBS node survey produced significant image improvement of the Atlantis field over the existing towed streamer seismic. Both the hydrophone data (P) and the vertical geophone data (Z) were used in the processing. The improved images are impacting current extrasalt field development and laying the foundation for development expansion into the subsalt areas which make up two-thirds of the structure, previously viewed as high risk because of the very poor imaging from towed streamer data.


2.4.4  Imaging Oseberg Sør
Oseberg Sør is an offshore oil field in the North Sea, discovered in 1984 in a water depth of 100m. The C-structure in the Oseberg Sør field was considered to be a fairly complex area with very poor imaging, mainly due to the presence of hard sand bodies in the overburden, making it very difficult and time-consuming to interpret the reservoir horizons. It was a problematic area to develop. But with the acquisition of orthogonally shot OBC data, the seismic image was greatly improved; just a few months after the acquisition, Statoil drilled a very successful well (30/9-F7) into the first Ness channel interpreted on the seismic in this area. The improvement in seismic quality led Statoil to completely change their geological understanding of this block. New structural and stratigraphical interpretations changed the development strategy, resulting in the cancellation of several planned wells and the delineation of new targets, including new exploration prospects, increasing the potential of the area (Dangerfield et al., 2012).


Figure 2.31: Streamer (a) versus OBC (b) image of Oseberg Sør. (Statoil.)

Other technologies, like broadband streamer seismic (see Chapter 5), can give similar improvements in imaging in the shallow sub-seabed region. But the most noticeable impact of OBC acquisition technology (besides the rich low-frequency content) is the full illumination, which allows undershooting of the Oligocene cemented sands in order to image the reservoir (Dangerfield et al., 2012).


Figure 2.32: Geological cross-section through Oseberg Sør. It shows a series of down-stepping fault blocks. The reservoirs are thin, often with little or no seismic expression. (Dangerfield et al., 2012.)


2.5  Towards Full Azimuths

There is nowhere else to look for the future but in the past. James Burke, 1978

The E&P industry faces huge challenges in exploring, appraising and developing new reserves in deepwater subsalt plays. One key challenge is to image subsalt reservoirs with sufficient accuracy to locate the oil and gas deposit and to reduce drilling risk in these high-cost, high-risk environments. Since the seismic image is inextricably linked to the seismic acquisition geometry, this calls for innovative survey design and better ways to acquire seismic. In this part we show how advances in seismic imaging of subsalt plays in the deepwater Gulf of Mexico are a result of the way the industry has embraced wide-azimuth and full-azimuth streamer survey techniques. Complex structures are illuminated through a wide range of azimuths, or by using longer offsets in order to undershoot them, using several seismic vessels working together. Traditional 3D seismic data acquired over geology hidden below salt canopies in the Gulf of Mexico (GoM) may not be of sufficient quality due to various kinds of noise, which makes detailed reservoir characterisation and interpretation a troublesome undertaking at best. The challenge is to image subsalt reservoirs with sufficient accuracy. To this end, improvements in seismic acquisition designs and imaging algorithms are key.

Azimuth is the angle at the source location between the sail line and the direction to a given receiver. Full-azimuth, long-offset data help illuminate complex subsalt structures better than narrow-azimuth and wide-azimuth data sets. Proper imaging of these data sets requires accurate velocity models. The building of velocity models is facilitated by the low frequencies, long offsets, and all azimuths present in the data.

Figure 2.33: The Gulf of Mexico deepwater plays are characterised by considerable imaging problems due to thick, extensive salt sheets. Many companies are hunting hydrocarbons lying beneath salt bodies that were once part of the Jurassic Louann salt. Sedimentation on top of the salt has caused it to flow laterally, before settling as allochthonous sheets within the surrounding sediment. High-quality multi-azimuth seismic reduces exploration risk and allows identification of previously hidden prospects. The real challenge surfaces when a discovery is made. With only a few well penetrations, the discovery possesses many unknowns. The structural geometry, including faults and compartmentalisation, which is necessary to determine new well placement, is often very uncertain. Better seismic is again the key to finding the details needed to make more informed decisions. (© Oilfield Review, used with permission.)
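To make the azimuth definition above concrete, the short Python sketch below computes offset and source-relative azimuth from map coordinates. The function name and the coordinates are our own illustrative choices, not from the text.

import numpy as np

def offset_and_azimuth(src, rec, sail_azimuth_deg):
    # src, rec: (easting, northing) in metres.
    # Azimuth is measured at the source between the sail-line direction
    # and the direction to the receiver, as defined above.
    dx, dy = rec[0] - src[0], rec[1] - src[1]
    offset = np.hypot(dx, dy)
    bearing = np.degrees(np.arctan2(dx, dy))        # compass bearing to receiver
    azimuth = (bearing - sail_azimuth_deg) % 360.0  # relative to the sail line
    return offset, azimuth

# A receiver 4 km behind and 1 km to the side of a vessel sailing due north:
print(offset_and_azimuth((0.0, 0.0), (1000.0, -4000.0), 0.0))
# -> offset ~4,123 m, azimuth ~166 degrees: a typical narrow-azimuth trace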



Figure 2.34: Traditional and new acquisition geometries (bottom) and azimuth-offset distribution plots (top) for narrow-azimuth, multi-azimuth, rich-azimuth, wide-azimuth and coil shooting surveys. Colours range from purple and dark blue for a low number of traces (low fold) to green, yellow and red for a high number of traces (high fold). (© Oilfield Review, used with permission.)

2.5.1  Seismic Surveying Techniques
Today, we see that the industry is moving towards full-azimuth surveying techniques to explore, appraise and develop new reserves. Here we describe the main types of seismic surveying technologies used in the GoM, and show some of the early work.

Narrow-Azimuth: In a conventional 3D marine seismic survey the vessel traverses the surface in a predetermined direction above the subsurface target. Since most of the recorded seismic signals travel nearly parallel to the sail line, at small azimuth, the survey is called a narrow-azimuth or NAZ survey. The target is essentially illuminated from one direction in NAZ surveys.

Multi- and Rich-Azimuth: In a multi-azimuth (MAZ) survey the seismic vessel acquires 3D seismic data over the survey area in up to six directions, where the azimuth distributions are clustered along azimuths associated with the sail lines. A rich-azimuth (RAZ) survey essentially combines the MAZ and WAZ concepts: wide-azimuth geometries acquired in multiple directions. The first RAZ survey was acquired in 2006 by BHP Billiton over the deepwater Shenzi field (see Section 2.5.2).

Wide-Azimuth: Wide-azimuth (WAZ) surveys are designed to widen the azimuth distribution over the target in

one preferred single direction. Different designs are available, but they require at least two source vessels in addition to the streamer vessel. Each source line is shot multiple times with increasing lateral offset. To improve acquisition efficiency, some WAZ surveys are acquired with multiple streamer vessels. WAZ surveys have been shown to deliver better illumination of the subsurface, higher signal-to-noise ratio and improved seismic resolution in several complex geological environments, such as beneath large salt bodies with irregular shapes.

All-Azimuth and Full-Azimuth: An all-azimuth (AAZ) or full-azimuth (FAZ) survey is a survey where 'all' azimuths are acquired. FAZ surveys are best realised using OBS technology, but streamer geometries may be a cheaper alternative.

FAZ Coil Shooting: WesternGeco's coil shooting single-vessel FAZ acquisition is a technique which extends conventional MAZ and WAZ survey capabilities by acquiring seismic data using a single vessel sailing in a series of overlapping, continuously linked circles. By comparison, WesternGeco's dual coil shooting multi-vessel FAZ acquisition involves two recording vessels with their own sources and two separate source vessels providing ultra-long offsets, all sailing in large-diameter interlinked circles. The trace density of this design is more than twice that of current WAZ survey designs, improving the signal-to-noise ratio to further enhance the imaging of weak subsalt reflections. Another useful feature of coil shooting is that there is no down-time in acquisition due to vessels turning. Continuous shooting is attractive because it reduces acquisition costs.

Seismic illumination and the imaging of complex structures clearly depend on acquisition geometry. High-fold, multi-azimuth and full-azimuth seismic, although much more expensive than traditional 3D streamer surveys, has been proved to yield the high-quality, detailed seismic imaging that is required to improve our understanding of complex fields.

Figure 2.35: A WAZ fleet in action in the Gulf of Mexico. (Image courtesy CGG.)


Figure 2.36: To acquire a coil shooting survey, the vessel sails in a series of overlapping, continuously linked circles. Dual-coil shooting involves two streamer vessels (typically with 10 streamers of 8 km length), each with their own sources, and two additional source vessels, all sailing in large, 12- to 15-km diameter interlinked circles. The source vessels are at the tail end of the streamers. (Schlumberger; © Oilfield Review, used with permission.)

Figure 2.37: Imaging improvements achieved from NAZ to WAZ to dual-coil acquisition. The 2005 NAZ velocity model was built with ray-based tomography and imaged with WEM migration, whilst the 2008 WAZ and 2013 dual-coil data sets used FWI for velocity model building and RTM for imaging. (SEG; Kapoor et al., 2014.)


StagSeis: CGG’s StagSeis™ offers an effective solution for acquiring FAZ data. StagSeis is a multi-vessel acquisition technique, employing two multi-streamer vessels and three additional source vessels in a staggered configuration. Sail lines are acquired in an anti-parallel fashion in two orthogonal passes, allowing data to be acquired with four axes of ultra-long offsets up to 20 km and full-azimuths up to 10 km. Orion: PGS’s five-vessel Orion™ configuration consists of two streamer vessels towing 10 GeoStreamers™, each one 8 km long, along with three independent source vessels in a simultaneous long-offset configuration for total offsets of 16.5


Figure 2.40: PGS's Triton survey utilises five vessels in the Orion configuration. The survey is shot in three directions. The five vessels cover 50 square miles from the front of the source to the back of the streamer. (Courtesy PGS.)

Figure 2.41: PGS's survey geometry enables a 360-degree view of the complex structures found in Garden Banks and Keathley Canyon, a challenging deepwater area in the Gulf of Mexico. (Courtesy PGS.)

Figure 2.39: StagSeis improves illumination beneath complex overburdens compared with standard wide-azimuth acquisition. The StagSeis image shows improvements in clarity and detail. At the deeper levels, the image shows structure that was not previously visible, with improved continuity of subsalt horizons and better fault definition. (Image courtesy CGG.)

Figure 2.38: StagSeis configuration, with two multistreamer vessels and three source vessels in a staggered configuration.



Full-Azimuth Nodal: In 2015 TGS, in collaboration with FairfieldNodal, executed a Full-Azimuth Nodal (FAN) seismic survey in the Eugene Island protraction area. The expectation is that such surveys provide a new standard of imaging for exploration and development.

2.5.2  Early Seismic WAZ Developments in GoM In 2000, a conventional 3D streamer survey focusing on improved acquisition parameters over the BP-operated Mad Dog discovery did not deliver the needed improvements when compared to a previous traditional 3D streamer survey shot in a different direction. This result led several companies to investigate the possible benefits of widening azimuthal coverage. The first GoM WAZ survey was conducted by BP with the contractor CGGVeritas over the Mad Dog field in 2004–05, using one recording vessel and two source vessels. The WAZ data delivered a breakthrough in imaging, and initiated a broad WAZ acquisition programme to enhance the imaging of subsalt discoveries. In 2006, Shell shot a WAZ survey with WesternGeco, aiming at improved imaging of a deep target below a complex salt structure near the deepwater GoM Friesian discovery. The acquisition programme comprised two dual-source vessels and one eight-streamer vessel. Shell concluded that while not all areas below the salt were illuminated, the WAZ survey improved the image of the subsalt sedimentary structure. Furthermore, the dominant multiples were removed without specific processing. By combining MAZ and WAZ concepts, rich-azimuth (RAZ) coverage can be obtained. The first RAZ streamer survey was acquired by BHP in 2006 over the Shenzi discovery in the Green Canyon area, using one streamer vessel equipped with a source and two source vessels shooting along three sailing directions. Each shot location was repeated at least three times. The Shenzi RAZ survey took 88 days to acquire and delivered six times the data of a conventional survey. The RAZ survey significantly improved reflections deep below the salt when compared to a previous narrow-azimuth survey. Over the last decade the Gulf of Mexico has been covered by dense, high-quality 3D seismic data with the purpose of reducing the risk in hydrocarbon exploration and to allow imaging of previously hidden prospects. These data now blanket most of the deepwater GoM. But data coverage does not tell the full story. Seismic imaging methods like Reverse Time Migration (RTM) have greatly enhanced the interpretation capabilities in the deepwater GoM, particularly for areas hidden below salt canopies, and have created additional opportunities.


Figure 2.42: The Orion constellation. The three bright stars in a row constitute Orion's Belt. In Greek mythology, Orion was a giant huntsman whom Zeus placed among the stars as the constellation of Orion. (Image: Wikipedia/Mouser.)

2.6  The Ultimate Marine Seismic Acquisition Technique?

… vether it's worthwhile goin' through so much to learn so little, as the charity-boy said ven he got to the end of the alphabet, is a matter o' taste. Charles Dickens' comic novel, The Pickwick Papers

Over the last decade we have experienced several step-changes in marine seismic acquisition technologies. Because acquisition and seismic imaging are inextricably linked, these changes have impacted positively on the seismic image of subsurface geology. But have we found the ultimate acquisition technique? In this section, we speculate on what the future may bring.

In reservoirs within geologically complex settings, the seismic images are still not perfect. Advanced 3D imaging requires huge amounts of data, densely recorded over a large area, to properly reconstruct the structures and stratigraphy at a high resolution. The optimum seismic data collection technique is yet to be found. The ultimate survey method may be unattainable, but it should try to satisfy three criteria: full illumination, full noise suppression, and low to moderate cost.

Two routes of future development are envisaged. One is to significantly expand the number of streamers towed behind the vessel, so that the receiver coverage becomes much larger, preferably with the smallest receiver intervals possible, both crossline and inline. Another idea, which has the potential to revolutionise seismic acquisition, is the development and deployment of autonomous underwater vehicles (AUVs). Thousands of these would float down to the seabed to record seismic data over periods of several weeks, while a shooting vessel traverses the surface.

Figure 2.43: AUVs gliding down to the sea bottom (a) to record seismic data. When the survey is finished, the AUVs return to the mother ship (b).

2.6.1  Autonomous Underwater Vehicles
Over the last 20 years the petroleum industry has witnessed enormous changes, especially within seismic and drilling, many of which were not anticipated. There is every reason to believe that technological advances during the next 20 years may well bring even greater changes than those that occurred over the previous two decades. We therefore await future developments in seismic acquisition and imaging technology with excitement.

If we look back in history, we find that many excellent ideas have not always surfaced. Many years ago it was proposed that AUVs could be dropped from the back of a vessel one by one at intervals of only minutes. Driven by gravity and buoyancy, the AUVs would rapidly glide into a pre-planned grid on the sea bottom. When the survey is complete, they receive a return command, blow their water ballast, and ascend to the surface, where they are picked up by the vessel. Unfortunately, this proposal did not get sufficient support from the industry at the time. There are, though, several major hurdles which AUV technology must pass; the single most important will be to increase the life of the battery while still keeping it small enough to fit into the AUV. But the idea was great, and it has slowly but steadily received attention. In 2012 Saudi Aramco teamed up with CGG on a mission to develop autonomous seismic nodes that could largely eliminate the need to use remotely operated vehicles (ROVs) in seafloor seismic acquisition projects. At the EAGE Conference in Vienna in 2016, geophysicists discussed novel strategies for seismic acquisition to accomplish the goal of efficiently acquired, high-quality data, as well as the possibilities of employing customised robotic solutions for seismic equipment deployment, retrieval, data harvesting, and more. Robotic technology has the potential to seriously disrupt the offshore seismic market.



2.7  Codes and Ciphers: Decoding Nature’s Disorder In many languages the word ‘cipher’ means ‘number’. However, the current meaning of ‘cipher’ in English – an ‘algorithm for performing encryption or decryption’ – can be traced back to the time when it meant ‘zero’ and the concept of ‘zero’ was new and confusing.

Information is the resolution of uncertainty. Claude Shannon (1916–2001)

Talk clearly and not so far-fetched as a cipher.

Guest Contributors: Dirk-Jan van Manen and Johan Robertsson, ETH Zurich

Medieval – origin unknown

In the military barracks of Bletchley Park in Buckinghamshire, UK, one of the world's first computers (the so-called 'Bombe') was constructed for the single purpose of cracking military codes generated by the German Enigma encryption machine. Combining three rotors from a set of five, each rotor setting having 26 positions, and a plug board with ten pairs of letters connected, the Enigma machine had 158,962,555,217,826,360,000 (nearly 159 quintillion) different settings. Even a very fast modern computer could not systematically go through all those settings. The British had access to an Enigma machine but did not know the settings, which varied daily. A team of highly talented people working

with the brilliant mathematician Alan Turing managed to crack the German codes in an ingenious way that involved realising that the Germans would always start their encrypted radio transmissions with a weather report. This observation, combined with groundbreaking advances in statistics, allowed Turing and his team to limit the possibilities and guide the 'Bombe'. The story of Alan Turing is as fascinating as it is tragic; only recently has the true nature of his genius and his impact on the 20th century been recognised. Breaking the Enigma code is believed to have shortened WWII by two to four years and saved millions of lives. The goals for encrypting or encoding messages and signals have been and always will be as varied as they are interesting: from communicating securely with the front line in wartime, to increasing the communication capacity in cellular phone networks. A 'bit' of information represents the resolution of uncertainty between two distinct states/symbols. N bits of information resolve or distinguish between 2^N different states or symbols.

Figure 2.44: Military Enigma Machine. The three (of five) rotors with 26 positions each are clearly indicated. (Inset) Replica of the ‘Bombe’ decoding machine built by Turing at Bletchley Park during WWII.


In 1949, Claude Shannon calculated that the maximum amount of information that can be reliably communicated between a single source and receiver is given by the formula

C = \log_2\left(1 + \frac{S}{N}\right),

where S is the received signal power, N is the noise power, and the information capacity C is measured in bits per second per Hertz of bandwidth available for transmission (Shannon, 1948; Simon, 2001). Note that Shannon's equation is a statement about the channel through/over which the information is communicated. It does not say anything about the source that is producing the signals or messages. The average information produced by a source is called the entropy of the source and is also measured in bits. For a source producing n discrete symbols with probabilities p_1, …, p_n, the entropy is computed as

H = -\sum_{i=1}^{n} p_i \log_2 p_i.
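Both formulas are easy to evaluate numerically; the following few lines of Python (our illustration, not from the text) reproduce two standard sanity checks:

import numpy as np

def channel_capacity(snr):
    # Shannon capacity in bits/s/Hz for a given linear signal-to-noise ratio
    return np.log2(1.0 + snr)

def entropy(p):
    # Source entropy in bits for a discrete probability distribution
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # symbols with zero probability contribute nothing
    return -np.sum(p * np.log2(p))

print(channel_capacity(1000.0))  # ~10 bits/s/Hz at 30 dB signal-to-noise
print(entropy([0.5, 0.5]))       # 1 bit: a fair coin resolves two states
print(entropy([0.9, 0.1]))       # ~0.47 bits: a biased source carries less information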


Figure 2.45: Snapshots of the normalised amplitude of the ambient noise cross-correlation wavefield with TA station R06C (star) in common at the centre. Each of the 15–30s band-passed cross-correlations is first normalised by the rms of the trailing noise and fitted with an envelope function in the time domain. The resulting normalised envelope function amplitudes are then interpolated spatially. Two instants in time are shown, illustrating clear move-out and the unequal azimuthal distribution of amplitude. (Reproduced from Lin et al., 2009, with permission from Profs. Lin and Snieder.)

The concept of a measure of the average information content produced by a source is extremely important, and parallels concepts such as structure or sparseness, which we will briefly get back to in Section 2.8. We will now look at what limited hopes we have when encoding signals if we ignore, or do not know, the statistics of the signals produced by the source.

2.7.1  Encoding, Decoding and the Welch Bound
Often one does not have access to the source signals per se (i.e., one cannot encode them), but it is the structure in the source signals, together with a multiplicity of measurements, that enables the decoding or separation. In many other cases, however, one does have access to the source signals, and the question arises as to whether one can do better when encoding the source signals, and whether one can do away with the requirement of multiple measurements. One strategy could be to encode each source signal with a unique time-series, where the set of time-series has both low autocorrelation (apart from the central peak) and low mutual cross-correlation. The individual source wavefields can then be obtained from the encoded simultaneous-source signal simply by cross-correlating the recorded data with the respective encoding sequence.

In 1974, L. R. Welch derived a lower bound on the maximum cross-correlation of signals. This bound shows that, as one might expect, the maximum cross-correlation and off-peak autocorrelation values of a set of sequences cannot be arbitrarily low. Consider a set of M general, complex-valued sequences {a_n}, {b_n}, …, {m_n} of length N. The discrete aperiodic (non-periodic) correlation function between a pair of sequences is defined as

C_{a,b}(\tau) = \begin{cases} \sum_{n=0}^{N-1-\tau} a_n b_{n+\tau}^{*}, & 0 \le \tau \le N-1, \\ \sum_{n=0}^{N-1+\tau} a_{n-\tau} b_{n}^{*}, & -(N-1) \le \tau < 0, \\ 0, & |\tau| \ge N. \end{cases}

When a = b, this equation also defines the discrete aperiodic autocorrelation function. Let \theta_a denote the maximum out-of-phase (i.e., off-peak) autocorrelation value, let \theta_c denote the maximum cross-correlation value, and let \theta_{\max} = \max(\theta_a, \theta_c). Then the Welch bound is derived as follows (Welch, 1974):

\theta_{\max} \ge N \sqrt{\frac{M-1}{2MN - M - 1}}.

The Welch bound is a very useful result since it tells us, given the number of sequences in the set (M) and the length of the sequences (N), how low the maximum cross-correlation and off-peak autocorrelation values can be. Sequence sets which achieve the Welch lower bound exactly are known as Welch bound equality sets, the best-known of which is the Kasami sequence set.
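A small numerical experiment (ours, in Python) makes the bound concrete: random binary codes stay well above the Welch floor, which is precisely why carefully constructed families such as the Kasami set are valuable.

import numpy as np

def welch_bound(M, N):
    # Welch lower bound on the maximum off-peak auto/cross-correlation
    return N * np.sqrt((M - 1.0) / (2.0 * M * N - M - 1.0))

def theta_max(seqs):
    # Maximum |aperiodic correlation| over all sequence pairs and non-trivial lags
    M, N = seqs.shape
    worst = 0.0
    for i in range(M):
        for j in range(M):
            c = np.correlate(seqs[i], seqs[j], mode='full')  # all lags
            if i == j:
                c[N - 1] = 0.0  # exclude the zero-lag autocorrelation peak
            worst = max(worst, np.max(np.abs(c)))
    return worst

rng = np.random.default_rng(0)
M, N = 4, 255
seqs = rng.choice([-1.0, 1.0], size=(M, N))  # random binary (+/-1) sequences
print(welch_bound(M, N))  # ~9.8: no family of 4 length-255 codes can do better
print(theta_max(seqs))    # random codes typically land several times higher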

2.7.2  Nature’s Way of Encoding It turns out that random noise can be an extremely effective natural encoder. While earthquakes are recorded only intermittently, a seismic background noise wavefield is present and being continuously recorded. Usually, the exact origins of this wavefield are unknown. One source can be attributed to atmospheric pressure variations, which induce water waves that convert into low-frequency microseisms (faint earth tremors caused by natural phenomena) when crashing onto the shore. Despite the relatively low amplitude level, the continuous and random nature of such a background wavefield makes it possible to correlate it between receiver stations. Although the existence of these noise bands have been known for more than half a century, it was only in the early 2000s that seismologists realised that, by cross-correlating such noise wavefields at different receiver stations, a wealth of

information about the structure in between the correlated receivers can be extracted in the form of the inter-receiver Green's function. This process of cross-correlating (noise) data recorded at different receivers is now known as interferometry, and can also be thought of as turning one of the receivers into a virtual source (Figure 2.45). As noise data from any pair of receivers can be cross-correlated, the number of virtual sources and receivers that can be created using this method is proportional to the square of the number of receivers. It turns out that such inter-receiver Green's functions constitute ideal datasets for high-resolution surface wave tomography in regions where there is ample background noise and many receivers but few earthquakes. Thus, nature had conveniently encoded the information for us, but it took us some time to understand how to decode it!

A similar strategy for interferometric modelling of Green's functions between a large number of points in the interior of a model has been proposed by van Manen et al. (2005). In that case, however, the data do not come for free: the Welch bound predicts that the quality of the data after separation of the simultaneous simulation is proportional to the square root of the simulation time (Figure 2.46).

Figure 2.46: (a) Snapshots of the multiply-scattered reference wavefield when placing a source in one of the receiver positions (blue triangles). (b) Retrieved Green's function (blue), obtained by cross-correlating and stacking an increasing number of 65,536-sample segments of noise-encoded wavefields recorded at the receiver locations, versus the reference waveform (red). As expected from Welch's bound, the quality of the decoding improves as the square root of the number of segments stacked (or the total length of the simulation).
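The decoding step itself is nothing more than a cross-correlation. The toy 1D example below (our sketch in Python, not the algorithm of van Manen et al.) shows the principle: one long random noise signal is recorded at two receivers, and the cross-correlation of the two records peaks at the inter-receiver traveltime.

import numpy as np

rng = np.random.default_rng(1)
nt = 200_000                     # length of the noise record (samples)
noise = rng.standard_normal(nt)  # continuous random source signal

def record(delay):
    # Toy impulse response: a direct arrival plus a weak scattered tail
    h = np.zeros(300)
    h[delay] = 1.0
    h[delay + 90] = 0.3
    return np.convolve(noise, h)[:nt]

rec_a = record(100)  # the noise reaches receiver A after 100 samples
rec_b = record(160)  # ... and receiver B after 160 samples

# Cross-correlating the two noise records retrieves the inter-receiver
# traveltime (160 - 100 = 60 samples), as in ambient-noise interferometry.
xcorr = np.correlate(rec_b, rec_a, mode='full')
print(np.argmax(xcorr) - (nt - 1))  # -> 60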

2.7.3  Multiple Scattering: Friend or Foe?
Multiple scattering adds significant complexity to the Green's functions (impulse responses). Therefore, if one considers every impulse response to be a realisation from one or more stochastic information sources, one could say that the multiple scattering significantly increases the entropy of the information sources. Another way to say this is that the number of degrees of freedom in the source signals increases dramatically if we know that the signals do not consist only of, e.g., primary reflections. Fortunately, in the interferometric applications we do not have to worry about this additional complexity when reconstructing the virtual source responses in ambient noise surface wave interferometry, or when encoding Green's functions between points in the interior, since interferometry intrinsically reconstructs the full Green's function.

It is also interesting to consider the role of multiple scattering when it is present in the medium through which one wants to communicate, i.e. when the multiple scattering is part of the communication channel and not the source signals. Until the late 1990s, the reigning paradigm was that multiple scattering hinders communication and lowers the maximum rate of communication, rather as the noise term in Shannon's equation lowers the information capacity. It turns out that the opposite is true. A first clue came in the classic paper by Derode and co-workers from 1995, who demonstrated that it is possible to time-reverse a high-order multiply-scattered wavefield through the actual scattering medium, to achieve super-resolution focusing at the original source location. The multiple scattering effectively enlarged the aperture of the linear array of transducers, such that focusing well below the free-space diffraction limit was achieved. Around the same time, a researcher at Bell Labs named Gerry Foschini realised that the multitude of paths


in a scattering medium actually helps to increase the rate at which information can be transferred. Conceptually, this can be thought of as sending different messages over the different multiple scattering paths (Figure 2.47) (Simon et al., 2001). What is more, the encoding and decoding algorithm that Foschini proposed, which realises the higher rates of communication, does not require knowledge of the details of the scattering environment. When keeping the total transmitted power constant, but using MT transmitters and MR receivers, it turns out the channel capacity can be roughly MT times as large.

These developments come full circle in a more recent contribution by Derode et al. (2003), which shows how to exploit multiple scattering when communicating with time-reversal antennas. Taking advantage of the super-resolution enabled by acoustic time-reversal, they can transmit random bit series to receivers that are only a few wavelengths apart. In contrast, in a homogeneous medium the communication breaks down completely, as the individual bit series are no longer resolved (Figure 2.48). The transfer rate is directly proportional to the number of eigenvalues of the time-reversal operator at a given frequency and increases with increasing multiple scattering.

Thus, to answer the question posed above: if you are trying to recover the multiply-scattered wavefield between two arbitrary receivers, and you are measuring in the presence of a strong and somewhat uniform ambient noise field, chances are that nature has already encoded the desired wavefield for you, and all you will have to do is measure the ambient noise field for long enough at the receivers and decode it using interferometry (cross-correlation), implicitly making use of Welch's bound. Similarly, if you want to communicate through a high-order multiple scattering medium, the multiple scattering can actually help you achieve higher transfer rates. Thus, in both these cases, multiple scattering is more a friend than a foe.

In the next section, we will consider the case where we are not interested in an interferometric construction of seismic data, but rather want to decode the original sources, including their multiply-scattered waves, directly, as we consider the marine simultaneous source separation problem. As we will see, geophysicists have a few more tricks they can bring to bear on the seismic encoding and decoding problem.

Figure 2.47: Scattering can enhance communication. In the absence of the scatterer, a single linear combination of the blue and the red signals is measured. The scatterer causes a second, different linear combination of the blue and the red signals to be measured. With the antennas displaced slightly, it becomes possible to separate the signals using an inverting linear transformation and increase the transmission rate. Modified from Simon et al. (2001).

Figure 2.48: (Left) Experimental time-reversal setup (plan view) consisting of a forest of steel rods. Initially, a source is placed at the location of each receiver (at x = −30m) and the impulse response is recorded on a 23-element array of transducers (at x = 30m). (Right) The scattered signals are time-reversed and used to encode a bit stream. Through a multiple-scattering medium (top), five short pulses very well focused on each receiver can be observed. Through water (bottom), the pulses overlap, indicating strong cross-talk between the receivers. After Derode et al. (2003).
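Foschini's observation can be illustrated with the standard log-det capacity formula for a rich-scattering channel. The sketch below is our illustration in Python; the i.i.d. complex Gaussian matrix is a conventional stand-in for an unknown scattering environment, and equal power allocation across antennas is assumed.

import numpy as np

rng = np.random.default_rng(4)

def mimo_capacity(m, snr, trials=200):
    # Average capacity (bits/s/Hz) of an m x m rich-scattering channel,
    # with total transmitted power held constant (hence snr/m per antenna)
    total = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
        G = np.eye(m) + (snr / m) * (H @ H.conj().T)
        total += np.linalg.slogdet(G)[1] / np.log(2.0)  # log2 det of a PD matrix
    return total / trials

for m in (1, 2, 4, 8):
    print(m, round(mimo_capacity(m, snr=100.0), 1))
# Capacity grows roughly in proportion to the number of antennas:
# the scattering paths act as parallel communication channels.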


2.8  Simultaneous Source Separation

Truth is ever to be found in simplicity, and not in the multiplicity and confusion of things. Sir Isaac Newton

Guest Contributors: Dirk-Jan van Manen and Johan Robertsson, ETH Zurich

Marine seismic data acquisition is changing rapidly. One of these developments is the use of so-called 'simultaneous sources', where multiple seismic sources that intentionally interfere with one another are activated simultaneously. Proposed and demonstrated over 15 years ago, the subject of marine simultaneous sources saw long fallow periods when nothing seemed to happen, until the uptake of wide-azimuth surveys and, more recently, until the industry needed to improve the way in which geophysicists deliver high-quality data while maintaining viability in a cost-focused market. The goal is to challenge conventional thinking and explore what could be the solutions of the future.

Traditionally, seismic data have been acquired sequentially: an impulsive source is fired and data are recorded until the energy that comes back has diminished and all reflections of interest have been captured, after which a new shot at a different location is fired. Being able to acquire data from several sources at the same time is clearly highly desirable, as the example acquisition configuration in Figure 2.49 suggests. Not only would it allow us to cut expensive acquisition time drastically, but it would better sample the wavefield on the source side, which typically is much more sparsely sampled than the distribution of receiver positions. It would also allow for better illumination of the target from a wide range of azimuths

and would acquire more data from the wavefield in areas with surface obstructions. In addition, for some applications such as 3D VSP acquisition, or marine seismic surveying in environmentally sensitive areas, reducing the duration of the survey is critical in order to save costs external to the seismic acquisition itself (e.g., down-time of a producing well) or to minimise the impact on marine life (e.g., avoiding mating or spawning seasons of fish species).

2.8.1  Simultaneous Source Seismic Data Acquisition
Simultaneous sources have a long history in land seismic acquisition, dating back at least to the early 1980s.

Figure 2.49: Example of using multiple sources on one vessel to increase cross-line sampling density. Top: In conventional sequential shooting, one source (green) is fired at a first location, yielding the subsurface points dotted green, and the second source (yellow) is fired at a second location, typically 25m away, giving the subsurface points dotted yellow. In simultaneous shooting, both sources are fired together, thus improving the cross-line trace density. Bottom: By extending the concept to a five-source configuration, a significant increase in data density is achieved. The challenge for the geophysicist is to intelligently 'encode' the sources so that the overlapping data measurements can be 'decoded' into data that would be acquired from each individual source. (Courtesy Polarcus.)


Figure 2.50: Four-vessel approach to wide-azimuth seismic, utilising two recording vessels towing streamers with sources, plus two source-only vessels. The distance between each vessel is typically 1,200m. (WesternGeco; Moldoveanu et al., 2008.)

Commonly used on land are vibroseis sources, which offer the possibility of illuminating the subsurface with source signal sweeps designed to 'share' the use of certain frequency bands, avoiding simultaneous interference at a given time from different sources. By carefully choosing source sweep functions, activation times and locations of different vibroseis sources, it is possible to mitigate interference between sources. Such approaches are often referred to as slip-sweep acquisition techniques. Moreover, it is also possible to design sweeps that are mutually orthogonal to each other, such that the response from different sources can be isolated after acquisition through simple cross-correlation procedures with the sweep signals from individual sources.

The use of simultaneous source acquisition in marine seismic applications is more recent, as marine seismic sources (i.e. airguns) do not appear to yield the same benefits in providing orthogonal properties as land seismic vibroseis sources do, at least not at first glance. Western Geophysical was among the early proponents of marine simultaneous source seismic acquisition, and suggested that separation was carried out as a pre-processing step by assuming that the reflections caused by the interfering sources have different characteristics. Beasley et al. (1998) exploited the fact that, provided the subsurface structure is approximately layered, a simple simultaneous source separation scheme can be achieved, for instance, by having one source vessel behind the spread acquiring data simultaneously with the source towed by the streamer vessel in front of the spread. Simultaneous source data recorded in such a fashion are straightforward to separate after a frequency-wave number (fk) transform, as the source in front of the spread generates data with only positive wave numbers, whereas the source behind generates only negative wave numbers.
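The sweep-encoding idea can be demonstrated in a few lines. The toy example below is ours, in Python, and is not a production slip-sweep workflow: two sources emit different random encoding signals at the same time, and correlation with each pilot signal pulls out that source's arrival.

import numpy as np

rng = np.random.default_rng(2)
ns, nt = 2000, 20_000  # encoding-signal length and record length (samples)

# Two near-orthogonal encoding signals: independent random sequences
sweep1 = rng.standard_normal(ns)
sweep2 = rng.standard_normal(ns)

# Each source sees one reflector; the two responses overlap in the record
record = np.zeros(nt)
record[5000:5000 + ns] += 0.8 * sweep1  # reflection of source 1 at sample 5000
record[6000:6000 + ns] += 0.5 * sweep2  # reflection of source 2 at sample 6000

# Correlate the blended record with each pilot: each correlation peaks at its
# own source's reflection time, while the other source appears as weak noise.
c1 = np.correlate(record, sweep1, mode='valid')
c2 = np.correlate(record, sweep2, mode='valid')
print(np.argmax(np.abs(c1)), np.argmax(np.abs(c2)))  # -> 5000 6000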

2.8.2  The Stochastic Approach
Another method for enabling or enhancing separability is to make the delay times between interfering sources incoherent (Lynn et al., 1987). Since the shot time is known for each source, the recorded data can be aligned coherently for a specific source in, for instance, a common receiver gather or a common offset gather, where all arrivals from all other simultaneously firing sources will appear incoherent. To a first approximation it may be sufficient to simply process the data for such a shot gather to a final image, relying on the processing chain to attenuate the random interference from the simultaneous sources. It is, of course, possible to achieve better results, for instance by using random noise attenuation to separate the coherent signal from the apparently incoherent signal (Stefani et al., 2007).

In recent years, with elaborate acquisition schemes to, for example, acquire wide-azimuth data with multiple source and receiver vessels (Figure 2.50), simultaneous source acquisition has become a hot topic of research, and several authors have described methods that separate 'random dithered sources' through sophisticated inversion approaches which exploit the sparse nature of seismic data in the time domain (i.e., seismic traces can be thought of as a subset of discrete reflections with 'quiet periods' in between). Figure 2.51 illustrates the results of such an approach by co-workers at Chevron, presented in a paper by Akerberg et al. (2008).

Figure 2.51: Simultaneous source acquisition separation result from a dataset from the Gulf of Mexico. (a) A CMP gather of input, primary estimate, secondary estimate and residual. (b) A stack section without secondary suppression. (c) The same section as in (b), but with secondary suppression and subsequent diversity noise attenuation. (Reproduced from Akerberg et al., 2008. Chevron is thanked for the permission to show the dataset.)

A different approach to simultaneous source separation has been to modify the source signature emitted by airgun sources, which comprise several (typically three) sub-arrays along which multiple clusters of smaller airguns are located. Although, in contrast to land vibroseis sources, it is not possible to design arbitrary source signatures for marine airgun sources, in principle one has the ability to choose the firing time (and amplitude, i.e., volume) of individual airgun elements within the array, meaning it is possible to choose source signatures that are dispersed as opposed to focused in a single peak. Such approaches have been proposed in the past to reduce the environmental impact (Ziolkowski, 1987), but also for simultaneous source shooting. Abma et al. (2015) suggested using a library of 'popcorn' source sequences to encode multiple airgun sources such that the responses can be separated after simultaneous source acquisition by correlation with the corresponding source signatures, following a practice that is similar to land simultaneous source acquisition. The principle is based on the fact that the cross-correlation between two (infinite) random sequences is zero, whereas the autocorrelation is a spike. It is also possible to choose binary encoding sequences with better or optimal orthogonality properties, such as the Kasami sequences discussed in Section 2.7, to encode marine airgun arrays (Robertsson et al., 2012). Mueller et al. (2015) propose to use a combination of random dithers from shot to shot with deterministically encoded source sequences at each shot point.

Recently, there has been industry interest in exploring the feasibility of marine vibroseis sources, as they would, among other things, appear to provide more degrees of freedom to optimise mutually orthogonal source functions beyond just binary orthogonal sequences, which would allow for a step change in simultaneous source separation of marine seismic data. However, we believe the engineering challenges of a marine vibroseis source are immense, and the robustness and broad spectrum of the marine airgun array are very hard to match using other source technologies.

2.8.3  The Deterministic Approach: Seismic Apparition
A recent development, referred to as 'seismic apparition', suggests an alternative approach to deterministic simultaneous source acquisition. Robertsson et al. (2016) show that by using simple modulation functions from shot to shot (e.g., a short time delay or an amplitude variation), the recorded data on a common receiver gather or a common offset gather will be deterministically mapped onto known parts of, for example, the fk-space outside the conventional 'signal cone' where conventional data are strictly located (Figure 2.52a). The signal cone contains all propagating seismic energy with apparent velocities between the water velocity (straight lines with apparent slowness of ±1/1,500 s/m in fk-space) and infinite velocity (i.e., vertically arriving events plotting on a vertical line at wave number 0). The shot modulation generates multiple new signal cones that are offset along the wave number axis, thereby populating the fk-space much better and enabling exact simultaneous source separation below a certain frequency (Figure 2.52b). Robertsson et al. (2016) referred to the process as 'wavefield apparition' or 'signal apparition' in the meaning of 'the act of becoming visible'. In the spectral domain, the wavefield caused by the periodic source sequence is nearly 'ghostly apparent' and isolated.

Figure 2.52: (a) In geosciences, we plot time-offset data (not shown) in frequency-wave number (fk) diagrams to examine the direction and apparent velocity 2πf/k of seismic waves. Here fm is the maximum frequency, ks the spatial sampling frequency, and kn = ks/2 the spatial Nyquist frequency. Typically, seismic data plot into the fk signal cone bounded by the dashed white line, determined by the water speed. Observe that there is plenty of 'space' in fk that has no signal. This is utilised in seismic apparition by designing periodic source sequences that place seismic energy into this empty space. (b) For two sources shot simultaneously, the seismic apparition technique samples the two source wavefields at spatial frequencies ks and ks/2, respectively. The cone centred around k = 0 contains information about both source wavefields (overlapping signals). The cones centred around kn, however, only contain information from the source that has been fired in a periodic way. By intelligent data processing, the data from the two sources can be decoded.

The word 'spectrum' was introduced by Newton (1672) in relation to his studies of the decomposition of white light into a band of light colours, when passed through a glass prism (Figure 2.53). This word seems to be a variant of the Latin word 'spectre', which means 'ghostly apparition'.
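The modulation mechanics behind apparition can be checked numerically: multiplying every second shot by a different amplitude splits the spectrum into a part at the original wavenumbers plus a replica shifted to the Nyquist wavenumber kn. A minimal sketch (ours, in Python, 1D along the shot coordinate; the data are random stand-ins for a recorded wavefield):

import numpy as np

rng = np.random.default_rng(3)
nx = 256                     # number of shot positions
d = rng.standard_normal(nx)  # wavefield samples along the shot axis (toy data)

# Periodic amplitude modulation from shot to shot: a, b, a, b, ...
a, b = 1.0, 0.6
m = np.where(np.arange(nx) % 2 == 0, a, b)

# m_n = (a+b)/2 + (a-b)/2 * (-1)^n, and multiplying by (-1)^n shifts a
# spectrum by the Nyquist wavenumber. So the FFT of the modulated data is a
# scaled copy of D(k) plus a scaled copy of D(k) shifted by nx/2:
D = np.fft.fft(d)
D_mod = np.fft.fft(m * d)
predicted = 0.5 * (a + b) * D + 0.5 * (a - b) * np.roll(D, nx // 2)
print(np.allclose(D_mod, predicted))  # -> True

Because the source variations (a, b) are known, the shifted replica fully determines the overlapping part of the spectrum, which is the key to the exact separation described above.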


critical observation and insight in the ‘seismic apparition’ approach is that partially shifting energy along the k-axis is sufficient as long as the source variations are known, as the shifted energy fully predicts the energy that was left behind in the ‘conventional’ signal cone. Following this methodology, simultaneously emitting sources can be exactly separated using a simple modulation scheme where, for instance, amplitudes and/or firing times are varied deterministically from shot to shot in a periodic pattern.
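To make the mechanism concrete, here is a minimal numerical sketch in Python (our own illustration, not taken from the references): the 'signal cone' is idealised as a band-limited spectrum along a single shot coordinate, and source B is given an assumed ±20% amplitude modulation from shot to shot. The modulation maps a scaled replica of B into the empty part of the wavenumber spectrum around Nyquist, where it can be read off and both sources recovered exactly.

import numpy as np

rng = np.random.default_rng(0)
n = 256                                       # number of shot points

def bandlimited(n, kmax, rng):
    # Random real sequence whose spatial spectrum is confined to |k| <= kmax
    spec = np.zeros(n, dtype=complex)
    k = np.fft.fftfreq(n) * n                 # integer wavenumber indices
    keep = np.abs(k) <= kmax
    spec[keep] = rng.normal(size=keep.sum()) + 1j * rng.normal(size=keep.sum())
    return np.fft.ifft(spec).real

a = bandlimited(n, n // 8, rng)               # wavefield of source A ('signal cone')
b = bandlimited(n, n // 8, rng)               # wavefield of source B
eps = 0.2
m = 1.0 + eps * (-1.0) ** np.arange(n)        # periodic shot-to-shot modulation of B
d = a + m * b                                 # blended simultaneous-source record

D = np.fft.fft(d)                             # = A(k) + B(k) + eps*B(k - Nyquist)
k = np.fft.fftfreq(n) * n
B_est = np.roll(D, -n // 2) / eps             # read off the 'apparition' replica
b_rec = np.fft.ifft(np.where(np.abs(k) <= n // 8, B_est, 0.0)).real
a_rec = d - m * b_rec                         # subtract modulated B to obtain A
print(np.allclose(b_rec, b), np.allclose(a_rec, a))   # True True: exact separation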

2.8.4  Call for Revolution?

It has been suggested that simultaneous source separation is an unnecessary step, as all that matters is the spatial and temporal spectrum of the energy that illuminates the subsurface. It has been argued that as long as the conventional approach of regarding seismic data as a sequence of single shot records is abandoned, it would be possible to create images of the subsurface from the total energy that illuminates it, whether from surface-related multiples or simultaneously emitting sources. Although such a philosophy appears appealing, it would call for a revolution in the way that seismic data are processed and imaged, quite different from the conventional approach of imaging based on the separation of the velocity model-building step from the imaging step, introduced by Jon Claerbout in the 1960s.

2.8.5  Other Applications

Finally, simultaneous source acquisition has other applications of interest to seismic imaging. In particular, in modelling and inversion the computational cost could be reduced proportionally if it were possible to generate the response to multiple sources in a single finite-difference simulation, for example. Although there are some promising developments on the horizon, the silver bullet to crack this problem has not yet appeared. The key to unlock this grand challenge may well lie somewhere amongst the approaches discussed in this chapter.
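As a small illustration of why simultaneous sources can cut modelling cost, the following Python sketch (ours; the matrix G merely stands in for one run of a linear modelling operator) shows the random source encoding idea: by linearity, a single simulation of a randomly weighted source blend equals the weighted stack of the individual simulations.

import numpy as np

rng = np.random.default_rng(1)
n_src, n_rec = 8, 16
G = rng.normal(size=(n_rec, n_src))        # stand-in for a linear modelling operator

records = G @ np.eye(n_src)                # conventional: one simulation per source
w = rng.choice([-1.0, 1.0], size=n_src)    # random +/-1 encoding weights
blended = G @ w                            # encoded: one simultaneous-source simulation

# By linearity the blended record is the weighted stack of individual records,
# so misfits and gradients built from it are unbiased over re-drawn encodings.
print(np.allclose(blended, records @ w))   # True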

Figure 2.53: The word ‘spectrum’, from the Latin word ‘spectre’ which means ‘ghostly apparition’, was introduced by Newton (1672) in relation to his studies of the decomposition of white light into a band of light colours, when passed through a glass prism. Prior to Newton’s work, people believed that colour was a mixture of light and darkness, and that prisms coloured light. To show that they did not, Newton refracted the light back into single white light. ‘Seismic apparition’ is a buzz phrase, introduced by Robertsson and Amundsen in relation to simultaneous source separation. Engraving after a picture by John Adam Houston (1811–1884), ca. 1870.



Chapter 3

Marine Seismic Sources and Sounds in the Sea

If learning the truth is the scientist's goal… then he must make himself the enemy of all that he reads.
Ibn al-Haytham (965–1040)

This chapter on marine seismic sources will summarise salient points for geoscientists who need to sharpen their rusty skills in seismic source technology and sound in the sea. It will also discuss the effect seismic sources have on marine life.


3.1  Airguns for Non-Experts

The airgun has long been the most popular marine seismic source.

Double, double, toil and trouble; airguns fire and ocean bubble.
With apologies to Shakespeare's three witches in Macbeth, and thanks to our colleagues Bill Dragoset and Jan Langhammer.


A seismic source is defined as any device which releases energy into the earth in the form of seismic waves. The major source type in marine exploration is the airgun array, which since the 1970s has been by far the most popular. The airgun can be described as a chamber of compressed air that is rapidly released into the surrounding water to create an acoustic pulse. The airgun is the most commonly used source because its pulses are predictable, repeatable and controllable; because it uses compressed air, which is cheap and readily available; and because it has only a minor impact on marine life.

3.1.1  Size and Geometry

An airgun volume is measured in litres (l) or, more commonly by the conservative petroleum geophysicist, in cubic inches (in3). Typical volumes of individual airguns used by the exploration industry vary from 20 in3 (0.3 litres) to 800 in3 (13.1 litres), while academic seismic refraction studies can use volumes up to 1,600 in3 (26.2 litres). An airgun array consists of 3–6 subarrays called strings, each string containing 6–8 individual guns, so that the array usually involves between 18 and 48 guns, although in special cases as many as 100 guns in an array can be used. The airgun array volume is the sum of the volumes of each gun, and is typically in the range 3,000–8,000 in3 (49.2–131.6 litres). The airguns hang in the sea beneath floats between 3m and 10m below the sea surface, generally at about 6m, except in refraction studies when a deeper deployment is needed. The gun pressure most commonly used by the seismic industry is 2,000 psi (138 bar). During a survey the guns fire every 10–15 seconds. It is common to arrange several (2–4) airguns in a cluster, with the guns so close together that they behave as a larger single gun. The main purpose of clustering is to improve signal characteristics, since the bubble motion (see Section 3.1.2.1) is reduced by this configuration.

The energy sent out by airgun arrays is predominantly directed vertically downwards. The broad band of frequencies from the array forms a pulse with peak-to-peak amplitude in the range 14–28 bar-m, corresponding to 243–249 dB re 1 μPa-m vertically downward. The amplitude levels emitted horizontally tend to be 15–24 dB lower. These numbers are frequency dependent; by filtering out high frequencies there is less deviation between the amplitude levels vertically and horizontally. Here, 'dB re 1 μPa-m' means the decibel value peak-to-peak relative to the reference pressure of one micropascal at a reference distance of one metre. Confusing units? Read the box on definitions for a guide to the physical principles of airguns and the basic sound measurement units. Our focus for the moment is on the vertically downward travelling 'far-field' signature of an airgun array, as this signature provides a quantitative measure of the array's performance.


Figure 3.1: Deployment of airgun subarray (or string).

3.1.2  Bubble Oscillations

When compressed air is suddenly released into the water an oscillating bubble forms. This process is described in Parkes and Hatton (1986): "Initially, the pressure inside the bubble greatly exceeds the hydrostatic (external) pressure. The air bubble then expands well beyond the point at which the internal and hydrostatic pressures are equal. When the expansion ceases, the internal bubble pressure is below the hydrostatic pressure, so that the bubble starts to collapse. The collapse overshoots the equilibrium position and the cycle starts once again. The bubble continues to oscillate, with a period typically in the range of tens to hundreds of milliseconds." The oscillation is stopped due to frictional forces, and the buoyancy of the bubble causes it to break the sea surface.

Definitions

Hydrophones are sensitive to sound pressure, which is measured in micropascals (µPa). By tradition, the geophysicist uses a different pressure unit, the microbar (μbar). One bar is equivalent to 10^11 µPa. The far-field signature of an airgun array measured vertically beneath it is used to define the nominal source level. This is the acoustic pressure 1m away from its hypothetical point-source equivalent that would radiate the same amount of sound in the far field as the actual source. Units are bars at one metre, abbreviated as bar-m. For example, 100 bar-m means that if the array were a point source and a hydrophone were 50m away, then the hydrophone would detect a pressure of two bar. In presenting sound measurements, acousticians use ratios of pressures; in underwater sound the adopted reference pressure is one µPa. Furthermore, acousticians adopt the decibel (dB) scale, so that the sound pressure level (SPL) of a sound of pressure S is SPL (dB) = 20 log10(S/S0), where S0 is the reference pressure. The standard for specifying airgun signal levels is the peak-to-peak (P-P) level, which is the maximum negative-to-positive measurement of the airgun signature. The seismic survey literature refers to P-P pressure amplitudes in bar-m. P-P can be converted to source level LS in dB re 1 µPa-m as follows:

LS (dB re 1 µPa-m) = 20 log10(P-P) + 220.

Acoustic pressures of 10–20 bar-m correspond to 240–246 dB re 1 µPa-m P-P. The source amplitude spectrum level gives source amplitude strength versus frequency. It is common to normalise the amplitude spectrum in dB relative to 1 µPa/Hz at 1m, abbreviated as dB re 1 µPa/Hz-m. Because the reference pressure 1 µPa is a small pressure, a moderately sized airgun array will have a spectrum that peaks 200 dB above this reference level.

Figure 3.2: Pictures of air bubbles 1 ms, 1.5 ms, and 7 ms after firing a small airgun in a tank. (Langhammer, 1994.)
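The conversion between bar-m and dB re 1 µPa-m in the Definitions box is easy to script. A minimal sketch (our own illustration; the 102 bar-m value anticipates the 3,397-in3 array discussed in Section 3.1.4):

import math

def source_level_db(pp_bar_m):
    # L_S (dB re 1 µPa-m) = 20*log10(P-P) + 220, since 1 bar = 10^11 µPa
    return 20.0 * math.log10(pp_bar_m) + 220.0

for pp in (10.0, 20.0, 102.0):
    print(pp, round(source_level_db(pp), 1))   # 240.0, 246.0 and 260.2 dB re 1 µPa-m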

If we could stop this cyclic motion immediately after the first expansion of the bubble, the airgun would create an ideal signal close to a single spike. However, due to the cyclic bubble motion, the signal from a single airgun is far from ideal. As explained below, the dominant frequency of the bubble oscillations decreases with increasing gun volume, with increasing gun pressure, or with decreasing source depth. Therefore, small guns emit higher frequencies and big guns emit lower frequencies – like a bell choir. The geophysicist wants a broad band of frequencies. However, as we will explain, the source ghost (see Section 3.1.3) has a detrimental effect on the broadband spectrum.

Figure 3.3: Part of an airgun array onboard a vessel. (© WesternGeco.)

3.1.2.1  Bubble Motion

The physics of air bubbles from marine airguns is complex. The temperature inside the bubble can drop to around minus one hundred degrees Celsius, causing ice crystals to form. There is also evidence that the bubble comprises a number of small bubbles rather than one large one. Nevertheless, to a first approximation, the parameters that govern the time behaviour of the bubble oscillation are surprisingly 'simple':

T = C1 P^(1/3) V^(1/3) / P0^(5/6).

Here, P and V are the initial internal pressure (firing pressure) and volume of the air before it is released, and P0 is the hydrostatic pressure. C1 is a constant that depends upon the airgun details. The dominant frequency of the bubble motion, f = 1/T, then decreases with increasing gun volume, with increasing gun pressure, or with decreasing hydrostatic pressure (and thus, source depth). By tradition, the seismic industry quotes firing pressure in pound force per square inch, psi (1 psi = 6,894.76 pascals; a pascal is a newton per square metre).
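The scaling behaviour of the formula is easily explored numerically. In the Python sketch below (a toy calculation, with C1 calibrated to an assumed reference point of T = 60 ms for a 40-in3 gun at 2,000 psi and 6m depth; this calibration is illustrative, not a gun specification), the bubble period grows with the cube root of the volume:

def bubble_period_s(volume_in3, firing_psi, depth_m, C1):
    # T = C1 * P^(1/3) * V^(1/3) / P0^(5/6), with P0 the hydrostatic pressure
    p0_atm = 1.0 + depth_m / 10.33            # roughly 10.33m of water per atmosphere
    return C1 * firing_psi ** (1/3) * volume_in3 ** (1/3) / p0_atm ** (5/6)

# Calibrate C1 from the assumed reference point (40 in3, 2,000 psi, 6m, 60 ms):
C1 = 0.060 * (1.0 + 6.0 / 10.33) ** (5/6) / (2000.0 ** (1/3) * 40.0 ** (1/3))

for v in (40, 100, 250, 600):
    ms = 1000.0 * bubble_period_s(v, 2000.0, 6.0, C1)
    print(v, round(ms))   # ~60, 81, 111 and 148 ms: bigger guns ring at lower frequency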

Figure 3.4: Seismic vessel towing two airgun arrays at 6m depth. One array, consisting of three strings with a total of 24 guns, is sketched in plan view. It measures 15m x 16m (inline x cross-line). The numbers are gun volumes in in3. The total volume is 3,397 in3. (Courtesy Schlumberger.)

3.1.2.2  Airgun Strength

The strength of an array is:
1. Linearly proportional to the number of guns in the array. All else being equal, a 40-gun array generates twice the amplitude of a 20-gun array.
2. Close to linearly proportional to the firing pressure of the array. A 3,000-psi array has 1.5 times the amplitude of a 2,000-psi array.
3. Roughly proportional to the cube root of its volume.
The number of guns provides the greatest influence on the array strength, not the firing pressure nor the total volume.

In the marine setting, the time domain pressure pulse that is emitted by the single airgun in the vertical direction is called the pressure signature. However, another pulse simultaneously travels upward from the source, is reflected down at the sea surface and joins the original downward-travelling pressure pulse. This delayed pulse, reflected at the sea surface, is called the source ghost (see Figure 3.6).

Figure 3.5: The air bubble from a 600 in3 gun at 3m depth. The released air produces a steep-fronted shock wave that is observed as ripples on the sea surface (first two pictures) before the bubble breaks the surface. (© Statoil.)

3.1.3  The Source Ghost

In the early days of land seismic exploration using dynamite sources in boreholes, geophysicists noted a remarkable difference in the measured seismic data caused by differences in source depth (Leet, 1937; Ricker, 1951; Sengbush, 1983). Van Melle and Weatherburn (1953) showed that this was the result of energy that travelled upward initially and then was reflected downwards at the base of the weathering and other discontinuities that were present above the source. They dubbed these reflections, by optical analogy, ghosts.

From a data processing point of view, the source ghost is often considered to be an intrinsic feature of the source wavefield, and is then included in the definition of the source pressure signature. The physics of the source ghost is simple. The upward-travelling part of the source signal cannot escape into the air; the sea surface acts as a mirror and reflects the signal downwards with opposite polarity. This source ghost pulse is delayed in time with respect to the initial downwards primary pulse from the source. Geometrically, the source ghost appears to originate from the mirror image of the seismic source. The time delay therefore is given by:

τ = 2d cos(θ)/c

where d is the source depth, c is the speed of sound in water, and θ is the offset angle of the initial downwards primary pulse measured from the vertical axis.

Figure 3.6: The source ghost reflects at the sea surface to join the primary downwards pulse from the source. The ghost appears to originate from a virtual source at the mirror location of the physical source.

The ghost function frequency spectrum of the composite


signal (primary and ghost) is:

|G(f)| = |1 – exp(2πifτ)| = 2|sin(2πfd cos(θ)/c)|.

This spectrum has zeroes, called 'ghost notches', at the frequencies

fn = nc / (2d cos(θ)), n = 0, 1, 2, …
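These two expressions are simple to evaluate. A minimal Python sketch (our own check, assuming c = 1,500 m/s and vertical incidence) reproduces the notch positions used in the discussion of Figure 3.7 below:

import numpy as np

def ghost_amplitude(f_hz, depth_m, theta_deg=0.0, c=1500.0):
    # |G(f)| = 2*|sin(2*pi*f*d*cos(theta)/c)|
    theta = np.radians(theta_deg)
    return 2.0 * np.abs(np.sin(2.0 * np.pi * f_hz * depth_m * np.cos(theta) / c))

def notch_frequencies_hz(depth_m, n_max, theta_deg=0.0, c=1500.0):
    # f_n = n*c / (2*d*cos(theta)), n = 0, 1, 2, ...
    n = np.arange(n_max + 1)
    return n * c / (2.0 * depth_m * np.cos(np.radians(theta_deg)))

print(notch_frequencies_hz(6.0, 2))    # [  0. 125. 250.]: the 6m source
print(notch_frequencies_hz(15.0, 2))   # [  0.  50. 100.]: the 15m source
print(ghost_amplitude(62.5, 6.0))      # 2.0, i.e. the +6 dB constructive maximum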

The first notch (n = 0) is always at f0 = 0 Hz. Normally, the useable frequency band in seismic is considered to be those frequencies that lie inside the first and second notch frequencies (between f0 and f1) of the spectrum of the composite signal.

The source depth strongly influences the frequency band of seismic data. In Figure 3.7 we display amplitude spectra of two ghost functions related to sources at depths d1 = 6m and d2 = 15m below a free surface. Constructive and destructive interference is shown by positive and negative dB respectively; observe that an amplitude of two (due to constructive interference) corresponds to +6 dB. Due to the source ghost effect, it seems advantageous to deploy the sources deeper to enhance the low frequency content of seismic data. But, as we see, deep sources reduce the seismic bandwidth because of the destructive interference (notches) between the initially downgoing signal and the ghost. Deploying sources at shallower depth has the opposite effect: a shallow source provides improved high frequency content at the cost of degraded low frequency content due to the ghosting effect.

Figure 3.7: Amplitude spectra of the ghost function G(f) in the vertical direction (θ = 0) for sources at depths d1 = 6m (red curve) and d2 = 15m (blue curve) below a free surface. The sound velocity is c = 1,500m/s. The deeper source ghost function has the strongest low-frequency spectrum but less bandwidth compared to the shallower source ghost function.

Deghosting – which is the process of ghost elimination – is a long-standing problem (see, for example, Robinson and Treitel, 2008), and there is an extensive literature in this area with papers that provide background, motivation and methods. Source-side deghosting has largely been an unsolved problem and research in this field is intensive today.

To enjoy the best of both worlds, with low frequencies and large bandwidth, one solution is to introduce over/under solutions, where sources are deployed at two different depths (Amundsen et al., 2017a). Assume that we combine a point source at 6m with one at 15m, both firing at time zero. Then the ghost function spectrum related to the over/under source becomes that displayed by the black line in Figure 3.8; it does not have a notch at 50 Hz (as the deeper source alone produces) but above, at 71.4 Hz, in addition to notches at 83.3, 142.9 and 214.3 Hz. Compared to shooting with a single source at 6m, we have gained lower frequencies but lost higher frequencies that will be challenging to deghost in data processing. However, the industry standard is to synchronise the two sources by applying a time delay to the firing time of the deeper source, such that the initially vertical downgoing wave is aligned beneath the sources. Then, the ghost function spectrum changes to that displayed by the red line in Figure 3.8. There is no longer any deep notch due to the source ghost for frequencies below 250 Hz, except at 0 Hz.
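The over/under behaviour can be verified with a few lines of Python (a sketch under the same idealisations as in the code above: c = 1,500 m/s, vertical incidence, unit-amplitude point sources and a free-surface reflection coefficient of -1):

import numpy as np

def over_under_amplitude(f_hz, d1=6.0, d2=15.0, c=1500.0, synchronised=True):
    w = 2.0 * np.pi * np.asarray(f_hz, dtype=float)
    g1 = 1.0 - np.exp(-1j * w * 2.0 * d1 / c)      # shallow source plus its ghost
    g2 = 1.0 - np.exp(-1j * w * 2.0 * d2 / c)      # deep source plus its ghost
    # Unsynchronised (both fire at t=0), the deeper source's direct pulse reaches a
    # deep receiver (d2 - d1)/c earlier; synchronised firing removes this time shift.
    shift = 0.0 if synchronised else (d2 - d1) / c
    return np.abs(g1 + g2 * np.exp(1j * w * shift))

f = np.array([71.4, 83.3, 142.9, 214.3])
print(over_under_amplitude(f, synchronised=False))  # ~0 at all four: the black-curve notches
print(over_under_amplitude(f, synchronised=True))   # all well above zero: the red curve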
Figure 3.8: Amplitude spectra of the source ghost function from over/under sources at depths d1 = 6m and d2 = 15m below the sea surface. The black curve corresponds to firing without synchronisation (both sources fire at time 0). Several notches are present. The red curve corresponds to synchronised firing with a delay (d2 – d1)/c of the deeper source so that the initially downgoing pulses align in time beneath the sources. No deep notches are present, except at 0 Hz and 250 Hz. The over/under source response is rich in low frequencies without significant notching in the bandwidth of interest. Denotching is obtained by deghosting in data processing.

3.1.4  The Airgun Signature

The airgun 'signature', recorded by a hydrophone below the gun, has three characteristic features: the direct arrival produced when the gun fires; the source ghost; and the bubble pulse caused by the expansion-collapse cycle of the air bubble. The signature is characterised by two parameters: the primary pulse peak-to-peak (P-P) amplitude, or 'strength', the useful part of the signal, and its bubble period. These two characteristic parameters depend on the airgun's size, initial firing pressure, and depth.

Two important parameters of an airgun array signature are its peak-to-peak (P-P) strength and primary-to-bubble ratio (PBR). The PBR should be as high as possible so that the airgun array signature is close to an ideal pulse. To find the P-P strength of an array, its signature is measured at a distance from the source known as the far-field point, a point where the output signals of the individual guns interfere constructively, typically 250–300m beneath them.

Figure 3.9: Pressure signature of the sound pulse of a single 40-in3 airgun. The near-field signature (a) shows the measurement of the released air producing a steep-fronted shock wave followed by several oscillations resulting from the repeated collapse and expansion of the air bubble. The signal strengths of the direct wave and the first bubble are P and B, respectively. The near-field peak-to-bubble ratio is PBR=P/B. The far-field signature (b) shows the effect that the source ghost has on the near-field signature. The peak-to-peak amplitude P-P (the distance between the positive peak of the primary and negative peak of the ghost) is 2.3 bar-m. The far-field peak-to-bubble ratio is PBR=P-P/B-B=1.9. The bubble period is τ=60 ms. (Langhammer, 1994.)
This far-field signature is then used to define a nominal point-source level, at 1m from the centre of the array, by multiplying the signature by the radial distance from this point to the hydrophone. This nominal point-source level is a theoretical sound pressure level. Because of partial destructive interference between the signals of the individual guns, the actual level at this point in reality tends to be 10 times (20 dB) lower than the nominal level. The P-P strength (related to this nominal source level) is defined as the difference in absolute amplitude between the peaks of the primary and ghost arrivals. The P-P amplitude of, for example, the 3,397-in3 array is 102 bar-m, corresponding to 260 dB re 1 µPa-m. Typical arrays tend to produce levels of 243–249 dB re 1 μPa-m, corresponding to 14–28 bar-m. The seismic industry sometimes gives output levels in root-mean-square (rms) peak-to-peak amplitudes (rms P-P). The dB reduction of rms P-P compared to P-P is around 3 dB since the maximum peak and trough amplitudes are approximately equal. It is important to remember that the time-domain specifications of P-P strength and PBR depend strongly on the frequency bandwidth of the array's signature. When high frequencies are cut, the P-P strength and PBR both decrease.

There are several reasons for deploying airguns in arrays. The first is to increase the power of the source.

Figure 3.10: Far-field pressure signatures of individual airguns vary with gun volume. The tuned pressure signature is obtained when the six guns are fired simultaneously; PBR=8.6.

The basic idea is that a source array of n single sources produces n times the power of the single source. The second is to maximise the PBR by tuning the array: guns with different volumes will have different bubble periods, leading to a constructive summation of the first (primary) peak and destructive summation of the bubble amplitudes.
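Tuning is easy to mimic with a toy model. In the Python sketch below (entirely illustrative: each signature is a sharp primary plus a decaying sinusoidal 'bubble' whose period follows T ~ V^(1/3), scaled from an assumed 60 ms at 40 in3), the primaries of six guns stack coherently while the de-phased bubble trains partially cancel:

import numpy as np

t = np.linspace(0.0, 0.4, 4001)                     # 0-400 ms

def toy_signature(volume_in3, t):
    T = 0.060 * (volume_in3 / 40.0) ** (1.0 / 3.0)  # toy bubble period
    primary = np.exp(-0.5 * (t / 0.001) ** 2)       # narrow pulse at t = 0
    bubble = 0.4 * np.exp(-t / 0.15) * np.sin(2.0 * np.pi * t / T)
    return primary + bubble

tuned = sum(toy_signature(v, t) for v in (40, 70, 100, 150, 250, 400))
untuned = 6.0 * toy_signature(100, t)               # six identical guns for comparison

late = t > 0.05                                     # look at the bubble train only
print(np.abs(tuned[late]).max(), np.abs(untuned[late]).max())
# The tuned spread leaves a clearly smaller bubble residue than identical guns do.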

3.1.5  An Introduction to Decibels

The decibel (dB) is a logarithmic unit of measurement that expresses the magnitude of a physical quantity relative to a specified reference level. The decibel is most widely known as a measure of sound pressure level. Decibels are measured on a base-10 logarithmic scale: an increase of 3 dB doubles the intensity of sound, 10 dB represents a ten-fold increase, 20 dB represents a one hundred-fold increase, and so on. Levels from continuous sources (like noise) are normally expressed on a root-mean-square (rms) pressure basis. For an ideal sinusoid, the rms level is 9 dB lower than the P-P value. It is difficult to compare levels from airguns with continuous sources, but a guide is to set the rms level 18 dB lower than the P-P value. It is difficult to compare underwater sound to that in air because of pressure differences, but by subtracting 61.5 dB from the underwater measurement, one roughly obtains the in-air equivalent of the sound intensity measured in dB. In air, a short exposure to 140 dB is seen as the approximate threshold for permanent hearing loss for humans.

To set seismic signal levels in perspective, the pressure of low-level background noise from gentle wave action/little wind is above 60 dB re 1 µPa (spectral level, 10–100 Hz). In bad weather, low frequency background noise increases to 90–100 dB re 1 µPa. Marine vessels generate significant noise. Supertankers, for example, may have a source level of 170 dB re 1 µPa-m (spectral level), while the source level of active trawlers will be in the order of 150–160 dB re 1 µPa-m. Whales can generate signal levels exceeding 180 dB re 1 µPa at one metre. Signals from airguns range from 240–260 dB re 1 µPa-m P-P. Chemical explosives detonating in the sea will have peak pressure levels in excess of 270 dB re 1 µPa-m, for charge sizes of 1 kg. However, chemical explosives are not used in

marine seismic operations today. The computed source level depends on the frequency range over which the acoustic pulse is measured. Seismic arrays are frequently measured over 0–125 Hz or 0–250 Hz. There may be a slight underestimation of total energy by these bandwidths, but the error is small because output above 250 Hz is limited. It is known, however, that the output from airguns extends well into the kHz band but with much-reduced pressure level (see Section 3.4). For a longer discussion on decibels, see Section 3.7.

Figure 3.11: Source signature and amplitude spectrum for the 3,397-in3 airgun array. The peak-to-peak amplitude is 102 bar-m.


3.2  Airgun Arrays for Non-Experts

An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them.
Werner Heisenberg (1901–1976)

3.2.1  Source Directivity

Figure 3.12 shows a seismic vessel with two airgun arrays towed 355m behind it (measured from the navigation reference point). An example of an airgun array configuration with 28 active guns in three strings is shown in plan view in Figure 3.13a. The individual gun volumes in this example range from 20 in3 (0.3 l) to 250 in3 (4.1 l). The total volume is 3,090 in3 (50.7 l). The array contains a number of 'cluster guns', where two guns sit so close together that their air bubbles coalesce after the guns have fired. Cluster guns produce sound more efficiently than a single large gun with the same volume as the cluster. The source array dimension is 15m (inline) x 20m (cross-line); inline refers to the direction in which the ship sails, and cross-line to the direction perpendicular to it.

The source signature from the array at 5m depth is displayed in Figure 3.13b. Observe that the bubble oscillations are strongly damped; the primary-to-bubble ratio is PBR=35.6. The amplitude spectrum is shown in Figure 3.13c. The notches at frequencies 0, 150, 300 and 450 Hz are caused by the source ghost. Note that the primary-to-bubble ratio is frequency dependent.

In seismic surveying, airgun arrays are designed to direct a large proportion of the sound energy downwards. Despite this downward-focusing effect, relatively strong sound pulses will propagate in all directions. The radiation from an array will depend on the angle from the vertical, so that the radiated source signature is directional. This effect is called directivity. Each array has its own specific radiation pattern. This pattern, which will be different for different frequencies, varies relatively slowly from low to high frequencies. The radiation pattern will also be different for different array tow depths. For the gun array in Figure 3.13a, source directivity can be modelled.


Figure 3.12: Seismic vessel with two seismic sources (two airgun arrays) and 12 hydrophone cables equipped for a 3D investigation.

Figure 3.14 shows the modelled radiation pattern for frequencies 0–150 Hz, in in-line (top) and cross-line (bottom) directions. The vertical direction is 0 degrees; at the edge of each circle, 90 degrees corresponds to horizontally propagating energy. Observe that the radiation pattern is concentrated downwards, and that the source pulse gets attenuated for angles that differ from the vertical. The amplitude levels emitted horizontally tend to be 18–29 dB lower than the vertical. Note that the sound produced by the array is not distributed evenly across the frequency spectrum. The amplitude is largest in the 20–100 Hz interval but some energy will be present up to 500–1,000 Hz. The high-frequency components are weak when compared to the low-frequency components, but strong when compared to ambient noise levels.
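The essence of directivity can be sketched in a few lines of Python (our own idealisation: identical in-phase point sources on a 3m-spaced in-line string, no ghost, c = 1,500 m/s; real array modelling also accounts for gun volumes, depths and interactions):

import numpy as np

def directivity_db(theta_deg, f_hz, positions_m, c=1500.0):
    theta = np.radians(theta_deg)
    k = 2.0 * np.pi * f_hz / c
    # Sum unit-amplitude contributions with in-line phase delays x*sin(theta)
    response = sum(np.exp(1j * k * x * np.sin(theta)) for x in positions_m)
    return 20.0 * np.log10(np.abs(response) / len(positions_m) + 1e-12)

positions = np.arange(6) * 3.0        # six elements, 3m apart (assumed geometry)
for angle in (0.0, 30.0, 60.0, 90.0):
    print(angle, round(directivity_db(angle, 60.0, positions), 1))
# 0.0 dB straight down, with increasing attenuation towards the horizontal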

Figure 3.13: (a) Configuration (viewed from top) of an airgun array of total chamber volume 3,090 in3 (50.7 l). (b) Modelled far-field pressure signature of the sound pulse referred to 1m distance from the source centre. The source depth is 5m and the gun pressure is 2,000 psi. The strength (peak-to-peak amplitude) is 135.6 bar-m. The primary-to-bubble ratio is 35.6. The water velocity is 1,506.9 m/s. (c) Frequency spectrum of the modelled far-field pressure signature.

Figure 3.14: In-line (upper panel) and cross-line (lower panel) directivity plots for the airgun array, for an observation point at 60m depth. Colours indicate different energy levels in dB. (PGS.)

3.2.2  The Effect of the Water Layer and Seafloor

The seismic signal from the airgun array will be affected by the physical properties of the water layer and the seafloor. The sound travelling with small to moderate angles to the vertical axis will reflect at and refract into the water bottom. The reflection strength is given by the reflection coefficient for the interface between the water layer and the layered bottom. For small angles, the reflection coefficient is small, typically 0.2, so that most (~80%) of the sound enters the subsurface. However, sound hitting the seafloor at angles larger than a critical angle to the vertical, determined by the ratio of the sound velocities in water and sea bottom, will be reflected back into the water layer (Figure 3.15). The water layer, bounded above by the sea surface and below by the water bottom, then forms an acoustic waveguide where the sound propagates with significantly less attenuation than sound in an infinite water pool. The transmission properties of this waveguide depend on the geology of the seafloor and the variation of sound velocity with depth and distance. For a soft seafloor, the critical angle is typically 60–70 degrees; for a hard sea bottom it can become 30 degrees. More sound enters the waveguide for hard seafloors than soft ones, producing a higher level of sound at large range from the source.
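The quoted critical angles follow directly from Snell's law, θc = arcsin(c_water/c_bottom), measured from the vertical. A quick check (our own arithmetic, with assumed representative sediment velocities):

import math

c_water = 1500.0
for label, c_bottom in (("soft seafloor, c ~ 1,600 m/s", 1600.0),
                        ("hard seafloor, c ~ 3,000 m/s", 3000.0)):
    theta_c = math.degrees(math.asin(c_water / c_bottom))
    print(label, round(theta_c, 1))   # ~69.6 and 30.0 degrees from the vertical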

Figure 3.15: Graphic demonstrating sound propagation in a water layer over a layered seafloor. A large part of the sound travels downwards. Sound that hits the sea bottom at an angle greater than the critical angle (θ) will be totally reflected. This sound is then trapped within the water layer, which in turn channels or guides it. This phenomenon is known as waveguide or normal mode propagation.

3.2.3  Sound Propagation with Horizontal Distance

The signals from marine airgun arrays can be detected in the water column many kilometres away from the seismic vessel, sometimes 100 km or more. The sound levels from airguns at long horizontal distances from the seismic vessel are determined not only by the acoustic power output but, equally importantly, by the local sound transmission conditions. This is discussed in more detail in Section 3.5.

Figure 3.16 shows the sound recorded 13 km away from an airgun array in a water depth of 70m. The signal has three important features at times T0=7.70s, T1=8.78s and T2=9.77s. These are the arrival times of three 'wavelets' travelling with apparent velocities c0=1,687 m/s, c1=1,480 m/s and c2=1,330 m/s, respectively. The low-amplitude, low-frequency wave starting at T0=7.70s is called the ground wave because it is closely related to the sediment sound velocity. The lowest frequency component arrives first (at T0 with velocity c0), followed by progressively higher frequency components travelling at progressively lower velocities. The highest frequency component of the ground wave arrives at time T2 with velocity c2. The high-amplitude, high-frequency wave which is superimposed on the ground wave at time T1=8.78s is called the water wave because it is mainly a function of the water sound velocity. In the water wave, with duration T=1.2s, higher frequencies travel fastest and arrive before lower frequencies. At time T2 the frequencies of the ground wave and the water wave merge, at which point they form a single wave called the Airy phase. At this abrupt end of the wave train, energy has been transported in the water layer waveguide with the minimum group velocity. The onset of the water wave is sometimes used in marine refraction work to determine the source-receiver range (since the water speed is well known).

We conclude that with increasing horizontal distance from airguns, the signal decreases in strength but increases in time duration during the guiding of the sound. The initially short airgun array signal, some 10 milliseconds in length, can become quite long. In the water wave, higher frequencies arrive before lower frequencies. This geometrical dispersion effect will be sensed as a frequency-modulated tone or 'hooting' by anyone listening down there in the water column.
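The arrival times quoted for Figure 3.16 are consistent with simple travel-time arithmetic (our own check):

range_m = 13000.0
for label, c in (("ground wave onset (c0)", 1687.0),
                 ("water wave (c1)", 1480.0),
                 ("Airy phase (c2)", 1330.0)):
    print(label, round(range_m / c, 2))   # ~7.71 s, ~8.78 s and ~9.77 s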


Figure 3.16: Seismic signal recorded in the water layer at a distance of 13 km from a source vessel. The water depth is 70m. The initial source signal, some 10 milliseconds long, is significantly broadened in time since the signal frequencies travel with different velocities.

3.3  A Brief History of Marine Seismic Sources

We know only too well that what we are doing is nothing more than a drop in the ocean. But if the drop were not there, the ocean would be missing something.
Mother Teresa (1910–1997), beatified in October 2003 and canonised as a saint in 2016.

3.3.1  Fundamental Work

In 1917 Rayleigh gave one of the first theoretical descriptions of an underwater air bubble. He was interested in the sounds emitted by water in a kettle as it comes to the boil, and explained these noises as resulting from the partial or complete collapse of bubbles as they rise through cooler water. The water in Rayleigh's theory was treated as an incompressible liquid. In 1942 Kirkwood and Bethe derived an equation of motion for an exploding bubble in a compressible liquid. Later on, Keller and Kolodner (1956) derived a similar equation assuming adiabatic conditions that lead to a damping of the bubble oscillations.

During WWII much classified research was done on producing and detecting underwater sound in the ocean, including creating and studying bubble pulses, and many advances in theory and instrumentation were made. Some of this work was subsequently declassified and parts that are of interest to marine seismic surveying were published by Ewing (1948).

Figure 3.17: A seismic source (dynamite) is fired in a Shell Oil seismic survey in the Gulf of Mexico, April 1951. Here, the charge has been detonated at a shallow depth so that the gas bubble vents into the atmosphere very soon after the detonation. The effect looks dramatic and was exciting to capture on photos. When the detonation depth was large, the surface would just slightly rise and after a while vent into a water burst. (DeGolyer Library, SMU.)

3.3.2  The Early Seismic Source

In the early days of offshore seismic exploration, after WWII, the sound source was dynamite or similar high-energy explosives detonated well below the sea surface or at the sea bottom. Aside from the physical dangers involved in using dynamite or TNT, the bubble effect was a major problem in these early surveys. In 1947, Shell 'solved' the bubble problem by floating the charges about four feet (1.2m) below the surface, rather than letting them sink to the bottom (Priest, 2007). The shallow depth explosion then gave a clean pulse since the gas bubble was vented to the atmosphere during its initial expansion, thereby improving the seismic records. Shell crews first used driftwood and syrup cans as floats, before advancing to balloons and 'turkey bags' inflated by air compressors. Soon it became the practice to fire small charges, each tied to a balloon via a piece of string, even as shallow as 2 ft (0.6m) below the surface. This procedure fully eliminated the explosion bubble pulse.

This solution, however, had been published somewhat earlier by Lay (1945). He found that the bubble oscillations could be prevented by bringing the charge close enough to the surface of the water so that the gas bubble would burst through the surface on first expansion. He also noted the alternative solution: by increasing the charge, the radius of the bubble would increase so that the surface would break before the contraction could occur.


Normal practice until well into the 1960s was to fire 50 lb (22.7 kg) charges of nitro-ammonium nitrate 6 ft (1.8m) below the sea surface at intervals of 660 ft (200m). Often 15 tons of this type of explosive (with the trade name Seismex, Imperial Chemical Industries Ltd) could be expended in one day. While the bubble problem was solved by firing the explosive at shallow depth, it was estimated that approximately one third of the explosive's energy was used to lift the water spout (Lugg, 1979). Experiments by Lavergne (1970) showed that the reflection amplitudes increased with charge depth, to the extent that 30 times less dynamite was required at 30 ft (9m) depth than at a shallow blowout depth in order to produce the same level of seismic response. He explained the poor seismic amplitude obtained with shallow small 100g charges by a cut-off effect of the ghost reflection, which reduces the pulse amplitude in the lower seismic frequency band.

The use of dynamite and other high-energy explosives caused environmental and political concerns as well as safety problems. In 1969, most governments prohibited their use near the surface due to observed high fish kill. This led to the development of other viable alternatives to explosive charges. The 1960s saw a rapid increase in the use of alternative types of energy sources. By the end of the decade the geophysicist could choose from a proliferating spectrum of sources, including dynamite, black powder, gunpowder, ammonium nitrate, PRIMACORD, AQUAFLEX, AQUASEIS, FLEXOTIR, electric sparkers, SSP, WASSP, SONO-PROBE, PINGERS, BOOMERS, gas exploders, DUSS, DINOSEIS, GASSP, AQUAPULSE, airguns, PAR, PNEUMATIC ACOUSTIC ENERGY SOURCE, SEISMOJET, AIRDOX, CARDOX, HYDRO-SEIN, etc. (Kramer et al., 1969).

3.3.3  Airgun Developments

In the mid-1960s Steve Chelminski began manufacturing and testing airguns for use in seismic surveys, and in 1970 he founded Bolt Technology Inc. The airgun soon became popular, and it rapidly became the most widely used source for marine surveys. While the airgun won the competition for most popular marine source, it was not without its problems; it was not yet the ideal marine source that the seismic industry wanted. Three specific improvements in airguns were sought: more reliability, more power and a wider signal bandwidth. Geophysical Services Inc (GSI) began a development programme to address these needs. A radical new design of airgun based on an external sleeve was produced to replace the conventional internal shuttle valve airguns. The first production unit was mobilised in 1984. Airgun development continued and improved models of the internal shuttle guns have been introduced, so that both sleeve guns and internal shuttle guns are popular today.

One other innovation in marine sources needs to be mentioned. In 1989 Adrien Pascouet and his company Sodera presented the GI airgun, which first generates a bubble and then (delayed by 10–20 ms depending on gun size) injects air into the bubble to prevent unwanted bubble oscillations.

At the 2014 EAGE trade show, Bolt Technology Corporation introduced a new type of airgun designed to reduce the potential impact of seismic acquisition operations on marine life while also delivering optimal bandwidth for subsurface imaging. Called eSource™, it is designed to reduce the high-frequency components believed to have most potential for causing disturbance to marine life while retaining the low-frequency components critical to seismic exploration (see Section 3.14).

3.3.4  Early Airgun Arrays

Marine airguns (and other sources) were arranged from an early date into spatial arrays for spatial filtering purposes. Inline arrays were described by Newman et al. (1977), Lofthouse and Bennett (1978) and Ursin (1978, 1983). Source arrays were soon extended in the cross-line direction to act as spatial filters in a direction across the survey line (Badger et al., 1978; Parkes et al., 1981; Teer et al., 1982; Tree et al., 1982). The source concept was extended to arrangements in the vertical direction (Cholet and Fail, 1970; Smith, 1984; Lugg, 1985) to suppress ghost reflections and to improve the peak-to-bubble ratio. Smith (1984) proposed a system of four subarrays deployed at different depths and fired at different times. He noted that the idea of distributing the elements of a seismic source in the vertical direction, with time delays, was "not a new one" and he referred back to the patent by Prescott (1935). Variations on the same theme have been described for seismic surveys on land (Shock, 1950; Van Melle and Weatherburn, 1953; Musgrave et al., 1958; Seabrooke, 1961; Hammond, 1962; Sengbush, 1962; Martner and Silverman, 1962; Fail and Layotte, 1970).

Van der Schans and Ziolkowski (1983) proposed applying angular-dependent signature deconvolution in the τ-p domain to correct for the source directivity pattern. Fokkema et al. (1990) suggested a frequency-space directional deconvolution algorithm for common receiver domain data. They observed that the algorithm could be applied in the CMP domain under a plane-layered earth assumption.

3.3.5  Developments in Airgun Arrays

Recent advances in marine broadband seismic surveying (see Chapter 5) have spurred new interest in airgun source configurations. Conventionally, airgun arrays have been kept at a constant depth. However, in broadband seismic survey design, the geophysicist is often faced with the challenge of designing surveys that achieve two objectives: enhancement of low frequency energy, to compensate for the effects of scattering and attenuation of the primary signal and to penetrate the deeper formations, and securing the large bandwidth necessary for imaging of overburden and shallow reservoir targets. This challenge in survey design is a Gordian knot: in order to enhance low frequencies the source needs to be deep enough to reduce the effects of ghosting, but this will be at the cost of reduced low frequency content in the bubble pulse. In order to improve the bandwidth the source needs to be shallow, at the expense of reduced low frequencies due to ghosting.

To attack the ghost, source solutions involving multiple source depths have been proposed, including over/under sources or vertical arrays (Moldoveanu, 2000; Egan et al., 2007; Kerekes, 2011); multi-depth level time-synchronised subarrays (Hopperstad et al., 2008a, b; Cambois et al., 2009; Bunting et al., 2011; Parkes and Hegna, 2011; Sablon et al., 2013); slanted arrays (Shen et al., 2014; Telling et al., 2014); and variable source depth acquisition (VSDA), in which the source depth

is varied between successive shots along a sail line (Haavik and Landrø, 2015). The common factor in these marine source configurations is that they involve sources at more than one depth. We note that Cholet and Fail (1970), Smith (1984) and Lugg (1985) proposed the deployment of sources at different depths and then depth-synchronising the firing such that primary peaks align in phase. This design idea has since been adopted and adapted by several authors. Huizer (1988) advocated the use of extended arrays to achieve improved primary-to-bubble ratio and signature shape of the seismic signals.

A number of authors have pointed out the importance of considering not only ghosts but also the hydrostatic pressure in broadband source depth design (e.g., Davies and Hampson, 2007; Parkes and Hegna, 2011; Hopperstad et al., 2012; Landrø and Amundsen, 2014; Haavik and Landrø, 2016). The bubble time period (representing the fundamental frequency) and the ghost spectrum represent two competing effects. In fact, the reduced source ghost attenuation that can be achieved at low frequencies by increasing source depth is counteracted by the increase in the fundamental frequency (decrease in bubble time period) of the airguns as they are towed deeper. The hydrostatic pressure varies with depth below the free surface. At shallow depth and low hydrostatic pressure, the bubble time period of the source is longer and consequently richer in low frequencies. At deeper depth and higher hydrostatic pressure, the bubble period is shorter and thus less rich in low frequencies. It may be possible to tune the periods of the gun bubbles from the shallower and deeper gun arrays so that they match, in order to improve subsequent data processing, by firing the deeper sources at a different air pressure to the upper sources.

3.3.6  Firing a Big Airgun at Very Shallow Depth

Amundsen et al. (2017b) present experimental findings from airgun source tests in 2010 in the fjord of Trondheim, Norway, using a 1,200 in3 gun, where the focus is on the gun firing at approximately 1.3m. Only one measurement was taken at this depth as the airgun tests were not designed to test shallow firing. We will see that its signature is broadband, has insignificant bubble energy (since the air bubble bursts out in the atmosphere at its first expansion), and interestingly, has higher amplitudes in the spectrum between 0.5 and 2 Hz than guns fired at 3.3m or 5.3m. The observations we present might be a step in the direction of designing more optimal low-frequency sources for seismic exploration.

The first observation on insignificant bubble energy is perhaps not very surprising, since this effect for explosives was shown by Lay in 1945. And as we have learnt so far in this chapter, whereas the bubble problem of explosives was overcome by putting the explosive at very shallow depth, the bubble problem of the airgun was handled by the tuned airgun array concept. Further, we note that the signatures of airguns fired at depths of 3m and deeper have been thoroughly investigated. When the airgun releases a volume of air into the water, the air produces a steep-fronted shock wave followed by several oscillations resulting from the repeated collapse and expansion of the air bubble (bubble pulse). We refer the reader to landmark papers by Vaage et al. (1983) and Ziolkowski (1987). However, we are not aware of published studies of airguns where the guns are fired at depths shallower than 2m.

We note that shallow-towed big airguns can be used in VSP or site surveys. Since the signature from a large shallow gun has extremely little bubble energy, de-signature is considerably simplified in the processing of such data. Generally, in site surveys a small volume airgun is used, typically 40 to 400 in3, towed at a depth of 2–3m.

Figure 3.18: In a source test in the Trondheim Fjord in 2010, a 1,200 in3 airgun was fired just beneath the sea surface at 1.3m while hanging from A5-buoys. A shallow-fired big airgun source gives a broadband pulse with no bubble effects since the air bubble is vented to the atmosphere during its initial expansion. The downgoing signal is significant and useful in seismic exploration. (Statoil.)

3.3.6.1  Fjord Test

A source signature field test was done in the Trondheim Fjord in 2010 using a single stationary conventional 1,200-in3 airgun hanging from A5-buoys. The water depth at the test location is approximately 390m. The weather conditions were excellent during the test (calm sea). Ambient noise recordings did not indicate significant ocean noise at low frequencies. A TC4043 Miniature hydrophone offering a wide frequency range was suspended by a wire, and a weight was used to keep it stationary at 80m depth. For each source depth (approximately 30.3, 20.3, 10.3, 7.3, 5.3, 3.3, 2.3, and 1.3m), several shots were fired, and data showed good repeatability for the shots fired at the same depth. At 1.3m depth, however, the gun was fired once only. Figure 3.18 shows two photos of the air bubble venting to the atmosphere.

Figure 3.19: (a) Recorded traces 80m below the sea surface. Source depths (from left to right): 30.3, 20.3, 10.3, 7.3, 5.3, 3.3, 2.3, and 1.3m. Notice that the deeper guns have a shorter bubble time period. (b) Zoom-in of the signature of the 1.3m depth gun. (c) Frequency spectra in the frequency interval 0–20 Hz. Gun depths are 30.3m (purple line), 5.3m (red line), 3.3m (light blue line), and 1.3m (dark blue line). Figures taken from Amundsen et al., 2017b.

Figure 3.19a shows all the data on which this analysis is based. To equalise the data, each trace has been multiplied by the difference between the hydrophone and gun depths. This

scaling corrects the spherical divergence differences for the incident wave (the main signal going directly from the source to the receiver). In addition, data have been time shifted so that each signature starts at the same time. Figure 3.19b shows the recorded signature from the 1.3m gun test. The signature is broadband and the bubble effect is insignificant since the air bubble vents to the atmosphere during its initial expansion. The last part of the trace contains reflections from the sea bottom layers.

Figure 3.19c plots amplitude spectra in the frequency range 0–20 Hz. The 5.3m and 3.3m signatures display the effect of the classic bubble oscillation periods, where the period is proportional to the hydrostatic pressure raised to the power –5/6, T ∝ Phyd^(–5/6). The bubble oscillation period gets shorter with increasing hydrostatic pressure and hence increasing depth. The fundamental frequency, f = 1/T, then increases with increasing depth. We observe that the fundamental frequencies are approximately 5.5 Hz and 5 Hz for the 5.3m and 3.3m data, respectively. There is no clear fundamental frequency for the bubble generated at 1.3m (in agreement with the lack of bubble energy in the signature in Figure 3.19b). As a result, the spectrum on the low frequency side is relatively flat. It is of interest to study the source ghost effect at the very low frequencies. From 5 Hz and downwards, the 3.3m and 5.3m


signature spectra have significant roll-off, in contrast to the 1.3m depth signature which, from 2 Hz down to 0.5 Hz, has more energy than the 3.3m and 5.3m signatures. Therefore, to enhance the low-frequency output from an airgun array, we suggest that additional guns are deployed firing at very shallow depth (e.g., 1m). Finally, the experiment shows that when the airgun is placed at around 1m, the downgoing signal is not very much reduced when compared to the firing of an airgun at 3.3m. This observation could open new avenues of research within the established area of seismic acquisition and imaging.
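The observed fundamental frequencies are consistent with the hydrostatic-pressure scaling quoted above. A quick check (our own arithmetic; 1 atm is taken as 10.33m of water):

def f_ratio(depth_a_m, depth_b_m, atm_m=10.33):
    # f = 1/T ~ Phyd^(5/6), so the ratio of fundamentals follows the pressure ratio
    return ((atm_m + depth_a_m) / (atm_m + depth_b_m)) ** (5.0 / 6.0)

print(round(f_ratio(5.3, 3.3), 2))   # ~1.12, versus the observed 5.5 Hz / 5 Hz = 1.1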

3.4  High Frequency Signals from Airguns

Only a few studies have been published which describe measurements of airgun signals in the kilohertz (kHz) frequency range. In this section we discuss models for the generation of these high frequencies from airgun arrays.

If you want to find the secrets of the universe, think in terms of energy, frequency and vibration.
Nikola Tesla (1856–1943)

In 2003 scientists conducted a broadband (0–80 kHz) study of various gun arrays at the Heggernes Acoustic Range near Bergen, Norway, an institution operated by the Norwegian, Danish, Dutch and German navies for noise measurements of military and civil vessels. The spatial dimensions and volumes of the source configurations were small, with source levels on average 10–20 dB lower than those of arrays used by the exploration industry. Additionally, the noise generated by the seismic vessel itself was recorded.

3.4.1  Ship-generated Noise

Most of the emitted source energy had frequencies below 150 Hz, far in excess of the low-frequency noise generated by the source vessel. Spectral source levels (measured in decibels) were highest below 100 Hz, dropping off continuously with range and frequency so that at 1 kHz they were approximately 40 dB, and at 80 kHz approximately 60 dB, lower than the peak level. Above 1 kHz, spectral levels agreed almost completely with the noise generated by the vessel, meaning that if low-level, high-frequency (>1 kHz) spectral components were emitted, they were masked by ship-generated noise. This is of particular importance for marine mammals with pronounced high-frequency hearing sensitivity like toothed or beaked whales (see Section 3.9). The results indicate that any high frequency signals created by these relatively small airgun arrays are so weak that they would not disturb these marine mammals significantly more than normal ship traffic, as shown in Figure 3.21.

Figure 3.20: Photograph of an airgun fired underwater. Notice the four bubbles emerging from the ports of the gun. (Langhammer, 1994.)

Figure 3.21: Broadband-smoothed amplitude spectra from a 38.2 litre gun array at 550m range, received by a hydrophone at 35m depth, compared to noise generated by the seismic vessel. Below 1 kHz the amplitude spectrum of the airgun signal differs significantly from the vessel's noise spectrum due to the low-frequency energy emitted by the airgun, while above 1 kHz the amplitude spectrum of the airgun signal coincides with the vessel's noise almost completely. This indicates that the slow spectral level decay is mainly caused by ship-generated noise. (Modified from Breitzke et al., 2008.)

3.4.2  Airgun High Frequency Signals

What causes high frequency signals from an airgun? The first and most obvious cause is that the rapid movement of the air escaping through the airgun ports (Figure 3.20) creates cavities in the water close to the gun. This effect is the same as cavities created on propellers, or those associated with turbulent flow in water, as shown in Figure 3.22. Since we believe that these cavities are created close to the source, it is reasonable to assume that the amount and strength of the cavities are dependent on the design of the airgun. Hence, there might be differences between high frequency noise from airguns produced by different manufacturers. It is also evident that the triggering mechanism, the so-called solenoid, which is an electrical coil


sitting on each individual airgun, creates high frequency noise. Furthermore, the airgun shuttle, which is the piston that pushes the air out of the gun, creates high frequency noise during the rapid movement and sudden stop of the shuttle. Finally, all airguns jump as a result of the bubble movement, and this jumping will create mechanical shaking which again creates mechanical high frequency noise. All these mechanisms occur close to or in the vicinity of each airgun.

3.4.3  Cavitation

Cavitation is the sudden formation and collapse of low-pressure gas bubbles in a liquid. When this occurs in water, the interior of the bubble is filled with water vapour. A cavity is formed as the water pressure approaches zero or, more accurately, as the pressure equals the vapour pressure. This pressure is given by the phase diagram for water (Figure 3.23), and it depends on the water temperature. For instance, for a water temperature of 17°C, the vapour pressure is 0.03 bar, practically zero. Although the physical mechanisms behind cavity formation might vary, they are usually found to be associated with rapid velocity changes in the water. As the cavity collapses due to the hydrostatic pressure surrounding the cavity, the water vapour turns into fluid water again, and this phase transition is very rapid. During the collapse of a cavity a micro-shock wave is formed, and this is often strong enough to damage nearby material such as metal objects. Therefore, cavitation is usually regarded as an unwanted and undesirable effect. In addition, the violent collapse of cavitation bubbles results in the production of a crackling noise. This is one of the most evident characteristics of cavitation to the observer and therefore often the primary means of detecting the phenomenon.

In marine seismic applications, the water gun is one example where the cavities created by the gun are desirable, since they are the main source of the acoustic sound. However, for airguns water vapour cavities are not useful, since the main acoustic signal is created by the air bubble being ejected by the gun. Figure 3.22 shows an underwater example of bubble cavitation. It is very likely that the cavitation occurring close

to an airgun is of this type. Bubble cavities collapse violently and are therefore very noisy, and if close to metal objects like propellers and airguns they might have an erosive effect. Since marine mammals have a very broad hearing sensitivity, up to 100 kHz, acoustic waves created by cavities might be of importance. The phase transition from water vapour into water is extremely rapid, so a collapsing cavity is capable of creating extremely high frequencies, similar to those of a dynamite explosion.

3.4.3.1  Cavitation Due to Ghosts?

Another, far more sophisticated cause of high frequency emissions from airgun arrays is coupled to the effect of reflections from the sea surface. If two airguns emit a strong signal of, for example, 4 bars at 1m, the pressure in the water between the two guns might approach zero, since the reflected signals from the sea surface are negative. In the example shown in Figure 3.25, we have assumed that the signals from the two airguns can be linearly superimposed. However, as the water pressure approaches zero, non-linear effects will be more prominent, meaning that it is not easy to estimate exactly at which pressures cavitation will occur. Despite this, it is reasonable to assume that for compact and large airgun arrays there is a risk of cavitation formation in the area where the ghost reflections from several airguns coincide in time and space. This type of cavitation will be independent of the type of airgun used, since it is simply a function of the geometry of the array. And this is the good news: if the major part of the high frequency signal generated by an array is generated by cavitation between airguns, this effect can be eliminated simply by increasing the distance between the guns. Plesset and Ellis showed in 1955 that it is indeed possible to generate cavities by acoustic stimulation, as shown in Figure 3.24. Generally, the strength and length of this high frequency cavitation signal will therefore increase with the size and compactness of the airgun array.

Figure 3.22: Picture of bubble and propeller cavitation. (Applied Fluids Engineering Laboratory, University of Tokyo.)

Figure 3.23: Phase diagram for water. At 0.01°C the vapour pressure is 0.006 bar, which is close to zero pressure. At 17°C, the vapour pressure is 0.03 bar, and for 100°C it is 1 bar.

Figure 3.24: Photograph of a transient cloud of cavitation bubbles generated acoustically. (From Plesset and Ellis, 1955.)

Figure 3.25: If these two airguns generate a peak pressure of 4 bars at 1m, the reflected signal from the sea surface measured at the hydrophone between the guns is approximately –1.3 bars, assuming linear theory. The hydrostatic pressure at the hydrophone position is 1.2 bars, which means that the total pressure – assuming that the pressure contribution from each source can be added linearly – is negative; cavitation will then occur.
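A back-of-the-envelope check of the numbers in Figure 3.25 can be coded directly. The sketch below is our own illustration: the geometry (guns at 3m depth, 6m apart, hydrophone midway at 2m depth) is our reading of the figure rather than values stated in the text, and linear superposition with spherical 1/r spreading is assumed throughout.

import math

P0 = 4.0                                           # peak source strength: 4 bar at 1 m
gun_depth, separation, hyd_depth = 3.0, 6.0, 2.0   # metres (read from Figure 3.25)

# The sea-surface ghost of each gun acts as a mirror source above the
# surface with reflection coefficient -1.
r_ghost = math.hypot(separation / 2.0, gun_depth + hyd_depth)
p_ghost = 2.0 * (-P0 / r_ghost)                    # two guns, 1/r spreading

p_hydrostatic = 1.013 + 0.0981 * hyd_depth         # bar: atmosphere + water column
p_total = p_hydrostatic + p_ghost
print(f"ghost {p_ghost:.2f} bar, hydrostatic {p_hydrostatic:.2f} bar, "
      f"total {p_total:.2f} bar")
# ghost ~ -1.37 bar and total ~ -0.16 bar: below the vapour pressure,
# so linear theory predicts cavitation between the guns.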

3.4.4  How Repeatable is the Ghost Cavitation Signal?

Landrø et al. (2013) investigated how repeatable the high frequency ghost cavitation signal is, and Figure 3.27 shows that the envelope signal is repeatable. The details are of course different, since we imagine that the ghost cavitation process is random and chaotic. If we take the absolute value of each sample and smooth the curve (and perform a move-out correction to align the curves), we observe another characteristic feature of the ghost cavitation signal (Figure 3.28): the strength increases gradually, followed by a relatively abrupt decrease to almost zero. The ghost of the ghost cavitation signal is weak; we expect that this second order ghost signal should occur after the abrupt decrease at approximately 57 ms. However, it is very weak, and close to zero amplitude, and we conclude that this weak ghost is probably related to the fact that the sea surface reflection coefficient decreases significantly for frequencies above 1 kHz if the sea surface is not perfectly flat.

This skewness of the ghost cavity signal versus recording time can be explained by a very simple and intuitive model. Let us assume that the number of cavities (n) and the radius of each cavity increase to a maximum that occurs when the cavity cloud is at its largest. This is shown in Figure 3.29, where we have assumed that the ghost cloud exists for 7 ms and reaches a maximum after 3.5 ms. Using the famous Rayleigh formula, which relates the collapse time (T) of a cavity to its radius (R),

T = 0.915 R √(ρ/p_h),

where ρ is the water density and p_h the hydrostatic pressure, we find that this simple model gives a reasonably good similarity between measured and modelled ghost cavity signals, as shown in Figure 3.30.

Figure 3.26: Seven shots from a source shot line above a hydrophone located at the seabed. Notice the high-frequency noise occurring approximately 10 ms after the main peak. There are 2 ms between each time line. (Landrø et al., Geophysics, 2013.)

Figure 3.27: Same traces as in Figure 3.26, filtered by a 10–20 kHz band-pass filter, which clearly demonstrates that the high frequency ghost-cavitation signal is repeatable in the sense that this phenomenon occurs at all traces and at the same time after the primary peak.

Figure 3.28: Same traces as in Figure 3.27, but aligned using NMO-correction, followed by taking the absolute value and smoothing. Notice that all ghost-cavitation signals show a gradual increase in amplitude followed by a sudden decrease at approximately 57 ms.

Figure 3.29: Simple model for the number of cavities (n) created and the initial radius (R) of the cavities versus time. The idea is that at 3.5 ms the ghost cloud has its maximum size, and a large number of small cavities with maximum radius are then created. Both curves are normalised to 1 and the two curves are therefore identical.
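The Rayleigh relation is easy to experiment with numerically. In the sketch below, the density and hydrostatic pressure are round assumed numbers (not values from the text); inverting the formula shows that the few-millisecond collapse times of the model in Figure 3.29 correspond to cavity radii of a few centimetres.

import math

rho = 1000.0     # water density, kg/m^3 (assumed)
p_h = 1.5e5      # hydrostatic pressure at a few metres depth, Pa (assumed)

def collapse_time(R):
    # Rayleigh collapse time T = 0.915 R sqrt(rho / p_h), in seconds.
    return 0.915 * R * math.sqrt(rho / p_h)

def radius_for(T):
    # Invert the relation: initial radius whose collapse lasts T seconds.
    return T / (0.915 * math.sqrt(rho / p_h))

R = radius_for(3.5e-3)                     # the 3.5 ms of Figure 3.29
print(f"R = {100.0 * R:.1f} cm, T = {1e3 * collapse_time(R):.2f} ms")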



Figure 3.30: Measured (a) and modelled (b) ghost cavitation signal for various high pass filters, showing a good correlation. The red curve in (b) represents the modelled signal assuming a sea surface reflection coefficient of -0.2, and the black curve corresponds to -0.6. (Landrø et al., Geophysics, 2016.)


Figure 3.31: Absolute value and smoothing of the ghost cavity signals for various distances between two subarrays in the source array. Black and red curves correspond to 6.5 and 8m subarray separation distance, respectively. A 130 kHz high pass filter has been applied to the data prior to taking the absolute value and smoothing. The blue curve shows the same type of response for the case where only the port array is fired. The grey and light red boxes show where the signal has been saturated. (Landrø et al., Geophysics, 2016.)



Figure 3.32: Amplitude spectra of two subarrays separated by 6.5m (black), 8m (red) and one subarray only (blue). Notice that the slope is steeper for the single array. (Landrø et al., Geophysics, 2016.)

3.4.5  How to Reduce the Ghost Cavity Signal

A dedicated experiment (Landrø et al., 2016) tested whether the ghost cavity signal could be reduced by, for instance, increasing the distance between the sources in a source array. Figure 3.31 shows that this is indeed possible: by increasing the distance between two subarrays in a conventional marine airgun source array from 6.5 to 8m, the high frequency signal (above 130 kHz) is significantly reduced. In the frequency domain, the difference between 6.5 and 8m subarray distance clearly increases with frequency, as shown in Figure 3.32. This means that the simplest way to reduce the ghost cavitation signals that occur for large compact airgun arrays is to increase the distance between the subarrays. Furthermore, this field experiment showed that the ghost cavity signal decreases with increasing source depth. It was also found that an airgun array consisting of two subarrays gives high frequency signals between 1 and 10 kHz that are similar to those of conventional cargo ships.

3.5  The Far-Field Response of an Airgun Array

Below the thunders of the upper deep,
Far, far beneath in the abysmal sea,
His ancient, dreamless, uninvaded sleep
The Kraken sleepeth: faintest sunlights flee
About his shadowy sides; above him swell
Huge sponges of millennial growth and height;
And far away into the sickly light,
From many a wondrous grot and secret cell
Unnumber’d and enormous polypi
Winnow with giant arms the slumbering green.
There hath he lain for ages, and will lie
Battening upon huge sea-worms in his sleep,
Until the latter fire shall heat the deep;
Then once by man and angels to be seen,
In roaring he shall rise and on the surface die.

“The Kraken”, Alfred Tennyson, 1809–1892

The Kraken is an ancient giant sea monster that originated in Norwegian folklore in the twelfth century. In 1752, the Bishop of Bergen, Erik Pontoppidan, described the Kraken as the largest sea monster in the world, with a width of one and a half miles. When it attacked a ship, it wrapped its arms around the hull and capsized it, and the crew would drown or be eaten by the monster. The Norwegians thought that fish were attracted by the Kraken and therefore, when a fisherman had a good catch, it was said that he had successfully fished over the Kraken without waking it.

The seismic source is hopefully not as horrifying as the mythological sea-creature, although when asleep, we believe that it, like the Kraken, attracts fish. When it shoots, it does not. Therefore, it is important to understand sound propagation in the sea. During airgun shooting, the sound level in the sea measured far away from the seismic vessel depends not only on the seismic source parameters but also on the water layer propagation channel. Numerical modelling studies are valuable for providing rough estimates of plausible scenarios for the transmission of seismic energy in the water column. Luckily, there are several standard acoustic propagation models available to model sound propagation in range-dependent ocean waveguides. One of these is a computer program – nicknamed, appropriately enough, Kraken. The Kraken program models so-called normal modes – complex solutions of wave propagation which at first sight may look as scary as the octopus-like Kraken. But normal mode theory is an efficient tool to model the acoustic wavefield trapped within the sea layer, and can be used to understand the sound level that marine life is exposed to when it is several kilometres from a seismic vessel.

3.5.1  Normal Modes – Background

A normal mode is a free vibration of a physical system, represented by a characteristic frequency of each mode. This frequency is often referred to as the eigenfrequency. The most common example of such a system is a string that is fixed at the ends. This string (for instance on a guitar) might vibrate in a number of modes, each with a characteristic frequency, the eigenfrequency. The sum or superposition of all possible functions corresponding to these modes constitutes the general solution. When a guitar string is plucked normally, the ear tends to hear the fundamental frequency most prominently, coloured by the presence of integer multiples of that frequency. The lowest frequency of vibration along the entire length of the string is known as the fundamental, while higher frequencies are referred to as overtones. The fundamental and overtones, when sounded together, are perceived by the listener as a single tone. A harmonic overtone has evenly spaced nodes along the string, where the string does not move from its resting position. Other examples of normal modes are organ pipes and sound propagation in, for instance, a trumpet or the sea layer. Mathematically, these normal modes can be represented as a sum of sinusoidal functions. The actual solution of a normal mode system depends on the boundary conditions. For example, for a drum the normal mode solution depends on how the drum skin is suspended. A normal mode is independent of the other modes, in contrast to non-normal modes, where this is not the case. The concept of normal modes is used in wave theory, optical applications, quantum mechanics and molecular dynamics.

Consider an ideal waveguide consisting of a homogeneous water layer of thickness D that has interfaces with vanishing pressure at the upper and lower boundaries at z=0 and z=D. Let k=ω/c denote the wavenumber, where ω=2πf and f is the frequency, with horizontal component K and vertical component γ obeying K² + γ² = k².


The ‘eigenfunction solutions’ of the wave equation, Z(z) = sin(γz), must be zero at depths z=0 and z=D, since the pressure here is zero. The requirement Z(D)=0 gives the ‘modal equation’ for the idealised waveguide, γmD = mπ, or γm = mπ/D, where m is an integer which designates the ‘mode number’. The γm are known as the eigenvalues. The values of the horizontal component of the wavenumber are Km = √(k² – γm²), with k² ≥ γm². In the idealised waveguide, the depth-dependent eigenfunctions are simply

Zm(z) = sin(mπz/D),   m=1,2,3,…

The sound pressure is the sum of the pressures in the modes.
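These relations translate directly into a few lines of code. The following sketch (our own illustration, using the waveguide parameters that appear later in Figure 3.34: D=100m, c=1,500 m/s, f=100 Hz) lists the propagating modes of the idealised waveguide and their horizontal wavenumbers.

import math

c, D, f = 1500.0, 100.0, 100.0     # water speed (m/s), thickness (m), frequency (Hz)
k = 2.0 * math.pi * f / c          # total wavenumber k = omega / c

m = 1
while (gamma := m * math.pi / D) <= k:   # eigenvalue gamma_m = m*pi/D, need k >= gamma_m
    K = math.sqrt(k**2 - gamma**2)       # horizontal wavenumber K_m
    print(f"mode {m}: K_m = {K:.4f} rad/m, phase velocity = {2*math.pi*f/K:.0f} m/s")
    m += 1
print(f"{m - 1} propagating modes at {f:.0f} Hz")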

3.5.2  Normal Modes in the Ocean

There are only a few relatively simple oceanic waveguides which allow us to obtain a closed analytical form of the solution describing sound propagation at long distances from the source. Pekeris (1948) is regarded as one of the pioneers in this field (see box below). Consider a water layer with thickness D and water speed c, with source at depth zs and receiver at depth z. The pressure is effectively zero at the sea surface; the reflection coefficient is R=-1. It is well known that for plane wave incidence on the sea floor beyond the critical angle, there is a perfect reflection with an accompanying phase shift. This reflection can be represented with an equivalent reflection having R=-1 at a virtual pressure release interface displaced a distance below the sea floor. Therefore, when we study long-range sound propagation in a water layer over a real sediment, the simplest waveguide is the homogeneous water layer that has interfaces with vanishing pressure at the upper and lower boundaries. The sound pressure is the sum of the pressures in the modes and is given as (Jensen et al., 1994; Medwin, 2005):

p(r,z) = (A/ρ) Σm am sin(γmzs) sin(γmz) e^{iKmr}/√(Kmr),

where A depends on the source power, ρ is the ambient density, and the summation is over all allowed modes m=1,…,M with real propagation wavenumbers Km; M increases with increasing frequency; am is the modal excitation. This normal mode expansion of the field in the waveguide is referred to as the Pekeris model.
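For readers who want to experiment, the mode sum can be evaluated numerically. The sketch below is a simplified, idealised-waveguide version of the Pekeris sum: the factor A/ρ and the modal excitations am are set to one and bottom losses are ignored (these simplifications are ours), with the geometry used in Figure 3.34.

import cmath, math

def mode_sum_pressure(r, z, zs=6.0, D=100.0, f=100.0, c=1500.0, n_modes=None):
    # Sum sin(gamma_m zs) sin(gamma_m z) exp(i K_m r) / sqrt(K_m r) over the
    # propagating modes of the pressure-release waveguide.
    k = 2.0 * math.pi * f / c
    p, m = 0j, 1
    while (gamma := m * math.pi / D) < k:
        if n_modes is not None and m > n_modes:
            break
        K = math.sqrt(k**2 - gamma**2)
        p += (math.sin(gamma * zs) * math.sin(gamma * z)
              * cmath.exp(1j * K * r) / math.sqrt(K * r))
        m += 1
    return p

for r in (1e3, 2e3, 5e3, 10e3):            # receiver at 50 m depth
    print(f"r = {r/1e3:4.0f} km   |p| = {abs(mode_sum_pressure(r, 50.0, n_modes=3)):.4f}")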

Normal Modes – the Invention of Pekeris

The normal modes concept was first introduced by Pekeris in 1948 for application to acoustic sound propagation caused by an explosive point source in shallow waters. It should be noted that Pekeris had published a paper on normal modes in the theory of microwave propagation in 1946. A comprehensive description of normal modes generated within a layered half space is given in the well known textbook written by Ewing, Jardetsky and Press in 1957. Figure 3.33a is modified from their book and shows a liquid layer over an infinite liquid half space. It is possible to derive approximations for the recorded wave field at a large horizontal distance from the source, and the detailed derivation of this can be found in their book. In the analysis of seismic data recorded at a long distance from a seismic vessel, the period equation is key to understanding the concept of normal modes:

$$\tan\left(kH\sqrt{\frac{c^{2}}{\alpha_{1}^{2}}-1}\right) = -\,\frac{\rho_{2}}{\rho_{1}}\,\frac{\sqrt{c^{2}/\alpha_{1}^{2}-1}}{\sqrt{1-c^{2}/\alpha_{2}^{2}}} \qquad (1)$$

Here c is the phase velocity, k is the wavenumber, H is the water depth, α₁ denotes the P-wave velocity of the water layer and α₂ that of the first layer below the seabed, and ρ₁ and ρ₂ denote the densities of the corresponding layers, respectively. From this equation it is possible to determine the phase velocity (c) as a function of frequency for each mode. Since the left-hand side of equation (1) is periodic, we will get multiple solutions, leading to the harmonic (or normal) modes. Once the phase velocity for each mode is determined, it is possible to estimate the group velocity by taking the derivative of the frequency with respect to the wavenumber k. An example of phase velocities (c) and corresponding group velocities is shown in Figure 3.33b. In the book of Ewing there is no explicit formula given for the group velocity. It is shown by Landrø and Hatchell (2012) that the group velocity (U) can be derived directly from equation (1); in compact form,

$$U = c\,\frac{\alpha_{1}^{2}/c^{2}+\Lambda}{1+\Lambda}, \qquad \Lambda = \frac{\rho_{2}}{\rho_{1}Hk}\left(1-\frac{\alpha_{1}^{2}}{\alpha_{2}^{2}}\right)\frac{\cos^{2}\left(kH\sqrt{c^{2}/\alpha_{1}^{2}-1}\right)}{\left(1-c^{2}/\alpha_{2}^{2}\right)^{3/2}} \qquad (2)$$

From equation (2) we see that the group velocity approaches α₁ when c → α₁ and α₂ when c → α₂, as expected. From Figure 3.33b we see that the phase velocity for each mode starts at the velocity of the second layer and asymptotically approaches the water velocity at higher frequencies. The same asymptotic behaviour is observed for the group velocity, apart from the fact that the group velocity reaches a local minimum at a given frequency for each mode. If the second layer is a solid layer, the shear velocity of layer 2 (β₂) enters the period equation, as derived by Press and Ewing in 1950 (see Ewing et al., 1957 for a comprehensive derivation):

$$\tan\left(kH\sqrt{\frac{c^{2}}{\alpha_{1}^{2}}-1}\right) = \frac{\rho_{2}}{\rho_{1}}\,\frac{\beta_{2}^{4}}{c^{4}}\,\frac{\sqrt{c^{2}/\alpha_{1}^{2}-1}}{\sqrt{1-c^{2}/\alpha_{2}^{2}}}\left[4\sqrt{1-\frac{c^{2}}{\alpha_{2}^{2}}}\sqrt{1-\frac{c^{2}}{\beta_{2}^{2}}}-\left(2-\frac{c^{2}}{\beta_{2}^{2}}\right)^{2}\right] \qquad (3)$$

Note that in the limit β₂ → 0 the bracket approaches −c⁴/β₂⁴, and equation (3) reduces to equation (1). Press and Ewing used the integral technique introduced by Lamb in 1904 to derive approximate expressions for the wavefield at an arbitrary point in the water, including the effect of a solid layer below the seabed. Figure 3.33c shows an example of normal modes recorded by the permanent receiver array that was installed at the Valhall field in 2003, offshore Norway. Five modes can be identified from this figure, and the group velocity as given by equation (2) fits reasonably well to the observed data.
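The period equation is transcendental, but for each mode the root can be bracketed and found by bisection. The sketch below (our own illustration, using the sign convention of equation (1)) solves for the phase velocity and differentiates the dispersion curve numerically to obtain the group velocity; the layer parameters are Valhall-like values from the caption of Figure 3.33c, with assumed absolute densities consistent with the quoted ratio of 1.6.

import math

a1, a2 = 1470.0, 1700.0        # P-wave velocities of water and seabed, m/s
r1, r2 = 1000.0, 1600.0        # densities, kg/m^3 (assumed; ratio 1.6 as quoted)
H = 80.0                       # water depth, m

def phase_velocity(f, m):
    # Phase velocity of mode m at frequency f, or None below cutoff.
    # With x = kH*sqrt(c^2/a1^2 - 1), mode m has its root for
    # x in ((m - 1/2)*pi, m*pi), where tan(x) = -(r2/r1)*g1/g2.
    w = 2.0 * math.pi * f
    x_max = w * H * math.sqrt(1.0 / a1**2 - 1.0 / a2**2)  # x when c -> a2
    if x_max <= (m - 0.5) * math.pi:
        return None

    def c_of_x(x):
        return 1.0 / math.sqrt(1.0 / a1**2 - (x / (w * H))**2)

    def mismatch(x):
        c = c_of_x(x)
        g1 = math.sqrt(c**2 / a1**2 - 1.0)
        g2 = math.sqrt(max(1.0 - c**2 / a2**2, 1e-30))
        return math.tan(x) + (r2 / r1) * g1 / g2          # zero on a mode

    lo = (m - 0.5) * math.pi + 1e-9       # mismatch is large negative here
    hi = min(m * math.pi, x_max) - 1e-9   # mismatch is positive here
    for _ in range(100):                  # plain bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mismatch(mid) < 0.0 else (lo, mid)
    return c_of_x(0.5 * (lo + hi))

def group_velocity(f, m, df=0.01):
    # U = d(omega)/dk from a numerical derivative of the dispersion curve.
    c1, c2 = phase_velocity(f, m), phase_velocity(f + df, m)
    if c1 is None or c2 is None:
        return None
    k1 = 2.0 * math.pi * f / c1
    k2 = 2.0 * math.pi * (f + df) / c2
    return 2.0 * math.pi * df / (k2 - k1)

for f in (10.0, 20.0, 40.0, 80.0):
    print(f, phase_velocity(f, 1), group_velocity(f, 1))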

The Pekeris model is useful for understanding the basic principles of normal modes. Each term in the series has a simple trigonometric depth dependence of sinusoidal form. There are two major observations to be made: first, the amplitude decreases with the square root of distance r, as one would expect for a cylindrically spreading wave trapped in the sea layer. Second, while the source has a single frequency, the dependence of pressure on distance and depth is very complicated, because each of the mode components has a different dependence on range and depth.

In real waveguides, absorption losses in the sea floor cause the sound pressure to decay faster than 1/√r. These losses, as well as others, can be included in the empirical mode attenuation rate δm. Often, however, we find that we lack sufficient information to perform realistic numerical modelling.

Figure 3.34: Normal mode solutions for the first mode (m=1; solid black line), the first and second modes (m=1 and m=2; solid red line) and the first three modes (m=1, m=2 and m=3; solid blue line) for the Pekeris model; water layer 100m, source depth 6m, receiver depth 50m, 100 Hz frequency.

Figure 3.33a: A point source in a liquid layer over a liquid half space. Paths of multiple reflected and refracted waves which may interfere and build up the oscillatory refraction wave. This figure has been modified from figures 4-1 and 4-7 in Ewing et al., 1957.


Figure 3.33b: The effect of introducing a solid water bottom with shear wave velocity: solid lines represent a shear wave velocity of zero, and dashed lines represent a shear wave velocity of 500 m/s in the second layer (the layer below the seabed). The P-wave velocity in the second layer is 1,700 m/s, and the density contrast is 1.8. Figure from Landrø and Hatchell, 2012.


Figure 3.33c: Examples of normal modes from the Valhall field, offshore Norway. Time-frequency plot for a source-receiver offset of 10 km. A 10 Hz-wide sliding frequency filter has been used for the same seismic trace, after linear move-out correction using 1,500 m/s, and shifting to 4 seconds traveltime. Five modes may be interpreted. The solid curves represent modelled group velocities for various choices of water depth and seabed (second layer) velocity: red: 80m and 1,700 m/s; cyan: 70m and 1,700 m/s. The velocity of the water layer is 1,470 m/s and the density ratio between the second and first layer is 1.6.


Figure 3.35: Normal mode solutions (including 5 modes) for two different sea layer thicknesses for a water layer of 100m (black solid line) and 200m (red solid line). The source depth, receiver depth and frequency as before. There is a significant reduction in acoustic amplitude level with depth, approximately 15–20 dB. Looking at the complexity of the solutions as more modes are included, it is easy to understand why normal mode solutions have Kraken-like behaviour.

Therefore, we frequently end up with a simple comparison using logarithmic plots to judge whether the signal decays as –10log(r), –20log(r) or more. As an example, we will use a recent seismic survey where the water depth is between 40 and 50m.

3.5.3  Shallow Water Example

The Pekeris model has cylindrical attenuation with distance, which means that the signal attenuates as 10log(r). In Section 3.6 we discuss how transmission loss in a waveguide such as the sea, with constant speed of sound, follows a spherical spreading law (20log(r)) at short distance and cylindrical spreading at longer distance. A combination of the two spreading laws gives, for distances r greater than the ocean depth D (in metres), the asymptotic loss behaviour 20log(D) + 10log(r/D). To illustrate the possible huge difference between simplified models (where attenuation effects are neglected) and reality, we will use a field data set in which 585 shots of 12 seconds each were measured, with a geophone located at the seabed recording the shots from a seismic vessel at distances from 30m up to nearly 15 km. The water depth at each source location is shown in Figure 3.36.

The root mean square (RMS) amplitude (measured for a window from 0–12 seconds) is shown in Figure 3.37 together with the maximum amplitude as a function of offset. For comparison we have inserted attenuation proportional to -20, -40 and -60 times the logarithm of r. For distances up to 3–4 km, we observe that the -20log(r) damping curve fits reasonably well with the observed data. Furthermore, there is practically no deviation between the maximum amplitude curve and the RMS curve up to this point. However, for offsets larger than 3–4 km, we observe a distinct difference between the maximum amplitude curve and the RMS curve. The latter has a slope corresponding to -40log(r) and the former is closer to -60log(r). This means that for this example the attenuation of the seismic data is significantly stronger than -20log(r) for offsets greater than 3–4 km. We notice that the water depth changes gradually from 50 to 40m for offsets larger than 3 km. It is hard to judge how much a change in the water depth will influence the field data. However, based on the equations given above, the maximum amplitude should increase rather than decrease if we assume the simple isovelocity model. Therefore, it is likely that this change in slope for the measured data is not caused by the change in water depth, but rather is a result of attenuation effects. Unfortunately, this data set does not contain offsets above 15 km, so we are not able to check whether the -60log(r) behaviour continues at larger offsets. The -60log(r) decay observed between 4 and 15 km corresponds to an amplitude attenuation close to one over the cube of the offset, indicating that the attenuation at far offsets (larger than 4 km) is severe.

Figure 3.36: Water depth at source versus offset for the field data.


Figure 3.37: Measured RMS (root mean square) amplitude (solid red line) and maximum amplitude (solid black line) versus offset. For comparison, -20, -40 and -60 log(r) straight dashed lines are fitted to the data.
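The slope comparison in Figure 3.37 amounts to a straight-line fit in log-log coordinates. The sketch below uses synthetic amplitudes (our own toy decay, not the field data) that switch from 1/r to 1/r³ behaviour at 3.5 km, and recovers the -20 and -60 slopes discussed above.

import math

# Toy amplitudes: 1/r out to 3.5 km, then 1/r^3 (matched at the join).
offsets = [100.0 * i for i in range(1, 150)]
amps = [1.0 / r if r < 3500.0 else 3500.0**2 / r**3 for r in offsets]

def fitted_slope(rs, values):
    # Least-squares slope of 20*log10(amplitude) against log10(offset).
    xs = [math.log10(r) for r in rs]
    ys = [20.0 * math.log10(a) for a in values]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx)**2 for x in xs))

near = [(r, a) for r, a in zip(offsets, amps) if r < 3000.0]
far = [(r, a) for r, a in zip(offsets, amps) if r > 4000.0]
print("near-offset slope:", round(fitted_slope(*zip(*near)), 1))   # ~ -20
print("far-offset slope: ", round(fitted_slope(*zip(*far)), 1))    # ~ -60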


3.6  Sound in the Sea

If you cause your ship to stop and place the head of a long tube in the water, you will hear ships at a great distance from you.

Leonardo da Vinci, 1490

Aristotle (384–322 BC) was among the first to note that sound could be heard in water. Nearly 2,000 years later, Leonardo da Vinci (1452–1519) made the observation quoted above that ships at great distances away could be heard underwater. In 1743, Abbé Nollet conducted a series of experiments to settle a dispute about whether sounds could travel through water. With his head underwater, he reported hearing a pistol shot, bell, and shouts. He also noted that an alarm clock clanging in water could be heard easily underwater, but not in air, clearly demonstrating that sound travels through water. The name of Willebrord Snellius (1580–1626), a Dutch astronomer and mathematician, has for several centuries been attached in English-speaking countries to the law of refraction of light. But it is now known that this law was first described by the Arabian optics engineer Ibn Sahl (940–1000) working in Baghdad, when in 984 he used the law to derive lens shapes that focus light with no geometric aberrations.

3.6.1  Development of Acoustics In 1826, the Swiss physicist Jean-Daniel Colladon and the French mathematician Jacques Charles-François Sturm demonstrated that the speed of sound in water is faster than its speed in air. Using two boats 10 miles (16 km) apart in Lake Geneva, they suspended a church bell underwater from one boat and a long ear-trumpet from the other. A flash of gunpowder was ignited as the bell was rung. Listening in the tube, they could measure the time that it took for the sound of the bell to reach the tube. They calculated the speed of sound in fresh water at 8°C to be 1,435 m/s, a value remarkably similar to the current value used in modern-day physics. Colladon and Sturm demonstrated that water is an excellent

Figure 3.38: An artistic view of the first accurate measurement of the speed of sound in water in 1826. Charles Sturm rang a submerged bell (the source) while Daniel Colladon used an ear-trumpet underwater for listening and a stopwatch to note the length of time it took the sound to travel across Lake Geneva, a lake on the north side of the Alps, shared between Switzerland and France.

medium for sound, transmitting it more than four times faster than in air.

In 1878 Lord Rayleigh published The Theory of Sound, a work marking the beginning of the modern study of acoustics. Lord Rayleigh was the first to formulate the wave equation, a mathematical means of describing sound waves that is the basis for all work on acoustics. His work set the stage for the development of the science and application of underwater acoustics in the twentieth century.

The sinking of Titanic in 1912 and the start of World War I provided the impetus for the next wave of progress in underwater acoustics.

Figure 3.40: The United States Revenue Cutter Miami close to an iceberg similar to that which destroyed the Titanic. On 27 April 1914, a Fessenden oscillator was tested off the Miami and received signals both from an iceberg and the bottom. (NOAA Photo Library.)

Figure 3.39: The passenger ship Titanic’s tragic sinking on 14 April 1912 encouraged developments in underwater detection systems to locate icebergs. The first known sinking of a submarine located by a hydrophone was the German U-Boat UC-3, in the Atlantic during WWI on 23 April 1916.



Anti-submarine listening systems were developed, and in 1914 Reginald A. Fessenden developed the echo-ranger. The development of both active ASDIC and passive sonar (SOund Navigation And Ranging) proceeded apace during the war, driven by the first large-scale

deployment of submarines. Other advances in underwater acoustics included the development of acoustic mines. The period between the two world wars was a time of increasing discoveries in underwater acoustics.

Propagation Modelling

The propagation of sound through water is described by the wave equation, with appropriate boundary conditions. A number of models have been developed to simplify propagation calculations, including ray theory, normal mode solutions, and parabolic equation simplifications of the wave equation. Each set of solutions is generally valid and computationally efficient in a limited frequency and range regime. Ray theory is more appropriate at short range and high frequency, while the other solutions work better at long range and low frequency. Various empirical and analytical formulae have also been derived from measurements that are useful approximations. Here, we discuss the two most elementary and simple ways in which sound spreads out in the ocean.

But first, think of a wave spreading out from a raindrop hitting the water surface; the further from the raindrop source, the bigger the circle formed by the wave. As the circle gets bigger, its total length (circumference) also gets bigger. Spreading loss occurs because the total amount of energy in a wave remains the same as it spreads out from the source. (We are neglecting sound absorption.) When the circle of a surface wave gets bigger the energy spreads to fill it. Therefore, the energy per unit length of the wave must get smaller. Spherical and cylindrical spreading are two simple approximations used to describe how sound level decreases in the ocean as a sound wave propagates away from a source. Let us look into these spreading laws in more detail.

The amount of energy from a sound source that is transported past a given area of an acoustic medium per unit of time is known as the intensity (I) of the sound wave. Intensity is energy/time/area. Since the energy/time ratio is equivalent to the power (P), intensity is simply the power/area, I=P/A. When sound propagates equally in all directions, the surface area is that of a sphere, A=4πr², where r is the radius. The power crossing all such spheres is the same:

P = 4πr0²I0 = 4πr²I

Then I/I0 = (r0/r)². The amount by which the intensity I has decreased relative to its level at the source, I0, at r0 = 1m is called the transmission loss. It is usually expressed in dB, leading to the spherical transmission loss equation:

TL = -10 log10(I/I0) = 20 log10(r)

But sound cannot propagate uniformly in all directions from a source in the ocean forever. Beyond some range the sound will hit the sea surface and the sea floor. A simple approximation for spreading loss in a medium with upper and lower boundaries can be obtained by assuming that the sound is distributed uniformly over the surface of a cylinder having a radius equal to the range r and a height H equal to the depth of the ocean. The surface area of the cylinder is A=2πrH. The total power crossing all such cylinders is the same: P = 2πr0HI0 = 2πrHI. Then I/I0 = r0/r, so that the cylindrical transmission loss in dB is:

TL = -10 log10(I/I0) = 10 log10(r)

We can now construct a table showing the relative intensity levels and transmission losses for spherical and cylindrical spreading at various ranges, assuming that r0 is 1 metre.

Range r (m)   Spherical: I/I0   TL (dB)   Cylindrical: I/I0   TL (dB)
1             1                 0         1                   0
10            1/100             20        1/10                10
100           1/10,000          40        1/100               20
1000          1/1,000,000       60        1/1000              30

Table 3.1: The relative intensity level decreases less rapidly for cylindrical than for spherical spreading. Equivalently, the transmission loss increases less rapidly.

Figure 3.41a: The total amount of energy in a wave remains the same as it spreads out from the source.

Figure 3.41b: Sound generated by a sound source (shown as a white dot) at mid-depth in the ocean. At first the sound is radiated equally in all directions; sound levels then decrease rapidly with spherical spreading as sound spreads out from a sphere with a radius of r0 to a larger sphere with a radius r. But once the sound is trapped between the top and bottom of the ocean it gradually begins to spread cylindrically, with sound radiating horizontally away from the source.
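The spreading laws are simple enough to verify in a few lines of code; the sketch below reproduces the numbers in Table 3.1 and adds the combined asymptote 20log(D) + 10log(r/D) discussed in the text (with an assumed depth of 100m).

import math

def tl_spherical(r):
    return 20.0 * math.log10(r)            # TL = 20 log10(r), r in metres

def tl_cylindrical(r):
    return 10.0 * math.log10(r)            # TL = 10 log10(r)

def tl_combined(r, D):
    # Spherical out to the ocean depth D, cylindrical beyond.
    return tl_spherical(r) if r <= D else 20.0 * math.log10(D) + 10.0 * math.log10(r / D)

D = 100.0
for r in (1.0, 10.0, 100.0, 1000.0):
    print(f"r = {r:6.0f} m: spherical {tl_spherical(r):4.0f} dB, "
          f"cylindrical {tl_cylindrical(r):4.0f} dB, combined {tl_combined(r, D):4.0f} dB")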


Scientists were beginning to understand fundamental concepts about sound propagation, and underwater sound was being used to explore the ocean and its inhabitants. For example, shortly after WWI, the German scientist H. Lichte published a theory on the bending, or refraction, of sound waves in sea water. Building on work by Lord Rayleigh and Snell, Lichte predicted in 1919 that, just as light is refracted when it passes from one medium to another, sound waves are refracted when they encounter slight changes in temperature, salinity, and/or pressure. He also suggested that ocean currents and seasonal changes affect how sound moves in water. Scientists also discovered that low-frequency sound could penetrate the seafloor, and that sound is reflected differently from individual layers in the subsurface sediment. For the first time, using sound, scientists could create a picture of what was beneath the seafloor. This provided clues to the history of the earth and a means for prospecting for oil and gas beneath the seafloor. Pioneering work was done by Maurice Ewing, A. Vine, B. Hersey, and S. (Bud) Knott. In 1936–37, Ewing, Vine, and Worzel produced one of the earliest seismic recorders designed to receive sound signals on the seafloor. The need to generate high-energy, low-frequency sound that could penetrate deep into the seafloor led to the use of explosives and eventually to the development of airguns and high-voltage discharges (sparkers). The development of ocean-bottom seismic stations continued sporadically until the early 1960s, when nuclear monitoring became important. Then, new generations of seafloor seismometers resulted from Vela Uniform, a US project that was set up to develop seismic methods for detecting underground nuclear testing. In the seismic industry, Eivind Berg and co-workers were the first to develop four-component ocean-bottom sensors, more than 60 years after Ewing’s first trials.

3.6.2  Rapid Advancement

The beginning of WWII marked the start of extensive research in underwater acoustics. Nearly all the established methods of studying submarine geology were found to have military applications, so progress in underwater acoustics, as in areas like radar and weapons, was shrouded in secrecy. At the end of the war, the U.S. National Defense Research Committee published a Summary Technical Report that included four volumes on research discoveries, but much of the work done during the war was not published until many years later, if at all. The WWII effort focused on making careful measurements of factors that affected the performance of echo ranging systems. Things that affect the performance of sonar systems are described by what is now called the ‘sonar equation’, which includes the source level, sound spreading, sound absorption, reflection losses, ambient noise, and receiver characteristics.

The rapid advancement of underwater acoustics continued after WWII, with wartime developments leading to large-scale investigations of the ocean’s basins. Coupled with advancements in technology (e.g. computers), underwater acoustics became an important tool for uses such as weather and climate research, underwater communication and, not least, seismic exploration.


3.6.3  Some Masters of Underwater Sound

The word ‘science’ is derived from the Latin word ‘scientia’ which means ‘knowledge’. Here we would like to digress and present some major scientific theories and discoveries, spanning many years, which eventually led to developments in acoustics. As inventions and discoveries added to one another, technology and science advanced and evolved. It is important to realise that all successes require a long journey with many failures and failed experiments along the way. We hope that the little stories that follow inspire your curiosity and whet your appetite for discovery.

Aristotle (384–322 BC), the Greek philosopher, in his treatise On the Soul, wrote that “sound is a particular movement of air.” He understood that sound consisted of compressions and rarefactions of air which “falls upon and strikes the air which is next to it…” – a very good expression of the nature of wave motion. A notable quote, true on so many levels, is: “Knowing yourself is the beginning of all wisdom.”

Figure 3.42: Aristotle.

Ibn Sahl (940–1000) was a mathematician, physicist and optics engineer at the Abbasid Court in Baghdad. Under the Abbasid caliphate (750–1258), the focal point of Islamic political and cultural life shifted eastward from Syria to Iraq. In 762, Baghdad, the circular City of Peace, was founded as the new capital. The first three centuries of Abbasid rule were a golden age when Baghdad was one of the cultural and commercial capitals of the Islamic world. In 1990 the Egyptian science historian, Professor Roshdi Rashed, credited Ibn Sahl with developing the first law of refraction, also known as Snell’s Law, named after Willebrord Snellius (1580–1626). Ibn Sahl used the law of refraction to derive lens shapes that focus light without geometric aberrations.

Figure 3.43: Extract from Ibn Sahl’s treatise On Burning Mirrors and Lenses from 984 showing his discovery of the law of refraction. In the treatise, he set out his understanding of how curved mirrors and lenses bend and focus light.

Ibn al-Haytham (965–1040; also known as Alhazen), born in Basra, is considered to be one of the first theoretical physicists. His most famous work is his seven-volume treatise on optics, Kitab al-Manazir (Book of Optics), written from 1011 to 1021. Here he disproved the ancient Greek idea that light comes out of the eye, bounces off objects, and comes back to the eye. He also studied the way light is affected when moving through media such as water or gases. Based on this study, he was able to explain why the sky changes colour at twilight (the sun’s rays hit the atmosphere at an angle, causing refraction). Furthermore, he calculated the depth of the earth’s atmosphere, 1,000 years before it would be proved by spaceflight: as we know today, the Earth’s atmosphere is about 480 km thick, but most of it is within 16 km of the surface. Ibn al-Haytham translated the book Optics by Ptolemy of Alexandria (100–170), which contained his studies of refraction at air-glass and air-water boundaries. However, Ptolemy used an incorrect quadratic ‘law’ of refraction. Because Ibn al-Haytham accepted this part of the book during the translation, Ptolemy’s error was perpetuated for a further 600 years. However, Ibn al-Haytham also translated Ibn Sahl’s book On Burning Mirrors and Lenses, where Ibn Sahl referenced Ptolemy’s Optics, and corrected Ptolemy by stating his own law. Therefore, we know that Ibn al-Haytham had actually seen the correct ‘Snell’s law’ of refraction. In his Book of Optics, Ibn al-Haytham wrote: “If learning the truth is the scientist’s goal… then he must make himself the enemy of all that he reads.”

Figure 3.44: Modern sketch of Ibn al-Haytham, as depicted on an Iraqi banknote issued in 2003.

Leonardo da Vinci (1452–1519), the famous Italian thinker and artist, is usually credited with discovering, around the year 1500, that sound moves in waves. Many historians regard Leonardo as the prime exemplar of the ‘Universal Genius’. His interests also included geology and fossils. Centuries later, his recognition that fossils tell the true story of the Earth was rediscovered by science when scientists of the eighteenth and nineteenth centuries set out to prove that the Earth is far older than it says in the book of Genesis. His freedom of thought allowed him to make great discoveries, and he wrote in his notebook: “There are three classes of people: those who see. Those who see when they are shown. Those who do not see.”

Figure 3.45: Leonardo da Vinci.

Willebrord Snellius (1580–1626) is famous for his 1621 discovery of Snell’s law (also called Snell and Descartes’ law): the law of refraction of light rays. It describes the relationship between the angles of incidence and refraction when referring to light or other waves passing through a boundary between two different isotropic media. Snell’s manuscript, which remained unpublished during his lifetime, has now disappeared, but in 1662 it was examined by Isaac Vossius, who describes the law as saying: “If the eye O (in the air) receives a light ray coming from a point R in a medium (for example, water) and refracted at S on the surface A of the medium, then O observes the point R as if it were at L on the line RM ⊥ surface A. Then SL:SR is constant for all rays.” The Dutch physicist Christian Huygens also referred to the writings in his Dioptrica, published in 1703. By this quirk of fate we call it Snell’s law.

Figure 3.46: Willebrord Snellius, Dutch astronomer and mathematician.

René Descartes (1596–1650), French philosopher, mathematician and scientist, is often credited with being the ‘Father of Modern Philosophy’. He independently derived Snell’s law using heuristic momentum conservation arguments in terms of sines and put the law into widespread circulation in his 1637 essay Discourse on Method, using it to solve a range of optical problems. In France Snell’s law is often called ‘la loi de Descartes’. An often used quote is: “Dubium sapientiae initium” (Doubt is the origin of wisdom).

Figure 3.47: René Descartes.


Jean-Antoine Nollet (1700–1770), a French physicist, contributed to the theory of sound when he showed in 1743 that sound travels in water. He saw electricity as a fluid, subtle enough to penetrate the densest of bodies. In 1746 he formulated his theory of simultaneous ‘affluences and effluences’, in which he assumed that bodies have two sets of pores in and out of which electrical effluvia might flow. He later had a dispute with Benjamin Franklin over the nature of electricity. Nollet wrote a volume’s worth of letters to Franklin denying the verity of his experiments. In 1746 he gathered about two hundred monks into a circle about a mile in circumference, with pieces of iron wire connecting them. He then discharged a battery of Leyden jars through the human chain and observed that each man reacted at substantially the same time to the electric shock, showing that the speed of electricity’s propagation was very high.

Figure 3.48: French physicist Jean-Antoine Nollet.


Jean-Daniel Colladon (1802–1893) conducted experiments on Lake Geneva in 1826 demonstrating that sound travelled over four times as fast in water as in air. In 1842, Colladon showed that one can guide light with a falling stream of water. He was studying the fluid dynamics of jets of water that were emitted horizontally in the air from a nozzle in a container. In performing demonstrations of these jets in a lecture hall, he noticed that his audience could not clearly see what was happening to the falling water. He then used a tube to collect and pipe sunlight to the lecture table. The light was trapped by the total internal reflection of the tube until the water jet, upon whose edge the light was incident at a glancing angle, broke up and carried the light in a curved flow. His experiments formed one of the core principles of modern-day fibre optics. He wrote: “I… managed to illuminate the interior of a stream [of water] in a dark space. I have discovered that this arrangement… offers in its results one of the most beautiful, and most curious, experiments that one can perform in a course on optics.”

Figure 3.49: Swiss physicist Jean-Daniel Colladon. (Musée d’histoire des sciences de Genève.)


Jacques Charles François Sturm (1803–1855) was a French mathematician who, in 1826, with the Swiss engineer Daniel Colladon (Figure 3.49), made the first accurate determination of the velocity of sound in water, and a year later he wrote a prize-winning essay on compressible fluids. In 1829 he presented the solution to the problem of determining the number of roots, on a given interval, of a real polynomial equation of arbitrary degree (Sturm’s theorem). Sturm found a complete solution to this problem, which had been open since the seventeenth century. In 1836 and 1837, Sturm and Liouville published a series of papers on second order linear ordinary differential operators, which began the subject now known as the Sturm–Liouville theory.

Figure 3.50: Sketch of Jacques Charles François Sturm.

Lord Rayleigh (1842–1919). His first researches were mainly mathematical, concerning optics and vibrating systems, but his later work ranged over almost the whole field of physics: sound and wave theory, electrodynamics, hydrodynamics and photography. Rayleigh was awarded the Nobel prize in Physics in 1904 for his work on gases. See Section 3.3 for his work on underwater air bubbles. One famous quote is: “The history of science teaches only too plainly the lesson that no single method is absolutely to be relied upon, that sources of error lurk where they are least expected, and that they may escape the notice of the most experienced and conscientious worker.”

Figure 3.51: John William Strutt, 3rd Baron Rayleigh.

Horace Lamb (1849–1934), English mathematician and physicist, had James Maxwell and George Stokes as his academic advisors, so it is not a big surprise to learn that he worked on both Maxwell’s equations and on the theory of motions of fluids. In 1910 he published the book The Dynamical Theory of Sound. Within geophysics, he is well known for the mathematical solution of Lamb’s problem, which deals with an acoustic point source in a medium consisting of two homogenous half-spaces. This work was later used by Press, Ewing and Tolstoy to study the motion of waves in a liquid layer superposed on a solid bottom. At a meeting of the British Association in London in 1932, he is reputed to have said: “I am an old man now, and when I die and go to Heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics and the other is the turbulent motion of fluids. And about the former I am really rather optimistic.” (Tabor, 1989). A very similar quote, however, is also attributed to Heisenberg.

Figure 3.52: Horace Lamb.

Elisha Gray (1835–1901) was an American electrical engineer, best known for his development of a telephone prototype. His ideas for a telephone were filed at the patent office (14 February 1876) mere hours after Alexander Graham Bell’s. In a famous legal battle, Bell was given priority. Gray also had a patent on an underwater telephone receiver (hydrophone) to pick up underwater sound from underwater bells warning of hazards, an idea he had conceived when working with Thomas Edison on improving the telephone. In 1899, Gray moved to Boston, where one of his projects, in collaboration with Arthur J. Mundy, was to develop an underwater signalling device to transmit messages to ships. One such signalling device was tested on 31 December 1900. Three weeks later, Gray died from a heart attack. A quote by Gray is: “When the uncultured man sees a stone in the road it tells him no story other than the fact that he sees a stone… The scientist looking at the same stone perhaps will stop, and with a hammer break it open, when the newly exposed faces of the rock will have written upon them a history that is as real to him as the printed page.”

Figure 3.53a: American inventor Elisha Gray.

Figure 3.53b: The ‘warning bell’ under the floating telephone station, ‘Sea Bell’. To the left is Arthur Joseph Mundy and to the right is Elisha Gray in a photo printed in December 1901 in “The Sphere”, a Newspaper for the Home. The newspaper described their new wireless submarine telephony, where sound set up by a bell underwater can send messages to a ship at a distance of 10–12 miles. Electrical receivers are attached to the side of the bow under the water-line. The wire is carried to the pilot-house. The moment the sound strikes the receiver in the water, it rings a gong at the other end.

Arthur Joseph Mundy (1850–1912) was another American entrepreneur with an idea that underwater sound was a promising method of communicating at sea. In the late 1870s, a number of American and European scientists began attempts to develop an underwater signalling system. They were unsuccessful until 1898, when Mundy joined forces with Elisha Gray to start experiments. In 1899, they patented a way to transmit sounds efficiently over considerable distances underwater. Their patent clearly revealed their vision for underwater communications: “Our invention relates to a method of ringing or sounding a bell and also to a system and apparatus for transmitting intelligence between ships at sea and between the shore and any ship by means of sound-signals made in the water.” Through underwater communications, they argued, ships could navigate safely in the fog. In 1901, a test was run in Boston Harbor in which sound was transmitted and received at a distance of eight miles. In the same year Mundy was the co-founder of the Submarine Signal

Company of Boston, Massachusetts – the first commercial enterprise organised to conduct underwater sound research and to develop equipment to be used for increasing safety of navigation. The early development work was directed toward the improvement of the microphone. In 1902, Mundy designed the first watertight underwater microphones, which proved to be a significant improvement. Within a few years the equipment was installed on buoys and lightships throughout much of the western hemisphere, helping guide ships to port by a means similar to radio direction finding for distances approaching 10 nautical miles. In a 1903 US patent application, Mundy coined the term ‘hydrophone’ to describe the underwater microphone.

Reginald Aubrey Fessenden (1866–1932) was a Canadian radio pioneer who on 23 December 1900, after many unsuccessful tries, transmitted these words: “Hello test, one – two – three – four. Is it snowing where you are, Mr. Thiessen? If it is, telegraph back and let me know.” Mr. Thiessen, one mile distant, confirmed. Radio broadcasting was born. This date heralded the beginning of radio telephony, but it took six more years for Fessenden to broadcast the first programme, on Christmas Eve in 1906 – a ‘Christmas concert’ of Handel’s ‘Largo’ to the astonished crews of the ships of the United Fruit Company out in the Atlantic Ocean and the Caribbean Sea. He later developed the first practical sonar oscillator. In March 1914, this was tested off Newfoundland, Canada, where echo-ranging from a 3,200m distant iceberg was demonstrated. The Fessenden oscillators were so successful that they were used until and during World War II for sonar and mine detection purposes. The Submarine Signal Company produced a low-frequency echo sounder based on the Fessenden oscillator, called a ‘fathometer’, because water depth was measured in fathoms, a unit of length equal to 6 feet (about 1.8m). On his gravestone are inscribed these words: “By his genius distant lands converse and men sail unafraid upon the deep.”

Figure 3.54: Reginald Fessenden.

Hugo Lichte (1891–1963), a German physicist, is acknowledged as being the founder of ocean acoustic physics, having been ahead of his contemporaries by decades. In 1917, as a research associate of the Imperial German Navy (Kaiserliche Marine) in the Torpedoinspektion in Kiel, he determined theoretically the water speed’s dependence on temperature, salinity and static pressure while working on problems related to underwater sound and communication. At the end of WWI the Kaiserliche Marine was replaced by the Vorläufige Reichsmarine (1919–21). Since Germany from 1919 was only allowed a navy recruited and maintained on a volunteer basis, Lichte continued his work in industry, for civil purposes. In April 1919, he submitted to Physikalische Zeitschrift the first scientific paper on underwater acoustics, theoretically describing the sound speed stratification of the ocean, forming ocean-wide sound ducts. He started the paper with his motivation for the research: “Resumed navigation from and to German sea ports has raised the importance of delineating mine-free corridors. Underwater sound signals are among the most important tools to this end. Consequently, learning more about sound propagation in water is of considerable interest.” Lichte’s 1919 paper went largely unnoted by researchers in the United States and in England until the 1970s. The paper was doubtless the outgrowth of the capability of early twentieth-century German physics applied to the defence of the Kaiser’s U-boat fleet during WWI (Urick, 1977). Lichte worked with Alexander Behm (1880–1952), who developed a working ocean echosounder in Germany at the same time as Reginald Fessenden was doing so in North America, and with Heinrich Barkhausen (1881–1956), who is best known for the discovery of the Barkhausen effect, the abrupt increase in the value of the magnetic field during magnetisation of a ferromagnetic material. Lichte was also instrumental in the development of the first sound films. After 1945 he settled for a position as Lehrer für Physik und Mathematik (teacher of physics and mathematics) at the Lilienthal-Oberschule in Berlin.

Figure 3.55: Hugo Lichte, ‘founder of ocean acoustic physics’.



Chaim Leib Pekeris (1908–1993) was born in Lithuania and emigrated to the USA in 1925. In 1934 he started working within geophysics at MIT’s geology department, and in 1941 he moved to the Hudson Laboratories doing military research. Pekeris moved to Israel in 1948, where he was one of the designers of the Weizmann Automatic Computer (WEIZAC), the first computer in Israel and one of the first large-scale electronic computers in the world. John von Neumann and Albert Einstein were both members of an advisory committee that was established in 1947 for the planned computer in Israel, although Einstein did not support the idea. In response to the potential use of such a computer Neumann responded: “Don’t worry about that, if nobody else uses the computer, Pekeris will use it full time!” We might therefore state that Chaim Pekeris was among the first geophysicists who realised the huge potential of computers. Based on the earlier work by Lamb (1904), Pekeris was the first to develop the normal mode theory for acoustic waves propagating within shallow waters. His paper from 1948, entitled Theory of Propagation of Explosive Sound in Shallow Water, is a classic, widely used in underwater acoustics. Pekeris did not include a solid water bottom layer, but assumed acoustic wave propagation only. Later, Press, Ewing and Tolstoy included the effect of a solid water bottom, by combining the theory derived by Lamb in 1904 with the Pekeris normal mode theory from 1948.

Figure 3.56: Chaim Leib Pekeris, physicist and mathematician. (Weizmann Institute of Science.)

In a 1996 oral history interview, J. Lamar (Joe) Worzel (1919–2008), an American geophysicist known for his contributions to underwater acoustics, underwater photography, and gravity


measurements at sea, remembered his and Maurice Ewing’s collaboration with Pekeris (Worzel, 1996): “Our part was the experimental part. This was work we did during World War II… What happened there was we did the explosion sounds in shallow water and we found that the water wave in the ocean was dispersive with the high frequencies traveling fastest and the low frequencies traveling slowest. So you got a wave pattern like that in the water wave – well, we got together some data like that and we sent it to Pekeris and said that this looks like an important thing and we had no theory for it. Do you know of any theory that you have? And he wrote back and said funny you should ask because I wrote a theory out for that kind of situation about a year ago only we couldn’t find any data to fit it and so we filed it and I’ve never had any data until you sent me this. Can you send me more? … And so we sent him more. I had the responsibility of getting the records together to send him more and he wrote this theoretical business which I still have difficulty understanding but Ewing understood it as soon as he saw the graphs. The data, all these graphs about what’s going on he understood immediately.”

Figure 3.57a: Waloddi Weibull. (Sam C. Saunders.)

Ernst Hjalmar Waloddi Weibull (1887–1979), a professor at the Swedish Royal Institute of Technology and Scientific Advisor to the Swedish armaments company A. B. Bofors, is today primarily known for the Weibull distribution used to model random lifetimes. However, he published a scientific paper on the propagation of underwater explosive waves in 1925. He observed that the exploding charge emitted a series of distinct pulses, originated by oscillations in the spherical volume of gas into which the explosive had been converted. In the context of this book we acknowledge his development of a reflection seismic method for measuring the thickness of the upper sediment layers in great ocean depths. Based on an invitation from the Swedish physicist and oceanographer Hans Petterson, Weibull had been experimenting for some years in Swedish coastal waters, attempting to use reflection seismic to measure the thickness of the sediment layers, when in the spring of 1946 he participated in a test expedition with the Swedish government research ship the Skagerak down to the western Mediterranean with equipment tests for the upcoming Albatross expedition. His seismic method was considered to be a quick survey method that required a minimum outlay of time and money for investigations in deep waters. A depth charge was made to explode by means of an ignitor released through hydrostatic pressure at depths varying from 500 to 6,500m under the water surface but well above the sea bottom (Weibull, 1954). The arrival at the surface of the sound waves set up by the explosion was registered by hydrophones hung out over the sides of the ship. The hydrophones converted sound (pressure) waves into electrical impulses, which were transmitted by cable to an oscillograph in the onboard laboratory. On the resulting oscillograms it was possible to recognise signals set up by the direct sound waves of the explosion, followed by repeated echoes from the water

Sam C. Saunders

Since only a navy recruited and maintained on a volunteer basis was allowed to Germany, from 1919, Lichte continued his work in the industry for civil purposes. In April 1919, he submitted to Physikalische Zeitschrifte the first scientific paper on underwater acoustics, theoretically describing the sound speed stratification of the ocean, forming ocean-wide sound ducts. He started the paper with his motivation for the research: “Resumed navigation from and to German sea ports has raised the importance of delineating mine-free corridors. Underwater sound signals are among the most important tools to this end. Consequently, learning more about sound propagation in water is of considerable interest.” Lichte’s 1919 paper was largely unnoted by researchers in the United States and in England until the 1970s. The paper was doubtless the outgrowth of the capability of early twentieth-century German physics applied to the defence of the Kaiser’s U-boat fleet during WWI (Urich, 1977). Lichte worked with Alexander Behm (1880–1952), who developed a working ocean echosounder in Germany at the same time as Reginald Fessenden was doing so in North America, and with Heinrich Barkhausen (1881–1956), who is best known for the discovery of the Barkhausen effect, the abrupt increase of the value of the magnetic field during magnetisation of a ferromagnetic material. Lichte was also instrumental in the development of the first sound films. After 1945 he settled for a position as Lehrer für Physik und Mathematik at Lilienthal-Oberschule in Berlin.

Figure 3.57b: The Albatross expedition led by Hans Petterson is a famous Swedish oceanographic research trip that between summer 1947 and autumn 1948 sailed around the world covering 45,000 nautical miles. The purpose of the expedition was to explore the depths of the Atlantic, Pacific and Indian Oceans near the equator by studying aspects of physical, chemical, biological, and geological oceanography. It retrieved coresamples from the bottom of the ocean, took water samples, made temperature recordings, carried out deep-sea trawling, took continuous echo soundings, and carried out seismic reflection measurements of the sediment thickness using ‘sink bombs’ as sources with Weibull’s method. In the Atlantic the echograms revealed that in some places the deep-sea floor was essentially flat except for narrow, steep-sided valleys. These plains were interpreted as evidence of thick sediment and the valleys as grabens caused by faulting. In fact, the Swedish expedition had discovered the first deep-sea channels caused by turbidity currents (Menard, 1987).

age of the particular part of the Atlantic Ocean where these great thicknesses were found must be counted in many millions of years. This, he stated, appears to knock the bottom out of Wegener’s famous ‘continental drift’ theory, according to which the Atlantic Ocean should have begun to open up in Cretaceous time, that is, merely 60 to 80 million years ago. The sedimentary thicknesses found in the Pacific and Indian Oceans were surprising. Petterson (1954) wrote: “The explanation for the much lower figures found in the other oceans is even more far-fetched. They cannot well indicate a lesser age of the oceans in question, but may instead be due to a much lower rate of sedimentation. In fact there are certain indications that at least in the central parts of the Pacific Ocean the accumulation of red clay proceeds at a rate about ten times slower than in the central Atlantic.” Weibull’s technique was later taken up by Scripps Institution of Oceanography scientists who made numerous thickness measurements in the Pacific. Maurice Ewing (1906–1974), American geophysicist, used seismic methods to make funda­ mental contributions to the understanding of marine sediments and ocean basins. Making seismic refraction measurements along the Mid-Atlantic Ridge, and in the Mediterranean and Norwegian seas, Ewing took the first seismic measurements in open seas in 1935. During World War II he did pioneering work Figure 3.58: Maurice Ewing. on the transmission of sound in seawater. His work with Frank Press on interpreting surface waves led to the first well-founded estimate of the depth of the Mohorovičić discontinuity under the ocean floor. To honour the memory of Maurice Ewing and his contributions to geophysics, SEG established in 1978 the Maurice Ewing Medal as the highest honour given by SEG. Maurice Ewing is quoted as saying: “Imagine millions of square miles of a tangled jumble of massive peaks, saw-toothed ridges, John T. Chiarella

surface and the bottom (sea bottom multiples). In addition, and importantly, one could also observe much weaker and deeper echoes, due to sound waves which had penetrated through the sediment layer and become reflected against transition surfaces within it, or from the underlying rock bed. He went on to publish a paper on the propagation of pressure waves from underwater explosions, and he refined his new method to investigate the depth of sediments using explosive sources (Weibull, 1947, 1954). He observed among other things the bubble pulse phenomenon which has since troubled geophysicists. He measured the maximum thickness of the sedimentary layer in the centre of the Tyrrhenian Sea, part of the Mediterranean Sea off the western coast of Italy, where, below a water layer of 3,600m, he estimated the sediment to have a thickness of nearly 3,000m (Weibull, 1947). Wallodi Weibull next participated in the first part of the famous Swedish Albatross expedition (1947–48) which crossed the equator 18 times and covered 45,000 nautical miles (83,000 km), where the main task was sediment coring of the ocean floor in the Atlantic, Pacific and Indian Oceans near the equator. Sediment soundings with depth charges were carried out at 75 different positions, where in most cases, two depth charges were set to explode at different depths (500 and 2,500, 4,500, or 6,500m). The thickness of what was assumed to be sediment ranged up to 3,500m in the North Atlantic but was no more than a few hundred metres in the Pacific. Weibull also noted that acoustic waves were impenetrable to layers of lava. This observation is valid still today as little progress indeed has been made to look for sediments beneath thick lavas. In Petterson (1954) Weibull’s results were interpreted: “… if we accept Weibull’s maximum sediment thickness in the central Atlantic of about 12,000 feet, and if we further assume that the whole of this sediment is of the same type as its surface layer, namely Atlantic red clay, we can make an approximate estimate of the time required for accumulating a layer of this thickness. Taking 0.3 inches in 1,000 years as a reasonable value for the rate of sedimentation of Atlantic red clay, we arrive at a time span of nearly 500 million years.” Petterson of course realised that this estimate had major uncertainties due to compaction which would reduce thickness and possible variations in the rate of sedimentation. However, he found it evident that the

Figure 3.57c: A reproduction of one of Weibull’s oscillograms, obtained in the central Atlantic Ocean between Madeira and the MidAtlantic Ridge. We can see three breaks in the record, due to deep echoes thrown back by three different reflecting layers in the bottom, the uppermost of sediment thickness 1,550m and the middle one of thickness 2,200m. The deepest, which is presumably reflected from the bedrock beneath the sediment carpet, indicates a total thickness of the latter of 3,475m, which was a record for the cruise. It was assumed that the wave velocity was 1,500 m/s. On the image, the scribblings refer to: Station no 315, shot no 341, where the depths were e=20m for the hydrophone, a=3,730m for the charge, and b=4,360m for the sea bottom. The charge consisted of a hydrostatic fuse (4 g) + 3x170 g of TNT = 514 g.

123

earthquake-shattered cliffs, valleys, lava formations of every conceivable shape – that is the Mid-Ocean Ridge.” Joe Worzel remembered his collaboration with Ewing (Worzel, 1996): “In one of the contracts, probably it would be in ’42 or ’43, they had what we called the ‘fine print clause’ which was a statement that Woods Hole will find out all there is to know about underwater sound and tell the navy. Well, that still hadn’t been done by thousands of researchers. And Ewing and I were put on the project and we tried to find out what the problem was. What they were trying to solve. And they wouldn’t tell us. It was too classified to tell the people who were going to do the work… But anyhow we talked to them about what it was they wanted us to measure. After all you got to know what in heck you’re trying to do. And they wanted us to find four grossly different geological situations and to see what the sound did in those grossly different situations. And we kind of guessed what the problem was from little things, little bits of things we heard here and there and one thing and another… We were thinking that the Germans were blowing up our mine fields, our acoustic mine fields, using explosive sounds and our people couldn’t figure out how they were doing it. And this was in fact what was happening. And so we said we would go to Solomons, Maryland, which was a thick layer of mud, soft sediment. We’d go down and we’d do these measurements in ten fathoms and twenty fathoms, that was the range of depths that we were supposed to understand for them… We went off and we started making these measurements and we found what they wanted to know. What these acoustic mines did is they had a device on the mine so that a water wave from an explosion wouldn’t set it off. And so we figured immediately, well that was obvious, the low frequency acoustic waves that travelled through the bottom travelled faster, they’d get there before the water wave could shut the mine off and boom it goes. And they would set off a whole mine field by setting off one explosion. The whole mine field would go up that our people spent days laying. But at any rate we made these measurements and we found the dispersion in the water wave and we turned that over to Pekeris for his analysis, his theoretical reasoning for, which he as I told you had already figured out. And we gave them the answer that they wanted and they adjusted their acoustic mines accordingly and the problem was solved.” Anton Ziolkowski (1946–) was the first scientist to model the output pressure waveform from an airgun in 1970. He understood early on that measurements of underwater sound should be done in conjunction with development of physical theories explaining the observations to us. Anton modified the old saying of “Measuring is knowing” into “Measuring is the way to knowing”. According to his Figure 3.59: Anton Ziolkowski. colleague Jacob Fokkema, this is a good way to characterise Anton’s approach to science as well as his significant contributions to the geophysical science. Anton has worked both for the industry and for academia through his career. From 1982 to 1992 he was Professor of Applied Geophysics at Delft University of Technology, and in 1992 he became Professor of Petroleum Geoscience at the University

124

of Edinburgh. During his time in Delft Anton contributed significantly within the field of convolution and especially deconvolution (how to separate the source signal from the seismic signal, so that we get closer to the true earth response). As an enabling technique for this he suggested (together with his coauthors) to estimate the source signature of an airgun array by mounting near-field hydrophones close to each individual gun. For his contributions within geophysics, he has received several awards and honours, the latest one being the Desiderius Erasmus award of EAGE (European Association of Geoscientists and Engineers) in 2016. Svein Vaage (1948–2008) graduated from the University of Bergen in 1978 with an M.Sc. in solid-earth physics. During his first year of research he worked on earthquakes and classical seismology, before embarking on seismic exploration. Svein Vaage was instrumental in measuring signals from marine seismic sources. In the mid 1980s he established a comprehensive Figure 3.60: Svein Vaage. measurement programme where the seismic signatures of airguns were measured at various depths, various firing pressure and in numerous configurations. This huge experimental database established the foundation for a comprehensive tool for modelling the source signal generated by airguns and water-guns used for seismic exploration. One of the authors (ML) of this book actually worked under Svein’s guidance in the late 1980s. During intensive working periods Svein often used the typical Norwegian quote: “The winter is long but not eternal” to encourage his co-workers. He had a strong belief in the value of accurate measurements, and also in repeating measurements if possible. Svein was also a pioneering force behind many innovations in seismic streamer acquisition, including the Continuous Long Offset (CLO) method and Simultaneous Shooting. He was a significant contributor to the development of the GeoStreamer, a marine seismic streamer measuring both pressure and vertical particle velocity simultaneously. This streamer was patented by PGS, and this led to the development of the concept of broadband seismic. Eivind W Berg (1955–), a Norwegian geophysicist who, together with James Martin and Bjørnar Svenning, was honoured in 1999 with the SEG’s Kauffman Gold Medal for demonstrating that high-quality, high-density marine shear-wave data can be acquired by recording converted waves at the seabed. Berg presented their work at the 1994 EAEG (now EAGE) Annual Meeting and this sparked an explosion of Figure 3.61: Eivind Berg, activity and advancement in marine 1999 recipient of the SEG’s Kauffman Gold Medal. shear-wave seismic technology.

3.7  A Feeling for Decibels

What is the difference between decibels in air and in the sea? What is the loudest bang reported? What is the loudest animal in the sea? What is the similarity between a seismic water gun and the pistol shrimp?

I’d like to be, under the sea, in an octopus’s garden in the shade
He’d let us in, knows where we’ve been, in his octopus’s garden in the shade
Ringo Starr, The Beatles – “Octopus’s Garden”

The human ear is an incredible hearing mechanism that allows sound pressures to be picked up and transmitted through auditory nerves to the brain, where signals are interpreted as sound. The ear’s response to sound is non-linear; in judging the relative loudness of two sounds, the ear responds logarithmically. Therefore, physicists and acoustic engineers have adopted a logarithmic scale, known as the decibel (dB) scale, for sound pressure level measurements. The sound pressure level (SPL) is related to a reference pressure. For airborne sound, the reference pressure is 20 microPascal (µPa) – the approximate threshold of human hearing at 1,000 Hz (roughly the sound of a mosquito flying 3m away). To get a feeling for decibels, look at Table 3.1 below, which gives values for the sound pressure level of sounds in our air environment.

The pressure at which sound becomes painful for a human is the pain threshold pressure. The threshold pressure for sound varies only slightly with frequency and can be age-dependent. It is well known that people who have been exposed to high noise or music normally have a higher threshold pressure. A short exposure to 140 dB is seen as the approximate threshold for permanent hearing loss in humans.

Examples with distance | Sound Pressure Level (dB re 20 µPa)
1883 Krakatoa eruption . . . 310 (N)
1908 Tunguska comet explosion . . . 300 (N)
Threshold of irreparable damage; jet 50m away . . . 140
Threshold of pain . . . 130–140
Threshold of discomfort; rock concert . . . 120
Disco, 1m from speaker; power lawnmower at 1m . . . 100
Hearing damage from long-term exposure . . . 90
Diesel truck, 10m away . . . 80
Kerbside of busy road, 5m . . . 80
Office environment . . . 60
Average home . . . 50
Quiet bedroom at night . . . 30
Whisper at 1m . . . 20
Quiet rustling leaves; calm human breathing . . . 10
Threshold of hearing (undamaged human ears) . . . 0

Table 3.1: Examples of sound pressure levels in decibels of sounds in our environment. The pain threshold for humans is 130–140 dB. Any sound above 85 dB can cause hearing loss. The loss is related both to the power of the sound as well as the length of exposure. ‘N’ represents a dB level representative of the object, and not necessarily what the listener would experience at a distance of 1m. The decibel (dB) is the unit used to measure the intensity of a sound. The decibel scale is a little odd because the human ear is incredibly sensitive. Your ears can hear everything from your fingertip brushing lightly over your skin to a loud jet engine. In terms of power, the sound of the jet engine is about 1,000,000,000,000 times more powerful than the smallest audible sound. That’s a big difference! On the decibel scale, the smallest audible sound (near total silence) is 0 dB. A sound 10 times more powerful is 10 dB. A sound 100 times more powerful than near total silence is 20 dB, and so on.

Figure 3.62a: An 1888 lithograph of the 1883 eruption of Krakatoa, which is thought to have produced one of the loudest sounds ever heard by man.

3.7.1  Big Bangs
One of the world’s worst volcanic eruptions in modern history was the 1883 eruption of the volcano on the small island of Krakatoa in Indonesia. A series of explosions sank two-thirds of the island, and the resulting giant tsunamis and toxic gases brought mass death and destruction to Indonesia. The ground shook in the wake of the blast, which was estimated to be equivalent to 200 megatons of TNT – about four times the yield of the Tsar Bomba, the largest nuclear device ever detonated, in the Novaya Zemlya archipelago in 1961. Shock waves from Krakatoa echoed around the world 36 times and lasted for about a month. The bang was heard 4,653 km away on Rodriguez Island in the Indian Ocean. Literature suggests that the sound level was around 180 dB at 161 km (100 miles). Assuming that the sound level falls off as the inverse of distance, the decibel level on Rodriguez Island would still be 150!

Figure 3.62b: Have you ever wondered why the sky is a lurid red in “The Scream”, Edvard Munch’s 1893 painting of modern angst? Astronomers suggest that Munch drew his inspiration from the vivid red Krakatoa volcanic twilights seen in Europe from November 1883 to February 1884.
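For readers who want to check the arithmetic, here is a minimal Python sketch (the function names are our own, for illustration only). It encodes the SPL definition, SPL = 20 log10(p/p_ref) with p_ref = 20 µPa in air, and the inverse-distance fall-off used in the Krakatoa estimate above:

```python
import math

def spl_db(p_pascal, p_ref=20e-6):
    """Sound pressure level in dB relative to p_ref (20 µPa for airborne sound)."""
    return 20.0 * math.log10(p_pascal / p_ref)

# The threshold of hearing, 20 µPa, maps to 0 dB by definition
print(spl_db(20e-6))  # 0.0

def level_at(level_db, r1_km, r2_km):
    """Level at r2, given the level at r1, for 1/distance (spherical) spreading."""
    return level_db - 20.0 * math.log10(r2_km / r1_km)

# Krakatoa: ~180 dB at 161 km; Rodriguez Island lies 4,653 km away
print(round(level_at(180.0, 161.0, 4653.0)))  # ~151 dB, matching the ~150 quoted above
```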

3.7.2  Sound in Water
Great care must be taken when comparing sound levels in water with sound levels in air; 100 dB in water is not the same as 100 dB in air. Two complications must be addressed. Firstly, researchers studying sound in water and air use a different reference pressure. In water, the common reference is 1 µPa, instead of 20 µPa as in air. These two references are, therefore, 26 dB apart (20 log 20 = 26). Secondly, the impedances (density times velocity) of water and air differ. The impedance ratio is 1,540,000/415, or 3,711. As a result, similar sound intensities (measured in watts per square metre) in water and air will give a pressure that in water is the square root of the impedance ratio higher, or 61 times higher, than that in air, corresponding to a difference of 35.6 dB (20 log 61 = 35.6). Therefore, to compare airborne sound to waterborne sound, 26 + 35.6 dB, or approximately 62 dB, must be added to the air measurement to obtain the in-water equivalent of the sound pressure level. This compares well with studies on human underwater hearing, which show that the hearing threshold is 67 dB for a signal of 1,000 Hz.

When measuring sound from a source, it is necessary to measure the distance from the source as well, since the sound pressure in homogeneous space decreases with distance with a 1/distance relationship. Therefore, when presenting dB values, they must be related to the reference pressure at a reference distance of one metre. In water, then, the unit is dB re 1 µPa @1m. A simple example: if a jet engine is 140 dB re 20 µPa @ 1m, then underwater this would be equivalent to 202 dB re 1 µPa. To convert from water to air, simply subtract the 62 dB from the sound pressure level in water. A supertanker generating a 170 dB sound level would be roughly equivalent to a 108 dB sound in air.

Figure 3.63: Near-field measurement of a water gun, plotted as radiated pressure (bar-m) against time (ms); a small unfiltered precursor arrives before the main pulse. The precursor is probably caused by a cavity collapsing inside the water chamber of the gun. The strong signal observed after 32 ms is caused by cavities collapsing in the water.
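The air-to-water bookkeeping can be written out explicitly. A small Python sketch, assuming the round numbers quoted above (sea water impedance 1,540,000, air impedance 415, reference pressures 1 µPa and 20 µPa):

```python
import math

# Reference-pressure shift: water uses 1 µPa, air uses 20 µPa
ref_shift = 20.0 * math.log10(20e-6 / 1e-6)          # ≈ 26.0 dB

# Impedance shift: equal intensity gives sqrt(Z_water/Z_air) higher pressure in water
imp_ratio = 1_540_000.0 / 415.0                      # ≈ 3,711
imp_shift = 20.0 * math.log10(math.sqrt(imp_ratio))  # ≈ 35.7 dB

air_to_water = ref_shift + imp_shift                 # ≈ 62 dB

def water_equivalent(spl_air_db):
    """In-water SPL (dB re 1 µPa) of a sound with the same intensity as in air."""
    return spl_air_db + air_to_water

print(round(water_equivalent(140)))  # jet engine: ~202 dB re 1 µPa
print(round(170 - air_to_water))     # 170 dB supertanker in water: ~108 dB in air
```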

3.7.3  The Pistol Shrimp – The First Water Gun
The pistol shrimp generates a high-frequency signal by creating a high-velocity water jet (see Section 3.7.4), which will cause cavities because of significant local velocity variations in the water close to the source. Exactly the same principle is used for the seismic water gun, a source that creates water jets that propagate out of four nozzles. The cavity is filled with water vapour, which turns into water molecules again during the collapse. Already by 1917, Lord Rayleigh had derived an equation relating the collapse time t to the radius R of the cavity: t ≈ R√(ρ/ph), where ρ is the water density and ph is the hydrostatic pressure in water. This means that the bubble period for a water gun should vary as the cubic root of the volume, and this has been confirmed by experiments. Some of the high-frequency signal created by airguns is assumed to originate from small cavities collapsing close to the surface of the airgun. It is believed that the geometrical design of the airgun influences the amount of such cavities. The peak amplitude of a water gun signal is approximately inversely proportional to the hydrostatic pressure, which means that the acoustic signal is stronger for shallow-towed water guns. Correspondingly, we should expect that cavity noise from airguns decreases with depth. Today, water guns are rarely used in seismic acquisition, because they are less repeatable and more impractical than airguns (although the pistol shrimp is still using this fascinating technology). In order to create a cavity, a minimum value for the water jet velocity is required. Using the Bernoulli equation and assuming zero water velocity outside the water jet, we get the following approximation for this critical velocity: vc = √(2ph/ρ). For a water gun, a vessel propeller, or a pistol shrimp at 5m depth the critical velocity is 17 m/s. The water jet created by the shrimp is approximately 27 m/s, well above this critical velocity. For a water gun, typical shuttle velocities might be as high as 60 m/s, also well above this critical limit. This critical velocity increases with depth, so that at 100m depth the critical velocity is 47 m/s.

Figure 3.64: The S15 water gun.

Figure 3.65: Operation of an S15 water gun. Air pushes water through the portholes, creating cavities that collapse and create a strong, high-frequency acoustic signal.
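A hedged numerical sketch of the two relations above. Here we take ph to be the absolute ambient pressure (hydrostatic plus atmospheric) – an assumption on our part, but one that reproduces the 17 m/s and 47 m/s figures quoted in the text – and use Rayleigh’s collapse-time expression with its usual 0.915 prefactor:

```python
import math

RHO = 1000.0       # water density (kg/m^3)
P_ATM = 101_325.0  # atmospheric pressure (Pa)
G = 9.81           # gravitational acceleration (m/s^2)

def ambient_pressure(depth_m):
    """Absolute ambient pressure at depth: hydrostatic plus atmospheric."""
    return P_ATM + RHO * G * depth_m

def critical_jet_velocity(depth_m):
    """Bernoulli estimate of the jet speed needed for cavitation: v = sqrt(2*p/rho)."""
    return math.sqrt(2.0 * ambient_pressure(depth_m) / RHO)

def rayleigh_collapse_time(radius_m, depth_m):
    """Rayleigh cavity collapse time, t ≈ 0.915 * R * sqrt(rho/p)."""
    return 0.915 * radius_m * math.sqrt(RHO / ambient_pressure(depth_m))

print(round(critical_jet_velocity(5)))     # ~17 m/s, as quoted in the text
print(round(critical_jet_velocity(100)))   # ~47 m/s at 100 m depth
print(rayleigh_collapse_time(0.1, 5))      # collapse time (s) of a 10 cm cavity at 5 m
```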


Figure 3.66: The tiny pistol shrimp is remarkably loud.

Examples | Sound Pressure Level (dB re 1 µPa @ 1m)
Undersea earthquake (magnitude 4.0 on Richter scale) . . . 272
Chemical explosive, charge size 1 kg . . . >270
Lightning strike on water surface . . . 260 (approx.)
Airgun array (peak-to-peak) . . . 260–240 (p–p)
Seafloor volcanic eruption . . . 255+
Sperm whale clicks . . . 236–202 (rms)
Low-frequency naval sonar (100–500 Hz) . . . 215
Supertanker (length 340m; speed 20 knots) . . . 190
Blue whale (vocalisations: low-freq. moans) . . . 190 (avg. 145–172)
Pistol shrimp . . . 190
Fin whale (vocalisations: pulses, moans) . . . 188 (avg. 155–186)
Offshore drill rig . . . 185
Open ocean ambient noise (sea state 3–5) . . . 100–74
Auditory threshold of a diver at 1 kHz . . . 67

Table 3.3: Examples of sound pressure levels in decibels of natural and human-made noises in the sea. All the above are nominal total broadband power levels in the 20–1,000 Hz band – levels that would be measured by a single hydrophone (reference 1 μPa @ 1m) in the water. To convert to sound pressure of a similar intensity in air, subtract 62 dB.

3.7.4  Loudest Animals in the Sea
The sperm whale, with the largest nose of any animal, makes the loudest sounds of any living source. Its clicks, used for both echolocation and communication, can have extremely intense source levels of up to 236 dB re 1 μPa (rms) at the standard reference distance of 1m, with dominant frequency at 15 kHz. In Section 3.8 we discuss studies of sperm whales in relation to seismic surveys. The blue whale, the largest living animal, is also loud. Its low-frequency calls have been measured at 190 dB and can travel in deep-water sound channels for thousands of kilometres. The pistol, or snapping, shrimp, 2 cm in size and native to the warmer waters of the Mediterranean, competes with the blue whale in producing loud sounds. The shrimp stuns its prey by snapping its claws together to create a deafening ‘crack’ – allowing it to move in for the kill – and it all happens in a fraction of a second. The claw click creates a cavitation bubble that generates acoustic pressures of up to 218 dB at a distance of 4 cm from the claw. The pressure is strong enough to kill small fish, and is equivalent to a zero-to-peak source level of around 190 dB re 1 μPa @1m. With the fastest ‘gun’ in the sea, the pistol shrimp is responsible for a surprising amount of the noise in the ocean. The snapping has even been known to disrupt the navigation equipment on submarines.
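The shrimp numbers illustrate how a level measured at one distance is referred back to the standard 1m reference distance under spherical spreading. A short, purely illustrative Python sketch:

```python
import math

def refer_to_1m(level_db, measured_at_m):
    """Refer a level measured at some range back to 1 m, assuming
    spherical (20*log10 of distance ratio) spreading."""
    return level_db - 20.0 * math.log10(1.0 / measured_at_m)

# Pistol shrimp: ~218 dB measured 4 cm from the claw
print(round(refer_to_1m(218.0, 0.04)))  # ~190 dB re 1 µPa @1m, as quoted above
```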


3.8  Marine Mammals and Seismic Surveys
The 1956 documentary The Silent World by Jacques-Yves Cousteau wowed audiences with its vibrant depiction of aquatic life – but the ‘silent world’ is far from silent.

Consider the subtleness of the sea; how its most dreaded creatures glide under water, unapparent for the most part, and treacherously hidden beneath the loveliest tints of azure… Consider all this; and then turn to this green, gentle, and most docile earth; consider them both, the sea and the land; and do you not find a strange analogy to something in yourself?
Herman Melville (1819–1891), Moby-Dick; or, The Whale

Sound is signal or noise. A listener will define sounds of interest as signals and everything else that might interfere with those signals as noise, especially if disturbing or unpleasant. For example, seismic operators consider their airgun array to produce a signal, while we may guess that marine mammals are likely to consider it to be noise. Marine mammals use sound to communicate with one another, sense their environment, and find food. They obtain information about the environment by listening to sounds from natural sources, such as surf noise, which indicates the presence and direction of a shoreline or shoal, ice noise, and sounds from predators such as killer whales. Furthermore, toothed whales use echolocation sounds to sense the presence and location of objects, like prey. For similar reasons, humans use the advantages of sound in the oceans for communication, for navigation, and to search for food using fish-finding sonar. Because marine mammals rely on sound for communication, navigation, and localising and catching prey, we have for many years wondered how man-made sound in the sea could affect marine life. Man-made sound raises background noise levels and thus may prevent detection of other sounds important to the health and behaviour of marine mammals. Also, man-made noise could interfere negatively with the mammals’ calls and echolocation pulses. Anthropogenic sounds which set acoustic ‘footprints’ in the oceans come from many human activities. Ships are a major source of noise in the ocean, and industrial activity, including platform construction and drilling, also contributes to the noise levels. These anthropogenic sounds fill the whole range of frequencies of the natural sounds, and thus have a possible masking effect on the natural sounds, which may limit the ability of marine mammals to detect sound cues in their environment.

3.8.1  Whale Stranding
Every year thousands of whales beach themselves. Multiple strandings in one place are rare, but when they happen they often attract media coverage as well as rescue efforts. The whales that most frequently mass strand are toothed whales, which normally inhabit deep waters and live in large, tightly knit groups. Their social organisation appears to be a key factor influencing their chances of mass stranding, as, if one gets into trouble, its distress calls may prompt the rest to follow and beach themselves alongside. However, many other, sometimes controversial, theories have been proposed to explain stranding, but the question as to why they do it remains unresolved. Naval activities, such as live-ammunition training, vessel noise and explosions, introduce noise into the oceans. The naval activity that has been subject to the most scrutiny is mid-frequency sonar, which can produce sound at levels of up to 237 dB re 1 μPa @ 1m at frequencies between 2 and 8 kHz. A controversial issue is the extent of the relationship between sonar operations in nearshore areas and the relatively rare stranding of beaked whales. Sonar operations have been correlated temporally and spatially with strandings of 14 beaked whales in the Bahamas in 2000, but why these whales actually swam onto the beach is not understood.

Figure 3.67: In the novel Moby-Dick; or, The Whale by Herman Melville, Ishmael narrates the quest of Ahab, captain of the whaler Pequod, for revenge on the albino sperm whale Moby-Dick, which on a previous voyage destroyed Ahab’s ship and severed his leg at the knee.

3.8.2  Can Seismic Affect Marine Mammals?

Figure 3.69: Sperm whale diving beneath the surface of the water. In large male sperm whales, one third of the body length is dedicated to the huge nose, the world’s largest and most powerful biological sound generator. Every second, the whale emits a directional click from its nose to create an acoustic picture of its surroundings, allowing it to navigate and locate prey – a process called echolocation. Sperm whales also make distinct patterns of clicks for conversation. The actual clicking mechanism is not yet fully understood.


Can seismic airgun arrays have harmful impacts on marine mammals? Could airgun noise mask communication, or hamper their ability to identify and catch prey? Could airgun activity cause populations to move from preferred habitats, feeding grounds, breeding and resting areas during movements along migratory pathways? Biologists, acousticians, geophysicists, and government regulatory technical managers do not agree on a simple and definitive answer. Therefore, in 2005, an international E&P Sound and Marine Life Joint Industry Programme was formed with members from the International Association of Oil and Gas Producers, with the goal of obtaining scientifically valid data on the effects of sounds produced by the E&P industry on marine life. The research is carried out by prominent marine mammal researchers and institutions from around the world (see www.soundandmarinelife.org). However, most experts in the field believe that there is a very low probability of physical damage to the hearing of any species of marine mammals from seismic surveys.

Figure 3.68: Volunteers attempt to rescue beached pilot whales at Farewell Spit, New Zealand, 2006.

3.8.3  Sound From Airgun Arrays
Let us look at some facts. An airgun array consists of many single airguns distributed in a pattern. When geophysicists report the pressure sound level from an airgun array, they refer the level to a hypothetical point source that would radiate the same pressure sound level in the far-field as the physical array. The far-field, where the acoustic output appears to be coming from a single point source, is typically 150–200m beneath the centre of the array. Therefore, when geophysicists state that airgun source arrays produce sound levels equivalent to 260–240 dB re 1 μPa @1m (peak-to-peak, or p–p), they are referring to the levels one metre away from the hypothetical point source, but since this is not actually a point source, the sound level 260–240 dB will never be realised in the water. No marine mammal can possibly be exposed to the pressure levels quoted a metre from the theoretical point source.

Thus, the point source model is convenient, but only has validity in the far-field of the array. If the point-source sound pressure level is stated to be 260 dB re 1 μPa @1m (p–p), we can predict using the model of geometrical spreading that the pressure measured in the far-field at 100m is (260 – 20 log 100) dB, or 220 dB (p–p). The peak-to-peak level measures the entire height of the airgun array signal. When characterising noise, it is common to measure the root-mean-square (rms) level, which gives the average of the pressure signal over a given duration. For airgun signals, subtract 18 dB from the p–p level to obtain an estimate of the rms level, so the above point-source pressure level of 260 dB re 1 μPa @1m (p–p) corresponds to 242 dB re 1 μPa @1m (rms). Most of the sound energy from seismic sources is at frequencies below 200 Hz. Single airguns generate signals in the frequency range of 5–200 Hz, while the combined signal from airgun arrays is in the order of 5–150 Hz. The sound pressure for individual frequencies, or bands of frequencies, varies, but the maximum sound level falls between 10 and 80 Hz.
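The spreading and p–p to rms conversions above are easy to script. A minimal Python sketch, using the nominal numbers from the text:

```python
import math

def far_field_level(source_level_db, range_m):
    """Point-source (geometrical spreading) prediction: SL - 20*log10(r)."""
    return source_level_db - 20.0 * math.log10(range_m)

def pp_to_rms(pp_db):
    """Rule of thumb from the text: rms level ≈ peak-to-peak level - 18 dB."""
    return pp_db - 18.0

sl_pp = 260.0                       # dB re 1 µPa @1m (p-p), nominal array level
print(far_field_level(sl_pp, 100))  # 220.0 dB (p-p) at 100 m, as in the text
print(pp_to_rms(sl_pp))             # 242.0 dB re 1 µPa @1m (rms)
```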

Figure 3.70: Sperm whale clicks, recorded presumably on axis (top) and about 20° off axis (bottom). Adapted from Møhl et al., 2003.

3.8.4  Sperm Whale Clicks Are Intense!
The clicks generated by sperm whales are used for both echolocation and communication. From the mid-1960s, the clicks were described as multi-pulsed, long-duration, non-directional signals of moderate intensity, with levels of about 170–180 dB re 1 μPa (rms) and a spectrum peaking in the 2–8 kHz range.


Figure 3.71: This pretty pattern is a visual representation of songs sung by the northern minke whale. The image was produced by engineer Mark Fischer, who converted the sound frequencies of the whale song into a graph using a mathematical process known as wavelets. Minke whale songs are produced by the male during the mating season and can travel long distances underwater.


However, studies by Møhl and co-authors, working in the Bleik Canyon offshore Norway, give new insights. The clicks recorded on large-aperture hydrophone arrays are monopulsed, lasting 100 μs, and have pronounced directionality, with extremely intense (on-axis) source levels of up to 236 dB re 1 μPa (rms), with spectral emphasis at 15 kHz. The measured sound levels make sperm whale clicks by far the loudest of sounds recorded from any biological source. The fact that sperm whales themselves generate sounds at rms levels only 6 dB lower than the loudest levels generated by seismic surveys (although their spectral peak is at about 10 kHz, rather than in the 10–80 Hz range of seismic surveys) has led experts to believe that seismic operations probably do not cause any problems for sperm whales. This position is further backed by sperm whale studies in the Gulf of Mexico (Caldwell, 2002). Marine seismic acquisition will probably always be under intense scrutiny by environmental groups. However, one should not jump to conclusions, and discussions should continue on exactly how much sound is acceptable, while seismic operators must comply with regulations. Scientific research that will increase our knowledge base is ongoing. It is critical that stringent mitigation measures imposed on marine seismic exploration companies are based on sound science, not speculation.

Whales and Dolphins


Whales and dolphins belong to the mammalian order Cetacea, which is subdivided into two suborders, Odontoceti and Mysticeti.

Odontocetes are the toothed whales and dolphins, the largest being the sperm whale, followed by Baird’s beaked whale and the killer whale, called such because it feeds on warm-blooded prey, and occasionally even hunts other whales. Mysticetes, by contrast, are ‘toothless’. In place of teeth they have rigid brush-like whalebone plate material called baleen, used to strain shrimp, krill and zooplankton, hanging from their upper jaw. All the great whales are mysticetes or baleen whales.

Underwater, visibility is limited, and acoustics play an important role in the life of all cetaceans. Whales and dolphins emit a wide variety of signals, utilising a frequency range from about 15 Hz (blue and fin whales) to over 100 kHz, used by a number of odontocetes when echolocating and emitting burst pulses. A rule of thumb is that larger animals tend to emit lower frequency sounds and smaller animals emit higher frequency sounds.

The use of a particular frequency band has implications as to the distance over which other animals can hear the sound. Acoustic propagation losses are related to geometrical spreading and absorption. Geometric spreading loss is frequency independent, while sea water absorption loss varies with frequency. But absorption is important only for very high frequency sound, with absorption loss negligible for frequencies below 5 kHz over ranges of more than 100 km. Thus, the sounds of baleen whales, which are mostly below 1 kHz, will not experience much absorption loss. Large whales are known to communicate over very large distances, possibly several hundred kilometres. These sounds appear to serve predominantly social functions, including reproduction and maintaining contact, but they may also play some role in spatial orientation.

The sounds of dolphins, on the other hand, are generally above 20 kHz, which limits their communication distance. They also emit a variety of sounds like whistles, burst pulses, and echolocation clicks, extending over a wide high-frequency range from about 5 kHz to over 135 kHz.

Figure 3.72: Dolphins playing.
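The frequency dependence of sea water absorption referred to in the box can be made concrete with Thorp’s classic empirical attenuation formula – our choice of illustration, since the box itself does not specify a formula. It gives attenuation in dB/km for frequency in kHz:

```python
def thorp_absorption_db_per_km(f_khz):
    """Thorp's empirical sea water attenuation (dB/km), frequency in kHz."""
    f2 = f_khz * f_khz
    return (0.11 * f2 / (1.0 + f2)
            + 44.0 * f2 / (4100.0 + f2)
            + 2.75e-4 * f2
            + 0.003)

# Baleen whale sounds (< 1 kHz) lose only a few dB over 100 km,
# while dolphin bands (tens of kHz) are attenuated by hundreds of dB
for f in (0.1, 1.0, 20.0):  # kHz
    a = thorp_absorption_db_per_km(f)
    print(f"{f:5.1f} kHz: {a:7.3f} dB/km -> {100 * a:7.1f} dB over 100 km")
```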


3.9  The Hearing of Marine Mammals

Oh, the rare old Whale, mid storm and gale,
In his ocean home will be
A giant in might, where might is right,
And King of the boundless sea.
Anonymous

Figure 3.73: The Maldives is home to a particularly rich whale and dolphin fauna: 23 species of whales and dolphins have now been recorded.

We discuss sound pressure levels in terms of frequencies, partly because this is how our ears interpret sound. What we experience as ‘lower-pitched’ or ‘higher-pitched’ sounds are pressure vibrations having a low or high number of cycles per second, or frequencies. Hearing sensitivity, and the frequency range over which sound can be heard, varies greatly from species to species. The human ear has evolved to detect frequencies of sounds that are most useful to humans, and has a maximum frequency range of about 20 Hz to 20 kHz. Infrasonic describes sounds that are too low in frequency, and ultrasonic those too high in frequency, to be heard by the human ear. However, for many fish, sounds above 1 kHz are ultrasonic. For those marine mammals that cannot perceive sounds below 1 kHz, much of the signal of an airgun may be infrasonic. These considerations indicate the importance of considering hearing ability when evaluating the effect of underwater signal or noise on marine animals. In this section we give an introduction to the auditory capabilities of marine mammals.

All marine mammals have special adaptations of the external and middle ear consistent with deep, rapid diving and long-term submersion, but they retain an air-filled middle ear and have the same basic inner-ear configuration as terrestrial species. Each group has distinct adaptations that correlate with both their hearing capacities and with their relative level of adaptation to water.

Animal | Low (Hz) | High (Hz)
Humans | 20 | 20,000
Cats | 100 | 32,000
Dogs | 40 | 46,000
Horses | 31 | 40,000
Elephants | 16 | 12,000
Grasshoppers | 100 | 50,000
Mice | 1,000 | 90,000
Bats | 2,000 | 110,000
Whales | 5 | 200,000
Seals and sea lions | 200 | 55,000

Table 3.4: Frequency range of hearing for humans and selected animals. The bat is the land animal with the broadest hearing span. Marine mammals have a mammalian ear that through adaptation to the marine environment has developed broader hearing ranges than those common to land mammals. As a group they have functional hearing ranges from 5 Hz to 200 kHz.

Figure 3.74: The squeaks that we can hear a mouse make are in the low-frequency end and are used to make long-distance calls, as low-frequency sounds travel further than high-frequency ones. Mice can alert other mice of danger without also alerting a predator like a cat to their presence, if the predator cannot hear their high-frequency distress call.

3.9.1  Measuring Hearing
An audiogram is a graphical representation of hearing thresholds at several different frequencies, showing the extent and sensitivity of hearing. To obtain an audiogram, sound at a single frequency and at a specified level is played to the subject, by means of loudspeakers or headphones in air, or underwater loudspeakers in water. A button is pressed when the tone can be heard; the level of the sound is reduced, and the test repeated, until eventually a level of sound is found where the subject can no longer detect it. This is the threshold of hearing at that frequency. The measurement is typically repeated at different frequencies and the results are presented as the threshold of hearing of the subject as a function of frequency; the subject’s audiogram. Typically plotted on a logarithmic frequency axis, audiograms have the appearance of an inverted bell-shaped curve, with a lowest threshold level (maximum hearing sensitivity) at the base of the curve and increasing threshold levels (decreasing sensitivity) on either side.

3.9.2  How To Test A Dolphin?
The hearing of dolphins or small whales can be tested through traditional behavioural studies or through auditory brainstem response (ABR) experiments. Most behavioural hearing studies are performed on mammals in captivity, so the behavioural hearing information that is available tends to be for the smaller marine mammals such as pinnipeds (seals, sea lions and walruses), sirenians (manatees and dugongs), and odontocetes (toothed whales, dolphins and porpoises). Very few, if any, behavioural hearing studies have been done with the large baleen whales, because they are not kept in captivity, and it is very difficult to perform hearing tests on animals in the wild.

The behavioural studies are in many respects similar to the way we test the human hearing threshold. The animal is schooled to stay underwater while a sound is played. If it hears the sound, it is trained to respond in a particular way, and if it does not hear it, or if no sound is played, it responds differently. Each time a sound is presented and the animal is right, it is rewarded. The sound is reduced until the animal can no longer hear, and ‘says’ it cannot, at which point the sound level is gradually increased until the animal indicates that it can hear it. By playing lots of different frequencies (pitches), it is possible to determine the threshold point at which the animal can just barely hear for each frequency. In this way the scientists can determine what frequencies and sound levels are audible to different animals.

Figure 3.75: Dolphins are highly intelligent animals and can be trained to respond to tones, using the same step procedure as when testing humans. The US Navy also trains them to detect underwater mines, deliver equipment to divers and locate lost objects. This bottlenose dolphin is holding a bite-plate, which can contain surveillance equipment or be used to hook a tethered line onto an underwater object.

The ABR hearing measurement, which is also used to measure hearing in human babies just after they are born, is a way to study what a whale hears through the detection and recording of electrical impulses in the brain that occur in response to sound. It is harmlessly measured from the surface of the animal’s skin with gold EEG sensors. The ABR test is powerful because it can be done quite quickly compared to behavioural hearing methods and because it can be performed with untrained or stranded animals.

The hearing functions of marine mammals are also studied by conducting anatomical examinations of dead animals. By examining the air-filled middle ear and fluid-filled inner ear, scientists have been able to estimate the range of frequencies that an animal may be able to hear. Much of our knowledge of mysticete (baleen whale) hearing has come from these anatomical studies.

Figure 3.76: Reported hearing threshold for white whales (various experiments A, B, C, D, E and F) and bottlenose dolphins (red triangles G). Inevitably, marine animals will have varying acuity of hearing between individuals. Consequently, the number of individuals tested in any given audiogram measurement has to be sufficient to establish reasonable confidence in the quality of the measurement. The white whale audiograms A–F have been measured by different authors, under different experimental conditions, using individuals drawn from different stocks, thus increasing the degree of confidence in the data. Diagram from http://www.onlinezoologists.com/cs/node/21 by Wesley R. Elsbury.

3.9.3  Auditory Capabilities
Odontocetes, like bats, are excellent echolocators, capable of producing, perceiving, and analysing ultrasonic frequencies. In general, odontocetes have a hearing bandwidth of 100 Hz to 180 kHz, with the most sensitive hearing in the high-frequency range of 10 kHz to 65 kHz, where their hearing threshold is 45 to 55 dB re 1 μPa. In the low frequencies below 1 kHz, where airgun sound is concentrated, toothed whales have a very high hearing threshold of 80 to 130 dB re 1 μPa. There is, however, considerable variability within and among species. Sperm whales, beaked whales and dolphins are said to sense sound from 75–100 Hz if loud enough. Dolphins are renowned for their acute hearing sensitivity, especially in the frequency range 5 to 50 kHz. Several species have hearing thresholds between 30 and 50 dB re 1 μPa in this frequency range. The killer whale has a sound pressure level threshold of 26 dB re 1 μPa @ 15 kHz.

Mysticetes have a very different hearing capability compared to toothed whales. Due to their immense size, these mammals cannot be kept in captivity for study like toothed whales. Little information exists on their hearing, and further information is unlikely to be obtained in the near future. In the limited studies done, baleen whales reacted primarily to sounds at low frequencies in the 20 Hz to 500 Hz range. While this is their most sensitive hearing range, the hearing bandwidth for baleen whales is believed to range from 5 Hz to above 20 kHz. To provide predictions, models based on anatomical data indicate that the functional hearing range for mysticetes commonly extends to 20 Hz, with several species expected to hear well into infrasonic frequencies. The upper functional range for most mysticetes has been predicted to extend to 20–30 kHz.

Figure 3.77: The scientific data collected in this composite audiogram shows that mysticetes have the most sensitive low-frequency hearing of all marine mammals, with an auditory bandwidth of 5 Hz to 22 kHz. They have good sensitivity from 20 Hz to 2 kHz. Their threshold minima are unknown, but speculated to be 60–80 dB re 1 μPa. The thresholds for odontocetes and pinnipeds are a composite of measured lowest thresholds for multiple species. Odontocetes have an auditory bandwidth of 100 Hz to 180 kHz. Pinnipeds listening in water have an auditory bandwidth of 75 Hz to 100 kHz. The vertical axis is threshold (dB re 1 μPa) and the horizontal axis is the frequency of a sound on a logarithmic scale; curves are shown for mysticetes, odontocetes, pinnipeds and a human diver. (Modified from Office of Naval Research, 2001. Final Environmental Impact Statement for the North Pacific Acoustic Laboratory, May 2001.)

Pinnipeds have a similar hearing bandwidth to toothed whales, 75 Hz to 100 kHz, but their most sensitive hearing is at middle frequencies of 1 kHz to 30 kHz, where their hearing threshold is 60 to 80 dB. In the low frequencies below 1 kHz, where the airgun signal is concentrated, hair seals have a high hearing threshold of 80 to 100 dB. Man-made sound underwater can cover a wide range of frequencies and levels of sound, and the way in which a given mammal reacts to the sound will depend on the frequency range it can hear, the level of the sound and its spectrum.
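As a purely illustrative exercise – not a risk assessment, and not part of the original text – one can combine the nominal rms airgun source level from Section 3.8.3 with the low-frequency thresholds quoted above, to see that under simple spherical spreading an airgun signal remains well above the hearing threshold of toothed whales at considerable range:

```python
import math

def received_level(source_level_rms_db, range_m):
    """Spherical-spreading estimate: received level = SL - 20*log10(r).
    Absorption is neglected, which is reasonable at airgun frequencies (< 200 Hz)."""
    return source_level_rms_db - 20.0 * math.log10(range_m)

SL_RMS = 242.0     # dB re 1 µPa @1m (rms), nominal array level from Section 3.8.3
THRESHOLD = 130.0  # dB re 1 µPa, highest low-frequency threshold quoted for toothed whales

for r in (100, 1_000, 10_000):
    rl = received_level(SL_RMS, r)
    print(f"{r:>6} m: ~{rl:.0f} dB re 1 µPa (rms); above threshold: {rl > THRESHOLD}")
```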

3.10  Fish Are Big Talkers

Don’t tell fish stories where the people know you; but particularly, don’t tell them where they know the fish. Mark Twain

3.10.1  Sound Production
Like us, fish produce sound both unintentionally and intentionally. Unintentional sounds from fish come all the time, resulting from hydrodynamic patterns during swimming and feeding. These sounds can provide information to other fishes. In fact, the fishing industry has recognised predatory fishes’ ability to take advantage of unintentional sounds to find prey, and has designed fishing lures that emit low-frequency sounds that mimic those produced by injured prey fish. Intentional sounds are created to stay in touch with the shoal, to warn other fish of danger, to attract, communicate with and stimulate mates, and to scare intruders away from eggs and young.


Of the 33,400 species of fish living today, we know that more than 1,000 make sounds, ranging in frequency from 50 to 8,000 Hz. For most fish, the sonic mechanism is a muscle that vibrates the swim bladder, not unlike our vocal cords. For many other species, the sonic mechanism remains a mystery.

The sounds that fish make are usually simpler than the calls of marine mammals. Fish sounds have been described as grunts, scrapes, knocks, clicks, squeaks, groans, rumbles and drumming. Nearly all the fish species produce their sounds using their teeth or their swim bladder, or a combination of both. Some fish are capable of making very loud sounds. One of the noisiest is the oyster toadfish – a bottom-dwelling fish found along the east coast of North America from the West Indies to Cape Cod. The toadfish is thought to make two types of calls: a grunting sound when it is aggressive or frightened, and a loud foghorn-like call, heard underwater for great distances, to attract a female during the spawning season. Many fish take advantage of the noises other species make. We know that some sharks use sound to help them locate prey, while some smaller fish can detect the sounds larger predators make in their hunting. Furthermore, it is believed that a few fish species, including herrings and American shads, can detect the ultrasonic echolocation sound produced by hunting dolphins from a distance of up to 200m.

Figure 3.78: The oyster toadfish doesn’t need good looks to attract a mate – just a nice voice. To generate the foghorn sound, the toadfish contracts its sonic muscle against its swim bladder thousands of times a minute. At nearly three times the average wing beat of a hummingbird, toadfish have the fastest known muscle of any vertebrate.

3.10.2  Cod You Believe It?


The cod is believed to speak primarily on special occasions. We do not hear much from it, but if aggressive or while spawning it is very vocal, producing a number of sounds via the swim bladder. These are mainly short-duration low-frequency pulses, described as grunts or bops, with frequencies in the range of 50–500 Hz. In the spawning season the male makes pulsed, low-frequency sounds to scare away other fish. In contrast, the cod’s courting and spawning sounds consist of long series of pulses given with increasing frequency. During the spawning act the sound resembles continuous low-frequency sound – not unlike the sound from a Harley Davidson motorbike! Lately, it has been speculated that they use the continuous low-frequency sound to synchronise the fertilisation process.

Larger Arctic cod caught during the spawning season offshore Norway are commonly called skrei, probably from the Norse word skrida, to wander. The skrei undergo seasonal wanderings, inhabiting the Barents Sea in the summer and autumn and migrating every December to January southwards to their spawning grounds between Finnmark and western Norway. The most important of these grounds are in Lofoten and Vesterålen.

Figure 3.79: The Atlantic cod can sometimes be very vocal, especially when spawning.

3.10.3  Herring-made Bubble Screens
The sound production of Pacific and Atlantic herring is poorly understood. It has been shown that the herring produce distinctive bursts of pulses, termed fast repetitive tick (FRT) sounds. These trains of broadband pulses, 1.7–22 kHz, last for a period of 0.6 to 7.6s. The function of these sounds is unknown, but social mediation appears likely. It is conceivable that killer whales use the distinctive herring sounds as foraging cues.

In Norwegian waters in the late autumn, billions of herring migrate from open oceanic waters into deep fjords to spend the winter season waiting for spring to approach. In February, they migrate southwards to their spawning grounds, and then out to open waters again. Flocks of killer whales follow the herring to feed on their favourite prey. Often the whales dive to over a hundred metres to drive herring up to shallower waters, forcing the fish into tight groups while preventing them from escaping to deeper, darker and safer water. During this process the whales emit echolocation clicks, click bursts, and whistles, some of which may help to tighten the herring school or coordinate whale movements. At the right moment, individual whales swim into the herd of fish and perform tail slaps that produce thud-like sounds. Apparently stunned by the tail slap, many fish turn belly-up, remain motionless, and become an easy catch.

Figure 3.80: A Norwegian killer whale herding herring.

Attacks from killer whales have been documented in Vestfjorden, in northern Norway, using multibeam sonar and echosounder. It was observed that herring schools were forced from large depths up to the surface by killer whales and saithe, after which the herrings expelled gas from their swim bladder via the anus as a consequence of the rapid change in depth, thereby producing a curtain of tiny bubbles around the school. The bubbles may confuse and deflect the killer whales, both visually and acoustically, due to increased scattering of light, reduced range of vision, and the confounding effects of the reflection energy of bubbles and fish. Under the cover of their bubble curtain, the herring have a chance of escaping.


Figure 3.81: Killer whale tail slap extracted from video recordings (top) and the corresponding sound track (bottom). The letters of the video frames correspond to the times illustrated in the sound track. The clicks before and after the underwater tail slap are killer whale echolocation clicks (arrows). Adapted with permission from Simon et al., 2005, Journal of Experimental Biology 208, 2459–2466.

3.10.4  The Killer Whale’s Tail Slap

Norwegian killer whales debilitate prey by slapping their tails into herring schools. It has been suggested that the thud-like sound produced by the tail slaps is caused by cavitation. Single pulses measured from such tail slaps have waveforms and spectral characteristics very similar to those from the clicks of pistol shrimps, which are known to be produced by cavitation. The pistol shrimps produce broadband sounds with frequencies beyond 200 kHz and with a peak frequency in the range of 2–5 kHz. The source level is between 183–191 dB (p–p) re 1μPa @ 1m. The sound of killer whale tail slaps contains frequencies beyond 150 kHz, with peak frequencies below 10 kHz. The fact that the source levels of tail slaps are 186 dB (p–p) suggests similarity between the sound production mechanism of killer whale tail slaps, snapping shrimp, and a cavitating propeller.

3.10.5  Man-made Bubble Screens
Underwater bubbles inhibit sound transmission through water due to density contrast and concomitant reflection and absorption of sound waves. Therefore, it is not only herrings that exploit bubble curtains. Man-made bubble curtain systems that produce bubbles in a deliberate arrangement in water to attenuate unwanted wave trains, like water layer reverberations, were tested in the 1950s. In the 1970s it was proposed that a bubble screen above the source towed behind a submarine acquiring seismic below ice would reduce the effect of seismic scattering from the ice layer that was disturbing reflections from the sub-seabed. In the late 1990s, in the shallow waters of western Hong Kong, a bubble screen was used to reduce the sound of pile driving in a hump-backed dolphin habitat. Although the bubble curtain did not eliminate all behavioural dolphin responses to the loud noise, the experiment and its application during construction represented a success, as the broadband pulse levels were lowered by 3–5 dB at a range of a kilometre.

Figure 3.82: By placing a bubble curtain at the bounce point of the multiple on the air-water interface, the idea is to suppress multiples in seismic acquisition. In 2002, ExxonMobil and PGS conducted an offshore field test which established that the curtain could have sufficient reflectivity to redirect upcoming acoustic energy away from the seismic sensors so that it would not be recorded as multiples.

3.10.6  Fisheries and Seismic
The potential impacts of seismic activities on fisheries can be divided into two types: acoustic disturbance of fish, and conflicts of interest over use of the same areas. Therefore, seismic acquisition is regulated so that it will cause as little inconvenience as possible to the fishing industry, and to avoid the spawning periods.

137

3.11  Fish Hear a Great Deal

To understand how human-generated sounds affect fish, it is necessary to understand how and what fish can hear, and how they respond to different types of sounds. The majority of fish species are known to be able to detect sounds from below 50 Hz up to 500 or even 1,500 Hz. A smaller number of species can hear sounds above 3,000 Hz, while a very few can detect sounds well over 100 kHz. Somewhat loosely, fish are classified as either hearing generalists or hearing specialists. Hearing generalists are fish with the narrower bandwidth of hearing – typically able to detect sounds up to 1 or 1.5 kHz. Specialists have a broader hearing range, detecting sounds above 1.5 kHz. Furthermore, where specialists and generalists overlap in frequency range of hearing, the specialists usually have lower hearing thresholds than the generalists, meaning that they can detect quieter sounds.

Fish species with either no swim bladder (e.g. elasmobranchs, the collective name for sharks, skates and rays) or a much reduced one (many benthic species living on, in, or near the seabed, like flatfish) tend to have relatively low auditory sensitivity, and generally cannot hear sounds at frequencies above 1 kHz. The sound pressure threshold can be as high as 120 dB re 1 μPa (hereafter dB) at the best frequency. Such fishes are therefore hearing generalist species. Fish without swim bladders are only sensitive to the particle motion component of the sound field.

Fish having a fully functional swim bladder have increased hearing sensitivity, especially when there is some form of close coupling between the swim bladder and the inner ear. These couplings transmit oscillations of the swim bladder wall in the pressure field to the inner ear. With the ability to perceive also the pressure component of sound, these fish are referred to as hearing specialists. In the clupeids (common food fish like herrings, shads, sardines and anchovies), the coupling takes the form of a gas-containing sphere (prootic bulla) connecting the swim bladder to the hearing system. This considerably lowers their hearing thresholds and extends the hearing bandwidth to higher frequencies, up to several kHz.

Figure 3.83: Fish species with no swim bladder, like the ray, tend to have relatively low auditory sensitivity. Lukas Blazek/Dreamstime.com


Figure 3.84: Goldfish are otophysan fishes and therefore possess Weberian ossicles that allow sound pressure waves impinging upon the swim bladder to be carried directly to the ear, leading to sensitive hearing (wide frequency range and relatively low thresholds of around 60 dB). Goldfish hear above 3 kHz, with their best hearing between 500–1,000 Hz. Alexstar/Dreamstime.com

In otophysan fish (e.g., carps, minnows, channel catfishes, and characins; the majority of freshwater fish worldwide), a bony coupling is formed by the Weberian ossicles. These bones allow them to use their swim bladder as a sort of drum to detect a greater range of sounds, and create a super-league of hearing-sensitive fish.

3.11.1  Fish Sensory Systems

Fish have evolved two sensory systems to detect acoustic signals: the ear and the lateral line. Fish do not need an outer or middle ear, since the role of these structures in terrestrial vertebrates is to funnel sound to the ear and overcome the impedance difference between air and the fluids of the inner ear. Since fish live in, and have nearly the same density as, water, there is no impedance difference to overcome. Fish do have an inner ear, which is similar in structure and function to the inner ear of terrestrial vertebrates.

Figure 3.85: (a) The fish inner ear, with three semi-circular canals and three otolith organs. (b) Schematic cut through an otolith organ, showing the otolith, the membrane, the sensory hair cells, the surrounding endolymph and the nerve. Lasse Amundsen (modified from Karlsen (2010))

The most important similarity between the ears of all vertebrates is that sound is converted from mechanical to electrical signals by the sensory hair cells that are common to them all. Extremely high-intensity sounds are able to fatigue or damage these cells, resulting in temporary or permanent hearing loss. However, fish continue to add sensory hair cells throughout their lives. In addition, there is evidence that fishes can repair sensory cells that have been fatigued by sound exposures that cause a shift in auditory thresholds.

Fish will move along with the sound field from any source. While this might result in the fish not detecting the sound, the inner ear also contains dense calcium carbonate structures – the otoliths – which move at a different amplitude and phase from the rest of the body, thereby stimulating sensory hair cells. This system of relative motion between the otoliths and the sensory hair cells acts as a biological accelerometer and provides the mechanism by which all fish hear. However, in fish with a swim bladder the acoustic sound pressure can indirectly stimulate the fish's inner ear via the bladder. For the stimulation to be efficient, the swim bladder must either be close to or have a specific connection to the inner ear. In one form, a gas bubble makes the mechanical coupling; in another, the inner ear is directly connected to it by a set of small bones called the Weberian ossicles. Since the air in the swim bladder is of a very different density to that of the rest of the fish, in the presence of sound the air starts to vibrate. This vibration stimulates the inner ear by moving the otolith relative to the sensory epithelium. In these cases the fishes are sensitive to both the particle motion and pressure modes of sound, leading to enhanced pressure detection and a broadened frequency response range.

The lateral line consists of a series of receptors along the body of the fish enabling detection of hydrodynamic signals (water motion) relative to the fish. It is involved in schooling behaviour, where fish swim in a cohesive formation with many other fish, and in the detection of nearby moving objects, such as food.

3.11.2  Audiograms

For fish to hear a sound source, the generated sound pressure level should be higher than its auditory threshold and higher than background noise levels from natural sources and anthropogenic sound.

Sound in Water

Any sound source in water produces both oscillations of water molecules – particle motion – and pressure variations – sound pressure. Particle motion can be described as acoustic displacement, particle velocity, or particle acceleration. Sound pressure is the parameter with which most are familiar, since it determines the 'loudness' of a sound to humans and mammals. Far away from a sound source, in the far-field of the source, the ratio of sound pressure (P) to sound particle velocity (V) is constant, equal to the water impedance. But moving closer than around ¹⁄₆ of the wavelength, into the sound's near field, this simple relationship breaks down. The V/P ratio strongly increases with decreasing distance to the source, thus inducing high levels of particle acceleration. However, when a fish is free to swim away from the sound source, it will likely be in the far-field of the source. For a frequency of 1 Hz, the far-field distance is around 250m.

Figure 3.86: The cod has been shown to detect sound pressure at the higher frequencies of its hearing range. At low frequencies, below about 100 Hz, however, the cod is particle motion sensitive. Its swim bladder appears to serve as an accessory hearing structure: oscillations are transmitted through the surrounding tissue to the inner ear, even though there is no apparent specialised anatomical link to the inner ear. The cod possibly detects ultrasound. Astrup and Møhl (1993) indicate that cod has ultrasound thresholds of 185 to 200 dB @ 38 kHz, which probably allows for detection of odontocetes' clicks at distances up to 10 to 30m (Astrup, 1999). Per Eide Studio/Norwegian Seafood Export Council
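The far-field relations in the box above are easy to put into numbers. The short sketch below is a minimal illustration, assuming nominal values of 1,000 kg/m³ for the density of sea water and 1,500 m/s for the speed of sound; the one-sixth-wavelength rule and the plane-wave relation are taken from the box, while the function names are our own.

```python
import math

RHO = 1000.0  # assumed sea water density, kg/m^3
C = 1500.0    # assumed speed of sound in water, m/s

def far_field_distance(frequency_hz: float) -> float:
    """Approximate near-field/far-field transition, taken here
    as one sixth of a wavelength, as stated in the text."""
    wavelength = C / frequency_hz
    return wavelength / 6.0

def particle_velocity_far_field(pressure_pa: float) -> float:
    """Far-field (plane-wave) particle velocity: V = P / (rho * c)."""
    return pressure_pa / (RHO * C)

if __name__ == "__main__":
    for f in (1.0, 10.0, 100.0):
        print(f"{f:6.1f} Hz -> far field beyond ~{far_field_distance(f):7.1f} m")
    # A far-field pressure of 1 Pa corresponds to this particle velocity:
    print(f"V for 1 Pa: {particle_velocity_far_field(1.0):.2e} m/s")
```

For 1 Hz the sketch reproduces the 250m far-field distance quoted above.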

Traditionally, studies of hearing have used behavioural or electrophysiological methods. The qualitative behavioural methods are based on conditioning the fish with acoustic signals in conjunction with either reward (food) or punishment (electric shocks). The electrophysiological methods used to involve inserting electrodes into either the midbrain or auditory end organs of the test fish to record neuronal activities in response to acoustic signals, which requires invasive surgery. In the late 1990s, a non-invasive auditory brainstem response (ABR) method, in which electrodes are placed on the skin of the fish's head, was developed to obtain audiograms of fish, similar to those we described in Section 3.9 to test marine mammals. It has now become a standard method for research into fish auditory physiology.

Fish lacking a swim bladder, or having a swim bladder that is not in close proximity or mechanically connected to the inner ear, are sensitive mainly to sound acceleration. Their audiogram shows a sharp upper frequency limit for hearing at 200–300 Hz. This response is typical for fish species like flounders, flatfish, demersal fish (such as bullheads and sculpins), wolf fish, mackerel, salmonids, redfish and eels. The European plaice is known to be significantly sound sensitive into the infrasonic band (less than 20 Hz), and this probably holds for many of these fish species. Cod, haddock and pollock have similar hearing capabilities and sense sound in the frequency range 0.1–450 Hz. For sound intensities close to the threshold, cod are sensitive to sound pressure in the frequency range 100–450 Hz and to sound acceleration for frequencies below 100 Hz. For sound intensities above the threshold value, cod will detect both sound acceleration and sound pressure over a substantial frequency range, 20–150 Hz. Sound pressure thresholds in cod in the frequency range 60–300 Hz lie between 80 and 90 dB.

The herring family has an upper hearing frequency limit of 1–8 kHz, with an optimum range of 0.6–2 kHz. These hearing specialists are also sensitive to sound pressure towards lower frequencies. Sound-induced escape responses can be triggered from sound pressure by infrasonic sound down to 5 Hz. Recent studies have demonstrated that the inner ear of the shads, a subfamily of the herrings, is specialised to sense ultrasound in the range of 20–120 kHz, with threshold values in the range of 150–160 dB for frequencies at 80–100 kHz. The sensitivity is sufficient for the shad to sense attacking dolphins' ultrasonic clicks, which can have sound pressures up to 220 dB @ 1m. The other subfamilies of herring do not have ultrasound hearing.

Figure 3.87: Fish and human hearing: audiogram curves give the faintest sounds that can be heard at each frequency (threshold level in dB re 1 µPa versus frequency in Hz), and fall into two main categories: hearing generalists like skate, Atlantic salmon, cod, pollock, haddock and channel catfish, with medium hearing ability, and hearing specialists like Atlantic herring, carp and goldfish. Hearing generalist species have a narrower hearing frequency range (less than 1,500 Hz) and a higher hearing threshold (above 100 dB) than hearing specialist fish (up to 8 kHz and down to 60 dB). The level at which fish respond to a sound stimulus can be significantly above the detection threshold. The human auditory system is most sensitive to waterborne sound at frequencies from 400 Hz to 2,000 Hz, with a peak at approximately 800 Hz and at a sound pressure level of 67 dB. The most sensitive frequency range is also the range having the greatest potential for hearing damage.
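To make the audiogram concept concrete, here is one way threshold curves like those in Figure 3.87 can be used programmatically. The sketch is illustrative only: the threshold points are rough values loosely inspired by generalist curves such as the cod's, not measured data, and the log-frequency interpolation is our own simplifying assumption.

```python
import numpy as np

# Illustrative audiogram points (frequency in Hz, threshold in dB re 1 uPa).
# These are rough, made-up values for a generalist fish, not measurements.
GENERALIST_AUDIOGRAM = [(30, 95.0), (60, 85.0), (160, 80.0), (300, 90.0), (450, 110.0)]

def is_audible(frequency_hz, received_level_db, audiogram=GENERALIST_AUDIOGRAM):
    """True if the received level exceeds the threshold, interpolated
    in log-frequency between the tabulated audiogram points."""
    log_freqs = np.log10([f for f, _ in audiogram])
    thresholds = [t for _, t in audiogram]
    threshold = np.interp(np.log10(frequency_hz), log_freqs, thresholds)
    return received_level_db > threshold

print(is_audible(100, 140))  # 140 dB at 100 Hz: well above threshold -> True
print(is_audible(100, 70))   # 70 dB at 100 Hz: below threshold -> False
```

A sound is 'audible' in this toy model only if its level exceeds the interpolated threshold; as the caption notes, the level at which fish actually respond can be significantly higher.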

3.12  Seismic Surveys and Fish

What is known about the impact of seismic surveys on fish and fisheries?

I know that the human being and the fish can coexist peacefully.
George W. Bush, 43rd U.S. President (1946– ), Saginaw, Michigan; 29 September 2000


There is growing interest in understanding the effects of human-generated sound on fish and other aquatic organisms. The sources of anthropogenic sounds are many. Boats and ships are a major source of noise, and sonars, used not only by navies but also by the shipping and fishing industries and the oceanographic research community, are a significant sound source. Pile driving and seismic airgun arrays are also high-intensity sources. When a seismic survey is conducted, fish in the area hear the sound of the airguns and will often react to it, but how they react will depend on many factors, and it is difficult to conduct unambiguous studies in order to map the reaction patterns. Here, we summarise the most recent and, to date, the largest study on this topic, undertaken by the Norwegian Institute of Marine Research (IMR), on the effect of seismic surveying on fisheries (see Løkkeborg et al., 2010).


3.12.1  Studies on Fish and Fisheries

Many studies have been conducted on the effect of seismic signals on fish in all stages of life. These include studies of possible direct damage in the very early stages (eggs and fry), but since adult fish can move away from the sound source if they are disturbed, the studies for this group mostly concern behavioural effects. Organisms can be damaged when exposed to sound pulses with a rapid rise time (i.e. rapidly increasing sound pressure) and a peak value of 230 dB or more. Sound pulses from airguns will often have a relatively slow rise time, and for this reason organisms can tolerate a higher peak pressure from these than from, for instance, underwater explosions. Sound pressures with a peak pressure of more than 230 dB will only occur in the immediate vicinity of the airguns, within a radius of just a few metres. Dalen et al. (1996) concluded that there is such a small amount of eggs and fry present within the danger zone that damage caused by airguns will have no negative consequences on fish stocks. They calculated that the mortality rate caused by airguns might amount to an average of 0.0012% a day; in comparison to the natural mortality rate of 5–15% a day, the effects of seismic-induced damage seem insignificant.

The interaction between petroleum exploration and fish is a central issue in Norway, Australia and Brazil in particular. In Norway, the IMR executes an oil-fish programme with the objective of generating new knowledge about the acute and long-term effects on fish and other marine organisms of discharges of oil to the sea and of chemicals used in drilling and production. It also seeks to obtain new knowledge about the effects of seismic shooting on fish and other marine organisms.

Figure 3.88: Lumpfish larvae, a few weeks old. Fishing for lumpfish dates back several hundred years. The eggs from the lumpfish, found in the Atlantic between January and September, make an inexpensive caviar usually served as an appetizer. Øystein Paulsen, IMR

3.12.2  The Lofoten and Vesterålen Survey

From 29 June to 6 August 2009 the Norwegian Petroleum Directorate (NPD) carried out a 3D seismic survey (approximately 15 x 85 km²) outside Vesterålen, in an area known as Nordland VII, about 250 km south-west of Tromsø in the Norwegian Sea. The period was chosen based on advice from the Directorate of Fisheries, the IMR and the fishermen's organisations, with the aim of conducting the survey during a period when seismic acquisition activity would cause as little inconvenience as possible to the fishing industry, and avoiding spawning periods.

The seismic vessel was equipped with eight streamers and two seismic sources, each with a total volume of 3,500 in³ (57 l). The sources were fired alternately in a 'flip-flop' configuration at 138 bar (2,000 psi) every 10s. A total of 41 lines, around 85 km long and 400m apart, were shot.

At the same time, the NPD initiated and funded a NOK 25 million research project, one of the largest ever conducted, with the task of examining the consequences of seismic data acquisition on the presence of the fish species normally caught in this area: Greenland halibut, redfish, ling, pollock, haddock and saithe (all hearing generalists). The institute commenced its studies 12 days prior to the start-up of the seismic data acquisition, which took 38 days, and continued for 25 days after the activities stopped. As the chartered gillnet and longline vessels were fishing, a research vessel worked with another chartered fishing vessel to map the occurrence of fish and plankton in the area using echosounders and sonar. In addition, stomach specimens were taken from the catches, and recordings were made of the sound from the airguns on the seismic vessel (Løkkeborg et al., 2010).

3.12.3  Sound Levels Recorded

The airgun sound pressure was measured at the seabed by deploying a hydrophone rig at various locations. Figure 3.89a shows the sound pressure level from one seismic survey line relative to the distance from the airgun array. The hydrophone rig was at 184m depth. At the start of the line, about 46 km from the rig, the sound pressure level was measured at 140 dB re 1 µPa, and it kept almost constant until the vessel was 30 km distant. As the vessel approached, the sound pressure level steadily increased, up to 170 dB @ 6 km. Then the level increased more rapidly, and the maximum of 191 dB was obtained when the vessel passed close, at a distance of 500m. The sound level turned out to be somewhat higher with distance after the vessel passed than when it approached the rig, probably due to variation in water depth. (The shallower the water, the stronger will be the acoustic signal recorded in the water column.)

Figure 3.89a: Sound pressure level (peak value, dB rel. 1µPa) at a hydrophone deployed at 184m depth, together with the distance from the source (km), as functions of time. The closest distance is around 500m. Løkkeborg et al (2010)
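To relate these decibel values to physical pressures, the short sketch below, our own illustration rather than part of the IMR study, converts a level in dB re 1 µPa to pascals and back.

```python
import math

P_REF = 1e-6  # reference pressure in water: 1 micropascal, expressed in Pa

def db_to_pascal(spl_db: float) -> float:
    """Convert a sound pressure level in dB re 1 uPa to pascals."""
    return P_REF * 10 ** (spl_db / 20.0)

def pascal_to_db(pressure_pa: float) -> float:
    """Convert a pressure in pascals to dB re 1 uPa."""
    return 20.0 * math.log10(pressure_pa / P_REF)

# The levels reported for the Vesteraalen survey line:
for level in (140.0, 170.0, 191.0):
    print(f"{level:5.1f} dB re 1 uPa  ->  {db_to_pascal(level):8.1f} Pa")
```

The 191 dB maximum thus corresponds to a peak pressure of roughly 3.5 kPa, while the 140 dB level recorded 30–46 km away corresponds to about 10 Pa.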


The measurements show that the fish in the survey area were exposed to varying levels of sound pressure, depending on their distance from the seismic vessel. At 30 km distance from a fish shoal the sound pressure level was 140 dB, which is well above the hearing threshold of cod but still below their threshold for behavioural change. The highest sound level measured at the hydrophone rig was 191 dB, when the vessel passed at a distance of 500m. At this level, fish at this distance would be expected to react strongly if they had not already chosen to swim away, with known reactions including increased swimming activity, startle responses, changes in schooling behaviour, and vertical movement.

Sound measurements in the area of the haddock line catches showed a sound pressure level of maximum 155 dB @ 10 km; see Figure 3.89b. Fish can hear this level, but it will probably not induce behavioural changes. For several weeks the seismic vessel operated many kilometres away from the haddock lines, so the fish were first exposed to low sound levels over a long time. Then the vessel approached the catch area, and the sound level gradually increased.

Fish are known to adjust to external influences. For instance, a novel sound in their environment, like seismic, may initially be distracting, but after becoming accustomed to it their response to it will diminish. This decrease in response to a stimulus after repeated presentations is called habituation. Habituation may have led to the higher response levels for haddock. However, lower line catch rates for haddock as the seismic vessel approached the lines indicate that the haddock reacted to the seismic sound at closer distances.

Figure 3.89b: Sound pressure level (peak value, dB rel. 1µPa) at a hydrophone rig deployed at 73m depth, together with the distance from the source (km), as functions of time. The closest distance is around 10.1 km, when the maximum sound pressure level was 155 dB re 1 µPa. Løkkeborg et al (2010)

3.12.4  Effect of Seismic on Fisheries

The survey clearly indicated that fish react to the sound from the seismic guns by changing their behaviour, resulting in increased catches for some species and smaller catches for others. It appears that pollock and parts of the schools of saithe migrated out of the area, while other species seemed to remain. Analyses of the stomach contents of the fish caught did not reveal changes attributable to the seismic survey. Neither were any changes in the distribution of plankton proven during the seismic data acquisition.

The most probable explanation for both increased and reduced catches for the different species and types of fishing gear is that the sound from the airguns put the fish under some stress, causing increased swimming activity. This would, for example, explain why Greenland halibut, redfish and ling were more likely to go into the net, while longline catches of the same species declined.

However, the results of this study, showing few negative effects of seismic shooting, deviate from the results of previous studies, which demonstrated considerable reductions in the catch rates for trawl and line fishing. In research from the North Cape Bank in the Barents Sea in 1992, reported in Engås et al. (1996, 2002), the seismic acquisition activity was concentrated within a smaller area, 81 km². The vessel was equipped with two streamers and one 5,012 in³ seismic source. There were 36 sail lines, around 18.5 km long, with a separation of 125m, compared to 450m for the Vesterålen survey, entailing a stronger and more continuous sound impact on the fish than in the Vesterålen survey. In terms of the number of shots per square kilometre per hour, the sound influence was approximately 19 times higher in the 1992 survey than in the Vesterålen survey (Løkkeborg et al., 2010).

Seismic has been acquired offshore Norway for almost 60 years. The technology has become more sophisticated since the 1990s, largely as a result of the number of streamers that seismic vessels can tow. A high number of streamers implies that the sail lines have larger spacing; thereby, the number of shots in the area is reduced. This is positive for fish and fisheries. In addition, the sensor systems in the seismic streamers are gradually being perfected, so that streamers can be towed at greater depths where ambient ocean noise is less, thereby perhaps accomplishing seismic shooting with lower decibels of sound.

Figure 3.90: Firemore Bay on the coast of north-west Scotland was the site of a 2001 study on the impact of seismic shooting on fish. Ron Washbrook/Dreamstime.com

Figure 3.91: Fishing is a mainstay of life in Røst, one of the remote Lofoten Islands north of the Arctic Circle, near the test area chosen for the NPD survey. Martin Landrø

3.12.5  The Fish Rock Experiment

Although behavioural studies of fish suggest that there might be some changes in behaviour associated with seismic surveying, a study by Wardle et al., 2001, found results to the contrary. Bangs did not chase fish away, but they did cause an involuntary sudden bending of the body, or C-starts (see box overleaf). The investigators used a video system to examine the behaviour of fish on an inshore rocky reef, 'Fish Rock' in Firemore Bay, Loch Ewe, on the west coast of Scotland, in response to the shooting of a stationary triple G. airgun (three synchronised airguns, each gun 150 in³ (2.5 l) and 2,000 psi). Fish inhabiting this reef include juvenile saithe that leave for the open sea when they are about three years old, adult pollock and juvenile cod, with some flatfish, wrasse and gobies. The water depth is 10–20m.

The G. guns represent a type of gun now commonly used by survey companies in arrays and clusters for seismic survey work. The guns were fired once a minute for eight periods on four days at different positions. The peak–peak sound level was 210 dB re 1 μPa @ 16m from the source and 195 dB @ 109m from the source. We note that a 210 dB equivalent pressure would be received at about 100m below a full-scale seismic airgun array generating about 250 dB re 1 µPa @ 1m.

Firing the G. guns had little effect on the day-to-day behaviour of the resident fish, which were not sufficiently irritated by the shooting to move away from the reef. However, reef fish watched by the TV camera showed involuntary reactions in the form of C-starts at each explosion of the guns at all ranges tested (see Figure 3.94). When the explosion was not visible to the fish, the C-start reaction was cut short and the fish continued with what they were doing before the stimulus. In one experiment, when the guns were suspended mid-water (5m depth) and just outside visible range at 16m, the fish receiving a 6 ms peak-to-peak, 206 dB pressure swing exhibited a C-start and then continued to swim towards the gun position, their intended swimming track apparently unaltered.

The long-term day-to-night movements of two tagged pollock were observed. One of them showed little variation at the onset of and during gun firings, but was never closer than 35m from the guns. The other pollock showed detailed reactions to the gun only when it was close to the fish. At one onset of firing, the pollock was about 10m from the gun location. After the first firing the fish moved rapidly away from the gun by approximately 30m.

It is noted that the fish involved in the Fish Rock experiment are mainly inshore and reef species, closely associated with a home territory and not easily moved. In contrast, other open-sea experiments have found indications of large-scale influences resulting in apparent movements of commercial fish species, for example, making them more or less accessible to fisheries.

Figure 3.92: (a) Time amplitude signature (strength in kPa-m versus time in ms) of the triple G. guns recorded at 16m distance. (b) Spectrum of the sound signature (amplitude in dB re 1 µPa-m/Hz versus frequency in Hz). Adapted from Wardle et al (2001)

The C-Start Response

When a fish receives a strong sound stimulus, or is attacked, an alarm reaction or an escape reaction is triggered. The reaction is known among scientists as the 'C-start' response, and it is perhaps one of the best-studied fish behaviour patterns. The 'C-start' takes its name from the shape the fish makes as it rapidly starts its escape from danger. The drawing shows that a threatened fish will perform two distinct movements to change direction and speed away from the threat. First, it will curve its body into the shape of a 'C', away from the source of sound. Second, it will then straighten its body from the curved position. The straightening motion allows the fish to push water off the full broadside of its body to quickly swim away from its attacker. Field experiments have demonstrated that sound energy transmitted from airguns initiates this type of response in various fish species. In particular, intense infrasound results in escape reactions.

Figure 3.93: The C-Start response, in four stages.

Figure 3.94: Images from video tape of three 30–40 cm saithe swimming towards the airgun at 16m range, about 4m from the sea bed and 10m from the surface. When the gun fires they show the typical C-start, veer off course and then continue swimming in the direction of the gun. All three saithe show the reaction in the same TV frame (frame 2). Note that the sound pulse, lasting 6 ms, travels 30m during one TV frame of 20 ms, and the visual range is about 6m. The first three images are 20 ms apart; the fourth frame is 5s later. Wardle et al (2001)

3.12.6  Startle Threshold

Fish react to man-made sound in various ways. The weakest form of response is a minor change in swimming activity, where the fish change their direction or increase their speed. The most significant response to sound is an escape reaction, where the fish initially show the C-start response. Few controlled studies have been made of startle response thresholds in different fish species and auditory groups due to sound pulses of varying frequencies. It has been reported, however, that for hearing generalists the C-start response is triggered at a far-field sound pressure of 174 dB re 1 μPa @ 10 Hz, and 154 dB @ 100 Hz. These numbers indicate that the threshold for triggering rapid escape responses is significantly above the absolute hearing threshold. Further, Karlsen (2010) gives startle behaviour responses for codfish starting at around 160–175 dB for a frequency of about 100 Hz. His observation indicates that their startle threshold values are around 80 dB above known auditory thresholds.

To precisely relate any startle response thresholds of fish to marine seismic activity is a difficult task, since seismic sound propagation in the sea is a rather complex subject. Often, one oversimplifies the problem and assumes that sound attenuates with spherical spreading, 20 log(R), or cylindrical spreading, 10 log(R), where R is the distance from the seismic source. Spherical spreading applies to loss in deep oceans, and cylindrical spreading to an ideal waveguide with perfectly reflecting boundaries at the sea surface and the water bottom. Such conditions are often encountered in the oceans for sound that strikes the bottom at angles greater than a critical angle. It follows that spreading loss in a waveguide such as the sea, with constant speed of sound, follows a spherical spreading law at short distance and cylindrical spreading at longer distance. A combination of the two spreading laws gives, for distances R greater than the ocean depth Z in metres, the asymptotic loss behaviour (see Section 3.5.3):

Loss (dB) = 20 log(Z) + 10 log(R/Z)

Figure 3.95: Auditory and startle thresholds for codfish, which are hearing generalists with medium hearing ability. The audiogram (black curve) gives the faintest sounds that can be heard at each frequency. The startle response level (red curve) is assumed to be around 80 dB above the known hearing threshold; the red curve is displayed as a smoothed version of the black curve, shifted up by around 80 dB. Fish species react very differently to sound. Therefore, any generalisation about the effects of sound on fish should be made with care. The reactions of fish to anthropogenic sound are expected to depend on the sound spectrum and level, as well as the context (e.g. location, temperature, physiological state, age, body size, etc.). Lasse Amundsen, modified from Karlsen (2010)
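To illustrate how such a spreading law can be combined with a startle threshold, the sketch below estimates the range at which the received level falls below a given threshold. All numbers in it (a 230 dB re 1 µPa @ 1m source level, a 250m water depth and a 165 dB threshold) are hypothetical placeholders chosen for illustration, not values from the studies cited above.

```python
import math

def spreading_loss_db(r: float, z: float) -> float:
    """Asymptotic transmission loss: spherical spreading out to the
    water depth z, cylindrical beyond it (r and z in metres)."""
    if r <= z:
        return 20.0 * math.log10(r)                        # spherical
    return 20.0 * math.log10(z) + 10.0 * math.log10(r / z)  # cylindrical

def received_level(source_db_at_1m: float, r: float, z: float) -> float:
    return source_db_at_1m - spreading_loss_db(r, z)

# Hypothetical illustration values (assumptions, not survey data):
SL, Z, STARTLE = 230.0, 250.0, 165.0

r = 1.0
while received_level(SL, r, Z) > STARTLE and r < 1e6:
    r *= 1.1  # step the range outwards until the level drops below threshold
print(f"Received level falls below {STARTLE} dB at roughly {r / 1000:.1f} km")
```

With these placeholder numbers the level drops below the threshold at roughly 12–13 km; changing any of the inputs moves that radius considerably, which is precisely why the following caveats matter.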

However, it is well known that the bathymetry and the composition of the ocean bottom, whether soft or hard, are important for long-range propagation. In addition, sound propagation changes with the oceanographic conditions, and thereby with the season. Therefore, if one wants to scientifically consider sound propagation under specified ocean conditions, one has two options: to measure or to model seismic sound propagation. Obviously, it would be costly to measure sound propagation from seismic activity in the water column everywhere seismic is acquired. The realistic alternative is to develop mathematical-acoustic simulation models which describe how sound propagates in the sea at long distances from the seismic source. Inputs to such acoustic models are source information and available geological and oceanographic information.

3.12.7  Simulation Models

Combined with knowledge of fish hearing and their startle thresholds, simulation models can be used to estimate the distances at which various fish species are affected by seismic activity. The ultimate goal is to develop an acoustic-biological model to use in the design and planning of seismic surveys, such that possible disturbance to fishing interests is minimised. Maybe, in the future, just as acousticians compute noise maps around airports due to airplane take-off, underwater acousticians will be able to compute sound propagation maps in the sea due to seismic shooting.

In summary, seismic surveys may introduce a behavioural change in fish in the vicinity of the seismic source. The radius of the affected zone will depend on many variables, like the local physical conditions of the sea, the food supply for the fish and the behavioural patterns of the fish. Fish with natural habituation will be more steadfast than shoals of fish migrating through an area. Therefore it may be difficult to accurately determine the exact impact of seismic on the behaviour of fish. However, as long as their prey does not vanish, the steadfast fish will return.

3.13  Effect of Seismic on Crabs

We have discussed the effect of seismic shooting on mammals and fish. What about animals without ears, like the crab? In this section, we discuss how the crab's hearing system works and report from a Canadian research project investigating the effects of seismic shooting on snow crabs.

You cannot teach a crab to walk straight.
Aristophanes, 446–386 BC, the 'father of old comedy'

3.13.1  Do Crabs Hear?

The crab is one of approximately 50,000 species described as crustaceans, ranging from tiny creatures 0.1 mm long up to the Japanese spider crab, with a length of 3.8m. There are fossils which have been dated back to the Cambrian age, so this group of animals has an impressive track record. Crustaceans are also invertebrates (animals without a backbone), which means they have no bones or cartilage – which is what forms ears as we know them. Therefore, all crustaceans lack ears similar to those of human beings. Instead, crabs are equipped with tiny microscopic hairs covering their shells. These hairs detect changes in water pressure and transform these changes into a signal that is sent to the crab's nervous system.

The knowledge related to hearing and sound communication in invertebrates is limited. Some invertebrates produce sound, which could mean that they also have hearing capabilities. In 1971 Kenneth Horch made a study of the hearing of the ghost crab (Ocypode), and measured hearing curves between 0.4 and 3 kHz. He found that ghost crabs in air proved to be sensitive not only to tones presented by a loudspeaker but also to talking, whistling, and playbacks of field recordings of crab sounds. Furthermore, he found that the legs were the most sensitive part of the animal to sound, with the fourth pair of walking legs generally the most sensitive. The experiment showed similar responses for vibration and sound, with the best sensitivity between 1 and 3 kHz.

Figure 3.96: The Atlantic ghost crab (Ocypode quadrata) is a very common species on the beaches of north-east Florida, where it lives in burrows well above the high tide line. Although the crab itself is seldom seen during daylight hours, the round-shaped entrances to its burrows and the tracks it leaves in the sand are quite conspicuous. © 2009 Hans Hillewaert


Since the responses for vibration and sound were similar, it might indicate that the ghost crab cannot really distinguish between them. However, several later studies, coupled with the fact that marine invertebrates do not have swim bladders, clearly suggest that the crab is more sensitive to vibration or particle motion. In his experiment, Horch found that the walking legs were essential for hearing sensitivity, in contrast to the claws. No change in hearing ability was observed until more than half of the legs were removed. As more legs were removed from this point, he found a gradual weakening of the response, until zero when all legs were removed. Horch also reports in his paper from 1971 that the painted ghost crab (Ocypode gaudichaudii) reacts to sudden sounds such as the calls of shore birds.

Figure 3.97: Hearing curves for two ocypodes (threshold in dB SPL versus frequency in kHz). Thresholds determined electrophysiologically, with recordings made from electrodes implanted in the brains of animals suspended in air. Tones presented by loudspeaker (S) or to individual walking legs with a small earphone (E). dB SPL: sound pressure relative to 0.0002 microbar.

3.13.2  Mating Dances Use Sound


In several species of American fiddler crabs, males use both visual and acoustic signals to find a mate. First, they perform their mating dance by waving their large claw in specific patterns. Then, at night, the male produces sounds from just inside his burrow to attract females. These sounds begin at a low rate, and then steadily increase in frequency. In European species of fiddler crabs, the mating dance is similar, although the males produce two different sounds to attract a female. The first is called a 'short drumroll' and is made when the male is unable to be seen by the female for a short period during his claw-waving dance. The second sound is a 'long drumroll' and is used under different circumstances.

This acoustic communication not only reaches female crabs, but is also heard by other male crabs. When nearby males hear the mating calls of other males, they increase their level of dancing and mating calls.

It is assumed that crabs orient themselves according to smells in the water (A. K. Woll, 2006), with visual orientation probably being of less importance. However, can underwater sound give crabs an orientation cue to find their way from the open ocean to the coast? Jeffs et al. (2003) used artificial underwater sound sources to study if the larval and post-larval stages of coastal crabs were attracted to coastal reef sound. The results demonstrated that in their pelagic stages, crabs respond to underwater sounds and that they may use these sounds to orient themselves towards the coast. The orientation behaviour was modulated by lunar phase, being evident only during first- and last-quarter moon phases, at the time of neap tides. Active orientation during neap tides may take advantage of these incoming night-time tides for predator avoidance, or may permit more effective directed swimming activity than is possible during new and full moon spring tides.

Figure 3.98: Photo of sensory hairs in the snow crab statocyst. The crab is equipped with at least three different hair types: one detects vibration or direct contact, another is sensitive to chemicals, and the third is designed to detect pressure changes in the water. These hairs are similar to hydrophones used to record seismic signals. Experiments show that crabs do not respond to sound signals like music; however, they react instantly if you jump close to them. Christian et al.

3.13.3  Effect of Sound on Crabs

There are very few studies where the focus has been to look at the effect of sound on crabs. Some comprehensive research was undertaken in Canada by Christian et al. in 2002, in which they studied snow crab behaviour using airgun sources of 40 and 200 cubic inches. The purpose of the test was to examine a number of health, behavioural, and reproductive variables before, during and after seismic shooting. Snow crabs reacted slightly to sound in the laboratory when sharp noises were made near them.


However, in the field the video camera showed that crabs on the sea bottom gave no visible reactions to a 200 cubic inch airgun array being fired 50m above them. Effects on eggs were also examined in this study, and the researchers found that exposure to high levels of sound (221 dB) may retard the development of eggs; however, it is stated that this result needs further investigation.

The overall conclusion from the Canadian study is that no obvious effects were observed on crab behaviour, on the health of adult crabs, or on experimental commercial catches. Despite this, it should be noted that this study was not conducted during normal seismic acquisition, and the total number of seismic signals to which these crabs were exposed was therefore less than would occur during a normal seismic survey. However, it should also be noted that the airgun exposures during each study trial were characterised by higher energy levels than those to which crabs would be subjected during normal seismic activities. Snow crabs tend to live naturally in relatively deep water, so considerable signal attenuation has occurred by the time the sound reaches the crab on the seabed.

In conclusion, therefore, we can say that there is limited knowledge of the effects of seismic acquisition on crabs. Even knowledge about the hearing capabilities of various types of crabs is limited. It is commonly agreed that crabs respond to acoustic sound, and, for instance, the ghost crab has a maximum hearing sensitivity around 1–3 kHz. It is therefore right to assume that crabs notice seismic activity. Initial experiments performed in Canada showed practically no behavioural or other impacts on adult crabs.

Figure 3.99: Scanning microscope photo of crab hearing hairs. The typical length of these hairs is 300 micrometres. These hairs are similar to seismic streamer cables, although the dimensions are somewhat different – 300 micrometres versus 6 km. Another similarity to seismic acquisition is the use of several streamers: while seismic contractors can tow up to 20 streamers, the snow crab is equipped with even more receptors! Christian et al., 2003

Figure 3.100: Hydrophone signal (pressure in Pa versus time in seconds), measured 50m below the 200 cubic inch airgun array used in the Canadian study on seismic and crabs (December 1, 2002, Test 1, low frequency). Christian, 2003


3.14  A New Airgun: eSource

Guest contributors: David Gerez, Halvor S. Groenaas and Ola Pramm Larsen, WesternGeco; Matthew Padula, Teledyne Bolt.

WesternGeco and Teledyne Bolt have jointly developed a new type of airgun which reduces the potential environmental impact of marine seismic surveys.

I have been impressed with the urgency of doing. Knowing is not enough; we must apply. Being willing is not enough; we must do.
Leonardo da Vinci (1452–1519)

The eSource* airgun was designed to attenuate high-frequency acoustic noise while preserving the acoustic signal at the frequencies required for seismic imaging. Almost all of the high-frequency energy emitted by an individual airgun is produced by the rising flank of the acoustic pulse. The eSource airgun precisely controls the slope of this event, which is determined by the rate at which the airgun's ports are opened to allow pressurised air to stream from the fire chamber into the surrounding water. The port area ('1' on Figure 3.101) is a function of the static geometry of the ports and of the dynamic position of the mechanical shuttle that exposes a portion of the ports to the pressurised air in the fire chamber. The shuttle's acceleration is determined by the opposing forces produced by the air pressures acting on both sides of the shuttle in the control chamber ('2' on Figure 3.101). These pressures are in turn a function of the control chamber's geometry and the shuttle's position.

Figure 3.101: Simulated velocity field of the eSource airgun, with air streaming out of the ports (1) into the surrounding water and regulating the movement of the shuttle in the control chamber (2).

3.14.1  Thorough Testing

In optimising the design, it was necessary to apply advanced computational fluid dynamics (CFD) models to understand the complex dynamic interactions across multiple domains, including fluid flows within the airgun, mechanical structures and forces, and the stochastic oscillation of the released air bubble. Modelling this application presents a number of challenges, such as extreme pressure gradients, supersonic velocities, multiphase flows, moving meshes, and micrometre-scale flow paths. It was also critical to ensure the reliability of the airgun in the field over hundreds of thousands of severe firing cycles. This required advanced structural finite element analysis (FEA) simulation coupled with the forces predicted by the CFD model, and thorough physical testing.

Figure 3.102 shows the airgun's firing event, as predicted by the CFD model and observed in a physical experiment. Because the airgun has four ports, it emits four volumes of air that nearly – but not fully – coalesce and are jointly referred to as 'the bubble'. Their distinct and irregular profiles do not lead to a directive seismic signal, due to the long wavelengths, but do influence the dynamics of the bubble. At the time of the acoustic main peak (a), the bubble is still expanding but starting to decelerate. The first acoustic trough is produced when the bubble reaches its maximum size and starts to contract (b). The bubble continues to contract until it reaches the second peak (c and d). As the bubble contracts, the surrounding water picks up momentum and the four masses collide in the centre. They deflect each other at an angle, producing the twisting and toroidal profile of the next cycle.

3.14.2  eSource Bandwidth Settings

The eSource airgun can be configured in the field to any of three bandwidth settings (denoted A, B, and C). These emit main peaks with progressively more gentle slopes (Figure 3.103) compared with a conventional airgun. The peaks of the bubble oscillation, on the other hand, do not change as significantly (Figure 3.104). Because the main peak produces most of the energy above the seismic band and the bubble produces most of the energy at the lower end of the seismic band, noise is reduced while preserving the seismic signal. The output spectrum (Figure 3.105) can be matched to the seismic imaging bandwidths of specific surveys.
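The principle that a gentler rising flank carries less high-frequency energy is easy to verify with a toy calculation. The sketch below is our own illustration, not a model of the actual eSource signature: it builds two simple pulses that differ only in rise time and compares the fraction of spectral amplitude above 500 Hz.

```python
import numpy as np

dt = 1e-4                    # 0.1 ms sampling interval (assumed)
t = np.arange(0.0, 0.1, dt)  # 100 ms window

def pulse(rise_time_s: float) -> np.ndarray:
    """Toy one-sided pulse: exponential rise with the given rise time,
    followed by a common slow exponential decay."""
    rise = 1.0 - np.exp(-t / rise_time_s)
    decay = np.exp(-t / 0.01)
    return rise * decay

for rise_ms in (0.2, 1.0):   # sharp versus gentle rising flank
    p = pulse(rise_ms * 1e-3)
    spec = np.abs(np.fft.rfft(p))
    freqs = np.fft.rfftfreq(len(p), dt)
    hf = spec[freqs > 500.0].sum() / spec.sum()  # share above 500 Hz
    print(f"rise {rise_ms:.1f} ms: {100 * hf:.2f}% of spectral amplitude above 500 Hz")
```

The pulse with the sharper flank puts a noticeably larger share of its spectrum above the seismic band, which is the energy the eSource settings are designed to suppress.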


Figure 3.102: The eSource airgun CFD simulation (left) and image from physical experiment (right), shown with corresponding near-field acoustic pressure (inset) at four instants: main peak (a), first trough and maximum bubble extent (b), zero crossing (c), and second bubble peak (d).

Figure 3.103: Signature of a single 150 cu. in. airgun firing at 10m depth with 2,000 psi air pressure: conventional (black), eSource A (red), eSource B (green), and eSource C (blue). 10 shots superposed, main peak only.

Figure 3.104: Signature of the eSource airgun with time scale extended to free-bubble oscillations.

Figure 3.105: Source spectral energy over 1,000 ms.

*eSource is a trademark of Teledyne Bolt.



Chapter 4

Reservoir Monitoring Technology

Repeat your mistakes.
Rodney Calvert (1944–2007)

The Grane field demonstrates the permanent reservoir monitoring (PRM) setup. The water depth is 130m, and the receiver cables are trenched into the seabed with a separation distance of 300m between them. The in-line distance between receivers is 50m. A vessel towing two sources is used for PRM at both the Snorre and the Grane fields. Statoil


4.1  An Introduction to 4D Seismic

4D, or time-lapse, seismic is a phrase used to describe the process of using two or more seismic surveys acquired over the same area or field to find changes that have occurred over calendar time. For hydrocarbon reservoirs, these changes might represent production-related changes in, for instance, pore pressure, temperature or fluid saturation. 4D seismic might also be used to monitor seasonal changes (near-surface effects), earthquakes (before and after) or, for instance, underground storage of CO2.

As a common reservoir monitoring tool today, 4D seismic has gone through a tremendous development over the past three decades. In the beginning, monitoring was done by repeating 2D seismic lines, and then by repeating large 3D surveys. Today, several fields can be monitored by trenched receiver cables at the seafloor, enabling frequent and close to continuous reservoir monitoring. The benefit of this development is two-fold: increased hydrocarbon production by infill drilling, and early detection of unwanted and unforeseen reservoir developments, such as gas breakthrough and sudden pressure increases.

The first commercial 4D seismic surveys were acquired in America in the early 1980s for heavy oil fields. Heavy oil is viscous, and steam injection was used as a way to heat the oil and thereby reduce the viscosity. In this way the oil migration towards the producing wells was improved, and oil production increased. During the heating process, the P-wave velocity of the oil decreased, and this change was clearly observed on repeated seismic data acquired before and after the heating process.

The major breakthrough for commercial 4D seismic acquisition in the North Sea was the Gullfaks 4D study launched by Statoil in 1995. By comparing seismic data before and after the start of production, Statoil identified several (about 20) targets that were undrained. Statoil estimated the added value of this study to be of the order of US$ 1 billion.

This introduction is partially based on Chapter 19 in Petroleum Geoscience (edited by K. Bjørlykke) and on unpublished material we have gathered from work at Statoil and at NTNU over two decades.

4.1.1  An Early North Sea 4D – Still Alive!

One of the first 4D studies performed in the North Sea, however, had a more dramatic background: an underground blowout that Saga Petroleum encountered when drilling exploration well 2/4-14, which occurred in January 1989 and lasted for almost a year. During this period, Saga Petroleum acquired several 2D seismic lines close to the well. Figure 4.2 shows a vertical cross-section from the 3D seismic data that was acquired in 1991, two years after the blowout. We can clearly see vertical chimneys along the two vertical well bores. The chimney in well 14 is more pronounced and stretches further into the overburden, compared to well 15. When we use time-lapse seismic data, as shown in Figure 4.3, we clearly see huge and dramatic changes in the overburden. At 520 ms (two-way traveltime) we observe a new event that is hardly visible on the 1988 data. This is interpreted as gas leaking vertically outside well 14, from a leakage point through the casing at approximately 900m depth, into a thin and almost horizontal sand layer at approximately 490m depth.


Figure 4.1: Artistic view of the rotated Jurassic fault blocks constituting the Gullfaks reservoir (left). Solid black lines represent wells; light brown-yellow layers represent sandstones, and blue-grey layers represent shales. The right-hand figure illustrates the connection between the top reservoir sandstone layer (dashed red line) and the corresponding seismic response before and after ten years of production. The original oil-water contact is shown as a dashed green line on the time-lapse seismic data, clearly demonstrating the production effect on the data. Also notice the decreased traveltime for the seismic event below the oil-water contact. This reduction in traveltime is caused by the velocity increase when water replaces oil in the lower part of the reservoir rock. Statoil

Figure 4.2: Vertical cross-section of a seismic line acquired in 1991 (after the blowout), intersecting the blowout well (2/4-14) and the relief well (2/4-15). Notice the gas accumulation along the vertical well paths of both wells, and the horizontal accumulation of gas (leaking from the well and marked by the red arrow).

Figure 4.3 shows a clear example of the most common interpretation technique in 4D seismic analysis: comparing amplitude levels before and after production or other changes. In this case, the interpretation is simple and obvious. In other cases it might be far more complex, especially if several effects occur at the same time and at the same location, as, for instance, pressure and saturation effects.

However, more details can be revealed from Figure 4.3. The pulldown of approximately 15 ms at the well position (marked by the red arrow) is probably caused by the presence of gas above this sand layer. In 2009, 20 years after the blowout, we observe that this pulldown is less pronounced, maybe only 10 ms, suggesting that some of the gas in the overburden has moved away, either horizontally or vertically, or both. A direct difference between the two datasets from 1988 and 1990 is shown in Figure 4.4. We can clearly see the gas anomaly at 520 ms, and maybe there are some shallower anomalies caused by gas accumulation close to the well, especially to the right of the well at 200 ms.

Another typical 4D analysis technique is to estimate time shifts observed at a seismic interface below the area of interest. An example of this is shown in Figure 4.5, where the time shift curve from 1990 shows a clear peak close to the projection of the 2/4-14 well. This 2D line does not intersect both wells, only well 15. In 2009 we observe a new peak in the time shift curve occurring close to well 15. This is interpreted as gas migrating horizontally in a sand layer, and then vertically outside the well, through the sediments that have been weakened due to drilling. It is interesting that this process is fairly slow, an important observation which is relevant for underground storage of CO2.
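The size of such a pulldown follows from simple two-way traveltime arithmetic. The sketch below is a back-of-the-envelope illustration with assumed thickness and velocities; the numbers are not calibrated to the 2/4-14 case, but they show that a few tens of metres of gas-charged sand readily produce a pulldown of the order of 15 ms.

```python
# Back-of-the-envelope traveltime pulldown from a gas-charged layer.
# Thickness and velocities below are illustrative assumptions only.

def pulldown_ms(thickness_m: float, v_before: float, v_after: float) -> float:
    """Two-way traveltime increase (in ms) when the layer velocity
    drops from v_before to v_after (both in m/s)."""
    return 2.0 * thickness_m * (1.0 / v_after - 1.0 / v_before) * 1000.0

# A 35m interval slowing from 2,000 m/s (brine sand) to 1,400 m/s (gas sand):
print(f"pulldown: {pulldown_ms(35.0, 2000.0, 1400.0):.1f} ms")  # about 15 ms
```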

Figure 4.3: Time-lapse 2D seismic data from 1988 (prior to blowout), 1990 (a year later) and 2009 (20 years later). Notice the pulldown on the 1990 data (marked by the red arrow). This pulldown is caused by the presence of gas above this layer, and is less pronounced in 2009, indicating that some of the gas above the sand has disappeared.

Figure 4.4: 4D difference between 1990 and 1988. The gas accumulation at 520 ms is clearly visible. However, shallower events are harder to interpret, since the 4D repeatability decreases at shallower depths.

Figure 4.5: Estimated timeshifts for a reflector below the sand layer shown as a bright event in Figure 4.4. Notice that timeshifts increase from 1990 to 2009, especially close to the relief well (15). Inset: Amplitude anomaly at the top of the gas-charged sand layer (bird's-eye view), showing the extent of the gas accumulation, the position of the two wells and the 2D seismic line (dashed blue line).

4.1.2  Rock Physics – the Missing Link in 4D Seismic

When subsurface rocks are monitored, the main objective is to detect changes that occur over time. Typical processes we want to monitor might be hydrocarbon production, storage of CO2 or other fluids, climate-related changes, earthquakes and geohazards. Typical parameters that might change within a rock are fluid saturation, pore pressure, temperature, porosity (if the rock is stretched or compacted) and chemical processes that might occur in the rock. Rock physics is a discipline in which such parameters are coupled either directly or indirectly to the seismic parameters, typically P- and S-wave velocities and densities. In addition to these three major seismic parameters we might add those describing the anisotropy and attenuation properties of a rock.

Both theoretical rock physics models and laboratory experiments are used as important input to time-lapse seismic analysis. A standard way of relating, for instance, P-wave seismic velocity to changes in fluid saturation is to use the Gassmann model, as shown in Figure 1.86. Several models have been proposed to estimate the effective modulus of a rock based on its texture. Many of these models assume spherical or elliptical grains that are equal in size and in mutual contact. The Hertz-Mindlin model (Mindlin, 1949) can be used to describe the properties of pre-compacted granular rocks.


The effective bulk modulus of a dry, random packing of identical spheres is given by:

$$K_{\mathrm{eff}} = \left[\frac{C_p^2\,(1-\varphi)^2\,G^2\,P}{18\,\pi^2\,(1-\sigma)^2}\right]^{1/3} \qquad (1)$$

where $C_p$ is the number of contact points per grain, $\varphi$ is porosity, $G$ is the shear modulus of the solid grains, $\sigma$ is the Poisson ratio of the solid grains and $P$ is the effective pressure (that is, $P = P_{\mathrm{eff}}$). The effective pressure is (to first order) equal to the net pressure, i.e. the overburden pressure minus the pore pressure. The shear modulus is given as:

$$G_{\mathrm{eff}} = \frac{5-4\sigma}{5(2-\sigma)}\left[\frac{3\,C_p^2\,(1-\varphi)^2\,G^2\,P}{2\,\pi^2\,(1-\sigma)^2}\right]^{1/3} \qquad (2)$$

The relations between the seismic P- and S-wave velocities and the bulk and shear moduli are given by:

$$V_P = \sqrt{\frac{K_{\mathrm{eff}} + \tfrac{4}{3}G_{\mathrm{eff}}}{\rho}} \qquad (3)$$

$$V_S = \sqrt{\frac{G_{\mathrm{eff}}}{\rho}} \qquad (4)$$

where $V_P$ and $V_S$ are the P- and S-wave velocities respectively, and $\rho$ is the rock density. Inserting equations (1) and (2) into equations (3) and (4) and computing the $V_P/V_S$ ratio yields (assuming $\sigma = 0$):

$$\frac{V_P}{V_S} = \sqrt{2} \qquad (5)$$
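As a quick sanity check of equations (1)–(5), here is a minimal implementation. The input values (a quartz-like grain modulus, 35% porosity and nine contact points per grain) are illustrative assumptions rather than parameters from the text; the √2 ratio of equation (5), however, falls out independently of them.

```python
import math

def hertz_mindlin(phi, G, sigma, P, Cp=9.0):
    """Effective dry-rock moduli (Pa) from equations (1) and (2).
    phi: porosity, G: grain shear modulus (Pa), sigma: grain Poisson
    ratio, P: effective pressure (Pa), Cp: contacts per grain (assumed)."""
    common = (Cp**2 * (1.0 - phi)**2 * G**2 * P) / (math.pi**2 * (1.0 - sigma)**2)
    K_eff = (common / 18.0) ** (1.0 / 3.0)
    G_eff = (5.0 - 4.0 * sigma) / (5.0 * (2.0 - sigma)) * (1.5 * common) ** (1.0 / 3.0)
    return K_eff, G_eff

def velocities(K_eff, G_eff, rho):
    """Equations (3) and (4)."""
    vp = math.sqrt((K_eff + 4.0 * G_eff / 3.0) / rho)
    vs = math.sqrt(G_eff / rho)
    return vp, vs

# Quartz-like grains with sigma = 0: Vp/Vs should come out as sqrt(2).
K, Gm = hertz_mindlin(phi=0.35, G=45e9, sigma=0.0, P=6e6)
vp, vs = velocities(K, Gm, rho=2000.0)
print(f"Vp/Vs = {vp / vs:.3f}  (sqrt(2) = {math.sqrt(2):.3f})")
# Pressure sensitivity of equation (6): Vp scales as P^(1/6).
print(f"Vp(12 MPa)/Vp(6 MPa) = {(12.0 / 6.0) ** (1.0 / 6.0):.3f}")
```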

This means that, according to the simplest granular model (Hertz-Mindlin), the $V_P/V_S$ ratio should be constant as a function of confining pressure if we assume the rock is dry. The Hertz-Mindlin model assumes that the sand grains are spherical and that there is a certain area of grain-to-grain contact. A major shortcoming of the model is that the P- and S-wave velocities will have the same behaviour with respect to pressure changes, so that the $V_P/V_S$ ratio is constant, as shown in equation (5). For saturated rocks, this ratio goes to infinity for zero effective pressure, and is not constant. This is because the P-wave velocity approaches the fluid velocity for zero effective stress, and not zero as in the Hertz-Mindlin model (dry rock assumption). The shear wave velocity, however, approaches zero. If we assume that the in situ (base survey) effective pressure is $P_0$, we see from equations (1) and (3) that the relative P-wave velocity versus effective pressure is given as:

$$\frac{V_P}{V_{P0}} = \left(\frac{P}{P_0}\right)^{1/6} \qquad (6)$$

Figure 4.6 shows this relation for an in situ effective pressure of 6 MPa. When such curves are compared to ultrasonic core measurements, the slope of the measured curve is generally smaller than this simple theoretical curve. The cause for this might be multifold. Firstly, the Hertz-Mindlin model assumes the sediment grains are perfect, identical spheres, which, of course, is never found in real samples. Secondly, the ultrasonic measurements might suffer from scaling issues, core damage and so on. Thirdly, cementation effects are not included in the Hertz-Mindlin model. It is therefore important to note that there are major uncertainties regarding the actual dependency between seismic velocity and pore pressure changes.

Figure 4.6: Typical changes in P-wave velocity versus effective pressure using the Hertz-Mindlin model. In this case the in situ effective pressure (prior to production) is 6 MPa, and we see that a decrease in effective pressure leads to a decrease in P-wave velocity. The black curve represents the Hertz-Mindlin model (exponent = 1/6); the red curve is a modified version of the Hertz-Mindlin model (exponent = 1/10) that better fits the ultrasonic core measurements.

4.1.3  The Noble Art of Analysing Time-Lapse Seismic Data

The analysis of 4D seismic data can be divided into two main categories, one based on the detection of amplitude changes, the other on detecting travel-time changes, as shown in Figure 4.7. Experience has shown that the amplitude method is most robust, and therefore this has been the most frequently employed method. However, as the accuracy of 4D seismic has improved, the use of accurate measurements of small timeshifts is increasingly the method of choice. There are several examples where the timeshift between two seismic traces can be determined with an accuracy of a fraction of a millisecond. A very attractive feature of 4D timeshift measurement is that it is proportional to the change in pay thickness (the thickness of those intervals containing oil or gas), and this method provides a direct quantitative result. The two techniques are complementary in that the amplitude measurement is a local feature (measuring changes close to an interface), while the timeshift method measures average changes over a layer, or even a sequence of layers.


three black arrows), which are interpreted as water following the highly permeable Rannoch Formation. Close to well C-33, we notice that this water causes brightening between top reservoir and the original oil-water contact. The response at the oil-water contact has not changed, indicating untouched oil for the top 20m, followed by water flushing for the next 20m (these numbers are very approximate), followed by an untouched 20m sequence close to the oil-water contact. This behaviour was essentially confirmed by the well that was drilled, although the saturation log actually showed more detail than the 4D seismic interpretation. The difference section at the bottom of Figure 4.8 shows clear anomalies at the oil-water contact level (three blue anomalies). Notice that close to the well there are no significant differences at top reservoir and oil-water contact level, but stronger differences in the middle of the sequence. These examples illustrate that 4D seismic interpretation is an arena where several disciplines meet: geology, geophysics, rock physics, reservoir engineering and petrophysics.

Figure 4.7: Two 4D analysis techniques: notice there are no amplitude changes at top reservoir between 1985 and 1995, but huge amplitude changes at the oil-water contact (OWC). Also note the change in travel time (marked by the yellow arrow) for the seismic event below the oil-water contact.

Figure 4.8: Example of complex 4D interpretation: water is flowing in a thin, highly permeable layer (marked by three black arrows in the middle section). Well C-33 confirmed untouched oil at the top and base (original oil-water contact) of the reservoir, and high water saturation in the middle.

4.1.4  Repeatability – A Key Success Factor?

We all remember from medical X-ray imaging or from visits to the dentist: precision is important, and in particular the need to repeat the images as perfectly as possible, so the dentist can decide whether it is necessary to take action or not. Time-lapse seismic experiments face the same challenge: how can we be sure that the differences we observe are due to production-related changes and not some kind of spurious noise that can lead to false interpretations?

The quality of a 4D seismic data set is dependent on several issues, such as reservoir complexity, the complexity of the overburden and so on. However, the most important issue that we can influence is the repeatability of the seismic data. In simple terms, repeatability is a measure of how accurately we can repeat the seismic measurement – the acquisition repeatability. This is dependent on several factors, including:
• Varying source and receiver positions (x, y and z coordinates);
• Changing weather conditions during acquisition;
• Varying sea water temperature;
• Tidal effects;
• Noise from other vessels or other activity in the area (rig noise);
• Varying source signal;
• Changes in the acquisition system (new vessel, other cables, sources etc.);
• Variation in shot-generated noise (from the previous shot).


The obvious way to fix the first challenge in this list is to trench the receiver cables into the seabed, which is often referred to as PRM (Permanent Reservoir Monitoring). This is costly, but has a high reward at the other end – high quality and very precise 4D seismic data. PRM systems have been installed at four fields offshore Norway and some other fields elsewhere, but it is not a very common practice so far. Since the upfront costs are extensive, PRM needs a large field where 4D seismic is expected to increase hydrocarbon recovery significantly. A common way to quantify seismic repeatability is to use the normalised RMS (root-mean-square)-level:

NRMS = 2 RMS(monitor − base) / [RMS(monitor) + RMS(base)]   (7)

where the RMS levels of the monitor and base traces are measured within a given time window (on a sample-by-sample basis). The factor 2 in equation (7) arises from taking the average of the monitor and base RMS values. Normally, NRMS is measured in a time window where no production changes are expected. Figure 4.9 shows two seismic traces from a VSP (Vertical Seismic Profile) experiment where the receiver is fixed (in the well at approximately 2 km depth), and the source coordinates are changed by 5m in the horizontal direction. We notice that the normalised RMS error (NRMS) in this case is low, only 8%.

In 1995 Norsk Hydro (now Statoil) acquired a 3D VSP data set over the Oseberg field in the North Sea. This data set consists of 10,000 shots acquired in a circular shooting pattern and recorded by a 5-level receiver string in the well. By comparing shot pairs with different source positions (and the same receiver), it is possible to estimate the NRMS level as a function of the horizontal distance between the shot locations. This is shown in Figure 4.10, where approximately 70,000 shot pairs with varying horizontal mis-positioning are shown. The main message from this figure is clear: it is important to repeat the horizontal positions of both sources and receivers as accurately as possible, as even a misalignment of 20–30m might lead to a significant increase in the NRMS level. Such a plot can serve as a variogram, since it shows the spread for each separation distance. Detailed studies have shown that the NRMS level increases significantly in areas where the geology between the source and receiver is complex (along the straight line between the source and receiver). From Figure 4.10 we can see that the NRMS value for a shot separation distance of 40m might vary between 20 and 80%, and a significant portion of this spread is attributed to variation in geology. This means that comparing NRMS levels between various fields is not straightforward, since the geological setting might be very different. NRMS levels are frequently used, however, since they provide a simple, quantitative measure. It is also important to note that the NRMS level is frequency dependent, so the frequency band used in the data analysis should be given.

Over the last two decades the focus on source and receiver positioning accuracy has led to a significant increase in the repeatability of 4D seismic data. Some of this improvement is also attributed to better processing of time-lapse seismic data. This trend is sketched in Figure 4.11. Today the global average NRMS level is around 20–30%. It is expected that this trend will continue, but not at the same rate as previously. The main reason for this expectation is that the non-repeatable factors, especially rough weather conditions, that need to be attacked to get beyond 20–30% are more difficult to address and will represent a major hurdle on the way to increased repeatability.
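Equation (7) is straightforward to compute. A minimal sketch, assuming the base and monitor traces are already windowed and sample-aligned:

```python
import numpy as np

def nrms(base, monitor):
    """NRMS repeatability (equation 7), returned in per cent."""
    rms = lambda x: np.sqrt(np.mean(x**2))
    return 200.0 * rms(monitor - base) / (rms(monitor) + rms(base))

rng = np.random.default_rng(0)
a = rng.standard_normal(1000)
b = rng.standard_normal(1000)
print(nrms(a, a))   # identical traces: 0%
print(nrms(a, b))   # uncorrelated, equal power: about 141% (sqrt(2)*100%)
```

Two identical traces give 0%, while two uncorrelated traces of equal power give about 141%, which puts the field values quoted below in perspective.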

4.1.5  Geomechanical Changes Caused by Hydrocarbon Production

Geomechanics has traditionally been an important discipline in both exploration for and production of hydrocarbons. However, its importance in time-lapse seismic was not fully realised until

Figure 4.9: Two VSP traces measured at exactly the same position in the well, but with a slight difference in the source location (5m in the horizontal direction). The distance between two timelines is 50 ms. (Landrø, 1999)

Figure 4.10: NRMS as a function of the source separation distance for approximately 70,000 shot pairs from the Oseberg 3D VSP data set. (Landrø, 1999)


the first results from the chalk fields in the southern North Sea (Ekofisk and Valhall) were reported. The seafloor subsidence at Ekofisk has been known for many years, but the practical use of 4D seismic to monitor and better understand these processes was not realised immediately. The physical cause for the severe compaction of the chalk reservoir is two-fold: firstly, the depletion of the field leads to a decrease in the pore pressure, and hence the reservoir rock compacts due to lack of pressure support; and secondly, the chemical reaction between water replacing the oil and the chalk matrix leads to a weakening of the rock framework, and a corresponding compaction. When the reservoir rock compacts, the over- and underburden rocks will be stretched. Typical observations of seafloor subsidence show that the subsidence is less than the measured compaction for the reservoir. Typically, the vertical movement of the seafloor is approximately 20–50% less than the movement of the top reservoir. This is, however, strongly dependent on the reservoir geometry and the stiffness of the rocks over and under the reservoir. Therefore, the most common way to interpret 4D seismic data from a compacting reservoir is to use geomechanical modelling as a complementary tool. A major challenge when the thickness of subsurface layers is changed during production, is to distinguish between thickness changes and velocity changes. One way to resolve this ambiguity is to use geomechanical modelling as a constraining tool. Another way is to combine near and far offset traveltime analysis (Landrø and Stammeijer, 2004) to estimate velocity and thickness changes simultaneously. Hatchell et al. and Røste et al. suggested in 2005 the use of a factor relating the relative velocity change (dv/v) to the relative thickness change (dz/z; displacement):

R = −(dv/v)/(dz/z)   (8)

Hatchell et al. found that this factor varies between 1 and 5, and that it is normally smaller for the reservoir rock than for the overburden rocks. For sandstones and clay it is common to establish empirical relationships between seismic P-wave velocity (v) and porosity (φ). These relations are often of the simple linear form:

v = a + bφ   (9)

where a and b are empirical parameters. If a rock is stretched (or compressed), a corresponding change in porosity will occur. Assuming that the lateral extension of a reservoir is large compared to the thickness, it is reasonable to assume a uniaxial change, as sketched in Figure 4.12. From simple geometrical considerations we see from this figure that the relation between the thickness change and the corresponding porosity change is:

dz/z = dφ/(1 − φ)   (10)

In the isotropic case (assuming that the rock is stretched in all three directions) it is:

dz/z = dφ/[3(1 − φ)]   (11)

Inserting equations (9) and (10) into (8), we obtain an explicit expression for the dilation factor R:

R = 1 − (a + b)/v   (12)

Figure 4.11: Schematic view showing improvement in seismic repeatability (NRMS) versus calendar time.

which is valid for the uniaxial case. Using equation 11 instead of 10 gives a similar equation for the isotropic case.
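As a small numerical illustration of equation (12), assume (purely for illustration, not a calibrated trend) a velocity-porosity relation v = 5,500 − 7,000φ m/s:

```python
def dilation_factor(v, a, b):
    """Dilation factor R for the uniaxial case (equation 12),
    given the linear velocity-porosity relation v = a + b*phi."""
    return 1.0 - (a + b) / v

# Assumed trend: v = 5500 - 7000*phi (m/s), evaluated at v = 3000 m/s
print(dilation_factor(v=3000.0, a=5500.0, b=-7000.0))   # R = 1.5
```

For this assumed rock, a 0.1% stretch maps to a 0.15% velocity decrease, at the low end of Hatchell et al.'s range of 1 to 5.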

Figure 4.12: Cartoon showing the effect of increased porosity due to uniaxial stretching of a rock sample. (The geometry of the sketch gives the relation dφ/(1 − φ) = dx/x used in equation (10).)

4.1.6  Discrimination Between Pressure and Saturation Effects

Although the main focus in most 4D seismic studies is to map fluid flow and detect bypassed oil pockets, the challenge of discriminating between pore pressure changes and fluid saturation changes occurs frequently. From rock physics we know that both effects influence the 4D seismic data, but they are linked to the seismic parameters in different ways. For instance, pressure effects may be modelled using Hertz-Mindlin types of equations (equations 1–4 in Section 4.1.2), while fluid effects are often described by the Gassmann equations. This means that it is possible to discriminate between them, since we have different rock physics relations for the two cases.

Some of the early attempts to perform this discrimination between pressure and saturation were presented by Tura and Lumley (1999) and Landrø (1999). In Landrø's method (2001) the rock physics relations (based on Gassmann and ultrasonic core measurements) are combined with simple AVO (amplitude versus offset) equations to obtain direct



Figure 4.13: Seismic section through the Cook Formation at Gullfaks. The blue solid line indicates the original oil-water contact (marked OWC). Notice the significant amplitude change at top Cook between 1985 and 1996, and that the amplitude change extends beyond the OWC level, a strong indication that the down-flank amplitude change is caused by pore pressure changes.

expressions for saturation changes and pressure changes. Necessary inputs to this algorithm are near and far offset amplitude changes estimated from the base and monitor 3D seismic cubes. This method was tested on a compartment from the Cook Formation at the Gullfaks field. Figure 4.13 shows a seismic profile (west-east) through this compartment. A significant amplitude change is observed for the top Cook interface (red solid line in Figure 4.13), both below and above the oil-water contact. The fact that this amplitude change extends beyond the oil-water contact is a strong indication that it cannot be solely related to fluid saturation changes; a reasonable candidate is therefore pore pressure changes. Indeed, it is confirmed that the pore pressure has increased by 50–60 bars in this segment, meaning that the reservoir pressure is approaching the fracturing pressure. We also observe from this figure that the base reservoir event (blue solid line) has shifted slightly downwards, by 2–3 ms. This slowdown is interpreted as a velocity drop caused by the pore pressure increase in the segment. When the pore pressure increases, the grain coupling (or the surface contact area) becomes somewhat weaker, leading to a decrease in both P- and S-wave velocities.

Figure 4.14 shows an attempt to discriminate between fluid saturation changes and pressure changes for this compartment, using the method described in Landrø, 2001. In 1996, 27% of the estimated recoverable reserves in this segment had been produced, so we know that some fluid saturation changes are expected to be observed on the 4D seismic data. From this figure we observe that most of the estimated fluid saturation changes occur close to the oil-water contact in the western part of the segment. However, some scattered anomalies can be observed beyond the oil-water contact in the northern part of the segment. These anomalies are probably caused by inaccuracies in the algorithm or by the limited repeatability of the time-lapse AVO data. The estimated pressure changes are more consistent with the fault pattern in the region, and it is likely that the pressure increase is confined between faults in almost all directions. Later time-lapse surveys show that the eastern fault in the figure was 'opened' some years later.

Figure 4.14: Estimated fluid saturation changes (left) and pore pressure changes (right) based on 4D AVO analysis for the top Cook interface at Gullfaks. The blue solid line represents the original oil-water contact. Notice that the estimated pressure changes extend beyond this blue line to the west, and terminate at a fault to the east. Yellow indicates significant changes. Figure from Landrø, 2001.

4.1.7  Other Geophysical Monitoring Methods

So far 4D seismic has proved to be the most effective way to monitor a producing reservoir. However, as discussed above, there are several severe limitations associated with time-lapse seismic. One of them is that seismic reflection data is sensitive to acoustic impedance (velocity times density). Although the 4D timeshift can reveal changes in average velocity between two interfaces, an independent measurement of density changes would be a valuable complement to conventional 4D seismic.

The idea of actually measuring the mass change in a reservoir caused by hydrocarbon production has been around for quite some time. The limiting factor for gravimetric reservoir monitoring has been the repeatability (or accuracy) of the gravimeters. Sasagawa et al. (2003) demonstrated


that improved accuracy can be achieved by using three coupled gravimeters placed on the sea floor. This technical success led to a full field programme at the Troll field in the North Sea. Another successful field example is the time-lapse gravity monitoring of an aquifer storage recovery project in Leyden, Colorado (Davis et al., 2005), essentially mapping water influx. Obviously, this technique is best suited for reservoirs where significant mass changes are likely to occur, such as water replacing gas. Shallow reservoirs are better suited than deep ones. The size of the reservoir is a crucial parameter, and a given minimum size is required in order to obtain observable effects. For monitoring volcanic activity, gravimetric measurements might help to distinguish between fluid movements and tectonic activity within an active volcano.

Since the breakthrough of active EM (electromagnetic) surveys (see Chapter 6) at the beginning of this millennium (Ellingsrud et al., 2002), this technique has mainly been used as an exploration tool, in order to discriminate between hydrocarbon-filled and water-filled rocks. Field tests have shown that such data are indeed repeatable, so there should definitely be a potential for using repeated EM surveys to monitor a producing reservoir. So far, frequencies as low as 0.25 Hz are being used (and even lower), which means that the spatial resolution will be limited. However, as a complementary tool to conventional 4D seismic, 4D EM might be very useful. In many 4D projects it is hard to quantify the amount of saturation change taking place within the reservoir, and time-lapse EM studies might be used to constrain such quantitative estimates of the saturation changes. Another feature of the EM technique is that it is not very sensitive to pressure changes, so it may be a good tool for separating saturation and pressure changes. This is in contrast to conventional 4D seismic, since seismic data is sensitive to both pressure and saturation changes.

By using the so-called interferometric synthetic aperture radar principle with orbiting satellites, the distance to a specific location on the earth's surface can be measured with impressive accuracy (see Chapter 1.6). By measuring the phase differences for a signal received from the same location at different calendar times, it is possible to measure the relative changes with even higher accuracy. By exploiting a sequence of satellite images, it is possible to monitor height changes versus time. The satellites used for this purpose orbit the earth at a height of approximately 800 km.

Several examples of monitoring movements of the surface above a producing hydrocarbon reservoir have been reported. For reservoir monitoring purposes there are, of course, limitations on how much detailed information such images can provide for reservoir management. The obvious link to the reservoir is to use geomechanical modelling to tie the movements at the surface to subsurface movements. For seismic purposes, such a geomechanical approach can be used to obtain improved velocity models in a more sophisticated way; if you need to adjust your geomechanical model to obtain correspondence between observed surface subsidence and reservoir compaction, then this can be used to distinguish between overburden rocks with high and low stiffness, for instance, which again can be translated into macro-variations of the overburden velocities. The deeper the reservoir, the weaker the surface imprint of the reservoir changes, and hence this method cannot be used directly to identify small pockets of undrained hydrocarbons. However, it can be used as a complementary tool for time-lapse seismic, since it can provide valuable information on the low frequency spatial signal of reservoir compaction.

Seismic refraction methods are among the oldest methods in seismic exploration. The term refraction is not unique or precise. According to Sheriff (2002) it has two meanings: first, to change direction (following Snell's law), and second, to involve head waves. Here we will use a relaxed interpretation, which means that the term includes head waves, diving waves and reflections close to the critical angle. An attractive application of time-lapse refraction is to detect changes in shallow sediments. One such example is sketched in Figure 4.15, where an underground blowout (the same as shown in Figures 4.2 and 4.3) causes gas to be trapped in a thin sand layer. In this case, the most robust 4D refraction attribute is the time shift. This will be further discussed in Section 4.3.

Figure 4.15: Schematic plot showing how 4D refraction analysis can be used to detect shallow gas leakage.

4.1.8  CO2 Monitoring

Interest in CO2 injection, both for storage and as a tertiary recovery method for increased hydrocarbon production, has grown significantly over the last decade. Statoil has stored approximately 0.9 million tons of CO2 per year since 1997 in the Utsira Formation at the Sleipner field, and several similar projects are now being launched worldwide. A key focus has been to develop geophysical methods to monitor the CO2 injection process, and particularly to try to quantify the injected volume directly from geophysical data.

One way to improve our understanding of how CO2 flows in a porous rock is to perform small-scale flooding experiments on long core samples. An example of such a flooding experiment and corresponding X-ray images for various flooding patterns is shown in Figure 4.16. By


measuring acoustic velocities as the flooding experiment is conducted, these experiments can be used to establish a link between pore-scale CO2 injection and time-lapse seismic on the field scale. Figure 4.17 shows an example of how CO2 influenced the seismic data over time. By combining 4D traveltime and amplitude changes, we have developed methods of estimating the thickness of CO2 layers, making it possible to estimate volumes. Another key parameter is CO2 saturation, which can be estimated using rock physics measurements and models. Although the precision of both 4D seismic methods and rock physics is increasing, there is no doubt that precise estimates are hard to achieve, and therefore we need to improve existing methods and learn how to combine several methods in order to decrease the uncertainties associated with these monitoring methods.

Figure 4.16: (left) Long core showing the location of the X-ray cross-section (red arrow). Water injection is 50 g/l. (right) X-ray density maps of a core slice: 6 time steps during the injection process (from Marsala and Landrø, EAGE extended abstracts, 2005).

A more recent example of time-lapse seismic monitoring of CO2 storage is from the Snøhvit field. Snøhvit is a gas field in the western Barents Sea, offshore Norway, where the gas contains approximately 5–8% CO2, which is separated from the gas and re-injected into sand layers close to the reservoir. From 2008 to 2011 approximately 1.6 million tons of CO2 were injected into the Tubåen Formation. An increase in the pore pressure in this formation led to the decision to inject into the Stø Formation instead, by drilling a new injection well in 2016.

An example of pressure-saturation discrimination using time-lapse seismic data acquired in 2003 and 2009 (see Grude et al., 2013) is shown in Figure 4.18. From this figure we clearly see the increased pore pressure within the Tubåen Formation, and relatively confined saturation changes that are visible closer to the injection well. The figure also demonstrates that the discrimination method improves when the offset span is increased, since the lower panel shows clearer images of pressure and saturation, and probably less cross-talk between saturation and pressure.

Figure 4.17: Time-lapse seismic data showing monitoring of the CO2 injection at Sleipner. The strong amplitude increase (shown in blue) is interpreted as a thin CO2 layer. The dashed red lines indicate the top and base of the Utsira sand layer (figure modified from Ghaderi and Landrø, 2009).

4.1.9  Future Aspects

The most important issue for further improvement of the 4D seismic method is to improve repeatability. Improvements in both seismic acquisition and 4D processing will contribute to this process. As sketched in Figure 4.11, it is expected that this improvement will be less pronounced than it was in the past decade. However, it is still a crucial issue, and even minor improvements might mean a lot for the value of a 4D seismic study. Further improvements in repeatability will probably involve issues like source stability, source positioning, shot time interval and improved handling of various noise sources. Maybe in future we will see vessels towing a super-dense grid of sensors behind them in order to obtain perfect repositioning of the receiver positions.

Another direction to improve 4D studies could be to constrain the time-lapse seismic information by other types of information, such as geomechanical modelling, time-lapse EM or gravimetric data, or innovative rock physics measurements. Our understanding of the relation between changes in the subsurface stress field and the seismic parameters is still limited, and research within this specific area will be crucial to advance 4D in future. New analysis methods like long-offset 4D might be a complementary technique, or an alternative method where conventional 4D analysis has limited success. However, this technique is limited to reservoirs where the velocity increases from the cap rock to the reservoir rock.

The link between reservoir simulation (fluid-flow simulation) and time-lapse seismic will continue to be developed. As computer resources increase, the feasibility of a joint inversion exploiting both reservoir simulation and time-lapse seismic data in the same subsurface model will increase. Despite extra computer power, it is reasonable to expect that the non-uniqueness problem (several scenarios fitting the same data sets) requires that the number of earth models is constrained by geology.


Figure 4.18: RMS amplitude measured in the Tubåen Formation on the inverted pressure (left) and saturation (right) 3D cubes based on near and mid stack (top) and near and far stack (bottom). The CO2 injection well is shown by the black dot and north by the black arrow. The red circle focuses on the amplitude effect in the neighbouring fault block to the injection block, where the noise is far less for the near-far stacks (lowermost figures). The leakage between the inverted pressure and saturation can be seen for the near-mid stacks (uppermost figures). Figure from Grude et al., 2013.
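Schematically, the discrimination workflows referenced in this section and in Section 4.1.6 amount to solving a small linear system at every map point: the 4D amplitude changes of two offset (or angle) stacks are written as different linear combinations of a saturation change and a pore pressure change, and the 2×2 system is inverted. The sketch below is only illustrative – the sensitivity coefficients are made up, whereas in practice they follow from rock physics (Gassmann for fluids, Hertz-Mindlin-type relations for pressure) calibrated to core data, as in Landrø (2001) and Grude et al. (2013):

```python
import numpy as np

# Made-up sensitivity coefficients (illustration only): each row
# expresses the 4D amplitude change of one stack as a linear
# combination of a saturation change dS and a pressure change dP.
A = np.array([[0.08, -0.05],     # near-stack sensitivities
              [0.15, -0.02]])    # far-stack sensitivities
d = np.array([0.010, 0.024])     # observed 4D amplitude changes

dS, dP = np.linalg.solve(A, d)   # invert the 2x2 system per map point
print(dS, dP)
```

The better conditioned this system is – i.e. the more the two rows differ – the less cross-talk between the estimated pressure and saturation maps, which is why a larger offset span improves the discrimination.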


4.2  Time-Lapse Seismic and Geomechanics

Ekofisk, discovered in the Norwegian North Sea in 1969, was one of the first fields where time-lapse seismic proved to be an excellent tool for monitoring and mapping overburden and other geomechanical changes in a producing field.

A stone is ingrained with geological and historical memories.
Andy Goldsworthy, Artist (1956–)

Studies at the Ekofisk field (Figure 4.19) have shown how reservoir compaction led to seabed subsidence of approximately 30–40 cm a year over a long period (1986 to 1998), before the subsidence suddenly dropped to 15–20 cm per year (Figure 4.20). Doornhof et al. (2006) explain this as a result of depletion and the chemical reaction between water and chalk. In 1987 a major campaign to inject sea water into the chalk reservoir had begun, with the amount of water injected per day reaching a maximum level in 1996–1997. After a certain volume of a reservoir has been swept by water, the chalk weakens to a given limit before the process slows down, which explains these changes in subsidence rate. In addition to this chemical process, we can observe that the average reservoir pressure started to increase again in 1994–1995, so possibly the sudden drop in subsidence in 1998 was caused by a combination of these two effects.

Figure 4.19: The Ekofisk complex.

Figure 4.20: Ekofisk production and subsidence history. From 1987 to 1999 subsidence was between 30 and 40cm per year, corresponding to a total seafloor subsidence of more than 4m over this period. (Doornhof, Kristiansen, Nagel, Pattillo and Sayers, Oilfield Review, 2006)

Furthermore, when subsidence is compared to reservoir compaction, the compaction rate is found to be higher than the subsidence rate; often a difference of 40–50% can be observed between the two, which means that the overburden rocks are being stretched. Looking at seismic surveys over an area away from the crest of the field in the period from 1989 to 1999, Guilbot and Smith (2002) found that the top reservoir moved downwards by 4m, while the corresponding seabed subsidence was only 2.4m, meaning that the overburden rock had been


stretched by 1.6m. From rock physics it is well known that such stretching will reduce the P-wave velocity of the rock, and hence will introduce an overburden time shift.

Guilbot and Smith (2002) used 4D traveltime shifts to conclude that the overburden at Ekofisk had been stretched and that the chalk reservoir rock had compacted due to production. A 4D traveltime shift means that the traveltime between two seismic horizons (representing geological interfaces in the subsurface) has changed. Guilbot and Smith found that the time shift for the overburden was up to 18 ms between 1989 and 1999. The overburden thickness at Ekofisk is approximately 3 km. This time shift was positive, meaning that the average P-wave velocity in the overburden had decreased between 1989 and 1999. For the same period they found a negative time shift of up to 10 ms for the Ekofisk reservoir formation, which was not a huge surprise since it was well known that the seafloor at Ekofisk had undergone severe subsidence. The compaction of the reservoir was also well known, and it was understood as well that the subsidence was less than the compaction at reservoir level, and hence that the overburden had been stretched. What was new and exciting in Guilbot and Smith's findings was that time-lapse seismic could be used to quantify compaction and thus to create maps showing that some reservoir compartments are more compacted than others. This initiated new research on how to couple geomechanical modelling with time-lapse seismic measurements.

Figure 4.21: 4D time shifts for the top reservoir interface (left) and for the Ekofisk Formation (right). The black area in the middle is caused by the gas chimney problem at Ekofisk, leading to a lack of high quality seismic data in this area. Notice that some areas of the reservoir zone (right) are more compacted than others. (Guilbot and Smith, TLE, 2002)

4.2.1  Reservoir Compaction and Velocity Changes

When the reservoir rock compacts, the over- and underburden are stretched. This stretch is relatively small (of the order of 0.05%). However, it produces a small velocity decrease that is observable as time shifts on time-lapse seismic data. A simple calculation can help us to understand why time-lapse seismic can detect such small changes. Assume that the velocity decrease caused by the stretching of the overburden is -1%. As an example we can assume that the average


velocity of the overburden is 2,000 m/s, and that the overburden thickness is 3,000m. After stretching, the velocity is 1,980 m/s. The difference in two-way traveltime is then (neglecting the length change of the overburden):

T2 − T1 = 6,000m/1,980 m/s − 6,000m/2,000 m/s ≈ 0.03s = 30 ms

which is well above detectable time shifts (less than 1 ms for high quality time-lapse seismic data). If the fact that the thickness of the overburden also changes is taken into the equation, the following relation might be derived (assuming that the stretch dz is much smaller than the overburden thickness z, and that the velocity change dv is much smaller than v – see Landrø and Stammeijer, 2004):

dT/T = dz/z − dv/v

When the rock is stretched, dz is positive and dv is negative, which means that the two terms on the right-hand side of this equation reinforce each other. In the first example we found that the velocity change caused a 30 ms increase in two-way traveltime. If we assume that the reservoir compacts by 10m and that the subsidence at the seafloor is 7m, the total stretch of the overburden is 3m, and the first term on the right-hand side of the equation is 3/3,000 = 0.001, corresponding to 3 ms, which is significantly less than the velocity effect in this case.

Guilbot and Smith (2002) show two maps of estimated reservoir compaction at Ekofisk, one without taking the overburden changes into account and one where the effect is included (Figure 4.22). The difference is significant, and it clearly demonstrates that the assumption of a constant overburden is not correct for the Ekofisk 4D case. Both in magnitude and spatial distribution, the two maps are very different.

Røste et al. (2005) assumed that the relative velocity change divided by the relative stretch is given as a constant, α. For the example given above α = -10. Also in 2005, Hatchell et al. introduced essentially the same parameter, denoted R, where R = -α. For a compacting reservoir, the P-wave velocity will increase and the thickness decrease, hence again the α value will be negative. This factor is often referred to as the dilation factor. If α or R is known, we observe from the equation above that when the 4D time shift is measured, both the velocity changes and the thickness changes can be estimated.

Figure 4.22: Estimated compaction of the chalk reservoir at Ekofisk between 1989 and 1999, neglecting overburden velocity changes (right) and including the changes (left). Notice that the two plots have different spatial distributions, not just a scalar difference between them. (Guilbot and Smith, TLE, 2002)

Figure 4.23: Estimated time shifts for the Snorre overburden. Note the negative time shifts (velocity slow-down) of up to 3 ms. (Røste et al., TLE 2015)
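The numbers in the example above are easy to reproduce. A minimal sketch of the relation dT/T = dz/z − dv/v:

```python
def timeshift_ms(z, v, dz, dv):
    """Two-way overburden time shift (ms) from a thickness change dz
    and a velocity change dv, using dT/T = dz/z - dv/v, T = 2z/v."""
    return 1000.0 * (2.0 * z / v) * (dz / z - dv / v)

# 3,000 m of overburden at 2,000 m/s:
print(timeshift_ms(3000.0, 2000.0, dz=0.0, dv=-20.0))   # velocity effect: 30 ms
print(timeshift_ms(3000.0, 2000.0, dz=3.0, dv=0.0))     # stretch effect: 3 ms
print(timeshift_ms(3000.0, 2000.0, dz=3.0, dv=-20.0))   # combined: 33 ms
```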

4.2.2  Compaction in Clastic Reservoirs

It is well known that highly porous chalk fields compact, but what about clastic reservoirs? In the North Sea both Elgin and Franklin, which are HPT (High Pressure and Temperature) fields, show significant compaction, although not as much as the chalk fields. In a recent paper Røste et al. (2015) report that the Snorre field exhibits compaction, and that overburden time shifts of the order of 3 ms have been observed between 1997 and 2009, as shown in Figure 4.23. Compared to Ekofisk, these time shifts are significantly smaller, by a factor of 6, underlining the fact that clastic reservoirs compact less than chalk fields. For a sandstone reservoir, the chemical effect (water weakening) is probably negligible, so it is fair to assume that the reservoir compaction is mainly pressure driven. Apart


from this, the mechanism for the stretching of the overburden rocks is the same. A vertical profile showing that the time shifts reach nearly up to the seabed and that the width of the anomaly is more than a kilometre is shown in Figure 4.24. From the time shift estimates, it is possible to estimate velocity changes, which might be compared to geomechanical modelling. Notice the arching effect (the variation in overburden vertical stress caused by the reservoir compaction); stress arching occurs if the compaction zone is finite (for an infinite reservoir without edges there will be no arching effects) and some of the overburden weight is transferred to the sides of the reservoir, as illustrated in Figure 4.25. Due to the finite size of the compacting compartment, there will be edge effects where the vertical stress close to the edge of a reservoir increases, while there is a decrease right above the crest of the compartment. Stress arching and in general stress changes in the overburden might cause problems for well bore stability, and there are several examples where wells have failed due to severe overburden stress changes. Hence, it is important and useful to monitor and map these overburden changes for a producing reservoir and, as these examples show, timelapse seismic is an excellent tool for this. Permanent reservoir monitoring at the Ekofisk field is discussed further in Section 4.6.

Figure 4.24: (a) Estimated time shifts between 1997 and 2009; note that the time shifts extend almost to the water bottom, which is at approximately 300m. (b) Estimated velocity changes. Notice four distinct areas (1–4), where 1 and 3 show a velocity decrease, 2 no change and 4 a velocity increase, probably caused by stress arching. (c) Modelled velocity change (using geomechanical modelling) assuming α = -20; the velocity increase marked 4 in (b) is interpreted as a stress arching effect. (Røste et al., TLE 2015)

Figure 4.25: Stress arching: if the weight of the overburden is transferred to the sides of the reservoir (right) we get a stress arching effect. On the other hand, if the weight of the overburden is carried by the reservoir itself, we have the situation shown in the left-hand figure. (Modified from Schutjens et al., 2010)


4.3  Time-Lapse Refraction Seismic

Would you like me to give you a formula for… success? It's quite simple, really. Double your rate of failure… You're thinking of failure as the enemy of success. But it isn't at all. You can be discouraged by failure – or you can learn from it. So go ahead and make mistakes. Make all you can. Because, remember that's where you'll find success. On the far side.
Thomas John Watson, Sr. (1874–1956), Chairman and CEO of IBM

In 1962 Markvard Sellevoll at the University of Bergen, together with the Universities of Copenhagen and Hamburg, acquired a refraction seismic survey in the Skagerrak, south of Norway. The aim of the survey was to prove the existence of sediments in the area. The refraction data clearly showed a layer approximately 0.4 km thick with low seismic velocities, confirming the presence of sedimentary layers (see Figure 4.27).

Figure 4.27: The end result of the refraction seismic survey from the Skagerrak in 1962. Sellevoll interpreted the first low velocity layer below the seabed as a glacial sand layer, with an estimated velocity of 1,680 m/s. Deeper layers show velocities of 3.1 and 4.1 km/s, overlying a crystalline basement with a velocity of 5.9 km/s.

4.3.1  The Basic Idea In the early history of exploration geophysics, the seismic refraction method was commonly used, but gradually reflection seismic, as we know it today, demonstrated its usefulness and its ability to create precise 3D images of the subsurface. Hence, the use of refraction seismic – by which we mean looking for waves that have predominantly travelled horizontally when compared to reflected waves – decreased. Here we discuss and show evidence that, although the most accurate 4D seismic results are obtained by repeated 3D reflection seismic surveys, there might be a small niche market for time-lapse refraction seismic. Landrø et al. first suggested this monitoring technique at the SEG meeting in 2004. Figure 4.26, taken from this presentation, shows the basic principle: if the oil that originally filled a reservoir is replaced with water, the velocity in the reservoir layer will change, and hence the refraction angle (and the traveltime) for the refracted wave will also change.

Figure 4.26: Example showing how refracted waves can be used to monitor an oil reservoir that is gradually getting more and more water-saturated. Velocity changes caused by pressure changes may also be detected this way.

It should be noted that the term refraction, or maybe one should say the way the word is used within geophysics, is not precise. The basic meaning is that if a wave changes


direction obeying Snell's law (sin θ1/v1 = sin θ2/v2) across an interface, the wave is refracted. The portion of the initially downgoing energy refracted along the boundary interface is called the critically refracted wave. In this case θ2 is 90°, implying that the incident angle θ1 must reach the critical angle θc, given by sin θc = v1/v2, for waves to travel along the interface. As the critically refracted wave propagates along the interface, energy is sent back to the surface at the critical angle. We call this returning energy refracted waves or refractions. They are also called head waves, since they arrive ahead of the direct wave at large source-receiver distances. When we use the term time-lapse refraction here, we mean that we exploit head waves and, in general, post-critical or diving waves.

4.3.2  Diving Waves

Diving waves are waves that are not head waves but, due to a gradual increase in velocity with depth, dive into the subsurface and then turn back again, without a clear reflection event. A diving wave occurs when the velocity (v) increases linearly with depth (z): v = v0 + kz, where k is the velocity gradient. Using Snell's law again (sin θ1/v1 = sin θ2/v2), it is possible to show that the ray path will be a circle with radius

R = 1/(pk) = v0/(k · sin θi)

where p = sin θi/v0 is the ray parameter and θi is the take-off angle with respect to the vertical axis for the ray. Figure 4.28 shows two examples, where the solid black line (k = 0.75 s⁻¹) represents a velocity model in which the velocity increases more rapidly with depth than for the solid red curve (k = 0.5 s⁻¹). We observe that in a model with a higher velocity gradient, the ray turns around earlier, so the offset where this ray is observed at the surface is shorter. This is exploited in Full Waveform Inversion to determine the low frequency velocity model. We clearly see that diving waves occur at large offsets. A linear velocity model is of course an idealisation, but due to compaction it is not a bad first-order approximation for a setting where the subsurface consists of sedimentary layers.

Figure 4.28: Raypaths for two diving waves in a medium where the velocity increases linearly with depth. V0 = 2,000 m/s and two different k-values are used. The take-off angle with respect to the vertical axis is 30° for both cases.

4.3.3  Complementary to Reflection

Both head waves and diving waves need large offsets to be detected, and therefore we often put them into the same category, although they are different wave types. It is also challenging to discriminate between diving waves and head waves in field data, so it is often convenient to treat them together. From the equation defining the critical angle, we see that to generate a head wave or refracted wave, the velocity of the second layer must be greater than that of the first layer or overburden. This is the first of several practical disadvantages of the time-lapse refraction method. Another is that the horizontal extension of the 4D anomaly needs to be large enough to be detected by the method; a long, thin anomaly is ideal. In this way, it is clear that the refraction method is complementary to conventional 4D seismic.
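The circular raypath is easy to explore numerically. In the sketch below, the expressions for the turning depth and the surface offset follow from the circle geometry (they are not given explicitly above); the model parameters are those of Figure 4.28:

```python
import numpy as np

def diving_wave(v0, k, theta_deg):
    """Circle geometry of a diving wave in v(z) = v0 + k*z."""
    theta = np.radians(theta_deg)     # take-off angle from the vertical
    R = v0 / (k * np.sin(theta))      # radius of the circular raypath
    z_turn = R - v0 / k               # depth of the turning point
    offset = 2.0 * R * np.cos(theta)  # offset where the ray resurfaces
    return R, z_turn, offset

# The two models of Figure 4.28 (v0 = 2,000 m/s, take-off angle 30 degrees)
for k in (0.5, 0.75):
    R, z, x = diving_wave(2000.0, k, 30.0)
    print(f"k = {k}/s: R = {R:.0f} m, turning depth = {z:.0f} m, offset = {x:.0f} m")
# The higher gradient turns the ray around at a shallower depth
# and a shorter offset, as seen in the figure.
```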

Figure 4.29: A synthetic seismic shot gather showing baseline data (left). The P-wave velocity of the reservoir layer is changing from 2,500 m/s to 2,450 m/s, and the shot gather on the right shows the difference between the base and monitor shot. The black arrow shows the top reservoir event, and the red arrow on the right plot shows that the strongest 4D signal actually occurs close to the critical offset (which is at approximately 5,000 m).
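To get a feel for the size of such time shifts, we can use the classical head-wave traveltime for a single layer over a half-space. In the sketch below the reservoir velocities (2,500 and 2,450 m/s) are those of the synthetic example, while the overburden velocity and thickness are assumed values chosen so that the critical offset, 2h·tan θc ≈ 5.3 km, roughly matches Figure 4.29:

```python
import numpy as np

def head_wave_time(x, v1, v2, h):
    """Head-wave traveltime for one layer (v1, thickness h) over a
    half-space (v2 > v1), valid beyond the critical offset."""
    return x / v2 + 2.0 * h * np.sqrt(v2**2 - v1**2) / (v1 * v2)

v1, h, x = 2000.0, 2000.0, 6000.0           # assumed overburden and offset
t_base = head_wave_time(x, v1, 2500.0, h)   # reservoir before the change
t_mon  = head_wave_time(x, v1, 2450.0, h)   # reservoir velocity reduced
print(1000.0 * (t_mon - t_base))            # time shift: about 4 ms
```

A 2% velocity drop in the refractor thus gives a time shift of a few milliseconds, the same order as the field observations discussed below.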


4.3.4  Field Examples

Figure 4.31: Observed time shift for the refracted wave close to the blowout well (left) and a similar display far away from the well (right). Black traces are before the blowout and red traces after. Typical time shifts within the gas anomaly are of the order of 3–4 ms. (Zadeh and Landrø, 2011)


The chalk fields in the southern part of the North Sea are good candidates for testing time-lapse refraction sensitivity. One example using the LoFS (Life of Field Seismic) data acquired over the Valhall field confirms this sensitivity (Zadeh et al., 2011). From Figure 4.30 we observe that the strong amplitude anomaly around 4,500m offset is shifted between the first (blue line) and the sixth (red line) LoFS surveys. The offset shift is of the order of 200m, and an additional offset shift is observed for the 8th LoFS survey (black line). This shortening of the critical offset is interpreted to be caused by velocity changes close to the reservoir.

Figure 4.30: Field data example using LoFS data from Valhall. We observe a clear shift towards shorter offsets for the interpreted amplitude increase close to the critical offset (4,000–4,500m). Figure from Zadeh et al., 2011.

Another potential use of the time-lapse refraction method is as a tool for the early detection of gas leakage or stress changes in the overburden. An example from an underground blowout that occurred in well 2/4-14 in the North Sea is shown in Figure 4.31. By studying the head wave before and after the blowout, we clearly observe a time shift for these waves (Zadeh and Landrø, 2011). A more comprehensive analysis is shown in Figure 4.32, where time shifts are estimated along a 2D profile. We clearly observe that the time shifts increase to a maximum value of around 3–4 ms close to the blowout well, and decrease away from it, as expected. Different colours in this plot represent different source-receiver offsets used in the time-lapse seismic analysis.

Figure 4.32: Estimated refraction time shifts for three offsets close to the blowout well (14) – a positive time shift of up to 4 ms is observed. (Zadeh and Landrø, 2011)

A third example of time-lapse refraction was presented by Hansteen et al., 2010, where time shifts are observed close to a region where steam is injected into a heavy oil reservoir in Alberta, Canada. When the steam is injected, the stiffness of the rock is reduced, leading to a velocity decrease, which in turn leads to a corresponding increase in time shift. This was successfully observed in this field example.

A very different approach was used by Hilbich in 2010, exploiting time-lapse refraction seismic tomography to study seasonal changes in ground ice in the Alps. He concludes that the method is capable of detecting temporal changes in alpine permafrost and can identify ground ice degradation. A major strength of the method, according to Hilbich, is the high vertical resolution potential and the ability to discriminate phase changes between frozen and unfrozen material over time. A significant limitation is the low penetration depth due to strong velocity contrasts between the active layer and the permafrost table.

4.3.5  Time-Lapse Refraction Radar

Since 2003 several of the major fields offshore Norway have been equipped with permanent receiver arrays at the seabed. The Valhall LoFS project has now been running for 15 years, providing useful data for the operator. At later dates the Ekofisk, Snorre and Grane fields installed similar permanent monitoring systems. The technologies of the systems have been different, but the key concept of semi-continuous monitoring of the field during production has been the same. Typically, these fields are monitored more often than with conventional time-lapse seismic surveys; a frequency of 1–2 surveys a year is normal.

Permanent arrays can also be used to monitor other acoustic signals in the water layer, such as microseismic events, vessel noise, drilling noise and noise from ship traffic.


Another potential use of a permanent array is to fire a simple source at the platform (a single gun or a small cluster of air guns may be used for this purpose), and then record refracted seismic signals from shallow (and, if possible, deeper) layers, as illustrated in Figure 4.33. The time interval between each shot at the platform might be once a month, once a day, or even once every hour if one wants to follow a specific event. Under normal conditions, this radar system will not detect any changes; the data from one survey will be more or less identical to the data from the previous one, apart from noise. A detection system needs to be developed to register abnormal signals when they occur and how they develop over time; a sketch of such logic is given below. The potential strength of this method is that it is cheap (assuming that the cost of the permanent array is already paid), and the processing or analysis of the data is also fairly straightforward. Weaknesses are related to practical issues regarding firing a seismic air gun from a platform, and to the challenge of locating an anomalous event in depth. The x and y coordinates of an event are easily and very precisely determined, since the source and receiver positions are known; depth can be estimated roughly from seismic modelling. So far, this has not been tested at any field, and therefore there is a chance that refraction radar will remain what it is today: an idea.
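If such a radar were implemented, the simplest detection logic would be to track the repeatability of the refracted arrivals shot by shot. A minimal sketch, reusing the NRMS measure of equation (7); the 30% alarm threshold is an assumed value that would have to be calibrated against the background variability of repeat shots:

```python
import numpy as np

def nrms(base, monitor):
    """NRMS (%) of the refracted time window (equation 7)."""
    rms = lambda x: np.sqrt(np.mean(x**2))
    return 200.0 * rms(monitor - base) / (rms(monitor) + rms(base))

def radar_alarms(reference, daily_shots, threshold=30.0):
    """Return the indices of the shots whose NRMS against a quiet
    reference shot exceeds an assumed, pre-calibrated threshold (%)."""
    return [i for i, shot in enumerate(daily_shots)
            if nrms(reference, shot) > threshold]
```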


4.3.6  The Future of Time-Lapse Refraction

At present time-lapse refraction is not well established as a 4D technique, but the few examples presented so far illustrate that the technique has potential. One weakness is that it is hard to map the time-lapse anomalies precisely in depth. Fortunately, full waveform inversion techniques have developed significantly over the last decade (Virieux and Operto, 2009; Sirgue et al., 2009). A few examples of this technique have also been presented for 4D applications (Routh et al., 2009). Therefore, there is hope that time-lapse refraction methods might develop into a more precise and accurate technique. Despite this lack of more advanced analytical methods, it is important to underline that if 4D refraction is used as a tool for detecting shallow gas leakage, the important issue is to detect and locate approximately where the leakage occurs. Detailed and accurate mapping of the 4D anomaly is not crucial.


Figure 4.33: Time-lapse refraction radar. Outline of the Grane field (the colour bar shows depth to the reservoir), which is equipped with a permanent array at the seabed. If we assume that a seismic source is fired every day at the platform, the refracted signals may be detected at the seabed by the array, and hence it is possible (at least in theory!) to construct a time-lapse refraction radar, sweeping over the field. (Statoil)


4.4  Measuring Seismic with Light

Fibre optic sensing technology offers the potential for a cost-effective, reliable reservoir monitoring system as an alternative to seismic systems based on electronics.

There is a crack in everything. That's how the light gets in.
Leonard Cohen (1934–2016)

4.4.1  Length Changes

Almost all physical measurements are based on a simple principle – the measurement of a change in length. When we determine the temperature, we measure how much a column of fluid changes due to the change in temperature. Recording blood pressure is also a measurement of a length change of a fluid column. If we want to measure the mass of an object, we measure the displacement of a spring. For seismic measurements we recognise the same principle: conventional hydrophones, responding to variations in acoustic pressure, convert a small length change into a piezoelectric signal that is proportional to the amplitude of the pressure wave. The output voltage is digitised and transmitted via a cable to the data recording system. However, such measurements require electrical instrumentation that may become unreliable when stationed on the sea bottom for life-of-field permanent monitoring, due to possible corrosion, electrical leakage or sensor degradation. So is there an alternative to electronics to measure seismic waves?

4.4.2  The Optical Fibre

The answer is fibre optic technology. The fibre optic system is entirely passive, with no electronic components at the wet end, and is therefore expected to be more reliable than in-sea electrical instrumentation. A strong interest has thus developed recently in using fibre optics in permanently installed reservoir monitoring systems. The optical fibre is composed of several elements: core, cladding and buffer. The core at the centre is the light-carrying element and is usually made of silica doped with another material to achieve the proper index of refraction. The core is surrounded by the cladding layer, which is still based on silica but with another dopant, or another concentration of the dopant, again to achieve the proper index of refraction. The fibre is then coated with a protective plastic covering called the buffer. To confine the light in the core, the refractive index of the core must be greater than that of the cladding. Typically, the core and cladding values are around 1.5, with a difference of approximately 0.5%.

Fibre Optics

Figure 4.34: A bundle of optical fibres.

Since the mid-1980s, optical fibres – thin flexible strands of pure silica glass, with the thickness of a human hair – have been taken up in a big way by the telecommunications industry. During the last two decades, fibre optics has also made its way down into oil and gas wells for the monitoring of key parameters like temperature, pressure, flow rate, and liquid and gas phase fractions. However, fibre optic technology can also be used to measure dynamic quantities like acoustic and seismic signals. Today, optical fibres are used for seismic reservoir monitoring, where the fibre optic systems consist of several thousand channels with multi-component sensors which include



Figure 4.35: The optical fibre is composed of an inner core, a cladding and an outer buffer.

The core of the fibre is only about 8–10 microns in diameter. This is the standard single-mode fibre used for long-range communication like telephony and cable TV. The optical system also includes a transmitter that converts an electric signal into light and sends it down the fibre. The transmitter is a laser diode designed to emit light pulses at wavelengths around 1,550 nanometres (nm). The attenuation of light in an optical fibre is mainly caused by scattering resulting from minor imperfections in the glass. This attenuation is strongly dependent on the wavelength of the light. By choosing wavelengths corresponding to low attenuation (1,550 nm), it is possible to transmit optical signals over tens of kilometres with little loss. Typical losses for single-mode fibres are of the order of 0.2 decibels (dB) per kilometre. (dB is measured as 10 times the logarithm of the output power divided by the input power.)
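The dB definition above translates directly into remaining optical power. A minimal sketch, assuming the 0.2 dB/km figure quoted in the text:

    # Fibre attenuation: dB = 10*log10(P_out/P_in), so a total loss of D dB
    # leaves a fraction 10**(-D/10) of the input power.
    loss_db_per_km = 0.2   # typical single-mode loss quoted in the text

    def remaining_power_fraction(distance_km):
        total_loss_db = loss_db_per_km * distance_km
        return 10 ** (-total_loss_db / 10)

    for d in (10, 50, 100):
        print(f"{d:>3} km of fibre: {remaining_power_fraction(d):.3f} of input power")
    # 10 km -> 0.631, 50 km -> 0.100, 100 km -> 0.010

Even after 50 km, a tenth of the launched power survives – which is why a single topside laser can interrogate sensors across an entire field.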

Figure 4.36: The attenuation in a single-mode fibre as a function of wavelength (loss in dB/km versus wavelength in nm), with contributions from Rayleigh scattering and from OH and IR absorption. The attenuation has a minimum at about 1,550 nm.

4.4.3  Fibre Optic Hydrophones and Geophones

The sensors used with fibre optics are hydrophones and three-component accelerometers. A fibre optic hydrophone is composed of a cylinder of a plastic-like material, where the fibre is coiled around the cylinder. When a pressure wave passes the cylinder, the length of the coil will be slightly changed, and this displacement can be measured with high resolution as a shift in reflection time or phase delay between two identical Bragg gratings situated on opposite sides of the hydrophone. The wavelength of the incident light must be equal to the Bragg wavelength. In this way the hydrophone converts pressure waves into a change in the fibre length.
The change in the length is proportional to the amplitude of the seismic wave. The fibre optic accelerometer similarly modulates the fibre length as the sensor is exposed to acceleration. To achieve a directional measurement, a special design utilising two half-spheres on each side of a rod ensures that the coil length is changed only for a directional signal. If the seismic wave hits the optic accelerometer perpendicular to the rod, the coil length will not change. When the wave hits along the rod, the coil length will change, and a non-zero seismic signal is measured (Figure 4.38c). An advantage of fibres is that a number of sensors can be multiplexed on one fibre. In the Optowave system this is implemented through a combination of time division multiplexing and wavelength division multiplexing. All sensors within one 4C seismic station are multiplexed using time division multiplexing, i.e., separated in time at the detector.

The Bragg Grating

A fibre Bragg grating (FBG) acts as a light reflector with maximum reflection at a very specific wavelength, while other wavelengths are transmitted. The Bragg grating is 'written' into a short segment (about 1 cm) of the fibre by using an intensive ultraviolet laser that alters the refractive index of the fibre core. The core is photosensitive to UV light, and its change in refractive index is a function of the intensity and duration of the exposure. The grating will typically have a sinusoidal refractive index variation. The reflected wavelength λ, called the Bragg wavelength, is defined by the relationship λ = 2nΛ, where n = (n3+n2)/2 is the average refractive index of the grating in the fibre core and Λ is the grating period. Using an average refractive index of 1.55 and a grating period of 500 nm, the grating reflects at λ = 1,550 nm. The reflection strength Rλ (peak reflectivity) is determined by the grating length L = NΛ, where N is the number of periodic variations, and the grating strength n3 − n2.

The refractive index is a measure of how much the speed of light is reduced inside the fibre relative to a vacuum. For example, the glass core has a refractive index of 1.5, which means that in the core, light travels at 1/1.5 = 0.67 times the speed of light in a vacuum. This is fast – 200,000,000 metres per second. Note: The refractive index of water is 1.33, corresponding to a velocity of 225,000,000 m/s. If particles in water are accelerated to a velocity above this they will radiate, an effect called Cerenkov radiation. Cerenkov received the Nobel Prize in Physics in 1958 for describing this intensive 'blue glow' observed in nuclear reactors. Particles travelling faster than the speed of light in a vacuum are referred to as tachyons – hypothetical particles first considered by Sommerfeld.

Figure 4.37: Sketch of Bragg grated fibre.
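A minimal numerical sketch of the Bragg relation λ = 2nΛ, using the values quoted in the box; the strain sensitivity shown ignores the photo-elastic correction and is illustrative only:

    n_avg = 1.55        # average refractive index of the grating (from the box)
    period_nm = 500.0   # grating period, Lambda (from the box)

    # Bragg relation: the grating reflects most strongly at lambda = 2*n*Lambda.
    bragg_nm = 2 * n_avg * period_nm
    print(f"Bragg wavelength: {bragg_nm:.0f} nm")   # 1550 nm

    # If a passing wave stretches the grating by 1 microstrain (1e-6), the
    # period - and hence the reflected wavelength - scales by the same factor.
    strain = 1e-6
    shift_pm = bragg_nm * strain * 1e3   # nm -> picometres
    print(f"shift for 1 microstrain: ~{shift_pm:.1f} pm")  # ~1.6 pm

Picometre-level wavelength shifts are what the interrogation unit must resolve, which is why the phase-delay measurement between paired gratings is used rather than a direct spectral readout.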


Figure 4.39: A fibre optic cable.

Figure 4.38: (a) Basic principle of a fibre optic hydrophone. B1 and B2 are two identical Bragg gratings which reflect light with the same wavelength as that emitted by the laser. When an acoustic pressure wave passes the fibre coil, the length of the fibre between the gratings will change by a small amount dL. This length change is precisely measured as a change in travel time (phase delay) between the Bragg reflections. The relationship between dL and the amplitude of the seismic pressure wave is linear. (b) A fibre optic hydrophone. The fibre is wrapped around a cylinder of a material that is sensitive to the pressure wave. (c) The fibre optic accelerometer.

Several 4C stations are interrogated through the same fibre using wavelength division multiplexing, i.e., every station is interrogated at a different wavelength. This requires several lasers emitting light at distinct wavelengths matched to the corresponding Bragg gratings. The hydrophone and three-component accelerometer are mounted into a fibre optic four-component seismic station, or sensor package, which is spliced into the fibre optic cable. The cable protects the fibres in their hostile environment. In a full-scale ocean-bottom seismic system covering an area of, say, 50–60 km2, several thousand of these stations are connected in cables. The cables are trenched in the seafloor and connected back to topside facilities on a platform. An advanced optical interrogation system is used to read out the optical signals from the sensors as they measure seismic reflections. In techno-language, a highly advanced multiplexing technique allows thousands of sensors to be interrogated by the laser instrumentation through the subsea optical fibre lead-in cable. Finally, the signals are converted to seismic formats and transferred to shore for data processing and interpretation.
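The combination of the two multiplexing schemes can be pictured as a two-dimensional address: wavelength selects the station, arrival time selects the sensor within it. A minimal sketch with hypothetical wavelengths and station counts (not the actual Optowave channel plan):

    # Hypothetical example: each 4C station is assigned its own laser wavelength
    # (wavelength division multiplexing); the four sensors within a station are
    # separated by their arrival time slots at the detector (time division
    # multiplexing), so the pair (wavelength, time slot) addresses one sensor.
    wavelengths_nm = [1546, 1548, 1550, 1552]          # one per station (assumed)
    sensors = ["hydrophone", "accel_x", "accel_y", "accel_z"]

    channel_map = {}
    for station, wl in enumerate(wavelengths_nm):
        for slot, name in enumerate(sensors):
            channel_map[(wl, slot)] = f"station{station}_{name}"

    print(channel_map[(1550, 0)])                      # station2_hydrophone
    print(len(channel_map), "channels on one fibre")   # 16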


Figure 4.40: A fibre optic OBS system. Thousands of sensors are trenched in the seafloor and connected back to topside facilities for reservoir monitoring.

Early in 2018 it was decided that the giant Johan Sverdrup field (Section 1.3.1.2) would be monitored by permanent fibre optic cables trenched into the seabed.

4.5  The Valhall LoFS Project

Guest Contributors: Olav Barkved and Tron Golder Kristiansen (Section 4.5.1); and Einar Kjos, Kent Andorsen and Nirina Haller (Section 4.5.2)

Watch every detail that affects the accuracy of your work.
Arthur C. Nielsen (1897–1981), businessman

The Valhall permanent seismic array (Life of Field Seismic; LoFS) was the first full-field permanent reservoir monitoring system to be installed, and paved the way for similar monitoring projects.

4.5.1  Brief History of Valhall LoFS

The first version of a permanent seafloor seismic array was installed as the Foinaven Active Reservoir Management (FARM) system in 1995, on the UK continental shelf west of Shetland. In January 2000, BP rolled out the LoFS concept to seismic contractor and manufacturing companies, in order to engage them in the creation of a potential market for permanent seismic cable technology, and a solution that included equipment from Oyo GeoSpace was selected. The Valhall field in Norway was then identified as a good candidate for trying out this type of technology. It was thought that the new system would be a major factor in improving recovery and would add significant value to the field.

During the summer of 2003, 120 km of seismic cables were trenched into the seafloor, covering an area of 45 km2. A total of 2,400 receiver units, each with 4C sensors (three geophones and one hydrophone), recorded ~52,000 shots to create one LoFS survey. The four-component data make it possible to create images of both PP data (conventional data measuring P-waves being reflected from subsurface interfaces) and PS data (P-waves travelling down being converted to S-waves at the interfaces and propagating back up to the seabed receivers). The cables are connected to a recording system at the central platform. The system records continuously and is controlled from onshore, and data can be sent onshore via the optical network. No part of the subsea system is exposed above the seafloor, thus avoiding any conflict with fishing activity in the area.

Eighteen LoFS surveys were acquired between 2003 and 2015, with survey 19 shot in 2017. Survey 16 used redeployable cable to expand the area imaged by the LoFS array, and surveys 18 and 19 rely upon redeployable nodes rather than the permanent array, which has served well beyond its intended life and has now become less reliable. Nodes also provide flexibility in that the area covered may be adjusted.

The high number of 4D surveys can be displayed as a matrix, as shown in Figure 4.41. The matrix presentation was chosen as a simple communication tool. It is accessible from a website and is updated by scripts when new data arrive. These matrices have been produced for each generation of processing sequence and are therefore also an efficient tool to visualise improvement in seismic imaging.

Figure 4.41: Matrix image of 55 4D difference maps. Top row: difference between first monitor and base survey, second monitor survey and base survey, etc. Second row: difference between second monitor survey and base survey, third monitor survey and base survey, etc. These matrices are described in the text. At Valhall, water injection start-up is typically characterised by an acoustic impedance increase as gas goes back into solution when pressure increases.

Figure 4.42: The Valhall field is in the Norwegian North Sea.


Figure 4.43: (a) More than 350 well paths have been drilled at the Valhall field (Top Reservoir map; labels: Central Complex, Valhall, Hod). (b) Subsidence map at seabed (2015–1978), showing over 5m increase in water depth. (c) Reservoir compaction from streamer data (2012–1992): the corresponding estimated reservoir compaction based on 4D seismic (from Haller et al., EAGE 2016).

Each matrix displays the dynamic evolution of one particular 4D attribute (in this example, acoustic impedance increase in the Tor Formation). The first row is referenced to the base survey, the second row to the second monitor, and so on. Each map shows the 4D signal in the cumulative period between the current monitor and the reference survey. Hence the signals are at a maximum in the top right corner, which displays the field response between 2003 and 2014. The map in row 2, column 2 is the snapshot of the water injection starting at the well in the middle of the map. At Valhall, water injector start-up is typically characterised by an acoustic impedance increase as gas goes back into solution when pressure increases, leading to an increase in velocity. A similar matrix is generated for the time-shift attribute converted to reservoir compaction, and the joint interpretation allows us to characterise the waterflood efficiency.

The net impact of reservoir compaction is improved recovery and production, and the value of the added production can outweigh the possibly costly challenges associated with compaction. The impact of subsidence is well known from both the Ekofisk (see Section 4.2) and the Valhall fields, where increasing water depth has called for modification and replacement of existing platforms (see Figure 4.43). Subsidence related to reservoir compaction may result in reduced well life due to casing deformation, and local stress changes can introduce wellbore instability during drilling (Figure 4.44).

The Valhall Field

The Valhall field was discovered in 1975 with well 2/8-6, following an exploration campaign struggling to solve imaging challenges due to gas in the overburden over the crest of the structure. The field was initially over-pressured and under-saturated, and is a north-west to south-east trending anticline containing an Upper Cretaceous chalk reservoir. The Valhall field was developed under primary depletion for the first 23 years of its history. The primary drive mechanism for the field is compaction supported by solution gas drive. During this time the pressure dropped sharply, so that the field produced below bubble point in many areas. Dedicated water flooding has been active since early 2006 to manage pressures. After thirty-four years of production, a total of 942 MMboe has been produced out of the original estimated 3.3 Bboe in place. The oil production rate in 2016 averaged ~44,000 boepd, and the field is expected to be producing until beyond 2040.


Consequently, understanding the impact of compaction on the surrounding rocks all the way to the surface becomes an important task for safe and efficient resource development. 4D seismic can give an estimate of compaction and improve our understanding of spatial variations in subsidence.

4.5.2  Recent Examples from the Valhall LoFS Project

In this section we look at three more recent examples illustrating how Valhall LoFS data has been utilised in the management of the Valhall field. The data lend themselves well to 3D interpretation and 4D analyses, due to careful seismic processing and imaging, with care in matching or nearly matching acquisition parameters relative to baseline surveys.

4.5.2.1  Improved Imaging and Mapping Gas Accumulation in Overburden

P-wave velocities are strongly affected by the presence of gas, while S-wave velocities are not (though they may respond to pressure changes). A strong, low-impedance anomaly on PP data correlating with low-velocity zones in the FWI (Full Waveform Inversion) velocity model suggests the presence of gas, and the absence of an anomaly on PS data strengthens this interpretation because it suggests that a lithology change is not causing the anomaly. These properties of OBS data have been used to improve the understanding of DPZs (Distinct Permeable Zones), which are used to characterise the overburden; the overburden is divided into several DPZs and their respective seals. DPZs require proper zonal isolation because they are defined as permeable intervals, with potentially different pressure regimes and fluid content. From 2013 to 2015, LoFS OBS data were processed through an updated sequence which included anisotropic velocity model building using FWI and joint PP/PS tomography.

Figure 4.44: Full scale test reproducing casing deformation seen at Valhall following fault reactivation in the overburden shale.




This provided significantly improved images for both reservoir and overburden characterisation, as shown in Figure 4.45, which illustrates the quality of the data for characterising DPZ6. Prior to the reprocessing, LoFS data were of limited use in this interval, and it was necessary to rely on FWI velocities and streamer data to interpret this zone. Detailed geological log interpretation has indicated the existence of distinct permeable zones near top DPZ6, at around 900m TVDSS on the PP section (Figure 4.45). The joint interpretation of the multi-component seismic data, PP and PS, has revealed a complex fracture network acting as a migration pathway from the underlying hydrocarbon-bearing diatomite. The low-velocity anomaly (top of Figure 4.45) is now understood as the low-resolution envelope of these fractures, which have a residual signature of gas as they have been active over geological time. This interpretation was important for field management, in particular in reassessing the drilling risks and well integrity in that section of the overburden. By revealing a natural, geological migration path between one petroleum system (the diatomite) and its reservoir (the DPZ6 sands), seismic imaging has given a new impulse to the Valhall drilling campaign and future field management (Haller, 2016).


Figure 4.45: Circa 2010 FWI velocities (top), towed streamer data reprocessed in 2010 (second panel), PP seismic data (third panel) and PS seismic data (bottom). Note that the FWI and towed streamer data are not from exactly the same location as the OBS data, but they are close enough to illustrate the added information provided by the OBC data in DPZ6. Earlier processing of the OBC was of similar quality to the streamer data at this location. The faults are clearly imaged in the PP domain on the vertical profile, as they are highlighted by bright amplitudes. They do not display bright PS amplitudes – confirming the gas interpretation.



NRMS map: RMS amplitude calculated in a 240 ms window centred at the Top Hard Chalk seismic reflector.

Figure 4.46: Old (a) and new (b) 4D seismic maps. The new processing includes Full Waveform Inversion. Notice the improvements within the area beneath the gas affected area in the central parts of the field (marked by the dashed circle).

4.5.2.2  Improved Imaging of Reservoir Beneath Gas-Affected Area

The presence of gas in the overburden (as discussed in Section 4.5.2.1) makes it difficult to map 4D amplitude changes in the reservoir over much of the Valhall crest using tomography-based velocities for imaging. However, significant progress has been made in recent years through the use of Full Waveform Inversion to create an improved velocity model. Figure 4.46 shows an example demonstrating the uplift caused by this step-change improvement in imaging. The 4D response based on tomographic velocities is shown in Figure 4.46a, where a scattered and broken image in the central portion of the 4D map is visible. This is an area with a thick reservoir re-pressurised due to water injection, which should give a very strong 4D response. Gas pockets in the overburden act as lenses which distort the seismic wavefield, and when migration velocities do not represent the actual velocities, the migration process cannot properly reconstruct the wavefield to give a usable image of the subsurface. FWI produces velocity models containing much higher resolution than conventional ray-based reflection tomography. The 4D image in Figure 4.46b shows a much more coherent 4D signal in the crestal region (dashed circle), because the imaging benefited from an FWI velocity model (Andorsen et al., 2012).

Viewing 3D and 4D data in a vertical section shows that 4D PP data indicate the presence of reservoir where it cannot be interpreted on 3D data. Figure 4.47 illustrates this, comparing a vertical section of LoFS 3D PP and PS data from the 2013–2015 processing with a 2009 4D image using the first available FWI velocities. The right half of the PP static image is very difficult to interpret reliably, due to degradation in imaging quality caused by gas in the overburden. The PS data is also affected under the gas, but the quality of the image is quite good because the upgoing S-wave energy is unaffected by gas. If the downgoing energy passes through a region in the overburden with little gas, it is possible to get a good image, as is the case here. Note that to give a good image, the PS migration requires high quality P- and S-wave velocities. This was achieved by using FWI to calculate P-wave velocities, followed by joint Vp-Vs inversion to refine the P-wave velocities and define the S-wave velocities. These are the first results from Valhall with a good quality 3D static image under the gas cloud.

The bottom image shows that a 4D PP signal can be observed where the 3D PP imaging is poor. This allowed mapping of the reservoir interval under the gas cloud prior to receipt of the latest PS imaging, when no other image under the gas cloud was available. 4D PP products still give the most reliable prediction of reservoir presence under the gas cloud where imaging is poor. Given the quality of the 3D PS, the Valhall team has embarked on 4D PS processing and is awaiting the data for 2018.

4.5.2.3  4D Seismic Beneath Gas-Affected Area to Update Reservoir Model

As described in the previous example, the quality of LoFS surveys can be strongly impacted by gas in the overburden, but often a robust 4D signal can still be extracted. Interpretation of the 4D signal relies on integrating knowledge of pressure development, production, and injection through time (including water break-through in production wells), dynamic modelling of the reservoir, and synthetic 4D seismic generated from the dynamic reservoir model and the rock properties model.

Figure 4.47 panel labels, top to bottom: 2013 static PP picture; 2013 static PS picture; 4D 2003 to 2005 (LoFS06–LoFS1). Depth axis from -2,100 to -2,900m; well legend: Tor producer, Hod producer.

Figure 4.47: Comparison of vertical seismic sections. Top – best available static image from LoFS PP data. Middle – best available static image based on PS (converted wave) data. Bottom – 2009 4D using initial FWI results and data from 2003 and 2005, capturing depletion prior to any water injection. The right half of the static PP section is very difficult to interpret with any confidence due to degradation of imaging related to gas in the overburden. The PS static image remains reasonably good under the zone where gas affects the PP data. The 4D AI (acoustic impedance) image shows red events which indicate an increase in acoustic impedance (reservoir hardening) due to depletion. Note that there is no 4D signal associated with the right-most (blue) well because it was drilled after the seismic data were acquired (courtesy of the Valhall JV, Aker BP and Hess).

This is an iterative and interdisciplinary process. Figure 4.48 illustrates an example of this type of integration from the central crest. It shows the reservoir thickness from the reservoir model; changes in water saturation (Sw), pressure (P), and gas saturation (Sg); the modelled 4D seismic response (Syn AI); and the actual 4D response (Seis AI). The time period modelled is from 2005 (LoFS6), which correlates to the onset of water injection, to 2011 (LoFS14). In general, the extent and form of the modelled 4D response correlate best with the modelled decrease in Sg (gas going back into solution) due to a pressure increase. However, changes in Sw, Sg and pressure are all affecting the rock properties, complicating the 4D response. The actual 4D signal is shown on the bottom right. The modelled 4D response differs from the actual 4D response, with a dominant north-east to south-west trend in the modelled response and an east-west trend in the actual response.



Figure 4.48: Changes in the Valhall Central Crest Porous Tor reservoir from 2005 to 2011 (LoFS6 to LoFS14). ∆Sw, ∆P, and ∆Sg are from the dynamic reservoir model. These are used in conjunction with our rock model to give the synthetic 4D AI response (bottom centre). The actual 4D AI from seismic is on the bottom right, and the shaded area shows where 4D data are not interpretable due to degradation from gas in the overburden (Andorsen et al., 2012).

Despite the low S/N (signal to noise) ratio under the crest, confidence in the 4D is increased because it has developed steadily with time. The multiple vintages of LoFS data also help differentiate signal from noise: where an anomaly persists and strengthens with time, it can be confidently interpreted as signal.

The 4D data were used as an input for updating the reservoir model by removing and changing barriers, changing the fractured hard ground (FHG) description, and changing the reservoir thickness. Figure 4.49 shows the modelled 4D response before and after updating the reservoir model. The red outline on these two maps shows the area with a robust 4D response from the seismic, which can be seen to be better aligned with the synthetic response using the updated reservoir model.

Figure 4.49: Synthetic 4D AI response from 2005 to 2011 (LoFS6 to LoFS14) before and after updating the reservoir model based on the 4D seismic response. The red outline shows the area with a robust 4D response from the seismic, and it is better aligned with the modelled response after the updates (Andorsen, 2012). Note that in both simulations there is a strong 4D response around the G-19 water injector, while the signal is weaker in the 4D signal from the actual seismic (Figure 4.48, lower right). This is attributed to degradation in the quality of the seismic signal, which decreases to the west.

Acknowledgement: The authors thank the Valhall JV companies Aker BP and Hess for permission to publish this section.


4.6  The Ekofisk LoFS Project

Guest Contributor: Per Gunnar Folstad

Fast is fine, but accuracy is everything.
Wyatt Earp (1848–1929), gunman and deputy sheriff of Pima County, USA

ConocoPhillips, operator of the Norwegian PL018 licence, installed a recording system for permanent seismic monitoring at the Ekofisk field in 2010. The main objective of the system is to undertake comparative time-lapse or 4D seismic analysis for improved understanding of reservoir depletion zones and injected water expansion fronts within the reservoir interval, thereby reducing the drilling and production risks for future production wells. A second objective is to improve structural imaging of the gas-obscured crestal area, which covers approximately one-third of the field. Unlike receivers in a seismic streamer, which only record compressional waves, 4C receivers located on the seafloor also record shear waves reflected from the subsurface. Reflected shear waves are important for imaging through the gas-obscured area at the crest of Ekofisk. By combining compressional and shear waves it is possible to derive more accurate elastic properties of the overburden and the reservoir. A third objective is the utilisation of 3D 4C seismic data to reduce drilling risk in the overburden.

4.6.1  4D Seismic Observations

Changes in seismic signals over time (4D), as observed by repeat 3D surveys over producing fields, are a result of alterations in reservoir fluid composition and pore pressure caused by production and injection programmes. Normally, the dominant 4D effects are time and amplitude changes in the reservoir, as porosity remains unchanged.

The original content of this section was previously published as an Expanded Abstract for the EAGE PRM Conference in Oslo 2015.

However, in highly porous chalk fields such as Ekofisk (Figure 4.50), 4D changes are also transmitted into the overburden due to reservoir compaction during depletion and water injection (see Section 4.2). The compaction-induced geomechanical changes in the overburden result in large 4D effects, measured as changes in two-way traveltime between the surface and top reservoir. Joint interpretation of the five Ekofisk 3D seismic streamer surveys (1989, 1999, 2003, 2006 and 2008) has revealed overburden traveltime differences as large as 20 ms. Production and injection data from wells prove the overburden traveltime differences to be strongly correlated to the underlying reservoir compaction (Figure 4.51). In fact, detectable traveltime differences are observed at wells which have been active for less than a year.

A second and subtler component of the 4D signal is an amplitude difference caused by impedance changes occurring as the reservoir responds to water injection and pressure depletion. Although noisy on streamer seismic data, this 4D signal is important in planning new wells that are targeting specific intra-reservoir zones. 4D seismic is being used extensively in reservoir monitoring and in the planning of new Ekofisk wells. The combination of overburden traveltime and reservoir amplitude differences is used in a predictive sense to help place new producers in areas of considerable water-flooding risk.
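The link between traveltime shifts and compaction can be made explicit with a standard first-order relation (a rule of thumb, not the full analysis used at Ekofisk): for a layer of thickness z and velocity v, the relative change in two-way traveltime is

\[ \frac{\Delta t}{t} \;\approx\; \frac{\Delta z}{z} - \frac{\Delta v}{v}. \]

In the overburden above a compacting reservoir, stretching (Δz > 0) and the associated velocity decrease (Δv < 0) both act to increase the traveltime, consistent with the positive time shifts in Figure 4.51.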


Figure 4.50: The Ekofisk field.



However, the subtle amplitude changes are noisy, and challenging to interpret reliably from streamer 4D seismic data.

4.6.2  Monitoring with a Permanent System

By 2008, 4D seismic streamer surveys had become an established technique to help identify remaining oil zones for new production wells. The risk of encountering water-swept zones had been reduced and the net result was accelerated production and fewer redrills. Despite the complexity of installing 200 km of permanent ocean-bottom seismic cables around existing infrastructure and numerous pipeline crossings, the conclusion from a Value of Information (VOI) study was still that this would be the best solution for future seismic monitoring at Ekofisk.

Figure 4.51: 4D traveltime changes at Top Ekofisk from 1999–2003, 2003–2006 and 2006–2008. The images show a propagating water front from injector well K3 towards producers X16, X28 and M17 which were put on production in the period 2003–2005. Positive time shifts are a result of reservoir compaction and decreased seismic velocity in the sediments above top reservoir.

Key parameters in the VOI analysis were cost estimates of permanent systems versus streamer surveys, the track record of 4D seismic for well planning at Ekofisk, expected improvements in repeatability from fixed receivers, and the forecast well programme and production profiles within the licence period. A permanent array of ocean-bottom receivers would enable efficient data acquisition with a source vessel once or twice a year. Included as an upside in the economic evaluation was an expected improvement in imaging of the crestal part of the field with converted (PS) waves.

4.6.3  Choosing a Fibre Optic System

In 2008, ConocoPhillips invited six suppliers to bid on the provision of the Ekofisk LoFS to be installed in 2010. Three of the providers offered conventional systems incorporating either traditional geophones or MEMS (micro-electro-mechanical systems) accelerometers, with subsea analogue-to-digital conversion and data transmission of electrical signals by copper data cable.

The Ekofisk Field


The Ekofisk field, located in block 2/4 in the Norwegian sector of the North Sea, is formed from fractured chalk of the Ekofisk and Tor formations of Early Paleocene and Late Cretaceous ages, most of which had high initial porosity in the range of 30–50%, but low permeability of 1–5 mD. The reservoir is an elliptical anticline, 11.2 km long and 5.4 km wide, at 2,900–3,300m depth. At the crest of the field is the seismic obscured area (SOA), where gas-charged zones in the overburden complicate imaging with conventional seismic data (see Figures 4.52 and 4.53). Ekofisk was discovered in 1969, and has been producing since 1971. The field was originally developed by pressure depletion with an expected recovery factor of 17%. However, a large-scale water injection programme which started in 1987 has proved successful and has contributed to a substantial increase in oil recovery. The initial pressure depletion, plus water weakening of the chalk due to the injected water, has caused substantial reservoir compaction, causing the seabed to subside by up to 9m. ConocoPhillips expects to recover more than 50% of the oil originally in place, with roughly 627 MMboe (98 MMm³ oe) recoverable resources remaining at the end of 2016.

Figure 4.52: Seismic line showing the seismic obscured area.

Figure 4.53: Top Ekofisk formation structure map showing the extent of the seismic obscured area (SOA).



The other three providers offered passive systems incorporating either Bragg-grating based optical sensing elements or Michelson-style optical interferometers. Digital data transmission in the passive systems is by optical fibre, and all data processing is done in topside electronics. A key advantage of optical sensor technology is that the subsea components are completely passive, providing greater durability and reliability when compared with systems that use electronic or moving-coil sensors. The fact that only a small number of optical fibres are used to collect data from many thousands of channels distributed over the reservoir is not unlike a modern telecommunications system. All active signal processing is contained in the topside instrumentation package, which can be upgraded with minimum effort as new generations of optoelectronics and optical processing hardware are introduced. However, optical systems for 4D seismic were a new technology with relatively limited testing. As part of the bid evaluation process, a system test was conducted in 2008, in which three cables, one electrical and two optical, were trenched 1m into the seafloor and tied back to a recording system located at the Ekofisk K platform (Figure 4.54). Apart from some problems related to 'overscaling' of the optical receivers (typically occurring when the seismic source is close to the receiver), the conclusion was that all systems performed well and within the specifications set out in the bid.

In the end, ConocoPhillips decided to proceed with a system based on Bragg-grating optical sensing elements. This system consists of fibre optic seismic array cables, lead-in cables and laser interrogation instrumentation that is located on the Ekofisk M platform.

Figure 4.54: Seismic P-wave stacks from optical and electrical recording systems used for the Ekofisk test.

4.6.4  Recent Examples from the Ekofisk LoFS Project

In the 'old' days of seismic reservoir monitoring, an accuracy of 1 ms for traveltime shifts was normal (Landrø et al., 2001). Meunier et al. showed that for a land experiment, using fixed sources and fixed receivers, it was possible to measure time shifts for a shallow reflector (180m depth) with an accuracy of 0.023 ms (standard deviation). For a reservoir depth of 3,000m, like the Ekofisk field, these numbers are not realistically achievable. However, an accuracy of 0.2 ms was achieved by using fixed receivers and repeating the shot points as precisely as possible. This is an important step: an improvement of a factor of 5–10 in the accuracy for measuring time-lapse traveltime shifts is crucial, and represents a breakthrough in the possibilities for detailed 4D interpretation. The corresponding accuracy for amplitude changes is of the order of 2–3%.

4.6.4.1  Using Accurate Time Shift Measurements

Figure 4.55 shows estimated traveltime shifts for various periods at Ekofisk. For the first period we observe time shifts up to -1 ms close to the producer P1. The negative time shifts close to P1 between December 2010 and May 2011 are interpreted as a water weakening effect on the chalk close to the well. Water injected by the injector I1 is flooding into the area where oil is being produced, and the chemical reaction between the water and the chalk weakens the rock, which starts to collapse, resulting in an increase in P-wave velocity and a corresponding decrease in traveltime shift (dt/t = -dv/v). In June 2011 the producer P1 collapsed, and hence the observed time shifts between May and October in this year (second vintage) are minimal. A horizontal producer was drilled in August 2012. We observe positive time shifts between July 2012 and April 2013 (bottom left, Figure 4.55), corresponding to a pore pressure increase caused by the water injection. This pore pressure increase was confirmed by pressure measurements in the well when it was drilled in August 2012.

In September 2013 the water injector I1 was shut in until February 2014. For this period we expect a pore pressure decline, corresponding to negative time shifts, exactly as observed for the April 2013 to October 2013 vintage. For the last time period (October 2013–May 2014), we observe the effect of activating injection in I1 again: a positive time shift corresponding to a pore pressure increase. This example demonstrates the benefit of increased accuracy. We can monitor the reservoir production and, maybe more importantly, we can monitor the extent of the anomaly, and find out whether anomalies cross faults.
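A minimal sketch (not the workflow used at Ekofisk) of how a sub-sample traveltime shift between a base and a monitor trace can be estimated by cross-correlation with parabolic refinement of the peak; the trace, sample rate and shift below are synthetic:

    import numpy as np

    dt = 0.004                                     # sample interval, s (assumed)
    t = np.arange(0, 2, dt)
    base = np.exp(-((t - 1.0) / 0.03) ** 2)        # synthetic event at 1.000 s
    monitor = np.exp(-((t - 1.0006) / 0.03) ** 2)  # same event shifted by +0.6 ms

    # Cross-correlate and find the integer lag of the peak.
    xc = np.correlate(monitor, base, mode="full")
    lags = np.arange(-len(t) + 1, len(t))
    k = int(np.argmax(xc))

    # Three-point parabolic interpolation around the peak gives a fractional
    # lag, which is what pushes the estimate well below the sample interval.
    y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
    frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    shift_ms = (lags[k] + frac) * dt * 1e3
    print(f"estimated time shift: {shift_ms:.2f} ms")   # ~0.60 ms

Fixed receivers and repeated shot points help because, with everything else held constant, the correlation peak moves only in response to subsurface change.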
4.6.4.2  Mapping Acoustic Impedance Changes

The second example (Figure 4.56) shows the result of a 4D pre-stack time-lapse inversion applied to PRM data from May 2011 and October 2013. P3 is a horizontal producer that experienced increased water production in 2011. Due to this water influx, the well was closed in 2012. As discussed for the previous example, the water influx resulted in weakening of the chalk and a corresponding compaction, leading to increased acoustic impedance, since the rock becomes somewhat stiffer.



The 4D anomaly is strong and clear in a relatively thin zone (30m) close to the well. A weaker anomaly is observed in the neighbouring formations above the formation where the lowest perforations of the well occur.

4.6.5  Status and Use


So far, the seismic permanent fibre optic system that was installed at the Ekofisk field in 2010 has delivered accurate and useful measurements. The PRM system is also utilised for overburden monitoring purposes, in order to ensure safe production and the early detection of anomalous pressure build-ups. Recent research has also demonstrated that the seabed array can exploit background low-frequency random noise to map, for instance, shear wave velocity changes in the layers close to the seabed. However, the main impact is of course the increased accuracy of the time-lapse seismic measurements, which are utilised for detailed reservoir management purposes.

Figure 4.55: Time shifts for six different PRM-difference data sets. Time shifts have been measured for the entire Ekofisk Formation, and the colour bar scale is between -1 and 1 ms.


Figure 4.56: 4D acoustic impedance changes estimated between 2011 and 2013. The vertical profile (left) shows a clear anomaly close to the P3 well perforations in the lower Ekofisk Formation, and weaker anomalies for the upper formations. The right-hand figure shows a map view of the same producer (P3) and the corresponding acoustic impedance increase.

4.7  The Snorre PRM Project

Guest Contributor: Mark Thompson

Out of intense complexities, intense simplicities emerge.
Winston Churchill (1874–1965)

In December 2012 Statoil deployed approximately 500 km of seismic cable for a PRM installation covering an area of 195 km2, totalling about 10,000 seismic stations, on the Snorre licence, representing the largest PRM installation to date, as shown in Figure 4.57. By October 2014 the PRM system had been installed and the first seismic acquired. In 2017, the sixth PRM survey was acquired. The seismic source effort covers an area of 410 km2 with a total of 4,050 source sail-line kilometres. The PRM operation at Snorre has demonstrated that it is possible to monitor production effects with only some months' interval between surveys, and that it is further possible to differentiate between water and gas during a water alternating gas (WAG) cycle. Additionally, the fast turnaround of PRM allows for speedy feedback on newly completed well operations.

4.7.1  Why PRM?

In the quote above, from many years ago, Winston Churchill managed to summarise what PRM is in the 21st century. Conventional 4D seismic was previously acquired too infrequently to map the rapid and complex changes taking place in the Snorre reservoir.



Acknowledgement: The authors thank Statoil AS and the Snorre licence for permission to publish this section, and the many persons who have contributed to this work. The views and opinions expressed here are those of the operator and are not necessarily shared by the licence partners.

Figure 4.57: The Snorre PRM receiver array layout covers an area of approximately 200 km2.

Painstaking interpretation and the disciplined application of conventional 4D seismic to try to understand and acquire knowledge of the Snorre reservoir during production turned out to be insufficient. New approaches to reservoir management had to be developed to map the underlying simplicity of the observed complexity. The key is frequent monitoring in order to understand the behaviour of the simple constituent parts underlying the reservoir complexities. With PRM the geoscientist can see previously unseen 4D patterns and realise reservoir management opportunities. As time progresses, we believe that PRM will bring simplicity and efficiencies to our daily lives at a fraction of the cost of conventional 4D.

Let's explain a bit more. Snorre utilises WAG to optimise the sweep efficiency of the injection programme, where the WAG cycles in a WAG injector can typically vary between two and 12 months. At the same time, prior to PRM, the traditional seismic time-lapse surveys on Snorre were acquired with a three-year interval between the surveys.


Figure 4.58: The Snorre A platform.



Due to the fast-changing production and reservoir conditions, it turned out to be difficult to interpret the 4D data and relate the 4D signal to production effects. The reservoir changes in some cases could add up to a strong 4D signal, and in other cases the reservoir changes could more or less cancel the seismic 4D effect (Figure 4.59). In addition, pressure variations and related seismic time-shift effects added to the complexity. To be able to understand the total 4D response and use the result both qualitatively and quantitatively, the mixed 4D effects needed to be decomposed into individual components. To this end, it was recognised that improved 4D data quality and more frequent data acquisition would simplify Snorre's complex 4D story.

4.7.2  Monitoring a Complex WAG Cycle

The ability to monitor a complex WAG cycle (Figure 4.60) was demonstrated by the observations of injector well A. The first survey was acquired in the spring of 2014, the second survey in the autumn of 2014, and the third survey in the spring of 2015. Well A injected water during the period from spring to autumn in 2014, encompassing the first two seismic PRM surveys. During this period the water injection is clearly mapped seismically as an amplitude increase between the two surveys, as displayed in Figure 4.61. Later, prior to the third PRM survey, the well converted to gas injection, which is clearly indicated seismically by an amplitude decrease between the second and third surveys. When comparing the seismic difference between the first and third surveys, the amplitude increase due to water injection interacts destructively with the amplitude decrease due to gas injection during this 12-month period of monitoring. This leads to no observable amplitude changes seismically.


Figure 4.60: Injection profile for well A, alternating between water and gas injection, with the corresponding PRM surveys during spring 2014, autumn 2014, and spring 2015. The different colours correlate to different sleeves which regulate flow into the different reservoir units.

The increased amplitude between spring 2014 and autumn 2014 was interpreted to be the result of the increased pressure in the area due to the water injection of an existing well. The 4D signal progresses southwards through a fault, which had previously been considered restrictive to reservoir communication, with the 4D signal now terminating against another fault further south. With the new frequent 4D data available, the fault further south is now interpreted to be more sealing than previously assumed. Between autumn 2014 and spring 2015, the decrease in amplitude observed on the seismic, as well as its areal extent, was interpreted as a decrease in injection pressure, which also confirmed the southward progression and the southerly sealing fault. These observations were used as input into the geomodel and to validate reserve calculations for future wells to be drilled in the area.

Figure 4.61: Amplitude increase (red) between spring 2014 and autumn 2014 (a), and amplitude decrease (blue) between autumn 2014 and spring 2015 (b). The amplitude effect is largely cancelled between spring 2014 and spring 2015 (c).


Figure 4.59: Seismic 4D maps from Snorre. (a): Water injection in a well between 1997 and 2001 is associated with amplitude dimming (blue colours). (b): Example of complex 4D amplitude response from mixed gas and water injection during 2001–2006. In 2005, significant amounts of gas were injected. Increased gas saturation leads to amplitude brightening (red colours). (c): The WAG cycle during 2006–2009 has largely cancelled the seismic 4D effect. The lesson is that with a large time span between seismic surveys, it becomes difficult to interpret the 4D effect related to the complex injection and production history.



Several new wells had been planned in this segment, based on the earlier assumption of a sealing fault just south of the injector. Due to this new information, both the location and the need for these planned new wells are being re-evaluated. In this example, frequent 4D seismic data from PRM can potentially save the cost of drilling a well into an already produced area, or decrease the risk of drilling an injector well into an area that does not communicate with the producer.

The Snorre Field

The Snorre field is a large oil field located in blocks 34/4 and 34/7 of the Tampen area of the Norwegian North Sea and is situated in 300–350m water depth. The field was discovered in 1979 and sanctioned in 1988, with production start-up in 1992. It covers an area of approximately 200 km2 with a series of reservoir units, which are approximately 1 km thick. The field consists of several large rotated fault blocks, as shown in Figure 4.62. The reservoir, at a depth of 2–2.7 km, contains sandstones belonging to the Lower Jurassic and Triassic Statfjord and Lunde Formations. The reservoir has a complex structure characterised by alluvial channels and internal flow barriers. Sand connectivity and communication across faults are issues. The field is currently producing from two platforms and a subsea template with reservoir pressure maintained by water, gas and water alternating gas (WAG) injection. The production and injection history is complex and many of the injectors have been back-produced. Different reservoir levels are actively opened and closed to optimise the production and pressure support.

Figure 4.62: Snorre geosection.

4.7.3  Frequent 4D Seismic Acquisition

Another example, which demonstrates the importance of frequent 4D seismic acquisition, fast turnaround, and quick delivery of data for interpretation, is shown in Figure 4.63. Injector B started to inject water into the toe section just six weeks before the autumn 2015 acquisition. At the time there was uncertainty associated with the performance of the zone isolation within the well, as well as the communication pathway for that segment. The seismic data show a brightening effect (red), which is interpreted as a pressure increase (Figure 4.63). The 4D signal extends southwards from the injector towards the producers, while no 4D effects are observed in the north, where there has been no production. Fast turnaround of PRM data enabled early confirmation that zone isolation worked as planned with no leakage to other reservoir units. The PRM data also showed that faults (green lines in the figure) do act as barriers. Observations from the corresponding well data show the pressure increased from 280 to 320 bars during this period of injection, which provides an important calibration point for the further 4D interpretation work, including seismic modelling and matching to the simulation model.

Figure 4.63: Amplitude difference map showing an amplitude increase (red) after six weeks of injection.


4.8  The Grane PRM Project

Guest Contributor: Rigmor Elde

Sometimes one sees things more clearly years afterwards than one could possibly at the time.
Agatha Christie, 'The Mysterious Mr. Quin'

The main objective of 4D seismic monitoring at the Grane field is to increase oil recovery. The PRM system was installed in order to better understand the dynamic behaviour of the reservoir by acquiring more frequent and fresher 4D seismic with even better quality than could be achieved using towed streamers. Additional objectives, such as the monitoring of overburden integrity around shallow injectors, can be considered a benefit of PRM technology (Roy et al., 2015). Improved repeatability between surveys and fast turnaround for seismic data processing are important success criteria when using PRM technology. In the start-up phase these criteria have met expectations. Preliminary measurements in a window right above the reservoir zone show a reduction of 50% in NRMS (Normalised Root Mean Square error), from ~24% for streamer vs streamer to ~12% for PRM vs PRM. By the autumn of 2017, seven PRM surveys had been acquired on Grane, and the production effects observed confirm the usefulness of frequently acquiring high quality 4D seismic data. Below we give two different examples of the application and value of 3D and 4D PRM seismic data respectively.
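The NRMS metric quoted above is commonly computed as 200% times the RMS of the trace difference divided by the sum of the two RMS amplitudes. A minimal sketch with synthetic traces, where the noise level is chosen to mimic the ~12% PRM figure:

    import numpy as np

    def rms(x):
        return np.sqrt(np.mean(x ** 2))

    def nrms_percent(a, b):
        # Identical traces give 0%; uncorrelated traces of equal energy ~141%.
        return 200.0 * rms(a - b) / (rms(a) + rms(b))

    rng = np.random.default_rng(0)
    base = rng.standard_normal(500)
    monitor = base + 0.12 * rng.standard_normal(500)  # small non-repeatable noise
    print(f"NRMS = {nrms_percent(base, monitor):.1f}%")   # ~12%

Halving the NRMS roughly halves the smallest 4D amplitude change that can be distinguished from acquisition noise, which is why the streamer-to-PRM improvement matters for subtle production effects.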

4.8.1  Better Imaging Within Platform Shadow Area

Data quality in the platform shadow area has been a problem with towed streamer acquisitions, but this has improved with PRM datasets. The monitoring of shallow waste injectors is part of the scope of PRM, in this case looking for 'out of zone' injection and potential leakages to the seafloor. When the initial preliminary fast-track data from the first PRM survey arrived in February 2015, the overburden was scanned for potential differences in the seismic signature that could relate to production/injection changes. The overburden around a specific water injector was of particular interest and concern, as this well has had a very complex injection history. Previous 4D streamer seismic surveys (2005, 2007, 2009, 2011 and 2013) had not been able to monitor the area properly because of missing or highly uncertain data, as they were undertaken very close to the Grane platform (i.e. in the platform shadow area), where towed streamer seismic data is difficult to acquire and image. PRM seismic data, however, is less affected by the platform shadow, since the seismic stations are situated closer to the structure, and yields better results near the platform than the corresponding streamer data.

During the first PRM survey (PRM0), in autumn 2014, a pronounced depression in the Top Utsira reflector, about 75m in diameter and 10ms in vertical extent, was observed (Figure 4.64) just above the injection point of the shallow injector. On the seismic data acquired before the start of production (1993) the Top Utsira reflector did not show this depression. Based on this observation on the PRM data, together with the injection history, it was decided to stop water injection in this well. A detailed seabed survey concluded that there was no leakage to the seabed, though due to uncertainty about the nature of the overburden damage, and how it might develop if injection was continued, it was decided to keep the well shut in. In order to explain what had happened in the overburden around this injector, a seismic modelling project was undertaken, the aim of which was to test the most likely scenarios leading to the overburden change.

4.8.2  PRM After a Few Days of Gas Injection

The value of 'fresh' 4D data was demonstrated by monitoring the gas expansion resulting from a newly drilled gas injector, drilled shortly before the second PRM survey (PRM1), which was undertaken during the spring of 2015 and took approximately three weeks to acquire.

Acknowledgement: The authors thank the Grane licence partners for permission to publish this section, and the many persons who have contributed to this work. The views and opinions expressed here are those of the operator and are not necessarily shared by the licence partners.


Figure 4.64: (a): Close-up of the injector (white line) on the Top Utsira TWT map. The estimated diameter (~75m) of the depression and the distance to the platform (~450m) are superimposed. (b): PRM0 seismic data along the shallow injector well path. The Top Utsira reflector interpretation is shown in yellow, and the area of the anomalous depression is outlined with a question mark. Note the injection point just below the Top Utsira depression.


During the acquisition of PRM1, this well started to inject gas. The 4D observations demonstrated the clear effects of gas injection around the injection point (Figure 4.65), and showed that the production packers were operating as expected. In addition, it was possible to map the directional trend of the gas movement. These observations have expanded the use of 4D seismic from the traditional sphere of well planning into monitoring the status of well completions.

Figure 4.65: Difference between PRM1 and PRM0. Left: Instantaneous amplitude map. Right: Cross-section view with relative acoustic impedance difference along the gas injector. The well path is shown in black with the injection interval in green and the production packers in red. The data show a clear 4D seismic response of gas replacing oil.

The Grane Field

The Grane oil field is located in the Norwegian North Sea, about 200 km north-west of Stavanger, and contains heavy (19°API) oil with no initial gas cap. The expected oil in place volumes amount to about 220 MMm³, with estimated reserves of 144 MMm³. The field has been successfully producing since start-up in September 2003, and the present production (June 2017) is approximately 75,000 bopd.

The reservoir is made up of late Paleocene Heimdal sands at a depth of ~1,700m, which are encased by potentially unstable Lista Formation shales. The top and base reservoir surfaces are very rugose due to deformation/remobilisation and injection of the sands. The field is produced by artificial gas injection drive, and new wells are placed close to base reservoir to recover deep, remaining oil.


Figure 4.66: Risers at the Grane platform.



Chapter 5

Broadband Seismic Technology and Beyond

You know what the issue is with this world? Everyone wants some magical solution to their problem and everyone refuses to believe in magic.


From Alice’s Adventures in Wonderland – a novel written in 1865 by English mathematician Charles Lutwidge Dodgson (1832–1898) under the pseudonym Lewis Carroll.

Once a new technology rolls over you, if you’re not part of the steamroller, you’re part of the road. One such new technology is broadband seismic. Want to be part of the steamroller or the road? In this chapter, we give you an introduction to what broadband seismic has to offer.


5.1  The Drive for Better Bandwidth and Resolution

This section gives an introduction to resolution and the benefits of hunting for low frequencies in particular.

Technology presumes there's just one right way to do things and there never is.
Robert M. Pirsig (1928–2017), American writer and philosopher

Most people think of broadband with regard to telecommunications, in which a wide band of frequencies is available to transmit information. This large number of frequencies means that information can be multiplexed and sent on many different frequencies or channels within the band concurrently, allowing more information to be transmitted in a given amount of time (much as more lanes on a highway allow more cars to travel on it at the same time). In seismic exploration, broadband refers to a wider band of frequencies being recorded and utilised than in conventional seismic exploration. In the marine case, the conventional acquisition system is said to give a useable bandwidth of typically 8–80 Hz, whereas broadband seismic systems are claimed to give useable frequencies from as low as 2.5 Hz up to 200 Hz or more for shallow targets.

On land, vibrators today can produce signal frequencies down to 1.5 Hz. In this chapter we will discuss the seismic vendors' various new broadband solutions – a combination of leading equipment, unique acquisition techniques and proprietary data processing technology. As the quote at the start of this section says, it is fair to state that there is not one right way to do things. Each vendor's solution has unique capabilities.

5.1.1  Organ Pipes

Did you know that organ pipes have gone through a technological development similar to that of broadband seismic?

Figure 5.1: The world’s largest pipe organ was built between May 1929 and December 1932 by the Midmer-Losh Organ Company of Merrick, Long Island, New York. It weighs 150 tons, boasts seven manuals and has 1,439 stop keys, 1,255 speaking stops, 455 ranks, and 33,112 pipes! The most impressive stop on the organ would have to be the 16 ft Ophicleide, which is the world’s loudest stop. This stop has six times the volume of the loudest train whistle.


Lasse Amundsen

Figure 5.2: Temporal resolution. Left: Resolution increases with the maximum frequency. The number of octaves is 2, 3 and 4 for the black, blue and red spectra, respectively. Right: Resolution is relatively insensitive to the minimum frequency. The number of octaves is again 2, 3 and 4, but the resolution is the same. A key learning is that side-lobe reduction is obtained by adding low frequencies.

The frequency f of an organ pipe is f = v/λ, where v is the speed of sound in air (340 m/s) and λ is the wavelength. Let L be the length of the pipe. The longest possible wavelength equals 2L and 4L for open and closed pipes, respectively. The maximum wavelength thus is λ = 4L, and the corresponding minimum frequency equals f = v/4L. One of the biggest organs in the world is the Boardwalk Hall Auditorium organ in Atlantic City. It is equipped with 33,112 pipes, and the biggest pipe has a length of 64 ft. This is an open pipe, so the corresponding lowest frequency is around 8 Hz. A closed pipe of the same length would give a lower frequency of 4 Hz. But the story does not stop at 4 Hz. The lowest produced note is obtained by combining a stopped 64 ft and a stopped 42 2⁄3 ft pipe to produce a resultant 256 ft tone, which gives 2 Hz! This is far below the threshold of the human ear, which is approximately 16 Hz. So what is the point of this focus on low frequencies for organ pipes? Can we feel the low frequencies directly on our body, or is it a combination of hearing and body feeling? Anyhow, there is a strong similarity between the design of big organ pipes and today's developments in broadband seismic. As geophysicists, we would be thrilled if our marine seismic system produced frequencies truly from 2 Hz and upwards. We would want to activate all the pipes of the organ in Atlantic City, and especially the big pipes! The low frequencies are of particular interest for deep imaging, inversion and high-end interpretation.
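As a quick sanity check on these numbers, the short sketch below (our own minimal illustration, not taken from any organ specification) evaluates f = v/λ for the pipes just discussed; it reproduces roughly the 8, 4 and 2 Hz quoted in the text.

```python
import numpy as np

V_AIR = 340.0   # speed of sound in air, m/s
FT = 0.3048     # metres per foot

def pipe_fundamental(length_m, closed=False):
    """Fundamental frequency of an organ pipe.
    Open pipe: lambda = 2L; stopped (closed) pipe: lambda = 4L."""
    wavelength = (4.0 if closed else 2.0) * length_m
    return V_AIR / wavelength

f_open_64 = pipe_fundamental(64 * FT)                        # ~8.7 Hz
f_stop_64 = pipe_fundamental(64 * FT, closed=True)           # ~4.4 Hz
f_stop_42 = pipe_fundamental((42 + 2 / 3) * FT, closed=True) # ~6.5 Hz

# The 'resultant' is the difference tone of the two stopped pipes,
# equivalent to a 256 ft pipe: roughly 2 Hz.
print(f_open_64, f_stop_64, f_stop_42 - f_stop_64)
```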

5.1.2  Temporal Resolution

Improving bandwidth and resolution has been a priority since the early days of the seismic method – to see thinner beds, to image smaller faults, and to detect lateral changes in lithology. Although sometimes used synonymously, the terms bandwidth and resolution actually represent different concepts. Bandwidth simply describes the breadth of frequencies comprising a spectrum. This is often expressed in terms of octaves. Commonly referred to in music, an octave is the interval between one frequency and another with half or double its frequency. As an example, the frequency range from f1 to f2 > f1 represents one octave if f2 = 2f1. The range from 4 to 8 Hz represents one octave of bandwidth, as do the ranges 8–16 Hz and 16–32 Hz. Also, the range from f1 to f0 < f1 represents one octave if f0 = f1/2.

In a classic empirical study, Kallweit and Wood (1982) found a useful relationship between bandwidth and resolution. For a zero-phase wavelet with at least two octaves of bandwidth, they showed that the temporal resolution TR in the noise-free case could be expressed as TR = 1/(1.5 fmax), where fmax is the maximum frequency in the wavelet. Other definitions are possible, but for two octaves or more of bandwidth the clue is that one can approximately relate temporal resolution to the highest, and only the highest, frequency of a wavelet. This leads to some very useful and quite accurate predictions. Examples are that the wavelet breadth is TB = 1/(0.7 fmax) and the peak-to-trough time is TPT = 1/(1.4 fmax).

Figure 5.2 demonstrates the expected improvement in resolution associated with increasing fmax. We see that when the maximum frequency is increased while holding the minimum frequency fmin fixed, sharper temporal wavelets are obtained. Meanwhile, the right side of the figure shows three more temporal wavelets where the fmin value is changed while holding fmax fixed. We see that the main lobe shows hardly any change while the side lobes diminish as fmin is lowered. Thus, filling in the low frequencies gives wavelets with less pronounced side-lobe amplitudes. This is attractive because the wavelet is smoother, and with less side-lobe energy it is unlikely that a small-amplitude event will be lost amid the side lobes from neighbouring, large-amplitude reflections.
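These relationships are easy to explore numerically. The sketch below (a minimal illustration under our own assumptions – the corner frequencies and side-lobe measure are ours, not from the book) builds zero-phase Ormsby wavelets with a trapezoidal amplitude spectrum, counts octaves, and compares the side-lobe level as fmin is lowered while fmax is held fixed.

```python
import numpy as np

def ormsby(t, f1, f2, f3, f4):
    """Zero-phase Ormsby wavelet with trapezoidal spectrum
    (corner frequencies f1 < f2 < f3 < f4, in Hz).
    Note: np.sinc(x) = sin(pi*x)/(pi*x)."""
    term = lambda f: np.pi * f**2 * np.sinc(f * t)**2
    w = (term(f4) - term(f3)) / (f4 - f3) - (term(f2) - term(f1)) / (f2 - f1)
    return w / np.abs(w).max()          # normalise peak to unity

t = np.arange(-0.25, 0.25, 0.001)       # time axis, 1 ms sampling

octaves = lambda fmin, fmax: np.log2(fmax / fmin)
print(octaves(8, 80), octaves(2.5, 200))  # ~3.3 vs ~6.3 octaves

# Hold fmax fixed at 40 Hz and lower fmin: side lobes diminish.
for f1, f2 in [(10.0, 20.0), (5.0, 10.0), (2.5, 5.0)]:
    w = ormsby(t, f1, f2, 35.0, 40.0)
    # crude side-lobe proxy: largest amplitude beyond the first trough region
    side_lobe = np.abs(w[np.abs(t) > 0.03]).max()
    print(f"fmin = {f1:4.1f} Hz -> side-lobe level {side_lobe:.3f}")
```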


5.1.3  Resolution and Wavelength

The classic definition of resolution, given by the famous geophysicist Bob Sheriff, is the ability to distinguish two features from one another. The seismic method is limited in its ability to resolve or separate small features that are very close together in the subsurface. The definition of 'small' is governed generally by the seismic wavelength, λ = v/f. Geophysicists can do little about a rock's velocity, but they can change the wavelength by working hard to change the frequency. Reducing the wavelength by increasing the frequency helps to improve both temporal/vertical and spatial/horizontal resolution. Resolution thus comes in two flavours. The temporal resolution refers to the seismic method's ability to distinguish two close seismic events corresponding to different depth levels, and the spatial resolution is concerned with the ability to distinguish and recognise two laterally displaced features as two distinct adjacent events.

Figure 5.3 depicts two similar layers separated by interfaces. The measurable seismic signals that they produce may show as separate, distinguishable signals when they are well separated – a condition we call 'resolved'. When the interfaces are close together, however, their effects on the seismic signals merge and it is difficult or maybe impossible to tell that two rather than just one interface is present – this condition we call 'unresolved'. The problem of resolution is to determine how to separate resolved from unresolved domains.

The yardstick for seismic resolution is the dominant wavelength λ. A much used definition of 'resolvable limit' is the Rayleigh limit of resolution: the bed thickness must be a quarter of the dominant wavelength. This resolution limit is in agreement with conventional wisdom for seismic data that are recorded in the presence of noise, with the consequent broadening of the seismic wavelet during its subsurface journey. The dominant wavelength generally increases with depth because the velocity increases and the higher frequencies are more attenuated than lower frequencies. The λ/4 limit is considered by many the geophysical principle governing how thin a bedding layer we can expect to resolve from seismic. However, we might be able to go beyond this limitation by focusing on frequency's ability to tune in on the layer thickness. For reflectors separated by less than λ/4, the amplitude of the composite reflection depends directly on the thickness of the reflecting layer. This composite amplitude variation can be used for estimating the thickness of arbitrarily thin beds.

The stepwise AI profile in Figure 5.3 represents a somewhat uncommon model for reservoir geophysics. The model of greater interest is the wedge model (see Figure 5.4), which gives reflections of equal amplitude but opposite polarity. In this case λ/4 is known as the tuning thickness that corresponds to the maximum composite amplitude.

Figure 5.3: Different limits of vertical resolution (d–f, after Kallweit and Wood, 1982; Zeng, 2008) for a step-wise acoustic impedance (AI) profile (a) giving rise to two reflection events R of same polarity (b). Resolution can be increased by increasing the high-frequency content of the seismic wavelet.

Figure 5.4: Wedge model (a) and its resolution (b). Observe that low frequencies in the wavelet significantly reduce the side-lobe effects, improving interpretability.
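As a quick worked example of the λ/4 rule (the velocity and frequency values below are assumptions chosen purely for illustration):

```python
def tuning_thickness(v, f):
    """Rayleigh resolvable limit: a quarter of the dominant wavelength v/f."""
    return v / f / 4.0

# Representative shallow, intermediate and deep cases (assumed values)
for v, f in [(2000.0, 40.0), (3000.0, 30.0), (4500.0, 20.0)]:
    print(f"v = {v:6.0f} m/s, f = {f:4.0f} Hz -> "
          f"tuning thickness {tuning_thickness(v, f):5.1f} m")
```

Note how the resolvable bed thickness degrades with depth, since velocity increases while the dominant frequency drops – exactly the trend described above.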


Figure 5.5: Spatial wavelets – the result of migration of band-limited data from a point diffractor. (a) Effect of the temporal maximum frequency on spatial wavelets that have the same minimum frequency, 10 Hz (bands 10–40, 10–80 and 10–120 Hz). Spatial resolution increases linearly with temporal frequency. (b) Wavelets with the same maximum temporal frequency, 40 Hz (bands 10–40, 5–40 and 2.5–40 Hz), are relatively insensitive to the minimum frequency. However, side-lobe effects are reduced with lower frequencies.

5.1.4  Missing Low Hertz Hurts Interpretation

Wedge modelling is often used to understand the vertical (thin bed) resolution limits of seismic. The wedge model is a high-AI layer embedded in low-AI surroundings. As used in Figure 5.4, it consists of 11 traces, and thickens from 1 ms on the left to 51 ms on the right. Each trace (not shown) contains two reflectivity spikes of equal amplitude but opposite sign. The top spike is located at the top of the wedge, and the bottom spike at the bottom of the wedge. To investigate the effect of low frequencies in the wavelet, we filter the model traces with band-pass wavelets of 2.5–50 Hz and 10–50 Hz. Note the side-lobe reduction and improved interpretability of the wedge when the low frequencies are filled in. This example is a key result showing the benefit of low frequencies, as it nicely illustrates their value for seismic interpretation.
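To make the experiment concrete, here is a small numerical sketch (our own illustration with assumed taper frequencies, not the modelling used for Figure 5.4): two opposite-polarity reflectivity spikes are convolved with band-limited wavelets, and the amplitude leaking well outside the wedge interval is compared for a 10–50 Hz band against a 2.5–50 Hz band.

```python
import numpy as np

def ormsby(t, f1, f2, f3, f4):
    """Zero-phase Ormsby wavelet (trapezoidal spectrum), peak-normalised."""
    term = lambda f: np.pi * f**2 * np.sinc(f * t)**2
    w = (term(f4) - term(f3)) / (f4 - f3) - (term(f2) - term(f1)) / (f2 - f1)
    return w / np.abs(w).max()

dt = 0.001                              # 1 ms sampling
t_wav = np.arange(-0.2, 0.2, dt)
n = 600                                 # trace length: 0.6 s
top = 200                               # top-of-wedge sample (0.2 s)

for band in [(10.0, 15.0, 45.0, 50.0), (2.5, 4.0, 45.0, 50.0)]:
    w = ormsby(t_wav, *band)
    max_side = 0.0
    for thickness in range(1, 52, 5):   # wedge thickness in ms
        r = np.zeros(n)
        r[top] = 1.0                    # top reflectivity spike
        r[top + thickness] = -1.0       # base spike, opposite polarity
        trace = np.convolve(r, w, mode="same")
        # side-lobe proxy: amplitude well above the wedge interval
        max_side = max(max_side, np.abs(trace[: top - 80]).max())
    print(f"band {band[0]:>4}-{band[3]:.0f} Hz: side-lobe leakage {max_side:.3f}")
```

Filling in the 2.5–10 Hz octave noticeably reduces the leakage, which is the numerical counterpart of the cleaner wedge image in Figure 5.4.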

5.1.5  Spatial Resolution

Equally as important as temporal resolution is the issue of spatial resolution. The most common way to discuss spatial resolution is in the context of Fresnel zones. However, that discussion typically deals with resolution before seismic migration – the noble mathematical art of transforming seismic data into a 'true' image of the subsurface. We are interested in resolution after migration. To this end, we follow the method proposed by Berkhout (1984) for quantifying spatial resolution, via the use of the post-migration 'spatial wavelet'. The spatial wavelet is simply the migrated point diffractor's response after band-limiting the temporal spectrum. The two key parameters dictating the nature of the spatial wavelet are the temporal bandwidth and the spatial aperture.

Figure 5.5a shows three spatial wavelets for a single point diffractor in a constant-velocity medium. The temporal bandwidths are varied in the same way as in Figure 5.2, i.e. the maximum frequency value is varied while holding the minimum frequency fixed. As expected, the larger fmax values yield sharper spatial wavelets. Thus, better temporal resolution also leads to better spatial resolution. Figure 5.5b shows three more spatial wavelets for the same medium. This time the fmin value is changed while holding fmax fixed. Again, the trends are the same as in the temporal case. In other words, the main lobe shows hardly any change while the side lobes diminish as fmin is lowered. The side lobes are difficult to see in the figure, but a closer look would reveal that they diminish with lower fmin.


5.2  Exorcising Seismic Ghosts

I wouldn't describe myself as lacking in confidence, but I would just say that – the ghosts you chase you never catch.
John Malkovich (1953– ), American actor, producer and director

5.2.1  Broadband Seismic Towed-Streamer Methods

Conventional marine seismic acquisition techniques record data over a useable frequency range of typically 8 to 80 Hz. A seismic section with this bandwidth can resolve reflectors separated by about 20–25 ms. This resolution is adequate for mapping medium to large geological structures with simple stratigraphy. However, in areas having complex geological features, conventional data hide the details required to fully assess the exploration potential.

One of the major factors hampering marine seismic resolution is the ghost effect – the result of reflections from the sea surface. Ghost reflections interfere constructively or destructively with the primary reflections, thereby limiting useable bandwidth and data integrity due to notches at particular frequencies. The notch frequencies depend on the respective depths of the source and receiver. For example, a depth of 6m produces notches at 0 and 125 Hz, while a 20m depth produces notches at 0, 37.5, 75, and 112.5 Hz (see Figure 5.12). Some control can be exercised in acquisition by varying the depth of the source and streamers, but the full bandwidth remains compromised.

Acquisition and processing solutions to address the receiver ghost problem were introduced in the mid-1950s, with significant technological progress having been made since the late 2000s. Major developments include the hydrophone-geophone (P-Z) streamer; over/under streamers; the slanted streamer; GeoStreamer (P-Z); BroadSeis (variants of the slanted streamer); and the multicomponent streamer, IsoMetrix.

Hydrophones measure pressure (P) changes, while geophones and accelerometers are sensitive to particle motion. The hydrophone is omni-directional and measures the sum of upgoing and downgoing pressure waves. The vertically oriented geophone (Z) has directional sensitivity and measures their difference.

Ghosts have always held some degree of fascination, thanks to hundreds of books and movies devoted to the subject. One of the scariest films of all time is The Exorcist, a 1973 horror film which deals with the demonic possession of a young girl and her mother's desperate attempts to win back her daughter through an exorcism. Ghosts are also present in seismic recordings and must similarly be exorcised.

To understand how geophysical priests exorcise seismic ghosts, either through developing sensor technologies or new data processing methods, you need to understand what ghosts are and how they band-limit seismic data. Like John Malkovich, geophysicists do not lack confidence, but we believe our profession may never be fully successful in catching and exorcising the seismic ghosts. The real-world problem is too tough. Here, we give you the history.

Country Life Magazine

Figure 5.6: The ghostly Brown Lady of Raynham Hall, caught in a (possibly not authentic) photograph in 1936.
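The notch frequencies quoted above follow from a simple formula: for a sensor (or source) at depth z in water of velocity c, the ghost delay at vertical incidence is τ = 2z/c and notches fall at fn = n·c/(2z). A minimal sketch (our own illustration; the depths are the example values from the text):

```python
def notch_frequencies(depth_m, f_max=150.0, c=1500.0):
    """Ghost notch frequencies fn = n*c/(2z) up to f_max, vertical incidence."""
    spacing = c / (2.0 * depth_m)
    return [n * spacing for n in range(int(f_max / spacing) + 1)]

print(notch_frequencies(6.0))   # [0.0, 125.0]           -> shallow tow
print(notch_frequencies(20.0))  # [0.0, 37.5, 75.0, 112.5] -> deep tow
```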

USPTO

5.2.2  Significant Developments

By 1956, Haggerty from Texas Instruments had patented methods for cancelling the ghost reflections and reverberations in the water layer by deploying hydrophones and geophones, and over/under streamers. His analysis was based on the theory of standing waves. Today, history shows that Haggerty was 50 years ahead of his time.

Pavey and Pearson from Sonic Engineering Company described in 1966 how the frequency components that are cancelled on hydrophone recordings due to the ghost can be replaced by the use of geophones. However, it was found that the combination of P and Z signals tended to degrade the S/N ratio of the lower frequencies in the seismic band. The Z-sensor had high noise caused by the specific mounting of the sensor and the rotation of the towed cable. Berni, working for Shell Oil (1982–85), developed the geophone principle further. The first successful P-Z marine streamer, GeoStreamer, developed by PGS, was described by Vaage et al. (2005). In addition to the deghosting advantage, the GeoStreamer reduces weather noise and increases acquisition efficiency by extending the weather acquisition window.

Figure 5.7: Drawing from Pavey and Pearson (1966), who proposed combining hydrophone and geophone measurements to attenuate the effect of ghost reflections from the sea surface.

Figure 5.8: Hydrophone and geophone measurements at 13m depth with amplitude spectra, from Vaage et al. (2005). The noisy low-frequency part of the geophone signal (53) is recalculated from the recorded pressure signal below ~18 Hz using the Z-P model and merged with the non-noisy geophone signal before deghosting.

In 1976, Parrack from Texaco patented the use of streamers that are spaced apart vertically to cancel the downgoing ghost signals from the sea surface. The first practical 2D applications of the over/under streamer method started in 1984 in the North Sea, driven by ideas from Sønneland in Geco. The method was introduced as a means of reducing weather downtime by deploying two streamers on top of each other at large depths, such as 18m and 25m, to minimise the effect of swell noise. In addition, it allowed deghosting. The over/under acquisition, however, had limited application during that period due to deficiencies in marine acquisition technology related to lack of streamer control in the vertical and horizontal planes. With the introduction of new marine acquisition technology with accurate positioning and advanced streamer control, the over/under method was applied during the mid-2000s, mainly for 2D applications.

Figure 5.9: Drawing from Parrack (1976), who proposed using over/under streamers to cancel the downgoing ghost signal. The two upper traces show that the ghost (45) arrives first on the upper cable (35) and then on the lower cable (36). By aligning the ghost arrivals and subtracting, the trace (51) is ghost-free.

Slant streamer marine acquisition was first proposed by Ray and Moore (1982) from Fairfield Industries. The novel idea of this method was to have variable receiver depths along streamers and, inherently, variable ghosts from receiver to receiver, and to take advantage of this in the stacking process. However, slant streamer acquisition was not successful at that time due to inadequate data processing algorithms, particularly for the ghost-removal process. Today, variants of the slant streamer idea have been implemented by CGGVeritas (BroadSeis) and by WesternGeco (ObliQ). The concept of multicomponent towed streamers was introduced by Robertsson et al. (2008).
The system, called IsoMetrix, measures pressure with hydrophones, and particle acceleration in y- and z-directions with micro electromechanical systems (MEMS) accelerometers. Based on these measurements, cross-line wavefield reconstruction and deghosting can be performed.

5.2.3  Ghosts

The effect of the sea or land surface is a well-known obstacle in seismic exploration and has been addressed since the outset of reflection seismology (Leet, 1937). Van Melle and Weatherburn (1953) dubbed the reflections from energy initially reflected above the level of the source, by optical analogy, 'ghosts'.

Figure 5.10: Drawing from Ray and Moore (1982), who proposed a ‘high resolution, high penetration marine seismic stratigraphic system’ where a slanted cable gathers seismic reflections so that the primary and ghost reflections from a common interface are gradually spaced apart.


Lasse Amundsen

Figure 5.11: A plane wave with angle ϑ to the surface is incident at the receiver (red circle). The ghost is delayed by the time τ = (2z/c) cos ϑ, where z is the receiver depth. The rays that are normal to the plane wave denote the direction of the wave.


Lindsey (1960) presented a ghost removal, or deghosting, solution by observing that a downgoing source signal of unit amplitude followed by a ghost with time lag τ0 = 2z/c, represented in the frequency domain by the function G = 1 + r0 exp(iωτ0), can be eliminated theoretically by applying the inverse filter D = 1/G to the data: DG = 1. Here, r0 is the reflection coefficient at the overlying boundary, k = ω/c is the wavenumber, ω = 2πf is the circular frequency, f is the frequency, c is the propagation velocity, and z is the source depth.

In marine seismic surveying, the time-domain pressure pulse that is emitted by the single airgun in the vertical direction is called the pressure signature. The pressure pulse that travels upward from the source is reflected downward at the sea surface and joins the initially downward-travelling pressure pulse. This delayed pulse, reflected at the sea surface, is called the source ghost. On the receiver side, too, the sea surface acts as an acoustic mirror, causing receiver 'ghost' effects in recorded seismic data. While the reflections from the subsurface move upward at the receiver, the receiver ghosts end their propagation moving downward at the receiver.
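To illustrate Lindsey's idea numerically, the sketch below (our own minimal illustration; the depth and stabilisation constant are assumptions) evaluates the ghost function G(f) = 1 + r0·exp(iωτ0) for a 6m depth and applies a stabilised inverse filter D = G*/(|G|² + ε). Stabilisation is needed because, with r0 = −1, G vanishes exactly at the notch frequencies, so the ideal inverse 1/G would blow up there.

```python
import numpy as np

C = 1500.0        # speed of sound in water, m/s
Z = 6.0           # depth, m (assumed example value)
R0 = -1.0         # sea-surface reflection coefficient
EPS = 1e-3        # stabilisation constant for the inverse filter

f = np.linspace(0.0, 250.0, 2001)         # frequency axis, Hz
omega = 2.0 * np.pi * f
tau0 = 2.0 * Z / C                         # ghost time lag, vertical incidence

G = 1.0 + R0 * np.exp(1j * omega * tau0)   # ghost function
D = np.conj(G) / (np.abs(G)**2 + EPS)      # stabilised inverse (deghosting) filter

# Notches occur where |G| = 0, i.e. at fn = n*C/(2*Z): 0, 125, 250 Hz for z = 6 m
n = np.arange(0, int(f.max() * tau0) + 1)
print("notch frequencies (Hz):", n / tau0)

# D*G is ~1 away from the notches, but ~0 at the notches themselves:
# the notch frequencies cannot be recovered by the inverse filter alone.
print("max |D*G - 1|:", np.abs(D * G - 1.0).max())
```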

5.2.4  Ghost Effect on P

Figure 5.13: Receiver ghost responses for hydrophone and geophone at 18.75m depth. The geophone has its maximum response (+6 dB) where the hydrophone has a notch, and vice versa. For the low frequencies, the real geophone signal is too noisy, and deghosting is achieved in processing using the geophone-hydrophone model.


Lasse Amundsen

Figure 5.12: Ghost responses that modulate pressure recordings for deep tow at 20m and shallow tow at 6m. The ghost amplifies some frequencies (amplitude >0 dB) and attenuates other frequencies (amplitude <0 dB).
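The complementary behaviour in Figures 5.12 and 5.13 follows directly from the ghost functions. With a sea-surface reflection coefficient of −1, the pressure (hydrophone) ghost is G_P = 1 − exp(iωτ), while the vertical-geophone ghost is G_Z = 1 + exp(iωτ), and their sum is exactly 2, i.e. ghost-free. The sketch below (a minimal illustration with assumed values, not any vendor's processing flow) verifies this for an 18.75m receiver depth, where the hydrophone notches fall at multiples of 40 Hz.

```python
import numpy as np

C = 1500.0          # water velocity, m/s
Z = 18.75           # receiver depth, m
tau = 2.0 * Z / C   # ghost delay at vertical incidence: 0.025 s

f = np.linspace(0.1, 120.0, 1200)
omega = 2.0 * np.pi * f

G_P = 1.0 - np.exp(1j * omega * tau)   # hydrophone (pressure) ghost
G_Z = 1.0 + np.exp(1j * omega * tau)   # vertical geophone ghost

dB = lambda x: 20.0 * np.log10(np.abs(x))

# Hydrophone notches at n*C/(2*Z) = 0, 40, 80, 120 Hz; geophone peaks (+6 dB) there
for fn in (40.0, 80.0):
    i = np.argmin(np.abs(f - fn))
    print(f"{fn:5.1f} Hz: |G_P| = {dB(G_P[i]):6.1f} dB, |G_Z| = {dB(G_Z[i]):5.1f} dB")

# P-Z summation removes the ghost entirely: G_P + G_Z = 2 at all frequencies
print("max |G_P + G_Z - 2| =", np.abs(G_P + G_Z - 2.0).max())
```

In practice, as the Figure 5.13 caption notes, the measured Z signal is too noisy at the low frequencies, so the summation is carried out with a model-based low-frequency replacement rather than the raw geophone data.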