In seismic interpretation, pixel-level labels of various rock structures can be time-consuming and expensive to obtain. As a result, a non-trivial quantity of unlabeled data is often left unused simply because traditional deep learning methods rely on access to fully labeled volumes. To rectify this problem, contrastive learning approaches have been proposed that use a self-supervised methodology to learn useful representations from unlabeled data. However, traditional contrastive learning approaches are based on assumptions from the natural image domain that do not make use of seismic context. To incorporate this context within contrastive learning, we propose a novel positive pair selection strategy based on the position of slices within a seismic volume. We show that the representations learnt by our method outperform a state-of-the-art contrastive learning methodology in a semantic segmentation task.
During exploration for oil and gas, seismic acquisition technology outputs a large amount of data in order to obtain 2D and 3D images of the surrounding subsurface layers. Despite the potential advantages that come with access to this huge quantity of data, processing and subsequent interpretation remains a major challenge for oil and gas companies (Mohammadpoor and Torabi, 2020). Geophysicists interpret seismic volumes in order to identify relevant rock structures in regions of interest. Conventionally, these structures are identified and labeled by trained interpreters, but this process can be expensive and labor intensive. The result is a large amount of unlabeled data alongside a smaller amount that has been fully interpreted. To overcome these issues, work has gone into using deep learning (Di et al., 2018) to automate the interpretation process. However, a major problem with any conventional deep learning setup is its dependence on access to a large pool of labeled training data, and as discussed, such data is scarce in the seismic context. To overcome this reliance on labeled data and leverage the potentially larger amount of unlabeled data, contrastive learning (Le-Khac et al., 2020) has emerged as a promising research direction. The goal of contrastive learning approaches is to learn distinguishing features of data without needing access to labels. This is done through algorithms that learn to associate images with similar features (positives) and disassociate images with differing features (negatives). Traditional approaches, such as (Chen et al., 2020), do this by taking augmentations of a single image and treating these augmentations as the positives, while all other images in the batch are treated as negatives. These identified positive and negative pairs are input into a contrastive loss that minimizes the distance between positive pairs and maximizes the distance between negative pairs in a lower-dimensional space. These approaches work well within the natural image domain, but may exhibit certain flaws within the context of seismic imaging.
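To make this conventional setup concrete, the following is a minimal PyTorch sketch (not taken from any of the cited works) of an NT-Xent-style contrastive loss in the spirit of (Chen et al., 2020): two augmented views of each image form a positive pair, and every other image in the batch acts as a negative. The function name and temperature value are illustrative assumptions.

```python
# Minimal sketch of a SimCLR-style NT-Xent contrastive loss (illustrative only).
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, D) projections of two augmented views of the same N images."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                         # cosine similarity logits
    sim.fill_diagonal_(float('-inf'))                     # exclude self-similarity
    # The positive for view i is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Example usage with random projections standing in for encoder outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = nt_xent_loss(z1, z2)
```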
For example, naive augmentations could potentially distort the textural elements that characterize different classes of rock structures. A better approach for identifying positive pairs of images is to consider the position of instances within the volume. We can observe in Figure 1 that seismic images closer to each other in a volume exhibit more structural components in common than those that are further apart. Therefore, images that are close to each other within a volume share similar features that a contrastive loss can distinguish from the features of other classes of rocks. In this work, we propose to take advantage of the correlations between images close to each other in a volume through a contrastive learning methodology. Specifically, during training we partition a seismic volume into smaller subsets and assign the slices of each subset the same volume-based label. We utilize these volume-based labels to train an encoder network with a supervised contrastive loss (Khosla et al., 2020), as sketched below. Effectively, the model is trained to associate images that are close together in the volume and disassociate images that are further apart. From the representation space learnt by training in this manner, we fine-tune an attached semantic segmentation head using the available ground truth labels.
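The sketch below illustrates, under our own assumptions (PyTorch, a hypothetical n_subsets parameter, and function names of our choosing; this is not the authors' released code), how such volume-based labels could be generated and fed into a supervised contrastive loss (Khosla et al., 2020), where slices sharing a pseudo-label are treated as positives.

```python
# Illustrative sketch: volume-position pseudo-labels + supervised contrastive loss.
import torch
import torch.nn.functional as F

def volume_position_labels(n_slices, n_subsets):
    """Split slice indices into contiguous subsets; each slice gets its subset index."""
    return torch.arange(n_slices) * n_subsets // n_slices

def supervised_contrastive_loss(z, labels, temperature=0.1):
    """Supervised contrastive loss: samples sharing a label are positives (Khosla et al., 2020)."""
    z = F.normalize(z, dim=1)
    logits = z @ z.t() / temperature
    logits = logits - logits.max(dim=1, keepdim=True).values.detach()   # numerical stability
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    exp_logits = logits.exp().masked_fill(eye, 0.0)
    log_prob = logits - exp_logits.sum(dim=1, keepdim=True).log()       # log-softmax over non-self pairs
    n_pos = pos_mask.sum(dim=1)
    valid = n_pos > 0                                                   # anchors with >= 1 positive
    loss = -(log_prob * pos_mask.float()).sum(dim=1)[valid] / n_pos[valid]
    return loss.mean()

# Example: 400 inline slices grouped into 10 position-based pseudo-classes.
slice_labels = volume_position_labels(400, 10)
batch_idx = torch.randint(0, 400, (32,))      # slices sampled for one batch
z = torch.randn(32, 128)                      # embeddings the encoder would produce
loss = supervised_contrastive_loss(z, slice_labels[batch_idx])
```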
The contributions of this paper are:
1. We introduce contrastive learning within the context of a seismic semantic segmentation task.
2. We introduce a novel positive pair selection strategy for seismic data based on generated volumetric position labels.
Deep learning was originally applied to seismic interpretation in a supervised setting (Di et al., 2018), where the authors performed salt-body delineation. Further supervised work included semantic segmentation using deconvolution networks (Alaudah et al., 2019a). Deep learning was also utilized for the task of acoustic impedance estimation (Mustafa et al., 2020; Mustafa and AlRegib, 2020). However, it was quickly recognized that labeled data is expensive and that training on small datasets leads to poor generalization of seismic models. For this reason, the research focus shifted to methods with less dependence on access to a large quantity of labeled data. This includes (Alaudah and AlRegib, 2017; Alaudah et al., 2019b, 2017), where the authors introduced various methods based on weak supervision of structures within seismic images. Other work introduced semi-supervised methodologies such as (Alfarraj and AlRegib, 2019) for the task of elastic impedance inversion. (Lee et al., 2018) introduced a labeling strategy that made use of well logs alongside seismic data. (Shafiq et al., 2018a) and (Shafiq et al., 2018c) introduced the idea of leveraging learnt features from the natural image domain. Related work (Shafiq et al., 2022) and (Shafiq et al., 2018b) showed how saliency could be utilized within seismic interpretation. More recent work involves using strategies such as explainability (Prabhushankar et al., 2020) and learning dynamics analysis (Benkert et al., 2021). Despite the potential of pure self-supervised approaches, there is not a significant body of work within the seismic domain. Work such as (Aribido et al., 2020) and (Aribido et al., 2021) showed how structures can be learnt in a self-supervised manner through manipulation of a latent space. (Soliman et al., 2020) created a self- and semi-supervised methodology for seismic semantic segmentation. More recent work (Huang et al., 2022) introduced a strategy to reconstruct missing data traces. The most similar work to ours occurs within the medical field, where (Zeng et al., 2021) uses a contrastive learning strategy based on slice positions in an MRI and CT setting. Our work differs from these due to the introduction of a contrastive learning strategy based on volume positions within a seismic setting.

Figure 1: Seismic images from different regions of the Netherlands F3 Block. Images that come from adjacent slices in the volume have more structural features in common than those that are far apart in the volume.