US-12626353-B2 - Machine learning-based defect analysis reporting and tracking

US 12626353 B2

Abstract

Methods, systems, and computer program products are provided for defect detection in industrial inspections. In one embodiment, captured images are ingested from a robotic platform equipped with imaging sensors. The ingested images are analyzed using an image analysis pipeline and a plurality of labeled images to generate a plurality of segmentation masks. In an environment shown in at least a subset of the plurality of ingested images, a plurality of environmental conditions are simulated to create an augmented plurality of labeled images. A defect analysis model is then trained with the augmented plurality of labeled images and the plurality of segmentation masks.

Inventors

  • Sunil Manikani
  • Olga Domanova
  • Krishna Kumar Narayanan Nair
  • Federico Sporleder
  • Ali Rezaei
  • Nasser Ghorbani

Assignees

  • SCHLUMBERGER TECHNOLOGY CORPORATION

Dates

Publication Date
2026-05-12
Application Date
2023-12-15

Claims (20)

  1. A method for defect detection in industrial inspections, the method comprising: ingesting a plurality of images captured from a robotic platform equipped with imaging sensors to obtain a plurality of ingested images, wherein each of the plurality of ingested images depicts an object having a defect, and the robotic platform is a drone or an unmanned autonomous vehicle; analyzing the plurality of ingested images using an image analysis pipeline to generate a plurality of segmentation masks and a plurality of labeled images; simulating a plurality of environmental conditions in an environment shown in at least a subset of the plurality of ingested images to create an augmented plurality of labeled images, wherein the augmented plurality of labeled images comprises a plurality of different levels of added noise simulating different levels of visual interference in the environment; and training a defect analysis model with the augmented plurality of labeled images and the plurality of segmentation masks to identify a future defect in an industrial environment having varying visual interference.
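The pipeline of claim 1 (ingest, segment, augment with graded noise, assemble training data) can be illustrated with a minimal NumPy sketch. All function names are hypothetical, and the thresholding "segmentation" is a toy stand-in for the claimed image analysis pipeline, not the patented implementation:

```python
import numpy as np

def ingest_images(raw_frames):
    """Hypothetical ingestion step: normalize frames from the robotic platform."""
    return [np.asarray(f, dtype=np.float32) / 255.0 for f in raw_frames]

def segment(image, threshold=0.5):
    """Toy stand-in for the image analysis pipeline: a binary segmentation mask."""
    return (image > threshold).astype(np.uint8)

def augment_with_noise(image, noise_levels=(0.0, 0.05, 0.1)):
    """Simulate different levels of visual interference by adding Gaussian
    noise at several standard deviations, as the claim describes."""
    rng = np.random.default_rng(0)
    return [np.clip(image + rng.normal(0.0, sigma, image.shape), 0.0, 1.0)
            for sigma in noise_levels]

# Assemble training data: each augmented image is paired with the mask
# derived from the clean image, so the label survives augmentation.
frames = [np.full((4, 4), 200, dtype=np.uint8)]
images = ingest_images(frames)
masks = [segment(img) for img in images]
training_pairs = [(aug, masks[0]) for aug in augment_with_noise(images[0])]
```

A defect analysis model (the claims do not fix an architecture) would then be trained on `training_pairs`, seeing the same labeled content under several interference levels.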
  2. The method of claim 1, wherein ingesting the plurality of images further comprises: capturing the plurality of images of operations in the industrial environment from the robotic platform.
  3. The method of claim 1, wherein the robotic platform is integrated with a cloud-based server for real-time data processing and storage.
  4. The method of claim 1, wherein analyzing the plurality of images further comprises: automatically or semi-automatically labeling the plurality of images by the image analysis pipeline that is configured to recognize and classify the defect.
  5. The method of claim 1, wherein simulating the plurality of environmental conditions further comprises: adjusting a plurality of parameters in the plurality of ingested images to mimic a plurality of operational conditions; wherein the plurality of environmental conditions comprise lighting variations, weather, wear of a camera, movement of the robotic platform, or a combination thereof, and wherein the plurality of environmental conditions comprise the levels of noise simulating different levels of visual interference.
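Claim 5's parameter adjustment can be sketched as a grid over two of the named conditions, lighting variation (a brightness shift) and visual interference (added noise). This is an illustrative assumption about how "adjusting a plurality of parameters" might be realized; the function name and parameter choices are not from the patent:

```python
import numpy as np

def simulate_conditions(image, brightness_shifts=(-0.2, 0.0, 0.2),
                        noise_sigmas=(0.0, 0.05, 0.1), seed=0):
    """Adjust parameters of an ingested image to mimic operational conditions:
    each output variant combines one lighting shift with one noise level."""
    rng = np.random.default_rng(seed)
    augmented = []
    for shift in brightness_shifts:
        for sigma in noise_sigmas:
            noisy = image + shift + rng.normal(0.0, sigma, image.shape)
            augmented.append(np.clip(noisy, 0.0, 1.0))
    return augmented

# One mid-gray 8x8 image yields a 3x3 grid of condition variants.
variants = simulate_conditions(np.full((8, 8), 0.5, dtype=np.float32))
```

Other claimed conditions (weather, camera wear, platform movement) would slot in as additional parameter axes, e.g. blur kernels for motion.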
  6. The method of claim 1, wherein training the defect analysis model with the augmented plurality of labeled images reduces overfitting of the defect analysis model.
  7. The method of claim 1, further comprising: continually training the defect analysis model as a plurality of new images are ingested, the plurality of new images comprising at least one new defect.
  8. The method of claim 1, further comprising: receiving an input from a user defining the defect, wherein the defect comprises a leak, a crack, a structural weakness, or a combination thereof, and the input comprises one or more orientations, one or more positions, and one or more sizes of the defect; and updating defect identification and classification algorithms of the defect analysis model based on the input from the user.
  9. The method of claim 1, further comprising: applying a super resolution technique to the augmented plurality of labeled images.
  10. The method of claim 9, wherein the super resolution technique comprises processing the augmented plurality of labeled images to increase a resolution from the plurality of ingested images.
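The resolution increase of claims 9-10 can be demonstrated with the simplest possible placeholder, nearest-neighbor upsampling; a production system would more plausibly use a learned super-resolution model, which the claims do not specify. The function name and the choice of upsampling method are assumptions:

```python
import numpy as np

def upscale_nearest(image, factor=2):
    """Placeholder for a super resolution step: nearest-neighbor upsampling
    via a Kronecker product, repeating each pixel in a factor x factor block.
    Demonstrates only the resolution increase, not a learned SR technique."""
    return np.kron(image, np.ones((factor, factor), dtype=image.dtype))

low_res = np.arange(4, dtype=np.float32).reshape(2, 2)
high_res = upscale_nearest(low_res, factor=3)  # 2x2 -> 6x6
```

Applying this to each augmented labeled image yields outputs at a higher resolution than the ingested originals, as claim 10 requires.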
  11. The method of claim 1, further comprising: ingesting a second set of images captured from a second robotic platform, wherein the second robotic platform is a drone or an unmanned autonomous vehicle; processing the second set of images with the defect analysis model to determine an identified defect, comprising: analyzing the second set of images using the image analysis pipeline to generate a second plurality of segmentation masks and a second plurality of labeled images; simulating the plurality of environmental conditions in a second environment shown in at least a second subset of the second set of images to create a second augmented plurality of labeled images; splitting the augmented plurality of labeled images and the second augmented plurality of labeled images into quadrilles; and determining statistics about the quadrilles; and generating an analytical report from the identified defect, wherein the analytical report includes at least one statistic of the determined statistics about the quadrilles of the identified defect.
  12. The method of claim 11, wherein generating the analytical report includes compiling data on a type, a size, and a location of the identified defect.
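The reporting path of claims 11-12 can be sketched as follows. Here the claims' "quadrilles" are interpreted as quadrant tiles of the defect mask, which is an assumption on our part; the tile names, the report fields, and the pixel-count statistic are likewise illustrative:

```python
import numpy as np

def split_into_quadrants(mask):
    """Split a defect mask into four equal tiles ('quadrilles' in the claims,
    interpreted here as quadrants -- an assumption)."""
    h, w = mask.shape
    return {
        "top_left": mask[: h // 2, : w // 2],
        "top_right": mask[: h // 2, w // 2:],
        "bottom_left": mask[h // 2:, : w // 2],
        "bottom_right": mask[h // 2:, w // 2:],
    }

def build_report(mask, defect_type="crack"):
    """Compile a minimal analytical report: type, size, and location of the
    identified defect (claim 12), plus per-quadrant pixel statistics (claim 11)."""
    tiles = split_into_quadrants(mask)
    stats = {name: int(tile.sum()) for name, tile in tiles.items()}
    location = max(stats, key=stats.get)  # quadrant with the most defect pixels
    return {"type": defect_type, "size_px": int(mask.sum()),
            "location": location, "quadrant_stats": stats}

mask = np.zeros((6, 6), dtype=np.uint8)
mask[4:, 4:] = 1  # synthetic defect in the bottom-right quadrant
report = build_report(mask)
```

The resulting dictionary carries at least one quadrant statistic into the report, matching the claim's requirement that the analytical report include a statistic about the quadrilles of the identified defect.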
  13. A system for defect detection in industrial inspections, the system comprising: a robotic platform equipped with imaging sensors, wherein the robotic platform is a drone or an unmanned autonomous vehicle; a computer processor; and a non-transitory computer-readable storage medium storing program code, which when executed by the computer processor, performs a plurality of operations comprising: ingesting a plurality of images captured from the robotic platform to obtain a plurality of ingested images, wherein each of the plurality of ingested images depicts an object having a defect; analyzing the plurality of ingested images using an image analysis pipeline to generate a plurality of segmentation masks and a plurality of labeled images; simulating a plurality of environmental conditions in an environment shown in at least a subset of the plurality of ingested images to create an augmented plurality of labeled images, wherein the augmented plurality of labeled images comprises a plurality of different levels of added noise simulating different levels of visual interference in the environment; and training a defect analysis model with the augmented plurality of labeled images and the plurality of segmentation masks to identify a future defect in an industrial environment having varying visual interference.
  14. The system of claim 13, wherein ingesting the plurality of images further comprises: capturing the plurality of images of operations in an industrial environment from the robotic platform.
  15. The system of claim 13, wherein analyzing the plurality of images further comprises: automatically or semi-automatically labeling the plurality of images by the image analysis pipeline that is configured to recognize and classify the defect.
  16. The system of claim 13, wherein simulating the plurality of environmental conditions further comprises: adjusting a plurality of parameters in the plurality of ingested images to mimic a plurality of operational conditions, wherein the plurality of environmental conditions comprise lighting variations, weather, wear of a camera, movement of the robotic platform, or a combination thereof.
  17. The system of claim 13, further comprising: continually training the defect analysis model as a plurality of new images are ingested, the plurality of new images comprising at least one new defect.
  18. The system of claim 13, further comprising: receiving an input from a user defining the defect, wherein the defect comprises a leak, a crack, a structural weakness, or a combination thereof, and the input comprises one or more orientations, one or more positions, and one or more sizes of the defect; and updating defect identification and classification algorithms of the defect analysis model based on the input from the user.
  19. The system of claim 13, wherein the operations further comprise: ingesting a second set of images captured from a second robotic platform; processing the second set of images with the defect analysis model to determine an identified defect, comprising: analyzing the second set of images using the image analysis pipeline to generate a second plurality of segmentation masks and a second plurality of labeled images; simulating the plurality of environmental conditions in a second environment shown in at least a second subset of the second set of images to create a second augmented plurality of labeled images; splitting the augmented plurality of labeled images and the second augmented plurality of labeled images into quadrilles; and determining statistics about the quadrilles; and generating an analytical report from the identified defect, wherein the analytical report includes at least one statistic of the determined statistics about the quadrilles of the identified defect.
  20. A non-transitory computer-readable storage medium storing program code, which when executed by a computer processor, performs a plurality of operations comprising: ingesting a plurality of images captured from a robotic platform to obtain a plurality of ingested images, wherein each of the plurality of ingested images depicts an object having a defect, and the robotic platform is a drone or an unmanned autonomous vehicle; analyzing the plurality of ingested images using an image analysis pipeline to generate a plurality of segmentation masks and a plurality of labeled images; simulating a plurality of environmental conditions in an environment shown in at least a subset of the plurality of ingested images to create an augmented plurality of labeled images, wherein the augmented plurality of labeled images comprises a plurality of different levels of added noise simulating different levels of visual interference in the environment; and training a defect analysis model with the augmented plurality of labeled images and the plurality of segmentation masks to identify a future defect in an industrial environment having varying visual interference.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a nonprovisional application of, and thereby claims benefit of, U.S. Provisional Application No. 63/387,529, filed on Dec. 15, 2022, which is incorporated herein by reference in its entirety.

BACKGROUND

In the oil and gas industry, the terms “upstream,” “midstream,” and “downstream” refer to the various stages of the process, from extracting raw materials to delivering the final products to consumers.

Upstream involves the exploration and production of crude oil and natural gas. Activities in this stage include searching for potential underground or underwater oil and gas fields, drilling exploratory wells, and then drilling and operating the wells that recover and bring the crude oil or raw natural gas to the surface. Upstream is often known for its elevated risk and high investment, as well as for its technological innovation in exploration and extraction techniques.

Midstream refers to the transportation, storage, and processing of oil and gas. After extraction, the raw materials are transported to refineries, which can be done through pipelines, tanker ships, or rail. Storage facilities are also considered part of the midstream sector. Processing might include the refining of crude oil or the purifying of natural gas. The midstream sector serves as the link between the remote locations of crude oil and gas reserves and the downstream sector.

Downstream involves the refining of petroleum crude oil and the processing and purifying of raw natural gas, as well as the marketing and distribution of products derived from crude oil and natural gas. The downstream industry provides consumers with a wide range of finished products, including gasoline, diesel oil, jet fuel, natural gas, plastics, and a variety of other energy sources and materials. This sector is characterized by its focus on product distribution and retailing aspects.

Each of these sectors has its own unique challenges and focuses, from the high-risk, high-investment world of exploration in the upstream sector to the process- and marketing-intensive activities of the downstream sector. Industrial inspections for maintaining the safety and efficiency of various facilities have traditionally been performed manually. This approach, however, poses challenges in terms of accessibility, accuracy, and efficiency. In particular, industries like oil and gas may need more robust and safe inspection methods, as operations in remote or hazardous environments are commonplace. Recent advancements in robotics and image processing have provided opportunities to improve these inspections.

Previous attempts at automating industrial inspections have included the use of drones or wheeled robots equipped with cameras. Robots equipped with cameras and sensors can access difficult areas but may lack the sophisticated software to accurately identify defects such as leaks, cracks, or structural weaknesses. These systems, while offering improved access to challenging areas, still largely depend on human operators for image analysis. Current systems in the market mainly rely on basic image capture followed by manual analysis, which is time-consuming and prone to human error. The use of standard convolutional neural networks (CNNs) in some systems has improved defect recognition, but these models are often limited by the quality and diversity of the training data, especially under varied environmental conditions.

SUMMARY

In general, embodiments are directed to methods, systems, and computer program products for defect detection in industrial inspections. In one embodiment, a method for defect detection includes ingesting images captured from a robotic platform equipped with imaging sensors. The method includes analyzing the ingested images using an image processing pipeline to generate a plurality of segmentation masks and a plurality of labeled images. The method additionally includes simulating a plurality of environmental conditions in an environment shown in at least a subset of the plurality of ingested images to create an augmented plurality of labeled images. The method further includes training a defect analysis model with the augmented plurality of labeled images and the plurality of segmentation masks.

Other aspects of the invention will be apparent from the following description and the appended claims.

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1 shows a defect analysis system, in accordance with one or more embodiments. FIG. 2 shows an example flow diagram in accordance with one or more embodiments. FIGS. 3A and 3B show a computing system, in accordance with one or more embodiments. FIG. 4 shows a flow chart for defect detection in industrial applications, in accordance with one or more embodiments. FIGS. 5A-5H show a set of processed images demonstrating the steps of an image analysis pipeline, in accordance with one or more embodiments. FIGS. 6A, 6B, 6C, and 6D show a series of photographs of an oil stain, in a