CN-121982691-A - Ship draught dynamic detection method and system based on multi-mode fusion

CN121982691A

Abstract

The invention relates to the technical field of ship draught measurement and provides a ship draught dynamic detection method and system based on multi-mode fusion. The method comprises the following steps: acquiring multi-mode ship images and/or video to obtain a ship visible-light image and a ship near-infrared image; performing feature-level fusion on the ship visible-light image and the ship near-infrared image to obtain a fused image; performing key-region detection on the fused image by using a pretrained YOLOv-based waterline detection model, and marking and/or cropping the key waterline region in the fused image to obtain a target image; performing hull and water-body segmentation on the corrected target image by using a SegFormer network to which a NAM attention mechanism has been added, and obtaining hull character features and waterline pixel features through edge detection; and constructing a visual ranging model containing character coordinates based on the hull character features, and inputting the waterline pixel features into the visual ranging model to obtain a ship draught detection result.
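
The feature-level fusion step above can be illustrated with a minimal sketch. This is not the Transformer-based fusion model of the patent; it is a simple weighted combination, assuming aligned float images in [0, 1], with the near-infrared image treated as the dominant modality and the visible-light image as the verification modality. The function name and weights are illustrative placeholders.

```python
import numpy as np

def fuse_features(nir, vis, w_nir=0.6, w_vis=0.4):
    """Weighted feature-level fusion of a near-infrared image (dominant
    modality) and a visible-light image (verification modality).
    Both inputs are float arrays in [0, 1] with identical shapes."""
    assert nir.shape == vis.shape, "modalities must be registered/aligned"
    fused = w_nir * nir + w_vis * vis
    return np.clip(fused, 0.0, 1.0)

# Toy example: a bright NIR hull region and a darker visible image.
nir = np.full((4, 4), 0.8)
vis = np.full((4, 4), 0.3)
fused = fuse_features(nir, vis)
print(round(float(fused[0, 0]), 3))  # 0.6
```

In the patent's pipeline this stand-in would be replaced by the learned fusion model, but the interface is the same: two registered single-modality images in, one fused image out.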

Inventors

  • CHENG LIANGLUN
  • ZHOU HAI
  • ZHAN XUHONG
  • LIN JUNKAI
  • LIN FUZHOU
  • GAN WEI
  • YIN YUNPENG
  • WANG ZHUOWEI

Assignees

  • Guangdong University of Technology (广东工业大学)

Dates

Publication Date
2026-05-05
Application Date
2025-12-18

Claims (10)

  1. A ship draught dynamic detection method based on multi-mode fusion, characterized by comprising the following steps: acquiring multi-mode ship images and/or video to obtain a ship visible-light image and a ship near-infrared image; performing feature-level fusion on the ship visible-light image and the ship near-infrared image to obtain a fused image; performing key-region detection on the fused image by using a pretrained YOLOv-based waterline detection model, and marking and/or cropping the key waterline region in the fused image to obtain a target image; performing hull and water-body segmentation on the image-corrected target image by using a SegFormer network to which a NAM attention mechanism has been added, and obtaining hull character features and waterline pixel features through edge detection; and constructing a visual ranging model containing character coordinates based on the hull character features, and inputting the waterline pixel features into the visual ranging model to obtain a ship draught detection result.
  2. The ship draught dynamic detection method according to claim 1, wherein the feature-level fusion of the ship visible-light image and the ship near-infrared image includes: preprocessing the ship visible-light image and the ship near-infrared image; respectively extracting texture features, edge contour features and/or gradient features of the target ship from the ship visible-light image and the ship near-infrared image; and taking the texture features, edge contour features and/or gradient features extracted from the ship near-infrared image as dominant features and those extracted from the ship visible-light image as verification features, and jointly inputting them into a Transformer-based fusion model for feature fusion to obtain a fused image.
  3. The ship draught dynamic detection method according to claim 1, wherein, for the waterline detection model, the loss function comprises an SIoU loss function expressed as: L_SIoU = 1 − IoU + λ·Ω; wherein λ is the weighted proportion of the shape cost in the loss function, and the expression of the shape cost Ω is: Ω = Σ_{t∈{w,h}} (1 − e^{−ω_t})^θ; wherein θ defines a fixed value for the shape cost, t is the dimension currently traversed, w is the width of the prediction box, and h is the height of the prediction box; the IoU is the object measurement standard, expressed as: IoU = |A ∩ B| / |A ∪ B|; wherein A is the model prediction box and B is the ground-truth box.
  4. The ship draught dynamic detection method according to claim 1, wherein the SegFormer network comprises an encoder module and a decoder module, and the decoder module comprises NAM modules connected in sequence.
  5. The ship draught dynamic detection method according to claim 1, wherein constructing a visual ranging model containing character coordinates based on the hull character features comprises: identifying the hull character features by using the CRAFT text detection algorithm to obtain character coordinates; removing abnormal data in the character coordinates by a statistical method, and obtaining the correspondence between character coordinates and water-level values by random robust regression to obtain the visual ranging model, expressed as: S = k·y + b; wherein k and b are model fitting parameters, y is the pixel coordinate of the edge line between the hull and the water body, and S is the predicted water-level value; the mean of the y values of the waterline pixel features is input into the visual ranging model to obtain the ship draught detection result.
  6. The ship draught dynamic detection method according to any one of claims 1 to 5, further comprising: for the target image in which the key waterline region has been marked and/or cropped, invoking a preset correction algorithm to perform image correction on the target image.
  7. The ship draught dynamic detection method according to claim 6, wherein invoking a preset correction algorithm to perform image correction on the target image comprises at least one of the following: (1) inputting the target image into a DewarpNet-based deep learning model, which predicts and assigns a displacement vector to each pixel in the target image, and correcting each pixel according to its displacement vector to obtain a tilt-corrected target image; (2) invoking a defogging algorithm based on the dark channel prior, and suppressing high-frequency noise through Gaussian bilateral filtering to obtain a noise-reduced target image; (3) invoking an optical flow model to predict the motion trajectories of the waves, the hull and the water body, and correcting each pixel according to the prediction result to obtain a clear target image.
  8. A ship draught dynamic detection system based on multi-mode fusion, applied to the ship draught dynamic detection method according to any one of claims 1 to 7, characterized by comprising: a multi-mode acquisition terminal configured to acquire ship images and/or videos to obtain ship visible-light images and ship near-infrared images; a feature fusion module configured to perform feature-level fusion on the ship visible-light image and the ship near-infrared image to obtain a fused image; a waterline detection module, on which a trained waterline detection model is mounted, configured to perform key-region detection on the fused image and to mark and/or crop the key waterline region in the fused image to obtain a target image; an image correction module configured to judge the complex scene of the ship visible-light image and/or the ship near-infrared image to obtain a complex-scene tag, and to invoke a preset correction algorithm according to the complex-scene tag to perform image correction on the target image; a detection module, on which a SegFormer network with a NAM attention mechanism is mounted, configured to perform hull and water-body segmentation on the image-corrected target image and to obtain hull character features and waterline pixel features through edge detection; and a ranging module configured to construct a visual ranging model containing character coordinates based on the hull character features, and to input the waterline pixel features into the visual ranging model to obtain a ship draught detection result.
  9. A computer device comprising a memory and a processor, the memory having stored therein computer-readable instructions which, when executed by the processor, cause the processor to perform all or part of the steps of the ship draught dynamic detection method according to any one of claims 1 to 7.
  10. A storage medium having stored thereon computer-readable instructions which, when executed by a processor, perform all or part of the steps of the ship draught dynamic detection method according to any one of claims 1 to 7.
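
Claim 3's loss terms can be sketched numerically. The equations in the record were dropped during extraction, so this follows the published SIoU formulation of the IoU term and the shape cost; `iou` and `shape_cost` are illustrative helper names, and `w_gt`, `h_gt` denote the ground-truth box dimensions.

```python
import math

def iou(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2): |A∩B| / |A∪B|."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))   # intersection width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))   # intersection height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def shape_cost(w, h, w_gt, h_gt, theta=4.0):
    """SIoU shape cost: sum over t in {w, h} of (1 - exp(-omega_t))**theta,
    with omega_w = |w - w_gt| / max(w, w_gt), likewise for h.
    theta is the fixed value that tempers attention on the shape term."""
    omega_w = abs(w - w_gt) / max(w, w_gt)
    omega_h = abs(h - h_gt) / max(h, h_gt)
    return (1 - math.exp(-omega_w)) ** theta + (1 - math.exp(-omega_h)) ** theta

box = (0, 0, 10, 10)
print(iou(box, box))               # 1.0 for identical boxes
print(shape_cost(10, 10, 10, 10))  # 0.0 for matching shapes
```

A perfect prediction thus contributes 1 − IoU = 0 and zero shape cost, so the loss vanishes exactly when prediction and ground truth coincide.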
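
The visual ranging model of claim 5 (S = k·y + b, fitted robustly after discarding abnormal character coordinates) can be sketched as below. The patent's random robust regression is stood in for here by an initial least-squares fit followed by a median-absolute-deviation outlier test and a refit; function names and the 3-sigma-equivalent threshold are assumptions.

```python
import numpy as np

def fit_ranging_model(y_pix, s_val, mad_k=3.0):
    """Fit S = k*y + b mapping character pixel rows to water-level values,
    discarding outliers by a median-absolute-deviation (MAD) test on the
    residuals of an initial least-squares fit, then refitting on inliers."""
    k0, b0 = np.polyfit(y_pix, s_val, 1)              # initial fit
    resid = s_val - (k0 * y_pix + b0)
    med = np.median(resid)
    mad = np.median(np.abs(resid - med)) + 1e-12
    keep = np.abs(resid - med) <= mad_k * 1.4826 * mad  # 1.4826: MAD -> sigma
    k, b = np.polyfit(y_pix[keep], s_val[keep], 1)    # refit on inliers
    return k, b

# Character rows at pixel y map to known draught marks; one gross outlier.
y_pix = np.array([100., 150., 200., 250., 300., 350.])
s_val = -0.01 * y_pix + 5.0
s_val[2] += 2.0                                      # corrupted reading
k, b = fit_ranging_model(y_pix, s_val)
waterline_y = np.mean([280., 284., 276.])            # mean y of waterline pixels
print(round(k * waterline_y + b, 3))                 # 2.2
```

The final line mirrors the claim: the mean y of the waterline pixel features is fed through the fitted model to produce the draught reading.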
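
The dark-channel-prior defogging of claim 7(2) rests on one observation: in haze-free regions, some channel in every local patch is near zero, while haze lifts this minimum. A minimal sketch of the dark-channel computation itself (the full defogging pipeline with transmission estimation and bilateral filtering is omitted; the function name and window size are assumptions):

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel dark channel: the minimum over the colour channels within
    a patch x patch local window of an H x W x 3 float image in [0, 1]."""
    h, w, _ = img.shape
    min_rgb = img.min(axis=2)                 # channel-wise minimum
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    out = np.empty((h, w))
    for i in range(h):                        # spatial minimum filter
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

# A hazy (uniform grey) image has a high dark channel everywhere;
# a saturated pure-blue image has a dark channel of zero.
hazy = np.full((5, 5, 3), 0.7)
clear = np.zeros((5, 5, 3))
clear[..., 2] = 1.0
print(dark_channel(hazy)[2, 2], dark_channel(clear)[2, 2])  # 0.7 0.0
```

In the claimed correction step, this map would drive the haze-transmission estimate before the Gaussian bilateral filtering suppresses the remaining high-frequency noise.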

Description

Ship draught dynamic detection method and system based on multi-mode fusion

Technical Field

The invention relates to the technical field of ship draught measurement, in particular to a ship draught dynamic detection method and system based on multi-mode fusion.

Background

The draught of a ship is a core index of its loading state, and the precision of draught detection bears directly on the safety, efficiency and compliance of marine ship transportation. Accurate draught detection is the first line of defense against overloading, loss of stability and capsizing accidents. An overloaded ship has altered hydrodynamic characteristics, degrading controllability and collision-avoidance capability, and overloading is one of the main causes of grounding and collision accidents on inland waterways. Draught data are also the authoritative basis for cargo handover and freight settlement: maximizing cargo capacity within the safety limit directly increases shipping income, while inaccurate draught readings easily cause economic disputes between cargo owner and shipowner and may even lead to port-state supervision and penalties. The International Maritime Organization (IMO) International Convention on Load Lines explicitly requires that load compliance be validated by draught data, and offending vessels face detention, fines and other penalties. In addition, reasonable draught management avoids fuel-oil and cargo-oil leakage caused by grounding or structural damage, reducing pollution of the marine ecology, and adjusting the fore-aft draught difference can improve manoeuvrability, reduce navigation resistance and improve fuel economy.
Current ship draught detection relies mainly on traditional manual observation: the water-gauge scales on both sides of the hull are read with the naked eye, and observation precision is easily degraded by line-of-sight occlusion, wave disturbance, light changes and personnel experience in severe conditions such as storms, night, rain and fog. In addition, manual observation requires boarding or close-range operation from a dinghy, presents safety risks, and lacks objective data recording and traceability. Ship draught detection methods based on deep learning have also been proposed: for example, a Mask R-CNN network segments the ship image, a U-Net network finely segments the local waterline region, water-gauge character positions are then detected by a maximally stable extremal region algorithm, a ResNet residual network recognizes the ship waterline characters, and draught calculation is finally performed. However, such methods are insufficiently robust to complex environments and over-reliant on visible-light images: when an obstacle or a residual water mark on the hull is present on the water surface, the outline of the obstacle is easily misjudged as the waterline, causing errors in waterline segmentation, and the waterline region and characters cannot be extracted with high precision in low-visibility, high-interference scenes.

Disclosure of Invention

The invention provides a ship draught dynamic detection method and system based on multi-mode fusion, to overcome the defect of low ship draught detection precision in complex environments in the prior art.
In order to solve the above technical problems, the technical scheme of the invention is as follows. A ship draught dynamic detection method based on multi-mode fusion comprises the following steps: acquiring multi-mode ship images and/or video to obtain a ship visible-light image and a ship near-infrared image; performing feature-level fusion on the ship visible-light image and the ship near-infrared image to obtain a fused image; performing key-region detection on the fused image by using a pretrained YOLOv-based waterline detection model, and marking and/or cropping the key waterline region in the fused image to obtain a target image; performing hull and water-body segmentation on the image-corrected target image by using a SegFormer network to which a NAM attention mechanism has been added, and obtaining hull character features and waterline pixel features through edge detection; and constructing a visual ranging model containing character coordinates based on the hull character features, and inputting the waterline pixel features into the visual ranging model to obtain a ship draught detection result. Preferably, the feature-level fusion of the ship visible-light image and the ship near-infrared image includes: preprocessing the ship visible-light image and the ship near-infrared image; respectively extracting texture features, edge contour features and/or gradient features of the target ship from the ship visible-light image and the ship near-infrared image; And taking the texture features, the