
CN-117292144-B - Sonar image simulation method based on a generative adversarial network

CN 117292144 B

Abstract

The invention discloses a sonar image simulation method based on a generative adversarial network (GAN), comprising the following specific steps. S1: take an original sonar image as a high-resolution image, preprocess it to obtain a low-resolution image, and form a data set from the high-resolution and low-resolution images. S2: create a generative adversarial network model comprising a sonar image generation network module and a sonar image discrimination network module; the generation network module produces a super-resolution image from a low-resolution image in the data set, and the discrimination network module outputs a discrimination result based on the generated super-resolution image and the high-resolution image in the data set. S3: train the generative adversarial network model on the data set to obtain a trained model. S4: simulate sonar images with the trained generative adversarial network model. Through the constructed generative adversarial network model, the invention improves the definition of existing sonar images and generates sonar images with higher resolution.

Inventors

  • ZHOU XIN
  • GUO AIBIN
  • SUN BIN

Assignees

  • Dalian Maritime University (大连海事大学)

Dates

Publication Date
2026-05-08
Application Date
2023-10-23

Claims (5)

  1. A sonar image simulation method based on a generative adversarial network, characterized by comprising the following specific steps: S1, taking an original sonar image as a high-resolution image, preprocessing the original sonar image to obtain a low-resolution image, and forming a data set from the high-resolution image and the low-resolution image; S2, creating a generative adversarial network model, wherein the model comprises a sonar image generation network module and a sonar image discrimination network module, the generation network module generates a super-resolution image based on a low-resolution image in the data set, and the discrimination network module outputs a discrimination result based on the generated super-resolution image and the high-resolution image in the data set; S3, training the generative adversarial network model on the data set to obtain a trained model; S4, simulating a sonar image with the trained generative adversarial network model. The sonar image generation network module comprises a first input module, a feature extraction module, a feature learning module and a first output module. The first input module comprises a first convolution layer and a first PReLU activation function layer, and extracts the edge and texture features of the input low-resolution image to output a first feature map, which is transmitted to the feature extraction module. The feature extraction module extracts the edge and texture features of the first feature map to output a second feature map, which is transmitted to the feature learning module; the feature extraction module comprises a multi-scale convolution module, which comprises a first branch, a second branch, a third branch, a fourth branch and a fifth branch. The first branch comprises a 1×1 first-branch convolution layer and a first-branch dilated convolution layer, and extracts edge and texture features of the first feature map with a 3×3 receptive field. The second branch comprises a 1×1 second-branch convolution layer, a second-branch asymmetric convolution layer and a second-branch dilated convolution layer, and extracts edge and texture features of the first feature map with a 9×9 receptive field. The third branch comprises a 1×1 third-branch convolution layer, a third-branch asymmetric convolution layer and a third-branch dilated convolution layer, and extracts edge and texture features of the first feature map with a 9×9 receptive field. The fourth branch comprises a 1×1 fourth-branch convolution layer, a fourth-branch asymmetric convolution layer and a fourth-branch dilated convolution layer, and extracts edge and texture features of the first feature map with a 15×15 receptive field. The fifth branch concatenates the feature maps output by the four branches along the channel dimension to obtain a feature map with twice the original number of channels, reduces it back to the original number of channels with a 1×1 fifth-branch convolution layer, sums the result with the first feature map along the channel dimension through a skip connection, and outputs the second feature map through a ReLU activation function layer. The feature learning module extracts the shape and overall structural features of objects in the second feature map to output a third feature map, which is transmitted to the first output module; the feature learning module comprises L densely connected residual blocks, L ≥ 2, each residual block comprising a second convolution layer and a second PReLU activation function layer; the first residual block takes the second feature map as input, the L-th residual block takes as input the feature map obtained by summing the second feature map with the feature maps output by the preceding L−1 residual blocks, and the third feature map is output. The first output module upsamples the third feature map to output a super-resolution image and transmits the generated super-resolution image to the sonar image discrimination network module. The second convolution block comprises a third convolution layer and a first BN layer; the third convolution layer maps the third feature map toward the high-resolution image in the data set according to the correspondence of texture, edge and context information, and the first BN layer performs batch normalization on the output of the third convolution layer, outputs a fourth feature map and transmits it to the up-sampling module. Each up-sampling module comprises a fourth convolution layer, a sub-pixel convolution layer and a third PReLU activation function layer; the fourth convolution layer splits the fourth feature map into r² low-resolution feature maps, the sub-pixel convolution layer performs sub-pixel convolution on the r² low-resolution feature maps to generate a high-resolution feature map enlarged by a factor of r, where r is the upscaling factor, and the high-resolution feature map is transmitted to the third convolution block after passing through the third PReLU activation function layer. The third convolution block comprises a fifth convolution layer and converts the abstract high-level features contained in the high-resolution feature map output by the up-sampling module, including shape and appearance features related to object categories, into the pixel representation of the final output image, outputting the super-resolution image. The sonar image discrimination network module comprises a second input module, a plurality of convolution modules and a second output module. The second input module extracts the edge and texture features of the super-resolution image and of the high-resolution image in the data set, outputs a fourth feature map and a fifth feature map respectively, and transmits them to the convolution modules; the second input module comprises a sixth convolution layer and a fourth PReLU activation function layer, the sixth convolution layer extracting the edge and texture features of the two images, which are then output as the fourth and fifth feature maps through the fourth PReLU activation function layer. Each convolution module comprises a seventh convolution layer, a second BN layer and a first LeakyReLU activation function layer; the seventh convolution layer extracts the object shape and overall structural features of the fourth and fifth feature maps, the second BN layer performs batch normalization, and a sixth feature map and a seventh feature map are output after the first LeakyReLU activation function layer. The second output module calculates the Wasserstein distance between the sixth feature map and the seventh feature map; it comprises a first fully connected layer, a second LeakyReLU activation function layer and a second fully connected layer connected in sequence, wherein the first fully connected layer outputs the feature vectors of the sixth and seventh feature maps, the second LeakyReLU activation function layer adds the two feature vectors element-wise and outputs the result, and the second fully connected layer calculates and outputs the Wasserstein distance, namely a real value, based on the feature vector output by the second LeakyReLU activation function layer.
  2. The sonar image simulation method based on a generative adversarial network according to claim 1, wherein in S3 the specific training steps of the generative adversarial network model include: S31, initializing the sonar image generation network module and the sonar image discrimination network module, inputting a low-resolution image from the data set into the generation network module, generating a super-resolution image through the generation network module, training the discrimination network module on the super-resolution image and the high-resolution image in the data set, minimizing the adversarial loss function of the discrimination network module, and updating the parameters of the discrimination network module; S32, training the generation network module based on the updated parameters of the discrimination network module, minimizing the content loss function of the generation network module, and updating the parameters of the generation network module; and S33, repeating the training process, and obtaining the trained generative adversarial network model when the loss function composed of the adversarial loss function and the content loss function converges.
  3. The sonar image simulation method based on a generative adversarial network according to claim 2, wherein in S31, when the sonar image discrimination network module is trained on the super-resolution image and the high-resolution image in the data set, the difference between the distribution of the high-resolution images in the data set and that of the generated super-resolution images is judged by the Wasserstein distance between them, and an adversarial loss function is set, expressed as: L_adv = E[D(G(I_LR))] − E[D(I_HR)] + λ·E[(‖∇_{I_HR} D(I_HR)‖₂ − 1)²]; where E[D(G(I_LR))] − E[D(I_HR)] is the Wasserstein distance term and E[(‖∇_{I_HR} D(I_HR)‖₂ − 1)²] is the gradient penalty, D(I_HR) denotes the discrimination network score of the high-resolution sonar image, D(G(I_LR)) denotes the discrimination network score of the generated super-resolution image, λ is the weight coefficient of the gradient penalty, E is the expectation operator, ∇_{I_HR} D(I_HR) is the gradient of the discrimination network module's output for the high-resolution image I_HR with respect to I_HR, and 1 is the reference gradient norm.
  4. The sonar image simulation method based on a generative adversarial network according to claim 3, wherein in S32, the sonar image generation network module is trained based on the updated parameters of the sonar image discrimination network module; features are extracted from the high-resolution image in the data set and the generated super-resolution image through a pretrained VGG19 network, the content loss between the generated super-resolution image and the high-resolution image in the data set is calculated, and the content loss function is set as: L_content = (1 / (W_{i,j} H_{i,j})) Σ_{x=1}^{W_{i,j}} Σ_{y=1}^{H_{i,j}} (φ_{i,j}(I_HR)_{x,y} − φ_{i,j}(G(I_LR))_{x,y})²; where G(I_LR) is the super-resolution image generated by the generation network, I_LR and I_HR denote the low-resolution and high-resolution images respectively, W_{i,j} and H_{i,j} are the dimensions of the feature map of the j-th convolution layer before the i-th max-pooling layer in the VGG19 network, φ_{i,j} denotes the corresponding feature extraction function in the VGG19 network, and x and y denote coordinate positions on the feature map.
  5. The sonar image simulation method based on a generative adversarial network according to claim 4, wherein in S33, the training ends when the loss function composed of the adversarial loss function and the content loss function converges, the loss function being expressed as: L = L_content + L_adv.
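As a rough illustration of the loss terms described in claims 3–5, the sketch below evaluates a Wasserstein adversarial loss with gradient penalty and a VGG-style content loss from precomputed arrays. This is a minimal NumPy sketch, not the patented implementation: the function names, the gradient-penalty weight `lam=10.0`, the plain-sum combination of the two losses, and the use of precomputed critic scores and gradient norms are all assumptions, since the patent text does not fix these details.

```python
import numpy as np

def adversarial_loss(d_real, d_fake, grad_norms, lam=10.0):
    """Wasserstein critic loss with gradient penalty (cf. claim 3).

    d_real:     critic scores D(I_HR) on high-resolution images
    d_fake:     critic scores D(G(I_LR)) on generated super-resolution images
    grad_norms: L2 norms of the critic's input gradients (assumed precomputed
                by an autodiff framework; a stand-in here)
    lam:        gradient-penalty weight (assumed value, not given in the patent)
    """
    wasserstein = np.mean(d_fake) - np.mean(d_real)
    penalty = lam * np.mean((grad_norms - 1.0) ** 2)  # push grad norm toward 1
    return wasserstein + penalty

def content_loss(feat_hr, feat_sr):
    """VGG19 feature-space loss (cf. claim 4): squared difference between the
    feature maps of the two images, averaged over the W_ij * H_ij map size."""
    return np.mean((feat_hr - feat_sr) ** 2)

def total_loss(l_content, l_adv):
    """Claim 5 combines the two terms; the combination weight is not given in
    this text, so a plain sum is used here."""
    return l_content + l_adv
```

With unit gradient norms the penalty vanishes and the adversarial term reduces to the plain Wasserstein estimate, which is the quantity the second output module of claim 1 is said to produce.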

Description

Sonar image simulation method based on a generative adversarial network

Technical Field

The invention relates to the technical field of sonar image simulation, and in particular to a sonar image simulation method based on a generative adversarial network.

Background

Since the 1960s, the study of image simulation has progressed through several stages: from the synthesis of simple line segments and regular shapes, to the synthesis of regular images such as texture and face images, to the synthesis of complex natural images such as the pictures in the ImageNet data set. With the gradual growth of image data and the improved computing capability of modern computers, natural image simulation technology has matured and the quality of simulated natural images has improved; sonar images, however, are scarce because of the complexity of the underwater environment, and sonar image simulation technology is still deficient. The purpose of submarine sonar image simulation is to develop methods capable of generating high-quality submarine sonar images. Image generation is in fact probabilistic modeling of images and can be framed as an application of generative models; most generative models are based on maximum likelihood estimation, computing appropriate parameters for the selected model so that the likelihood of the data in the training set is maximized.
Traditional sonar image simulation methods use computer simulation technology to perform modeling and simulation according to the imaging mechanism and image characteristics of the sonar. Both approaches are built on model construction and require professional domain knowledge; when the scene of the simulated object lacks an available geometric model, these traditional methods struggle to generate a sonar image. The simulation effect depends strongly on the accuracy of the model construction and the selection of parameters, and because the simulated objects involve varied structures, complex imaging processes, clutter and other complicating factors, the parameters are difficult to adjust, so the quality of the generated sonar images is low.

Disclosure of Invention

The invention provides a sonar image simulation method based on a generative adversarial network, which aims to solve the problems in the prior art that the simulation result depends heavily on the accuracy of model construction and the selection of parameters, and that the parameters of the simulation model are affected by the simulated object and difficult to adjust, resulting in low quality of the generated sonar images.
In order to achieve the above object, the technical scheme of the present invention is as follows. A sonar image simulation method based on a generative adversarial network comprises the following specific steps: S1, taking an original sonar image as a high-resolution image, preprocessing the original sonar image to obtain a low-resolution image, and forming a data set from the high-resolution image and the low-resolution image; S2, creating a generative adversarial network model, wherein the model comprises a sonar image generation network module and a sonar image discrimination network module, the generation network module generates a super-resolution image based on a low-resolution image in the data set, and the discrimination network module outputs a discrimination result based on the generated super-resolution image and the high-resolution image in the data set; S3, training the generative adversarial network model on the data set to obtain a trained model; and S4, simulating a sonar image with the trained generative adversarial network model.
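The data-set construction in S1 can be sketched as follows, assuming block-averaging as the downsampling preprocessing; the patent does not specify the preprocessing operation or the scale factor, so `factor=4` and the function names here are illustrative only.

```python
import numpy as np

def make_lr(hr, factor=4):
    """Downsample an HR sonar image by block-averaging (a stand-in for the
    unspecified preprocessing in S1)."""
    h, w = hr.shape
    h2, w2 = h - h % factor, w - w % factor  # crop to a multiple of factor
    blocks = hr[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))

def build_dataset(hr_images, factor=4):
    """S1: pair each original (HR) sonar image with its derived LR version."""
    return [(make_lr(hr, factor), hr) for hr in hr_images]
```

Each (LR, HR) pair then feeds S2 and S3: the generation network consumes the LR image and the discrimination network compares its output against the paired HR image.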
Specifically, in S2, the sonar image generation network module includes a first input module, a feature extraction module, a feature learning module and a first output module: the first input module extracts the edge and texture features of the input low-resolution image to output a first feature map and transmits it to the feature extraction module; the feature extraction module extracts the edge and texture features of the first feature map to output a second feature map and transmits it to the feature learning module; the feature learning module extracts the shape and overall structural features of objects in the second feature map to output a third feature map and transmits it to the first output module; the first output module upsamples the third feature map to output a super-resolution image and transmits the generated super-resolution image to the sonar image discrimination network module; the sonar image discrimination
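Two small helpers can illustrate the generator internals described above and in claim 1. This is a sketch under stated assumptions: the per-branch kernel sizes and dilation rates below are chosen only so that the arithmetic reproduces the 3×3, 9×9 and 15×15 receptive fields stated in claim 1 (the text does not give the actual layer parameters), and `pixel_shuffle` shows the standard sub-pixel rearrangement rather than the patent's exact up-sampling layer.

```python
import numpy as np

def receptive_field(layers):
    """Effective receptive field of stacked stride-1 conv layers, each given
    as (kernel_size, dilation): every layer adds dilation * (kernel - 1)."""
    rf = 1
    for k, d in layers:
        rf += d * (k - 1)
    return rf

# Illustrative branch configurations (assumed, chosen only to match the
# claimed receptive fields): 1x1 conv, optional 3x3, then a dilated 3x3.
BRANCH_1 = [(1, 1), (3, 1)]          # -> 3x3 receptive field
BRANCH_2 = [(1, 1), (3, 1), (3, 3)]  # -> 9x9 receptive field
BRANCH_4 = [(1, 1), (3, 1), (3, 6)]  # -> 15x15 receptive field

def pixel_shuffle(x, r):
    """Sub-pixel convolution rearrangement used by the up-sampling module:
    (C*r^2, H, W) features become a (C, H*r, W*r) feature map."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)
```

The receptive-field arithmetic shows why the 1×1 layers contribute channel mixing but no spatial growth, while the dilated layers supply the larger fields cheaply; `pixel_shuffle` is the step that turns the r² low-resolution feature maps into one map enlarged by the factor r.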