CN-122027750-A - Universal domain steganalysis method and system for color image
Abstract
The invention discloses a universal-domain steganalysis method and system for color images, relating to the technical field of image steganalysis. The method comprises the steps of constructing dedicated spatial-domain and JPEG-domain steganography data sets from the Alaska II data set, training an original DRNet model with a stochastic gradient descent (SGD) optimizer whose parameters are personalized per domain, performing progressive transfer learning on low-embedding-rate images, and selecting the final model according to dedicated rules to complete dual-domain steganalysis. The DRNet model comprises a preprocessing layer of 62 high-pass filters, main-path-branch parallel blocks and a classification module, and the main path realizes feature fusion through multiplication and addition. The invention realizes universal-domain steganalysis of color images, improves detection precision at low embedding rates, achieves detection accuracy and AUC values significantly superior to existing models, and meets practical Internet color-image steganalysis requirements.
Inventors
- WANG XIAOWEN
- FU TONG
- CAO YI
Assignees
- Wuxi University (无锡学院)
Dates
- Publication Date
- 20260512
- Application Date
- 20260401
Claims (10)
- 1. A universal-domain steganalysis method for color images, characterized by comprising the following steps: S1, selecting color images from the Alaska II public reference data set to construct a basic data set, randomly sampling and dividing the basic data set into a training set, a validation set and a test set at the ratio of 6:1:3, adopting dedicated combinations of steganography algorithm and steganography strategy for the spatial domain and the JPEG domain respectively, embedding secret information into the original images at specified embedding rates to construct a dual-domain steganography data set, and combining each original image and its corresponding stego image into an image pair; S2, training an original color-image steganalysis network DRNet model, adopting a stochastic gradient descent (SGD) optimizer with a personalized parameter system for the spatial domain and the JPEG domain to perform model parameter optimization, and training on low-embedding-rate images with a progressive transfer learning strategy to minimize the loss function value, wherein the DRNet model comprises a preprocessing layer formed by combining 62 high-pass filters, a main-path-branch parallel block and a classification module, the main-path-branch parallel block adopts a parallel structure of main-path channel fusion strengthening and branch channel-pixel joint attention positioning, and realizes main-path feature fusion through multiplication and addition; S3, applying the trained models to test-set evaluation, performing binary prediction on each test image as to whether it contains secret information, calculating model detection accuracy based on the prediction results, and selecting the final model with dedicated rules for the basic training and the low-embedding-rate transfer learning of the spatial and JPEG domains.
- 2. The universal-domain steganalysis method for color images according to claim 1, wherein step S1 further comprises: S11, for spatial-domain steganalysis, selecting 10000 TIF color images of 256×256 pixels from the Alaska II data set to construct the basic data set, using the two color-image steganography strategies CMD-C and GINA, applying the S-UNIWARD and HILL adaptive steganography algorithms respectively at embedding rates of 0.2, 0.3 and 0.4 bpc to construct stego-image data sets, finally obtaining four spatial-domain steganography data sets: CMD-C-S-UNIWARD, CMD-C-HILL, GINA-S-UNIWARD and GINA-HILL; S12, for JPEG-domain steganalysis, randomly extracting 10000 JPEG color images of 256×256 pixels with quality factors of 75 and 95 respectively from the Alaska II data set to construct the basic data set, and constructing the JPEG-domain stego-image data set with the J-UNIWARD and UED algorithms at embedding rates of 0.2, 0.3 and 0.4 bpnzac; S13, after the stego images of the spatial and JPEG domains are obtained, combining the stego images and the corresponding original images into image pairs and feeding the image pairs into the DRNet model for training.
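The 6:1:3 random split described in step S1 can be sketched as follows. This is an illustrative Python sketch, not code from the patent; the function name and the fixed seed are assumptions added for reproducibility.

```python
import random

def split_dataset(image_ids, seed=0):
    """Randomly split a base data set into training/validation/test
    subsets at the 6:1:3 ratio used in the patent."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)          # random sampling
    n = len(ids)
    n_train = int(n * 0.6)
    n_val = int(n * 0.1)
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    return train, val, test

# For the 10000-image base set this yields 6000 / 1000 / 3000 images.
train, val, test = split_dataset(range(10000))
```

Each cover image in a subset would then be paired with its stego counterpart, so that cover and stego versions of the same image never fall into different subsets.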
- 3. The universal-domain steganalysis method for color images according to claim 1, characterized in that step S2 comprises the sub-steps of: S21, setting the SGD optimizer parameters for basic training: momentum 0.9, weight decay coefficient 0.0005, and 250 training epochs in total, wherein the initial learning rate of the spatial domain is 0.01 and is decayed to 1/10 at the 120th and 210th epochs, and the initial learning rate of the JPEG domain is 0.02 and is decayed to 1/10 at the 80th, 140th and 190th epochs; S22, the progressive transfer learning strategy for low-embedding-rate images comprises: for the spatial domain, 70 training epochs with an initial learning rate of 0.001 decayed to 1/10 at the 30th and 55th epochs; for the JPEG domain, 100 training epochs with an initial learning rate of 0.005 decayed to 0.0005 at the 80th epoch.
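The per-domain step schedule in this claim can be sketched as a milestone-based learning-rate function. This is an illustrative Python sketch under the stated schedule; the function name is an assumption, not from the patent.

```python
def lr_at_epoch(epoch, initial_lr, milestones, decay=0.1):
    """Step learning-rate schedule: multiply the rate by `decay`
    at each milestone epoch that has been reached."""
    lr = initial_lr
    for m in milestones:
        if epoch >= m:
            lr *= decay
    return lr

# Spatial-domain base training: init 0.01, decayed to 1/10 at epochs 120 and 210.
spatial = [lr_at_epoch(e, 0.01, (120, 210)) for e in (0, 150, 230)]
# JPEG-domain base training: init 0.02, decayed to 1/10 at epochs 80, 140 and 190.
jpeg = [lr_at_epoch(e, 0.02, (80, 140, 190)) for e in (0, 100, 200)]
```

The same function covers the transfer-learning phase, e.g. the spatial-domain fine-tuning schedule (0.001 with milestones 30 and 55).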
- 4. The method according to claim 1, wherein the preprocessing layer of the DRNet model is a combined structure of 62 high-pass filters adopting a three-step process of channel separation, high-pass filtering and channel concatenation, specifically comprising: firstly performing channel separation on the input color image to obtain the feature maps of the three channels, then applying the 62 high-pass filters from UCNet to each channel, convolving the pixel values of each channel point by point to obtain initial noise residuals, limiting the dynamic range of the noise residuals through a truncated linear function, and finally integrating the residual features of the channels through channel concatenation.
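The three-step preprocessing (channel separation, high-pass filtering with truncation, channel concatenation) can be sketched as follows. This is a minimal NumPy sketch: the single toy filter and the truncation threshold T are assumptions, standing in for the 62-filter UCNet bank whose exact kernels the patent does not list.

```python
import numpy as np

def preprocess(image, filters, T=3.0):
    """Sketch of the DRNet preprocessing layer: split the color image
    into its three channels, convolve each channel with every high-pass
    filter, truncate the residuals to [-T, T], and concatenate the
    results along the channel axis."""
    h, w, _ = image.shape
    residuals = []
    for c in range(3):                               # channel separation
        chan = image[:, :, c].astype(np.float32)
        for f in filters:                            # high-pass filtering
            kh, kw = f.shape
            out = np.zeros((h - kh + 1, w - kw + 1), np.float32)
            for i in range(out.shape[0]):
                for j in range(out.shape[1]):
                    out[i, j] = np.sum(chan[i:i + kh, j:j + kw] * f)
            residuals.append(np.clip(out, -T, T))    # truncated linear function
    return np.stack(residuals, axis=-1)              # channel concatenation

# One toy 3x3 high-pass (Laplacian-style) filter instead of the UCNet bank.
hp = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], np.float32)
res = preprocess(np.random.rand(8, 8, 3), [hp])
```

With the full 62-filter bank the output would carry 3 × 62 = 186 residual channels instead of 3.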
- 5. The universal-domain steganalysis method for color images according to claim 1, wherein in the main-path-branch parallel block of the DRNet model, the main path is responsible for multi-channel information fusion and fine-grained strengthening of steganalysis features, specifically comprising: completing the information fusion of different channels through a Type 1 block, wherein the Type 1 block is formed by combining 1×1-3×3-1×1 convolution layers, and the first 1×1 convolution layer combines grouped convolution with a channel-shuffle operation, so that information exchange among different channels is enhanced; performing fine-grained strengthening of the fused features through 2 Type 2 blocks, wherein the Type 2 block is a residual information enhancement block whose backbone consists of a 1×1 convolution layer, a 3×3 detail-enhancement convolution layer and a 1×1 convolution layer, and the 3×3 detail-enhancement convolution layer comprises 5 convolutions deployed in parallel, namely ordinary convolution, central difference convolution, angular difference convolution, horizontal difference convolution and vertical difference convolution; condensing the strengthened features and reducing the number of parameters through a Type 3a block, wherein the Type 3a block is a residual downsampling block whose backbone consists, in order, of a 3×3 convolution layer, a 3×3 convolution layer and an average pooling layer, and the channel dimension is adjusted through a 1×1 convolution layer on the skip connection.
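Of the five parallel convolutions in the Type 2 block, the central difference convolution can be sketched as below, using one common formulation in which each tap contributes w_k·(x_k − x_center); the patent does not give its exact kernels, so the demo weights are illustrative.

```python
import numpy as np

def conv2d(x, w):
    """Plain valid-mode 2-D convolution (correlation form)."""
    kh, kw = w.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def central_difference_conv(x, w):
    """Central difference convolution: sum_k w_k * (x_k - x_center),
    i.e. a plain convolution minus the center pixel scaled by the
    kernel sum, so it responds to local variation rather than level."""
    kh, kw = w.shape
    center = x[kh // 2: x.shape[0] - kh // 2, kw // 2: x.shape[1] - kw // 2]
    return conv2d(x, w) - center * w.sum()

# On a constant image every difference x_k - x_center vanishes,
# so the central difference convolution output is exactly zero.
flat = central_difference_conv(np.ones((5, 5)), np.arange(9, dtype=float).reshape(3, 3))
```

The horizontal/vertical/angular variants would replace the center-pixel reference with the corresponding directional neighbor differences.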
- 6. The method according to claim 1, wherein in the main-path-branch parallel block of the DRNet model, the branch is responsible for precisely focusing on the steganographic channels and pixel regions, specifically comprising: performing dimension reduction on the input feature map through a Type 5 block, adjusting the size and channel number of the feature map to be consistent with the output of the main path, wherein the Type 5 block comprises a 3×3 convolution layer, a ReLU function and a 3×3 convolution layer with a stride of 2; then, using a channel-pixel attention mechanism: firstly modeling the importance of specific channels through a channel attention module, highlighting important channels and suppressing channels with fewer steganographic modifications by learning the importance weights of different channels; modeling the importance of specific pixels through a pixel attention module, dynamically adjusting the attention distribution of the model over the input image by learning the weight of each pixel in the feature map; and multiplying the obtained attention weights element by element with the feature map on the main path to realize attention-based screening of the main-path features.
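The channel-pixel joint attention can be sketched on a small feature map as follows. This is a toy NumPy sketch under simplifying assumptions: the learned parameters wc and wp stand in for the patent's channel and pixel attention modules, whose internal layers are not specified here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_pixel_attention(feat, wc, wp):
    """Toy channel-pixel joint attention on an (H, W, C) feature map.
    Channel weights come from sigmoid-squashed global-average channel
    descriptors; pixel weights from a sigmoid-squashed per-pixel
    projection.  Both are multiplied element-wise into the features,
    highlighting important channels and pixels and suppressing others."""
    chan_desc = feat.mean(axis=(0, 1))        # global average pool per channel
    chan_w = sigmoid(chan_desc * wc)          # (C,) channel importance in (0, 1)
    pix_w = sigmoid(feat @ wp)                # (H, W) pixel importance in (0, 1)
    return feat * chan_w[None, None, :] * pix_w[:, :, None]

feat = np.random.rand(4, 4, 8)
out = channel_pixel_attention(feat, np.random.randn(8), np.random.randn(8))
```

Because both weight maps lie in (0, 1), the output can only attenuate features, never amplify them; the subsequent residual addition (claim 7) restores the original signal path.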
- 7. The universal-domain steganalysis method for color images according to claim 1, wherein in the main-path-branch parallel block of the DRNet model, the specific way of main-path-branch feature fusion is: multiplying the branch attention weights element by element with the main-path feature map to obtain a weighted feature map, and adding the weighted feature map element by element to the original main-path feature map to form a residual connection, thereby realizing feature enhancement and alleviating the vanishing-gradient problem of deep networks.
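The multiply-then-add fusion in this claim reduces to two element-wise operations; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def fuse(main_feat, attn_w):
    """Main-path-branch fusion: multiply the branch attention weights
    into the main-path features, then add the original main-path
    features back as a residual connection, i.e. main * (1 + attn)."""
    weighted = main_feat * attn_w      # element-wise multiplication
    return weighted + main_feat        # element-wise residual addition

main = np.random.rand(4, 4, 8)
identity = fuse(main, np.zeros_like(main))   # zero attention leaves features unchanged
```

The residual add guarantees an identity path: even if the attention weights collapse to zero, the main-path features (and their gradients) pass through untouched.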
- 8. The universal-domain steganalysis method for color images according to claim 1, wherein the classification module integrates and condenses the features produced by the preceding modules through two residual blocks, namely a Type 4 block and a Type 3b block; the Type 4 block is a typical residual bottleneck structure whose backbone consists of a 1×1 convolution layer, a 3×3 convolution layer and a 1×1 convolution layer, wherein the 3×3 convolution layer uses grouped convolution, effectively reducing the number of parameters; the dimension is adjusted by a 3×3 convolution layer with a stride of 2 on the skip connection, so that the input and output of the module can be added directly; the Type 3b block, like the Type 3a block, is a residual downsampling block commonly used in image steganalysis, whose backbone consists, in order, of a 3×3 convolution layer, a 3×3 convolution layer and an average pooling layer, with a BN layer and a ReLU function after the first convolution layer and a BN layer after the second convolution layer, and the channel dimension is adjusted through a 1×1 convolution layer on the skip connection; the 128 feature maps are then transformed into 128×(128+1)/2 = 8256-dimensional feature vectors using global covariance pooling, followed by classification through a fully connected layer and a softmax function.
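The global covariance pooling step, which turns 128 feature maps into a 128×(128+1)/2 = 8256-dimensional vector, can be sketched as follows; a plain-covariance NumPy sketch (some covariance-pooling variants add matrix normalization, which the patent does not detail).

```python
import numpy as np

def global_covariance_pooling(feat):
    """Global covariance pooling: treat each of the C feature maps as a
    variable observed at H*W spatial positions, form the C x C sample
    covariance matrix, and keep its upper triangle (with the diagonal),
    giving a C*(C+1)/2-dimensional vector -- 8256 dims for C = 128."""
    h, w, c = feat.shape
    x = feat.reshape(h * w, c)
    x = x - x.mean(axis=0, keepdims=True)     # center each feature map
    cov = (x.T @ x) / (h * w - 1)             # C x C covariance matrix
    iu = np.triu_indices(c)                   # upper triangle incl. diagonal
    return cov[iu]

vec = global_covariance_pooling(np.random.rand(4, 4, 128))
```

Keeping only the upper triangle exploits the symmetry of the covariance matrix, halving the vector fed to the fully connected classifier.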
- 9. The method according to claim 1, wherein step S3 further comprises: for spatial-domain and JPEG-domain basic training, taking the model with the highest accuracy on the validation set after the last learning-rate decay as the final model; and during transfer learning, selecting the model with the highest accuracy on the validation set over all epochs as the final model.
- 10. A universal-domain steganalysis system for color images that performs the universal-domain steganalysis method for color images according to any one of claims 1-9, comprising the following modules: a data set construction module for selecting color images from the Alaska II public reference data set to construct a basic data set, randomly sampling and dividing the basic data set into a training set, a validation set and a test set at the ratio of 6:1:3, adopting dedicated combinations of steganography algorithm and steganography strategy for the spatial domain and the JPEG domain respectively, embedding secret information into the original images at specified embedding rates to construct a dual-domain steganography data set, and combining each original image and its corresponding stego image into an image pair; a model training module for training an original color-image steganalysis network DRNet model, adopting a stochastic gradient descent (SGD) optimizer with a personalized parameter system for the spatial domain and the JPEG domain to perform model parameter optimization, and training on low-embedding-rate images with a progressive transfer learning strategy to minimize the loss function value, wherein the DRNet model comprises a preprocessing layer formed by combining 62 high-pass filters, a main-path-branch parallel block and a classification module, the main-path-branch parallel block adopts a parallel structure of main-path channel fusion strengthening and branch channel-pixel joint attention positioning and realizes main-path feature fusion through multiplication and addition; and a model evaluation module for applying the trained model to test-set evaluation, performing binary prediction on each test image as to whether it contains secret information, calculating model detection accuracy based on the prediction results, and selecting the final model with dedicated rules for the basic training and the low-embedding-rate transfer learning of the spatial and JPEG domains.
Description
Universal domain steganalysis method and system for color image
Technical Field
The invention relates to the technical field of image steganalysis, and in particular to a universal-domain steganalysis method and system for color images.
Background
Image steganography is a technique that embeds secret information into a digital image in order to realize covert transmission. As a countermeasure to image steganography, the core goal of image steganalysis is to detect whether an input image contains secret information. Current mainstream image steganalysis research is mainly focused on grayscale images; however, in practical application scenarios the application range of grayscale images is relatively limited, being mainly confined to professional fields such as medical imaging and historical archives. In contrast, color images account for over 90% of Internet image resources and have become the main form of digital images, making color-image steganalysis research of great practical significance. The prior art, in the patent application with publication number CN103745479B, discloses a color-image digital steganography and analysis method, wherein the step of embedding secret information comprises generating a universal image data buffer and a data steganography factor and performing a series of function conversions on the color image and the data steganography factor to generate a color image containing steganographic data, and the step of extracting the secret information comprises generating the universal image data buffer, performing a series of function conversions on the color image containing steganographic data, and reading the steganographic data.
By solidifying the steganographic data, that scheme improves the embedding speed and reduces the CPU time for writing and reading the hidden data. It supports various color image formats, adopts thread-pool technology to increase the concurrency of data embedding and reading, and enhances the robustness of the hidden data against attack. Relative to that scheme, traditional color-image steganalysis methods comprise the two steps of feature extraction and classification; their features are built and tuned by manual experience, which is time-consuming and labor-intensive, and they struggle to resist novel adaptive steganography algorithms. Existing color-image steganalysis methods have further defects: the differences in steganographic noise distribution across channels are not considered and targeted attention is lacking, making the methods insensitive to channel differences; complex textured regions are not distinguished from smooth regions, so the steganographic regions cannot be focused on precisely; and after noise residuals are extracted by preprocessing, simple cascaded convolutions are adopted, which makes it difficult to promote information interaction among channels and leaves the steganographic signal scattered.
Disclosure of Invention
Aiming at the above technical defects, the invention provides a universal-domain steganalysis method and system for color images, comprising the following steps.
S1, selecting color images from the Alaska II public reference data set to construct a basic data set, randomly sampling and dividing the basic data set into a training set, a validation set and a test set at the ratio of 6:1:3, adopting dedicated combinations of steganography algorithm and steganography strategy for the spatial domain and the JPEG domain respectively, embedding secret information into the original images at specified embedding rates to construct a dual-domain steganography data set, and combining each original image and its corresponding stego image into an image pair. S2, training an original color-image steganalysis network DRNet model, adopting a stochastic gradient descent (SGD) optimizer with a personalized parameter system for the spatial domain and the JPEG domain to perform model parameter optimization, and training on low-embedding-rate images with a progressive transfer learning strategy to minimize the loss function value, wherein the DRNet model comprises a preprocessing layer formed by combining 62 high-pass filters, a main-path-branch parallel block and a classification module, the main-path-branch parallel block adopts a parallel structure of main-path channel fusion strengthening and branch channel-pixel joint attention positioning, and realizes main-path feature fusion through multiplication and addition. S3, applying the trained models to test-set evaluation, performing binary prediction on each test image as to whether it contains secret information, calculating model detection accuracy based on the prediction results, and selecting the final model with dedicated rules for the basic training and the low-embedding-rate transfer learning of the spatial and JPEG domains.