CN-115293950-B - Anti-piracy hidden watermark embedding and extracting network framework based on minimum dependency hiding and application method thereof
Abstract
The invention relates to the field of multimedia technology and the information-security field of digital watermarking, in particular to a deep-learning-based hidden watermark technique resistant to piracy and candid screen photography, and specifically to an anti-piracy, camera-shot-resistant hidden watermark embedding and extraction network framework based on minimum dependency hiding, together with a method of using it. The invention is suitable for desktop application environments and offers high watermark-embedding efficiency, high quality of the watermark-containing image, and strong robustness to screen shooting. A probability decision module is innovatively used to combine the perturbation modules of different attacks in an ordered manner, greatly enriching the diversity and balance of the training samples used for network optimization and markedly improving the robustness of the watermark under a variety of attack conditions.
Inventors
- ZHANG XINYI
- SONG JIAWEI
- LIU CHUNXIAO
Assignees
- Zhejiang Gongshang University (浙江工商大学)
Dates
- Publication Date
- 20260505
- Application Date
- 20220818
Claims (6)
- 1. An anti-piracy hidden watermark embedding method based on minimum dependency hiding, using a network framework for anti-piracy hidden watermark embedding and extraction based on minimum dependency hiding. The network framework is formed by cascading a preprocessing network W_P with a U-shaped network structure, an encoding network W_E with an edge extraction operator k and a multiple-convolution structure, and a decoding network W_D consisting of 6 convolutional layers. A group of differentiable image perturbation layers NL is applied between the encoding network W_E and the decoding network W_D. The perturbation layer NL mainly comprises two major types of transformation, image geometric transformation and image pixel transformation: the geometric transformations comprise enlargement, reduction, rotation and image warping, all realized by affine transformation, and the pixel transformations comprise Gaussian blur, differentiable JPEG compression, brightness and saturation adjustment, and Gaussian noise. During model training the geometric transformations are applied before the pixel transformations, and a random decision module placed before each image perturbation module decides, according to a random number, whether that module is applied. The method specifically comprises the following steps:
(1) First, the secret information M undergoes data normalization, its color values being normalized so that both the mean and the variance are 0.5, yielding the standardized secret information M'. The standardized secret information M' is then input to the preprocessing network W_P to obtain a single-channel feature image F:
F = W_P(M')   (1)
where M' denotes the standardized secret information, F the single-channel feature image output by the preprocessing network W_P, and W_P(·) the operation performed by the preprocessing network;
(2) The carrier image I is input to the encoding network W_E. The carrier image I is first converted from the RGB color space to the YUV color space, and the edge extraction operator k, whose specific expression is given in formula (2), extracts the edge features of the Y channel of the carrier image I. The Y-channel edge features of I and the single-channel feature image F are then stacked and passed through two convolutions to obtain a three-channel feature map. Finally, the three-channel feature map is stacked with the RGB channels of the carrier image I, and one further convolution operation yields the residual map R, from which the watermark-containing image I_w is obtained:
R = W_E(I, F)   (3)
I_w = I + R   (4)
where R denotes the residual map, I the carrier image, W_E(·) the operation performed by the encoding network, F the single-channel feature image output by the preprocessing network, and I_w the watermark-containing image;
The image quality loss L_img constrains the quality of the watermark-containing image I_w:
L_img = 0.1·L_LPIPS + 0.1·L_2 + 0.5·L_SSIM   (5)
where L_LPIPS, L_2 and L_SSIM denote the visual-similarity LPIPS loss, the two-norm loss and the structural-similarity metric (SSIM) loss, with weights 0.1, 0.1 and 0.5 respectively. The visual-similarity LPIPS loss is
L_LPIPS = Σ_l (1/(W·H)) Σ_{(h,w)} ‖ w_l ⊙ (φ_l^{h,w}(I_w) − φ_l^{h,w}(I)) ‖_2²   (6)
where φ_l^{h,w}(I_w) denotes the feature vector of the watermark-containing image I_w at pixel coordinate (h, w) on the l-th layer of the LPIPS feature computation network, φ_l^{h,w}(I) the corresponding feature vector of the carrier image I, w_l the weight of the l-th layer's feature vectors, W the width and H the length of the carrier image I, Σ the summation operator, and ‖·‖_2 the two-norm function. The two-norm loss is
L_2 = ‖I − I_w‖_2   (7)
where I denotes the carrier image, I_w the watermark-containing image, and ‖·‖_2 the two-norm function. The structural-similarity metric SSIM loss is
L_SSIM = 1 − (2·μ_I·μ_{I_w} + C_1)(2·σ_{I,I_w} + C_2) / ((μ_I² + μ_{I_w}² + C_1)(σ_I² + σ_{I_w}² + C_2))   (8)
where I denotes the carrier image, I_w the watermark-containing image, μ the mean operation function, σ the standard deviation operation function, and σ_{I,I_w} the covariance operation function; C_1 and C_2 are two constants (in the standard SSIM formulation C_1 = (k_1·L)² and C_2 = (k_2·L)², with k_1 = 0.01, k_2 = 0.03 and L the dynamic range of the pixel values).
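The weighted image-quality loss of claim 1 (LPIPS 0.1, two-norm 0.1, SSIM 0.5) can be sketched in NumPy as follows. This is a minimal illustration, not the patented implementation: the LPIPS term requires a pretrained feature network (for example the `lpips` Python package), so it is passed in as a callable here, and a global-statistics SSIM with the common default constants is assumed; whether SSIM enters as 1 − SSIM is also an assumption.

```python
import numpy as np

def l2_loss(cover, marked):
    # two-norm loss between carrier and watermark-containing image
    return np.sqrt(np.sum((cover - marked) ** 2))

def ssim(cover, marked, c1=0.01 ** 2, c2=0.03 ** 2):
    # global-statistics SSIM; c1/c2 are the common defaults for
    # images in [0, 1] (the patent's exact constants are assumed)
    mu_x, mu_y = cover.mean(), marked.mean()
    var_x, var_y = cover.var(), marked.var()
    cov = ((cover - mu_x) * (marked - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def image_quality_loss(cover, marked, lpips_fn):
    # weighted sum per claim 1: 0.1 LPIPS + 0.1 L2 + 0.5 (1 - SSIM);
    # lpips_fn stands in for a pretrained LPIPS network
    return (0.1 * lpips_fn(cover, marked)
            + 0.1 * l2_loss(cover, marked)
            + 0.5 * (1.0 - ssim(cover, marked)))
```

Identical carrier and watermark-containing images give zero L2 loss, SSIM of 1, and hence (with a zero LPIPS term) zero total loss.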
- 2. The anti-piracy hidden watermark embedding method of claim 1, wherein the network framework is applied to desktop anti-piracy protection.
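Claim 1's encoding step converts the carrier image to YUV and applies an edge extraction operator k to the Y channel. The operator's exact expression (formula (2)) is not reproduced in the text above, so the sketch below assumes a Laplacian kernel for illustration; the BT.601 luma weights for the Y channel are standard.

```python
import numpy as np

# assumed edge kernel; the patent's formula (2) is not recoverable here
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float32)

def y_channel(rgb):
    # BT.601 luma: Y = 0.299 R + 0.587 G + 0.114 B
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def edge_features(rgb, kernel=LAPLACIAN):
    # valid-mode 2-D convolution of the Y channel with the edge kernel
    y = y_channel(rgb)
    kh, kw = kernel.shape
    out = np.zeros((y.shape[0] - kh + 1, y.shape[1] - kw + 1),
                   dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(y[i:i + kh, j:j + kw] * kernel)
    return out
```

A uniform image has no edges, so its feature map is all zeros, which matches the intent of concentrating the residual near carrier edges.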
- 3. The anti-candid-camera hidden watermark embedding method of claim 1, implemented with the PyTorch platform on a PC equipped with an NVIDIA GeForce RTX 2080 Ti GPU and an Intel Core i7-9700 3.00 GHz CPU; training and testing are carried out on the DIV2K public dataset, a pre-trained model is first trained without geometric attacks, and geometric-attack training samples are then added on that basis for further training and fine-tuning.
- 4. An anti-piracy hidden watermark extraction method based on minimum dependency hiding, using a network framework for anti-piracy hidden watermark embedding and extraction based on minimum dependency hiding. The network framework is formed by cascading a preprocessing network W_P with a U-shaped network structure, an encoding network W_E with an edge extraction operator k and a multiple-convolution structure, and a decoding network W_D consisting of 6 convolutional layers. A group of differentiable image perturbation layers NL is applied between the encoding network W_E and the decoding network W_D. The perturbation layer NL mainly comprises two major types of transformation, image geometric transformation and image pixel transformation: the geometric transformations comprise enlargement, reduction, rotation and image warping, all realized by affine transformation, and the pixel transformations comprise Gaussian blur, differentiable JPEG compression, brightness and saturation adjustment, and Gaussian noise. During model training the geometric transformations are applied before the pixel transformations, and a random decision module placed before each image perturbation module decides, according to a random number, whether that module is applied. The method inputs the image to be extracted directly into the decoding network W_D to obtain the watermark extraction result M*; training parameters are set, and a loss function is constructed that constrains the watermark-containing image I_w by means of the carrier image I and constrains the watermark extraction result M* by means of the secret information M, so as to optimize the parameters of the neural network model. The specific steps are as follows:
(1) The watermark-containing image I_w, or the distorted watermark-containing image I'_w after a screen-shooting attack, is input directly into the decoding network W_D to obtain the watermark extraction result M*:
M* = W_D(I_w)   (20)
M* = W_D(I'_w)   (21)
where I_w denotes the watermark-containing image, M* the watermark extraction result, and W_D(·) the operation performed by the decoding network. The distorted watermark-containing image I'_w is obtained between the encoding network W_E and the decoding network W_D by applying the group of differentiable image perturbation layers NL to the watermark-containing image I_w, which trains the screen-shooting robustness of the watermark:
I'_w = NL(I_w)   (9)
where I_w denotes the watermark-containing image, I'_w the distorted watermark-containing image, and NL(·) the operation performed by the image perturbation layer;
(2) The watermark recovery loss L_rec constrains the watermark extraction result M* output by the decoding network W_D:
L_rec = 0.1·L_ent + 1·L_wce   (22)
where L_ent and L_wce denote the information entropy loss and the weighted cross-entropy loss, with weights 0.1 and 1 respectively. The information entropy loss is
L_ent = −Σ M*·log_2 M*   (23)
where log_2 is the base-2 logarithm, M* the watermark extraction result output by the decoding network W_D, and Σ the summation operator. The weighted cross-entropy loss is
L_wce = −(1/N) Σ ( w_1·M·log_2 M* + w_0·(1 − M)·log_2(1 − M*) )   (24)
where N denotes the number of pixels in the secret information M, log_2 the base-2 logarithm, M* the watermark extraction result output by the decoding network W_D, Σ the summation operator, w_1 the sample weight of pixels of the secret information M belonging to character information, and w_0 the sample weight of pixels not belonging to character information, with
w_1 = 1 − ‖M‖_1 / N   (25)
w_0 = ‖M‖_1 / N   (26)
where ‖·‖_1 denotes a norm function, ‖M‖_1 counts the number of pixels of the secret information M belonging to character information, and N is the number of pixels in M;
(3) The final loss function L is
L = L_rec + L_img   (27)
where L_rec denotes the watermark recovery loss and L_img the image quality loss.
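A minimal NumPy sketch of claim 4's recovery loss, under stated assumptions: since the exact formulas are not reproduced above, the entropy term is taken as the binary entropy of the extracted bits (pushing each toward 0 or 1), and the class weights are taken as the complementary fractions of character and non-character pixels.

```python
import numpy as np

def entropy_loss(pred, eps=1e-7):
    # assumed form: mean binary entropy of the extracted bits
    p = np.clip(pred, eps, 1 - eps)
    return -np.mean(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def weighted_bce(secret, pred, eps=1e-7):
    # class-balanced cross entropy; the weighting scheme
    # (complementary pixel fractions) is an assumption
    p_char = np.count_nonzero(secret) / secret.size
    w1, w0 = 1.0 - p_char, p_char
    p = np.clip(pred, eps, 1 - eps)
    return -np.mean(w1 * secret * np.log2(p)
                    + w0 * (1 - secret) * np.log2(1 - p))

def recovery_loss(secret, pred):
    # weights 0.1 and 1 per claim 4, formula (22)
    return 0.1 * entropy_loss(pred) + 1.0 * weighted_bce(secret, pred)
```

A perfect extraction (prediction equal to the binary secret) drives both terms, and hence the total recovery loss, to essentially zero.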
- 5. The anti-piracy hidden watermark extraction method of claim 4, wherein if an image perturbation module performs a geometric image transformation, both the watermark-containing image I_w and the secret information M must be transformed, whereas if the module performs an image pixel transformation, only the watermark-containing image I_w need be transformed. Whether an image perturbation module is applied is decided according to a random number:
RD(x) = NL_t(x) if Rand() mod b < a, and RD(x) = x otherwise   (10)
where RD(·) denotes the operation function of the random decision module, t the type of the image perturbation module, NL_t the image perturbation module of type t, x the input image of the image perturbation module, a and b constants that are not all equal across the image perturbation modules and satisfy a ≤ b, Rand(·) a random integer generation function, and mod the modulo operator. The perturbation layer NL is specifically the sequential combination of a geometric perturbation module NL_g, a blur perturbation module NL_b, a Gaussian noise perturbation module NL_n, a JPEG compression perturbation module NL_j and a brightness adjustment perturbation module NL_l. The geometric perturbation module NL_g combines, as given in formula (11), a warping perturbation module NL_w, a rotation perturbation module NL_r, a reduction perturbation module NL_s and an enlargement perturbation module NL_e through the random decision module and the logical OR operator, x denoting the input image of the module. The warping perturbation module NL_w is given in formula (12), where x denotes its input image and the two warp parameters are random numbers in the range 0.1 to 0.5. The rotation perturbation module NL_r is given in formula (13), where x denotes its input image and the rotation angle is a random number in the range 0 to 360. The reduction perturbation module NL_s is given in formula (14), where x denotes its input image and the reduction factor is a constant randomly taking the value 2 or 3. The enlargement perturbation module NL_e is given in formula (15), where x denotes its input image and the two constants take the values 0.5 and 0.3 respectively. The Gaussian noise perturbation module NL_n is
NL_n(x) = x + σ·n   (16)
where x denotes its input image, n a noise image of the same size as x drawn from the standard normal distribution, and σ a normally distributed random number with a value between 0 and 0.05. The blur perturbation module NL_b is
NL_b(x) = conv(x, G)   (17)
where x denotes its input image, conv(·) the convolution function, and G a Gaussian convolution kernel of fixed size with mean 1 and standard deviation 2. The JPEG compression perturbation module NL_j is
NL_j(x) = JPEG(x, Q)   (18)
where x denotes its input image, JPEG(·) a differentiable JPEG compression function, and Q the JPEG compression quality, a random number taking one of the values 80, 85, 90, 95, 99. The brightness adjustment perturbation module NL_l is given in formula (19), where x denotes its input image; the brightness parameter constant takes the value 0.4, the hue parameter constant takes the value 0.3, U(·) denotes a uniformly distributed image generation function whose first parameter is the upper limit of the uniform distribution, whose second parameter is the lower limit, and whose third parameter is the number of channels, the generated image having the same resolution as the input image x; the brightness adjustment intensity constant takes the value 0.5.
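The random decision module of claim 5 fires a perturbation module when a random integer modulo b falls below a, i.e. with probability a/b. A minimal sketch follows; the per-module constants a and b are not given above, so they are parameters here, and the perturbation modules themselves are passed in as plain callables.

```python
import random

def maybe_apply(module, image, a, b, rng=random):
    # random decision block: a random integer modulo b is compared
    # against a, so the module fires with probability a/b (a <= b)
    if rng.randrange(b) < a:
        return module(image), True
    return image, False

def perturbation_layer(image, modules, probs, rng=random):
    # sequential combination per claim 5:
    # geometric -> blur -> Gaussian noise -> JPEG -> brightness
    for module, (a, b) in zip(modules, probs):
        image, _ = maybe_apply(module, image, a, b, rng)
    return image
```

With a = b a module always fires, and with a = 0 it never does, which makes the two boundary cases easy to check.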
- 6. The anti-candid-camera hidden watermark extraction method of claim 4 or 5, implemented with the PyTorch platform on a PC equipped with an NVIDIA GeForce RTX 2080 Ti GPU and an Intel Core i7-9700 3.00 GHz CPU; training and testing are carried out on the DIV2K public dataset, a pre-trained model is first trained without geometric attacks, and geometric-attack training samples are then added on that basis for further training and fine-tuning.
Description
Anti-piracy hidden watermark embedding and extracting network framework based on minimum dependency hiding and application method thereof
Technical Field
The invention relates to the field of multimedia technology and the information-security field of digital watermarking, in particular to a deep-learning-based hidden watermark technique resistant to piracy and candid screen photography, and specifically to an anti-piracy, camera-shot-resistant hidden watermark embedding and extraction network framework based on minimum dependency hiding, together with a method of using it.
Background
With the rapid development of the internet and communication technology, remote desktop technology spatially separates confidential information from the display screen, which brings convenience to people but also creates hidden information-security risks for that confidential information. To counter the common problem of desktop piracy in remote-desktop security while preserving the user's visual experience, image hidden watermarking is commonly used to add the identity information and time stamp of the logged-in user to desktop images. In recent years, incidents of candid screen shooting have occurred from time to time, causing great economic losses to enterprises. At present, the ability to prevent and trace data leakage in shared internet environments remains weak, and data-security incidents occur frequently. Data leakage happens constantly, and insiders have many channels for leaking high-risk data, especially unstructured channels such as screen recording, screenshots and candid photography. Against these problems, an efficient desktop hidden watermark embedding algorithm with screen-shooting robustness is indispensable and has good application prospects.
Disclosure of the Invention
To solve the above problems, the invention designs an anti-piracy, camera-shot-resistant hidden watermark embedding and extraction network framework based on minimum dependency hiding. The framework is formed by cascading a preprocessing network W_P with a U-shaped network structure, an encoding network W_E with an edge extraction operator k and a multiple-convolution structure, and a decoding network W_D consisting of 6 convolutional layers. To make the network robust to screen shooting, a group of differentiable image perturbation layers NL is applied between the encoding network W_E and the decoding network W_D. The perturbation layer NL mainly comprises two major types of transformation, image geometric transformation and image pixel transformation: the geometric transformations comprise enlargement, reduction, rotation and image warping, all realized by affine transformation, and the pixel transformations comprise Gaussian blur, differentiable JPEG compression, brightness and saturation adjustment, and Gaussian noise. The invention also provides the application of the network framework to desktop anti-candid-shooting protection together with its embedding and extraction methods. An anti-piracy, camera-shot-resistant hidden watermarking method based on minimum dependency hiding comprises the following steps:
S1, designing an anti-piracy hidden watermark embedding and extraction network framework based on minimum dependency hiding, formed by cascading a preprocessing network W_P with a U-shaped network structure, an encoding network W_E with an edge extraction operator k and a multiple-convolution structure, and a decoding network W_D consisting of 6 convolutional layers; in addition, to give the network screen-shooting robustness, a group of differentiable image perturbation layers NL is applied between the encoding network W_E and the decoding network W_D.
S2, preprocessing the image, carrying out data normalization on the color values of the secret information.
S3, setting the training parameters.
S4, training the model: the secret information and the carrier image are input into the neural network designed in step S1, and the parameters of the neural network model are optimized through a loss function; step S4 is implemented and trained in the deep-learning framework PyTorch. The image data used to train the neural network model in computer vision are referred to as training images; here the training images comprise the carrier image and the secret information. An input image is passed through the neural network model to generate an output image, and the model parameters are updated by computing a loss function that reflects the difference between the output image and the target image, thereby achieving parameter tuning and network learning, i.e. the training process of the neural network model. Firstly, carrying out data standardization on secret informa
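Step S2's data normalization ("color values normalized so that both mean and variance are 0.5") most plausibly corresponds to a fixed standardization with mean 0.5 and scale 0.5, as in torchvision's Normalize(0.5, 0.5), mapping [0, 1] inputs to [−1, 1]. A NumPy sketch under that assumed reading:

```python
import numpy as np

def normalize_secret(values):
    # assumed reading of step S2: fixed standardization with
    # mean 0.5 and scale 0.5, mapping [0, 1] color values to [-1, 1]
    x = np.asarray(values, dtype=np.float32)
    return (x - 0.5) / 0.5
```

Under this reading, a black pixel (0.0) maps to −1, mid-gray (0.5) to 0, and white (1.0) to 1, which is the usual input range for the kind of convolutional networks the framework describes.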