CN-121981911-A - Method for removing net-shaped shielding object in image
Abstract
The invention discloses a method for removing mesh-like occlusions (net-shaped shielding objects) in an image, relating to the technical field of image processing. The method comprises: performing superpixel segmentation of the image to be processed based on an energy function with a color term and a structure term, iteratively exchanging pixels to obtain a superpixel set; minimizing a fusion energy function by a graph cut method to obtain a connected region set; sorting and screening the connected regions by their mean and variance to form a mesh occlusion sample set; extracting from the superpixels joint features consisting of color histogram features and rotation-invariant local binary pattern texture probability features, and performing binary classification with a support vector machine to obtain a mesh occlusion mask; and, according to the mask, iteratively restoring the occluded region with a total variation model under smoothness and edge-continuity constraints to output a restored image. By fully exploiting the information within the image itself, the mesh occlusion can be removed without depth information, reducing extra computation, and the total variation method restores thin, elongated occlusions more smoothly.
Inventors
- HUANG WENYAN
- SUN YUAN
Assignees
- 深圳市智坤动力科技有限公司
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2026-02-04
Claims (10)
- 1. A method for removing a mesh-like occlusion in an image, comprising: performing superpixel segmentation of an image to be processed based on an energy function of a color term and a structure term, and iteratively exchanging pixels to obtain a superpixel set; performing a graph cut method on the superpixel set to minimize a fusion energy function and obtain a connected region set; sorting and screening the connected regions according to their mean and variance to form a mesh occlusion sample set; extracting from the superpixels joint features consisting of color histogram features and rotation-invariant local binary pattern texture probability features, and performing binary classification on the joint features with a support vector machine to obtain a mesh occlusion mask; and iteratively repairing the occluded region with a total variation model under smoothness and edge-continuity constraints according to the mask, and outputting a repaired image.
- 2. The method for removing a mesh occlusion in an image according to claim 1, wherein performing the superpixel segmentation based on an energy function of color and structure terms comprises: initializing the image to be processed into superpixel units of a preset shape; constructing an energy function comprising the color term and the structure term; and, in an iterative process, exchanging pixels between adjacent superpixels using the change in the energy function as the criterion, to obtain a superpixel set satisfying color-consistency and boundary-smoothness constraints.
- 3. The method for removing a mesh occlusion in an image according to claim 2, wherein the color term is constructed from the normalized probability distribution of a superpixel's pixels over preset color bins, taking the sum of squares of that distribution as the color-consistency measure; the structure term is constructed from the normalized probability distribution, over superpixel memberships, of the pixels in the neighborhood of a superpixel boundary, taking the sum of squares of that distribution as the boundary-smoothness measure; and the joint constraint on color consistency and boundary smoothness is realized by adjusting the weights of the color term and the structure term.
- 4. The method for removing mesh occlusions in an image according to claim 2, wherein performing the graph cut method on the superpixel set to minimize a fusion energy function comprises constructing a fusion energy function consisting of a data term, which measures color-space differences between adjacent superpixels within the same labeled connected region, and a smoothing term, which applies a penalty weighted by color similarity when adjacent superpixels carry inconsistent labels, thereby obtaining the fused connected region set.
- 5. The method of claim 1, wherein sorting the connected regions according to the mean and variance of the connected regions to form a mesh-like occlusion sample set comprises calculating a region mean and a region variance of each connected region, and determining the connected regions having a region mean not greater than a first threshold and a region variance not greater than a second threshold as the mesh-like occlusion sample set.
- 6. The method for removing mesh-like occlusions in an image according to claim 1, wherein the extracting of the joint features comprises dividing a value range into a plurality of intervals for each color channel, counting the proportion of pixels in super pixels falling into each interval to form color histogram features, calculating a rotation-invariant local binary pattern for each color channel, counting the occurrence probability of each codeword to form texture probability features, and splicing the color histogram features and the texture probability features of each color channel to form the joint features.
- 7. The method of claim 1, wherein classifying the joint features based on a support vector machine comprises applying a Gaussian kernel function to map the joint features and construct a classification hyperplane, marking the corresponding superpixels as mesh or non-mesh according to the classification results output by the hyperplane, and generating the mesh occlusion mask from the set of superpixels marked as mesh.
- 8. The method for removing mesh-like occlusion in an image according to claim 1, wherein iteratively repairing an occlusion region under smoothness constraint and edge continuity constraint by using a total variation model according to a mask comprises constructing an energy functional including a total variation regularization term for constraining spatial smoothness of a repair result and a data fidelity term for constraining continuity of a repair region boundary and a known pixel, and updating an occlusion region pixel value by iteratively solving an extremum of the energy functional to output a repaired image.
- 9. The method for removing a mesh occlusion in an image of claim 1, further comprising, after the connected region set is obtained, determining a weave self-evidence state for each connected region, the state being one of confirmed, observed, and excluded, wherein the determination is based on an ordering relation between the similarity of adjacent superpixels inside the region and the similarity of adjacent superpixels across the region boundary, and on whether an interleaved weave structure exists in the adjacency topology of the connected region; retaining confirmed connected regions in the mesh occlusion sample set, removing excluded connected regions, and removing observed connected regions from the sample set while storing them in a to-be-judged set; training a support vector machine classifier on the screened mesh occlusion sample set; performing binary classification on the joint features of the superpixels with the trained classifier to obtain an initial mesh occlusion mask; and reprocessing the initial mask according to the weave self-evidence states to obtain the final mesh occlusion mask.
- 10. The method for removing mesh occlusions in an image of claim 9, wherein determining the weave self-evidence state comprises: constructing, from a graph-cut similarity definition, a set of adjacent superpixel pairs interior to the connected region and a set of adjacent superpixel pairs crossing the region boundary; computing the minimum interior similarity and the maximum boundary similarity, and setting an interior-exterior ordering self-consistency flag to true when the minimum interior similarity exceeds the maximum boundary similarity, and to false otherwise; setting an interleaved-structure existence flag to true when the connected region's adjacency graph contains an interleaving node, namely a node with at least three mutually independent adjacent branches whose principal directions cover two mutually perpendicular direction classes, and to false otherwise; and determining the weave self-evidence state as confirmed when both flags are true, as excluded when the ordering self-consistency flag is false, and as observed in all remaining cases.
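The screening rule of claim 5 reduces to two threshold tests per connected region. A minimal Python sketch (the function name and the concrete threshold values are illustrative; the claim only requires a first and a second threshold):

```python
import numpy as np

def screen_regions(regions, mean_thresh, var_thresh):
    """Keep connected regions whose pixel mean and variance both fall at or
    below the thresholds: a mesh occlusion is assumed to be relatively dark
    and low-texture (small local variance), per the screening rule of claim 5."""
    samples = []
    for region in regions:
        r = np.asarray(region, dtype=float)
        if r.mean() <= mean_thresh and r.var() <= var_thresh:
            samples.append(region)
    return samples

dark_flat = [50, 52, 51, 49]      # low mean, low variance -> kept
bright = [200, 210, 205, 198]     # mean above threshold -> dropped
textured = [10, 240, 15, 230]     # high mean and variance -> dropped
kept = screen_regions([dark_flat, bright, textured], mean_thresh=100, var_thresh=50)
```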
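Claim 6's joint feature can be sketched for a single channel as follows; the rotation-invariant LBP here takes the minimum over the eight cyclic bit rotations of the neighbor code, which is one standard way of obtaining rotation invariance (the patent does not specify the exact encoding, so that detail is an assumption):

```python
import numpy as np

def rotation_invariant_lbp(img):
    """Rotation-invariant LBP: threshold the 8 neighbors of each interior
    pixel against the center, then take the minimum over all 8 cyclic bit
    rotations of the code so the codeword ignores orientation."""
    h, w = img.shape
    codes = []
    # clockwise 8-neighborhood offsets starting at the top-left neighbor
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            bits = [1 if img[y + dy, x + dx] >= img[y, x] else 0 for dy, dx in offs]
            code = min(sum(b << i for i, b in enumerate(bits[r:] + bits[:r]))
                       for r in range(8))
            codes.append(code)
    return codes

def joint_feature(img, n_bins=8):
    """Concatenate a normalized intensity histogram with the normalized LBP
    codeword probabilities (one channel shown; real use repeats per channel
    and splices the per-channel features together, per claim 6)."""
    color_hist, _ = np.histogram(img, bins=n_bins, range=(0, 256))
    color_hist = color_hist / color_hist.sum()
    lbp_hist = np.bincount(rotation_invariant_lbp(img), minlength=256)
    lbp_hist = lbp_hist / max(lbp_hist.sum(), 1)
    return np.concatenate([color_hist, lbp_hist])

feat = joint_feature(np.full((5, 5), 100, dtype=np.uint8))
```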
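Claim 7's classification step is a standard RBF-kernel (Gaussian) support vector machine. A sketch using scikit-learn's `SVC` on toy two-dimensional stand-ins for the joint features (the real inputs would be the histogram/LBP vectors of claim 6, and the training set would be the screened sample set):

```python
import numpy as np
from sklearn.svm import SVC

# toy joint features: mesh superpixels (label 1) cluster near the origin,
# background superpixels (label 0) cluster away from it
X = np.array([[0.10, 0.10], [0.20, 0.15], [0.15, 0.20],
              [0.90, 0.80], [0.85, 0.90], [0.95, 0.85]])
y = np.array([1, 1, 1, 0, 0, 0])

# Gaussian kernel maps the features implicitly before the hyperplane is fit
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
mask_labels = clf.predict(np.array([[0.12, 0.18], [0.90, 0.88]]))
```

Superpixels predicted as class 1 would be collected into the mesh occlusion mask.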
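Claim 8's iterative repair can be illustrated with a simplified explicit gradient-descent scheme on the total variation energy, updating only the masked pixels so that the known pixels act as a fixed boundary condition supplying edge continuity. This is a sketch, not the patented solver: the explicit data-fidelity term of the claim is replaced here by simply freezing the unmasked pixels.

```python
import numpy as np

def tv_inpaint(img, mask, n_iter=1000, dt=0.1, eps=1e-6):
    """Gradient descent on the total variation energy over masked pixels.
    div(grad u / |grad u|) is the (negative) TV gradient; eps regularizes
    the norm so the flow stays well defined on flat regions."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u            # forward differences
        uy = np.roll(u, -1, axis=0) - u
        norm = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / norm, uy / norm
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u[mask] += dt * div[mask]                   # update occluded pixels only
    return u

# usage: repair a one-pixel-wide dark line crossing a flat bright patch,
# the typical thin elongated occlusion the abstract mentions
img = np.full((9, 9), 100.0)
img[4, :] = 0.0                  # the thin occlusion
mask = img == 0.0                # mesh mask from the classification step
out = tv_inpaint(img, mask)
```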
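The flag-to-state logic of claims 9 and 10 can be written down directly; the similarity values and the interleaving-node test are assumed to be computed upstream (from the graph-cut similarity definition and the adjacency-graph analysis, respectively):

```python
def weave_state(internal_sims, boundary_sims, has_interleaving_node):
    """Decide the weave self-evidence state of a connected region (claim 10):
    the ordering check requires that the weakest internal adjacent-superpixel
    similarity still exceed the strongest cross-boundary similarity."""
    ordering_ok = min(internal_sims) > max(boundary_sims)
    if ordering_ok and has_interleaving_node:
        return "confirmed"   # retained in the mesh occlusion sample set
    if not ordering_ok:
        return "excluded"    # removed from the sample set
    return "observed"        # moved to the to-be-judged set

# e.g. a tightly knit region containing a crossing node is confirmed
state = weave_state([0.8, 0.9], [0.3, 0.4], has_interleaving_node=True)
```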
Description
Method for removing net-shaped shielding object in image

Technical Field

The invention relates to the technical field of image processing, and in particular to a method for removing a mesh-like occlusion in an image.

Background

With the rapid development of technology, people can take pictures anytime and anywhere with digital cameras, mobile phones and similar devices. In many situations, however, scene limitations put occlusions in the picture that degrade its quality. For example, photographing animals in a zoo often means shooting through thick wire mesh; likewise, many gymnasiums have wire fencing that separates the spectators from the athletes. The presence of such meshes greatly affects photo quality and introduces a large amount of unwanted information that the photographer did not intend to capture. Statistics suggest that the pixels occupied by such a mesh can exceed 20% of the total pixels in the picture, in which case the mesh must be detected and repaired to recover the image behind the occlusion.

Removing a mesh occlusion generally involves three steps: (1) perceiving the mesh, (2) segmenting the mesh, and (3) restoring the occluded image. Perceiving the mesh means distinguishing it from the rest of the image: in actual photography the mesh is the "foreground" while the picture one wants is the "background", so perception is in fact foreground-background segmentation, except that here the object of interest is the "background" rather than the "foreground". Segmentation is the key means of extracting the mesh occlusion, because the mathematical characteristics of a mesh are typically small local variance and a checkered texture.
If the mesh occlusion is identified as a whole, a major difficulty arises: the mathematical and textural characteristics of the mesh region, when simply searched with a rectangular frame, vary with the contents of that frame. It is therefore necessary to divide the occlusion into patches and detect each patch, so that the characteristics of the mesh stand out from the background and become easier to identify and process.

Restoring the image after the occlusion is removed is an image completion problem. Image completion predicts and estimates the missing part of an image from the local pixel information around the missing region (or from the global information of the image), aiming at a visually seamless result. In mesh removal, since the mesh has already been detected, the masked portion must be replaced, so the restoration is performed on the mesh mask region. The present method finds the regions closest to the mesh occlusion via graph cut and classifies them as positive samples, fully exploiting the information within the image without any external information, and thus achieves higher accuracy.

Disclosure of Invention

In view of the above drawbacks of the prior art, an object of the present invention is to provide a method for removing a mesh-like occlusion in an image, so as to solve the above technical problems.
To achieve the above purpose, the invention provides a method for removing a mesh-like occlusion in an image, comprising the following steps: performing superpixel segmentation of an image to be processed based on an energy function of a color term and a structure term, and iteratively exchanging pixels to obtain a superpixel set; performing a graph cut method on the superpixel set to minimize a fusion energy function and obtain a connected region set; sorting and screening the connected regions according to their mean and variance to form a mesh occlusion sample set; extracting from the superpixels joint features consisting of color histogram features and rotation-invariant local binary pattern texture probability features, and performing binary classification on the joint features with a support vector machine to obtain a mesh occlusion mask; and iteratively repairing the occluded region with a total variation model under smoothness and edge-continuity constraints according to the mask, and outputting a repaired image.

The invention further provides that the superpixel segmentation based on the energy function of the color term and the structure term comprises: initializing the image to be processed into superpixel units of a preset shape; constructing an energy function comprising the color term and the structure term; and, in an iterative process, exchanging pixels between adjacent superpixels using the change in the energy function as the criterion, to obtain a superpixel set satisfying color-consistency and boundary-smoothness constraints.
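The color-consistency measure described above (the sum of squares of the superpixel's normalized color-bin histogram, per claim 3) can be sketched as follows; the structure term is built analogously from the membership distribution of pixels near superpixel boundaries:

```python
import numpy as np

def color_term(pixels, n_bins=16):
    """Color-consistency measure of one superpixel: the sum of squares of the
    normalized histogram of its pixel values over preset bins. A perfectly
    uniform superpixel scores 1.0; a maximally mixed one scores 1/n_bins."""
    hist, _ = np.histogram(pixels, bins=n_bins, range=(0, 256))
    p = hist / max(hist.sum(), 1)   # normalized probability distribution
    return float(np.sum(p ** 2))

# a uniform patch is maximally consistent; an evenly spread patch is not
uniform = np.full(100, 40)
mixed = np.arange(0, 256, 2)
```

During the iterative pixel exchange, a swap is kept only if it increases the weighted sum of this measure and the analogous boundary-smoothness measure.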
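The fusion energy of claim 4 (a data term penalizing color spread within a same-label region, and a smoothing term penalizing similar-colored neighbors split across labels) can be evaluated for a candidate labeling as below; graph cut would then search for the labeling minimizing this value. The exponential similarity weight is an assumption, since the claim only requires a penalty that grows with color similarity:

```python
import numpy as np

def fusion_energy(labels, colors, edges, lam=1.0):
    """Fusion energy of a candidate superpixel labeling:
    data term   -- color-space difference between adjacent superpixels that
                   share a label (pays for merging dissimilar superpixels);
    smoothing term -- exp(-difference) whenever adjacent superpixels carry
                   different labels (pays for splitting similar ones)."""
    data = smooth = 0.0
    for i, j in edges:   # adjacency between superpixels
        diff = np.linalg.norm(np.asarray(colors[i]) - np.asarray(colors[j]))
        if labels[i] == labels[j]:
            data += diff
        else:
            smooth += np.exp(-diff)
    return data + lam * smooth

# three superpixels in a chain: two near-identical colors, one very different;
# merging the identical pair while splitting off the outlier is cheapest
colors = [[5], [5], [200]]
edges = [(0, 1), (1, 2)]
e_merge = fusion_energy([0, 0, 1], colors, edges)   # best labeling
e_split = fusion_energy([0, 1, 2], colors, edges)   # splits the identical pair
e_wrong = fusion_energy([0, 1, 1], colors, edges)   # merges the outlier
```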