CN-120070156-B - Image processing method and device
Abstract
Embodiments of this application provide an image processing method and device in the technical field of terminals. The electronic device acquires a mask map corresponding to a sharp image and a ternary map (trimap) corresponding to the sharp image, determines the positions of edge features from the mask map, and determines a transition region containing those edge features from the ternary map. Based on the edge-feature positions, the electronic device then extracts the corresponding edge region from the sharp image and fuses it into the blurred image, solving the problem of edges being falsely blurred, or the background being falsely sharp, in the blurred image.
Inventors
- Mo Yanquan
Assignees
- Honor Device Co., Ltd. (荣耀终端股份有限公司)
Dates
- Publication Date
  - 2026-05-12
- Application Date
  - 2023-11-23
Claims (11)
- 1. An image processing method, comprising: acquiring a first image in response to a photographing operation; obtaining a blurred image corresponding to the first image, wherein the blurred image is obtained by blurring a background of the first image; inputting the first image into a first model, and outputting a mask map corresponding to the first image and a ternary map corresponding to the first image, wherein the first model comprises a shared encoder, a semantic segmentation branch, a detail prediction branch and a fusion branch, the shared encoder is used for extracting image features of the first image, the semantic segmentation branch is used for performing semantic segmentation on the first image to obtain a foreground of the first image, the detail prediction branch is used for obtaining edge features of the first image, the fusion branch is used for fusing the foreground of the first image and the edge features of the first image to obtain the mask map corresponding to the first image and the ternary map corresponding to the first image, the mask map comprises the positions of the edge features, and the ternary map comprises a transition region obtained based on the edge features; and determining, from the first image using the positions of the edge features obtained from the mask map, a first region formed by the edge features, and overlaying the first region onto the blurred image to obtain a target image, wherein the first region is located within the transition region.
- 2. The method according to claim 1, wherein the first model is trained by: inputting a training image into the shared encoder to obtain image features of the training image; inputting the image features of the training image into the semantic segmentation branch to obtain an output result of the semantic segmentation branch; acquiring a Laplacian high-frequency image corresponding to the training image, and inputting the Laplacian high-frequency image into the detail prediction branch to obtain an output result of the detail prediction branch; inputting the output result of the semantic segmentation branch and the output result of the detail prediction branch into the fusion branch to obtain a predicted mask map and a predicted ternary map; and obtaining the trained first model when the difference between the predicted mask map and the mask map ground truth satisfies a first loss function and the difference between the predicted ternary map and the ternary map ground truth satisfies a second loss function.
- 3. The method according to claim 2, wherein the method further comprises: performing Gaussian blur processing on the edges of the mask map ground truth to obtain a second image; and obtaining the trained semantic segmentation branch when the difference between the second image and the output result of the semantic segmentation branch satisfies a fourth loss function.
- 4. The method according to claim 2 or 3, wherein the method further comprises: obtaining, according to the transition region in the ternary map ground truth, the image within the transition region from the mask map ground truth to obtain a third image; and obtaining the trained detail prediction branch when the difference between the third image and the output result of the detail prediction branch satisfies a third loss function.
- 5. The method according to claim 2, wherein the method further comprises: acquiring a second region containing edge features from the training image based on a semantic segmentation method; screening the second region using connected regions to obtain a third region; dilating the third region to obtain a fourth region; dilating the mask map ground truth to obtain a dilated mask map; taking the intersection of the dilated mask map and the fourth region to obtain a first transition region; and determining the ternary map ground truth based on the first transition region and the foreground in the dilated mask map.
- 6. The method of claim 1, wherein the edge features comprise one or more of portrait hair, wool fibers of a sweater, animal fur, or plant branches.
- 7. The method of claim 1, wherein acquiring the first image in response to the photographing operation comprises: acquiring, in response to the photographing operation, at least two images with different exposure times, and performing image fusion on the at least two images with different exposure times to obtain the first image.
- 8. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, causes the electronic device to perform the method of any one of claims 1-7.
- 9. A computer readable storage medium storing a computer program which, when executed by a processor, causes a computer to perform the method of any one of claims 1-7.
- 10. A computer program product comprising a computer program which, when executed by a processor, causes a computer to perform the method of any one of claims 1-7.
- 11. A chip comprising a processor for reading instructions stored in a memory, wherein the instructions, when executed by the processor, cause the chip to implement the method of any one of claims 1-7.
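Claim 2 feeds the detail prediction branch a Laplacian high-frequency image of the training input. As an illustration only, here is a minimal NumPy sketch of extracting such a high-frequency image; the patent does not specify the exact operator, so the standard 3x3 Laplacian kernel and zero padding used here are assumptions:

```python
import numpy as np

# Common 3x3 Laplacian kernel (an assumption; the patent does not
# specify which Laplacian operator is used).
LAPLACE_KERNEL = np.array([[0.0,  1.0, 0.0],
                           [1.0, -4.0, 1.0],
                           [0.0,  1.0, 0.0]])

def laplace_high_freq(image: np.ndarray) -> np.ndarray:
    """Convolve a grayscale image with a 3x3 Laplacian kernel
    (zero padding) so that only high-frequency detail such as
    edges survives; flat regions map to zero."""
    h, w = image.shape
    padded = np.pad(image.astype(np.float64), 1)
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += LAPLACE_KERNEL[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out
```

On a constant image the interior response is zero, while a step edge produces a nonzero response along the edge, which is exactly the high-frequency signal the detail prediction branch is meant to consume.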
Description
Image processing method and device

Technical Field
The present application relates to the field of terminal technologies, and in particular to an image processing method and apparatus.

Background
With the popularization and development of the internet, the functional demands on electronic devices are becoming more and more diverse. For example, an electronic device can support not only a shooting function but also blurring (bokeh) processing on the captured picture, so that the user sees an image with a sharp foreground and a blurred background, giving the picture a better sense of depth. However, owing to limitations of the device's hardware modules, the blurred image may exhibit falsely blurred edges or a falsely sharp background.

Disclosure of Invention
The embodiments of this application provide an image processing method and device for improving the blurring effect on edge features. In a first aspect, an embodiment of this application provides an image processing method in which a first image is acquired in response to a photographing operation; a blurred image corresponding to the first image, the positions of edge features in the first image, and a transition region formed by the edge features in the first image are acquired, the blurred image being obtained after the background of the first image is blurred; a first region formed by the edge features is determined from the first image using the positions of the edge features; and the first region is fused into the blurred image to obtain a target image, where the first region is located within the transition region.
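The fusion step of the first aspect can be sketched as a mask-guided composite: sharp edge pixels replace blurred pixels, but only inside the transition region. This is a minimal NumPy illustration under assumed conventions (masks as float arrays in [0, 1] with 1 marking an edge feature); the patent's actual fusion may differ:

```python
import numpy as np

def compose_edges(sharp: np.ndarray, blurred: np.ndarray,
                  edge_mask: np.ndarray, transition: np.ndarray) -> np.ndarray:
    """Paste edge pixels from the sharp image into the blurred image.

    edge_mask marks edge-feature pixels (from the mask map);
    transition marks the trimap's unknown band. The pasted region
    is restricted to the transition band, matching the requirement
    that the first region lie within the transition region.
    """
    region = edge_mask * transition  # first region, limited to the transition band
    if sharp.ndim == 3:              # broadcast over color channels if present
        region = region[..., None]
    return region * sharp + (1.0 - region) * blurred
```

Pixels outside the transition band always come from the blurred image, so a stray mask activation in the background cannot leak sharp detail into the bokeh.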
Through the positions of the edge features, the electronic device can accurately select from the first image the first region formed by the sharp edge features, and by fusing the first region into the blurred image it solves the problem of edges being falsely blurred or the background being falsely sharp in the blurred image. In one possible implementation, before acquiring the positions of the edge features in the first image and the transition region formed by the edge features in the first image, the method further includes acquiring a mask map corresponding to the first image and a ternary map corresponding to the first image, where the mask map includes the positions of the edge features and the ternary map includes the transition region. The mask map corresponding to the first image may be the first mask map described in the embodiments of this application, and the ternary map corresponding to the first image may be the first ternary map described in the embodiments of this application. The electronic device can thus acquire the mask map corresponding to the sharp image and the ternary map corresponding to the sharp image, determine the positions of the edge features from the mask map, and determine the transition region containing the edge features from the ternary map; the electronic device can then extract the edge region corresponding to those positions from the sharp image and fuse it into the blurred image, resolving the situation where edges in the blurred image are falsely blurred or the background is falsely sharp.
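A ternary map (trimap) partitions the image into definite foreground, definite background, and an unknown transition band around object boundaries. As a simplified illustration, the following NumPy sketch builds such a trimap from a binary mask using morphological erosion and dilation; claim 5's actual construction additionally screens candidate regions by connected components, which this sketch omits:

```python
import numpy as np

def dilate(mask: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Binary dilation with a 3x3 square structuring element."""
    m = mask.astype(bool)
    for _ in range(iterations):
        p = np.pad(m, 1)
        h, w = m.shape
        grown = np.zeros_like(m)
        for dy in range(3):
            for dx in range(3):
                grown |= p[dy:dy + h, dx:dx + w]
        m = grown
    return m

def erode(mask: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Binary erosion, implemented as dilation of the complement."""
    return ~dilate(~mask.astype(bool), iterations)

def make_trimap(mask: np.ndarray, band: int = 1) -> np.ndarray:
    """Trimap values: 1.0 = foreground, 0.5 = unknown transition band,
    0.0 = background. The band width grows with `band` iterations."""
    fg = erode(mask, band)
    unknown = dilate(mask, band) & ~fg
    tri = np.zeros(mask.shape)
    tri[unknown] = 0.5
    tri[fg] = 1.0
    return tri
```

The unknown band is exactly where fine edge features such as hair strands live, which is why the first region pasted from the sharp image is constrained to lie inside it.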
In one possible implementation, obtaining the mask map corresponding to the first image and the ternary map corresponding to the first image comprises inputting the first image into a first model and outputting the mask map corresponding to the first image and the ternary map corresponding to the first image, where the first model comprises a shared encoder, a semantic segmentation branch, a detail prediction branch and a fusion branch; the shared encoder is used for extracting image features of the first image, the semantic segmentation branch is used for performing semantic segmentation on the first image to obtain a foreground of the first image, the detail prediction branch is used for obtaining edge features of the first image, and the fusion branch is used for fusing the foreground of the first image and the edge features of the first image to obtain the mask map corresponding to the first image and the ternary map corresponding to the first image. Through the target neural network model, the electronic device can output a mask map that identifies sharp edges and a ternary map that frames the edge features within a transition region, so that the precise position and extent of the edge features can be determined using the mask map and the ternary map. In one possible implementation, the method further comprises inputting a training image into the shared encoder to obtain image features of the training image, inputting the image features of the training image into the semanti