CN-121998834-A - Image processing method and related device
Abstract
An embodiment of the application provides an image processing method and a related device in the field of terminal technologies. The method comprises: processing the image content of a first type region in an image with a first blur filtering algorithm to obtain first image content; and processing the image content of a second type region in the image with a second blur filtering algorithm to obtain second image content. The first type region and the second type region both belong to the region outside the depth of field of the image, the depth jumps in the second type region are gentler than those in the first type region, the first blur filtering algorithm supports different filtering radii at different pixel positions, and the second blur filtering algorithm uses the same filtering radius at every pixel position. Because the regions of the image with larger depth jumps are processed with the first blur filtering algorithm, the filtering radius can differ from pixel to pixel there, so the edge portions of the processed image look more natural.
Inventors
- Mo Yanquan
Assignees
- Honor Device Co., Ltd. (荣耀终端股份有限公司)
Dates
- Publication Date: 2026-05-08
- Application Date: 2024-11-08
Claims (14)
- 1. An image processing method, comprising: acquiring an image; processing the image content of a first type region in the image with a first blur filtering algorithm to obtain first image content; processing the image content of a second type region in the image with a second blur filtering algorithm to obtain second image content, wherein the first type region and the second type region both belong to the region outside the depth of field of the image, the depth jumps in the second type region are gentler than those in the first type region, the first blur filtering algorithm supports different filtering radii at different pixel positions, and the second blur filtering algorithm uses the same filtering radius at every pixel position; and obtaining a processed image from the first image content, the second image content, and the image content of the region within the depth of field of the image.
- 2. The method of claim 1, further comprising: obtaining the filtering radius of each pixel position in the region outside the depth of field of the image according to depth information and/or parallax information of the image; and obtaining the first type region and the second type region according to those filtering radii, wherein the first type region comprises a first region in which the difference between the filtering radii of adjacent pixel positions is greater than or equal to a difference threshold, and the second type region comprises a second region in which the difference between the filtering radii of adjacent pixel positions is smaller than the difference threshold.
- 3. The method of claim 2, wherein the first type region further comprises a third region, the third region being an edge-expansion region obtained by expanding the edge of the first region outwards by a preset number of pixels.
- 4. The method of any of claims 1-3, wherein the region outside the depth of field comprises a region whose depth is less than or equal to a front depth-of-field threshold and a region whose depth is greater than or equal to a rear depth-of-field threshold, and the region within the depth of field comprises a region whose depth is greater than the front depth-of-field threshold and less than the rear depth-of-field threshold; the front depth-of-field threshold and the rear depth-of-field threshold are related to one or more of the following: the focal length of the camera capturing the image, the minimum allowable circle of confusion of that camera, the aperture size of that camera, and the depth corresponding to the focus position.
- 5. The method of claim 4, wherein the front depth-of-field threshold satisfies the following formula: and the rear depth-of-field threshold satisfies the following formula: wherein f represents the focal length of the camera capturing the image, δ represents the minimum allowable circle-of-confusion size of that camera, Z_f represents the depth corresponding to the focus position, and F represents the aperture size of that camera.
- 6. The method of any of claims 2-5, wherein the filtering radius in the region outside the depth of field satisfies the following formula: wherein Z(x, y) represents the depth information of the image, f represents the focal length of the camera capturing the image, Z_f represents the depth corresponding to the focus position, and F represents the aperture size of that camera.
- 7. The method of any of claims 2-6, wherein the depth of the out-of-depth region is configured to be a fixed value.
- 8. The method of any of claims 1-7, wherein, before the image content of the first type region in the image is processed with the first blur filtering algorithm and the image content of the second type region in the image is processed with the second blur filtering algorithm, the method further comprises: enhancing the brightness of a highlight region within the region outside the depth of field of the image, wherein the highlight region comprises the portion of the region outside the depth of field whose brightness is greater than a brightness threshold.
- 9. The method of claim 8, wherein the highlight region satisfies the following formula: mask1 = clip(tanh(10·(Image_in − Thr)), 0, 1) · mask2, wherein Image_in represents the image, Thr represents a second threshold value, · represents multiplication, mask2 represents the region outside the depth of field, clip limits the value of tanh(10·(Image_in − Thr)) to the interval [0, 1], and tanh represents the hyperbolic tangent function; and the brightness of the highlight region is enhanced so that the following formula is satisfied: Image_out = Image_in · (1 + mask1 · ratio_enhance), wherein · represents multiplication and ratio_enhance represents an enhancement coefficient.
- 10. The method according to any one of claims 1-9, wherein the image content of the region within the depth of field comprises image content obtained by edge filtering an image of the region within the depth of field in the image.
- 11. An electronic device, comprising a processor and a memory, wherein the memory stores computer-executable instructions, and the processor executes the computer-executable instructions stored in the memory to cause the electronic device to perform the method of any one of claims 1-10.
- 12. A computer readable storage medium storing a computer program, which when executed by a processor implements the method according to any one of claims 1-10.
- 13. A system on a chip comprising at least one processor and a communication interface, the communication interface and the at least one processor being interconnected by a wire, the at least one processor being configured to execute a computer program or instructions to perform the method of any of claims 1-10.
- 14. A computer program product comprising a computer program which, when run, causes a computer to perform the method of any of claims 1-10.
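The highlight boost in claims 8-9 is concrete enough to sketch directly. The snippet below is a minimal NumPy illustration, not the patent's implementation; the function name and the default values of `thr` and `ratio_enhance` are assumptions chosen for the example.

```python
import numpy as np

def enhance_highlights(image, out_of_dof_mask, thr=0.8, ratio_enhance=0.5):
    """Sketch of the claim-9 highlight boost (thr / ratio_enhance values
    are illustrative assumptions, not taken from the patent).

    mask1     = clip(tanh(10 * (image - thr)), 0, 1) * mask2
    image_out = image * (1 + mask1 * ratio_enhance)
    """
    mask1 = np.clip(np.tanh(10.0 * (image - thr)), 0.0, 1.0) * out_of_dof_mask
    return image * (1.0 + mask1 * ratio_enhance)

# Dim pixels give tanh(...) < 0, which clip sends to 0, so they pass
# through unchanged; bright out-of-focus pixels are scaled up by up to
# (1 + ratio_enhance). Pixels inside the depth of field (mask2 == 0)
# are never boosted.
img = np.array([[0.2, 0.95], [0.95, 0.95]])
mask2 = np.array([[1.0, 1.0], [1.0, 0.0]])  # bottom-right pixel is in the DOF
out = enhance_highlights(img, mask2)
```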
Description
Image processing method and related device

Technical Field

The present application relates to the field of terminal technologies, and in particular to an image processing method and a related device.

Background

As electronic devices have developed, they can process images; for example, an electronic device can apply blurring to an image to generate a blurred image with a foreground effect, where the foreground effect is a visual effect in which part of the image content is sharp and part is blurred. However, the edge portions of such a blurred image may look unnatural, for example cracked or hardened.

Disclosure of Invention

The embodiment of the application provides an image processing method and a related device, applied in the field of terminal technologies, which can make the edge portions of a blurred image soft and its halos natural.

In a first aspect, an embodiment of the present application provides an image processing method.
The method comprises the following steps: acquiring an image; processing the image content of a first type region in the image with a first blur filtering algorithm to obtain first image content; processing the image content of a second type region in the image with a second blur filtering algorithm to obtain second image content, wherein the first type region and the second type region both belong to the region outside the depth of field of the image, the depth jumps in the second type region are gentler than those in the first type region, the first blur filtering algorithm supports different filtering radii at different pixel positions, and the second blur filtering algorithm uses the same filtering radius at every pixel position; and obtaining the processed image from the first image content, the second image content, and the image content of the region within the depth of field of the image. In this way, because the regions of the image with larger depth jumps are processed with the first blur filtering algorithm, the filtering radius corresponding to each pixel position there can differ, so the edge portions of the processed image are soft and its halos natural.
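The two blur filters and the final composition can be sketched as follows. This is a naive NumPy illustration under stated assumptions: the patent does not specify the blur kernel, so a box blur stands in for both filters, and all function names and masks are hypothetical.

```python
import numpy as np

def blur_variable(image, radius_map):
    """First blur filter: a per-pixel box blur whose radius may differ
    at every position (naive O(h*w*r^2) sketch)."""
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            r = int(radius_map[y, x])
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()
    return out

def blur_fixed(image, radius):
    """Second blur filter: one radius shared by every pixel position."""
    return blur_variable(image, np.full(image.shape, radius))

def process(image, first_mask, second_mask, in_dof_mask, radius_map, fixed_r):
    """Compose the processed image from the three regions (as in claim 1):
    variable-radius blur for the first type region, fixed-radius blur for
    the second type region, original content inside the depth of field."""
    first = blur_variable(image, radius_map)
    second = blur_fixed(image, fixed_r)
    return first * first_mask + second * second_mask + image * in_dof_mask
```

With radius 0 both filters reduce to the identity, which makes the composition easy to sanity-check on small arrays.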
In a possible implementation, the filtering radius of each pixel position in the region outside the depth of field of the image is obtained from depth information and/or parallax information of the image, and the first type region and the second type region are obtained from those filtering radii, where the first type region comprises a first region in which the difference between the filtering radii of adjacent pixel positions is greater than or equal to a difference threshold, and the second type region comprises a second region in which that difference is smaller than the difference threshold. Obtaining the first type region and the second type region from the filtering radii in this way prepares them to be processed with different blur filtering algorithms. In a possible implementation, the first type region further includes a third region, an edge-expansion region obtained by expanding the edge of the first region outwards by a preset number of pixels. Expanding the edge of the first region outwards reduces abrupt changes between the first type region and the second type region, gives a smoother transition between them, facilitates their fusion later in the method, prevents distortion at their edges, and thus keeps transitions in the processed image soft.
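The classification described above can be sketched as follows: mark a pixel as first-type when a neighbouring filtering radius differs from its own by at least the threshold, then grow that region by a few pixels to form the edge-expansion (third) region. This is an illustrative NumPy sketch; the neighbourhood definition, the 4-neighbour dilation, and the default values of `diff_thr` and `expand_px` are assumptions, not the patent's method.

```python
import numpy as np

def classify_regions(radius_map, out_mask, diff_thr=2.0, expand_px=1):
    """Split the out-of-depth-of-field area into first/second type regions.
    A pixel is first-type when the filtering radius of a horizontal or
    vertical neighbour differs from its own by >= diff_thr."""
    # Absolute radius differences to the right / down neighbours.
    dx = np.abs(np.diff(radius_map, axis=1))
    dy = np.abs(np.diff(radius_map, axis=0))
    first = np.zeros(radius_map.shape, dtype=bool)
    first[:, :-1] |= dx >= diff_thr
    first[:, 1:] |= dx >= diff_thr
    first[:-1, :] |= dy >= diff_thr
    first[1:, :] |= dy >= diff_thr
    # Third region: expand the first region outwards by expand_px pixels
    # (a simple 4-neighbour binary dilation).
    for _ in range(expand_px):
        grown = first.copy()
        grown[:, :-1] |= first[:, 1:]
        grown[:, 1:] |= first[:, :-1]
        grown[:-1, :] |= first[1:, :]
        grown[1:, :] |= first[:-1, :]
        first = grown
    first &= out_mask.astype(bool)
    second = out_mask.astype(bool) & ~first
    return first, second

# A jump of 4 between the last two radii marks both pixels first-type;
# one dilation step pulls in one more neighbour.
radius_map = np.array([[1.0, 1.0, 1.0, 5.0]])
first, second = classify_regions(radius_map, np.ones((1, 4)))
```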
In one possible implementation, the region outside the depth of field comprises a region whose depth is less than or equal to a front depth-of-field threshold and a region whose depth is greater than or equal to a rear depth-of-field threshold, and the region within the depth of field comprises a region whose depth is greater than the front depth-of-field threshold and less than the rear depth-of-field threshold, where the two thresholds are related to one or more of the following: the focal length of the camera capturing the image, the minimum allowable circle of confusion of that camera, the aperture size of that camera, and the depth information corresponding to the focus position. In this way, comparing the depth against the front and rear depth-of-field thresholds yields the region within the depth of field and the region outside the depth of field.
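The threshold comparison in this paragraph reduces to two boolean masks. A minimal NumPy sketch, assuming a per-pixel depth map and given front/rear thresholds (how the patent derives the thresholds themselves is not reproduced here):

```python
import numpy as np

def split_by_dof(depth, z_front, z_rear):
    """Partition pixels by depth: depth <= z_front or depth >= z_rear is
    outside the depth of field; z_front < depth < z_rear is inside it."""
    out_of_dof = (depth <= z_front) | (depth >= z_rear)
    in_dof = ~out_of_dof
    return in_dof, out_of_dof

# Boundary depths (== threshold) fall outside the depth of field.
depth = np.array([0.5, 1.0, 2.0, 5.0])
in_dof, out_of_dof = split_by_dof(depth, 1.0, 4.0)
```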