CN-122027903-A - Image processing method and device

CN 122027903 A

Abstract

The application provides an image processing method and device, relating to the technical field of image capture. In the method, a first device acquires a first image containing a shooting object. In response to an editing operation on the first image, the first device displays a second image. The second image is generated based on the first image: the region where the shooting object is located appears sharp, while the background region, that is, the region outside the region where the shooting object is located, exhibits a motion blur effect along a first blur direction. By editing an already-captured image, an image can thus be obtained in which the shooting object is sharp and the background is motion-blurred along the blur direction. This improves the success rate of producing a panning (follow-shot) effect and improves the user experience.

Inventors

  • ZENG JUNJIE
  • ZHA YUFENG
  • YANG BIN
  • WANG MIAOFENG
  • WANG YINTING

Assignees

  • Huawei Technologies Co., Ltd. (华为技术有限公司)

Dates

Publication Date
2026-05-12
Application Date
2024-11-22

Claims (17)

  1. An image processing method, applied to a first device, comprising: acquiring a first image, wherein the first image comprises a shooting object; and displaying a second image in response to an editing operation for the first image; wherein the second image is generated based on the first image, the region where the shooting object is located in the second image exhibits a sharp effect, the background region in the second image exhibits a motion blur effect along a first blur direction, and the background region is the region outside the region where the shooting object is located.
  2. The method according to claim 1, further comprising: when optical flow information corresponding to the first image is detected, determining an optical flow direction in the optical flow information as the first blur direction; and when no optical flow information corresponding to the first image is detected, acquiring the first blur direction.
  3. The method of claim 2, wherein acquiring the first blur direction comprises: in response to an input operation for the first image, determining a blur direction corresponding to the input operation as the first blur direction; or determining the first blur direction according to the first image and a direction rule.
  4. The method according to any of claims 1-3, wherein the background region in the second image exhibits the motion blur effect according to a first blur strength, the method further comprising: displaying an updated second image in response to a change operation for the second image; wherein the change operation is used to change the blur direction and/or blur strength of the motion blur effect, the updated second image is generated based on the first image, the region where the shooting object is located in the updated second image exhibits a sharp effect, the background region in the updated second image exhibits the motion blur effect according to a second blur direction and/or a second blur strength, and the background region is the region outside the region where the shooting object is located.
  5. The method of any of claims 1-4, wherein the shooting object in the second image includes a first object and a second object, the first object being in the region that exhibits the sharp effect, the method further comprising: displaying an updated second image in response to a switching operation for a shooting object in the second image; wherein the updated second image is generated based on the first image, the region where the second object is located in the updated second image exhibits a sharp effect, the background region in the updated second image exhibits the motion blur effect along the first blur direction, and the background region is the region outside the region where the second object is located.
  6. The method of any of claims 1-5, wherein displaying a second image in response to an editing operation for the first image comprises: displaying an image editing interface, wherein the image editing interface comprises the first image and a panning-effect control corresponding to the first image; and displaying the second image in response to a triggering operation for the panning-effect control.
  7. The method of any of claims 1-6, wherein acquiring a first image comprises: acquiring metadata of the first image in response to a starting operation of a shooting function, wherein the metadata comprises multiple frames of RAW images; and generating the first image according to the metadata.
  8. The method of claim 7, further comprising: sending the metadata corresponding to the first image to a second device, so that the second device generates the second image.
  9. The method of claim 8, wherein sending the metadata corresponding to the first image to a second device comprises: sending metadata corresponding to the second image and the first image to the second device, so that the second device can update the second image to present the motion blur effect.
  10. The method of claim 8 or 9, wherein the second device comprises a server or a trusted device of the first device.
  11. The method of claim 7, wherein generating the first image from the metadata comprises: determining a target-frame RAW image according to a reference-frame RAW image and adjacent-frame RAW images among the multiple frames of RAW images, wherein the adjacent-frame RAW images comprise two RAW images adjacent to the reference-frame RAW image; processing the target-frame RAW image using a YUV-domain algorithm to obtain a reference-frame YUV image; and encoding the reference-frame YUV image to generate the first image.
  12. The method of claim 11, further comprising: processing the adjacent-frame RAW images using a YUV-domain algorithm to obtain target-adjacent-frame YUV images; performing subject detection on the first image to obtain first detection information; performing subject detection on the target-adjacent-frame YUV images to obtain second detection information; performing optical flow detection on the target-adjacent-frame YUV images; when optical flow information of the target-adjacent-frame YUV images is detected, determining a subject region of the first image and a background region of the first image according to the first image, the first detection information, the second detection information, and the optical flow information; and blurring the background region of the first image, and superimposing the blurred background region of the first image with the subject region of the first image to generate the second image.
  13. The method of claim 12, wherein the first detection information comprises one or more of a first detection frame of the subject, a region size corresponding to the first detection frame, or coordinates of the first detection frame; and the second detection information comprises one or more of a second detection frame of the subject, a region size corresponding to the second detection frame, or coordinates of the second detection frame.
  14. The method of claim 11, further comprising: processing the adjacent-frame RAW images using a YUV-domain algorithm to obtain target-adjacent-frame YUV images; performing optical flow detection on the target-adjacent-frame YUV images; when optical flow information of the target-adjacent-frame YUV images is detected, processing the first image and the optical flow information using a motion segmentation algorithm to obtain a subject segmentation result; and inputting the subject segmentation result and the optical flow information into a first model to generate the second image.
  15. An electronic device comprising a display screen for displaying an image, one or more processors, and a memory coupled to the one or more processors, wherein the memory stores computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the image processing method of any of claims 1-14.
  16. A computer-readable storage medium having instructions stored therein which, when executed on a computer, cause the computer to perform the image processing method of any of claims 1-14.
  17. A computer program product comprising instructions which, when executed by an electronic device, cause the electronic device to perform the image processing method of any of claims 1-14.
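The core operation of claim 1, keeping the region of the shooting object sharp while the background is motion-blurred along a blur direction, can be sketched as follows. This is a minimal grayscale illustration in plain Python, not the patented implementation: the function name `pan_blur`, the boolean subject mask, and the simple directional box-blur kernel are all assumptions made for clarity.

```python
def pan_blur(image, subject_mask, direction, strength):
    """Composite a sharp subject with a background blurred along `direction`.

    image:        2D list of grayscale floats (the "first image").
    subject_mask: 2D list of bools; True marks the subject region.
    direction:    (dy, dx) step vector for the blur direction.
    strength:     number of blur taps taken on each side of a pixel.
    Returns the "second image" as a new 2D list.
    """
    h, w = len(image), len(image[0])
    dy, dx = direction
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if subject_mask[y][x]:
                continue  # subject region stays sharp
            acc, n = 0.0, 0
            # average samples along the blur direction (background only)
            for t in range(-strength, strength + 1):
                yy, xx = y + round(t * dy), x + round(t * dx)
                if 0 <= yy < h and 0 <= xx < w:
                    acc += image[yy][xx]
                    n += 1
            out[y][x] = acc / n
    return out
```

A production pipeline would instead blur with a proper line-shaped convolution kernel and feather the subject/background boundary, but the compositing logic is the same: blur only the background, then overlay the unmodified subject region.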

Description

Image processing method and device

Technical Field

The present application relates to the field of image capture technologies, and in particular to an image processing method and apparatus.

Background

As the camera functions of electronic devices become increasingly popular, users' requirements for the performance and effects of these functions also grow. More and more users shoot moving objects by panning the camera, hoping to obtain an image in which the moving object is sharp and the background is blurred. When shooting in panning mode, the user must keep the electronic device (such as a mobile phone) as stable as possible, accurately judge the movement speed and direction of the shooting object, and then track the object while shooting. This shooting method places high demands on the user and increases the difficulty of capturing the target image (such as a panning image). Moreover, during shooting the electronic device is prone to shake or jitter, resulting in poor image quality.

Disclosure of Invention

The application provides an image processing method and device. By editing an already-captured image, an image can be obtained in which the region of the shooting object is sharp and the background region exhibits a motion blur effect along a blur direction. This improves the success rate of producing a panning effect and improves the user experience. To achieve the above purpose, the application adopts the following technical solutions.

In a first aspect, the present application provides an image processing method applied to a first device, the method comprising: the first device may acquire a first image including a shooting object; the first device may also display a second image in response to an editing operation for the first image.
The second image is generated based on the first image: the region where the shooting object is located in the second image appears sharp, the background region exhibits a motion blur effect along a first blur direction, and the background region is the region outside the region where the shooting object is located. Thus, without requiring professional shooting technique from the user, an image with a panning effect, that is, an image in which the region of the shooting object is sharp and the background region is motion-blurred along the first blur direction, can be obtained by editing the captured image. This improves the success rate of producing a panning effect and improves the user experience.

In one implementation, when the first device detects optical flow information corresponding to the first image, it may determine the optical flow direction in that information as the first blur direction. The optical flow direction characterizes the direction of motion of the shooting object, so the blur direction of the motion blur effect in the subsequently generated second image coincides with the object's direction of motion, improving the realism of the panning-effect image and the user's visual experience. When no optical flow information corresponding to the first image is detected, the first device may acquire the first blur direction by other means.
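The optical-flow-based choice of blur direction described above can be sketched as follows: sum the sufficiently large flow vectors and normalize the result. This is a hedged illustration; the magnitude threshold, the representation of the flow field as a flat list of `(dx, dy)` vectors, and the `None` fallback (the no-flow-detected case, where the claim 2 fallback of acquiring a direction by other means applies) are assumptions. A real pipeline would compute dense flow between adjacent frames with an algorithm such as Farneback's.

```python
import math

def dominant_flow_direction(flow, mag_thresh=0.5):
    """Estimate a single blur direction from an optical-flow field.

    flow:       iterable of (dx, dy) motion vectors.
    mag_thresh: vectors shorter than this are treated as noise.
    Returns a unit-length (dx, dy) tuple, or None when no motion
    is detected and the blur direction must be acquired elsewhere.
    """
    sx = sy = 0.0
    for dx, dy in flow:
        if math.hypot(dx, dy) >= mag_thresh:
            sx += dx
            sy += dy
    norm = math.hypot(sx, sy)
    if norm == 0.0:
        return None  # no usable optical flow detected
    return (sx / norm, sy / norm)
```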
Even if the shooting object is not moving, an image with a panning effect can still be generated. This improves the success rate of producing a panning effect and improves the user experience. In one implementation, in the process of acquiring the first blur direction, the first device determines, in response to an input operation for the first image, the blur direction corresponding to the input operation as the first blur direction; or it determines the first blur direction according to the first image and a direction rule. The first device can thus acquire the first blur direction in either of these two ways, supporting both user-specified blur directions and automatic determination of the first blur direction. Even when the first image contains a shooting object that is not moving, an image with a panning effect can be presented. At the same time, the user can also change