CN-122027880-A - Image generation method and electronic equipment
Abstract
The application provides an image generation method and an electronic device. The method comprises: acquiring at least one image frame, wherein the image frames are captured of the same motion scene, the motion scene comprises at least one moving subject, and each image frame comprises a subject area corresponding to the moving subject and a non-subject area other than the moving subject; and generating a target image based on a target non-subject area among the non-subject areas and a target subject area among the subject areas, wherein, within the same shooting period, the exposure time of the target non-subject area is longer than that of the target subject area, or, within the same image frame, the resolution of the target non-subject area is higher than that of the target subject area. Therefore, when the electronic device shoots a motion scene, the generated image is of high quality, and the user's visual experience can be improved.
Inventors
- Xiao Bin
- Suo Yaji
- Li Huaiqian
Assignees
- Honor Device Co., Ltd. (荣耀终端股份有限公司)
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2024-11-12
Claims (14)
- 1. An image generation method, applied to an electronic device, comprising: acquiring at least one image frame, wherein the image frames are captured of the same motion scene, the motion scene comprises at least one moving subject, and each image frame comprises a subject area corresponding to the moving subject and non-subject areas other than the moving subject; and generating a target image based on a target non-subject area among the non-subject areas and a target subject area among the subject areas, wherein the exposure time of the target non-subject area is longer than that of the target subject area within the same shooting period, or the resolution of the target non-subject area is higher than that of the target subject area within the same image frame.
- 2. The image generation method according to claim 1, wherein the acquiring at least one image frame comprises: displaying a first interface in a camera application, and displaying a viewfinder window on the first interface; and in response to the viewfinder window framing the motion scene, shooting the motion scene to acquire the image frames.
- 3. The image generation method according to claim 1, wherein the acquiring at least one image frame comprises: displaying a first interface in a camera application, displaying a viewfinder window on the first interface, and displaying a first control on the first interface, wherein the viewfinder window is used for framing the motion scene, and the first control is used for shooting the motion scene in the viewfinder window; and in response to a first operation on the first control, shooting the motion scene to acquire the image frames.
- 4. The image generation method according to claim 2 or 3, wherein the image frames comprise at least one first image frame and at least one second image frame, and the acquiring at least one image frame comprises: in the case of shooting the motion scene in a first zoom mode, acquiring, in each shooting period, one first image frame and one second image frame based on a single-frame progressive stabilized image processing mode, wherein the exposure time of the first image frame is longer than that of the second image frame, and the zoom magnification corresponding to the first zoom mode is smaller than or equal to a first threshold.
- 5. The image generation method according to claim 4, further comprising, after the acquiring at least one image frame: performing motion detection processing on the first image frame to determine the non-subject area corresponding to the first image frame, wherein the non-subject area corresponding to the first image frame serves as the target non-subject area; and performing motion detection processing on the second image frame to determine the subject area corresponding to the second image frame, wherein the subject area corresponding to the second image frame serves as the target subject area.
- 6. The image generation method according to claim 5, wherein the generating a target image based on a target non-subject area among the non-subject areas and a target subject area among the subject areas comprises: performing image fusion processing on the target non-subject area and the target subject area corresponding to the same shooting period to generate at least one first fused image frame, wherein each shooting period corresponds to one first fused image frame; and performing temporal multi-frame fusion processing on the first fused image frames to generate the target image.
- 7. The image generation method according to claim 2 or 3, wherein the acquiring at least one image frame comprises: in the case of shooting the motion scene in a second zoom mode, acquiring N image frames based on a Hex image processing mode, where N is a positive integer greater than or equal to 1, wherein the Hex image processing mode shoots one image frame at a time, the shooting time sequence of each image frame is different, and the zoom magnification corresponding to the second zoom mode is greater than the first threshold.
- 8. The image generation method according to claim 7, further comprising, after the acquiring at least one image frame: selecting any one image frame from the N image frames as a reference frame, and taking the N-1 image frames other than the reference frame as comparison frames; determining whether the moving subject in the comparison frames moves relative to the moving subject in the reference frame; in the case where it is determined that the moving subject in the comparison frames moves, determining a motion intensity of the moving subject in the comparison frames based on the moving subject in the reference frame; and modifying the image processing mode of the subject areas corresponding to the N image frames into a pixel-merging Binning image processing mode and/or a four-pixel-merging Quad image processing mode based on the different motion intensities to generate at least one first processed image frame, wherein each comparison frame corresponds to one first processed image frame, the subject area having the Binning and/or Quad image processing mode serves as the target subject area, and the non-subject area having the Hex image processing mode serves as the target non-subject area.
- 9. The image generation method according to claim 8, wherein the generating a target image based on a target non-subject area among the non-subject areas and a target subject area among the subject areas comprises: performing temporal multi-frame fusion processing on the first processed image frames to generate the target image.
- 10. The image generation method according to claim 8, wherein the determining whether the moving subject in the comparison frames moves relative to the moving subject in the reference frame comprises: performing motion detection processing on the comparison frames to determine whether the moving subject in the comparison frames moves relative to the moving subject in the reference frame, wherein the motion detection processing comprises at least one of frame-difference processing, optical-flow processing, and background-subtraction processing.
- 11. The image generation method according to claim 8, wherein the determining the motion intensity of the moving subject in the comparison frames based on the moving subject in the reference frame comprises: processing the moving subject in the comparison frames and the moving subject in the reference frame based on a frame-difference method to determine the motion intensity of the moving subject in the comparison frames.
- 12. The image generation method according to claim 8, wherein the modifying the image processing mode of the subject areas corresponding to the N image frames into the pixel-merging Binning image processing mode and/or the four-pixel-merging Quad image processing mode based on the different motion intensities to generate at least one first processed image frame comprises: dividing the N image frames into high-intensity image frames and low-intensity image frames according to the motion intensity, wherein the motion intensity corresponding to the high-intensity image frames is greater than a first intensity threshold, and the motion intensity corresponding to the low-intensity image frames is less than or equal to the first intensity threshold; and modifying the image processing mode of the subject area corresponding to the high-intensity image frames into the Binning image processing mode, and/or modifying the image processing mode of the subject area corresponding to the low-intensity image frames into the Quad image processing mode, to generate the first processed image frames.
- 13. An electronic device, comprising a touch screen, a memory, and one or more processors, the touch screen and the memory being coupled to the one or more processors, wherein the memory stores computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the image generation method of any one of claims 1-12.
- 14. A computer-readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the image generation method of any one of claims 1-12.
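The frame-difference motion detection named in claim 10 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the 8x8 toy frames, and the threshold of 25 are all assumptions chosen for the example, and colour channels are simply averaged.

```python
import numpy as np

def detect_subject_mask(ref_frame: np.ndarray, cmp_frame: np.ndarray,
                        diff_thresh: float = 25.0) -> np.ndarray:
    """Frame-difference motion detection (one option listed in claim 10).

    Returns a boolean mask that is True for pixels belonging to the
    moving-subject area and False for the non-subject area.
    """
    diff = np.abs(ref_frame.astype(np.float32) - cmp_frame.astype(np.float32))
    if diff.ndim == 3:            # collapse colour channels to one difference map
        diff = diff.mean(axis=2)
    return diff > diff_thresh

# Toy example: an 8x8 grey background with a 2x2 subject that moves one pixel.
ref = np.full((8, 8), 100, dtype=np.uint8)
cmp_ = ref.copy()
ref[2:4, 2:4] = 200          # subject position in the reference frame
cmp_[2:4, 3:5] = 200         # subject shifted right in the comparison frame
mask = detect_subject_mask(ref, cmp_)
print(mask.sum())            # → 4
```

Note that plain frame differencing flags both the old and the new subject positions (here columns 2 and 4, not the overlapping column 3), which is why the patent also allows optical-flow or background-subtraction processing as alternatives.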
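The two-stage fusion of claims 6 and 9 (per-period region fusion, then temporal multi-frame fusion) can be illustrated with a toy sketch. The patent does not specify the fusion operators; a per-pixel mask select and a plain mean are assumed here, and all names and values are illustrative.

```python
import numpy as np

def fuse_regions(long_exp: np.ndarray, short_exp: np.ndarray,
                 subject_mask: np.ndarray) -> np.ndarray:
    """Per-period fusion sketch (claim 6): non-subject pixels come from the
    long-exposure frame, subject pixels from the short-exposure frame."""
    return np.where(subject_mask, short_exp, long_exp)

def temporal_fuse(fused_frames: list) -> np.ndarray:
    """Temporal multi-frame fusion sketch: a per-pixel mean over the first
    fused image frames (the patent leaves the exact operator open)."""
    stack = np.stack(fused_frames).astype(np.float32)
    return stack.mean(axis=0).astype(fused_frames[0].dtype)

# Toy 4x4 example: well-exposed background, sharp short-exposure subject.
long_exp = np.full((4, 4), 120, dtype=np.uint8)   # long exposure: clean background
short_exp = np.full((4, 4), 80, dtype=np.uint8)   # short exposure: sharp subject
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                             # subject area

fused = fuse_regions(long_exp, short_exp, mask)   # one first fused image frame
target = temporal_fuse([fused, fused])            # two identical shooting periods
print(fused[0, 0], fused[1, 1])                   # → 120 80
```

The design point this mirrors is claim 1's trade-off: each region of the target image is taken from whichever capture (long exposure or short exposure) renders it best, instead of applying one exposure time to the whole scene.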
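The mode selection of claim 12 (Binning for high-intensity motion, Quad for low-intensity motion) can be sketched as a threshold test. The intensity metric and the threshold value are assumptions: the patent names the frame-difference method but fixes neither the exact score nor the first intensity threshold.

```python
import numpy as np

# Mode names follow claim 12; everything else in this sketch is illustrative.
BINNING_MODE = "Binning"   # stronger pixel merging for high-intensity motion
QUAD_MODE = "Quad"         # four-pixel merging for low-intensity motion

def motion_intensity(ref_subject: np.ndarray, cmp_subject: np.ndarray) -> float:
    """Assumed metric: mean absolute frame difference over the subject area."""
    return float(np.abs(ref_subject.astype(np.float32)
                        - cmp_subject.astype(np.float32)).mean())

def select_mode(intensity: float, first_intensity_threshold: float = 20.0) -> str:
    """Claim 12 split: above the threshold -> Binning, otherwise -> Quad."""
    return BINNING_MODE if intensity > first_intensity_threshold else QUAD_MODE

# Fast-moving subject (large difference) vs. slow-moving subject (small one).
fast = motion_intensity(np.full((2, 2), 200, np.uint8), np.full((2, 2), 100, np.uint8))
slow = motion_intensity(np.full((2, 2), 110, np.uint8), np.full((2, 2), 100, np.uint8))
print(select_mode(fast), select_mode(slow))   # → Binning Quad
```

The rationale matches claim 1's resolution clause: the faster the subject moves, the more aggressively its pixels are merged (trading resolution for sensitivity and short readout), while the Hex-mode non-subject area keeps the higher resolution.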
Description
Image generation method and electronic equipment

Technical Field

The present application relates to the field of terminal technologies, and in particular, to an image generation method and an electronic device.

Background

A user typically captures some motion scenes via an electronic device. A motion scene refers to a scene in which a subject is in motion; for example, motion scenes may include moving objects (e.g., running people, moving vehicles), changing environments (e.g., leaves blowing in the wind), sporting events (e.g., football matches), and the like. If the electronic device shoots a motion scene using an exposure time intended for a static scene, the captured image exhibits motion blur, and the quality of the image generated by the electronic device is poor. If motion blur is to be avoided, the electronic device may reduce the exposure time in the motion scene; however, reducing the exposure time degrades the signal-to-noise ratio of the captured image, and the quality of the generated image is again poor. Therefore, when the electronic device shoots a motion scene, the generated image has poor quality, which affects the user's visual experience.

Disclosure of Invention

Embodiments of the present application provide an image generation method and an electronic device, which can generate higher-quality images when the electronic device shoots a motion scene, thereby improving the user's visual experience.
In a first aspect, an embodiment of the present application provides an image generation method, applied to an electronic device. The method comprises: acquiring at least one image frame, wherein the image frames are captured of the same motion scene, the motion scene includes at least one moving subject, and each image frame includes a subject area corresponding to the moving subject and a non-subject area other than the moving subject; and generating a target image based on a target non-subject area among the non-subject areas and a target subject area among the subject areas, wherein, within the same shooting period, the exposure time of the target non-subject area is longer than that of the target subject area, or, within the same image frame, the resolution of the target non-subject area is higher than that of the target subject area. According to the method provided by the application, the subject area corresponding to the moving subject and the non-subject area other than the moving subject are distinguished within the acquired image frames, the target subject area with higher image quality is selected from the subject areas, and the target non-subject area with higher image quality is selected from the non-subject areas for image fusion processing, so that a target image of higher quality can be generated, improving the user's visual experience.

In one implementation, acquiring at least one image frame includes displaying a first interface in a camera application, displaying a viewfinder window on the first interface, and shooting the motion scene in response to the viewfinder window framing the motion scene, to acquire the image frames. With this implementation, when the user points the viewfinder window at the motion scene, the electronic device can automatically frame the scene to acquire the image frames and process them so that they are available for subsequent use.
Therefore, since the electronic device captures the motion scene without the user performing a shooting operation, shooting efficiency is improved and the shooting experience is enhanced.

In one implementation, acquiring at least one image frame includes displaying a first interface in a camera application, displaying a viewfinder window on the first interface, and displaying a first control on the first interface, wherein the viewfinder window is used for framing the motion scene and the first control is used for shooting the motion scene in the viewfinder window; and shooting the motion scene in response to a first operation on the first control, to acquire the image frames. With this implementation, the electronic device frames the scene in response to the user's operation, so that the image frames acquired by the electronic device meet the user's requirements.

In one implementation, the image frames comprise a first image frame and a second image frame, and acquiring at least one image frame comprises acquiring, in each shooting period, one first image frame and one second image frame based on a single-frame progressive stabilized image processing mode in the case of shooting the motion scene in a first zoom mode, wherein the exposure time of the first image frame is longer than that of the second image frame, and the zoom magnification corresponding to the first zoom mode is smaller than or equal to a first threshold. With this implementation, under the basic zoom magnification (for exam