US-20260127712-A1 - MULTI-FRAME EDGE-ENHANCED DEGHOSTING

US 20260127712 A1

Abstract

A method includes selecting a reference frame and a non-reference frame from a plurality of image frames. The method also includes generating a reference edge map identifying edges in the reference frame and a non-reference edge map identifying edges in the non-reference frame. The method further includes generating a moving edge map based on one or more movements between one or more of the edges of the reference edge map and one or more of the edges of the non-reference edge map. The method also includes generating a blend map based on the reference and non-reference frames. The method further includes modifying the blend map based on one or more indications of movement of corresponding pixels in the moving edge map to generate a modified blend map. In addition, the method includes blending the reference and non-reference frames based on the modified blend map to generate an output image.
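The abstract describes a pipeline of four artifacts per frame pair: edge maps, a moving edge map, a blend map, and a modified blend map that drives the final blend. A minimal sketch of that flow is below, assuming grayscale float frames; the gradient-magnitude edge detector, the photometric blend-map formula, and the 0/1 edge score are illustrative choices, not the claimed implementation.

```python
import numpy as np

def edge_map(frame, thresh=0.1):
    """Simple gradient-magnitude edge map (one illustrative detector choice)."""
    gy, gx = np.gradient(frame.astype(float))
    return (np.hypot(gx, gy) > thresh).astype(float)

def deghost_blend(ref, non_ref, edge_thresh=0.1, diff_thresh=0.2):
    """Sketch of the claimed flow: edge maps -> moving edge map ->
    modified blend map -> weighted blend of the two frames."""
    ref_edges = edge_map(ref, edge_thresh)
    non_ref_edges = edge_map(non_ref, edge_thresh)

    # Moving edge map: edge pixels whose positions differ between frames.
    moving = np.abs(ref_edges - non_ref_edges)

    # Initial blend map from photometric difference (hypothetical choice):
    # 1 where the frames agree, falling to 0 as they diverge.
    blend = np.clip(
        1.0 - np.abs(ref.astype(float) - non_ref.astype(float)) / diff_thresh,
        0.0, 1.0)

    # Suppress blending where edges moved; static pixels keep their value.
    edge_score = 1.0 - moving  # 0 at a moved edge, 1 elsewhere
    modified = blend * np.where(moving > 0, edge_score, 1.0)

    # Weighted blend: deghosted regions fall back to the reference frame.
    out = modified * non_ref + (1.0 - modified) * ref
    return out, modified
```

With identical frames the moving edge map is empty, the modified blend map stays at 1, and the output equals the input, which is the expected no-motion behavior.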

Inventors

  • Weidi Liu
  • Gunawath Dilshan Godaliyadda
  • Nguyen Thang Long Le
  • John W. Glotzbach
  • Hamid R. Sheikh

Assignees

  • SAMSUNG ELECTRONICS CO., LTD.

Dates

Publication Date
2026-05-07
Application Date
2024-11-07

Claims (20)

  1. A method comprising: selecting, using at least one processing device of an electronic device, a reference frame and a non-reference frame from a plurality of image frames; generating, using the at least one processing device, a reference edge map identifying edges in the reference frame and a non-reference edge map identifying edges in the non-reference frame; generating, using the at least one processing device, a moving edge map based on one or more movements between one or more of the edges of the reference edge map and one or more of the edges of the non-reference edge map; generating, using the at least one processing device, a blend map based on the reference frame and the non-reference frame; modifying, using the at least one processing device, the blend map based on one or more indications of movement of corresponding pixels in the moving edge map to generate a modified blend map; and blending, using the at least one processing device, the reference frame and the non-reference frame based on the modified blend map to generate an output image.
  2. The method of claim 1, wherein generating the moving edge map comprises: filtering the edges of the reference edge map and the edges of the non-reference edge map that have a length below a threshold; and identifying the one or more movements based on a non-zero area of a difference between corresponding edges of the reference edge map and the non-reference edge map.
  3. The method of claim 1, wherein modifying the blend map comprises: for each identified movement of a respective pixel in the moving edge map, multiplying a pixel value for the respective pixel in the blend map by an edge score in the moving edge map to generate a corresponding pixel value for the respective pixel in the modified blend map.
  4. The method of claim 3, wherein modifying the blend map further comprises: for each static pixel identified in the moving edge map, maintaining a pixel value for the static pixel in the modified blend map.
  5. The method of claim 1, further comprising: selecting a second non-reference frame from the plurality of image frames; generating a second non-reference edge map identifying edges in the second non-reference frame; generating a second moving edge map based on at least one movement between at least one of the edges of the reference edge map and at least one of the edges of the second non-reference edge map; generating a second blend map based on the reference frame and the second non-reference frame; and modifying the second blend map based on at least one indication of movement of corresponding pixels in the second moving edge map to generate a second modified blend map; and wherein blending the reference frame and the non-reference frame comprises blending the reference frame, the non-reference frame weighted by the modified blend map, and the second non-reference frame weighted by the second modified blend map.
  6. The method of claim 5, further comprising: generating a final blend map for the non-reference frame by selecting a final pixel value for a respective pixel based on a highest pixel value between (i) a pixel value of the respective pixel in the modified blend map and (ii) a pixel value of the respective pixel in a modified blend map of an adjacent non-reference frame; wherein blending the non-reference frame weighted by the modified blend map comprises blending the non-reference frame weighted by the final blend map.
  7. The method of claim 6, further comprising: performing dilation on the moving edge map of the non-reference frame and the moving edge map of the adjacent non-reference frame to determine an intersection area value; wherein generating the final blend map comprises multiplying the final pixel value by the intersection area value.
  8. An apparatus comprising: at least one processing device configured to: select a reference frame and a non-reference frame from a plurality of image frames; generate a reference edge map identifying edges in the reference frame and a non-reference edge map identifying edges in the non-reference frame; generate a moving edge map based on one or more movements between one or more of the edges of the reference edge map and one or more of the edges of the non-reference edge map; generate a blend map based on the reference frame and the non-reference frame; modify the blend map based on one or more indications of movement of corresponding pixels in the moving edge map to generate a modified blend map; and blend the reference frame and the non-reference frame based on the modified blend map to generate an output image.
  9. The apparatus of claim 8, wherein, to generate the moving edge map, the at least one processing device is configured to: filter the edges of the reference edge map and the edges of the non-reference edge map that have a length below a threshold; and identify the one or more movements based on a non-zero area of a difference between corresponding edges of the reference edge map and the non-reference edge map.
  10. The apparatus of claim 8, wherein, to modify the blend map, the at least one processing device is configured, for each identified movement of a respective pixel in the moving edge map, to multiply a pixel value for the respective pixel in the blend map by an edge score in the moving edge map to generate a corresponding pixel value for the respective pixel in the modified blend map.
  11. The apparatus of claim 10, wherein, to modify the blend map, the at least one processing device is further configured, for each static pixel identified in the moving edge map, to maintain a pixel value for the static pixel in the modified blend map.
  12. The apparatus of claim 8, wherein the at least one processing device is further configured to: select a second non-reference frame from the plurality of image frames; generate a second non-reference edge map identifying edges in the second non-reference frame; generate a second moving edge map based on at least one movement between at least one of the edges of the reference edge map and at least one of the edges of the second non-reference edge map; generate a second blend map based on the reference frame and the second non-reference frame; and modify the second blend map based on at least one indication of movement of corresponding pixels in the second moving edge map to generate a second modified blend map; and wherein, to blend the reference frame and the non-reference frame, the at least one processing device is configured to blend the reference frame, the non-reference frame weighted by the modified blend map, and the second non-reference frame weighted by the second modified blend map.
  13. The apparatus of claim 12, wherein: the at least one processing device is further configured to generate a final blend map for the non-reference frame; to generate the final blend map for the non-reference frame, the at least one processing device is configured to select a final pixel value for a respective pixel based on a highest pixel value between (i) a pixel value of the respective pixel in the modified blend map and (ii) a pixel value of the respective pixel in a modified blend map of an adjacent non-reference frame; and to blend the non-reference frame weighted by the modified blend map, the at least one processing device is configured to blend the non-reference frame weighted by the final blend map.
  14. The apparatus of claim 13, wherein: the at least one processing device is further configured to perform dilation on the moving edge map of the non-reference frame and the moving edge map of the adjacent non-reference frame to determine an intersection area value; and to generate the final blend map, the at least one processing device is configured to multiply the final pixel value by the intersection area value.
  15. A non-transitory machine-readable medium containing instructions that when executed cause at least one processor to: select a reference frame and a non-reference frame from a plurality of image frames; generate a reference edge map identifying edges in the reference frame and a non-reference edge map identifying edges in the non-reference frame; generate a moving edge map based on one or more movements between one or more of the edges of the reference edge map and one or more of the edges of the non-reference edge map; generate a blend map based on the reference frame and the non-reference frame; modify the blend map based on one or more indications of movement of corresponding pixels in the moving edge map to generate a modified blend map; and blend the reference frame and the non-reference frame based on the modified blend map to generate an output image.
  16. The non-transitory machine-readable medium of claim 15, wherein the instructions that when executed cause the at least one processor to generate the moving edge map comprise instructions that when executed cause the at least one processor to: filter the edges of the reference edge map and the edges of the non-reference edge map that have a length below a threshold; and identify the one or more movements based on a non-zero area of a difference between corresponding edges of the reference edge map and the non-reference edge map.
  17. The non-transitory machine-readable medium of claim 15, wherein the instructions that when executed cause the at least one processor to modify the blend map comprise instructions that when executed cause the at least one processor to: for each identified movement of a respective pixel in the moving edge map, multiply a pixel value for the respective pixel in the blend map by an edge score in the moving edge map to generate a corresponding pixel value for the respective pixel in the modified blend map; and for each static pixel identified in the moving edge map, maintain a pixel value for the static pixel in the modified blend map.
  18. The non-transitory machine-readable medium of claim 15, further containing instructions that when executed cause the at least one processor to: select a second non-reference frame from the plurality of image frames; generate a second non-reference edge map identifying edges in the second non-reference frame; generate a second moving edge map based on at least one movement between at least one of the edges of the reference edge map and at least one of the edges of the second non-reference edge map; generate a second blend map based on the reference frame and the second non-reference frame; and modify the second blend map based on at least one indication of movement of corresponding pixels in the second moving edge map to generate a second modified blend map; and wherein the instructions that when executed cause the at least one processor to blend the reference frame and the non-reference frame comprise instructions that when executed cause the at least one processor to blend the reference frame, the non-reference frame weighted by the modified blend map, and the second non-reference frame weighted by the second modified blend map.
  19. The non-transitory machine-readable medium of claim 18, further containing instructions that when executed cause the at least one processor to generate a final blend map for the non-reference frame by selecting a final pixel value for a respective pixel based on a highest pixel value between (i) a pixel value of the respective pixel in the modified blend map and (ii) a pixel value of the respective pixel in a modified blend map of an adjacent non-reference frame; wherein the instructions that when executed cause the at least one processor to blend the non-reference frame weighted by the modified blend map comprise instructions that when executed cause the at least one processor to blend the non-reference frame weighted by the final blend map.
  20. The non-transitory machine-readable medium of claim 19, further containing instructions that when executed cause the at least one processor to perform dilation on the moving edge map of the non-reference frame and the moving edge map of the adjacent non-reference frame to determine an intersection area value; wherein the instructions that when executed cause the at least one processor to generate the final blend map comprise instructions that when executed cause the at least one processor to multiply the final pixel value by the intersection area value.
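Claims 2-4 pin down the moving edge map and the blend-map modification: edges shorter than a length threshold are filtered out, movements are the non-zero area of the difference between the filtered edge maps, moving pixels are scaled by an edge score, and static pixels are left alone. A sketch of those steps follows, assuming binary edge maps; the flood-fill connected-component pass, 8-connectivity, and the `min_len` and `edge_score` parameters are hypothetical stand-ins for the unstated details.

```python
import numpy as np

def filter_short_edges(edges, min_len=5):
    """Drop connected edge segments shorter than min_len pixels
    (a plain flood-fill stand-in for the claimed length threshold)."""
    edges = edges.astype(bool)
    keep = np.zeros_like(edges)
    seen = np.zeros_like(edges)
    h, w = edges.shape
    for sy in range(h):
        for sx in range(w):
            if edges[sy, sx] and not seen[sy, sx]:
                stack, comp = [(sy, sx)], []
                seen[sy, sx] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy in (-1, 0, 1):          # 8-connected neighbors
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and edges[ny, nx] and not seen[ny, nx]):
                                seen[ny, nx] = True
                                stack.append((ny, nx))
                if len(comp) >= min_len:           # keep only long segments
                    for y, x in comp:
                        keep[y, x] = True
    return keep

def moving_edge_map(ref_edges, non_ref_edges, min_len=5):
    """Claim 2: non-zero area of the difference of the filtered edge maps."""
    r = filter_short_edges(ref_edges, min_len)
    n = filter_short_edges(non_ref_edges, min_len)
    return (r.astype(int) - n.astype(int)) != 0

def modify_blend_map(blend, moving, edge_score=0.0):
    """Claims 3-4: scale moving pixels by an edge score; static pixels keep
    their original blend-map value."""
    out = blend.copy()
    out[moving] = out[moving] * edge_score
    return out
```

With an edge score of 0 this zeroes the blend weight wherever an edge moved, so ghost-prone pixels come entirely from the reference frame, while static regions blend normally.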

Description

TECHNICAL FIELD

This disclosure relates generally to image processing. More specifically, this disclosure relates to multi-frame edge-enhanced deghosting.

BACKGROUND

Many mobile electronic devices, such as smartphones and tablet computers, include cameras that can be used to capture still and video images. In some cases, an electronic device can capture multiple low dynamic range (LDR) image frames of a real-world scene at different exposure levels and blend the LDR image frames to produce a high dynamic range (HDR) image of the real-world scene. This process is often referred to as multi-frame processing (MFP). The resulting HDR image generally has a larger dynamic range than any of the individual LDR image frames. Among other things, blending the LDR image frames to produce the HDR image can help to incorporate greater image details into both darker regions and brighter regions of the HDR image so that the HDR image can preserve details of the real-world scene. Moreover, noise in the HDR image can be mitigated based on an analysis of the noise profiles for all of the LDR image frames, resulting in less noise in the HDR image.

SUMMARY

This disclosure relates to multi-frame edge-enhanced deghosting. In a first embodiment, a method includes selecting, using at least one processing device of an electronic device, a reference frame and a non-reference frame from a plurality of image frames. The method also includes generating, using the at least one processing device, a reference edge map identifying edges in the reference frame and a non-reference edge map identifying edges in the non-reference frame. The method further includes generating, using the at least one processing device, a moving edge map based on one or more movements between one or more of the edges of the reference edge map and one or more of the edges of the non-reference edge map.
The method also includes generating, using the at least one processing device, a blend map based on the reference frame and the non-reference frame. The method further includes modifying, using the at least one processing device, the blend map based on one or more indications of movement of corresponding pixels in the moving edge map to generate a modified blend map. In addition, the method includes blending, using the at least one processing device, the reference frame and the non-reference frame based on the modified blend map to generate an output image. In a second embodiment, an apparatus includes at least one processing device configured to select a reference frame and a non-reference frame from a plurality of image frames. The at least one processing device is also configured to generate a reference edge map identifying edges in the reference frame and a non-reference edge map identifying edges in the non-reference frame. The at least one processing device is further configured to generate a moving edge map based on one or more movements between one or more of the edges of the reference edge map and one or more of the edges of the non-reference edge map. The at least one processing device is also configured to generate a blend map based on the reference frame and the non-reference frame. The at least one processing device is further configured to modify the blend map based on one or more indications of movement of corresponding pixels in the moving edge map to generate a modified blend map. In addition, the at least one processing device is configured to blend the reference frame and the non-reference frame based on the modified blend map to generate an output image. In a third embodiment, a non-transitory machine-readable medium contains instructions that when executed cause at least one processor to select a reference frame and a non-reference frame from a plurality of image frames. 
The non-transitory machine-readable medium also contains instructions that when executed cause the at least one processor to generate a reference edge map identifying edges in the reference frame and a non-reference edge map identifying edges in the non-reference frame. The non-transitory machine-readable medium further contains instructions that when executed cause the at least one processor to generate a moving edge map based on one or more movements between one or more of the edges of the reference edge map and one or more of the edges of the non-reference edge map. The non-transitory machine-readable medium also contains instructions that when executed cause the at least one processor to generate a blend map based on the reference frame and the non-reference frame. The non-transitory machine-readable medium further contains instructions that when executed cause the at least one processor to modify the blend map based on one or more indications of movement of corresponding pixels in the moving edge map to generate a modified blend map. In addition, the non-transitory machine-readable medium contains instructions that when executed cause the at least one processor to blend the reference frame and the non-reference frame based on the modified blend map to generate an output image.
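Claims 6-7 (and 13-14) extend the per-pair result to multiple non-reference frames: the final blend map takes the per-pixel maximum of a frame's modified blend map and that of an adjacent non-reference frame, then multiplies by an "intersection area value" obtained by dilating the two moving edge maps. A sketch is below; treating that value as a scalar overlap ratio of the dilated maps, and the fixed 3x3 dilation kernel, are assumptions about details the claims leave open.

```python
import numpy as np

def dilate3(mask):
    """3x3 binary dilation via shifted maxima (no SciPy dependency)."""
    h, w = mask.shape
    p = np.pad(mask.astype(bool), 1)          # zero (False) border
    out = np.zeros((h, w), dtype=bool)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + h, dx:dx + w]    # OR over the 3x3 neighborhood
    return out

def intersection_area_value(moving, adj_moving):
    """Overlap ratio of the dilated moving edge maps: one plausible reading
    of the claimed 'intersection area value' (an assumption)."""
    a, b = dilate3(moving), dilate3(adj_moving)
    union = (a | b).sum()
    return (a & b).sum() / union if union else 1.0

def final_blend_map(mod_blend, adj_mod_blend, moving, adj_moving):
    """Claim 6: per-pixel max of the two modified blend maps;
    claim 7: multiply by the intersection area value."""
    final = np.maximum(mod_blend, adj_mod_blend)
    return final * intersection_area_value(moving, adj_moving)
```

When adjacent frames report motion in the same dilated region, the ratio stays near 1 and the max-selected blend weights pass through; disagreeing motion estimates shrink the weights toward the reference frame.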