
EP-4740170-A1 - METHOD FOR REPRESENTING A SCENE ON A DISPLAY MEDIUM, COMPUTER PROGRAM, DEVICE, APPARATUS AND VEHICLE IMPLEMENTING SUCH A METHOD

EP 4740170 A1

Abstract

The invention relates to a method (100) for representing a scene on a display medium (416) from a stack of images (PILS) of said scene, comprising at least one iteration of a display phase (110), the method comprising the following steps: - detecting (112) a target position of the scene; - identifying (114), in the stack of images (PILS), an image that sharply displays said target position, comprising the following steps for at least one of the other images in the stack (PILS), called a candidate image: ■ calculating (118), by a predetermined coordinate transformation function, the target position in said candidate image, and ■ when the target position in the candidate image is located in a sharp zone of the candidate image, displaying (126) said image on the display medium. The invention also relates to a computer program and a device implementing such a method.

Inventors

  • LEGROS, ERIC
  • PETITGRAND, Sylvain
  • LUONG, BRUNO

Assignees

  • Fogale Optique

Dates

Publication Date
2026-05-13
Application Date
2023-07-08

Claims (15)

  1. Method (100; 200; 300) of representing a scene on a display medium (416) from a stack of images (PILS) of said scene, each taken with a different focus so that each image of the stack (PILS) has a zone of sharpness different from the other images, said method (100; 200; 300) comprising at least one iteration of a display phase (110) comprising the following steps: - detection (112) of a target position of the scene, from an image, called the current image, chosen from said stack of images (PILS) and currently displayed on said display medium (416); - identification (114), in the image stack (PILS), of an image sharply displaying said target position, comprising the following steps for at least one of the other images of the stack (PILS), called a candidate image: ■ calculation (118), by a predetermined coordinate transformation function, of the target position in said candidate image, taking as input the target position in said current image, and ■ when the target position in the candidate image is within a sharpness zone of said candidate image, selection (116) of said candidate image as the image to be displayed on said display medium (416); and - display (126) of said selected image on said display medium (416).
  2. Method (100; 200; 300) according to the preceding claim, characterized in that it further comprises a step (124) of recalibrating the selected image, relative to the current image, by an inverse coordinate transformation function applied to each point of the image to be displayed.
  3. Method (100; 200; 300) according to any one of the preceding claims, characterized in that, for at least one candidate image, the transformation function is a function of: - the focus of the current image, and - the focus of the candidate image; so that a coordinate transformation function is associated with each focus pair {current image focus; candidate image focus}.
  4. Method (100; 200; 300) according to any one of the preceding claims, characterized in that at least one coordinate transformation function comprises: - a first coordinate transformation function giving the target position in an image format, called the reference format, defined beforehand, as a function of the target position in the current image; and - a second coordinate transformation function giving the target position in the candidate image, as a function of the target position in the reference format.
  5. Method (100; 200; 300) according to any one of the preceding claims, characterized in that at least one transformation function is a position correspondence matrix.
  6. Method (200; 300) according to any one of the preceding claims, characterized in that it further comprises, prior to the first iteration of the display phase (110), a phase (202), called the calibration phase, for determining at least one coordinate transformation function.
  7. Method (200; 300) according to any one of the preceding claims, characterized in that the calibration phase (202) is carried out with the same camera module as that used for capturing the stack of images of the scene (PILS), so that at least one transformation function is specific to said camera module.
  8. Method (200; 300) according to any one of the preceding claims, characterized in that the calibration phase (202) comprises the following steps: - acquisition (204) of an image stack (PILR), called the calibration stack, with at least the same focuses as those of the images of the scene image stack (PILS), and - deduction (206), from said calibration image stack (PILR), of at least one coordinate transformation function.
  9. Method (200; 300) according to any one of claims 1 to 7, characterized in that the calibration phase (202) is carried out with the stack of images of the scene (PILS), by analysis of the content of said images.
  10. Method (100; 200; 300) according to any one of claims 1 to 5, characterized in that at least one coordinate transformation function is received from a device other than that implementing said method, in particular when the stack of images of the scene (PILS) has been acquired by said other device.
  11. Computer program comprising executable instructions which, when executed by a computing device, implement all the steps of the method (100; 200; 300) according to any one of the preceding claims.
  12. Processing device (400) comprising means configured to implement all the steps of the method (100; 200; 300) according to any one of claims 1 to 10.
  13. Apparatus (510; 520; 530) comprising: - at least one display means (404) for displaying an image on a display medium (416), - at least one means (406) for detecting a target position on an image displayed on said display medium, and - at least one computing unit; configured to implement the method (100; 200; 300) according to any one of claims 1 to 10.
  14. Apparatus (510; 520; 530) according to the preceding claim, characterized in that it is: - a smartphone (510), - a tablet, - a computer, - a television, - a virtual reality or augmented reality headset (520), or - a medical imaging device (530).
  15. Vehicle (600) comprising: - at least one display means (404) for displaying an image on a display medium (416), - at least one means (406) for detecting a target position on an image displayed on said display medium, and - at least one computing unit; configured to implement the method (100; 200; 300) according to any one of claims 1 to 10.
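The display phase recited in claim 1 can be sketched in a few lines of code. This is a minimal illustration under stated assumptions, not the patented implementation: the stack entries with a boolean `sharp_mask`, the `transforms` dictionary keyed by image-index pairs, and a 3x3 homogeneous matrix standing in for the position correspondence matrix of claim 5 are all hypothetical choices.

```python
import numpy as np

def transform_position(pos, H):
    """Step (118): map a target position (x, y) into a candidate image's
    frame via a 3x3 homogeneous coordinate transformation matrix H
    (assumed here as one possible form of the correspondence matrix)."""
    x, y, w = H @ np.array([pos[0], pos[1], 1.0])
    return (x / w, y / w)

def select_sharp_image(stack, current_idx, target_pos, transforms):
    """Display phase of claim 1: find an image of the stack that shows
    the target position sharply.

    stack        -- list of dicts, each with a boolean 'sharp_mask' array
    current_idx  -- index of the currently displayed (current) image
    target_pos   -- (x, y) target position detected on the current image
    transforms   -- transforms[(i, j)] maps coordinates of image i to image j
    """
    for j, candidate in enumerate(stack):
        if j == current_idx:
            continue
        # Step (118): transform the target position into the candidate image.
        x, y = transform_position(target_pos, transforms[(current_idx, j)])
        xi, yi = int(round(x)), int(round(y))
        mask = candidate["sharp_mask"]
        inside = 0 <= yi < mask.shape[0] and 0 <= xi < mask.shape[1]
        # Step (116): select the candidate if the position falls in its sharp zone.
        if inside and mask[yi, xi]:
            return j
    return current_idx  # no sharper image found: keep displaying the current one
```

In a real device, the sharpness masks would come from a focus measure computed on each image, and the transformation matrices from the calibration phase of claims 6 to 9.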

Description

DESCRIPTION

Title: Method for representing a scene on a display medium, computer program, device, apparatus and vehicle implementing such a method.

[0001] The present invention relates to a method for representing a scene on a display medium from a stack of images of said scene. It also relates to a computer program and a device implementing such a method. It further relates to an apparatus and a vehicle implementing such a method.

[0002] The field of the invention is the field of the representation of a scene on a display medium from images of said scene.

State of the art

[0003] Techniques are known for representing a scene from a stack of images taken at different focusing distances. For example, the technique called focus bracketing, or focus stacking, makes it possible to capture a stack of images of a scene, each image being captured at a different depth of field. The scene is then represented on a display medium, with a greater depth of field, by exploiting the stack of images.

[0004] However, current techniques, although they improve the depth of field of the image of a scene, have drawbacks. For example, when the stack of images is acquired by a camera module comprising an image sensor and an optical lens, the distance between the image sensor and the optical lens is modified to change the focus, and this for each image. The inventors have noticed that this change in focus causes deformations, or shifts, in the images and therefore degrades the representation of the scene on a display medium from the image stack.

[0005] An aim of the present invention is to remedy at least one of the drawbacks of the state of the art.

[0006] Another aim of the invention is to propose a solution making it possible to improve the representation of a scene from images of said scene captured at different focuses.

Disclosure of the invention

[0007] The invention proposes to achieve at least one of the aforementioned aims by a method of representing a scene on a display medium from a stack of images of said scene, each taken with a different focus so that each image of the stack has a zone of sharpness different from the other images, said method comprising at least one iteration of a display phase comprising the following steps: - detection of a target position of the scene, from an image, called the current image, chosen from said stack of images and currently displayed on said display medium; - identification, in the stack of images, of an image sharply displaying said target position, comprising the following steps for at least one of the other images in the stack, called a candidate image: ■ calculation, by a predetermined coordinate transformation function, of the target position in said candidate image, taking as input the target position in said current image, and ■ when the target position in the candidate image is within a sharpness zone of said candidate image, selection of said candidate image as the image to be displayed on said display medium; and - display of said selected image on said display medium.

[0008] Thus, the method according to the invention proposes to identify, in the stack of images of the scene, an image to be displayed on which the target position is located in a sharp zone. When this image to be displayed is identified, it is displayed on the display medium. Thus, for a given target position, the invention makes it possible to choose, on the fly, the image of the scene that sharply displays a region of the scene comprising said target position. In other words, the invention makes it possible to dynamically adjust the sharp zone of a scene displayed on the display medium, by using a stack of images each comprising a sharp zone of the scene.

[0009] Above all, the invention proposes to identify the image to be displayed, i.e. the image of the stack on which the target position is sharp, by correcting the target position obtained on the current image with a coordinate transformation function. This coordinate transformation function makes it possible to identify said target position more precisely on the other image by taking into account the geometric aberrations induced by a difference in focus between the images of the image stack. Thus, the method according to the invention makes it possible to take into account the geometric aberrations introduced during a change of focus in the imaging module used to capture the image stack, and therefore to improve the choice of the image to be displayed and thus the display of the scene on a display medium.

[0010] By image is meant a digital image, and in particular a matrix image.

[0011] According to the invention, the display medium can be any type of display medium.

[0012] For example, the display medium may be a display screen, such as a touch screen, etc.

[0013] For example, the display medium may be a surface onto which an image of the scene is projected, such as a display surface associated with a projector, etc.

[0014
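The calibration phase of claim 8 and the recalibration step of claim 2 can likewise be illustrated. In this hedged sketch, `fit_affine` estimates a position correspondence matrix by least squares from matched points between two images of a calibration stack, and `recalibrate` re-registers the selected image by applying a coordinate transformation to each of its points. The affine model, the availability of matched point pairs, and the nearest-neighbour resampling are assumptions made for illustration, not details given in the patent.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Deduce (step 206, claim 8) a correspondence matrix from matched
    points of two calibration images, as a least-squares affine fit
    returned in 3x3 homogeneous form.

    src_pts, dst_pts -- sequences of matched (x, y) coordinates
    """
    n = len(src_pts)
    A = np.zeros((2 * n, 6))
    b = np.asarray(dst_pts, float).ravel()
    for k, (x, y) in enumerate(src_pts):
        A[2 * k] = [x, y, 1, 0, 0, 0]      # dst_x = a*x + b*y + c
        A[2 * k + 1] = [0, 0, 0, x, y, 1]  # dst_y = d*x + e*y + f
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.vstack([p.reshape(2, 3), [0, 0, 1]])

def recalibrate(selected, H_cur_to_sel):
    """Step (124) of claim 2: re-register the selected image onto the
    current image's frame by transforming each output point into the
    selected image and sampling it (nearest neighbour, illustration only)."""
    h, w = selected.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    tx, ty, tw = H_cur_to_sel @ pts
    tx = np.clip(np.round(tx / tw).astype(int), 0, w - 1)
    ty = np.clip(np.round(ty / tw).astype(int), 0, h - 1)
    return selected[ty, tx].reshape(selected.shape)
```

As claim 3 notes, one such matrix would be fitted per focus pair, so the calibration stack yields a family of transformations indexed by {current image focus; candidate image focus}.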