CN-122002127-A - Luminance extraction method, electronic device, storage medium, and program product
Abstract
The application discloses a luminance extraction method, an electronic device, a storage medium, and a program product, and relates to the technical field of near-eye display. The method comprises: when an instruction to extract the brightness of the display screen of a near-eye display device through a target camera is detected, determining a mapping relation between camera pixels of the target camera and screen pixels of the display screen, wherein the target camera is positioned at the exit pupil frame of the near-eye display device; displaying a calibrated brightness image on the display screen and shooting it with the target camera to obtain a corresponding calibrated brightness camera image; and extracting, based on the mapping relation, the brightness information corresponding to each screen pixel on the display screen from the calibrated brightness camera image. The application can reliably and efficiently acquire accurate brightness information that exactly matches the original pixel distribution of the screen, even when the camera image is severely distorted by the optical system.
Inventors
- ZHANG ZHONGYA
- GAO XIANG
- LIANG JIANHUA
- TANG YINGYING
- LIU YAOCHENG
Assignees
- 青岛歌尔视显科技有限公司
Dates
- Publication Date: 2026-05-08
- Application Date: 2026-01-12
Claims (10)
- 1. A luminance extraction method, the method comprising: when an instruction to extract the brightness of a display screen of a near-eye display device through a target camera is detected, determining a mapping relation between camera pixels of the target camera and screen pixels of the display screen, wherein the target camera is positioned at an exit pupil frame of the near-eye display device; displaying a calibrated brightness image on the display screen, and shooting with the target camera to obtain a calibrated brightness camera image corresponding to the calibrated brightness image; and extracting brightness information corresponding to each screen pixel on the display screen from the calibrated brightness camera image based on the mapping relation.
- 2. The method of claim 1, wherein the step of determining a mapping relation between camera pixels of the target camera and screen pixels of the display screen comprises: displaying a pixel positioning image on the display screen, and shooting with the target camera to obtain a pixel positioning camera image corresponding to the pixel positioning image, wherein the pixel positioning image comprises a positioning point array formed by a plurality of positioning points; determining the screen pixel corresponding to each positioning point in the positioning point array, and determining, through the pixel positioning camera image, the camera pixel corresponding to each positioning point in the positioning point array; and establishing the mapping relation between the camera pixels of the target camera and the screen pixels of the display screen based on the screen pixel and the camera pixel corresponding to each positioning point in the positioning point array.
- 3. The method of claim 2, wherein the step of determining, through the pixel positioning camera image, the camera pixel corresponding to each positioning point in the positioning point array comprises: determining, from the pixel positioning camera image, the camera pixels corresponding to a first positioning point and a second positioning point, wherein the first positioning point is the central positioning point of the positioning point array, and the second positioning point is a positioning point adjacent to the first positioning point in the positioning point array; and determining, from the pixel positioning camera image, the camera pixel corresponding to a third positioning point based on the camera pixels corresponding to the first positioning point and the second positioning point, wherein the third positioning point is a positioning point in the positioning point array other than the first positioning point and the second positioning point.
- 4. The method of claim 3, wherein the step of determining, from the pixel positioning camera image, the camera pixel corresponding to a third positioning point based on the camera pixels corresponding to the first positioning point and the second positioning point comprises: determining the first positioning point and the second positioning point as mapped positioning points, and determining the third positioning points as unmapped positioning points; determining a target unmapped positioning point from the unmapped positioning points, and determining a first target mapped positioning point and a second target mapped positioning point corresponding to the target unmapped positioning point from the mapped positioning points, wherein the target unmapped positioning point is an unmapped positioning point adjacent to a mapped positioning point, the first target mapped positioning point is a mapped positioning point adjacent to the target unmapped positioning point, and the second target mapped positioning point is a mapped positioning point adjacent to the first target mapped positioning point and on the same straight line as the first target mapped positioning point and the target unmapped positioning point; determining, from the pixel positioning camera image, the camera pixel corresponding to the target unmapped positioning point according to the camera pixels corresponding to the first target mapped positioning point and the second target mapped positioning point; and determining the target unmapped positioning point as a mapped positioning point, and returning to the step of determining a target unmapped positioning point from the unmapped positioning points until no unmapped positioning point remains, so as to obtain the camera pixels corresponding to all the third positioning points.
- 5. The method of claim 4, wherein the step of determining, from the pixel positioning camera image, the camera pixel corresponding to the target unmapped positioning point according to the camera pixels corresponding to the first target mapped positioning point and the second target mapped positioning point comprises: determining a first camera pixel distance and a first camera pixel direction between the first target mapped positioning point and the second target mapped positioning point according to the camera pixel corresponding to the first target mapped positioning point and the camera pixel corresponding to the second target mapped positioning point; determining a camera pixel to be corrected corresponding to the target unmapped positioning point according to the camera pixel corresponding to the first target mapped positioning point, the first camera pixel distance, and the first camera pixel direction; and determining, from the pixel positioning camera image, the camera pixel corresponding to the target unmapped positioning point based on the camera pixel to be corrected.
- 6. The method of claim 5, wherein the step of determining, from the pixel positioning camera image, the camera pixel corresponding to the target unmapped positioning point based on the camera pixel to be corrected comprises: determining, from the pixel positioning camera image, an image point imaging area corresponding to the target unmapped positioning point according to the camera pixel to be corrected, and extracting brightness information corresponding to each camera pixel in the image point imaging area from the pixel positioning camera image; and determining a brightness weight corresponding to each camera pixel in the image point imaging area based on the brightness information corresponding to each camera pixel in the image point imaging area, and determining, from the image point imaging area, the camera pixel corresponding to the target unmapped positioning point by a centroid method based on the brightness weights.
- 7. The method of claim 6, wherein the step of determining a first target mapped positioning point and a second target mapped positioning point corresponding to the target unmapped positioning point from the mapped positioning points comprises: detecting whether a third target mapped positioning point corresponding to the target unmapped positioning point exists among the mapped positioning points, wherein the third target mapped positioning point is a mapped positioning point adjacent to the second target mapped positioning point and on the same straight line as the second target mapped positioning point, the first target mapped positioning point, and the target unmapped positioning point; and, if no third target mapped positioning point corresponding to the target unmapped positioning point exists among the mapped positioning points, determining the first target mapped positioning point and the second target mapped positioning point corresponding to the target unmapped positioning point from the mapped positioning points; after the step of detecting whether a third target mapped positioning point corresponding to the target unmapped positioning point exists among the mapped positioning points, the method further comprises: if a third target mapped positioning point corresponding to the target unmapped positioning point exists among the mapped positioning points, determining the first target mapped positioning point, the second target mapped positioning point, and the third target mapped positioning point corresponding to the target unmapped positioning point from the mapped positioning points; determining a first camera pixel distance and a first camera pixel direction between the first target mapped positioning point and the second target mapped positioning point according to their corresponding camera pixels; determining a second camera pixel distance and a second camera pixel direction between the second target mapped positioning point and the third target mapped positioning point according to their corresponding camera pixels; calculating a third camera pixel distance from the first camera pixel distance and the second camera pixel distance; calculating a third camera pixel direction from the first camera pixel direction and the second camera pixel direction; and determining the camera pixel to be corrected corresponding to the target unmapped positioning point according to the camera pixel corresponding to the first target mapped positioning point, the third camera pixel distance, and the third camera pixel direction, and executing the step of determining, from the pixel positioning camera image, the camera pixel corresponding to the target unmapped positioning point based on the camera pixel to be corrected.
- 8. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the luminance extraction method according to any one of claims 1 to 7.
- 9. A storage medium, characterized in that the storage medium is a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the luminance extraction method according to any one of claims 1 to 7.
- 10. A program product, characterized in that the program product is a computer program product comprising a computer program which, when executed by a processor, implements the steps of the luminance extraction method according to any one of claims 1 to 7.
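Once the pixel mapping of claim 2 exists, the extraction of claim 1 amounts to a table lookup: display the calibrated brightness image, photograph it, and read each screen pixel's luminance at its mapped camera coordinate. Below is a minimal Python sketch of that step; the array names (`cal_cam_img`, `screen_to_cam`) and the nearest-neighbour rounding are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def extract_screen_luminance(cal_cam_img, screen_to_cam):
    """Per-screen-pixel luminance lookup of claim 1.

    cal_cam_img   -- (Hc, Wc) single-channel camera image of the displayed
                     calibrated brightness image
    screen_to_cam -- (Hs, Ws, 2) mapping: screen pixel (row, col) -> camera
                     pixel (row, col), e.g. interpolated from the positioning
                     point correspondences of claim 2
    """
    rows = np.clip(np.rint(screen_to_cam[..., 0]).astype(int), 0, cal_cam_img.shape[0] - 1)
    cols = np.clip(np.rint(screen_to_cam[..., 1]).astype(int), 0, cal_cam_img.shape[1] - 1)
    return cal_cam_img[rows, cols]  # (Hs, Ws): luminance per screen pixel
```

With a sub-pixel mapping, bilinear interpolation of the camera image would replace the nearest-neighbour rounding used here.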
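Claims 3 and 4 describe an outward walk over the positioning-point grid: the centre point and its adjacent points are located in the camera image directly, and every remaining point is then located from two already-mapped collinear neighbours, repeating until no unmapped point remains. The sketch below assumes 8-neighbour adjacency (including diagonals) so the walk can cover the whole grid; the patent text does not pin the adjacency down, and `locate_fn` stands in for the predict-and-centroid step of claims 5 and 6.

```python
from collections import deque

# 8-neighbour steps; diagonal adjacency is an assumption (see lead-in).
STEPS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
         (0, 1), (1, -1), (1, 0), (1, 1)]

def propagate_mapping(grid_shape, seed_cam_px, locate_fn):
    """Outward walk of claims 3-4 over the positioning-point grid.

    grid_shape  -- (rows, cols) of the positioning-point array
    seed_cam_px -- {(row, col): (y, x)} camera pixels of the centre point
                   and its adjacent points, located directly (claim 3)
    locate_fn   -- callable(first_px, second_px) -> (y, x): given the camera
                   pixels of the first/second target mapped positioning
                   points, returns the target point's camera pixel
    """
    cam_px = dict(seed_cam_px)        # mapped positioning points
    queue = deque(cam_px)             # frontier of mapped points
    while queue:                      # loop until no unmapped point remains
        r, c = queue.popleft()
        for dr, dc in STEPS:
            tgt = (r + dr, c + dc)    # candidate target unmapped point
            if tgt in cam_px:
                continue
            if not (0 <= tgt[0] < grid_shape[0] and 0 <= tgt[1] < grid_shape[1]):
                continue
            # (second, first, tgt) lie on one straight line of the grid.
            first, second = (r, c), (r - dr, c - dc)
            if second not in cam_px:
                continue              # need two collinear mapped points
            cam_px[tgt] = locate_fn(cam_px[first], cam_px[second])
            queue.append(tgt)         # tgt becomes a mapped positioning point
    return cam_px
```

The breadth-first order means each grid "ring" around the centre is fully mapped before the next ring is attempted, which is what guarantees the two collinear mapped points exist when a target is processed.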
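Claims 5 and 6 locate each new point in two stages: first predict a "camera pixel to be corrected" by extending the distance and direction between the two mapped points, then refine it with a brightness-weighted centroid over the image point imaging area around the prediction. A sketch follows, assuming a square window of fixed half-size for that area; claim 7's refinement, which averages the distance and direction over a third collinear mapped point when one exists, is omitted for brevity.

```python
import numpy as np

def locate_by_prediction(cam_img, first_px, second_px, half=7):
    """Predict-and-refine step of claims 5-6.

    cam_img   -- pixel positioning camera image (2-D float array)
    first_px  -- (y, x) of the first target mapped positioning point
    second_px -- (y, x) of the second target mapped positioning point
    half      -- assumed half-size of the image point imaging area
    """
    first = np.asarray(first_px, dtype=float)
    second = np.asarray(second_px, dtype=float)
    # Extend the first camera pixel distance/direction past first_px.
    guess = first + (first - second)            # camera pixel to be corrected
    y0, x0 = np.rint(guess).astype(int)
    ys = np.clip(np.arange(y0 - half, y0 + half + 1), 0, cam_img.shape[0] - 1)
    xs = np.clip(np.arange(x0 - half, x0 + half + 1), 0, cam_img.shape[1] - 1)
    patch = cam_img[np.ix_(ys, xs)]             # image point imaging area
    w = patch / max(float(patch.sum()), 1e-12)  # brightness weights
    cy = float((w * ys[:, None]).sum())         # brightness-weighted centroid
    cx = float((w * xs[None, :]).sum())
    return (cy, cx)
```

This plugs into the previous sketch as `locate_fn=lambda p1, p2: locate_by_prediction(cam_img, p1, p2)`. The centroid stage is what makes the scheme tolerant of optical distortion: the linear prediction only has to land inside the bright spot, and the weighted centroid then snaps to the spot's actual centre.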
Description
Luminance extraction method, electronic device, storage medium, and program product

Technical Field

The present application relates to the field of near-eye display technologies, and in particular to a luminance extraction method, an electronic device, a storage medium, and a program product.

Background

With the popularization of near-eye display devices such as virtual reality and augmented reality headsets, the brightness uniformity and color fidelity of the display screen have become key metrics of user-experience quality. To give the user an immersive, comfortable, and consistent visual experience, the brightness of each screen pixel on the display screen must be corrected with high precision, eliminating the brightness non-uniformity caused by manufacturing tolerances, optical-system attenuation, or assembly deviations; this is of great importance to product display quality and user satisfaction.

Currently, acquiring screen-pixel-level brightness data for correction typically relies on analyzing images captured by a camera located at the exit pupil frame. However, owing to the inherent, complex distortion of the near-eye display device's optics, the image captured by the camera suffers significant, non-linear distortion in both geometry and brightness distribution. This distortion introduced by the optical path causes systematic deviations between the luminance information represented in the camera image and the original luminance actually emitted by the display screen, deviations that are difficult to trace back and match directly. Traditional methods attempt to correct the distortion with complex image-processing algorithms, but the correction is computationally expensive and inefficient; more importantly, the accuracy and generality of the correction model are difficult to guarantee, new errors are easily introduced, the reliability of the extracted brightness data is reduced, and the requirements of high-precision brightness correction cannot be met.

Therefore, how to reliably and efficiently obtain accurate brightness information that exactly matches the original pixel distribution of the screen, when the camera image is severely distorted by the optical system, has become a core technical bottleneck for high-quality brightness correction in the near-eye display field.

Disclosure of Invention

The main purpose of the present application is to provide a luminance extraction method, an electronic device, a storage medium, and a program product, aiming to solve the technical problem of reliably and efficiently obtaining accurate brightness information that exactly matches the original pixel distribution of the screen when the camera image is severely distorted by the optical system.
In order to achieve the above object, the present application provides a luminance extraction method, which comprises: when an instruction to extract the brightness of a display screen of a near-eye display device through a target camera is detected, determining a mapping relation between camera pixels of the target camera and screen pixels of the display screen, wherein the target camera is positioned at an exit pupil frame of the near-eye display device; displaying a calibrated brightness image on the display screen, and shooting with the target camera to obtain a calibrated brightness camera image corresponding to the calibrated brightness image; and extracting brightness information corresponding to each screen pixel on the display screen from the calibrated brightness camera image based on the mapping relation.

In addition, to achieve the above object, the present application also provides an electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program being configured to implement the steps of the luminance extraction method described above.

Furthermore, to achieve the above object, the present application also proposes a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the luminance extraction method described above.

Furthermore, to achieve the above object, the present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the luminance extraction method described above.

The embodiment of the application provides a luminance extraction method, an electronic device, a storage medium, and a program product, relating to the technical field of near-eye display. The luminance extraction method comprises the following steps: when an instruction to extract the brightness of the display screen of the near-eye display device through the target camera is detected, determining a mapping relation between camera pixels of the target camera and screen pixels of the display screen; displaying a calibrated brightness image on the display screen, and shooting with the target camera to obtain a corresponding calibrated brightness camera image; and extracting, based on the mapping relation, the brightness information corresponding to each screen pixel on the display screen from the calibrated brightness camera image.