US-20260128002-A1 - TRAINING SYSTEM, TRAINING METHOD, DIMMING SYSTEM, DIMMING METHOD, COMPUTER-READABLE RECORDING MEDIUM WITH STORED PROGRAM, AND NON-TRANSITORY COMPUTER PROGRAM PRODUCT
Abstract
A training system, a training method, a dimming system, a dimming method, a computer-readable recording medium with a stored program, and a non-transitory computer program product are provided. The training method is used for training a to-be-trained neural network module and is performed by a processing module. The to-be-trained neural network module includes a target image generation module, a to-be-trained neural network, and a light distribution generation module. The training method includes performing the following steps in one training epoch: repeatedly performing: using a training image in a training set as an input image, performing a convolution operation on an intermediate compensation image of the training image and light distribution to generate a convolutional image, and obtaining a loss based on the convolutional image and a target image; and updating a plurality of parameters based on an average of all losses obtained in the foregoing step and an update algorithm.
Inventors
- Cheng-Chun Wang
Assignees
- REALTEK SEMICONDUCTOR CORP.
Dates
- Publication Date
- 20260507
- Application Date
- 20251023
- Priority Date
- 20241104
Claims (17)
- 1 . A training system, comprising a processing module and a to-be-trained neural network module, wherein the to-be-trained neural network module comprises: a target image generation module, configured to receive an input image and obtain a target image based on a target calculation procedure; a to-be-trained neural network having a plurality of parameters, and the to-be-trained neural network being configured to generate an intermediate compensation image based on the input image and the parameters; and a light distribution generation module, configured to generate light distribution based on the input image; and the processing module is configured to perform the following steps in one training epoch: (a) repeatedly performing the following operations: using a training image in a training set as the input image; performing a convolution operation on the intermediate compensation image of the training image and the light distribution to generate a convolutional image; and obtaining a loss based on the convolutional image and the target image; and (b) updating the parameters based on an average of all losses obtained in step (a) and an update algorithm.
- 2 . The training system according to claim 1 , wherein the target image generation module comprises a depth information model, and the target calculation procedure comprises: (c) obtaining a depth image corresponding to the input image based on the depth information model; and (d) adjusting the input image based on depth information of the depth image to obtain the target image.
- 3 . The training system according to claim 2 , wherein step (d) comprises: (d1) increasing brightness of the input image based on a brightness adjustment coefficient to obtain a high brightness image; (d2) adjusting a gamma value of the input image based on an S curve to obtain a high contrast image; and (d3) performing a point-wise multiplication operation on the depth image and the high contrast image to obtain a depth-adjusted high contrast image; subtracting the depth image from an all-ones tensor to obtain a difference tensor, and performing the point-wise multiplication operation on the difference tensor and the high brightness image to obtain a depth-adjusted high brightness image; and performing a point-wise addition operation on the depth-adjusted high contrast image and the depth-adjusted high brightness image to obtain the target image.
- 4 . The training system according to claim 2 , wherein the depth information model comprises a MiDaS model.
- 5 . The training system according to claim 1 , wherein a mean square error is used as the loss.
- 6 . The training system according to claim 1 , wherein the light distribution generation module comprises a backlight decision module, the backlight decision module is configured to receive the input image and generate a plurality of backlight source intensities based on the input image, and the light distribution generation module is configured to generate the light distribution based on the backlight source intensities.
- 7 . A dimming system using the parameters trained by the training system according to claim 1 , comprising: a dimming system backlight decision module, configured to receive an image and generate a plurality of backlight source intensities of the image based on the image; and a neural network module, comprising a neural network, wherein the neural network is configured to have a same architecture as the to-be-trained neural network, and the neural network module is configured to store the parameters and is configured to receive the image and generate a compensation image of the image based on the neural network and the parameters.
- 8 . The dimming system according to claim 7 , wherein the dimming system comprises: a backlight driver module, configured to receive the backlight source intensities and drive a backlight source module of a display based on the backlight source intensities; and a panel driver module, configured to receive the compensation image and drive a display panel of the display based on the compensation image.
- 9 . A training method, used for training a to-be-trained neural network module and performed by a processing module, wherein the to-be-trained neural network module comprises: a target image generation module, configured to receive an input image and obtain a target image based on a target calculation procedure; a to-be-trained neural network having a plurality of parameters, and the to-be-trained neural network being configured to generate an intermediate compensation image based on the input image and the parameters; and a light distribution generation module, configured to generate light distribution based on the input image, and the training method comprises performing the following steps in one training epoch: (a) repeatedly performing the following operations: using a training image in a training set as the input image; performing a convolution operation on the intermediate compensation image of the training image and the light distribution to generate a convolutional image; and obtaining a loss based on the convolutional image and the target image; and (b) updating the parameters based on an average of all losses obtained in step (a) and an update algorithm.
- 10 . The training method according to claim 9 , wherein the target image generation module comprises a depth information model, and the target calculation procedure comprises: (c) obtaining a depth image corresponding to the input image based on the depth information model; and (d) adjusting the input image based on depth information of the depth image to obtain the target image.
- 11 . The training method according to claim 10 , wherein step (d) comprises: (d1) increasing brightness of the input image based on a brightness adjustment coefficient to obtain a high brightness image; (d2) adjusting a gamma value of the input image based on an S curve to obtain a high contrast image; and (d3) performing a point-wise multiplication operation on the depth image and the high contrast image to obtain a depth-adjusted high contrast image; subtracting the depth image from an all-ones tensor to obtain a difference tensor, and performing the point-wise multiplication operation on the difference tensor and the high brightness image to obtain a depth-adjusted high brightness image; and performing a point-wise addition operation on the depth-adjusted high contrast image and the depth-adjusted high brightness image to obtain the target image.
- 12 . The training method according to claim 10 , wherein the depth information model comprises a MiDaS model.
- 13 . The training method according to claim 9 , wherein a mean square error is used as the loss.
- 14 . A dimming method using the parameters trained by the training method according to claim 9 , comprising: receiving an image and generating a plurality of backlight source intensities of the image based on the image by a dimming system backlight decision module; and receiving the image and generating a compensation image of the image based on a neural network and the parameters by a neural network module, wherein the neural network has a same architecture as the to-be-trained neural network.
- 15 . The dimming method according to claim 14 , wherein the dimming method comprises: receiving the backlight source intensities and driving a backlight source module of a display based on the backlight source intensities by a backlight driver module; and receiving the compensation image and driving a display panel of the display based on the compensation image by a panel driver module.
- 16 . A non-transitory computer-readable recording medium with a stored program, wherein after the stored program is loaded and executed by a processing unit, the method according to claim 9 is completed.
- 17 . A non-transitory computer-readable program product, storing at least one instruction, wherein when the at least one instruction is executed by a processing unit, the processing unit is enabled to perform the method according to claim 9 .
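As an illustrative sketch only (not part of the claimed subject matter), the training epoch recited in claims 1, 3, and 9 can be expressed in Python with NumPy. The smoothstep S curve, the brightness coefficient, the 3×3 light-distribution kernel, and the single-scalar stand-in for the to-be-trained neural network are all hypothetical choices made for brevity; the specification does not fix any of them.

```python
import numpy as np

def s_curve(x):
    # Hypothetical S curve (smoothstep) standing in for the gamma adjustment of step (d2).
    return 3 * x**2 - 2 * x**3

def target_image(img, depth, k_bright=1.3):
    # Claim 3: blend a brightened image and a high-contrast image by depth.
    hb = np.clip(k_bright * img, 0.0, 1.0)   # (d1) high brightness image
    hc = s_curve(img)                        # (d2) high contrast image
    return depth * hc + (1.0 - depth) * hb   # (d3) depth-weighted point-wise blend

def conv2d(img, kernel):
    # Minimal 'same'-size 2-D convolution with zero padding.
    kh, kw = kernel.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out

def train_epoch(train_set, depth_set, w, kernel, lr=0.5):
    # Step (a): accumulate one loss per training image; step (b): one update
    # from the average.  The 'network' here is a single scalar weight w, so the
    # gradient of the mean square error (claims 5 and 13) is analytic.
    grads, losses = [], []
    for img, depth in zip(train_set, depth_set):
        comp = w * img                        # intermediate compensation image
        conv_img = conv2d(comp, kernel)       # convolve with the light distribution
        err = conv_img - target_image(img, depth)
        losses.append(np.mean(err**2))
        grads.append(2 * np.mean(err * conv2d(img, kernel)))
    w -= lr * np.mean(grads)                  # step (b): update from averaged losses
    return w, np.mean(losses)
```

On each epoch, step (a) collects one loss per training image and step (b) performs a single parameter update from their average, matching the two-step structure of claims 1 and 9; a real implementation would replace the scalar weight with a neural network and the hand-written gradient with an update algorithm such as backpropagation.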
Description
CROSS-REFERENCE TO RELATED APPLICATION

This non-provisional application claims priority under 35 U.S.C. § 119(a) to Patent Application No. 113142193 filed in Taiwan, R.O.C. on Nov. 4, 2024, the entire contents of which are hereby incorporated by reference.

BACKGROUND

Technical Field

The present invention relates to the dimming field, and in particular, to a technology of applying a neural network to dimming.

Related Art

In a current local dimming system, the process roughly proceeds as follows. First, backlight decision is performed: depending on the algorithm design, the backlight source intensity of each block may be decided from the maximum pixel value of the block or from its average pixel value. Next, light spread modeling is performed: the light distribution of the backlight is calculated based on the backlight source intensities. Finally, pixel compensation is performed: pixels are adjusted based on the backlight source intensities to keep the displayed image stable. In an ideal situation, image contrast is enhanced in this manner.

However, the backlight adjustment and the corresponding pixel compensation have some problems. First, in some low-brightness scenarios, because pixel compensation calculates the amount to be compensated from the local backlight, dark edges may be produced in the conventional manner. In addition, pixel compensation is decided based only on the local backlight, and the concept of depth of field is lacking in this process. As a result, more distant parts of the image appear darker, resulting in a poor depth-of-field effect.

SUMMARY

In view of this, some embodiments of the present invention provide a training system, a training method, a dimming system, a dimming method, a computer-readable recording medium with a stored program, and a non-transitory computer program product, to address the foregoing technical problems.

Some embodiments of the present invention provide a training system, including a processing module and a to-be-trained neural network module.
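For context, the conventional per-block backlight decision described in the Related Art (deciding each block's intensity from its maximum pixel or its average pixel) can be sketched as follows; the block size and the policy names are illustrative choices, not taken from the specification:

```python
import numpy as np

def backlight_decision(img, block=4, policy="max"):
    # Decide one backlight source intensity per block, either from the block's
    # maximum pixel or from its average pixel, as in conventional local dimming.
    h, w = img.shape
    out = np.zeros((h // block, w // block))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            blk = img[i * block:(i + 1) * block, j * block:(j + 1) * block]
            out[i, j] = blk.max() if policy == "max" else blk.mean()
    return out
```

The maximum-pixel policy guarantees that no pixel in the block is brighter than its backlight allows, at the cost of higher power; the average-pixel policy saves power but relies on pixel compensation to recover bright details, which is where the dark-edge and depth-of-field problems above arise.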
The to-be-trained neural network module includes: a target image generation module, configured to receive an input image and obtain a target image based on a target calculation procedure; a to-be-trained neural network having a plurality of parameters, and the to-be-trained neural network being configured to generate an intermediate compensation image based on the input image and the parameters; and a light distribution generation module, configured to generate light distribution based on the input image. The processing module is configured to perform the following steps in one training epoch: repeatedly performing the following operations: using a training image in a training set as the input image; performing a convolution operation on the intermediate compensation image of the training image and the light distribution to generate a convolutional image; and obtaining a loss based on the convolutional image and the target image; and updating the parameters based on an average of all losses obtained in the foregoing step and an update algorithm. Some embodiments of the present invention provide a training method, used for training a to-be-trained neural network module and performed by a processing module. The to-be-trained neural network module includes: a target image generation module, configured to receive an input image and obtain a target image based on a target calculation procedure; a to-be-trained neural network having a plurality of parameters, and the to-be-trained neural network being configured to generate an intermediate compensation image based on the input image and the parameters; and a light distribution generation module, configured to generate light distribution based on the input image.
The training method includes performing the following steps in one training epoch: repeatedly performing the following operations: using a training image in a training set as the input image; performing a convolution operation on the intermediate compensation image of the training image and the light distribution to generate a convolutional image; and obtaining a loss based on the convolutional image and the target image; and updating the parameters based on an average of all losses obtained in the foregoing step and an update algorithm. Some embodiments of the present invention provide a dimming system. The dimming system includes a dimming system backlight decision module and a neural network module. The dimming system backlight decision module is configured to receive an image and generate a plurality of backlight source intensities of the image based on the image. The neural network module includes a neural network, the neural network is configured to have a same architecture as the to-be-trained neural network, and the neural network module is configured to store parameters obtained by the training system through training, and is configured to receive the image and generate a compensation image of the image based on the neural network and the parameters. So
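As a hypothetical end-to-end sketch of the dimming system at inference time (claims 7 and 14): the dimming system backlight decision module emits per-block intensities destined for the backlight driver module, while the deployed network, reduced here to a single trained scalar for brevity, produces the compensation image destined for the panel driver module. The maximum-pixel policy and the block size are illustrative assumptions:

```python
import numpy as np

def backlight_decision(img, block=4):
    # Dimming system backlight decision module: one intensity per block
    # (maximum-pixel policy; block size is an illustrative assumption).
    h, w = img.shape
    return img.reshape(h // block, block, w // block, block).max(axis=(1, 3))

def dimming_step(img, trained_w):
    # Neural network module: the deployed network shares the trained
    # architecture and parameters; a single scalar weight stands in here.
    intensities = backlight_decision(img)   # sent to the backlight driver module
    compensation = trained_w * img          # sent to the panel driver module
    return intensities, compensation
```

Unlike the training system, no loss is computed and no parameters are updated here; the stored parameters are applied as-is to each incoming frame.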