CN-121985211-A - USB camera equipment capable of self-adapting to light adjustment and control method thereof
Abstract
The application relates to the technical field of camera control, and in particular to a USB camera device with adaptive light adjustment and a control method thereof. The method comprises: acquiring and storing scene characteristic information; when the camera is subsequently started, matching and verifying real-time environmental data against the stored scene characteristic information, and triggering a special regulation mode when the two are verified to be consistent; identifying a target visual attention area from the picture and assigning it visual importance weights; calculating a reference gain and distributing it differentially to generate a gain regulation map; applying the gain regulation map to adjust the picture, performing adaptive visual fusion between the target visual attention area and its adjacent areas, and outputting the final picture. By memorizing and recognizing the unique light intensity gradient and spatial structure formed when the camera is installed at a corner, the application achieves differentiated gain adjustment of the visual attention area, thereby resolving the imaging contradiction in weak-light-gradient scenes.
Inventors
- HUANG CHANGYONG
- LIU QIANG
- XIAO ZHENGHUI
Assignees
- 深圳市中维奥柯科技有限公司
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2025-12-22
Claims (9)
- 1. A USB camera control method with adaptive light adjustment, characterized by comprising the following steps: acquiring and storing scene characteristic information of the specific scene in which the camera is installed for the first time, wherein the scene characteristic information comprises spatial structure characteristics and light intensity gradient characteristics; when the camera is subsequently started, matching and verifying environmental data acquired in real time against the stored scene characteristic information, and triggering a special regulation mode corresponding to the specific scene when the verification is consistent; in the special regulation mode, locating a scene reference boundary in the real-time picture according to the spatial structure characteristics, identifying a target visual attention area from the picture based on the light intensity gradient characteristics and the scene reference boundary, and assigning visual importance weights to the target visual attention area; calculating a reference gain according to the illumination level of the target visual attention area and the visual importance weights, and differentially distributing the reference gain across the picture space based on the light intensity gradient characteristics to generate a gain regulation map; and applying the gain regulation map to adjust the picture, performing adaptive visual fusion between the target visual attention area and its adjacent areas according to the visual importance weights, and outputting the final picture.
- 2. The method according to claim 1, wherein acquiring and storing the scene characteristic information of the specific scene in which the camera is first installed comprises: analyzing the picture captured at first installation, and automatically detecting spatial structure characteristics that satisfy preset structure judgment conditions, so as to identify the spatial layout of the specific scene; when the specific scene is identified, calculating light intensity gradient distribution characteristics based on the light intensity variation from the identified reference position to a preset reference area in the picture; and associating the identified spatial structure characteristics with the calculated light intensity gradient distribution characteristics and storing them together as the scene characteristic information.
- 3. The method of claim 1, wherein the matching and verification of the environmental data acquired in real time against the stored scene characteristic information comprises: obtaining, from the stored scene characteristic information, the spatial structure characteristics and light intensity gradient characteristics as a verification reference; analyzing the real-time picture, extracting real-time spatial structure characteristics, and comparing them for geometric consistency with the spatial structure characteristics in the verification reference; meanwhile, analyzing the light intensity distribution of the real-time picture, calculating real-time light intensity gradient characteristics, and verifying their gradient consistency with the light intensity gradient characteristics in the verification reference; and judging that the verification is consistent, and thereby triggering the special regulation mode, only when the real-time environmental data indicate a weak-light condition and the results of the geometric consistency comparison and the gradient consistency verification both meet a preset confidence threshold; wherein the weak-light condition means that the ambient illuminance measured by an ambient light sensor is lower than a preset illuminance threshold, or that the average brightness of the real-time picture is lower than a preset brightness threshold.
- 4. The method of claim 1, wherein identifying the target visual attention area from the picture while assigning visual importance weights to it comprises: locating the scene reference boundary in the real-time picture according to the spatial structure characteristics; taking the scene reference boundary as a reference, delimiting a candidate area in the real-time picture in combination with the light intensity variation pattern indicated by the light intensity gradient characteristics; performing multi-scale contrast analysis on the candidate area; and screening out, according to the result of the multi-scale contrast analysis, a region containing effective visual detail as the target visual attention area, and assigning corresponding visual importance weights to each sub-region according to the detail contrast levels of the different sub-regions within the target visual attention area.
- 5. The method of claim 4, wherein generating the gain regulation map comprises: determining, according to the average illumination level of the target visual attention area, the reference gain value required to meet a preset visual comfort threshold; establishing, based on the light intensity variation pattern characterized by the light intensity gradient characteristics, a gain attenuation model extending from the boundary of the target visual attention area to the peripheral area; and, in combination with the visual importance weights assigned to the sub-regions of the target visual attention area, applying weight compensation to the reference gain value processed by the gain attenuation model, and applying an additional gain boost to sub-regions whose visual importance weights exceed a preset weight threshold, so as to generate the gain regulation map.
- 6. The method of claim 5, wherein performing the adaptive visual fusion after generating the gain regulation map comprises: determining a fusion transition zone according to the boundary of the target visual attention area and the distribution of the visual importance weights; generating a fusion coefficient map according to the variation of the gain regulation map within the fusion transition zone; and performing weighted synthesis of the target visual attention area and its adjacent areas using the fusion coefficient map.
- 7. The method as recited in claim 6, further comprising: generating a parameter optimization signal based on the user's interaction behavior with the final picture and objective quality evaluation data of the final picture; inputting the parameter optimization signal, together with the distribution of the visual importance weights and the light intensity gradient characteristics, into a parameter optimization model; and the parameter optimization model generating, according to preset collaborative optimization logic, a collaborative adjustment instruction for the local gain parameters in the gain regulation map and the width parameters of the fusion transition zone; wherein, when the parameter optimization signal indicates intensified attention to a specific sub-region of the target visual attention area, the collaborative optimization logic correspondingly raises the gain parameter of that sub-region in the gain regulation map according to its visual importance weight, and adjusts in linkage, according to the light intensity gradient characteristics, the width parameter of the fusion transition zone at that sub-region's boundary.
- 8. The method of claim 7, wherein the parameter optimization signal comprises an explicit optimization signal and an implicit optimization signal; the explicit optimization signal is generated by capturing the user's active editing operations on the gain regulation map or on the configuration parameters of the fusion transition zone; the implicit optimization signal is generated by analyzing the user's pattern of sustained attention to a specific sub-region of the final picture; and the parameter optimization model differentially fuses the explicit and implicit optimization signals according to a preset signal weight configuration rule, and takes the fused composite signal as the driving input for updating the collaborative optimization logic.
- 9. A USB camera device with adaptive light adjustment, comprising a processor and a memory, the memory storing a computer program which, when executed by the processor, implements the method of any one of claims 1-8.
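The chain of steps in claims 1, 5 and 6 — derive a reference gain from the attention area's illumination level, attenuate it outward following the light intensity gradient, compensate sub-regions by their importance weights, then blend across a fusion transition zone — can be sketched in simplified form. The sketch below is one illustrative reading of the claims, not the application's implementation: the comfort target (0.45), decay rate, blend width, region representation and every function name are assumptions introduced here.

```python
import math

def distance_to_region(h, w, region):
    # Euclidean distance from every pixel to the nearest pixel of the
    # attention region (brute force; adequate for a small sketch frame).
    return [[min(math.hypot(y - ry, x - rx) for ry, rx in region)
             for x in range(w)] for y in range(h)]

def build_gain_map(frame, region, weights, comfort_target=0.45, decay=0.6):
    # Claim 5, simplified: a reference gain lifts the attention region's
    # mean luma to an assumed visual-comfort target; an exponential
    # attenuation model spreads it toward the periphery; per-sub-region
    # importance weights (from claim 4's contrast analysis) compensate
    # the gain inside the region.
    h, w = len(frame), len(frame[0])
    mean_luma = sum(frame[y][x] for y, x in region) / len(region)
    base_gain = comfort_target / max(mean_luma, 1e-6)
    dist = distance_to_region(h, w, region)
    gmap = [[1.0 + (base_gain - 1.0) * math.exp(-decay * dist[y][x])
             for x in range(w)] for y in range(h)]
    for y, x in region:  # weight compensation inside the region
        gmap[y][x] = base_gain * weights.get((y, x), 1.0)
    return gmap

def apply_and_fuse(frame, gmap, region, blend_width=3.0):
    # Claim 6, simplified: fusion coefficients fall off linearly across
    # a transition zone so the boosted attention region blends smoothly
    # into the adjacent, less-adjusted picture.
    h, w = len(frame), len(frame[0])
    dist = distance_to_region(h, w, region)
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            alpha = max(0.0, min(1.0, 1.0 - dist[y][x] / blend_width))
            boosted = min(1.0, frame[y][x] * gmap[y][x])
            row.append(alpha * boosted + (1.0 - alpha) * frame[y][x])
        out.append(row)
    return out
```

On a uniformly dim frame with mean luma 0.1, this sketch lifts a small attention region to the 0.45 comfort target, while pixels outside the region receive progressively less gain and are blended back toward the original picture across the transition zone.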
Description
USB camera device with adaptive light adjustment and control method thereof

Technical Field

The application relates to the technical field of camera control, and in particular to a USB camera device with adaptive light adjustment and a control method thereof.

Background

As image acquisition devices widely used in video conferencing, online education, remote monitoring and other scenes, USB cameras have imaging quality that depends heavily on ambient light conditions. Under sufficient illumination, a camera can output a clear picture with true colors; under weak light, however, problems such as increased noise, loss of detail and color distortion easily arise, seriously degrading the visual experience and the effectiveness of information transmission. Various adaptive light adjustment schemes have therefore been proposed in the industry, with the aim of automatically improving the camera's imaging performance under poor illumination and improving user satisfaction across environments. Common adaptive light adjustment schemes fall mainly into two categories: adjustment based on global automatic gain control, in which the overall average brightness of the image is calculated and the gain of the image sensor is adjusted so that the average brightness of the output picture reaches a preset target; and brightness enhancement based on local image characteristics, in which dark areas in the picture are identified and selectively brightened to improve the visibility of local details. In scenes where the illumination distribution is relatively uniform or varies gently, these methods can improve the overall or local visibility of the picture to some extent.
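The second category — local, pixel-level brightening — can be reduced to a few lines. This is a generic illustration of the approach, not code from any cited scheme; the darkness threshold and boost factor are arbitrary assumptions.

```python
def local_brighten(frame, dark_threshold=0.2, boost=3.0):
    # Indiscriminate local enhancement of the kind described above:
    # every pixel below an assumed darkness threshold is multiplied by
    # a fixed boost, regardless of whether it holds useful content.
    # Output is clipped to the 0-1 luma range.
    return [[min(1.0, p * boost) if p < dark_threshold else p
             for p in row] for row in frame]
```

Applied to the neighboring pixel values 0.19 and 0.21, it returns roughly 0.57 and 0.21: two almost identical inputs end up far apart, which is the kind of hard brightness jump between enhanced and unenhanced areas discussed below.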
However, when a USB camera is fixedly installed at an indoor corner and faces a weak-light gradient distribution extending from the installation point into the scene, existing adjustment schemes fail or are even counterproductive. This defect does not arise in all low-light scenes; it is specific to the corner-mounted arrangement, whose light environment is often unique. In typical application scenarios such as home studies and small offices, users often place the camera at the included angle of two walls, facing a target area such as a desktop or working area inside the room, in order to save space or obtain a better viewing angle. The main light source is then usually located at the center of the room or beside the desk, while the corner where the camera sits is shielded by the walls and far from the light source, forming a gradient in which light intensity increases significantly from the corner toward the interior of the room. For example, the illumination near the corner wall may be very weak, while the illumination at the desk edge a meter away is noticeably stronger, and the center of the desktop is brighter still. This continuous variation from extremely dark to relatively bright constitutes a so-called gradient dim-light distribution. In such a scene, the existing global gain control method takes the average brightness of the whole picture as its adjustment target: the high gain required to brighten the extremely dark corner area inevitably saturates the signal of the brighter non-corner areas and destroys their detail, while conversely, if the gain is set with reference to the brighter areas, the corner area still cannot show effective detail because of insufficient illumination.
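This contradiction can be checked with a toy gradient. The luma values for corner, mid-room, desk edge and desk centre (0.02, 0.20, 0.50, 0.80 on a 0-1 scale), the visibility level of 0.30 and the average target of 0.40 are all assumptions made purely for the sake of arithmetic:

```python
def apply_gain(frame, gain):
    # Sensor output clips at 1.0, so a large global gain saturates
    # the brighter part of the picture.
    return [[min(1.0, p * gain) for p in row] for row in frame]

# Toy gradient dim-light scene: corner -> mid-room -> desk edge -> desk centre.
scene = [0.02, 0.20, 0.50, 0.80]

# Gain chosen to lift the corner to an assumed visible level of 0.30:
corner_gain = 0.30 / scene[0]                   # ~15x
bright_side = apply_gain([scene], corner_gain)  # desk edge and centre clip to 1.0

# Gain chosen from the global average brightness instead (target 0.40):
avg_gain = 0.40 / (sum(scene) / len(scene))     # ~1.05x
dark_side = apply_gain([scene], avg_gain)       # corner stays near-invisible
```

Either choice fails at one end of the gradient: targeting the corner saturates the desk area (values clipped to 1.0, detail gone), while targeting the average leaves the corner at about 0.021, effectively as dark as before.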
On the other hand, even when local enhancement techniques are adopted, conventional methods brighten indiscriminately on the basis of pixel gray levels, without regard to the user's actual visual attention points in the scene: what the user really cares about is the information in the working area, such as the desktop, rather than the large shadow in the corner. Such untargeted enhancement not only wastes computing resources but can also, by over-strengthening dark areas of no interest, create a hard brightness jump between the area of interest and its surroundings; when this jump exceeds the range over which the human eye adapts to continuous brightness change, it causes visual fatigue. In addition, existing schemes generally need to re-analyze the geometric and illumination characteristics of the scene every time the camera is started. This process is time-consuming, is easily misled by objects appearing temporarily in the picture, fails to exploit the stability of the scene structure once the camera is fixedly installed, and the repeated calculation and adjustment