CN-122024223-A - Binocular vision naked eye 3D image processing method, device and system

CN 122024223 A

Abstract

The invention discloses a binocular vision naked eye 3D image processing method, device and system, in the technical field of naked eye 3D display. By defining foreground and background regions, the method constructs a parameter-extraction and figure-of-merit computation system covering the parallax, scene and splicing dimensions. Combined with scene-specific preset thresholds and weight factors, it achieves a comprehensive and accurate evaluation of naked eye 3D image quality, overcoming the single-index evaluation of the prior art and accurately locating the core defects that degrade the stereoscopic effect.

Inventors

  • Xie Yu
  • Tian Ruoxian
  • Xu Qiang
  • Li Rongrong
  • Sun Lubin
  • Shen Long

Assignees

  • 惜贤科技有限公司

Dates

Publication Date
2026-05-12
Application Date
2026-04-13

Claims (9)

  1. A binocular vision naked eye 3D image processing method, characterized by comprising the following steps: real-time image data analysis: identifying and preprocessing the binocular parallax composite image projected onto a large screen, extracting parallax data, scene data and splicing data, comprehensively analyzing the three to obtain an image quality estimate, presetting an image quality estimate threshold, and marking images below the threshold as images to be processed; information integration: integrating the core information of the image to be processed, generating a standardized problem message, and sending it to the operation and maintenance terminal in real time; strategy matching and adjustment: after the operation and maintenance terminal receives the message, matching the feature vector of the image to be processed against a historical adjustment strategy library and selecting the strategy with the highest similarity as the application strategy; and image data recheck: projecting the calibrated image onto a large-screen test area for recheck; if the image quality estimate is above the minimum allowable value, the image is added to the play queue, and if it is below the minimum allowable value, it is sent to the administrator terminal.
  2. The binocular vision naked eye 3D image processing method according to claim 1, wherein the parallax data are analyzed as follows: performing target recognition on the image, delimiting a foreground region and a background region, generating pixel coordinate masks for the two region types, and determining the feature point selection range; randomly selecting a plurality of feature points in the foreground region, reading the horizontal pixel coordinate of each point in the L image and the R image, and calculating the pixel offset Δx of each feature point; using the preset outdoor large-screen parallax conversion coefficient, converting the depth value corresponding to each feature point into a physical parallax value, and taking the arithmetic mean of the feature point parallax values as the foreground parallax value; presetting a foreground parallax standard value and taking the absolute difference between the foreground parallax value and the standard value to obtain the foreground parallax offset value; applying the same logic to the background region: selecting a plurality of feature points, calculating pixel offsets, converting them to parallax values, and taking the mean as the background parallax value; taking the absolute difference between the background parallax value and its preset standard value to obtain the background parallax offset value; obtaining the variance of the foreground feature point parallax values by the formula σ² = (1/n)·Σᵢ(dᵢ − d̄)², where n is the total number of feature points, dᵢ is the parallax value of a single feature point, and d̄ is the foreground parallax mean, and then deriving the parallax uniformity from this variance; and comprehensively processing the foreground parallax offset value, the background parallax offset value and the parallax uniformity to obtain the parallax figure of merit.
  3. The binocular vision naked eye 3D image processing method according to claim 1, wherein the scene data are analyzed as follows: counting the gray values of all pixels in the left-eye view image and taking their arithmetic mean; extracting the maximum brightness from the hardware parameters and computing the image brightness value from the two; taking the absolute difference between the image brightness value and a preset image brightness standard value to obtain the image brightness deviation; inputting the grating period, screen width and standard viewing distance for the screen's scene, setting a standard horizontal angle range with a preset angle step, computing the stereoscopic separation of the split left-eye and right-eye images angle by angle, presetting a separation standard, screening out the continuous angle interval that meets the standard, and taking the symmetric midpoint of the interval to obtain the effective viewing angle range; computing gray values pixel by pixel for the left-eye and right-eye view images and taking the averages, denoted respectively as the left-eye and right-eye average gray values, from which the picture contrast value is computed; taking the absolute difference between the picture contrast value and a preset standard contrast value to obtain the contrast deviation; and comprehensively processing the image brightness deviation, the effective viewing angle range and the contrast deviation to obtain the scene figure of merit.
  4. The binocular vision naked eye 3D image processing method according to claim 1, wherein the splicing data are analyzed as follows: from the spliced binocular parallax composite image, intercepting local parallax sub-images several pixels wide on either side of a splice seam, and performing pixel-level registration of the two sub-images so that their edge pixels correspond one to one; traversing all registered pixel pairs, computing point by point the parallax values of the left and right sub-images at each spatial position, taking the absolute value to obtain the parallax mutation value of each pixel, and averaging to obtain the average parallax mutation value; the point-by-point formula is Δdᵢ = |d_L,i − d_R,i|, where Δdᵢ is the parallax mutation value of the i-th pixel, d_L,i is the parallax value of the i-th pixel of the left sub-image, and d_R,i is the parallax value of the i-th pixel of the right sub-image; defining a set of parallax mutation intervals, each corresponding to a feathering width value, and matching the average parallax mutation value to its interval to obtain the feathering width value; for any two adjacent splicing units, identifying the boundary area, uniformly selecting several groups of spatially corresponding pixel points within it, counting the parallax value of each corresponding pair, computing the absolute difference of each pair's parallax values, and averaging to obtain the parallax deviation value; and comprehensively processing the average parallax mutation value, the feathering width value and the parallax deviation value to obtain the splicing figure of merit.
  5. The binocular vision naked eye 3D image processing method according to claim 1, wherein the image to be processed is determined as follows: setting reference values for the parallax figure of merit, the scene figure of merit and the splicing figure of merit; comprehensively processing the three figures of merit to obtain the image quality estimate; and presetting a minimum allowable value for the image quality estimate and marking any image whose estimate falls below it as an image to be processed.
  6. The binocular vision naked eye 3D image processing method according to claim 1, wherein the standardized problem message is formed as follows: the push message contains the image's unique identifier, the original values of the parallax, scene and splicing data, the standard problem type, and the specific data deviation corresponding to that problem; H-1, generating the unique image id, extracting the measured parallax, scene and splicing parameters, and extracting the reference thresholds for the corresponding scene; H-2, comparing the measured parameters with the reference thresholds and recording, in a unified format, whether each is above standard, below standard or up to standard; H-3, matching preset standard problem types according to the parameter deviations and summarizing the problem list; and H-4, integrating the image id, measured parameters, deviation descriptions and problem types into a standardized problem message and pushing it to the operation and maintenance terminal in real time.
  7. The binocular vision naked eye 3D image processing method according to claim 1, wherein the application strategy is determined as follows: based on the received standardized problem message, computing the absolute differences between the parallax, scene and splicing figures of merit and their respective reference values, used respectively as the parallax offset value, the scene offset value and the splicing offset value; after multiplying each offset value by its preset weight factor, taking the figure-of-merit type with the largest weighted value as the problem type, extracting the parameter offsets and the problem type as core features, and converting them into a structured feature vector according to preset rules; extracting the feature vectors of all cases in the historical adjustment strategy library, each case being associated with its adjustment parameters and adjustment effect label; and computing, via a cosine similarity algorithm, the similarity between the feature vector of the image to be processed and each historical case's feature vector one by one, and selecting the tuning strategy of the most similar historical case as the application strategy for the image to be processed.
  8. A binocular vision naked eye 3D image processing apparatus, applying the binocular vision naked eye 3D image processing method according to any one of claims 1-7, comprising: a high-definition image acquisition card, for capturing the binocular parallax composite image; a computing and processing device, for preprocessing the image, extracting parameters and matching application strategies; a projection device, for projecting the adjusted 3D image for the recheck test; and a data storage device, for storing the historical adjustment strategy library and the image processing records.
  9. A binocular vision naked eye 3D image processing system, applying the binocular vision naked eye 3D image processing method according to any one of claims 1-7, comprising: an image data analysis module, which identifies and preprocesses the binocular parallax composite image, extracts parallax data, scene data and splicing data, comprehensively analyzes them to obtain an image quality estimate, presets an image quality estimate threshold, and marks images below the threshold as images to be processed; an information integration module, which integrates the core information of the image to be processed, generates a standardized problem push message and sends it to the operation and maintenance terminal in real time; a strategy matching module, which, after the operation and maintenance terminal receives the message, matches the feature vector of the image to be processed against the historical adjustment strategy library and selects the strategy with the highest similarity as the application strategy; and a data recheck module, which projects the adjusted image onto the large-screen test area for recheck, adds the image to the play queue if its quality estimate is above the minimum allowable value, and sends it to the administrator terminal if the estimate is below that value.
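The weighted-offset and cosine-similarity matching described in claim 7 can be sketched as follows. This is a minimal illustration, not the patented implementation: the weight factors, reference values, feature-vector layout and strategy names are all illustrative assumptions.

```python
import math

# Illustrative weight factors and reference values for the three figures of
# merit; actual values would be preset per claim 7 and are not given here.
WEIGHTS = {"parallax": 0.5, "scene": 0.3, "splice": 0.2}
REFERENCE = {"parallax": 1.0, "scene": 1.0, "splice": 1.0}
KEYS = ["parallax", "scene", "splice"]

def build_feature_vector(merits):
    """Compute absolute offsets from the reference values, weight them,
    take the dominant merit as the problem type, and pack everything into
    a structured feature vector (offsets + one-hot problem type)."""
    offsets = {k: abs(merits[k] - REFERENCE[k]) for k in KEYS}
    weighted = {k: offsets[k] * WEIGHTS[k] for k in KEYS}
    problem_type = max(weighted, key=weighted.get)
    vec = [offsets[k] for k in KEYS] + [1.0 if k == problem_type else 0.0 for k in KEYS]
    return vec, problem_type

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def match_strategy(vec, history):
    """Select the adjustment strategy of the most similar historical case."""
    best = max(history, key=lambda case: cosine_similarity(vec, case["vector"]))
    return best["strategy"]
```

A case library would hold one `{"vector": ..., "strategy": ...}` entry per historical adjustment, so the method degrades gracefully: with no exact match, the nearest prior case still supplies a starting strategy.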

Description

Binocular vision naked eye 3D image processing method, device and system

Technical Field

The invention relates to the technical field of naked eye 3D display, in particular to a binocular vision naked eye 3D image processing method, device and system.

Background

With the iteration of display technologies and the growing demand for immersive experiences, naked eye 3D technology has moved from the laboratory to large-scale commercial use and is widely applied to outdoor advertising screens, intelligent exhibitions and the like. Its core advantage is that stereoscopic visual effects can be presented without wearing auxiliary equipment; the key is to reconstruct binocular parallax by simulating a real light field so that the viewer obtains a natural stereoscopic experience. In practical large-size screen applications, however, the image quality directly determines the viewing experience, and parallax rationality, scene suitability and splicing fluency must all be considered at once, which is the core requirement for the technology to land.
The prior art nevertheless has the following shortcomings. First, it lacks a systematic multidimensional quantitative evaluation system: it relies on a single index to judge image quality, does not cover key dimensions such as parallax uniformity, splice-seam parallax mutation and effective viewing angle, renders problem judgment one-sided, and makes it difficult to accurately locate the core defects affecting the stereoscopic effect. Second, existing adjustment mostly relies on manual trial and error, lacks a mechanism for reusing historical adjustment experience, is inefficient, and struggles to keep up with the dynamic image requirements of real-time large-screen projection; meanwhile, the multi-unit collaborative optimization problem of spliced screens is prominent, and parallax consistency deviations, feathering-width mismatches and the like readily cause image splitting, ghosting or visual fatigue, seriously impairing the immersive experience. A binocular vision naked eye 3D image processing method, device and system are therefore provided.

Disclosure of Invention

To overcome the defects in the prior art, the embodiments of the invention provide a binocular vision naked eye 3D image processing method, device and system.
To achieve the above purpose, the present invention provides the following technical solution: a binocular vision naked eye 3D image processing method comprising the following steps: real-time image data analysis: identifying and preprocessing the binocular parallax composite image projected onto a large screen, extracting parallax data, scene data and splicing data, comprehensively analyzing the three to obtain an image quality estimate, presetting an image quality estimate threshold, and marking images below the threshold as images to be processed; information integration: integrating the core information of the image to be processed, generating a standardized problem message and sending it to the operation and maintenance terminal in real time; strategy matching and adjustment: after the operation and maintenance terminal receives the message, matching the feature vector of the image to be processed against a historical adjustment strategy library and selecting the strategy with the highest similarity as the application strategy; and image data recheck: projecting the calibrated image onto a large-screen test area for recheck; if the image quality estimate is above the minimum allowable value, the image is added to the play queue, and if it is below that value, it is sent to the administrator terminal.
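The analyze-then-triage loop above can be sketched as a few lines of code. This is a minimal sketch under assumed values: the threshold, the weight factors and the function names are illustrative, not taken from the patent.

```python
# Illustrative preset values; the patent presets these per scene.
QUALITY_THRESHOLD = 0.8   # image quality estimate threshold (assumed)

def quality_estimate(parallax_merit, scene_merit, splice_merit,
                     weights=(0.5, 0.3, 0.2)):
    """Weighted combination of the three figures of merit (weights assumed)."""
    return (weights[0] * parallax_merit
            + weights[1] * scene_merit
            + weights[2] * splice_merit)

def triage(images):
    """Split incoming images into a play queue and a to-be-processed list,
    mirroring the threshold-marking step of the method."""
    play_queue, to_process = [], []
    for img in images:
        q = quality_estimate(img["parallax"], img["scene"], img["splice"])
        (play_queue if q >= QUALITY_THRESHOLD else to_process).append(img["id"])
    return play_queue, to_process
```

Images landing in `to_process` would then feed the message-integration and strategy-matching steps, while the recheck step reuses the same `quality_estimate` against the minimum allowable value.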
Specifically, the parallax data are analyzed as follows: performing target recognition on the image, delimiting a foreground region and a background region, generating pixel coordinate masks for the two region types, and determining the feature point selection range; randomly selecting a plurality of feature points in the foreground region, reading the horizontal pixel coordinate of each point in the L image and the R image, and calculating the pixel offset Δx of each feature point; using the preset outdoor large-screen parallax conversion coefficient, converting the depth value corresponding to each feature point into a physical parallax value, and taking the arithmetic mean of the feature point parallax values as the foreground parallax value; presetting a foreground parallax standard value and taking the absolute difference between the foreground parallax value and the standard value to obtain the foreground parallax offset value
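The foreground-parallax statistics described above can be sketched as follows. The conversion coefficient `K` stands in for the preset outdoor large-screen parallax conversion coefficient, and its value here, like the standard value, is an illustrative assumption.

```python
# Illustrative conversion coefficient: physical parallax per pixel of offset.
K = 0.05  # assumed, stands in for the preset large-screen coefficient

def pixel_offsets(points_l, points_r):
    """Horizontal pixel offset Δx of each feature point between the
    L image and the R image (points given as (x, y) pairs)."""
    return [xl - xr for (xl, _), (xr, _) in zip(points_l, points_r)]

def parallax_stats(points_l, points_r, standard=1.0):
    """Foreground parallax value (mean), offset from the preset standard,
    and the variance used for parallax uniformity, per claim 2."""
    offsets = pixel_offsets(points_l, points_r)
    parallax = [K * dx for dx in offsets]                 # physical parallax values
    mean = sum(parallax) / len(parallax)                  # foreground parallax value
    deviation = abs(mean - standard)                      # foreground parallax offset value
    var = sum((d - mean) ** 2 for d in parallax) / len(parallax)  # σ² over feature points
    return mean, deviation, var
```

The same routine applies unchanged to background feature points; only the preset standard value differs between the two regions.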