
CN-121999050-A - Visual positioning measurement method and system based on micro-coding target

CN121999050A

Abstract

The invention provides a visual positioning measurement method and system based on a micro-coded target. Target images are captured by a module camera fitted with a micro-lens. First, the global linear structural characteristics of an orthogonal lattice in the frequency and phase domains are exploited: the distortion center is estimated automatically by minimizing the phase nonlinearity residual, and on that basis low-order radial and tangential distortion parameters are solved by closed-form weighted least squares, achieving single-image lens distortion correction. Next, target images are shot at several different local positions; each image is decoded to recover the global positions of its feature points on the lithography plate, enabling camera self-calibration and yielding the camera intrinsic parameters. Finally, a two-dimensional Fourier transform is applied to the captured target images, the phase information of multiple fields is unwrapped, and the decoded information of the images is fused to output the absolute position of the camera.

Inventors

  • Zhu Mingzhu
  • Rao Jianglei
  • HE BINGWEI

Assignees

  • 福州大学

Dates

Publication Date
2026-05-08
Application Date
2026-02-11

Claims (10)

  1. A visual positioning measurement method based on a micro-coded target, characterized by comprising the following steps: Step S1, micro-coded target design: generating a forward-and-reverse-unique limited sliding window coding sequence for encoding, and establishing a two-dimensional orthogonal lattice grid to form a sparse lattice pattern carrying two-dimensional absolute position information; Step S2, single-image lens distortion correction: using the full-field linear phase characteristic of the orthogonal lattice under ideal imaging conditions, analyzing the phase nonlinearity introduced by lens distortion, extracting phase information by frequency-domain analysis, and computing and correcting the distortion field; Step S3, decoding and global self-calibration: combining single-image distortion correction with multi-image joint calibration to achieve on-site high-precision calibration of the intrinsic parameters of the microscope camera; Step S4, high-precision absolute positioning: substituting the intrinsic parameters back into the measurement model, and fusing the integer-period position obtained by decoding with the sub-period displacement obtained by phase computation to obtain the high-precision absolute position of the target in the lithography plate coordinate system.
  2. The visual positioning measurement method based on a micro-coded target according to claim 1, characterized in that step S1 comprises: Step S11, generating a forward-and-reverse-unique limited sliding window coding sequence S for encoding, where the expression of the sequence S is as follows [formula not reproduced in source]; the sequence S satisfies constraints including a local window constraint and a forward-and-reverse uniqueness constraint; the local window constraint defines the maximum effective window size for decoding, which must be strictly less than the diagonal length of the camera's effective field of view to ensure complete coverage in the presence of rotation [bound not reproduced], where d is the effective field-of-view size of the camera and the remaining symbol (not reproduced) is a safety margin coefficient; the forward-and-reverse uniqueness constraint requires that any subsequence of length l in the sequence S occurs exactly once, without repetition, in both the forward and the reverse reading direction of the whole sequence [formal statement not reproduced].
  3. The method according to claim 2, characterized in that step S1 further comprises: Step S12, mapping the generated numerical sequence S onto mark points that are either present or missing in the lattice, establishing a two-dimensional orthogonal lattice grid with grid period P; the mapping rule defines each value s_i in the sequence as the number of consecutively present mark points between two adjacent missing points, where a missing mark point is a grid node that is not physically drawn and serves as a coding separator, and a present mark point is a physically drawn circular mark that serves as a coding carrier; Step S13, generating two sequences S_x and S_y, each satisfying the local window constraint and the forward-and-reverse uniqueness constraint, for encoding the horizontal X direction and the vertical Y direction respectively; further, in the two-dimensional orthogonal lattice, if a node (x, y) is defined as a missing mark point in both the X and Y directions, it appears as a missing point on the final target; through this orthogonal superposition, a sparse lattice pattern carrying two-dimensional absolute position information is formed.
  4. The visual positioning measurement method based on a micro-coded target according to claim 3, characterized in that step S2 comprises: Step S21, describing the imaging error of the microscope lens by a first-order radial distortion model, r_d = r_u(1 + k·r_u²), where r_d is the observed distance from a point to the distortion center in the distorted image, r_u is the point-to-center distance in the undistorted ideal image, and k is the radial distortion coefficient; this gives the coordinate relations x_d = x_u(1 + k·r_u²) and y_d = y_u(1 + k·r_u²), so the displacement errors Δx = x_d − x_u = k·x_u·r_u² and Δy = y_d − y_u = k·y_u·r_u² are directly related to the distortion coefficient k; Step S22, performing a two-dimensional Fourier transform on a local target image I(r) captured by the camera to obtain its spectrum H(f), which consists of a central DC component and four main first-order lobes corresponding to the grating fundamental frequencies in the X and Y directions; selecting the lobe H_x at the X-direction frequency and the lobe H_y at the Y-direction frequency, filtering each from the spectrum with a Gaussian window function and shifting it to the origin, applying an inverse Fourier transform (IFFT) to obtain a complex field, and computing the phase distributions φ_x and φ_y; since the phase of an ideal undistorted orthogonal lattice image is a plane varying linearly with position, the deviation between the actual phase and the ideal linear phase represents the distortion-induced displacement, u(r) = (P/2π)·Δφ(r), where P is the physical period of the grating and Δφ is the phase deviation after unwrapping; Step S23, after obtaining the full-field displacement data, constructing an overdetermined equation system from the first-order radial distortion model and finding, by nonlinear least squares, the optimal distortion coefficient k and distortion center coordinates that minimize the sum of squared residuals between the model-predicted displacement and the GPA-measured displacement; finally, establishing the mapping from distorted image coordinates to ideal coordinates using the distortion coefficient k, and remapping the pixels of the original distorted image onto a new grid by bicubic interpolation or a similar algorithm, thereby obtaining a corrected undistorted image.
  5. The method according to claim 4, characterized in that step S3 comprises: Step S31, moving the camera and the target relative to each other, shooting M images at different positions over different regions of the target, and applying the single-image lens distortion correction method to each raw image to generate an undistorted image; Step S32, binarizing each undistorted image, computing the image moments of each connected region to obtain the precise centroid coordinates of each mark and form a point set; then, analogously to the Fourier-transform processing used in distortion correction, analyzing the energy peaks of the spectrogram and computing the half-period pixel distance t and the two orthogonal direction angles of the lattice; Step S33, converting the discrete point set into a graph structure with adjacency using an improved K-nearest-neighbor strategy: for each pair of neighboring points, screening and establishing a connecting edge and assigning it distance and direction attributes; the distance attribute marks each edge according to its length, with longer edges (those spanning a missing point) marked separately and their physical midpoint M determined; the direction attribute computes the angle of the edge vector from the two orthogonal direction angles obtained above and classifies each edge as direction 1 or direction 2; Step S34, using dot-product projection to orient the edges into unidirectional chains along the positive directions of the principal axes, traversing the graph structure starting from nodes with in-degree 0 to generate sequences of length greater than 50 and eliminating all-1 sequences, thereby converting the undirected graph into ordered sequences.
  6. The method according to claim 5, characterized in that step S3 further comprises: Step S35, traversing the sequences of direction 1 and direction 2 respectively, extracting all edges marked 2, computing the physical midpoints of these edges in direction 1 and direction 2, and matching them with a KDTree; if a midpoint from direction 1 and a midpoint from direction 2 coincide within tolerance, the point is judged to be a virtual intersection; Step S36, for each successfully matched intersection, recording the starting points of the edges marked 2 in direction 1 and direction 2, extending an appropriate number of digits along the direction-1 and direction-2 sequences around the matched edges, and obtaining, in the two orthogonal directions, subsequences containing the distance information around the missing point; reconstructing, centered on the missing point, the length-l subsequences of the corresponding sequence S as observed in the field of view; and from the matching results of the two subsequences against the originally designed forward-and-reverse-unique limited sliding window coding sequences, computing the specific index positions of the missing point on the target.
  7. The method according to claim 6, characterized in that step S3 further comprises: Step S37, using the obtained point correspondences to construct pinhole camera model equations, s·[u, v, 1]ᵀ = K·[R | t]·[X, Y, Z, 1]ᵀ, where K is the camera intrinsic matrix to be solved, K = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]]; performing nonlinear optimization with the Levenberg-Marquardt algorithm to minimize the reprojection error, thereby computing the precise focal lengths (f_x, f_y) and principal point coordinates (c_x, c_y).
  8. The visual positioning measurement method based on a micro-coded target according to claim 7, characterized in that step S4 comprises: Step S41, extracting the wrapped phase values at the image center point with GPA; using a band-pass filtering and interpolation strategy, the band-pass filter that retains the fundamental frequency automatically smooths the phase over the missing regions, recovering a continuous linear phase field, and the sub-pixel displacement is computed from the analytic character of the optical signal [formula not reproduced in source]; finally, logical alignment and fusion yield the final absolute coordinate formula [not reproduced], in which a phase alignment correction function appears; the calibrated camera intrinsic matrix K and focal length information are then used to map the image-plane displacement strictly into displacement in actual physical space, and the final measurement result is output.
  9. A visual positioning measurement system based on a micro-coded target, comprising an electronic device, wherein the electronic device comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, and wherein the processor, when executing the computer program, implements the visual positioning measurement method based on a micro-coded target according to any one of claims 1 to 8.
  10. A visual positioning measurement system based on a micro-coded target, comprising a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the visual positioning measurement method based on a micro-coded target according to any one of claims 1 to 8.
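The forward-and-reverse-unique limited sliding window property required of the coding sequence in claim 2 can be checked programmatically. A minimal sketch, assuming plain Python lists and toy sequences; the function name and examples are illustrative, not the patent's implementation:

```python
def is_window_unique(seq, l):
    """Check the forward-and-reverse uniqueness constraint of claim 2:
    every length-l subsequence of seq must occur exactly once when the
    sequence is read forward, and must not occur at all when it is read
    in reverse (so a window and its mirror never collide)."""
    windows = [tuple(seq[i:i + l]) for i in range(len(seq) - l + 1)]
    rev = seq[::-1]
    rev_windows = {tuple(rev[i:i + l]) for i in range(len(rev) - l + 1)}
    # forward uniqueness: no duplicate windows in the forward direction
    if len(windows) != len(set(windows)):
        return False
    # reverse uniqueness: no forward window may also appear reading backwards
    # (this also rejects palindromic windows, which occur in both directions)
    return not any(w in rev_windows for w in windows)
```

A sequence like `[0, 0, 1, 1, 2, 2]` passes for `l = 3`, while `[0, 1, 0, 2]` fails because the palindromic window `(0, 1, 0)` reads the same in both directions.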
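The first-order radial model of claim 4, r_d = r_u(1 + k·r_u²), has no convenient closed-form inverse, but distorted coordinates can be mapped back to ideal ones numerically. A sketch assuming fixed-point iteration; the patent itself estimates k and the distortion center by least squares on GPA data, so this helper only illustrates the model itself:

```python
import numpy as np

def undistort_points(xd, yd, k, cx, cy):
    """Invert r_d = r_u * (1 + k * r_u**2) by fixed-point iteration,
    mapping distorted image coordinates (xd, yd) to ideal coordinates.
    (cx, cy) is the distortion center; k is the radial coefficient."""
    xd = np.asarray(xd, float) - cx
    yd = np.asarray(yd, float) - cy
    xu, yu = xd.copy(), yd.copy()
    for _ in range(20):  # converges quickly when k * r_u**2 is small
        r2 = xu ** 2 + yu ** 2
        xu = xd / (1 + k * r2)
        yu = yd / (1 + k * r2)
    return xu + cx, yu + cy
```

For small k this iteration converges to machine precision in a handful of steps; a full correction would apply the same mapping to a pixel grid and resample with bicubic interpolation, as step S23 describes.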
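The lobe-filtering phase extraction of step S22 can be sketched with NumPy's FFT routines. The function name, the Gaussian window width, and the explicit carrier subtraction are illustrative assumptions consistent with standard Geometric Phase Analysis, not the patent's code:

```python
import numpy as np

def gpa_phase(img, fx, fy, sigma=0.05):
    """Extract the geometric phase deviation of one lattice frequency.

    img     : 2-D grayscale image of a (near-)periodic lattice
    fx, fy  : normalized carrier frequency of the selected first-order lobe
    sigma   : width of the Gaussian window isolating that lobe
    """
    n0, n1 = img.shape
    H = np.fft.fftshift(np.fft.fft2(img))
    u = np.fft.fftshift(np.fft.fftfreq(n1))  # horizontal frequency axis
    v = np.fft.fftshift(np.fft.fftfreq(n0))  # vertical frequency axis
    U, V = np.meshgrid(u, v)
    # Gaussian window centred on the chosen lobe, filtering it from the spectrum
    W = np.exp(-((U - fx) ** 2 + (V - fy) ** 2) / (2 * sigma ** 2))
    h = np.fft.ifft2(np.fft.ifftshift(H * W))  # complex field of that lobe
    # subtract the ideal linear carrier: what remains is the phase deviation,
    # which for an ideal undistorted lattice would be zero everywhere
    x, y = np.arange(n1), np.arange(n0)
    X, Y = np.meshgrid(x, y)
    carrier = np.exp(2j * np.pi * (fx * X + fy * Y))
    return np.angle(h * np.conj(carrier))
```

On a synthetic cosine grating the returned field is flat at zero; shifting the grating by one pixel shifts the field by 2π·fx, which is the linear phase-to-displacement relation that step S23 fits against the distortion model.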
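The fusion in step S4 of the decoded integer period with the sub-period phase displacement reduces to simple arithmetic. A hedged sketch; the function name and the choice of micrometers as the unit are assumptions:

```python
import math

def absolute_position(n_period, wrapped_phase, period_um):
    """Fuse the integer lattice period recovered by decoding (n_period)
    with the sub-period displacement given by the wrapped phase, for a
    grating of physical period period_um (claim 1, step S4)."""
    # fractional position within one period, in [0, 1)
    frac = (wrapped_phase % (2 * math.pi)) / (2 * math.pi)
    return (n_period + frac) * period_um
```

For example, a decoded period index of 12 with a wrapped phase of π on a 10 µm grating gives (12 + 0.5) × 10 = 125 µm; the decoding supplies the coarse absolute part and the phase supplies the fine sub-period part.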

Description

Visual positioning measurement method and system based on micro-coded target

Technical Field

The invention provides a visual positioning measurement method and system based on a micro-coded target, and relates to the technical field of precision measurement.

Background

With the rapid development of semiconductor fabrication, microelectromechanical systems (MEMS), and precision machining, demand for trans-scale (large-stroke, high-resolution) planar position measurement keeps growing. These applications often require absolute positioning over a centimeter range with nanometer-scale accuracy. Currently, high-accuracy displacement measurement relies mainly on the laser interferometer. Although a laser interferometer can provide nanometer-scale measurement accuracy, the system is bulky and expensive, and generally performs only single-degree-of-freedom (1-DoF) linear measurement, so it is difficult to apply directly to multi-degree-of-freedom planar motion monitoring. In contrast, microscopic-vision-based measurement is attracting attention because it is non-contact, compact, and capable of measuring displacement in multiple degrees of freedom simultaneously. In the field of microscopic vision measurement, existing technical schemes fall mainly into the following categories, each with certain limitations:

1. Methods based on Digital Image Correlation (DIC) and speckle patterns calculate displacement by tracking the deformation of random speckle patterns. However, DIC methods are computationally intensive and extremely sensitive to imaging quality. More importantly, when a high-magnification microscope lens is used, the optical distortion of the lens (especially radial distortion) increases significantly. In existing microscopic measurement methods, lens distortion introduces systematic errors, so the measured displacement fields are unevenly distributed and measurement accuracy suffers severely. While distortion can be corrected by rigid-body translation experiments, this typically requires a high-precision displacement stage, and the physical translation itself may introduce new mechanical errors.

2. Methods based on periodic gratings and Geometric Phase Analysis (GPA) use the Fourier transform and GPA to extract the phase information of a periodic grating for position measurement. Because the phase change is linearly related to displacement and the grating has a fixed spatial frequency, sub-pixel measurement accuracy is achievable. However, conventional periodic gratings suffer from "period ambiguity": the phase determines only the relative position within one period and cannot distinguish different periods. As a result, the method can measure only short-stroke displacements or must rely on continuous tracking; once the camera loses the field of view or power is cycled, the absolute position cannot be recovered.

3. To solve the range ambiguity of periodic structures, the prior art adopts visual positioning based on artificial marks. Most commonly, two-dimensional codes (QR codes) or fiducial markers (such as ArUco) are used to obtain global absolute coordinates. However, such marks rely mainly on edge or corner extraction, so their positioning accuracy is limited by image resolution and imaging noise; it is generally difficult to surpass the sub-pixel level, which cannot meet the nanometer-scale measurement requirements of the micro-nano manufacturing field. To combine high precision with absolute positioning, researchers have attempted to embed absolute coding information into a high-precision periodic measurement target, for example by modulating the grating stripes with a pseudo-random binary sequence (e.g., an LFSR or M-sequence). However, existing coding schemes often represent binary "0" and "1" by changing the line width (e.g., Manchester code) or the geometric duty cycle. This physical damage to the periodic structure confuses the frequency-domain characteristics of the image (spectral leakage), introducing nonlinear phase errors into the fine positioning based on the Fourier transform or Geometric Phase Analysis (GPA) and seriously degrading measurement accuracy.

4. In practical industrial applications, ordinary USB micro-module cameras are often used to reduce cost. The optical quality of such camera lenses is far below that of telecentric lenses, with severe nonlinear distortion. Traditional camera calibration methods (such as Zhang Zhengyou's method) require shooting checkerboards at different angles, making the workflow cumbersome and internal-parameter drift difficult to correct