CN-115239882-B - Crop three-dimensional reconstruction method based on weak light image enhancement
Abstract
The invention discloses a crop three-dimensional reconstruction method based on weak-light image enhancement, comprising weak-light crop image enhancement and crop three-dimensional reconstruction. The weak-light crop image enhancement comprises collecting weak-light crop images and enhancing them; its main purpose is to raise the brightness of crop images collected in a weak-light environment and thereby improve image quality. The crop three-dimensional reconstruction comprises feature detection and matching based on the Scale-Invariant Feature Transform (SIFT) and spatial point-cloud reconstruction based on Structure from Motion (SfM); it obtains key points as feature points together with their description vectors, builds a set of matched point pairs by Euclidean-distance comparison, and combines the camera intrinsic and extrinsic parameters to obtain the spatial point cloud and camera pose, enabling agricultural equipment to perceive crops accurately in weak-light working environments.
Inventors
- HUANG YOURUI
- LIU YUWEN
- HAN TAO
- XU SHANYONG
- FU JIAHAO
Assignees
- Institute of Environment-friendly Materials and Occupational Health (Wuhu), Anhui University of Science and Technology
- Anhui University of Science and Technology
Dates
- Publication Date
- 20260512
- Application Date
- 20220720
Claims (8)
- 1. A crop three-dimensional reconstruction method based on weak-light image enhancement, characterized by comprising weak-light crop image enhancement and crop three-dimensional reconstruction, wherein the weak-light crop image enhancement comprises collecting weak-light crop images and enhancing them, its main purpose being to raise the brightness of crop images collected in a weak-light environment and thereby improve image quality; the weak-light crop image enhancement comprises the following steps: a weak-light crop image enhancement network is formed by three sub-networks, namely layer decomposition, reflectance restoration, and illumination adjustment, which together realize the enhancement of the weak-light crop image (an illustrative sketch of this three-sub-network structure follows the claims); two crop images taken under different exposure conditions are used as the network input, and the layer decomposition sub-network decomposes the input weak-light crop image into an illumination component and a reflectance component, where the illumination component is responsible for brightness adjustment, the reflectance component is used to remove degradation, and weights are shared between the two branches; the decomposed illumination component is flexibly adjusted in intensity by an illumination adjustment sub-network composed of several convolution layers; the decomposed reflectance component is taken as the input of the reflectance restoration sub-network and denoised by an encoder-decoder network with residual connections; the reflectance image produced by the reflectance restoration sub-network and the illumination image produced by the illumination adjustment sub-network are combined into the final enhanced weak-light crop image, with the illumination condition freely adjustable; the structure-from-motion spatial point-cloud reconstruction solves the three-dimensional information of the feature points from the matched feature point pairs and the camera parameters and restores the matched feature points into three-dimensional space, and comprises the following steps: extracting the exchangeable image file (EXIF) information from the input weak-light crop images; obtaining a precise point correspondence between two crop images through feature detection and matching with the scale-invariant feature transform algorithm; performing singular value decomposition on the essential matrix; and sequentially performing bundle adjustment optimization on the camera poses and the crop point cloud.
- 2. The crop three-dimensional reconstruction method based on weak-light image enhancement according to claim 1, wherein the weak-light crop images are acquired as follows: during outdoor weak-light image acquisition, a camera shoots with the crop to be reconstructed as the center, one image is acquired for every 10 degrees of rotation around the scene of the crop to be reconstructed, and a large number of weak-light crop images containing information at different heights are acquired in order around the crop, so that adjacent images retain more of the same scene content.
- 3. The crop three-dimensional reconstruction method based on weak-light image enhancement according to claim 1, wherein the feature detection and matching of the scale-invariant feature transform algorithm proceed as follows (the scale-space bookkeeping is illustrated in a sketch following the claims): a Gaussian difference pyramid is built for the input enhanced weak-light crop image to obtain a representation of the image at multiple scales, the scale space is constructed, and candidate points are searched, the process being as follows: the Gaussian pyramid consists of several groups (octaves) of image sequences, and each group consists of the different-scale images $L(x,y,\sigma)$ obtained by convolving the base image $I(x,y)$ of the group with Gaussian functions $G(x,y,\sigma)$ of varying scale factors, thereby constructing a multi-scale space; the convolution formula is $L(x,y,\sigma)=G(x,y,\sigma)*I(x,y)$, and the Gaussian function containing the scale factor is $G(x,y,\sigma)=\frac{1}{2\pi\sigma^{2}}e^{-(x^{2}+y^{2})/(2\sigma^{2})}$; the number of groups $O$ of the image Gaussian pyramid is determined by the row height $M$ and the column width $N$ of the image, with $O=\lfloor\log_{2}\min(M,N)\rfloor-3$; the number of layers $S$ of each group of the Gaussian pyramid is related to the number $n$ of images from which features are to be extracted, with $S=n+3$; the Gaussian blur coefficient of the image in group $o$ and layer $s$ is $\sigma(o,s)=\sigma_{0}\cdot 2^{\,o+s/n}$, where $o$ is the index of the group in the Gaussian pyramid, $s$ is the index of the scale image layer within a group, and $\sigma_{0}$ is the initial Gaussian blur, set to 1.6 by default in the SIFT algorithm; considering that the actual image already carries a blur of $\sigma_{\mathrm{init}}=0.5$, the actual initial Gaussian blur coefficient is $\sigma_{0}'=\sqrt{1.6^{2}-0.5^{2}}$; the Gaussian difference pyramid is obtained by subtracting adjacent layers within each group of the created Gaussian pyramid, on the premise that the extreme points of each layer of the Gaussian difference pyramid are the feature points to be extracted, and its calculation formula is $D(x,y,\sigma)=\big(G(x,y,k\sigma)-G(x,y,\sigma)\big)*I(x,y)=L(x,y,k\sigma)-L(x,y,\sigma)$, where $k\sigma$ and $\sigma$ denote the scale factors of adjacent layers; in the detection of extreme points in the scale space, the key points are the local extreme points of the Gaussian difference pyramid space, and detection proceeds by comparing every two adjacent layers of images within the same group of the Gaussian difference pyramid: each intermediate detection point is compared with all 26 neighbouring points, namely 8 neighbours at the same scale and 18 points at the adjacent scales above and below, ensuring that extreme points are detected in both the scale space and the two-dimensional image space; when the value $D(x,y,k\sigma)$ of a detection point is the maximum or minimum among the values of its 26 neighbours, the point is judged to be a key point of the image at that scale; to obtain a more accurate result, the extreme point is refined by curve fitting of the scale-space function $D(x,y,k\sigma)$, and, taking into account the edge effects present within the same image or between different images, unstable key points are deleted so that stable key points are obtained, whose coordinates $(x,y)$ and scale $\sigma$ are recorded as the feature information of the point;
the feature region of the scale-invariant feature transform algorithm is then determined by computing the stable direction of the local structure with the image gradient method: three values represent the position, scale, and direction information, where the centre represents the feature point position, the radius represents the scale of the key point, and the arrow represents the main direction; the gradient magnitude $m(x,y)$ and gradient direction $\theta(x,y)$ are calculated as $m(x,y)=\sqrt{\big(L(x+1,y)-L(x-1,y)\big)^{2}+\big(L(x,y+1)-L(x,y-1)\big)^{2}}$ and $\theta(x,y)=\arctan\frac{L(x,y+1)-L(x,y-1)}{L(x+1,y)-L(x-1,y)}$; the gradient directions and magnitudes of the pixels in the neighbourhood of a key point are counted with a histogram whose horizontal axis uses eight basic directions 45 degrees apart as the angles representing the gradient direction and whose vertical axis accumulates the gradient magnitudes of those directions; the peak of the histogram is the main direction of the key point, and if the peak of another direction is not less than 80% of the main-direction peak, it is set as an auxiliary direction of the key point to increase matching stability; the key point so determined is a feature point of the scale-invariant feature transform algorithm and possesses rotation invariance; finally, a corresponding feature point descriptor is generated from the position, scale, and direction information of each feature point, representing the Gaussian-image gradient statistics of the feature point neighbourhood so that they do not change under various transformations.
- 4. The crop three-dimensional reconstruction method based on weak-light image enhancement according to claim 3, wherein the feature point descriptor is generated as follows (see the matching sketch following the claims): the main rotation direction is corrected, i.e. to guarantee rotation invariance of the feature vector, the coordinate axes of the neighbourhood centred on the feature point are rotated by the main-direction angle $\theta$ according to the main direction of the feature point, so that the coordinate axis coincides with the main direction; the descriptor is then generated to obtain a 128-dimensional feature vector: after rotation, the neighbourhood pixel region centred on the feature point main direction is divided into $4\times4$ sub-regions, the 16 sub-regions forming 16 seed points that describe the feature point; the pixels around the feature point are divided into blocks, the gradients are decomposed into 8 directions at 45-degree intervals, the gradient magnitude and direction of every pixel in the scale space of the feature point neighbourhood are calculated and weighted with a Gaussian window, the gradient histograms of the 8 directions are computed within each block, and the accumulated value of every gradient direction is calculated to produce one seed point; since each seed point carries 8 direction-vector values, the feature point yields a 128-dimensional SIFT feature vector; the 128-dimensional feature vector is normalized in length to reduce illumination interference so that the descriptor has illumination invariance: assuming the feature vector is $W=(w_{1},w_{2},\ldots,w_{128})$ and the normalized feature vector is $L=(l_{1},l_{2},\ldots,l_{128})$, the normalization formula is $l_{j}=w_{j}\big/\sqrt{\sum_{i=1}^{128}w_{i}^{2}},\ j=1,2,\ldots,128$; after normalization a threshold of 0.2 is set, and the final SIFT feature description vector is obtained after screening; the feature point description vectors are then matched, and the mismatching caused by object occlusion and foreground-background blur is resolved by building the matching point pair set through Euclidean-distance calculation: after the feature description vectors of the scale-invariant feature transform algorithm are generated for the two images, the Euclidean distance between feature description vectors is used as the similarity measure of feature points in the two images; for a feature description vector $v$ in one image, the two feature description vectors $v_{1}$ and $v_{2}$ of the other image with the closest Euclidean distances $d_{1}$ and $d_{2}$ are found, and if the ratio of the nearest distance to the second-nearest distance is smaller than the set ratio threshold $t$, i.e. $d_{1}/d_{2}<t$, the matching point pair formed by the feature description vector $v$ and the feature description vector $v_{1}$ is accepted; for two images with arbitrary scale, rotation and brightness changes, a smaller ratio threshold yields fewer but more stable matching points of the scale-invariant feature transform algorithm; the ratio threshold is set between 0.4 and 0.6, generally 0.5 as the best setting, 0.4 when high matching accuracy is required, and 0.6 when a larger number of matching points is required.
- 5. The crop three-dimensional reconstruction method based on weak-light image enhancement according to claim 1, wherein the exchangeable image file (EXIF) information is extracted from the input weak-light crop images; each piece of exchangeable image file information contains the attribute information and shooting data of the image, from which the focal length and principal point of the camera are obtained and the camera intrinsic matrix is calculated (a sketch of this step follows the claims).
- 6. The crop three-dimensional reconstruction method based on weak-light image enhancement according to claim 1, wherein the precise point correspondence between two crop images is obtained by feature detection and matching with the scale-invariant feature transform algorithm, the fundamental matrix between the two images is calculated from the matched feature point pairs and the epipolar constraint between the two images, and the essential matrix is calculated from it using the obtained camera intrinsic parameters; the given source point cloud P and target point cloud Q are expressed as $P=\{p_{1},p_{2},\ldots,p_{n}\}$ and $Q=\{q_{1},q_{2},\ldots,q_{n}\}$; by finding the spatially optimal transformation composed of a rotation matrix R and a translation vector t, the distance between the point sets P and Q is minimized, i.e. R and t are solved by minimizing $E(R,t)=\frac{1}{2}\sum_{i=1}^{n}\lVert q_{i}-(Rp_{i}+t)\rVert^{2}$.
- 7. The crop three-dimensional reconstruction method based on weak-light image enhancement according to claim 1, wherein the extrinsic parameter matrix between cameras is obtained by singular value decomposition of the essential matrix, and the process comprises: subtracting the respective centroids $\bar{p}=\frac{1}{n}\sum_{i=1}^{n}p_{i}$ and $\bar{q}=\frac{1}{n}\sum_{i=1}^{n}q_{i}$ from the points of the point sets P and Q, the coordinates of the two groups of points after centroid removal being $p_{i}'=p_{i}-\bar{p}$ and $q_{i}'=q_{i}-\bar{q}$; solving the covariance matrix of the two centred point sets, $W=\sum_{i=1}^{n}q_{i}'\,p_{i}'^{\mathsf T}$; calculating the rotation matrix R and the translation vector t by performing the singular value decomposition $W=U\Sigma V^{\mathsf T}$ on the covariance matrix W, which has a unique solution when W is of full rank: $R=UV^{\mathsf T}$ and $t=\bar{q}-R\bar{p}$; the solved rotation matrix R and translation vector t constitute the camera pose (see the point-set alignment sketch following the claims); the three-dimensional coordinates M of the crop feature points in the world coordinate system are then calculated by triangulation, the position of a point being determined from the angles at which the same point is observed from the two viewpoints.
- 8. The crop three-dimensional reconstruction method based on weak-light image enhancement according to claim 1, wherein bundle adjustment optimization is performed in turn on the camera poses and the crop point cloud to reduce the difference between the projection of the real three-dimensional space points onto the image plane and their re-projection; viewed from each camera pose, a crop point can be regarded as a light beam, and (R, t, M) are optimized so that the total cost of all beams is minimal; for n frames of images each containing K feature points, the objective function is $\min_{R,t,M}\sum_{i=1}^{K}\sum_{j=1}^{n}w_{ij}\,d\big(Q(R_{j},t_{j},M_{i}),\,m_{ij}\big)^{2}$, where $m_{ij}$ represents the coordinates of the feature point corresponding to the i-th three-dimensional point in the j-th image; $w_{ij}$ indicates whether the i-th three-dimensional point is projected on the j-th image, with $w_{ij}=1$ if so and $w_{ij}=0$ otherwise; Q is the re-projection function mapping three-dimensional points into the image, and d is the Euclidean distance measure; after bundle adjustment optimization, the accurate camera poses and the three-dimensional coordinates of the crop point cloud are obtained, and finally the crop point-cloud model is generated (a residual-function sketch follows the claims).
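The three-sub-network enhancement structure recited in claim 1 resembles a Retinex-style decomposition, restoration, and adjustment pipeline. The following is a minimal PyTorch sketch of that structure only; the module names (DecomNet, RestoreNet, AdjustNet), layer counts, and channel widths are illustrative assumptions, not the patent's exact architecture.

```python
import torch
import torch.nn as nn

class DecomNet(nn.Module):
    """Layer decomposition: image -> reflectance (3 ch) + illumination (1 ch)."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 4, 3, padding=1))          # 3 reflectance + 1 illumination channels
    def forward(self, x):
        out = torch.sigmoid(self.body(x))
        return out[:, :3], out[:, 3:]                # reflectance R, illumination L

class RestoreNet(nn.Module):
    """Reflectance restoration: tiny residual stand-in for the encoder-decoder."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1))
    def forward(self, r):
        return torch.clamp(r + self.body(r), 0.0, 1.0)   # residual connection

class AdjustNet(nn.Module):
    """Illumination adjustment: rescale the illumination map toward a target ratio."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1))
    def forward(self, illum, ratio):
        ratio_map = torch.full_like(illum, ratio)
        return torch.sigmoid(self.body(torch.cat([illum, ratio_map], dim=1)))

def enhance(decom, restore, adjust, low_img, ratio=3.0):
    """low_img: (B, 3, H, W) weak-light crop image scaled to [0, 1]."""
    refl, illum = decom(low_img)        # layer decomposition sub-network
    refl = restore(refl)                # reflectance restoration (denoising)
    illum = adjust(illum, ratio)        # flexible illumination adjustment
    return refl * illum                 # recombined enhanced crop image
```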
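For the scale-space bookkeeping in claim 3 (octave count O, layer count S = n + 3, the blur schedule σ(o, s), and the difference-of-Gaussian layers), the sketch below reproduces the standard SIFT conventions with NumPy and OpenCV; the function names and the downsampling details left out here are my own simplifications.

```python
import math
import cv2
import numpy as np

def sift_scale_parameters(image, n=3, sigma0=1.6, sigma_init=0.5):
    """Octave count O, layers per octave S, and the per-layer blur schedule."""
    M, N = image.shape[:2]
    O = int(math.floor(math.log2(min(M, N)))) - 3          # number of octave groups
    S = n + 3                                               # layers per octave
    # the captured image already carries sigma_init of blur, so the first
    # level only needs the remaining amount: sqrt(1.6^2 - 0.5^2)
    sigma_base = math.sqrt(max(sigma0 ** 2 - sigma_init ** 2, 1e-6))
    sigmas = [[sigma0 * 2.0 ** (o + s / n) for s in range(S)] for o in range(O)]
    return O, S, sigma_base, sigmas

def dog_octave(gray, octave_sigmas):
    """Blur one octave at each sigma and subtract adjacent layers (DoG)."""
    gray = gray.astype(np.float32)
    blurred = [cv2.GaussianBlur(gray, (0, 0), s) for s in octave_sigmas]
    return [blurred[i + 1] - blurred[i] for i in range(len(blurred) - 1)]
```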
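The descriptor matching of claim 4 corresponds to the usual ratio test over Euclidean distances between 128-dimensional SIFT vectors. A short OpenCV sketch follows; the image file names are placeholders and the 0.5 ratio is the "generally best" value from the claim.

```python
import cv2

# placeholder file names for two adjacent views of the crop
img1 = cv2.imread("crop_view_000.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("crop_view_010.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)    # 128-D descriptor per keypoint
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)             # Euclidean distance between descriptors
knn = matcher.knnMatch(des1, des2, k=2)          # nearest and second-nearest neighbour

ratio = 0.5                                      # claim 4: 0.4-0.6, typically 0.5
good = [m for m, n in knn if m.distance < ratio * n.distance]

pts1 = [kp1[m.queryIdx].pt for m in good]        # matched pixel coordinates, image 1
pts2 = [kp2[m.trainIdx].pt for m in good]        # matched pixel coordinates, image 2
```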
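Claim 5 derives the intrinsic matrix from EXIF data. Below is a small sketch using Pillow (version 8 or later assumed for `getexif().get_ifd`); the full-frame sensor width, the 35 mm fallback, and the image-centre principal point are assumptions made for illustration, not values from the patent.

```python
import numpy as np
from PIL import Image

FOCAL_LENGTH_TAG = 0x920A   # EXIF "FocalLength", in millimetres
EXIF_IFD_TAG = 0x8769       # pointer to the EXIF sub-IFD

def intrinsics_from_exif(path, sensor_width_mm=36.0):
    """Approximate intrinsic matrix K from EXIF focal length and image size."""
    img = Image.open(path)
    w, h = img.size
    exif_ifd = img.getexif().get_ifd(EXIF_IFD_TAG)
    focal_mm = float(exif_ifd.get(FOCAL_LENGTH_TAG, 35.0))  # fall back to 35 mm
    fx = focal_mm / sensor_width_mm * w      # focal length in pixels
    cx, cy = w / 2.0, h / 2.0                # principal point at the image centre
    return np.array([[fx, 0.0, cx],
                     [0.0, fx, cy],
                     [0.0, 0.0, 1.0]])
```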
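The centroid-removal and SVD steps of claims 6 and 7 are the classical rigid point-set alignment (Kabsch/Procrustes) solution. The sketch below follows those equations directly; the reflection guard on the determinant is a standard addition not spelled out in the claims.

```python
import numpy as np

def rigid_transform(P, Q):
    """P, Q: (n, 3) arrays of corresponding points; returns R (3x3), t (3,)."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)   # centroids of the two point sets
    P0, Q0 = P - p_bar, Q - q_bar                   # coordinates after removing centroids
    W = Q0.T @ P0                                   # covariance matrix W = sum q_i' p_i'^T
    U, _, Vt = np.linalg.svd(W)                     # singular value decomposition of W
    R = U @ Vt                                      # R = U V^T (unique when W has full rank)
    if np.linalg.det(R) < 0:                        # guard against an improper rotation
        U[:, -1] *= -1
        R = U @ Vt
    t = q_bar - R @ p_bar                           # translation from the two centroids
    return R, t
```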
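The bundle-adjustment objective of claim 8 sums squared re-projection distances over the observations with w_ij = 1. The sketch below expresses that residual and minimizes it with SciPy; the pose parameterization (Rodrigues rotation vector plus translation per camera) and the flat observation lists are assumptions for illustration.

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def reprojection_residuals(params, n_cams, n_pts, K, obs_cam, obs_pt, obs_uv):
    """params = [rvec | tvec per camera, then x y z per point], flattened."""
    cams = params[:n_cams * 6].reshape(n_cams, 6)
    pts3d = params[n_cams * 6:].reshape(n_pts, 3)
    residuals = []
    # iterate only over observations with w_ij = 1 (point i visible in image j)
    for j, i, uv in zip(obs_cam, obs_pt, obs_uv):
        rvec, tvec = cams[j, :3], cams[j, 3:]
        proj, _ = cv2.projectPoints(pts3d[i].reshape(1, 3), rvec, tvec, K, None)
        residuals.append(proj.ravel() - np.asarray(uv))  # d(Q(R_j, t_j, M_i), m_ij)
    return np.concatenate(residuals)

def bundle_adjust(x0, n_cams, n_pts, K, obs_cam, obs_pt, obs_uv):
    """Refine all camera poses and 3-D points by least-squares minimization."""
    result = least_squares(reprojection_residuals, x0, method="trf",
                           args=(n_cams, n_pts, K, obs_cam, obs_pt, obs_uv))
    return result.x
```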
Description
Crop three-dimensional reconstruction method based on weak light image enhancement
Technical Field
The invention relates to the technical field of image processing, and in particular to a crop three-dimensional reconstruction method based on weak-light image enhancement.
Background
Applying real-scene three-dimensional reconstruction to the operation of agricultural machinery addresses the complexity of the production environment and the variety of crop features, meets the demands of smart agriculture for refinement and efficiency, and provides precise information for agricultural machinery to identify crops accurately. When the illumination conditions of agricultural machinery in the field are not ideal, accurate three-dimensional reconstruction first enhances the crop images taken under weak light, then performs feature detection and matching on the enhanced images, obtains key points as feature points and their descriptors, and finally combines the camera intrinsic and extrinsic parameters to obtain the spatial point cloud and camera pose for the three-dimensional reconstruction of the crop. At present, the SfM method is mainly used for crop three-dimensional reconstruction, but it struggles to complete the reconstruction task under weak light: when the input crop image is constrained by illumination, an incorrect spatial point-cloud shape estimate and an inaccurate camera trajectory are produced, and in some scenes real features go undetected or pseudo features are detected that cannot be matched with features in other images.
Disclosure of the Invention
In view of these problems, the invention aims to provide a crop three-dimensional reconstruction method based on weak-light image enhancement that acquires a correct spatial point-cloud shape and an accurate camera pose, reconstructs a more accurate crop point-cloud model, and improves the accuracy of crop perception by agricultural machinery in weak-light working environments. To achieve this purpose, the technical scheme adopted by the invention is as follows: the crop three-dimensional reconstruction method based on weak-light image enhancement comprises weak-light crop image enhancement and crop three-dimensional reconstruction, wherein the weak-light crop image enhancement comprises acquiring weak-light crop images and enhancing them, its main function being to raise the brightness of the crop images acquired in a weak-light environment and thereby improve image quality; the crop three-dimensional reconstruction comprises feature detection and matching based on the Scale-Invariant Feature Transform (SIFT) and spatial point-cloud reconstruction based on Structure from Motion (SfM), its main functions being to obtain key points as feature points and their description vectors, to obtain the matched point pair set by Euclidean-distance comparison, and to obtain the spatial point cloud and camera pose by combining the camera intrinsic and extrinsic parameters, so as to carry out the crop three-dimensional reconstruction. A sketch of the resulting two-view reconstruction chain is given below.
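As a rough illustration of the chain just described, the following OpenCV sketch goes from matched feature points and the intrinsic matrix to the essential matrix, the recovered camera pose, and triangulated crop points. The inputs pts1, pts2, and K are assumed to come from the matching and EXIF steps above; this is a sketch under those assumptions, not the patent's implementation.

```python
import numpy as np
import cv2

def two_view_reconstruction(pts1, pts2, K):
    """pts1, pts2: (n, 2) matched pixel coordinates; K: 3x3 intrinsic matrix."""
    pts1 = np.asarray(pts1, dtype=np.float64)
    pts2 = np.asarray(pts2, dtype=np.float64)
    # essential matrix from the matched points and the epipolar constraint
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    # pose recovery (the SVD of E happens inside recoverPose)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    # triangulate the matched points into a 3-D crop point cloud
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return R, t, (pts4d[:3] / pts4d[3]).T
```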
Further, in the process of acquiring the weak-light crop images, during outdoor weak-light image acquisition a camera shoots with the crop to be reconstructed as the center, which facilitates panoramic stitching of the shooting scene; one image is acquired for every 10 degrees of rotation around the scene of the crop to be reconstructed, and a large number of weak-light crop images containing information at different heights are acquired in order around the crop, with adjacent images retaining more of the same scene content, so that the reconstruction details are rich while the time spent on image matching is reduced. Further, in the weak-light crop image enhancement, a weak-light crop image enhancement network is formed by the three sub-networks of layer decomposition, reflectance restoration, and illumination adjustment; crop images taken under two different exposure conditions are used as the network input and the noise of the weak-light crop image is removed; the layer decomposition sub-network decomposes the input weak-light crop image into an illumination component and a reflectance component, where the illumination component is responsible for brightness adjustment, the reflectance component is used to remove degradation, and weights are shared between the two branches; the illumination intensity of the decomposed illumination component is flexibly adjusted by the illumination adjustment sub-network composed of several convolution layers, and the decomposed reflectance component is used as the input of the reflectance