CN-122023821-A - Runway characteristic detection method for landing of large aircraft

CN 122023821 A

Abstract

The invention provides a runway characteristic detection method for the landing of a large aircraft, belonging to the field of aviation engineering tests. The method detects and segments the region of interest (ROI) of a runway image acquired by a visual sensor using a deep neural network, then applies a slope-and-intercept clustering algorithm and a scoring mechanism to the straight-line detection result to remove most of the interfering line features, completing accurate extraction of the runway's left and right edge lines. The method extracts the feature points of the measured data automatically and is characterized by a large, objective, and accurate data sample, small error, and strong practicability.

Inventors

  • YANG WEIPING
  • BU YIN
  • WANG DEZHONG
  • LI LIGONG
  • KANG HONGSHENG

Assignees

  • AVIC Xi'an Flight Automatic Control Research Institute (中国航空工业集团公司西安飞行自动控制研究所)

Dates

Publication Date
2026-05-12
Application Date
2025-12-29

Claims (9)

  1. A runway characteristic detection method for landing of a large aircraft, characterized by comprising the following steps: S1, labeling a large amount of runway image data and using it as a training set to train a deep neural network model; S2, taking the runway characteristic image to be detected as the input of the deep neural network, and detecting and extracting the runway region image with the network; S3, screening the detected runway feature frames; S4, classifying the left and right edges of the screened detection frames, introducing a clustering and scoring mechanism to remove interfering straight-line segments, and completing the extraction of the runway's left and right edge lines and corner features.
  2. The method of claim 1, wherein S1, labeling a large amount of runway image data and using it as a training set to complete training of the deep neural network model, comprises: S11, acquiring in advance big-data samples of various airport runways over different landing altitude sections and under different ambient illumination and meteorological conditions; S12, completing manual labeling based on the big-data samples, finely annotating by hand the area range of the runway and the runway corner-point features; S13, feeding the labeled data, as a training set, into the deep neural network for training.
  3. The method of claim 1, wherein S2, taking the runway characteristic image to be detected as the input of the deep neural network's visual detection and recognition algorithm for runway characteristics, and detecting and extracting the runway region image with the network, comprises: S21, performing feature extraction on the input runway characteristic image through several convolution and pooling operations, and outputting feature maps at three scales; S22, feeding the three-scale feature maps into a scale-fusion network and outputting an image carrying a detection frame and a segmentation result, wherein the detection frame is the area containing the runway and the segmentation result is the area where the runway is located.
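
The patent discloses the network only as convolution/pooling stages producing three feature-map scales that feed a fusion head (S21-S22). As a hedged illustration, the PyTorch sketch below shows what such a three-scale backbone could look like; all layer widths, depths, and the input size are assumptions, not the patent's architecture.

```python
import torch
import torch.nn as nn

class ThreeScaleBackbone(nn.Module):
    """Illustrative three-scale feature extractor (S21); sizes assumed."""
    def __init__(self):
        super().__init__()
        def block(cin, cout):
            # one convolution + pooling stage, halving spatial resolution
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage1 = nn.Sequential(block(3, 32), block(32, 64), block(64, 128))
        self.stage2 = block(128, 256)
        self.stage3 = block(256, 512)

    def forward(self, x):
        f1 = self.stage1(x)   # 1/8-scale feature map
        f2 = self.stage2(f1)  # 1/16-scale feature map
        f3 = self.stage3(f2)  # 1/32-scale feature map
        return f1, f2, f3     # would feed a scale-fusion head (S22)

feats = ThreeScaleBackbone()(torch.zeros(1, 3, 416, 416))
print([f.shape for f in feats])
```
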
  4. The method of claim 1, wherein S3, screening the detected runway feature frames based on the relative projection location principle, comprises: S31, judging whether multiple detection frames exist in the detection and segmentation result image; if so, screening the detection frames to obtain the runway on which the aircraft is to land, executing the subsequent steps, and thereby fixing the object on which the subsequent corner-point extraction and edge-line fitting are performed; S32, if only one detection frame exists in the detection and segmentation result image, skipping the detection-frame screening process and directly entering the subsequent steps of judging the proportion of the image area occupied by the detection frame and extracting corner points; S33, processing the image in each region of the detected frame set, converting it to a grayscale image and extracting a contour; S34, for each detection frame whose contour is successfully extracted, computing the corresponding minimum bounding rectangle, obtaining the pixel coordinates of the rectangle's centre point, and calculating the Euclidean distance between this centre point and the centre point of the original image; and S35, finding and marking the detection frame with the smallest distance to the centre point of the original image, regarding it as the runway area on which to land, and eliminating the other detection frames.
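
A minimal OpenCV/NumPy sketch of the screening rule in claim 4 (S33-S35): convert each detection-frame region to grayscale, extract a contour, compute the minimum bounding rectangle and its centre, and keep the frame whose centre lies closest to the image centre. The (x, y, w, h) box format and the Otsu thresholding step are assumptions added for illustration.

```python
import cv2
import numpy as np

def select_landing_runway(image, boxes):
    """boxes: list of (x, y, w, h); returns index of the retained frame."""
    h, w = image.shape[:2]
    img_center = np.array([w / 2.0, h / 2.0])
    best_idx, best_dist = None, np.inf
    for i, (x, y, bw, bh) in enumerate(boxes):
        roi = image[y:y + bh, x:x + bw]
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)           # S33: grayscale
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY | cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue                                           # contour extraction failed
        biggest = max(contours, key=cv2.contourArea)
        (cx, cy), _, _ = cv2.minAreaRect(biggest)              # S34: min bounding rect
        center = np.array([x + cx, y + cy])                    # back to full-image coords
        dist = np.linalg.norm(center - img_center)             # Euclidean distance
        if dist < best_dist:
            best_idx, best_dist = i, dist                      # S35: closest frame wins
    return best_idx
```
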
  5. The method of claim 1, wherein S4, classifying the left and right edges of the screened detection frame, introducing a clustering and scoring mechanism to eliminate interfering straight-line segments, and completing the extraction of the runway's left and right edge lines and corner features, comprises: (1) operating on the segmentation-result mask inside the single retained detection frame, performing contour extraction and straight-line extraction within the frame; if complete contour information can be successfully extracted, storing the result in contours and determining the left and right edge lines and four corner points of the runway from the extracted complete contour information; (2) if complete runway contour information cannot be obtained and only a number of straight-line segments can be detected, introducing a clustering and scoring mechanism to remove the interfering segments and complete the extraction of the runway's left and right edge line and corner-point features.
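
Claim 5 branches on whether a complete runway contour can be recovered from the mask. The sketch below is one rough reading of that branch, assuming a binary mask and an arbitrary area ratio as the completeness test (the patent does not state one); Canny/Hough parameters are likewise assumed.

```python
import cv2
import numpy as np

def contour_or_lines(mask):
    """Return ('contour', points) when a complete contour is found,
    else ('lines', segments) for the clustering/scoring fallback."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if contours:
        biggest = max(contours, key=cv2.contourArea)
        # assumed completeness test: contour covers a meaningful mask share
        if cv2.contourArea(biggest) > 0.05 * mask.shape[0] * mask.shape[1]:
            return "contour", biggest.reshape(-1, 2)
    edges = cv2.Canny(mask, 50, 150)
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                               minLineLength=20, maxLineGap=5)
    segs = [] if segments is None else [tuple(s[0]) for s in segments]
    return "lines", segs          # to be clustered and scored per claims 8-9
```
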
  6. The method of claim 5, wherein in (1), determining the left and right edge lines and four corner points of the runway from the extracted complete contour information is specifically: S16, extracting the points lying between 1/10 and 1/3 of the height from the top of the circumscribed rectangle from the contour point set contours and storing them in the marked point set filtered_points; S17, calculating, for each point in filtered_points, the distance in the y direction to the top-left vertex of the circumscribed rectangle and storing the results in distances; S18, sorting distances in ascending order, taking the first 10 points, and storing the minimum and maximum x coordinates among these 10 points in min_x and max_x respectively; S19, classifying all points in filtered_points into left and right: points with x coordinate larger than max_x are regarded as points on the right runway edge line and stored in right_points, and points with x coordinate smaller than min_x are regarded as points on the left runway edge line and stored in left_points; if either left_points or right_points is empty, an "undetected" flag is returned; otherwise a straight line is fitted to each of the left and right point sets to obtain the equations of the two fitted lines; S20, defining the 4 boundaries of the circumscribed rectangle as: upper boundary from (x, y) to (x+w, y); right boundary from (x+w, y) to (x+w, y+h); lower boundary from (x, y+h) to (x+w, y+h); left boundary from (x, y) to (x, y+h); then calculating the intersections of the left edge line with the upper, left, and lower boundaries and judging as follows: if the left edge line intersects the upper boundary and the intersection lies within the upper boundary segment, that intersection is taken as the extracted upper-left corner of the runway and stored in point1; if the intersection of the left edge line with the left boundary lies within the left boundary segment and its intersection with the lower boundary lies outside the lower boundary segment, the intersection with the left boundary is taken as the extracted lower-left corner of the runway, otherwise the intersection with the lower boundary is taken as the extracted corner; the result is stored in point4; the same operation is carried out on the right edge line, whose intersection with the upper boundary is taken as the upper-right corner of the runway and stored in point2; S21, if the intersection of the right edge line with the right boundary lies within the right boundary segment and its intersection with the lower boundary lies outside the lower boundary segment, the intersection with the right boundary is taken as the extracted lower-right corner of the runway, otherwise the intersection with the lower boundary is taken as the extracted corner; the result is stored in point3; S22, connecting the upper-left corner point1 with the lower-left corner point4 and solving the line equation to obtain the runway left edge line equation, and connecting the upper-right corner point2 with the lower-right corner point3 and solving the line equation to obtain the runway right edge line equation.
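
A sketch of the corner rule in S20-S21: intersect a fitted edge line y = kx + b with the boundaries of the circumscribed rectangle (x, y, w, h), keep the upper-boundary crossing as the upper corner, and fall back from the lower boundary to the side boundary when the crossing leaves the box. A non-horizontal line (k != 0) is assumed, and all names are illustrative.

```python
def corners_from_edge_line(k, b, x, y, w, h):
    # assumes a non-horizontal edge line: y = k * x + b with k != 0
    def x_at(yy):
        return (yy - b) / k
    top = (x_at(y), y)                    # crossing of the upper boundary line
    bottom = (x_at(y + h), y + h)         # crossing of the lower boundary line
    # S20: the upper corner is valid only inside the upper boundary segment
    upper = top if x <= top[0] <= x + w else None
    # S20/S21: keep the lower crossing if it stays on the lower boundary
    # segment, otherwise fall back to the left or right boundary
    if x <= bottom[0] <= x + w:
        lower = bottom
    else:
        side_x = x if bottom[0] < x else x + w
        lower = (side_x, k * side_x + b)
    return upper, lower
```
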
  7. The method of claim 5, further comprising: S23, when the mask area is smaller than 2% of the total image area, first extracting the contour and the straight lines of the mask; if complete contour information can be successfully extracted, storing the extraction result in contours and executing S24 to S27; if complete runway contour information cannot be obtained and only a number of straight-line segments can be detected, clustering according to those segments; S24, extracting the points in the upper 1/3 of the circumscribed rectangle from the complete contour point set contours and storing them in the marked point set filtered_points; S25, calculating the distance in the y direction between each point in filtered_points and the top-left vertex of the circumscribed rectangle and storing the results in distances; S26, sorting distances in ascending order, taking the first 10 points, storing among them the points with the minimum and maximum x coordinates in point1 and point2 respectively, and taking the 2 lower vertices of the detection frame output by the neural network as the lower-left vertex point4 and lower-right vertex point3 of the runway respectively; S27, connecting the upper-left corner point1 with the lower-left corner point4 and solving the line equation to obtain the runway left edge line equation, and connecting the upper-right corner point2 with the lower-right corner point3 and solving the line equation to obtain the runway right edge line equation.
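
A NumPy sketch of the far-field corner extraction in S24-S26, used when the mask occupies less than 2% of the image: filter the contour points to the upper third of the bounding rectangle, sort them by y-distance to the top-left vertex, and take the extreme-x points among the ten nearest as the two upper corners. The array shape and (x, y, h) convention are assumptions.

```python
import numpy as np

def upper_corners_from_contour(points, x, y, h):
    pts = np.asarray(points, dtype=float)          # contour points, shape (N, 2)
    upper_third = pts[pts[:, 1] <= y + h / 3.0]    # S24: keep the upper 1/3
    if len(upper_third) == 0:
        return None, None                          # nothing usable in this frame
    dists = upper_third[:, 1] - y                  # S25: y-distance to top-left vertex
    nearest = upper_third[np.argsort(dists)[:10]]  # S26: the ten closest points
    point1 = nearest[np.argmin(nearest[:, 0])]     # smallest x -> upper-left corner
    point2 = nearest[np.argmax(nearest[:, 0])]     # largest x  -> upper-right corner
    return point1, point2
```
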
  8. The method according to claim 7, wherein, if complete runway contour information is not obtained and only a number of straight-line segments can be detected, the left and right runway lines are clustered from those segments, specifically: S28, if the complete runway contour contours cannot be successfully detected in the detection frame, clustering the straight-line features detected in the frame according to the slope k and intercept b of each segment; S29, traversing each cluster, computing the average slope k of its segments, their average intercept b, the coordinates of the endpoints on both sides, and the accumulated total length, then finding, among all clusters, those whose accumulated segment length exceeds 100 pixels, whose member segments are separated by less than 30 pixels, and in which the ratio of any member segment's length to the accumulated cluster length exceeds 0.8; S30, if only one cluster qualifies, sorting the remaining segments by length in descending order and traversing them in turn; if the intersection of a remaining segment with the qualified segment lies above both lines, taking the two as the runway lines on the two sides; S31, when two or more clusters qualify, calculating the score of each segment; S33, checking in turn whether the intersection of the segment pairs ranked (1st, 2nd), (1st, 3rd), (2nd, 3rd), (1st, 4th), (2nd, 4th), and (3rd, 4th) by score lies above both lines, stopping as soon as the condition is met and outputting the corresponding segments as the left and right runway lines.
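
A sketch of the slope/intercept clustering in S28: segments whose (k, b) parameters agree within a tolerance are grouped into one line family; the 100-pixel, 30-pixel, and 0.8 filters of S29 would then be applied per cluster. The tolerances dk and db are assumptions, since the patent does not state them.

```python
def cluster_segments(segments, dk=0.1, db=20.0):
    """segments: list of (x1, y1, x2, y2); returns clusters of (index, k, b)."""
    clusters = []
    for i, (x1, y1, x2, y2) in enumerate(segments):
        if x1 == x2:
            continue                       # skip exactly vertical segments for brevity
        k = (y2 - y1) / float(x2 - x1)     # slope of this segment
        b = y1 - k * x1                    # intercept of this segment
        for cl in clusters:
            _, k0, b0 = cl[0]
            if abs(k - k0) < dk and abs(b - b0) < db:
                cl.append((i, k, b))       # same line family: merge into the cluster
                break
        else:
            clusters.append([(i, k, b)])   # start a new cluster
    return clusters
```
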
  9. The method of claim 8, wherein S31, the score of each line segment, specifically comprises the sum of three sub-scores: score1 is the current line length divided by the maximum line length; score2 is (the accumulated length of the segments contained in the line / the whole line length), divided by the maximum value of that ratio over all lines; and score3 is the minimum distance between the line with the highest sum of the first two sub-scores and the endpoints of the other segments, divided by the minimum distance between the current line and the endpoints of that highest-scoring line.
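
Claim 9's composite score, rendered as a hedged Python sketch. The endpoint-distance conventions behind score3 are ambiguous in the translated claim, so the ratios below are one plausible reading, not the patent's definitive formula.

```python
def segment_score(length, max_length,
                  covered_len, line_len, best_coverage_ratio,
                  best_min_endpoint_dist, dist_to_best):
    # score1: absolute length, normalised by the longest candidate segment
    score1 = length / max_length
    # score2: how much of the full fitted line this cluster actually covers,
    # normalised by the best coverage ratio among all candidates
    score2 = (covered_len / line_len) / best_coverage_ratio
    # score3: closeness to the segment ranked highest on score1 + score2
    # (one plausible reading of the translated claim)
    score3 = best_min_endpoint_dist / dist_to_best if dist_to_best else 1.0
    return score1 + score2 + score3
```
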

Description

Runway characteristic detection method for landing of large aircraft

Technical Field

The invention relates to the technical field of aviation tests, and in particular to a runway characteristic detection method for the landing of a large aircraft.

Background

Visual landing is a critical technique in the field of aviation: using cameras and image processing algorithms, it assists pilots in obtaining accurate position and orientation information while landing an aircraft. For runway image recognition during the landing process, high-precision recognition is key to safe and efficient landing. However, achieving high-accuracy runway image identification remains challenging due to interference from complex environmental conditions, illumination variations, and image noise.

Traditional image processing methods have clear limitations for runway image identification in complex environments. For example, conventional feature extraction and classification algorithms are sensitive to illumination variations and noise, making it difficult to extract accurate runway feature information. Furthermore, conventional machine learning methods are often limited in algorithm complexity and computational efficiency when dealing with large-scale data and complex patterns.

With the rapid development of computer vision and deep learning, researchers have begun to explore deep-learning-based methods for runway image recognition. A deep learning model can learn feature representations from large-scale data and has strong adaptive and generalization capabilities. Nevertheless, high-precision runway image recognition based on deep learning still faces challenges, owing to the difficulty of acquiring and labeling data sets and to the computational resources and time cost of training a deep model: acquiring and labeling data sets is critical to training a high-precision recognition model, and the collection and labeling of large-scale runway image data sets is a time-consuming and expensive task. Moreover, because runway images are diverse, covering different lighting conditions, weather conditions, and runway geometries, the data sets must be diverse and representative to improve the generalization ability of the model.

The requirements of real-time operation and robustness also present new challenges for high-precision runway image identification. In the aeronautical field, real-time performance is critical, as pilots need timely runway information to make decisions, while robustness means the recognition system must maintain good performance and stability under different environmental conditions, illumination changes, aircraft position angles, and other factors. Efficient algorithms and models are therefore necessary to meet both requirements. In addition, runway image recognition faces further challenges such as low-contrast images, occlusion, and distortion, all of which can blur and distort runway features and thereby reduce recognition accuracy. New image processing and deep learning methods are needed to address these challenges, and research on high-precision runway image recognition for visual landing is therefore significant.
By solving the problems of data-set acquisition and labeling, improving the accuracy and generalization capability of the deep learning model, designing an efficient real-time algorithm, and improving robustness, a more accurate and reliable runway image recognition technology can be realized, providing strong support for safety and efficiency in the aviation field.

Disclosure of Invention

The invention aims to design a runway characteristic detection algorithm based on a lightweight neural network and oriented to the large-aircraft landing scene. The algorithm can accurately and efficiently identify and select the proper runway from real-time runway video or images. On the one hand, runway information can be displayed visually on a display, so that the pilot can conveniently and intuitively observe and acquire accurate runway information during landing, improving landing safety; on the other hand, the algorithm can serve as the input of the visual-aided landing positioning computation, where an accurate, real-time runway characteristic detection result better supports the positioning module in measuring the relative position between the aircraft and the runway. The algorithm can successfully detect runway features (corner points and left and right edge lines) in complex landing scenes and under low visibility, without introducing information from other external sensing sources. A runway characteristic detection method facing large