
CN-121995395-A - Railway-line radar-vision fusion multi-sensor collaborative sensing system and installation and calibration method

CN121995395A

Abstract

The invention discloses a railway-line radar-vision fusion multi-sensor collaborative sensing system and an installation and calibration method, belonging to the technical field of intelligent transportation and environment sensing. Aimed at the problems of low target-detection precision in complex scenes along railways, poor timeliness of multi-source data fusion, and the dependence of sensor installation calibration on manual work, the system provides an innovative "layered perception - dynamic fusion - autonomous calibration" architecture: a multimodal sensor array of laser radar, millimeter-wave radar, visible-light camera and thermal infrared imager is deployed; autonomous optimization of the sensor pose parameters is realized using track geometric constraints and a deep-learning model; and an integrated edge-computing-node and cloud collaborative platform supports real-time target tracking, intrusion early warning and equipment-state diagnosis. Experiments show that in rain-and-fog, night and high-speed-motion scenes the system achieves a target-detection accuracy of ≥ 98.5% and a false-alarm rate of ≤ 0.3%, and the self-calibration error of the sensor installation parameters is below 0.05°, an improvement of more than 40% over traditional schemes.

Inventors

  • ZHANG SHILE
  • ZOU XINGXING

Assignees

  • Central South University (中南大学)

Dates

Publication Date
2026-05-08
Application Date
2026-01-23

Claims (10)

  1. A railway-line radar-vision fusion multi-sensor collaborative sensing system, characterized by comprising the following functional modules and hardware, which cooperate to realize all-weather, high-precision, low-latency sensing of the environment and of intruding targets along the railway line, and which can autonomously complete online calibration and dynamic compensation of the sensor installation parameters during operation: a multimodal sensor array, deployed in a distributed manner along the railway line according to a preset spatial topology and a scene-adaptation principle, comprising at least a laser radar (LiDAR, preferably a 32- or 64-line mechanical/solid-state radar; horizontal field of view 90° to 150°, vertical field of view 20° to 40°, maximum range ≥ 200 m, ranging accuracy ≤ ±2 cm), a millimeter-wave radar (MMW Radar, operating band 77 GHz or 79 GHz; horizontal field of view 60° to 120°, maximum detection distance ≥ 300 m, velocity measurement range −200 km/h to +400 km/h, velocity resolution ≤ 0.1 m/s), a visible-light camera (VIS Camera, resolution ≥ 1920×1080 and preferably 3840×2160; frame rate ≥ 25 fps and preferably 30 fps; automatic exposure and strong-light suppression; focal length adjustable between 6 mm and 25 mm) and a thermal infrared imager (IR Camera, resolution ≥ 320×240); the sensing coverage of each sensor extends 50 m to 500 m along the railway, and the sensing range
between any adjacent sensors is provided with a cross-overlap area of not less than 20% so that a target is not lost when crossing a sensing boundary; a space-time synchronization module, which adopts a dual mechanism of hardware trigger signals and software timestamp correction: on the hardware side, the master-node laser radar emits a periodic pulse (pulse width 10 µs ± 1 µs, TTL high level) fed through shielded cables directly to the external trigger inputs of the millimeter-wave radar, the visible-light camera and the thermal infrared imager, achieving microsecond-level consistency of the data acquisition start times; on the software side, a PTP transparent clock (TRANSPARENT CLOCK) distributes the master-node time across the network, and all measurements are expressed in a right-handed Cartesian world coordinate system with the Z axis vertical, bounding the comprehensive spatial-conversion error; an edge fusion processing unit, which carries an attention-based radar-vision feature fusion network (RAF-Net) running in real time on an edge computing node (hardware platform NVIDIA Jetson AGX Xavier, Xilinx Zynq UltraScale+ MPSoC or Atlas 500) and performing cross-modal feature extraction and fusion on the multi-source data output by the time synchronization module, wherein ① cross-modal feature encoders process the laser-radar point cloud with a PointNet++ architecture to extract three-dimensional geometric features (point cloud downsampled to within 8192 points, feature dimension 256), process the millimeter-wave radar echo with a Range-Doppler graph convolutional network to extract radial-velocity and Doppler features (feature dimension 128), and process the visible-light and infrared images with a ResNet-50 backbone to extract texture, edge and thermal-radiation features (feature dimension 512); ② an attention fusion layer introduces channel attention (Squeeze-and-Excitation Block) and spatial attention (Convolutional Block Attention Module) mechanisms to dynamically assign weights to the feature channels and spatial positions of the different modalities, suppressing invalid or noisy features in interference scenes such as rain and fog, backlight and high-temperature-difference backgrounds while enhancing the response of salient target features; ③ a multi-task decoder outputs a three-dimensional target detection box (center-point coordinates, length, width, height and heading angle; positioning error not exceeding 0.3 m within 50 m), a three-dimensional velocity vector (vx, vy, vz; velocity estimation error ≤ 0.2 m/s) and a target-class probability distribution (classes cover trains, locomotives, pedestrians, bicycles, falling rocks, animals and large floating objects; class-recognition accuracy ≥ 98%); an installation and calibration subsystem, which consists of an offline calibration device, an online self-calibration module and a dynamic compensation algorithm and realizes automated, periodically optimized estimation of the sensor extrinsic parameters (installation position x, y, z and attitude angles α, β, γ); the offline calibration device comprises a high-precision calibration target (formed by splicing a high-reflectivity pyramid-prism array with a thermochromic material) whose geometric dimensions are known and which is placed at several control points of known coordinates along the railway (such as kilometer posts and turnout-zone datum points); driven by an automatic calibration program, the sensor array scans the target from multiple angles, and an Iterative Closest Point (ICP) algorithm matches the laser-radar point cloud to the three-dimensional coordinates of the control
points (registration error ≤ 0.3 mm), combined with a perspective-n-point (PnP) algorithm to solve the camera intrinsic parameters (focal length, principal point, distortion coefficients) and extrinsic parameters (position and attitude relative to the laser radar), achieving sub-pixel calibration accuracy (reprojection error ≤ 0.1 pixel); the online self-calibration module continuously monitors sensor pose changes during system operation and constructs a reference map from the fixed geometric constraints of the railway track (such as the track gauge 1435 mm ± 0.5 mm, sleeper spacing 600 mm ± 10 mm, and rail parallelism error ≤ 2 mm/m); when the extrinsic deviation caused by vibration or temperature drift exceeds a set threshold (position deviation ≥ 1 cm or attitude-angle deviation ≥ 0.05°), an incremental SLAM process is started automatically and the extrinsic matrix is updated by fusing point-cloud registration with a visual odometer, with a calibration period not exceeding 10 s; the dynamic compensation algorithm acquires in real time the readings of the built-in temperature sensor (accuracy ±0.5 °C) and triaxial accelerometer (range ±20 g, resolution ≤ 0.01 g), establishes a temperature-to-focal-length offset model and a vibration-to-attitude-angle offset model, and predicts and compensates parameter drift with an extended Kalman filter so that the target positioning error over the full working temperature range (−40 °C to +85 °C) does not exceed 0.1 m; and a collaborative decision platform, deployed on a cloud-server and edge-node collaborative architecture, which receives the real-time detection results of the edge fusion processing unit, performs multi-target tracking (using the DeepSORT or ByteTrack algorithm, target-ID switch rate ≤ 5%), risk-level assessment (computing a risk index R from the closest distance between the target and the track, the target speed and a class weight, and triggering a graded early warning) and early-warning instruction generation in combination with a railway operation rule base (comprising the train operation plan, section speed limits, construction sections and temporary speed-limit orders), and pushes early-warning information to the dispatching-center large-screen display terminal, the train-borne ATP/ATO system and a mobile terminal APP through a 5G NR or BeiDou short-message wireless communication module, with a push delay not exceeding 100 ms.
  2. The railway-line radar-vision fusion multi-sensor collaborative sensing system according to claim 1, characterized in that the topology of the multimodal sensor array adopts a hierarchical distributed layout of master and slave nodes: a master node is arranged every 2 km along the railway on a catenary mast or dedicated monitoring tower at an installation height of 5 m to 8 m, integrating a high-line-count laser radar, a long-range millimeter-wave radar and an edge computing node to form a local sensing and fusion-processing core; slave nodes are distributed at intervals of 500 m ± 50 m on railway guard-rail posts or independent supports at an installation height of 1.2 m to 2 m, each carrying a group of visible-light cameras and thermal infrared imagers angled toward the outer and inner sides of the track respectively (for example, the optical axis of the outer camera forms 30° to 60° with the perpendicular to the track, and the inner camera forms −30° to −60°) to enlarge the lateral coverage; the nodes are connected by a single-mode fiber ring network with a link bandwidth of not less than 1 Gbps, and the transmission protocol adopts TSN (Time-Sensitive Networking) with redundant data paths, so that path switching does not affect real-time sensing across the network.
  3. The railway-line radar-vision fusion multi-sensor collaborative sensing system according to claim 1, characterized in that the space-time synchronization module adopts a dual synchronization mechanism of the PTP precision clock protocol and hardware triggering: the master node is equipped with an oven-controlled crystal oscillator (OCXO) as the PTP grandmaster clock source with a frequency stability of ≤ ±1 ppb; message transmit and receive times are recorded through the hardware-timestamp function of the Ethernet PHY chip, and the measured slave-node synchronization error does not exceed 5 µs; the hardware trigger signal is generated under the control of an FPGA (field-programmable gate array) in the master-node laser radar, with a trigger period consistent with the laser-radar scanning frequency (such as 10 Hz or 20 Hz) and a trigger jitter of ≤ 0.2 µs; the signal is transmitted to each slave-node sensor through impedance-matched shielded cables with a transmission delay calibrated by oscilloscope to below 1 µs; and network-wide time-offset and trigger-delay calibration is executed once in the system initialization stage, generating a compensation table for dynamic correction during operation.
  4. The railway-line radar-vision fusion multi-sensor collaborative sensing system according to claim 1, characterized in that the RAF-Net network structure and training method specifically comprise: the laser-radar branch of the cross-modal feature encoder adopts PointNet++ hierarchical sampling and grouped feature extraction (3 set-abstraction levels with ball-query radii of 0.5 m, 1.0 m and 2.0 m in sequence) and outputs a 256-dimensional global feature vector; the millimeter-wave radar branch normalizes the Range-Doppler map and extracts motion features through two convolutional layers (Conv 3×3, stride = 1, channels = [64, 128]) and pooling; the visible-light and infrared image branches share the first four residual blocks of the ResNet-50 backbone, unify the feature-map resolution to 1/8 of the input size by bilinear interpolation and output 512-dimensional features; the attention fusion layer performs channel-importance weighting (reduction ratio r = 16) and spatially reinforces the target region through CBAM; and the multi-task decoder heads are trained jointly with weighted regression and classification loss terms.
  5. The railway-line radar-vision fusion multi-sensor collaborative sensing system according to claim 1, characterized in that the offline calibration device of the installation and calibration subsystem further comprises a target bracket with adjustable height and pitch angle to adapt to sensor viewing angles at different installation heights; the target surface is coated with a high-reflectivity material and a thermochromic coating so that it remains highly distinguishable in both the visible and infrared bands; the automatic calibration program controls the sensor array to collect target data along preset scanning trajectories (such as horizontal rotation ±30° and pitch ±15°), realizes ICP and PnP solving by invoking the PCL (Point Cloud Library) and OpenCV libraries respectively, and after calibration generates an encrypted calibration file (comprising the intrinsic matrix, distortion coefficients, extrinsic matrix, timestamp and a verification code) that is stored in the edge-node non-volatile memory and synchronously uploaded to the cloud for backup.
  6. The railway-line radar-vision fusion multi-sensor collaborative sensing system according to claim 1, characterized in that the implementation steps of the online self-calibration module comprise: (1) extracting track-plane features from the laser-radar point cloud in real time (fitting the ground and rail-plane equations with RANSAC); (2) extracting rail-fastener and sleeper corner features from the visible-light and infrared images (by Harris corner detection and template matching); (3) constructing the reprojection error function E = Σᵢ ‖p_i^img − π(T · P_i^lidar)‖², where p_i^img is the homogeneous coordinate of an image feature point, P_i^lidar is the corresponding point-cloud coordinate, T is the extrinsic matrix to be optimized and π is the pinhole projection model; (4) iteratively optimizing T with the Levenberg-Marquardt nonlinear least-squares algorithm, terminating when the error change rate falls below 1e-6 or 50 iterations are reached; (5) writing the updated extrinsic parameters to the configuration file and delivering them to the fusion network in real time.
  7. The railway-line radar-vision fusion multi-sensor collaborative sensing system according to claim 1, characterized in that the dynamic compensation algorithm further comprises: establishing a temperature-to-focal-length offset lookup table (the focal-length variation per 5 °C over the range −40 °C to +85 °C, measured in laboratory temperature-controlled experiments) and obtaining the compensation value during operation by linear interpolation of the temperature-sensor reading; establishing a vibration-to-attitude-angle offset model (estimating the instantaneous attitude change from accelerometer integration with low-pass filtering); fusing the predicted and measured values by Kalman filtering and outputting the compensated attitude angle for point-cloud and image coordinate transformation; after compensation, the target positioning error is ≤ 0.1 m and the velocity estimation error is ≤ 0.15 m/s under any working condition.
  8. An installation and calibration method for the railway-line radar-vision fusion multi-sensor collaborative sensing system, characterized by comprising the following steps, each finely controlled in combination with the special railway environment and the physical characteristics of the sensors: S1, offline pre-calibration: in a laboratory controlled environment, performing initial calibration of the intrinsic parameters (camera distortion coefficients k1, k2, p1, p2, k3 and laser-radar scanning-angle calibration factors) and extrinsic parameters (installation position x, y, z and Euler angles α, β, γ) of the sensor array using the high-precision calibration target and automatic calibration program according to claim 5, generating an initial calibration file, and verifying that the reprojection error is ≤ 0.1 pixel and the point-cloud registration error is ≤ 0.3 mm; S2, on-site coarse calibration: after the sensors are installed at the preset positions along the railway, acquiring the absolute geographic coordinates of each sensor antenna phase center (horizontal accuracy ±2 cm, elevation accuracy ±3 cm) with a GNSS-RTK receiver, computing the theoretical deviation of the installation position against the track-centerline coordinates in the railway-line design CAD drawing, and mechanically adjusting until the position deviation is ≤ 1 cm; S3, online fine calibration: starting the online self-calibration module according to claim 6, continuously collecting at least 100 frames of sensor data, extracting and matching the geometric and perceptual features of the track, and optimizing the extrinsic parameters until the change between two adjacent iterations satisfies position change < 1 cm and attitude-angle change < 0.01°, with a calibration period of ≤ 10 s; S4, dynamic compensation verification: with a train running or with disturbance applied manually (such as shifting a sensor angle by 0.2° or changing the ambient temperature by 20 °C), running the dynamic compensation algorithm according to claim 7 and verifying that the target-detection recall drops by ≤ 2%, the positioning error increases by ≤ 0.05 m and the early-warning delay increases by ≤ 50 ms; and S5, periodic maintenance calibration: the cloud operation-and-maintenance platform issues a calibration instruction every 3 months, steps S3 to S4 are executed automatically, and the updated calibration file is synchronized to the cloud database and all edge nodes, forming a closed calibration loop.
  9. The installation and calibration method for the railway-line radar-vision fusion multi-sensor collaborative sensing system according to claim 8, characterized in that the online fine calibration of step S3 further comprises: constructing a multi-scale feature-matching strategy, accelerating ICP matching at the point-cloud level with voxel-grid downsampling and improving feature-point stability at the image level with ORB features and optical-flow tracking; adding a track-geometry constraint penalty term to the optimization to prevent overfitting in feature-scarce areas (such as tunnels); and maintaining, in a sliding-window manner, the mean of the optimization results of the most recent N = 50 frames to suppress the influence of instantaneous noise on the extrinsic parameters.
  10. The installation and calibration method for the railway-line radar-vision fusion multi-sensor collaborative sensing system according to claim 8, characterized in that the dynamic compensation verification of step S4 further comprises: testing continuously for not less than 24 hours under different environmental conditions (rain, snow, fog, sand and dust, and night), counting the detection performance indexes under each condition, and forming an environmental-adaptability and compensation-effectiveness report for subsequent model iteration and parameter fine-tuning.
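The risk-level assessment in claim 1 computes a risk index R from the target's closest distance to the track, its speed and a class weight, but the claim does not disclose the weighting function. The following is an illustrative sketch only; the function shape, the scale constants `d0` and `v0`, and the 0.6/0.4 mix are assumptions, not the patented formulation.

```python
def risk_index(distance_m, speed_mps, class_weight, d0=50.0, v0=10.0):
    """Illustrative risk index: grows as the target nears the track and moves
    faster; class_weight encodes category severity (e.g. pedestrian > animal).
    d0/v0 are hypothetical normalization scales, not values from the patent."""
    proximity = max(0.0, 1.0 - distance_m / d0)   # 1 at the track, 0 beyond d0
    motion = min(1.0, abs(speed_mps) / v0)        # saturates at v0
    return class_weight * (0.6 * proximity + 0.4 * motion)

# A pedestrian (weight 1.0) 10 m from the track, moving at 2 m/s:
print(round(risk_index(10.0, 2.0, 1.0), 3))   # 0.56
```

A graded warning would then be triggered by thresholding R against per-level cut-offs defined in the railway operation rule base.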
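Claim 6 minimizes the reprojection error E = Σᵢ ‖p_i^img − π(T · P_i^lidar)‖² over the extrinsic matrix T with Levenberg-Marquardt. A minimal numerical sketch of the same idea, reduced for brevity to refining only the translation part of T by plain Gauss-Newton on synthetic data (the camera intrinsics and point distribution are made up for illustration):

```python
import numpy as np

def project(points_cam, fx=1000.0, fy=1000.0, cx=960.0, cy=540.0):
    """Pinhole projection pi: camera-frame 3-D points -> pixel coordinates."""
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    return np.stack([fx * x / z + cx, fy * y / z + cy], axis=1)

# Synthetic ground truth: lidar points 10-50 m ahead, true translation t_true.
rng = np.random.default_rng(0)
pts_lidar = rng.uniform([-5, -2, 10], [5, 2, 50], size=(100, 3))
t_true = np.array([0.2, -0.1, 0.05])
pts_img = project(pts_lidar + t_true)

# Gauss-Newton refinement of the translation, numerical Jacobian.
t = np.zeros(3)
for _ in range(20):
    r = (pts_img - project(pts_lidar + t)).ravel()      # residual vector
    J = np.empty((r.size, 3))
    eps = 1e-6
    for k in range(3):
        dt = np.zeros(3); dt[k] = eps
        J[:, k] = ((pts_img - project(pts_lidar + t + dt)).ravel() - r) / eps
    t -= np.linalg.solve(J.T @ J, J.T @ r)              # normal equations
print(t)  # converges to t_true
```

The full patented procedure additionally optimizes the rotation and uses LM damping with the stated termination criteria (error change rate < 1e-6 or 50 iterations).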
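Claim 7's compensation pipeline has two simple numerical cores: linear interpolation of a temperature-to-focal-length lookup table, and a Kalman fusion of predicted and measured attitude. A sketch of both, where the table values and variances are invented for illustration (the real table is measured in temperature-controlled experiments):

```python
import numpy as np

# Hypothetical lookup table: one entry per 5 degC over -40..+85 degC.
# The toy linear offsets stand in for the experimentally measured values.
temps = np.arange(-40.0, 90.0, 5.0)            # degC grid
focal_offsets = 0.002 * (temps - 20.0)         # mm, illustrative only

def focal_compensation(t_reading):
    """Linearly interpolate the table at the temperature-sensor reading."""
    return float(np.interp(t_reading, temps, focal_offsets))

def kalman_fuse(pred, pred_var, meas, meas_var):
    """One scalar Kalman update fusing model-predicted and measured
    attitude angle; returns (fused estimate, fused variance)."""
    k = pred_var / (pred_var + meas_var)        # Kalman gain
    return pred + k * (meas - pred), (1.0 - k) * pred_var

print(focal_compensation(22.5))                 # midway between 20 and 25 degC
fused, _ = kalman_fuse(0.10, 0.04, 0.14, 0.01)  # trust the measurement more
print(fused)
```

The fused attitude angle then feeds the point-cloud and image coordinate transformations, as the claim describes.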
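The sliding-window averaging of claim 9 (mean of the most recent N = 50 per-frame calibration results) can be sketched in a few lines; here the window is applied to a single scalar parameter for clarity, with a shortened window so the outlier damping is visible:

```python
from collections import deque
import statistics

class ExtrinsicSmoother:
    """Keep the mean of the last N per-frame calibration results so a single
    noisy frame cannot jerk the extrinsic estimate (claim 9 uses N = 50)."""
    def __init__(self, n=50):
        self.window = deque(maxlen=n)   # old frames fall off automatically
    def update(self, value):
        self.window.append(value)
        return statistics.fmean(self.window)

s = ExtrinsicSmoother(n=5)
for v in [1.0, 1.0, 1.0, 1.0, 10.0]:    # last frame is an outlier
    smoothed = s.update(v)
print(smoothed)   # 2.8 - the outlier is damped by the window average
```

In the real system each component of the extrinsic parameters (x, y, z, α, β, γ) would be smoothed this way, and the track-geometry penalty term guards the per-frame estimates before they even enter the window.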

Description

Railway-line radar-vision fusion multi-sensor collaborative sensing system and installation and calibration method

Technical Field

The invention relates to the technical field of intelligent transportation and environmental awareness, and in particular to a railway-line radar-vision fusion multi-sensor collaborative sensing system and an installation and calibration method.

Background

With the rapid expansion of China's high-speed and conventional-speed railway network, the "eight vertical, eight horizontal" high-speed rail backbone has basically taken shape: the national railway operating mileage has passed 160,000 km, of which high-speed railways exceed 40,000 km and conventional-speed railways account for about 120,000 km. The continuing rise of railway operating speeds (for example, 350 km/h on the Beijing-Shanghai high-speed railway and 400 km/h on some intercity railways) and the continuing growth of passenger and freight density have markedly raised the importance and urgency of safety monitoring along the lines. Common risks along railways include foreign-object intrusion into the clearance gauge (such as falling rocks, fallen trees and dropped engineering components), illegal intrusion (such as people climbing guard rails into the clearance), equipment abnormalities (such as foreign objects hanging on the catenary and damaged signaling facilities), and secondary risks caused by natural disasters (such as landslides and floods washing out the roadbed).
Once such risks occur, at the least a train must stop temporarily and the transport order is disrupted, and at worst serious accidents such as derailment and collision can result, gravely threatening people's lives and property and the stable operation of the national transport artery. The monitoring means currently in wide use along railways fall into three categories: manual inspection, fixed video monitoring and single-sensor automatic monitoring. Manual inspection relies on periodic walking or riding patrols by inspection workers; although some hidden dangers can be found, it suffers from long inspection cycles, limited coverage and low efficiency at night and in bad weather, and cannot respond in real time. Fixed video monitoring systems can achieve uninterrupted 24-hour recording and retrospective tracing with visible-light cameras distributed along the line, but their imaging quality degrades drastically, or fails entirely, at night and in thick fog, rain, snow or strong backlight, and their recognition rate for static or slowly moving small targets (such as lying animals and smaller falling rocks) is low.
Single-sensor automatic monitoring schemes usually deploy one or two of a laser radar, a millimeter-wave radar and a thermal infrared imager. Although all-weather detection capability can be improved, the inherent limitations arising from each sensor's physics are hard to compensate individually: laser radar attenuates severely in rain and fog, where its detection distance can be more than halved; millimeter-wave radar resolves target shape poorly and is prone to false alarms or missed detections; and a thermal infrared imager struggles to separate a target from the background when the temperature difference between them is small. The sensing capability of a single sensor is therefore insufficient for the complex and changeable scenes along railway lines, and a multi-sensor collaborative comprehensive sensing system must be developed. To overcome the limitations of single sensors, multi-sensor fusion technology has gradually been introduced in the industry, jointly processing different types of sensing data in time and space to improve the reliability and accuracy of detection. Judging from previously published patents, academic papers and commercial products, however, most schemes still have obvious technical bottlenecks, mainly reflected in four aspects: a shallow fusion level, insufficient space-time synchronization precision, poor environmental adaptability and difficult calibration maintenance. Regarding the fusion level, the mainstream adopts a post-fusion strategy of "independent detection first, result fusion later": each sensor first completes target detection locally, and the detection results of the different sensors are then associated and merged at the decision layer.
This approach is simple to implement but has obvious defects: first, each sensor is limited by physical ch