CN-121978728-A - Tug fusion enhanced perception method based on target tracking and obstacle recognition
Abstract
The application relates to the technical field of perception fusion and discloses a tug fusion enhanced perception method based on target tracking and obstacle recognition, which comprises the following steps: initializing and time-synchronizing the AIS, optical fiber compass, Beidou RTK, laser radar, millimeter wave radar and panoramic camera on the tug to establish the ship coordinate system; collecting AIS message data and screening the target ship by combining its MMSI information to obtain the initial position of the target ship; predicting the state sequence of the target ship to obtain its position, navigational speed and heading information; performing target recognition on the camera images, outputting semantic-level recognition results for ships, buoys and quay walls, and performing spatial projection alignment with the radar data; and fusing the processed results to obtain a fusion result and issuing early warnings according to the fusion result. The application improves the accuracy of relative-pose resolution for the target ship and the reliability of obstacle recognition, shortens the perception-chain latency, and reduces the blind-zone risk for the tug operator.
Inventors
- YANG ZHIXIN
- LI QIUNAN
- SUN BO
- MENG YONGXUN
- ZHANG KUN
- LIU CHEN
- ZHANG NAIYUE
Assignees
- 天津港轮驳有限公司
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2026-04-09
Claims (10)
- 1. A tug fusion enhanced perception method based on target tracking and obstacle recognition, characterized by comprising the following steps: initializing and time-synchronizing the AIS, optical fiber compass, Beidou RTK, laser radar, millimeter wave radar and panoramic camera on the tug to establish a ship coordinate system; collecting AIS message data, performing parsing, outlier rejection and time-series filtering, screening the target ship by combining its MMSI information, and obtaining the initial position of the target ship in the ship coordinate system through geometric correction; performing target tracking and position prediction on the state sequence of the target ship based on Kalman filtering to obtain the position, navigational speed and heading information of the target ship; when the target ship enters the distance-threshold range, invoking the laser radar to acquire a boundary point cloud of the target ship, calculating a transverse distance, a longitudinal distance and a center-point outline frame, and detecting the distance and relative speed of peripheral obstacles based on the millimeter wave radar; performing target recognition on the images acquired by the panoramic camera, outputting semantic-level recognition results for ships, buoys and quay walls, and performing spatial projection alignment with the radar data; and fusing the processed AIS data, optical fiber compass data, Beidou RTK data, laser radar data, millimeter wave radar data and panoramic camera recognition results to obtain a fusion result, and issuing an early warning according to the fusion result.
- 2. The tug fusion enhanced perception method based on target tracking and obstacle recognition according to claim 1, wherein performing the initialization and time synchronization and establishing the ship coordinate system comprises: taking the unified time output by the Beidou RTK as a reference, performing clock synchronization of the AIS, optical fiber compass, laser radar, millimeter wave radar and panoramic camera, recording the time deviation of each sensor, setting a millisecond-level synchronization window, and performing interpolation and delay compensation on the sampling timestamps to obtain aligned data frames; performing heading zero calibration based on the optical fiber compass, and establishing the ship coordinate system with the Beidou RTK antenna projection point as the origin, the heading direction as the x-axis and the port direction as the y-axis; reading the installation offset parameters of the AIS, optical fiber compass, Beidou RTK, laser radar, millimeter wave radar and panoramic camera, and solving the rigid-body transformation matrix from each sensor to the ship coordinate system, wherein the laser radar and the panoramic camera complete extrinsic calibration through joint calibration targets, and the millimeter wave radar completes extrinsic calibration through beam pointing and reflector alignment; and projecting the point-cloud features of the laser radar, the echoes of the millimeter wave radar and the recognition results of the panoramic camera into the ship coordinate system, comparing them against the heading and position given by the optical fiber compass and the Beidou RTK, and correcting the corresponding camera extrinsic parameters and time deviations if the projection error or heading error exceeds the error threshold.
- 3. The tug fusion enhanced perception method based on target tracking and obstacle recognition according to claim 1, wherein performing parsing, outlier rejection and time-series filtering, screening the target ship by combining its MMSI information, and obtaining the initial position of the target ship in the ship coordinate system through geometric correction comprises: unpacking the AIS messages and extracting the MMSI information, longitude and latitude, navigational speed, heading, ship length, ship width, and the installation distance parameters from the AIS antenna to the bow, stern, port side and starboard side, to form a state sequence; filtering the state sequence with threshold rules and a sliding window, and removing data frames with out-of-range longitude or latitude, excessive time intervals, abrupt speed changes or heading jumps; performing unique matching according to the MMSI information to determine the target ship; and converting the longitude and latitude of the target ship from geographic coordinates into coordinates in the ship coordinate system, performing lever-arm compensation using the AIS antenna installation distance parameters of the target ship and the Beidou RTK antenna installation offset of the own ship, and estimating the coordinates of the geometric center of the target ship in the ship coordinate system to obtain the initial position.
- 4. The tug fusion enhanced perception method based on target tracking and obstacle recognition according to claim 1, wherein performing target tracking and position prediction on the state sequence of the target ship based on Kalman filtering to obtain the position, navigational speed and heading information of the target ship comprises: establishing a target-ship state vector in the ship coordinate system, adopting a constant turn (CT) model as the process model, setting the state transition according to the time interval between adjacent data frames, performing prediction and update with the Kalman filter, and outputting the prior position of the target ship; when the millimeter wave radar detects the target ship, converting the radial distance and relative speed geometrically into a plane position and velocity components as the millimeter wave radar measurement; performing consistency checks on the measurements arriving from the different sensors, removing measurements that fail the check, weighting the AIS measurement, the laser radar measurement and the millimeter wave radar measurement by time freshness to obtain a fused measurement, and obtaining the posterior position of the target ship in the ship coordinate system from the fused measurement; and comparing the prior position with the posterior position, obtaining the position, navigational speed and heading information of the target ship when they are consistent, and, when they are inconsistent, obtaining the position, navigational speed and heading information of the target ship from the posterior position and correcting the process noise based on error analysis.
- 5. The tug fusion enhanced perception method based on target tracking and obstacle recognition according to claim 1, wherein calculating the relative distance and relative azimuth based on the AIS, the optical fiber compass and the Beidou RTK comprises: in the ship coordinate system, calculating the straight-line distance between the geometric center of the target ship and the own-ship reference point as the relative distance, determining the included angle of the target ship relative to the own-ship heading, based on the heading angle provided by the optical fiber compass, as the relative azimuth, and outputting the relative distance and relative azimuth as the resolving result under the long-distance working condition.
- 6. The tug fusion enhanced perception method based on target tracking and obstacle recognition according to claim 4, wherein invoking the laser radar to acquire the boundary point cloud of the target ship, calculating the transverse distance, the longitudinal distance and the center-point outline frame, and detecting the distance and relative speed of peripheral obstacles based on the millimeter wave radar comprises: setting a region clipping window in the ship coordinate system according to the posterior position of the target ship, preprocessing the laser radar point cloud by performing motion compensation and removing water-surface clutter points, and performing clustering segmentation inside the region clipping window to obtain the target-ship point-cloud cluster; performing boundary extraction and outline-frame fitting on the target-ship point-cloud cluster, determining the center point and outline dimensions of the target ship, calculating the forward component and the lateral component from the own-ship reference point to the center point along the heading axis and the port axis of the ship coordinate system to obtain the longitudinal distance and the transverse distance, and simultaneously obtaining the center-point outline frame; detecting and clustering the echoes of the millimeter wave radar to obtain the distance and relative speed of each echo target, associating the echoes that spatially fall inside the center-point outline frame and its neighborhood as target-ship measurements, treating the remaining uncorrelated echoes as peripheral-obstacle measurements, and obtaining the distance and relative speed of the nearest obstacle; and within the distance-threshold range, taking the transverse distance, the longitudinal distance and the center-point outline frame obtained by the laser radar as the primary measurements, and using the distance and relative speed from the millimeter wave radar for cross-checking and supplementation, to obtain the measurement set under the short-distance working condition.
- 7. The tug fusion enhanced perception method based on target tracking and obstacle recognition according to claim 2, wherein performing target recognition on the images acquired by the panoramic camera, outputting semantic-level recognition results for ships, buoys and quay walls, and performing spatial projection alignment with the radar data comprises: preprocessing the panoramic camera images, the preprocessing comprising de-distortion and brightness normalization, and outputting bounding boxes or pixel masks of ships, buoys and quay walls together with their class labels based on a target recognition model; converting the centers of the bounding boxes or the contour points of the pixel masks into line-of-sight rays in the camera coordinate system to generate a corresponding direction set; projecting the laser radar point cloud onto the image plane, performing cross-modal association between the panoramic camera recognition results and the laser radar point-cloud clusters according to projection overlap and spatio-temporal consistency, and distinguishing the point-cloud clusters belonging to the target ship from those belonging to peripheral obstacles; fitting a near-field water-surface reference plane with the associated laser radar point cloud, and calculating the intersection of each line-of-sight ray with the reference plane to obtain the spatial position of the panoramic camera recognition result in the ship coordinate system; and outputting the category, spatial position and relative azimuth aligned to the ship coordinate system as the vision-side measurement.
- 8. The tug fusion enhanced perception method based on target tracking and obstacle recognition according to claim 1, wherein, before fusing the processed AIS data, optical fiber compass data, Beidou RTK data, laser radar data, millimeter wave radar data and panoramic camera recognition results, all data are aligned to a unified timestamp, an asynchronous multi-rate data queue is established, measurements are associated based on consistency checks and spatio-temporal proximity constraints, inconsistent and duplicate measurements are removed, and the relative distance and relative azimuth, and the transverse distance and longitudinal distance, are aggregated for the long-distance working condition and the short-distance working condition respectively, to obtain the fused measurement set used for fusion.
- 9. The tug fusion enhanced perception method based on target tracking and obstacle recognition according to claim 8, wherein the fusion result and its uncertainty are obtained by solving over the fused measurement set based on weighted least squares in the ship coordinate system, the fusion weights being determined from the measurement covariance, time freshness, message quality, point-cloud density and recognition confidence, and the fusion result comprising the transverse distance, longitudinal distance, center-point distance and relative azimuth of the target ship with respect to the own ship.
- 10. The tug fusion enhanced perception method based on target tracking and obstacle recognition according to claim 9, wherein, when performing early warning according to the fusion result, a collision time margin and a minimum safety interval are calculated from the relative distance, relative azimuth, transverse distance and longitudinal distance, together with the nearest-obstacle distance and relative speed obtained by the millimeter wave radar, and the early-warning level is generated according to the safety interval threshold.
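Claim 2 synchronizes every sensor to the Beidou RTK clock, records each sensor's time deviation, and interpolates samples onto a common millisecond-level window. The sketch below shows one plausible way such alignment could be done; the sample rates, the 12 ms offset and the scalar measurement stream are illustrative assumptions, not values taken from the patent.

```python
# Sketch: align asynchronous sensor samples to a common timestamp grid.
# Illustrative only; sensor streams, rates and offsets are assumptions.
import numpy as np

def align_to_reference(ref_times, sensor_times, sensor_values, clock_offset=0.0):
    """Interpolate one sensor's samples onto the reference (Beidou RTK) timestamps.

    ref_times     : 1-D array of reference timestamps in seconds
    sensor_times  : 1-D array of the sensor's own timestamps in seconds
    sensor_values : 1-D array of scalar measurements taken at sensor_times
    clock_offset  : recorded time deviation of this sensor relative to the reference
    """
    corrected = sensor_times - clock_offset               # delay compensation
    return np.interp(ref_times, corrected, sensor_values)  # interpolation onto the grid

if __name__ == "__main__":
    ref = np.arange(0.0, 1.0, 0.05)                        # 20 Hz reference grid
    lidar_t = np.arange(0.0, 1.0, 0.1) + 0.012             # 10 Hz lidar stream, 12 ms late
    lidar_range = 50.0 - 2.0 * np.arange(0.0, 1.0, 0.1)    # range to a closing target
    aligned = align_to_reference(ref, lidar_t, lidar_range, clock_offset=0.012)
    print(aligned[:5])
```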
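Claim 3 converts the target ship's AIS longitude and latitude into the ship coordinate system (x along the heading, y toward port) and applies a lever-arm correction for the antenna offsets. A minimal sketch under stated assumptions: it uses an equirectangular local approximation, a spherical Earth radius, and example coordinates; none of these values come from the patent.

```python
# Sketch: project target-ship lat/lon into the ship frame (x = heading, y = port)
# with a simple lever-arm correction. Approximation and sample values are assumptions.
import math

EARTH_RADIUS_M = 6_371_000.0

def geo_to_ship_frame(own_lat, own_lon, own_heading_deg, tgt_lat, tgt_lon,
                      tgt_antenna_to_center=(0.0, 0.0)):
    """Return (x_forward, y_port) of the target's geometric centre in the ship frame."""
    # Local ENU offsets (metres) of the target AIS antenna relative to the own-ship RTK antenna.
    d_lat = math.radians(tgt_lat - own_lat)
    d_lon = math.radians(tgt_lon - own_lon)
    north = d_lat * EARTH_RADIUS_M
    east = d_lon * EARTH_RADIUS_M * math.cos(math.radians(own_lat))

    # Rotate ENU into the ship frame: x along the heading, y 90 degrees to port.
    h = math.radians(own_heading_deg)
    x_fwd = north * math.cos(h) + east * math.sin(h)
    y_port = north * math.sin(h) - east * math.cos(h)

    # Lever arm: shift from the target's AIS antenna to its geometric centre
    # (offsets assumed to be expressed in the ship frame for simplicity).
    x_fwd += tgt_antenna_to_center[0]
    y_port += tgt_antenna_to_center[1]
    return x_fwd, y_port

print(geo_to_ship_frame(38.98, 117.75, 45.0, 38.985, 117.757))
```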
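Claim 4 tracks the target ship with a Kalman filter whose process model is a constant turn (CT) model. The sketch below shows the standard CT state transition together with a plain position update; the state layout, turn rate, noise magnitudes and measurement model are illustrative assumptions rather than parameters disclosed in the patent.

```python
# Sketch of a constant-turn (CT) Kalman predict/update for the target-ship track.
# State x = [px, py, vx, vy] in the ship frame; noise levels are assumptions.
import numpy as np

def ct_transition(dt, omega):
    """State-transition matrix of the CT model (falls back to constant velocity for omega ~ 0)."""
    if abs(omega) < 1e-6:
        return np.array([[1, 0, dt, 0],
                         [0, 1, 0, dt],
                         [0, 0, 1, 0],
                         [0, 0, 0, 1]], dtype=float)
    s, c = np.sin(omega * dt), np.cos(omega * dt)
    return np.array([[1, 0, s / omega, -(1 - c) / omega],
                     [0, 1, (1 - c) / omega, s / omega],
                     [0, 0, c, -s],
                     [0, 0, s, c]], dtype=float)

def predict(x, P, dt, omega, q=0.5):
    F = ct_transition(dt, omega)
    Q = q * np.eye(4)                       # crude process noise (assumption)
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, r=4.0):
    H = np.array([[1, 0, 0, 0],             # position-only measurement (e.g. fused AIS/radar)
                  [0, 1, 0, 0]], dtype=float)
    R = r * np.eye(2)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

x = np.array([200.0, 50.0, -3.0, 0.5])      # prior: 200 m ahead, closing at 3 m/s
P = np.diag([25.0, 25.0, 4.0, 4.0])
x, P = predict(x, P, dt=1.0, omega=0.02)    # predicted (prior) position
x, P = update(x, P, z=np.array([196.8, 50.6]))  # posterior after a fused position measurement
print(x)
```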
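Claim 6 fits an outline frame to the target-ship point-cloud cluster, projects its centre onto the heading and port axes to obtain the longitudinal and transverse distances, and associates millimeter wave echoes that fall inside (or near) the frame as target-ship measurements. The axis-aligned box fitting and the 2 m association margin below are simplifying assumptions; the patent does not specify them.

```python
# Sketch: distances from an (already clustered) target-ship point cloud, and
# association of millimeter wave echoes with the fitted outline frame.
import numpy as np

def outline_frame(cluster_xy):
    """Axis-aligned outline frame and centre point of a point-cloud cluster in the ship frame."""
    mins, maxs = cluster_xy.min(axis=0), cluster_xy.max(axis=0)
    center = 0.5 * (mins + maxs)
    return center, mins, maxs

def associate_echoes(echo_xy, mins, maxs, margin=2.0):
    """Split radar echoes into target-ship hits (inside frame + margin) and other obstacles."""
    inside = np.all((echo_xy >= mins - margin) & (echo_xy <= maxs + margin), axis=1)
    return echo_xy[inside], echo_xy[~inside]

cluster = np.array([[42.0, 6.0], [55.0, 5.0], [48.0, 9.5], [44.0, 8.0]])  # lidar points (x fwd, y port)
center, mins, maxs = outline_frame(cluster)
longitudinal, transverse = center[0], center[1]     # components along heading / port axes
print("longitudinal %.1f m, transverse %.1f m" % (longitudinal, transverse))

echoes = np.array([[47.0, 7.0], [15.0, -3.0]])      # mmw echoes already in the ship frame
ship_hits, obstacles = associate_echoes(echoes, mins, maxs)
print("target-ship echoes:", ship_hits, "peripheral obstacles:", obstacles)
```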
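Claim 7 turns each camera detection into a line-of-sight ray and intersects the ray with a near-field water-surface plane fitted from lidar points, which yields a 3-D position in the ship coordinate system. The pinhole intrinsics, the camera mounting pose and the z = 0 plane in the sketch are made-up example values used only to illustrate the ray-plane intersection.

```python
# Sketch: back-project a detection's pixel to a sight ray and intersect it with a
# water-surface plane, giving the detection's position in the ship frame.
# Intrinsics, camera pose and the plane are example assumptions, not patent values.
import numpy as np

K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])                     # assumed pinhole intrinsics
# Columns map camera x (image right), y (image down), z (optical axis) into ship axes:
# right -> starboard (-y), down -> -z, optical axis -> forward (+x).
R_cam_to_ship = np.array([[0.0, 0.0, 1.0],
                          [-1.0, 0.0, 0.0],
                          [0.0, -1.0, 0.0]])
t_cam_in_ship = np.array([2.0, 0.0, 8.0])           # camera mounted 8 m above the reference point

def pixel_to_ship_position(u, v, plane_normal, plane_point):
    """Intersect the sight ray through pixel (u, v) with the fitted water plane."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction in the camera frame
    ray_ship = R_cam_to_ship @ ray_cam                    # rotate into the ship frame
    denom = plane_normal @ ray_ship
    if abs(denom) < 1e-9:
        return None                                       # ray parallel to the plane
    s = plane_normal @ (plane_point - t_cam_in_ship) / denom
    return t_cam_in_ship + s * ray_ship if s > 0 else None

# Water plane z = 0, assumed fitted from near-field lidar returns.
print(pixel_to_ship_position(700.0, 500.0, np.array([0.0, 0.0, 1.0]), np.zeros(3)))
```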
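Claims 9 and 10 solve a weighted least-squares estimate over the fused measurement set and derive a collision time margin and an early-warning level from it. The sketch below fuses several position measurements with scalar weights (for a single position with direct observations, weighted least squares reduces to a weighted mean) and grades the situation from the closing speed; the weight values and the 20 m / 60 s thresholds are illustrative assumptions.

```python
# Sketch: weighted least-squares fusion of position measurements and a simple
# collision-time-margin warning. Weights and thresholds are assumptions.
import numpy as np

def fuse_positions(measurements, weights):
    """Weighted least-squares estimate of one 2-D position (closed form: weighted mean)."""
    w = np.asarray(weights, dtype=float)[:, None]
    z = np.asarray(measurements, dtype=float)
    return (w * z).sum(axis=0) / w.sum()

def warning_level(rel_pos, rel_vel, safe_distance=20.0, safe_time=60.0):
    """Grade the situation from range and the closing speed along the line of sight."""
    dist = np.linalg.norm(rel_pos)
    closing = -np.dot(rel_pos, rel_vel) / max(dist, 1e-6)   # > 0 when the gap is shrinking
    if dist < safe_distance:
        return "ALARM"
    if closing > 0 and dist / closing < safe_time:           # collision time margin below threshold
        return "WARNING"
    return "NORMAL"

# Example: AIS, lidar and camera positions weighted by freshness / confidence (assumed weights).
fused = fuse_positions([[48.0, 7.5], [48.6, 7.1], [47.2, 8.0]], [0.2, 0.6, 0.2])
print(fused, warning_level(fused, rel_vel=np.array([-1.5, 0.0])))
```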
Description
Tug fusion enhanced perception method based on target tracking and obstacle recognition
Technical Field
The invention relates to the technical field of perception fusion, and in particular to a tug fusion enhanced perception method based on target tracking and obstacle recognition.
Background
The tug is core auxiliary equipment in port dispatching and in the berthing and unberthing of large ships, and the safety and coordination of tug operations directly affect port transport efficiency and navigation safety. With ports continuing to expand and shipping becoming increasingly intelligent, tugs are required to complete precise berthing and cooperative control in complex harbor environments, which places higher demands on environmental perception capability. At present, tug perception relies mainly on AIS, radar and manual observation; it lags in long-range perception and lacks precision in near-range obstacle recognition, and is therefore difficult to reconcile with the future intelligent port's requirement for highly reliable autonomous perception.
For example, prior disclosure CN114995419A, an intelligent multi-tug cooperative control operating system, proposes a multi-tug cooperative control scheme that realizes cooperative operation through inter-tug communication and scheduling, and is of positive significance for path planning and control of tugs in berthing operations. However, at the perception level this approach still relies on conventional single-sensor inputs and lacks a fusion-enhanced perception mechanism for the target vessel and surrounding obstacles. In particular, in a complex harbor environment, AIS messages suffer from low update frequency and susceptibility to signal interference and alone are difficult to use for high-precision target tracking; the laser radar and millimeter wave radar are limited by distance thresholds and environmental interference, making continuous long-range and short-range perception difficult; and although the panoramic camera can provide semantic identification information, its output lacks a spatial alignment mechanism with the radar data and is difficult to use directly for multi-source fusion. Therefore, there is a need for a tug fusion enhanced perception method based on target tracking and obstacle recognition that solves the problems of the prior art.
Disclosure of Invention
In view of the above, the invention provides a tug fusion enhanced perception method based on target tracking and obstacle recognition, which aims to solve the problem that there is currently no fusion-enhanced perception method that continuously tracks the target ship and recognizes obstacles while performing time alignment and spatial registration of multi-source heterogeneous sensors.
In one aspect, the invention provides a tug fusion enhanced perception method based on target tracking and obstacle recognition, which comprises the following steps: initializing and time-synchronizing the AIS (Automatic Identification System), optical fiber compass, Beidou RTK (Real-Time Kinematic carrier-phase differential positioning), laser radar, millimeter wave radar and panoramic camera on the tug to establish a ship coordinate system; collecting AIS message data, performing parsing, outlier rejection and time-series filtering, and screening the target ship by combining its MMSI (Maritime Mobile Service Identity) information, so as to obtain the initial position of the target ship in the ship coordinate system through geometric correction; performing target tracking and position prediction on the state sequence of the target ship based on Kalman filtering to obtain the position, navigational speed and heading information of the target ship; when the target ship enters the distance-threshold range, invoking the laser radar to acquire a boundary point cloud of the target ship, calculating a transverse distance, a longitudinal distance and a center-point outline frame, and detecting the distance and relative speed of peripheral obstacles based on the millimeter wave radar; performing target recognition on the images acquired by the panoramic camera, outputting semantic-level recognition results for ships, buoys and quay walls, and performing spatial projection alignment with the radar data; and fusing the processed AIS data, optical fiber compass data, Beidou RTK data, laser radar data, millimeter wave radar data and panoramic camera recognition results to obtain a fusion result, and issuing an early warning according to the fusion result.
Further, the initializing and time synchronizing process includes: taking the unified time output by the Beidou RTK as a reference, carry