CN-121994197-A - Method and system for detecting sensing faults of online unmanned system through multimode data fusion
Abstract
The invention discloses a method and system for online perception fault detection in unmanned systems based on multimodal data fusion. An environment video stream is acquired through a vehicle-mounted monocular camera, and a target detection algorithm identifies preset markers and extracts their pixel heights. A monocular vision ranging model, combined with each marker's prior physical height and the camera focal length, calculates the distance between the marker and the unmanned system. Using the distance information of a plurality of markers, a geometric modeling method calculates the unmanned system's own coordinates. These calculated coordinates are compared with the positioning coordinates provided by the unmanned system's positioning module, and an offset value is computed; when the offset value continuously exceeds a preset threshold, the perception sensor is judged to be faulty. The invention also introduces a V2X communication mechanism that allows perception results to be shared, or the detection model to be dynamically updated, through roadside units. The invention realizes low-cost, lightweight online fault detection without extra hardware redundancy, and effectively improves the robustness and safety of unmanned systems in dynamic environments.
Inventors
- ZHANG QINGYANG
- CUI YONGHAO
- CUI JIE
- WANG FENGQUN
- ZHONG HONG
Assignees
- Anhui University (安徽大学)
Dates
- Publication Date: 2026-05-08
- Application Date: 2026-01-22
Claims (7)
- 1. A method for detecting perception faults of an online unmanned system by means of multimodal data fusion, characterized by comprising the following steps: Step 1, data acquisition and preprocessing: during operation of the unmanned system, front-end sensing equipment acquires video stream data containing environmental information in real time, while a global navigation satellite system receiving module acquires the current absolute positioning coordinates (x_g, y_g) of the unmanned system in real time; Step 2, target detection and feature extraction: the collected video stream data is split into frames, and the frame images are input into a pre-trained lightweight target detection model, which identifies preset static markers in the environment and outputs the type and bounding box of each detected marker, the bounding box information including the pixel height h of the marker in the image plane; Step 3, monocular vision distance estimation: based on the monocular vision imaging method, a monocular ranging model is constructed from the prior physical height H of each preset static marker, the camera focal length f of the front-end sensing device, and the pixel height h, and the relative linear distance D between the unmanned system and each detected marker is calculated; Step 4, multipoint geometric coordinate calculation: when two or more static markers with known geographic positions are detected in a single frame image, a geometric modeling method combines the relative linear distances calculated in step 3 with the pre-stored absolute geographic coordinates of the markers to reversely calculate the current calculated coordinates (x_v, y_v) of the unmanned system in the geographic coordinate system; Step 5, multimodal data fusion and deviation calculation: the absolute positioning coordinates (x_g, y_g) obtained in step 1 and the calculated coordinates (x_v, y_v) obtained in step 4 are subjected to space-time alignment and fusion comparison, and the Euclidean distance deviation ΔE between the two (i.e., the data fusion deviation) is calculated; Step 6, online fault judgment and early warning: the Euclidean distance deviation ΔE is compared with a preset safety threshold interval, and if the deviation continuously exceeds the safety threshold interval a plurality of times, it is judged that the perception system has a sensor fault, and a fault alarm signal is triggered.
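Steps 5 and 6 above can be sketched as follows. This is a minimal illustrative implementation, not the patent's actual code; the threshold value, window size, and function names are assumptions chosen for the example.

```python
import math

# Hypothetical sketch of steps 5-6: compare GNSS positioning coordinates
# with vision-derived calculated coordinates and flag a sensor fault when
# the deviation exceeds a threshold for several consecutive frames.
DEVIATION_THRESHOLD_M = 2.0   # preset safety threshold (metres), assumed value
CONSECUTIVE_LIMIT = 3         # consecutive exceedances before a fault alarm, assumed

def euclidean_deviation(gnss_xy, vision_xy):
    """Step 5: Euclidean distance between positioning and calculated coordinates."""
    return math.hypot(gnss_xy[0] - vision_xy[0], gnss_xy[1] - vision_xy[1])

def detect_fault(gnss_track, vision_track):
    """Step 6: return True once the deviation stays above the threshold
    for CONSECUTIVE_LIMIT frames in a row."""
    streak = 0
    for g, v in zip(gnss_track, vision_track):
        if euclidean_deviation(g, v) > DEVIATION_THRESHOLD_M:
            streak += 1
            if streak >= CONSECUTIVE_LIMIT:
                return True
        else:
            streak = 0  # an in-range frame resets the consecutive counter
    return False

# Healthy run: deviations stay around 0.1 m, well below the threshold.
ok = detect_fault([(0, 0), (1, 0), (2, 0)], [(0.1, 0), (1.1, 0), (2.1, 0)])
# Faulty run: the two tracks disagree by 5 m for three consecutive frames.
bad = detect_fault([(0, 0), (1, 0), (2, 0)], [(5, 0), (6, 0), (7, 0)])
print(ok, bad)  # False True
```

Requiring several consecutive exceedances, rather than a single one, suppresses false alarms from momentary GNSS jitter or a single bad detection frame.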
- 2. The method for detecting perception faults of an online unmanned system based on multimodal data fusion according to claim 1, wherein the lightweight target detection model in step 2 is based on a YOLOv architecture and is optimized for specific markers; the preset markers identified by the lightweight target detection model are objects that have a fixed physical size and are widely distributed in road scenes, including lamp posts, traffic sign posts, or preset marker objects.
- 3. The method for detecting perception faults of an online unmanned system based on multimodal data fusion according to claim 1, wherein the calculation formula for the monocular vision distance estimation in step 3 is: D = (f × H) / h; wherein D is the relative distance between the unmanned system and the preset marker, H is the actual physical height (or width) of the preset marker, f is the focal length of the vehicle-mounted monocular vision sensor, and h is the pixel height (or width) of the marker on the image plane.
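The pinhole ranging formula of claim 3 can be sketched as a small function. This is an illustrative example; the specific focal length and marker height are assumed values.

```python
def monocular_distance(focal_px, real_height_m, pixel_height_px):
    """Pinhole-model monocular ranging: D = f * H / h.
    focal_px        -- camera focal length f expressed in pixels
    real_height_m   -- prior physical height H of the marker (metres)
    pixel_height_px -- pixel height h of the marker's bounding box
    """
    if pixel_height_px <= 0:
        raise ValueError("pixel height must be positive")
    return focal_px * real_height_m / pixel_height_px

# A 6 m lamp post imaged at 120 px, with an assumed focal length of
# 1000 px, is estimated to be 50 m away.
print(monocular_distance(1000.0, 6.0, 120.0))  # 50.0
```

Note that the focal length must be expressed in pixels (as obtained from camera intrinsic calibration) for the units to cancel correctly.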
- 4. The method for detecting perception faults of an online unmanned system based on multimodal data fusion according to claim 1, wherein step 4 calculates the visual positioning coordinates by a circle-intersection positioning method, specifically comprising: Step 4.1, establishing a local coordinate system: taking the known geographic coordinates (x_1, y_1) and (x_2, y_2) of two identified preset markers as circle centers and the corresponding relative distances d_1 and d_2 calculated in step 3 as radii, two positioning circle equations are constructed: (x − x_1)² + (y − y_1)² = d_1²; (x − x_2)² + (y − y_2)² = d_2²; Step 4.2, calculating the center distance L between the two preset markers: L = √((x_2 − x_1)² + (y_2 − y_1)²); Step 4.3, solving the intersection coordinates of the two circles from the positioning circle equation system as candidate positions of the unmanned system; Step 4.4, based on the movement direction of the unmanned system or its historical position at the previous moment, eliminating the pseudo-solution from the two candidate intersections and determining the visual estimated coordinates (x_v, y_v) at the current moment.
- 5. The method for detecting perception faults of an online unmanned system based on multimodal data fusion according to claim 1, wherein the space-time alignment method in step 5 is as follows: the positioning coordinates obtained by GPS are in longitude and latitude, while step 4 yields plane coordinates in a coordinate system with a marker as origin; the two are unified into either longitude/latitude or plane coordinates before fusion. The fusion method comprises calculating the Euclidean distance deviation ΔE, whose calculation formula is defined as: ΔE = √((x_g − x_v)² + (y_g − y_v)²); wherein (x_g, y_g) are the real-time positioning coordinates output by the vehicle-mounted positioning module, and (x_v, y_v) are the visual estimated coordinates.
- 6. The method for detecting perception faults of an online unmanned system with multimodal data fusion according to claim 1, wherein in step 4, if two markers cannot be detected continuously within a short period, a cooperative enhancement step is started, and assistance from other vehicles is obtained via roadside units so as to perform steps 5 and 6, specifically comprising: when the unmanned system cannot identify a sufficient number of preset markers due to a limited field of view, a cooperation request is sent to a Road Side Unit (RSU) or other nearby vehicles through the V2X communication module; perception results shared by other vehicles and forwarded by the roadside unit are received, the perception results including the marker positions detected by the other vehicles and their positioning information; and the received shared perception data is used to assist the vehicle's own coordinate calculation, or directly serves as reference data for deviation comparison, thereby realizing collaborative fault detection.
- 7. An online unmanned system perception fault detection system based on multimodal data fusion according to any one of claims 1 to 6, characterized by comprising a front-end sensing module, an edge computing processing module, a multimodal fusion fault diagnosis module, and a V2X communication module. The front-end sensing module comprises a vehicle-mounted monocular camera and a Global Navigation Satellite System (GNSS) receiver, used to collect environment image data and vehicle absolute positioning data respectively; the edge computing processing module is connected to the front-end sensing module and comprises a target detection unit, a distance estimation unit, and a coordinate resolving unit; the multimodal fusion fault diagnosis module receives the visual position output by the edge computing processing module and the positioning data output by the GNSS receiver, calculates the fusion deviation, and judges the health state of the sensors according to the relation between the fusion deviation and a threshold; the V2X communication module performs data interaction with roadside infrastructure and other vehicles, receiving external auxiliary perception information and dynamically updated detection model parameters.
Description
Method and system for detecting sensing faults of online unmanned system through multimode data fusion
Technical Field
The invention belongs to the field of unmanned system environment perception and safety monitoring, and particularly relates to a method and system for online perception fault detection in unmanned systems based on multimodal data fusion.
Background
In recent years, with the rapid development of artificial intelligence and control technology, intelligent unmanned systems represented by autonomous vehicles and unmanned aerial vehicles have become widely used. In these systems, sensors play a vital role in acquiring ambient information (e.g., road conditions, obstacle distances, self-location) in real time, providing basic data support for navigation, perception, and decision-making. However, during actual operation sensors remain for long periods in complex and variable dynamic environments, are extremely susceptible to external factors (such as extreme temperature, humidity, electromagnetic interference, and vibration), and also risk internal hardware aging, performance degradation, and even complete failure. Once key sensors (such as cameras and positioning modules) fail, an unmanned system may exhibit perception blind zones and positioning drift, which may cause navigation interruption, path deviation, and even serious safety accidents. Existing sensor fault detection technology mainly has the following limitations: 1. Reliance on offline inspection: conventional inspection methods typically require offline calibration and inspection in a static environment, using calibration plates (e.g., checkerboards, AprilTag) at specific maintenance sites. Such methods cannot discover faults in real time while the unmanned system is executing tasks, and once equipment becomes abnormal during operation, the system cannot respond in time. 2. Costly hardware redundancy: some schemes use hardware redundancy (e.g., installing redundant sensors) and voting for reliability. While effective, this adds significant hardware cost, power consumption, and computational burden, which is detrimental to lightweight deployment. 3. Existing single-modality detection methods suffer greatly reduced detection precision, and higher false alarm and missed detection rates, in complex dynamic scenes such as illumination change and occlusion. 4. Lack of multimodal fusion verification: existing online detection focuses on data stream analysis of a single sensor (e.g., checking for noise in video streams) and lacks an effective mechanism to exploit data consistency between different types of sensors (e.g., vision and localization) for cross-validation. For example, the vehicle detection method based on multimodal data fusion disclosed in Chinese patent application CN116863461A, and the multimodal data fusion anti-UAV high-precision detection method based on CN-YOLOv5 disclosed in Chinese patent application CN119716839A, both improve detection precision, but cannot judge whether the detection itself is wrong. A framework is therefore needed that realizes online, real-time, high-reliability perception fault detection through multimodal fusion of existing sensor data, without adding extra hardware cost.
Disclosure of Invention
The invention aims to provide a method and system for online perception fault detection in unmanned systems based on multimodal data fusion, which solve the problems of dependence on offline calibration, high hardware cost, and poor real-time performance in prior-art sensor fault detection, and realize real-time monitoring and fault early warning of sensor states by means of a cross-validation mechanism between monocular vision distance estimation and global positioning system data, combined with edge computing and artificial intelligence models. The technical scheme is as follows: the method for detecting perception faults of an online unmanned system based on multimodal data fusion comprises the following steps: Step 1, data acquisition and preprocessing: during operation of the unmanned system (such as an autonomous vehicle), video stream data containing environmental information is acquired in real time through the front-end sensing equipment (such as a vehicle-mounted monocular camera), and the current absolute positioning coordinates of the unmanned system are acquired in real time through the global navigation satellite system receiving module (such as GNSS/GPS), further obtaining the longitude, latitude, and elevation of the unmanned system; Step 2, target detection and feature extraction: the acquired video stream data is split into frames (static image frames can be extracted from the video stream at a preset fixed frequency, such as 10 Hz, each frame image being obtained after framing), and the resulting frame images are input into a pre-trained lightweight target detection mo