CN-122020411-A - Vehicle overload-control processing method, system and storage medium
Abstract
The application provides a vehicle overload-control processing method, system and storage medium. The method comprises: obtaining multi-source time-series data from a plurality of overload-control devices, and performing space-time alignment processing on the multi-source time-series data to obtain aligned time-series data; extracting key features from the aligned time-series data to obtain a multi-source feature set; performing multi-source data fusion processing on the multi-source feature set with a data fusion algorithm to obtain fused vehicle information; performing contradiction detection on the fused vehicle information to obtain a contradiction detection result and an alarm state; and generating and outputting an overrun determination result based on the contradiction detection result and the alarm state. The application can complete vehicle overrun-control work efficiently and accurately.
Inventors
- YUAN XIN
- ZHAO YE
- LIU LINA
- ZHANG WUYI
- SONG JINGJING
- CHEN SHENGJIAN
- YANG ZHIYU
- GUO WEIJIAN
- LUO FUKANG
- CHEN BINSHUQI
- DING LING
Assignees
- 浙江交科工程检测有限公司 (Zhejiang Jiaoke Engineering Testing Co., Ltd.)
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2025-12-08
Claims (10)
- 1. A vehicle overload-control processing method, the method comprising: acquiring multi-source time-series data from a plurality of overload-control devices, and performing space-time alignment processing on the multi-source time-series data to obtain aligned time-series data; extracting key features from the aligned time-series data to obtain a multi-source feature set, and performing multi-source data fusion processing on the multi-source feature set with a data fusion algorithm to obtain fused vehicle information; and performing contradiction detection on the fused vehicle information to obtain a contradiction detection result and an alarm state, and generating and outputting an overrun determination result based on the contradiction detection result and the alarm state.
- 2. The method of claim 1, wherein acquiring multi-source time-series data from a plurality of overload-control devices comprises: acquiring the vehicle speed from a velocimeter in real time, and dynamically adjusting the data acquisition frequency of each overload-control device based on the vehicle speed; acquiring initial multi-source time-series data from the plurality of overload-control devices at the adjusted data acquisition frequency, wherein the initial multi-source time-series data comprises weighing data from a dynamic truck scale, vehicle identification data from an axle-type identification instrument, license plate data from a vehicle identification system, vehicle driving state data from the velocimeter, and cargo-state machine vision identification data from a video monitoring system; and performing preliminary data cleaning and timestamp synchronization on the initial multi-source time-series data through edge computing nodes deployed at each overload-control device to obtain the preprocessed multi-source time-series data.
- 3. The method of claim 2, wherein performing space-time alignment processing on the multi-source time-series data to obtain aligned time-series data comprises: constructing a spatial coordinate mapping model based on matching global navigation satellite system positioning data with lidar point clouds, and mapping the multi-source time-series data into the spatial coordinate mapping model to obtain spatially aligned time-series data, wherein the spatially aligned time-series data comprises spatially aligned weighing data, spatially aligned vehicle identification data, spatially aligned license plate data, spatially aligned vehicle driving state data and spatially aligned cargo-state machine vision identification data; and taking the initial timestamp of the spatially aligned weighing data as a reference time point, and dynamically correcting the timestamps of the spatially aligned vehicle identification data, license plate data, vehicle driving state data and cargo-state machine vision identification data with a sliding-window time alignment algorithm to obtain the aligned time-series data.
- 4. The method of claim 1, wherein extracting key features from the aligned time-series data to obtain a multi-source feature set comprises: extracting vehicle visual features from the aligned cargo-state machine vision identification data with a convolutional neural network, wherein the vehicle visual features comprise cargo contour point clouds and vehicle outline dimensions; extracting dynamic weight fluctuation features from the aligned weighing data with a long short-term memory network, wherein the dynamic weight fluctuation features comprise an axle weight sequence and a total weight change trend; extracting axle-type distribution features from the aligned vehicle identification data, wherein the axle-type distribution features comprise a tire number distribution and an axle configuration distribution; extracting vehicle identification features from the aligned license plate data, wherein the vehicle identification features comprise the license plate number, body color and vehicle brand; extracting motion features from the aligned vehicle driving state data, wherein the motion features comprise a real-time speed curve and acceleration change features; and combining the vehicle visual features, the dynamic weight fluctuation features, the axle-type distribution features, the vehicle identification features and the motion features into the multi-source feature set.
- 5. The method of claim 4, wherein performing multi-source data fusion processing on the multi-source feature set with a data fusion algorithm to obtain fused vehicle information comprises: inputting the multi-source feature set into a multi-modal Transformer fusion model, and performing modal interaction learning on each source feature in the multi-source feature set through the self-attention mechanism of the multi-modal Transformer fusion model to generate a cross-modal fused feature representation; calculating a dynamic fusion weight for each feature source through a dynamic weight learning module based on the cross-modal fused feature representation, wherein the dynamic weight learning module dynamically adjusts the dynamic fusion weights according to the real-time confidence index of each source feature and historical error data, and the dynamic fusion weight of the i-th feature source equals the inverse of the historical error variance of the i-th feature source divided by the sum of the inverses of the historical error variances of all feature sources; and performing weighted fusion on the multi-source feature set according to the dynamic fusion weights to obtain the fused vehicle information.
- 6. The method of claim 5, wherein performing contradiction detection on the fused vehicle information to obtain a contradiction detection result and an alarm state comprises: performing device-level contradiction detection, in which initial verification is performed based on the real-time confidence index of each data source in the fused vehicle information, and when the confidence of any data source is lower than a preset threshold, re-acquisition or re-identification of that data source is triggered and a device-level contradiction flag is generated; performing data-level contradiction detection, in which consistency verification is performed on the axle-type distribution features, the vehicle brand in the vehicle identification features, and the axle weight sequence and total weight change trend in the dynamic weight fluctuation features based on vehicle physical characteristic logic rules, and a data-level contradiction flag is generated when a logic conflict is detected, wherein the logic conflict comprises at least one of a mismatch between the axle-type distribution features and the axle weight sequence, and a vehicle brand mismatch; performing system-level contradiction detection, in which difference analysis is performed on measured values from different data sources in the fused vehicle information through cross-device data comparison, and when the difference exceeds a preset tolerance threshold, a collaborative verification request is triggered and a system-level contradiction flag is generated; and generating the contradiction detection result and the alarm state based on the device-level, data-level and system-level contradiction flags.
- 7. The method of claim 6, wherein generating and outputting an overrun determination result based on the contradiction detection result and the alarm state comprises: determining a data credibility level based on the alarm state, and dynamically selecting an overload determination mode according to the data credibility level, wherein the overload determination mode comprises a direct determination mode and a recheck determination mode; when the alarm state is no alarm, adopting the direct determination mode, and comparing the dynamic weight fluctuation features in the fused vehicle information with a preset overload threshold to generate the overrun determination result; and when the alarm state is an alarm, adopting the recheck determination mode, triggering a collaborative verification mechanism to acquire additional verification data, correcting the fused vehicle information based on the additional verification data, and generating the overrun determination result through a multi-source weighted data fusion algorithm, wherein the multi-source weighted data fusion algorithm dynamically adjusts the weight of each data source according to the real-time confidence index in the contradiction detection result.
- 8. The method of claim 7, further comprising: outputting the overrun determination result to a display device or a law enforcement platform, and storing the contradiction detection result and the alarm state in association for traceability analysis.
- 9. A vehicle overload-control processing system, comprising an acquisition module, an alignment module, an extraction module, a fusion module and a processing module, wherein the acquisition module is configured to acquire multi-source time-series data from a plurality of overload-control devices; the alignment module is configured to perform space-time alignment processing on the multi-source time-series data to obtain aligned time-series data; the extraction module is configured to extract key features from the aligned time-series data to obtain a multi-source feature set; the fusion module is configured to perform multi-source data fusion processing on the multi-source feature set with a data fusion algorithm to obtain fused vehicle information; and the processing module is configured to perform contradiction detection on the fused vehicle information to obtain a contradiction detection result and an alarm state, and to generate and output an overrun determination result based on the contradiction detection result and the alarm state.
- 10. A computer-readable storage medium on which a computer program runnable on a processor is stored, wherein the computer program, when executed by the processor, implements the vehicle overload-control processing method according to any one of claims 1 to 8.
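The dynamic fusion weight formula in claim 5 is standard inverse-variance weighting: each source's weight is the inverse of its historical error variance, normalized over all sources. A minimal sketch, with function names and the scalar-measurement setting chosen for illustration (the patent fuses feature sets, not scalars):

```python
def dynamic_fusion_weights(error_variances):
    """Weight of source i = (1/var_i) / sum_j (1/var_j), per claim 5."""
    inverses = [1.0 / v for v in error_variances]
    total = sum(inverses)
    return [inv / total for inv in inverses]

def weighted_fusion(measurements, error_variances):
    """Fuse several measurements of the same quantity by their dynamic weights."""
    weights = dynamic_fusion_weights(error_variances)
    return sum(w * m for w, m in zip(weights, measurements))

# A source with lower historical error variance dominates the fused estimate:
weights = dynamic_fusion_weights([0.25, 1.0])  # -> [0.8, 0.2]
```

Note that the weights always sum to 1, so the fused value stays within the range of the input measurements.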
Description
Vehicle overload-control processing method, system and storage medium

Technical Field
The application relates to the technical field of overrun and overload control of highway freight vehicles, and in particular to a vehicle overload-control processing method, system and storage medium.

Background
With the rapid growth of highway freight traffic, overrun and overload control of vehicles has become an important part of road traffic safety management. At present, highway overload-control equipment commonly suffers from large dynamic weighing errors, insufficient multi-source data fusion, weak cross-device coordination capability, and serious data silos. For example, the lack of effective fusion of multi-modal data such as license plates, axle types and the environment, and the absence of a cross-device collaborative data verification mechanism, result in low control accuracy and a high misjudgment rate. In the prior art, overload-control devices often work independently, data cannot be shared or verified, and it is difficult to meet overrun and overload detection requirements in complex road environments. Therefore, efficient and accurate vehicle overrun-control work cannot be achieved at present.

Disclosure of Invention
In order to complete vehicle overrun-control work efficiently and accurately, embodiments of the application provide a vehicle overload-control processing method, system and storage medium.
In a first aspect, an embodiment of the present application provides a vehicle overload-control processing method, including: acquiring multi-source time-series data from a plurality of overload-control devices, and performing space-time alignment processing on the multi-source time-series data to obtain aligned time-series data; extracting key features from the aligned time-series data to obtain a multi-source feature set, and performing multi-source data fusion processing on the multi-source feature set with a data fusion algorithm to obtain fused vehicle information; and performing contradiction detection on the fused vehicle information to obtain a contradiction detection result and an alarm state, and generating and outputting an overrun determination result based on the contradiction detection result and the alarm state. In some of these embodiments, acquiring multi-source time-series data from a plurality of overload-control devices comprises: acquiring the vehicle speed from a velocimeter in real time, and dynamically adjusting the data acquisition frequency of each overload-control device based on the vehicle speed; acquiring initial multi-source time-series data from the plurality of overload-control devices at the adjusted data acquisition frequency, wherein the initial multi-source time-series data comprises weighing data from a dynamic truck scale, vehicle identification data from an axle-type identification instrument, license plate data from a vehicle identification system, vehicle driving state data from the velocimeter, and cargo-state machine vision identification data from a video monitoring system; and performing preliminary data cleaning and timestamp synchronization on the initial multi-source time-series data through edge computing nodes deployed at each overload-control device to obtain the preprocessed multi-source time-series data.
In some embodiments, performing space-time alignment processing on the multi-source time-series data to obtain aligned time-series data includes: constructing a spatial coordinate mapping model based on matching global navigation satellite system positioning data with lidar point clouds, and mapping the multi-source time-series data into the spatial coordinate mapping model to obtain spatially aligned time-series data, wherein the spatially aligned time-series data comprises spatially aligned weighing data, spatially aligned vehicle identification data, spatially aligned license plate data, spatially aligned vehicle driving state data and spatially aligned cargo-state machine vision identification data; and taking the initial timestamp of the spatially aligned weighing data as a reference time point, and dynamically correcting the timestamps of the spatially aligned vehicle identification data, license plate data, vehicle driving state data and cargo-state machine vision identification data with a sliding-window time alignment algorithm to obtain the aligned time-series data. In some of these embodiments, extracting key features from the aligned time-series data to obtain a multi-source feature set includes: extracting vehicle visual features from the aligned cargo-state machine vision identification data with a convolutional neural network, wherein the vehicle visual features comprise cargo contour point clouds
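One simple reading of the sliding-window time alignment described above: the weighing stream supplies the reference timestamps, and for each reference timestamp the nearest sample from another device's stream is kept if it falls within a tolerance window. The window size, data layout and function name below are assumptions, not taken from the patent:

```python
# Minimal nearest-neighbor alignment sketch. Both the reference timestamps and
# the other stream are assumed sorted in ascending time order, so a forward-only
# cursor suffices (O(n + m) overall).

def align_to_reference(reference_ts, stream, window_s=0.1):
    """reference_ts: sorted reference timestamps (e.g. from weighing data).
    stream: sorted list of (timestamp, value) from another device.
    Returns one value (or None if nothing falls in the window) per reference."""
    if not stream:
        return [None] * len(reference_ts)
    aligned, i = [], 0
    for t_ref in reference_ts:
        # advance the cursor while the next sample is at least as close in time
        while i + 1 < len(stream) and \
                abs(stream[i + 1][0] - t_ref) <= abs(stream[i][0] - t_ref):
            i += 1
        ts, val = stream[i]
        aligned.append(val if abs(ts - t_ref) <= window_s else None)
    return aligned
```

Returning `None` for out-of-window gaps, rather than the stale nearest value, lets the later contradiction-detection stage see that a source had no usable sample at that instant.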