CN-121502696-B - Multi-mode space-time track fusion method, device, equipment, medium and program product

CN121502696B

Abstract

The invention discloses a multi-modal spatio-temporal trajectory fusion method, device, equipment, medium and program product. The method comprises: acquiring multi-modal spatio-temporal trajectory data to be processed; determining, based on a preset time threshold and a preset space threshold, co-occurrence relation pairs that are adjacent in space and time between the to-be-processed trajectory data of each modality; establishing edges according to the co-occurrence relation pairs to construct a heterogeneous connection graph among the nodes of each modality, wherein each edge carries attributes comprising the spatio-temporal co-occurrence features and association strength features of the corresponding co-occurrence relation pair; inputting the heterogeneous connection graph into a trained graph convolutional neural network to generate target embedding vectors; and clustering and fusing the target embedding vectors according to a preset similarity threshold to obtain a trajectory fusion result. By mapping spatio-temporal trajectories onto a spatio-temporal topological graph, the graph convolutional neural network can be applied to the field of trajectory fusion, realizing efficient and accurate fusion while adapting well to multi-modal data and sparse data.
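As an illustration of the first step summarized above (detecting spatio-temporally adjacent co-occurrence pairs across modalities using a time threshold and a space threshold), the following Python sketch shows one plausible reading. The record type, field names, and threshold values are assumptions for illustration, not taken from the patent:

```python
from dataclasses import dataclass
from math import hypot

# Hypothetical record type; field names are illustrative, not from the patent.
@dataclass
class TrackPoint:
    entity_id: str   # e.g. an IMSI or a licence-plate string
    modality: str    # e.g. "imsi" or "plate"
    t: float         # timestamp in seconds
    x: float         # planar position in metres (local approximation)
    y: float

def co_occurrence_pairs(points, time_thresh=30.0, space_thresh=50.0):
    """Return cross-modality entity pairs whose observations fall within
    both the time threshold and the space threshold."""
    pairs = set()
    for a in points:
        for b in points:
            if a.modality == b.modality:
                continue  # only pair observations from different modalities
            close_in_time = abs(a.t - b.t) <= time_thresh
            close_in_space = hypot(a.x - b.x, a.y - b.y) <= space_thresh
            if close_in_time and close_in_space:
                pairs.add(tuple(sorted((a.entity_id, b.entity_id))))
    return pairs
```

This brute-force scan is O(n²); a production system would likely use a spatio-temporal index (e.g. a grid or KD-tree over time-bucketed points) before pairing.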

Inventors

  • WANG DONGFENG
  • GAO CHAO

Assignees

  • 深圳前海中电慧安科技有限公司

Dates

Publication Date
20260512
Application Date
20260113

Claims (9)

  1. A multi-modal spatio-temporal trajectory fusion method, comprising: acquiring multi-modal spatio-temporal trajectory data to be processed, wherein the trajectory data to be processed comprises IMSI data and license plate image data; determining, based on a preset time threshold and a preset space threshold, co-occurrence relation pairs adjacent in time and space between the to-be-processed trajectory data of each modality; establishing edges according to the co-occurrence relation pairs to construct a heterogeneous connection graph among the modality nodes, wherein each edge carries attributes comprising the spatio-temporal co-occurrence features and association strength features of the corresponding co-occurrence relation pair; inputting the heterogeneous connection graph into a trained graph convolutional neural network to generate target embedding vectors; and clustering and fusing the target embedding vectors according to a preset similarity threshold to obtain a trajectory fusion result; wherein, before establishing edges according to the co-occurrence relation pairs to construct the heterogeneous connection graph among the modality nodes, the method further comprises: counting the co-occurrence event segments occurring continuously on the timeline of each co-occurrence relation pair; and determining the association strength score of the corresponding co-occurrence relation pair according to the co-occurrence count and segment duration of each co-occurrence event segment.
  2. The method of claim 1, further comprising, before said establishing edges according to the co-occurrence relation pairs to construct a heterogeneous connection graph among the modality nodes: counting the number of co-occurrence locations and the total number of occurrences of each co-occurrence relation pair as the spatio-temporal co-occurrence features.
  3. The multi-modal spatio-temporal trajectory fusion method of claim 1, wherein said association strength features further comprise inter-point information corresponding to said co-occurrence relation pairs.
  4. The method of claim 1, wherein determining co-occurrence relation pairs adjacent in time and space between the to-be-processed trajectory data of each modality based on the preset time threshold and the preset space threshold comprises: dynamically adjusting the preset time threshold and the preset space threshold according to the moving speed of the target.
  5. The multi-modal spatio-temporal trajectory fusion method of claim 1, further comprising, prior to said inputting the heterogeneous connection graph into a trained graph convolutional neural network to generate target embedding vectors: acquiring multi-modal sample spatio-temporal trajectory data and sample relation-pair annotation data; constructing a sample heterogeneous connection graph according to the sample trajectory data; and training the graph convolutional neural network with a semi-supervised learning algorithm, taking the sample heterogeneous connection graph as model input and the sample relation-pair annotation data as a supervision signal.
  6. A multi-modal spatio-temporal trajectory fusion device, comprising: a to-be-processed data acquisition module for acquiring multi-modal spatio-temporal trajectory data to be processed, the trajectory data to be processed comprising IMSI data and license plate image data; a relation-pair determination module for determining, based on a preset time threshold and a preset space threshold, co-occurrence relation pairs adjacent in time and space between the to-be-processed trajectory data of each modality; a connection graph construction module for establishing edges according to the co-occurrence relation pairs to construct a heterogeneous connection graph among the modality nodes, each edge carrying attributes comprising the spatio-temporal co-occurrence features and association strength features of the corresponding co-occurrence relation pair; an embedding vector generation module for inputting the heterogeneous connection graph into a trained graph convolutional neural network to generate target embedding vectors; and a trajectory fusion module for clustering and fusing the target embedding vectors according to a preset similarity threshold to obtain a trajectory fusion result; wherein the association strength features include an association strength score, and the device further comprises: a co-occurrence event segment statistics module for counting, before edges are established according to the co-occurrence relation pairs to construct the heterogeneous connection graph among the modality nodes, the co-occurrence event segments occurring continuously on the timeline of each co-occurrence relation pair; and an association strength score determination module for determining the association strength score of the corresponding co-occurrence relation pair according to the co-occurrence count and segment duration of each co-occurrence event segment.
  7. A computer device, comprising: one or more processors; and a memory for storing one or more programs; wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the multi-modal spatio-temporal trajectory fusion method of any one of claims 1-5.
  8. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the multi-modal spatio-temporal trajectory fusion method of any one of claims 1-5.
  9. A computer program product comprising a computer program which, when executed by a processor, implements the multi-modal spatio-temporal trajectory fusion method of any one of claims 1-5.
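Claims 1 and 6 describe counting co-occurrence event segments that occur continuously on a pair's timeline, then scoring association strength from each segment's co-occurrence count and duration. The claims do not fix the gap threshold or the scoring formula; the following Python sketch is one plausible reading, in which the gap threshold and the count-times-log-duration weighting are illustrative assumptions:

```python
from math import log1p

def segment_timeline(timestamps, max_gap=60.0):
    """Split sorted co-occurrence timestamps into contiguous segments:
    a new segment starts whenever the gap to the previous event exceeds max_gap."""
    ts = sorted(timestamps)
    segments, current = [], [ts[0]]
    for t in ts[1:]:
        if t - current[-1] <= max_gap:
            current.append(t)
        else:
            segments.append(current)
            current = [t]
    segments.append(current)
    return segments

def association_strength(timestamps, max_gap=60.0):
    """Score a relation pair from per-segment co-occurrence counts and durations.
    The count * log1p(duration) weighting is an illustrative choice, not from the claims."""
    score = 0.0
    for seg in segment_timeline(timestamps, max_gap):
        count = len(seg)             # co-occurrence count within the segment
        duration = seg[-1] - seg[0]  # segment duration in seconds
        score += count * log1p(duration)
    return score
```

The resulting score would then be attached to the corresponding edge of the heterogeneous connection graph as part of its association strength features.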

Description

Multi-mode space-time track fusion method, device, equipment, medium and program product

Technical Field

The embodiments of the invention relate to the technical field of big data processing, and in particular to a multi-modal spatio-temporal trajectory fusion method, device, equipment, medium and program product.

Background

With accelerating urbanization, urban safety has received growing attention. A smart safe city must comprehensively apply technical means including information fusion, Internet of Things technology and artificial intelligence, so as to realize information sharing and cooperation among the city's subsystems and to improve the city's safety and intelligence. Trajectory fusion, as an information fusion technology, can make an important contribution to the smart safe city.

Trajectory fusion is a technique for fusing target trajectories obtained by different sensors, or by the same sensor in different time periods. By fusing different trajectories of a target, the target can be accurately tracked and identified, improving the efficiency and accuracy of urban safety management: for example, the safety and management efficiency of public transportation, the efficiency of urban security management, and the efficiency of urban emergency response.

In practical application scenarios, the real-time requirement on trajectory fusion is extremely high, since the fused trajectories are used for real-time monitoring and data analysis. At the same time, there are usually multiple trajectories with completely different modalities and characteristics, which may come from different sensors whose types, accuracies and sampling rates differ completely.
For example, surveillance cameras, ETC devices and the like have high positioning accuracy but a small acquisition range, and cameras have a probability of recognition errors, whereas code-detection devices have a large acquisition range but low positioning accuracy. The characteristics of these different modality data must be considered during trajectory fusion. In addition, owing to limits on the number and precision of sensors, there may be sparse matrices that are difficult to process; such sparsity can strongly affect the fused trajectory, and important trajectory data may be hidden within it.

Existing trajectory fusion methods mainly comprise distance-metric-based, time-series-based and filtering-based methods. A distance-metric-based method first defines a distance between trajectories, where a smaller distance means a higher degree of correlation, and then fuses the most correlated trajectories retrieved by distance. Its effect depends entirely on whether the distance definition matches the problem and the data; data of different modalities and characteristics have different optimal trajectory-distance definitions, so the method performs poorly when fusing trajectories across modalities with large characteristic differences. A time-series-based method first extracts features from the time series and then uses a model to judge, from those features, whether trajectories should be fused.
Such a method requires manually designed features, depends heavily on time-series characteristics, struggles with the sparse time-series features caused by differing sampling rates, and performs poorly because hand-crafted features depend strongly on the designer's experience. A filtering-based method fuses observed data with prior knowledge to estimate the trajectory state, mainly via Kalman filtering or particle filtering. Kalman filtering suits linear systems and is difficult to apply to nonlinear ones, and the movement of people and vehicles in cities does not follow simple linear rules; particle filtering can handle nonlinear and heavily noisy systems, but easily incurs excessive computational load in large-scale systems.

Disclosure of Invention

The embodiments of the invention provide a multi-modal spatio-temporal trajectory fusion method, device, equipment, medium and program product for realizing trajectory fusion quickly and accurately under big-data conditions, while adapting well to multi-modal data and sparse data.

In a first aspect, an embodiment of the present invention provides a multi-modal spatio-temporal trajectory fusion method, the method comprising: acquiring multi-modal spatio-temporal trajectory data to be processed; Based on a preset time th
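For reference, the Kalman filter discussed in the Background assumes a linear system. A minimal scalar Kalman filter for a constant-state model is sketched below; the noise parameters are illustrative, and this is a generic textbook form rather than anything specified by the patent:

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    """Minimal scalar Kalman filter for a constant-state (random-walk) model.
    q: process noise variance; r: measurement noise variance (illustrative values)."""
    x, p = measurements[0], 1.0   # initial state estimate and covariance
    estimates = [x]
    for z in measurements[1:]:
        p = p + q                  # predict: covariance grows by process noise
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the measurement residual
        p = (1.0 - k) * p          # shrink covariance after the update
        estimates.append(x)
    return estimates
```

The linearity assumption baked into this update is precisely what limits such filters for urban person and vehicle movement, which is one motivation given above for the graph-based approach of the invention.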