
CN-121982229-A - Multi-source data fusion governance method for scenario-based geographic information applications

CN121982229A

Abstract

The invention discloses a multi-source data fusion governance method for scenario-based geographic information applications, belonging to the technical field of geographic information systems and data governance. The method comprises: representing multi-source data as five-tuples of the form <time, place, person, thing, event>; registering the five-tuple multi-source data; forming spatio-temporal-constrained fusion data from the registered multi-source data after screening under a spatio-temporal constraint; constructing an association model corresponding to the fusion data, the association model comprising a time-axis index structure and a spatial association result; determining a scene template file corresponding to the fusion data; and generating a scene model from the fusion data, its association model and the scene template file. The invention realizes standardized construction and rapid reuse of scenes and improves the efficiency of scene construction.
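The five-tuple representation and spatio-temporal screening described in the abstract can be sketched as follows. This is a minimal illustration, not the patent's implementation; the field names, the `FiveTuple` class and the `filter_by_spacetime` helper are all illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Tuple

@dataclass
class FiveTuple:
    """Hypothetical <time, place, person, thing, event> record."""
    time: datetime                 # time element
    place: Tuple[float, float]     # place element: (lon, lat), unified coordinate system
    person: str                    # person element (e.g. the operator)
    thing: str                     # thing element (e.g. the equipment object)
    event: str                     # event element (e.g. "equipment start")

def filter_by_spacetime(records: List[FiveTuple],
                        t0: datetime, t1: datetime,
                        bbox: Tuple[float, float, float, float]) -> List[FiveTuple]:
    """Screen registered five-tuples under a spatio-temporal constraint:
    keep only records inside the time window [t0, t1] and the bounding box."""
    min_lon, min_lat, max_lon, max_lat = bbox
    return [r for r in records
            if t0 <= r.time <= t1
            and min_lon <= r.place[0] <= max_lon
            and min_lat <= r.place[1] <= max_lat]
```

Records surviving this screen would correspond to the "spatio-temporal-constrained fusion data" on which the association model is built.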

Inventors

  • GAO QIANG
  • ZHAO RUIFANG
  • TANG ZIHAO
  • ZHAO YINGXIAO
  • YOU HONGLIANG
  • TANG SHANHONG
  • ZHAO KERAN

Assignees

  • 中国人民解放军军事科学院军事科学信息研究中心 (Military Science Information Research Center, Academy of Military Sciences, Chinese People's Liberation Army)

Dates

Publication Date
2026-05-05
Application Date
2025-12-22

Claims (9)

  1. A multi-source data fusion governance method for scenario-based geographic information applications, characterized by comprising the following steps: Step S1, performing structured annotation on the multi-source data involved in the scenario-based geographic information application, the annotation content comprising time, place, person, thing and event, and representing the multi-source data as five-tuples of the form <time, place, person, thing, event>; Step S2, registering the five-tuple multi-source data and screening the registered multi-source data under a spatio-temporal constraint, the screened and registered data forming the spatio-temporal-constrained fusion data; for the screened multi-source data, determining the layer to which each item of geographic environment data belongs according to its type, the layers comprising a terrain layer, an image layer and a vector layer, with all other data in the multi-source data assigned to an annotation layer; Step S3, constructing an association model corresponding to the spatio-temporal-constrained fusion data, the association model comprising a time-axis index structure and a spatial association result, wherein the time-axis index structure takes the time elements of the fusion data as the index, with the place, person, thing and event elements corresponding to each time element arranged along a time axis, and the spatial association determines, for each place element in the fusion data, the time, person, thing and event elements corresponding to that place element; Step S4, determining a scene template file corresponding to the spatio-temporal-constrained fusion data, the scene template file comprising one or more of the visual expression configuration of the fusion data, the display priority of scene elements, and the dynamic scheduling and display rules of scene elements; and Step S5, generating a scene model from the spatio-temporal-constrained fusion data, its association model and the scene template file, and displaying the corresponding scene based on the scene model, wherein the scene refers to a three-dimensional visualization environment formed by organizing the multi-source data, within the spatio-temporal range given by the constraint, according to the spatial relationships and temporal evolution rules of the data, such that the environment faithfully reflects the distribution, mutual association and dynamic change of scene elements over time.
  2. The method of claim 1, wherein the multi-source data comprise metadata, process data and geographic environment data; the metadata comprise at least two of a scene name, a scene type, a scene number, an execution unit, a scene start time, a scene end time, a scene objective, a list of participants and a list of related resource objects; the process data comprise optical measurement data, radar measurement data, telemetry data, satellite navigation and positioning data, sensor data and communication record data; and the geographic environment data comprise digital elevation models, oblique photography data, digital line graphs and satellite imagery data.
  3. The method of claim 1, wherein registering the multi-source data in five-tuple form comprises: processing the geographic environment data by determining the resolution of the geographic environment data, downsampling the data when the resolution is higher than a first resolution, and applying bilinear or bicubic interpolation to the data when the resolution is lower than a second resolution; and updating the five-tuples corresponding to the processed geographic environment data and performing coordinate conversion on the place elements so that the converted coordinates are unified in the same coordinate system.
  4. The method of claim 2, wherein in step S2 the registered multi-source data are screened under the spatio-temporal constraint and the layer to which each item of geographic environment data belongs is determined from its type, the layers comprising a terrain layer, an image layer and a vector layer, by: determining spatio-temporal constraint conditions from user requirements and screening the registered multi-source data against those conditions; and, among the screened multi-source data, assigning digital elevation model data to the terrain layer, satellite imagery or oblique photography data to the image layer, and digital line graph data to the vector layer.
  5. The method of claim 4, wherein step S4 determines the scene template file corresponding to the spatio-temporal-constrained fusion data, the scene template file comprising one or more of the visual expression configuration of the fusion data, the display priority of scene elements and the dynamic scheduling and display rules of scene elements, by: parameterizing the scene template; defining the visual expression of scene elements, including point, line and polygon symbol styles and the rendering parameters of three-dimensional models; setting the display priority and layer order of scene elements; configuring the dynamic scheduling and display rules of scene elements; and instantiating the scene template from the scene elements to generate the scene template file corresponding to the fusion data.
  6. The method of claim 4, wherein the dynamic scheduling and display rules of scene elements comprise: an on-demand loading rule, whereby geographic environment data of the appropriate precision are loaded dynamically according to the current view range and zoom level; a time-trigger rule, whereby scene elements belonging to a time period are shown or hidden automatically as scene playback progresses; an event-trigger rule, whereby the display state or viewpoint position of a scene element is adjusted automatically when a preset event occurs; and an interactive response rule, whereby the scene display is updated in real time in response to user interaction.
  7. A multi-source data fusion governance apparatus for scenario-based geographic information applications, characterized by comprising: an annotation module configured to perform structured annotation on the multi-source data involved in the scenario-based geographic information application, the annotation content comprising time, place, person, thing and event, and to represent the multi-source data as five-tuples of the form <time, place, person, thing, event>; a fusion module configured to register the five-tuple multi-source data, screen the registered multi-source data under a spatio-temporal constraint, and determine the layer to which each item of geographic environment data belongs from its type, the layers comprising a terrain layer, an image layer and a vector layer, with all other data in the multi-source data assigned to an annotation layer; an association module configured to construct an association model corresponding to the spatio-temporal-constrained fusion data, the association model comprising a time-axis index structure and a spatial association result, wherein the time-axis index structure takes the time elements of the fusion data as the index, with the place, person, thing and event elements corresponding to each time element arranged along a time axis, and the spatial association determines, for each place element in the fusion data, the corresponding time, person, thing and event elements; a template construction module configured to determine a scene template file corresponding to the fusion data, the scene template file comprising one or more of the visual expression configuration of the fusion data, the display priority of scene elements and the dynamic scheduling and display rules of scene elements; and a scene generation module configured to generate a scene model from the fusion data, its association model and the scene template file, and to display the corresponding scene based on the scene model, wherein the scene refers to a three-dimensional visualization environment formed by organizing the multi-source data, within the spatio-temporal range given by the constraint, according to the spatial relationships and temporal evolution rules of the data, such that the environment faithfully reflects the distribution, mutual association and dynamic change of scene elements over time.
  8. An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
  9. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-6.
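The association model at the core of the claims (a time-axis index plus a spatial association result) can be sketched with plain dictionaries. This is an illustrative reading of the claims, not the patented implementation; the record layout and function names are assumptions.

```python
from collections import defaultdict

def build_time_axis_index(records):
    """Time-axis index structure: index five-tuples by their time element,
    arranging the place, person, thing and event elements corresponding to
    each time element along a (sorted) time axis."""
    index = defaultdict(list)
    for r in records:
        index[r["time"]].append({k: r[k] for k in ("place", "person", "thing", "event")})
    return dict(sorted(index.items()))

def spatial_association(records):
    """Spatial association result: for each place element, collect the
    time, person, thing and event elements that co-occur at that place."""
    assoc = defaultdict(list)
    for r in records:
        assoc[r["place"]].append({k: r[k] for k in ("time", "person", "thing", "event")})
    return dict(assoc)
```

Querying the first structure answers "what happened at time t"; querying the second answers "what happened at place p" — the two access paths the claims describe.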

Description

Multi-source data fusion governance method for scenario-based geographic information applications

Technical Field

The invention belongs to the technical field of geographic information systems and data governance, and in particular relates to a multi-source data fusion governance method for scenario-based geographic information applications.

Background

With the deepening of informatization and the accelerating pace of digital transformation, the use of geographic information across industries is shifting from a traditional data-recording mode to a scenario-based application mode. Scenario-based geographic information applications centre on real scenes and organically integrate the elements of time, place, person, thing and event, enabling complete reproduction and in-depth analysis of a process. This transformation makes application services increasingly complex, systematic and refined: they are no longer isolated records of single data items, but comprehensive scene expressions that coordinate multiple types of resources, cross multiple business links and fuse data from multiple systems. Against this background, the data produced in scenario-based geographic information applications are increasingly diverse in type, rapidly growing in scale and highly dispersed in origin, posing unprecedented challenges for the organization, management and scenario-based application of business data.
Specifically, scenario-based geographic information applications involve extremely rich data types, mainly comprising metadata (such as scene names, scene numbers and scene types), process data (such as GNSS satellite navigation and positioning data, Internet-of-Things communication records and sensor data), geographic environment data (such as DEM (digital elevation model) data, DLG (digital line graph) data and satellite imagery) and result data (such as performance-parameter analysis reports and performance evaluation reports). However, these multi-source heterogeneous data are scattered across relatively independent systems, and because of differing data format standards, coordinate systems, time references and storage modes, they are difficult to manage uniformly and to fuse into scenes; the semantic associations between the data are likewise difficult to express and exploit efficiently. For example, in one geographic information application scenario, an "equipment start" event involves a time element (the start time), a place element (the start-point coordinates), a person element (the operator), a thing element (the equipment object) and a large volume of monitoring data (spatial tracks, monitoring readings, telemetry parameters, etc.), yet existing systems often store these data in isolation and lack an integral description of the scenario, making data retrieval difficult, understanding costly and application value low. When a business scene needs to be replayed, a key event analysed or the performance of a class of resource objects evaluated, technicians must extract data from multiple systems and associate and integrate them manually, which is time-consuming, labour-intensive and error-prone.
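One concrete step in overcoming the "different coordinate systems" problem above is unifying all place elements into a single projected system, as claim 3 requires. The sketch below converts WGS84 longitude/latitude to Web Mercator (EPSG:3857) with the closed-form spherical formula; the choice of target system is an assumption for illustration, and a production pipeline would use a proper projection library rather than this hand-rolled conversion.

```python
import math

R = 6378137.0  # WGS84 semi-major axis, metres (also the Web Mercator sphere radius)

def to_web_mercator(lon_deg: float, lat_deg: float) -> tuple:
    """Project a WGS84 (lon, lat) place element to Web Mercator (x, y) metres,
    so that place elements from heterogeneous sources share one coordinate system."""
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y
```

After this step, every five-tuple's place element is expressed in the same metric frame, which is what makes the later spatial association and layer overlay well defined.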
In addition, traditional data applications rely mainly on static reports, two-dimensional charts and tabular statistics; they lack visual display and immersive experience of scene processes and struggle to meet diversified scenario applications such as scene replay, situation deduction and decision support. In particular, when business scenes grow increasingly complex, resource objects are numerous and business links interleave, the traditional data application mode cannot effectively support key links such as the organization and management, process monitoring and effect evaluation of scenario services. A new data organization method and scenario application mode therefore need to be explored: a scene-oriented multi-source data fusion governance system must be built so that business data advance from merely being "stored" to being "well used" in scenes.

Disclosure of the Invention

The invention provides a multi-source data fusion governance method for scenario-based geographic information applications, which addresses the technical problems of non-uniform data organization standards, difficult fusion of multi-source geographic data, insufficient modelling of scene-element association relations, low scene construction efficiency and monotonous scene application forms, thereby realizing the standardized organization, efficient fusion and deep application of scenario-based geographic information applications.
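The dynamic scheduling and display rules of claim 6 (on-demand loading by zoom level, time-triggered visibility) can be sketched as two small functions. The zoom thresholds and element layout are illustrative assumptions, not values from the patent.

```python
def lod_for_zoom(zoom_level: int) -> str:
    """On-demand loading rule sketch: pick a level of detail for the
    geographic environment data from the current zoom level.
    Thresholds are assumed for illustration."""
    if zoom_level >= 15:
        return "high"    # e.g. full-resolution DEM / oblique photography tiles
    if zoom_level >= 10:
        return "medium"  # e.g. downsampled DEM, satellite imagery
    return "low"         # e.g. coarse terrain only

def visible_elements(elements, playback_time):
    """Time-trigger rule sketch: show only the scene elements whose
    active interval covers the current scene playback time."""
    return [e for e in elements if e["start"] <= playback_time <= e["end"]]
```

The event-trigger and interactive response rules of claim 6 would hook into the same loop, adjusting element state or viewpoint when a preset event fires or the user interacts.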