CN-121982218-A - Substation video fusion monitoring method and system based on 3DGS

CN 121982218 A

Abstract

The invention discloses a substation video fusion monitoring method and system based on 3D Gaussian Splatting (3DGS), and relates to the technical field of three-dimensional modeling. The method comprises: acquiring substation laser radar point clouds and multi-view camera images and constructing a three-dimensional Gaussian splatting model; building a mapping relation library from P source-view and Q target-view substation 3DGS samples; performing fusion training on that library using a confidence adjustment pointer for the picture stitching boundary and a deviation correction pointer for the space coordinate mapping; and, when a user clicks target equipment in the model, invoking an improved line-of-sight cone analysis algorithm to dynamically screen the optimal camera, receiving that camera's video stream, and determining the equipment state identification result. This solves the technical problem of poor image stitching and equipment positioning accuracy caused by multi-view and field-of-view limitations in substation equipment monitoring, and achieves the technical effect of improving monitoring precision and efficiency through dynamic screening of the optimal camera and equipment state identification with space coordinates.

Inventors

  • GUO HONGWEI
  • QIAN XIN
  • WEI GUOHUA
  • YU XIAOPENG
  • JIANG YI
  • ZHAO CHUNLI
  • BAI ZHIWEI

Assignees

  • 国网冀北电力有限公司秦皇岛供电公司 (State Grid Jibei Electric Power Co., Ltd., Qinhuangdao Power Supply Company)

Dates

Publication Date
2026-05-05
Application Date
2026-02-06

Claims (10)

  1. A 3DGS-based substation video fusion monitoring method, characterized by comprising the following steps: acquiring laser radar point clouds and multi-view camera images of a substation, and constructing a three-dimensional Gaussian splatting model containing equipment geometry and appearance information; establishing a mapping relation library between equipment space coordinates and monitoring cameras based on P substation 3DGS samples at a source view angle and Q substation 3DGS samples at a target view angle; performing fusion training based on the mapping relation library, using a confidence adjustment pointer for the picture stitching boundary and a deviation correction pointer for the space coordinate mapping; and, when a user clicks a target device in the three-dimensional Gaussian splatting model, invoking an improved line-of-sight cone analysis algorithm in the field deployment environment to dynamically screen the optimal camera, receiving the optimal camera's video stream, and determining a device state identification result with space coordinates.
  2. The 3DGS-based substation video fusion monitoring method of claim 1, wherein the method comprises: constructing a multi-modal perception sample library of the substation; taking the equipment space coordinates in the three-dimensional Gaussian splatting model as the main query feature tensor and the real-time PTZ camera video stream as a feedback correction signal; and automatically correcting the preset-position offset of the PTZ camera with a pose compensation controller according to the feedback correction signal.
  3. The 3DGS-based substation video fusion monitoring method of claim 2, wherein the method comprises: extracting local texture features and edge structure features of video frames with a CNN layer, and capturing the geometric correlation among multiple views with a Transformer layer; and meanwhile taking the equipment state recognition accuracy and the multi-view stitching error corresponding to the substation multi-modal perception sample library as optimization targets.
  4. The 3DGS-based substation video fusion monitoring method of claim 3, wherein the method comprises: setting the type dimension of the equipment state identification accuracy as hierarchical calculation logic according to the coverage proportion of each equipment type in the substation multi-modal perception sample library, and marking the per-type identification accuracy and the overall identification accuracy for each equipment type.
  5. The 3DGS-based substation video fusion monitoring method of claim 4, wherein the method comprises: setting the positioning reference of the multi-view stitching error as a graded threshold standard according to the pixel distance between the actual and ideal stitching boundaries of the picture, and marking the stitching error grade and the maximum deviation area of each stitching region.
  6. The 3DGS-based substation video fusion monitoring method of claim 4, wherein the method comprises: defining P 3DGS samples at a source view angle and Q 3DGS samples at a target view angle based on the substation multi-modal perception sample library, wherein P ≥ 5Q; and establishing the mapping relation library based on the P and Q 3DGS samples by applying multi-view geometric consistency constraint terms under a cross-scene adaptation mechanism.
  7. The 3DGS-based substation video fusion monitoring method of claim 6, wherein the method comprises: determining a source view set from the P 3DGS samples at the source view angle; extracting a Gaussian-ellipsoid covariance reference tensor from the source view set; determining a target view set from the Q 3DGS samples at the target view angle; performing alignment optimization with the target view set based on the video stream of the corresponding PTZ camera, and determining a geometric deformation vector relative to the covariance reference tensor; calculating the Frobenius-norm difference between the Gaussian-ellipsoid covariance matrices of corresponding equipment under the source and target view angles, quantifying the geometric distortion of the 3DGS reconstruction with that Frobenius-norm value, performing cross-domain optimization with a view-geometry joint alignment network, and correcting the spatial positions, covariance parameters and spherical-harmonic coefficients of the Gaussian ellipsoids to offset reconstruction deviation caused by the view-angle difference; and introducing a view-angle discriminator for unsupervised adaptation with a gradient reversal unit in a shared representation space, and constructing the multi-view geometric consistency constraint term based on the feedback correction signal.
  8. The 3DGS-based substation video fusion monitoring method of claim 7, wherein the method comprises: setting the confidence adjustment pointer of the picture stitching boundary in the shared representation space by combining the per-type identification accuracy and the overall identification accuracy; and setting the deviation correction pointer of the space coordinate mapping in the shared representation space by combining the positioning error grade and the maximum blind zone of each equipment type.
  9. The 3DGS-based substation video fusion monitoring method of claim 8, wherein the method comprises: performing fusion training from the source view to the target view based on the mapping relation library, using the confidence adjustment pointer and the deviation correction pointer, wherein the joint optimization target comprises a view quality loss, an occlusion suppression loss, a multi-view geometric consistency loss and a pose correction regularization term; and the pose correction regularization term is dynamically weighted according to the statistical characteristics of the maximum blind zone, guiding the three-dimensional Gaussian rendering engine to achieve collaborative optimization of high-confidence equipment identification and low spatial drift in the target scene, and automatically generating a three-dimensional patrol report containing warning marks and state data.
  10. A 3DGS-based substation video fusion monitoring system, configured to implement the 3DGS-based substation video fusion monitoring method of any one of claims 1-9, the system comprising: a model construction module for acquiring substation laser radar point clouds and multi-view camera images and constructing a three-dimensional Gaussian splatting model containing equipment geometry and appearance information; a mapping relation establishing module for establishing a mapping relation library between equipment space coordinates and monitoring cameras based on the P source-view and Q target-view substation 3DGS samples; a fusion training module for performing fusion training based on the mapping relation library, using the confidence adjustment pointer of the picture stitching boundary and the deviation correction pointer of the space coordinate mapping; and a result screening module for invoking the optimal camera dynamically screened by the improved line-of-sight cone analysis algorithm in the field deployment environment when a user clicks the target equipment in the three-dimensional Gaussian splatting model, receiving the optimal camera's video stream, and determining an equipment state identification result with space coordinates.
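Claim 7 quantifies reconstruction distortion as the Frobenius-norm difference between a Gaussian ellipsoid's covariance matrices under the source and target views, but gives no formula. The following minimal NumPy sketch illustrates that measure; the function names are hypothetical and the per-device averaging is an assumption, not part of the claimed method:

```python
import numpy as np

def frobenius_distortion(cov_source: np.ndarray, cov_target: np.ndarray) -> float:
    """Frobenius-norm difference between the source-view and target-view
    3x3 covariance matrices of one Gaussian ellipsoid (cf. claim 7)."""
    return float(np.linalg.norm(cov_source - cov_target, ord="fro"))

def mean_distortion(source_covs, target_covs) -> float:
    """Average distortion over all matched ellipsoids of one device,
    as one possible scalar summary of geometric distortion."""
    return float(np.mean([
        frobenius_distortion(s, t) for s, t in zip(source_covs, target_covs)
    ]))
```

A distortion of zero means the ellipsoid's shape is identical under both views; larger values would drive the correction of spatial positions, covariance parameters and spherical-harmonic coefficients described in the claim.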

Description

Substation video fusion monitoring method and system based on 3DGS

Technical Field

The invention relates to the technical field of three-dimensional modeling, in particular to a substation video fusion monitoring method and system based on 3DGS.

Background

With the continuous expansion of the power system, the operational safety and the intelligence level of operation and maintenance of substations, as key infrastructure of the power grid, have an important influence on grid stability. Existing substations are generally equipped with multiple fixed cameras and PTZ cameras that perform round-the-clock video monitoring of the primary and secondary equipment in the station, and equipment state perception is achieved by combining manual inspection with simple video analysis. However, because the spatial structure of a substation is complex, its equipment types are numerous and occlusion among them is severe, the traditional two-dimensional video monitoring mode can hardly express the three-dimensional spatial relationships among equipment accurately, leading to problems such as view-angle fracture, inconsistent spatial positioning and weak cross-camera picture correlation. In recent years, three-dimensional modeling technology that fuses laser radar point clouds with multi-view vision has gradually been applied to power scenes, but existing three-dimensional reconstruction methods focus on geometric modeling or static display, are difficult to fuse deeply with real-time video monitoring, and cannot realize "what you see is what you get" monitoring and interactive analysis at the equipment level.
Meanwhile, in multi-camera collaborative monitoring scenarios, camera view selection and switching generally depend on manual configuration or simple rules, and it is difficult to dynamically select the optimal view according to the spatial position of the equipment, occlusion conditions and real-time picture quality, which affects monitoring efficiency and state recognition accuracy. In addition, calibration errors, pose drift and scene differences among camera views easily produce boundary misalignment and spatial mapping deviation during multi-view video stitching and fusion, further restricting accurate monitoring and intelligent analysis based on a three-dimensional model.

Disclosure of the Invention

The application provides a 3DGS-based substation video fusion monitoring method and system, which solve the technical problems of poor image stitching and equipment positioning accuracy caused by multi-view and field-of-view limitations in substation equipment monitoring.
In a first aspect of the present application, a 3DGS-based substation video fusion monitoring method is provided, the method comprising: acquiring substation laser radar point clouds and multi-view camera images, and constructing a three-dimensional Gaussian splatting model containing equipment geometry and appearance information; building a mapping relation library between equipment space coordinates and monitoring cameras based on P substation 3DGS samples at a source view angle and Q substation 3DGS samples at a target view angle; performing fusion training based on the mapping relation library, using a confidence adjustment pointer for the picture stitching boundary and a deviation correction pointer for the space coordinate mapping; and, when a user clicks target equipment in the three-dimensional Gaussian splatting model, invoking an improved line-of-sight cone analysis algorithm in the field deployment environment to dynamically screen the optimal camera, receiving that camera's video stream, and determining the equipment state identification result with space coordinates.
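The "improved line-of-sight cone analysis algorithm" itself is not detailed in this excerpt. As an illustration only, the sketch below shows a plain view-frustum screening baseline that such an algorithm could refine: each camera is modeled as a simplified circular cone, and the camera whose cone contains the clicked device point with the best axis-alignment and range score is selected. The `Camera` fields and the scoring heuristic are assumptions for this sketch, not the patented method:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Camera:
    name: str
    position: np.ndarray    # world-space camera position
    direction: np.ndarray   # unit vector along the optical axis
    half_fov_deg: float     # half field-of-view angle of the cone
    max_range: float        # maximum useful viewing distance

def frustum_score(cam: Camera, point: np.ndarray) -> float:
    """Score in [0, 1] for how well `point` sits inside the camera's
    simplified circular-cone frustum; 0 means not visible."""
    v = point - cam.position
    dist = float(np.linalg.norm(v))
    if dist == 0.0 or dist > cam.max_range:
        return 0.0
    cos_angle = float(np.dot(v / dist, cam.direction))
    if cos_angle < np.cos(np.radians(cam.half_fov_deg)):
        return 0.0  # outside the cone
    # Favour cameras viewing the device near the optical axis
    # and well within their useful range.
    return cos_angle * (1.0 - dist / cam.max_range)

def best_camera(cams, point):
    """Return the highest-scoring camera, or None if no camera sees the point."""
    score, cam = max(((frustum_score(c, point), c) for c in cams),
                     key=lambda t: t[0])
    return cam if score > 0.0 else None
```

In the patented method this screening would additionally account for occlusion and real-time picture quality before the selected camera's video stream is pulled for state identification.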
In a second aspect of the present application, a 3DGS-based substation video fusion monitoring system is provided, the system comprising a model construction module, a mapping relation establishing module, a fusion training module and a result screening module, wherein: the model construction module acquires substation laser radar point clouds and multi-view camera images and constructs a three-dimensional Gaussian splatting model containing equipment geometry and appearance information; the mapping relation establishing module establishes a mapping relation library between equipment space coordinates and monitoring cameras based on P substation 3DGS samples at a source view angle and Q substation 3DGS samples at a target view angle; the fusion training module performs fusion training based on the mapping relation library, using a confidence adjustment pointer for the picture stitching boundary and a deviation correction pointer for the space coordinate mapping; and the result screening module calls an optimal camera obtained