Search

CN-121983082-A - Transformer voiceprint detection method and system based on edge intelligence and deep learning

CN121983082A

Abstract

The invention relates to the technical field of power transformers and provides a transformer voiceprint detection method and system based on edge intelligence and deep learning. The method comprises the following steps: S1, collecting an original voiceprint signal and performing preprocessing and voice enhancement to obtain an enhanced voiceprint signal; S2, an edge computing node performs real-time analysis and feature extraction on the voiceprint signal using a preliminary-analysis deep learning model and judges whether a voiceprint abnormality exists; S3, when the edge computing node judges that an abnormality exists, uploading the voiceprint signal and related feature data to a cloud big data platform; S4, collecting massive voiceprint data, constructing and continuously updating a standard voiceprint library, and optimizing to generate a deep analysis model; S5, transmitting the optimized deep analysis model or its parameters to the edge computing node to update the preliminary analysis model; S6, identifying the specific defect type, severity and fault location, and triggering early-warning and decision instructions. The invention enables effective voiceprint detection of the transformer.

Inventors

  • YANG DONG
  • HU XIAOQING
  • XU FAN
  • XU CHAO
  • YU MIAO
  • JIN YAXI

Assignees

  • State Grid Anhui Electric Power Co., Ltd. Ma'anshan Power Supply Company (国网安徽省电力有限马鞍山供电公司)

Dates

Publication Date
2026-05-05
Application Date
2025-12-26

Claims (9)

  1. A transformer voiceprint detection method based on edge intelligence and deep learning, characterized by comprising the following steps: S1, acquiring an original voiceprint signal during operation via a wireless voiceprint sensor network arranged on the transformer body, and performing preprocessing and voice enhancement to obtain a preliminarily enhanced voiceprint signal; S2, an edge computing node receives the preliminarily enhanced voiceprint signal, performs real-time analysis and feature extraction using a preliminary-analysis deep learning model deployed on the edge side, and judges whether a voiceprint abnormality exists; S3, when the edge computing node judges that an abnormality exists, uploading the high-quality voiceprint signal and related feature data from the abnormal period to a cloud big data platform; S4, the cloud big data platform collects massive voiceprint data from a plurality of edge nodes, builds and continuously updates a standard voiceprint library, and performs model training and optimization with a deep neural network model to generate a high-precision deep analysis model; S5, the cloud big data platform transmits the optimized deep analysis model or its model parameters to the corresponding edge computing nodes to update the preliminary analysis model; and S6, the cloud big data platform performs deep analysis on the received abnormal voiceprint signals, identifies the specific defect type, severity and fault location in combination with the standard voiceprint library, and triggers corresponding early-warning and decision instructions.
  2. The transformer voiceprint detection method based on edge intelligence and deep learning of claim 1, characterized in that in step S1 the wireless voiceprint sensor network comprises a plurality of distributed microphone nodes or microphone-array nodes deployed at key positions of the transformer body and the on-load tap changer in a self-organizing network mode; the preprocessing comprises node utility calculation and frequency-domain instantaneous blind separation based on higher-order statistics for multi-target sound source separation; and the voice enhancement adopts a sound source localization technique based on steerable beamforming to enhance the main sound source signal and suppress environmental noise.
  3. The transformer voiceprint detection method based on edge intelligence and deep learning of claim 1, characterized in that in step S2 the preliminary-analysis deep learning model is a lightweight network model used to extract time-frequency-domain features of the voiceprint signal in real time and to compare them quickly with normal-state reference features stored on the edge node, so as to realize preliminary screening and recognition of abnormal states.
  4. The transformer voiceprint detection method based on edge intelligence and deep learning of claim 1, characterized in that in step S4 the process of constructing the standard voiceprint library comprises performing endpoint detection and feature extraction on uploaded normal-state voiceprint samples, wherein the extracted features comprise MFCC, LPCC, zero-crossing rate, short-time energy and formant features, and performing unsupervised or self-supervised learning on the massive normal samples with a deep neural network to generate a set of standard voiceprint feature vectors representing the normal operating state of the transformer.
  5. The transformer voiceprint detection method based on edge intelligence and deep learning of claim 4, characterized in that in step S4 the deep neural network model is a recurrent neural network with memory or an attention-mechanism network, used to learn the temporal dependencies and historical state information in the voiceprint signals so as to improve feature extraction and state recognition for nonlinear, non-stationary voiceprint signals.
  6. The transformer voiceprint detection method based on edge intelligence and deep learning, characterized in that in step S6 the specific defect types comprise mechanical faults of the transformer body, discharge faults, on-load tap changer switching anomalies, gear slippage and operating-mechanism motor faults, and the fault localization comprises determining the physical position of the abnormal voiceprint signal using the sensor network topology and sound source localization techniques.
  7. A transformer voiceprint detection system based on edge intelligence and deep learning, characterized in that it adopts the transformer voiceprint detection method based on edge intelligence and deep learning of any one of claims 1 to 6, and comprises: a wireless voiceprint sensor network, distributed on the monitored transformer, for collecting original voiceprint signals, with signal preprocessing, preliminary enhancement and ad hoc network transmission functions; an edge computing node, connected to the wireless voiceprint sensor network and provided with a preliminary-analysis deep learning model, for real-time analysis, preliminary abnormality judgment and screened data uploading of the received voiceprint signals; and a cloud big data platform, in communication connection with a plurality of edge computing nodes, comprising: a standard voiceprint library management module for storing and managing standard voiceprint features of the normal state; a deep learning model training module for training and optimizing a deep analysis model using massive data; a deep analysis and early warning module for accurate analysis, defect identification and fault localization of the uploaded abnormal signals and for generating early-warning information; and a model issuing module for updating the cloud-optimized model to the edge computing nodes.
  8. The transformer voiceprint detection system based on edge intelligence and deep learning of claim 7, characterized in that a sensor node in the wireless voiceprint sensor network is internally provided with a kurtosis-calculation-based utility evaluation unit and a blind signal processing unit, for performing preliminary signal quality evaluation and multi-sound-source separation at the node end.
  9. The transformer voiceprint detection system based on edge intelligence and deep learning of claim 7, characterized in that the edge computing node further comprises a local storage unit for caching a period of historical voiceprint data and normal-state reference features, and for supporting continued local analysis and anomaly recording in case of network outage.
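The edge-side preliminary screening of claims 3 and 4 can be sketched in a few lines. This is an illustrative sketch only, not the patented model: it computes two of the features named in claim 4 (zero-crossing rate and short-time energy) per frame, and flags a segment whose mean feature vector deviates from a stored normal-state reference. The function names, frame sizes and the z-score threshold are assumptions for illustration.

```python
import numpy as np

def frame_features(signal, frame_len=1024, hop=512):
    """Per-frame zero-crossing rate and short-time energy (two claim-4 features)."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0  # crossings per sample
        energy = np.mean(frame ** 2)                          # short-time energy
        feats.append((zcr, energy))
    return np.array(feats)

def is_abnormal(signal, normal_ref, threshold=3.0):
    """Flag a segment whose mean features deviate from the stored normal-state
    reference by more than `threshold` reference standard deviations.
    normal_ref is a (mean, std) pair learned offline from normal-state data."""
    mean_feat = frame_features(signal).mean(axis=0)
    ref_mean, ref_std = normal_ref
    z = np.abs(mean_feat - ref_mean) / (ref_std + 1e-12)
    return bool(np.any(z > threshold))
```

In the patent's scheme this decision is made by a lightweight deep model on the edge node; the threshold comparison above merely stands in for that model to show the data flow of claim 3 (analyze locally, upload only on abnormality).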

Description

Transformer voiceprint detection method and system based on edge intelligence and deep learning

Technical Field

The invention relates to the technical field of power transformers, in particular to a transformer voiceprint detection method and system based on edge intelligence and deep learning.

Background

Power transformers are core devices in power transmission and distribution systems, and their operational reliability is directly related to grid safety. During operation, a transformer generates characteristic voiceprint signals due to magnetostriction of the iron core, electromagnetic forces on the windings, the mechanical action of the on-load tap changer, and the like. These acoustic signals contain abundant equipment-state information. Traditional state-monitoring means such as oil chromatography analysis and partial discharge detection usually detect faults only after they have developed to a certain stage, and are complex to implement and costly. In contrast, voiceprint detection is a non-invasive, online and continuous monitoring technology; it is simple to deploy, provides intuitive information, and offers a new approach to early fault warning and state sensing for transformers. At present, transformer voiceprint detection technology and its applications mainly take the following forms. Diagnosis based on traditional signal processing and expert experience: sound or vibration signals are collected by sensors (such as vibration accelerometers and microphones), spectral features (such as power-frequency harmonic components at 100 Hz, 200 Hz, etc.) are extracted by Fourier transform, and the state is estimated by comparison with historical data or expert experience libraries (such as tables of typical fault characteristic frequencies).
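The traditional spectrum-based approach just described can be sketched as follows. The helper name, sampling rate and harmonic list below are illustrative assumptions; the sketch extracts the FFT magnitudes at the power-frequency harmonics that an expert would then compare against a fault characteristic frequency table:

```python
import numpy as np

def harmonic_magnitudes(signal, fs, harmonics=(100, 200, 300, 400)):
    """One-sided FFT amplitude at the given power-frequency harmonics (Hz)."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n     # normalized magnitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # pick the FFT bin closest to each requested harmonic frequency
    return {h: 2.0 * spectrum[int(np.argmin(np.abs(freqs - h)))] for h in harmonics}
```

As the background section notes, such a linear spectral summary discards the nonlinear and condition-dependent structure of the signal, which is precisely the limitation discussed next.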
However, the on-site acoustic environment of a transformer is complex: background noise (such as interference from fans and other equipment) is strong, and fault voiceprint features are closely related to operating conditions (load and temperature). Linear spectrum analysis struggles to separate noise interference and operating-condition effects, is insensitive to early, weak, nonlinear fault features, and its diagnostic accuracy depends heavily on the personal experience of experts, so automation and intelligent operation are difficult to achieve. Recognition based on cloud-centralized deep learning: with the development of artificial intelligence, research has begun to upload all collected voiceprint signals to a cloud data center, where deep learning models (such as convolutional neural networks, CNN, and recurrent neural networks, RNN) perform centralized analysis and pattern recognition. This approach can exploit the strong computing power of the cloud to process complex models and mine deep features, but it has obvious drawbacks. First, transformer voiceprint data are highly continuous and voluminous; uploading everything places enormous pressure on communication bandwidth and incurs high transmission cost. Second, the delay introduced by data transmission and cloud processing is long, making it difficult to meet real-time monitoring and instant early-warning requirements for transient events such as tap-changer actuation moments and sudden discharges. Third, a network interruption renders the monitoring completely ineffective, so system reliability is limited by network conditions.
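The bandwidth pressure described above is what motivates edge-side screening, and claim 8 mentions a kurtosis-based utility evaluation unit at the sensor node. A minimal sketch of such a gate, with an assumed threshold and window handling (not the patent's actual unit), would keep the steady operating hum local and select only impulsive, potentially faulty windows for upload:

```python
import numpy as np

def excess_kurtosis(x):
    """Excess kurtosis: about 0 for Gaussian noise, -1.5 for a pure sine,
    strongly positive for impulsive (e.g. discharge-like) signals."""
    x = np.asarray(x, dtype=float)
    m, v = x.mean(), x.var()
    return float(((x - m) ** 4).mean() / (v ** 2 + 1e-12) - 3.0)

def select_for_upload(windows, threshold=1.0):
    """Indices of windows impulsive enough to be worth uploading;
    steady operating hum stays on the edge node."""
    return [i for i, w in enumerate(windows) if excess_kurtosis(w) > threshold]
```

Gating uploads this way addresses the first drawback (bandwidth) directly; the latency and outage drawbacks are addressed by running the preliminary model and local cache on the edge node itself, as in claims 3 and 9.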
Limitations of existing sensors and acquisition systems: current sensor deployment schemes for acoustic detection of transformers are relatively simple, mostly single-point or limited-point measurements, and lack an optimal arrangement scheme and networked collaborative acquisition capability for the complex sound field of a transformer (multiple sound sources and complex propagation paths). In the strong electromagnetic interference environment of a substation, the anti-interference capability of the sensors and the signal-to-noise ratio of the acquired signals need to be improved. Meanwhile, the front end lacks effective signal preprocessing and feature extraction capability; usually only simple analog-to-digital conversion is completed before transmission, so a large amount of invalid or redundant data is transmitted, further increasing the communication and computation burden. In summary, existing transformer voiceprint detection technology faces key challenges such as insufficient real-time performance, high network dependence, a low level of front-end intelligence, and weak interference suppression in complex environments. Therefore, a novel detection method and system capable of sinking the int