
CN-122025166-A - Intelligent battlefield injury judgment method based on multisource information fusion

CN122025166A

Abstract

The application belongs to the technical field of intelligent perception, and particularly discloses a battlefield personal injury intelligent judging method based on multisource information fusion. The method comprises the steps of firstly obtaining image data and radar data of the same scene at the same moment through space-time synchronization, inputting the image data into a wounded detection model to obtain a boundary frame of a first target and a first detection result of the first target, extracting a position and vital sign of a second target from the radar data, judging a second detection result of the second target based on the vital sign, then carrying out target association on the first target and the second target based on the position of the second target and the boundary frame of the first target, and finally fusing the first detection result and the second detection result of the associated target. According to the application, through the target association fusion image detection technology and the radar detection technology, the identification precision of wounded persons in complex environments such as multi-shielding and multi-interference is improved.

Inventors

  • YAO YAO
  • LI BO
  • PENG MENG

Assignees

  • No. 709 Research Institute of China State Shipbuilding Corporation Limited (中国船舶集团有限公司第七〇九研究所)

Dates

Publication Date
2026-05-12
Application Date
2026-01-16

Claims (10)

  1. A battlefield personnel injury intelligent judging method based on multi-source information fusion, characterized by comprising the following steps: acquiring image data and radar data of the same scene at the same moment through space-time synchronization; inputting the image data into a wounded-person detection model to obtain a bounding box of a first target and a first detection result of whether the first target is a wounded person; extracting the position and vital signs of a second target from the radar data, and judging, based on the vital signs, a second detection result of whether the second target is a wounded person; projecting the position of the second target into the coordinate system of the bounding box of the first target by using a transformation matrix to obtain the projection position of the second target, wherein the transformation matrix is obtained in advance through joint calibration of an image acquisition device and a radar; if the bounding box of the first target covers the projection position of the second target, associating the first target and the second target as the same target; and fusing the first detection result and the second detection result to obtain the detection result of the same target.
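The association step above can be sketched as follows, assuming a 3×4 projection matrix from the joint calibration, a radar position given as a 3-D point, and an axis-aligned pixel bounding box (x1, y1, x2, y2). All names and shapes here are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def project_radar_position(P, xyz):
    """Project a radar-frame 3-D point into image pixel coordinates
    using a 3x4 projection matrix P obtained from joint calibration."""
    p = P @ np.append(np.asarray(xyz, float), 1.0)  # homogeneous projection
    return p[:2] / p[2]                             # perspective divide -> (u, v)

def associate(bbox, P, radar_xyz):
    """Associate the radar target with the image target when its
    projection falls inside the bounding box (x1, y1, x2, y2)."""
    u, v = project_radar_position(P, radar_xyz)
    x1, y1, x2, y2 = bbox
    return x1 <= u <= x2 and y1 <= v <= y2
```

With a trivial identity-like projection matrix, a radar point at (2, 3, 1) projects to pixel (2, 3) and associates with any box covering that pixel.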
  2. The intelligent battlefield personnel injury judging method according to claim 1, wherein the wounded detection model is obtained by training a YOLOv5s network model, wherein a combined structure of a space-to-depth layer and a 1×1 convolution layer replaces the original strided convolution layers in the YOLOv5s backbone network and neck network; the space-to-depth layer divides the input feature map into 4 local spatial regions according to a 2×2 grid and converts them from the spatial dimension into the channel dimension, achieving spatial downsampling without information loss; the 1×1 convolution layer compresses the channel dimension to one quarter using a 1×1 convolution kernel.
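The space-to-depth operation described in this claim can be sketched in a few lines of numpy: each 2×2 spatial block is rearranged into the channel dimension, so an (H, W, C) map becomes (H/2, W/2, 4C) with no values discarded (the subsequent 1×1 convolution, not shown, would compress 4C back to C).

```python
import numpy as np

def space_to_depth(x):
    """Space-to-depth with a 2x2 grid: (H, W, C) -> (H/2, W/2, 4C).
    Each 2x2 spatial block is moved into the channel dimension, so the
    spatial resolution is halved without any information loss."""
    h, w, c = x.shape
    assert h % 2 == 0 and w % 2 == 0, "H and W must be even"
    # the four strided sub-samples of the 2x2 grid
    parts = [x[i::2, j::2, :] for i in (0, 1) for j in (0, 1)]
    return np.concatenate(parts, axis=-1)
```

For a 2×2×3 input the output is a single 1×1 spatial cell carrying all 12 original values in its channels.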
  3. The intelligent battlefield personnel injury judging method according to claim 1, wherein the wounded detection model is obtained by training a YOLOv5s network model, wherein the original coupled detection head in the YOLOv5s network model is replaced by a two-branch decoupled head with the following structure: the independent classification branch comprises three 3×3 convolution layers with batch normalization and SiLU activation, a 1×1 convolution layer, and a Softmax activation layer, and independently adopts the focal loss function to concentrate on learning class features; the independent regression branch comprises three 3×3 convolution layers with batch normalization and SiLU activation, a 1×1 convolution layer, and a Sigmoid activation layer, and independently adopts the complete intersection-over-union (CIoU) loss function to concentrate on learning the position and confidence features of the bounding boxes.
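A minimal PyTorch sketch of such a two-branch decoupled head is shown below. Channel counts, anchor count, and class count are illustrative assumptions; the loss functions (focal, CIoU) are applied at training time and are not part of the module itself.

```python
import torch
import torch.nn as nn

class DecoupledHead(nn.Module):
    """Two-branch decoupled head sketch: a classification branch
    (3x3 convs + BN + SiLU, then 1x1 conv + Softmax) and a regression
    branch (3x3 convs + BN + SiLU, then 1x1 conv + Sigmoid) that share
    nothing, so each can specialise on its own loss."""
    def __init__(self, in_ch=256, num_classes=2, num_anchors=3):
        super().__init__()
        def stem():
            # three 3x3 conv blocks, each with batch norm and SiLU
            return nn.Sequential(*[m for _ in range(3) for m in (
                nn.Conv2d(in_ch, in_ch, 3, padding=1),
                nn.BatchNorm2d(in_ch),
                nn.SiLU())])
        self.cls = nn.Sequential(
            stem(),
            nn.Conv2d(in_ch, num_anchors * num_classes, 1),
            nn.Softmax(dim=1))
        self.reg = nn.Sequential(
            stem(),
            nn.Conv2d(in_ch, num_anchors * 5, 1),  # 4 box coords + confidence
            nn.Sigmoid())

    def forward(self, x):
        return self.cls(x), self.reg(x)
```

For 3 anchors and 2 classes, the classification branch emits 6 channels and the regression branch 15 channels per feature-map cell.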
  4. The intelligent battlefield personnel injury judging method according to claim 1, wherein the wounded detection model is obtained by training a YOLOv5s network model, wherein the original generic anchor boxes in the YOLOv5s network model are replaced by optimized anchor boxes obtained through the following steps: counting the pixel sizes of all wounded-annotation bounding boxes in a battlefield live-action image set to obtain a size set; clustering the size set with the K-means++ clustering algorithm to obtain 9 cluster centers serving as the optimized anchor boxes; and dividing the 9 optimized anchor boxes into three groups by area and assigning them respectively to the three detection layers of the YOLOv5s network model.
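The anchor-optimization steps above can be sketched as K-means++ seeding followed by Lloyd iterations over (width, height) pairs, then an area sort into three groups of three. The Euclidean distance used here is a simplifying assumption; the patent does not specify its distance metric.

```python
import numpy as np

def kmeans_anchors(sizes, k=9, iters=50, seed=0):
    """Cluster (w, h) bounding-box sizes into k anchors using k-means++
    seeding and standard Lloyd iterations, then split the sorted anchors
    into 3 groups of 3 for the three detection layers."""
    rng = np.random.default_rng(seed)
    sizes = np.asarray(sizes, float)
    # --- k-means++ initialisation: sample proportionally to squared distance
    centers = [sizes[rng.integers(len(sizes))]]
    for _ in range(k - 1):
        d2 = np.min([((sizes - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(sizes[rng.choice(len(sizes), p=d2 / d2.sum())])
    centers = np.array(centers)
    # --- Lloyd iterations
    for _ in range(iters):
        labels = np.argmin(((sizes[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = sizes[labels == j].mean(0)
    # sort by box area and split into 3 groups of 3
    order = np.argsort(centers.prod(1))
    return centers[order].reshape(3, 3, 2)
```

The smallest group would go to the high-resolution detection layer and the largest to the low-resolution one, following the usual YOLO convention.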
  5. The intelligent battlefield personnel injury judging method according to claim 1, wherein the vital signs of the second target are dynamically adjusted based on the environment where the second target is located [adjustment formulas omitted in the source text], wherein the adjusted respiratory rate and the adjusted heart rate are computed from the respiratory rate before adjustment, the heart rate before adjustment, the altitude of the second target, the ambient temperature of the second target, a human-body respiration adjustment coefficient, and a heart-rate adjustment coefficient.
  6. The intelligent battlefield personnel injury judging method according to claim 1, wherein the image data and the radar data of the same scene at the same moment are obtained through synchronized acquisition by the radar and the image acquisition device.
  7. The intelligent battlefield personnel injury judging method according to claim 1, wherein extracting the position and vital signs of the second target from the radar data specifically comprises: performing a Fourier transform on the radar data along the fast-time dimension to generate a two-dimensional data matrix containing range and slow-time information; performing spectrum analysis on the slow-time sequence of each range unit in the two-dimensional data matrix, detecting whether a significant spectral peak exists in a preset vital-sign frequency band, and marking a range unit with a significant spectral peak as a range unit of interest, the distance corresponding to the range unit of interest being the radial distance of the second target; performing angle-of-arrival estimation by beamforming or an angle-dimension Fourier transform on the data of the multichannel receiving array at the range unit of interest to obtain the azimuth angle of the second target, thereby determining the position of the second target in combination with the radial distance; extracting a phase-change signal from the spatial unit where the second target is located, and decoupling a respiratory signal and a heartbeat signal from the phase-change signal by signal-processing techniques; and performing power spectral density estimation on the respiratory signal and the heartbeat signal respectively, and obtaining the respiratory frequency and heart rate of the second target by detecting the positions of the main peaks of the respective spectra.
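The final step of this claim, estimating respiration and heart rate from the slow-time phase signal, can be sketched with a simple periodogram peak search inside band limits. The sampling rate, band edges, signal amplitudes, and the synthetic phase signal below are all illustrative assumptions standing in for real radar data.

```python
import numpy as np

def dominant_rate(phase, fs, band):
    """Return the dominant frequency (Hz) of `phase` inside `band`,
    via a periodogram peak search (a simple stand-in for the power
    spectral density estimation described in the claim)."""
    spec = np.abs(np.fft.rfft(phase - phase.mean())) ** 2
    freqs = np.fft.rfftfreq(len(phase), 1 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(spec[mask])]

# synthetic slow-time phase signal: respiration ~0.3 Hz, heartbeat ~1.2 Hz
fs = 20.0                                 # slow-time sampling rate (Hz), assumed
t = np.arange(0, 30, 1 / fs)              # 30 s observation window
phase = 1.0 * np.sin(2 * np.pi * 0.3 * t) + 0.1 * np.sin(2 * np.pi * 1.2 * t)

resp_hz = dominant_rate(phase, fs, (0.1, 0.6))   # respiration band
heart_hz = dominant_rate(phase, fs, (0.8, 2.5))  # heartbeat band
```

Multiplying the recovered frequencies by 60 gives breaths per minute and beats per minute respectively.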
  8. The intelligent battlefield personnel injury judging method according to claim 1, wherein judging, based on the vital signs, the second detection result of whether the second target is a wounded person specifically comprises: if the respiratory rate of the second target is greater than a respiratory-abnormality threshold and the heart rate is greater than a heart-rate-abnormality threshold, the second detection result is a wounded person; otherwise, the second detection result is a non-wounded person.
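The rule in claim 8 is a simple conjunction of two threshold tests. The numeric thresholds below are illustrative placeholders only; the patent does not state their values.

```python
def second_detection(resp_rate, heart_rate, resp_thr=20.0, heart_thr=100.0):
    """Flag the radar target as a wounded person when BOTH the respiratory
    rate (breaths/min) and the heart rate (beats/min) exceed their
    abnormality thresholds. Threshold defaults are assumed, not from
    the patent."""
    return resp_rate > resp_thr and heart_rate > heart_thr
```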
  9. The intelligent battlefield injury judging method according to claim 1, wherein the transformation matrix is obtained in advance through joint calibration of the image acquisition device and the radar, specifically: fixing a corner reflector at the center of a checkerboard calibration plate; moving the checkerboard calibration plate to a plurality of different poses, and synchronously acquiring image data and radar data of the checkerboard calibration plate with the image acquisition device and the radar at each pose; for each pose, identifying the two-dimensional coordinates of all corner points of the checkerboard calibration plate from the image data, and further deriving the two-dimensional coordinates of the corner reflector from its relative position within the calibration plate; obtaining a plurality of three-dimensional/two-dimensional coordinate pairs of the corner reflector across the plurality of poses; and solving for the transformation matrix based on the plurality of coordinate pairs.
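Once the 3-D/2-D correspondences of the corner reflector are collected, the transformation matrix can be solved linearly. The direct linear transform (DLT) below is one standard way to do this; the patent does not specify the solver, so treat this as an assumed approach.

```python
import numpy as np

def solve_projection_dlt(pts3d, pts2d):
    """Direct linear transform: estimate a 3x4 projection matrix P
    (up to scale) from >= 6 corresponding (3-D radar point, 2-D pixel)
    pairs of the corner reflector gathered over several board poses."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        # each correspondence contributes two linear constraints on P
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    return vt[-1].reshape(3, 4)  # right singular vector of smallest value
```

With noiseless correspondences the recovered matrix reprojects the calibration points exactly (up to the overall scale ambiguity).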
  10. The intelligent battlefield injury judging method according to claim 1, further comprising performing injury grading based on the vital signs [grading conditions omitted in the source text]: if a first condition is met, the injury is judged as critical; if a second condition is met, as severe; if a third condition is met, as moderate; if a fourth condition is met, as minor; and if a fifth condition is met, the person is judged as dead; wherein the conditions are defined over the respiratory frequency and the heart rate.
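A grading function of this shape can be sketched as a cascade of threshold tests over the two vital signs. Since the patent's numeric conditions are elided in the source text, every threshold below is a hypothetical placeholder chosen only to illustrate the five-level structure.

```python
def grade_injury(resp, heart):
    """Five-level grading (dead / critical / severe / moderate / minor)
    from respiratory frequency (breaths/min) and heart rate (beats/min).
    ALL thresholds are illustrative placeholders; the patent's actual
    grading conditions are not preserved in the source text."""
    if resp == 0 and heart == 0:
        return "dead"
    if resp > 30 or heart > 140:
        return "critical"
    if resp > 25 or heart > 120:
        return "severe"
    if resp > 20 or heart > 100:
        return "moderate"
    return "minor"
```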

Description

Intelligent battlefield injury judgment method based on multisource information fusion

Technical Field

The application belongs to the technical field of intelligent perception, and particularly relates to an intelligent judgment method for battlefield injury based on multi-source information fusion.

Background

In modern battlefield and disaster-rescue environments, rapid and accurate detection of wounded persons and judgment of their injury level are key to improving rescue efficiency and survival rate. Wounded identification and injury judgment are key links in battlefield medical rescue and disaster emergency rescue, and the core problem is how to quickly and accurately locate the wounded and judge the injury in a complex environment. Most existing methods rely on manual visual inspection or a single sensor, such as photoelectric imaging equipment for identifying the wounded or a life-detection radar for detecting vital signs such as respiration and heartbeat. These methods have obvious limitations: photoelectric equipment readily fails under smoke, rain, fog, night, or other low-visibility conditions, reducing identification accuracy, while radar, although it can avoid the influence of severe natural conditions to some extent, is easily disturbed by complex electromagnetic environments and environmental noise and can produce misjudgments. Meanwhile, most existing methods can only roughly judge whether personnel are injured, lack further injury-grading capability, and struggle to meet the requirements of rescue-priority scheduling and reasonable resource allocation in complex environments. Therefore, a method for accurately identifying the injury of a wounded person in a complex environment is needed.
Disclosure of Invention

Aiming at the defects of the prior art, the application provides a battlefield personnel injury intelligent judging method based on multi-source information fusion, so as to solve the technical problem of insufficient precision in existing wounded-identification methods.

The first aspect of the application relates to a battlefield personnel injury intelligent judging method based on multi-source information fusion, comprising the following steps: acquiring image data and radar data of the same scene at the same moment through space-time synchronization; inputting the image data into a wounded-person detection model to obtain a bounding box of a first target and a first detection result of whether the first target is a wounded person; extracting the position and vital signs of a second target from the radar data, and judging, based on the vital signs, a second detection result of whether the second target is a wounded person; projecting the position of the second target into the coordinate system of the bounding box of the first target by using a transformation matrix to obtain the projection position of the second target, wherein the transformation matrix is obtained in advance through joint calibration of an image acquisition device and a radar; if the bounding box of the first target covers the projection position of the second target, associating the first target and the second target as the same target; and fusing the first detection result and the second detection result to obtain the detection result of the same target.
Preferably, the wounded detection model is obtained by training a YOLOv5s network model, wherein a combined structure of a space-to-depth layer and a 1×1 convolution layer replaces the original strided convolution layers in the YOLOv5s backbone network and neck network. The space-to-depth layer divides the input feature map into 4 local spatial regions according to a 2×2 grid and converts them from the spatial dimension into the channel dimension, achieving spatial downsampling without information loss; the 1×1 convolution layer compresses the channel dimension to one quarter using a 1×1 convolution kernel. Preferably, the wounded detection model is obtained by training a YOLOv5s network model, wherein the original coupled detection head in the YOLOv5s network model is replaced by a two-branch decoupled head with the following structure: the independent classification branch comprises three 3×3 convolution layers with batch normalization and SiLU activation, a 1×1 convolution layer, and a Softmax activation layer, and independently adopts the focal loss function to concentrate on learning class features; the independent regression branch comprises three 3×3 convolution layers with batch normalization and SiLU activation, a 1×1 convolution layer, and a Sigmoid activation layer, and independently adopts the complete intersection-over-union (CIoU) loss function to concentrate on learning the position and confidence features of the bounding boxes. Preferably, the wounded detection model is obtained based on Y