CN-121990138-A - Multi-vision lifesaving system and lifesaving method thereof

CN121990138A

Abstract

The invention discloses a multi-vision lifesaving method comprising: collecting image information of a monitored water area from different directions through a multi-vision system; constructing a three-dimensional depth model of the whole water area from the image information and performing drowning recognition to judge whether a person has fallen into the water; determining the position of the drowning person when one is detected; launching a lifeboat based on that position, while tracking and positioning both the drowning person and the lifeboat in real time through the multi-vision system; and planning an optimal rescue path by combining the three-dimensional depth model, guiding the lifeboat to arrive accurately at the designated position to carry out the rescue. The invention also provides a multi-vision lifesaving system. The scheme is a low-cost, purely image-driven solution that realizes all-weather wide-coverage monitoring, full-scene high-precision positioning, and automatic launch, rescue, and return; the whole process responds quickly and rescue efficiency is improved.

Inventors

  • LIU BIN
  • LIU YACHAO
  • LUO WEITING
  • XIAO YAQIONG

Assignees

  • Dongguan University of Technology (东莞理工学院)

Dates

Publication Date
2026-05-08
Application Date
2026-01-27

Claims (10)

  1. A multi-vision lifesaving method, comprising: collecting image information of a monitored water area from different directions through a multi-vision system; constructing a three-dimensional depth model of the whole water area from the image information, performing drowning recognition, judging whether a person has fallen into the water, and determining the position of the drowning person when one is detected; launching the lifeboat (2) based on the position of the drowning person, and tracking and positioning the drowning person and the lifeboat (2) in real time through the multi-vision system; and planning an optimal rescue path by combining the three-dimensional depth model, and guiding the lifeboat (2) to arrive accurately at the designated position to carry out the rescue.
  2. The multi-vision lifesaving method according to claim 1, wherein the image information is subjected to reflection-suppression preprocessing: each image is divided into blocks, the average gray level of each block is calculated, a block whose average gray level exceeds a threshold is judged to be a reflection block, the average gray level of each reflection block is then compressed to at most the threshold, and finally Gaussian filtering is applied to eliminate noise interference.
  3. The multi-vision lifesaving method according to claim 1, wherein the extrinsic parameters of the multi-vision system are automatically corrected during tracking and positioning: the fixed marks on the surface of the lifeboat (2) are used as a dynamic calibration object, their coordinates are extracted from the captured mark images, and, combined with the known relative positions of the mark points, the automatic correction of the extrinsic parameters of the multi-vision system is completed.
  4. The multi-vision lifesaving method according to claim 1, wherein the three-dimensional depth model is constructed from images synchronously acquired by at least 3 monitoring stations based on the multi-view geometric principle, and spatially separates the effective water area, the water-area background, and obstacles, forming environmental constraints for tracking and positioning.
  5. The multi-vision lifesaving method according to claim 4, wherein, when identifying whether a person has fallen into the water, the target is identified by posture-motion characteristics: the dual features of human posture angle and limb motion frequency are fused to construct a recognition confidence model; when the output of the confidence model exceeds a set threshold, a drowning person is judged to be present, an alarm is triggered, and the rescue process is started; to locate the drowning point, the coordinates of the monitoring stations of the multi-vision system are obtained, non-water-area background objects are first filtered out under the constraints of the station coordinates and the three-dimensional depth model, and the world coordinates of the drowning point are optimized through weighted fusion.
  6. The multi-vision lifesaving method according to claim 1, wherein the real-time position of the drowning person is predicted from the person's historical position data, and the lifeboat (2) is guided by the latest predicted position to carry out the rescue.
  7. The multi-vision lifesaving method according to claim 1, wherein the lifeboat (2) is positioned cooperatively by multi-vision and GNSS: the mark measurements of the multi-vision system, the target vision measurements, and the GNSS data are fused through extended Kalman filtering to realize accurate full-scene positioning and ensure the navigation continuity of the lifeboat (2).
  8. The multi-vision lifesaving method according to claim 7, wherein planning the optimal rescue path comprises: detecting obstacle positions through multi-vision, constructing a grid map, and planning a collision-free path based on the cost function of an improved A* algorithm, so that the lifeboat (2) reaches the drowning point safely and quickly.
  9. The multi-vision lifesaving method according to claim 4, wherein a deep-learning large model is constructed, and a stereoscopic picture of the current scene together with context information derived from the three-dimensional depth model is input in real time to realize end-to-end drowning recognition, wherein the context information comprises normal swimming, abnormal swimming, or intermittent playing.
  10. A multi-vision lifesaving system, wherein a lifeboat (2) is provided with a plurality of fixed marks on its surface to provide a positioning reference for multi-vision monitoring; the multi-vision system comprises a plurality of vision monitoring stations (1), each comprising a battery module, cameras, a positioning module, an inertial navigation module, and a data acquisition module, the vision monitoring stations (1) being distributed in a ring around the monitored water area, their cameras forming multi-angle collaborative shooting to achieve full-coverage monitoring of the rescue water area and collecting the image information of the water area in real time; a processing system receives and processes the image and spatial information, calculates the position data of each target in the environment using a multi-vision measurement method, identifies drowning persons among the water-area targets using a three-dimensional perception algorithm, sends a command to launch the lifeboat (2) when a drowning person is detected, calculates the position of the lifeboat (2) using the multi-vision measurement algorithm, and controls the lifeboat (2) to navigate to the drowning location using an autonomous navigation algorithm to carry out the rescue; and a communication system realizes real-time bidirectional communication among the multi-vision system, the processing system, and the lifeboat (2).
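The block-wise reflection suppression of claim 2 can be sketched as a mean-gray compression per block followed by a Gaussian blur. The sketch below is an illustrative interpretation, not the patent's implementation: the block size, gray threshold, linear compression rule, and filter sigma are all assumptions.

```python
import numpy as np

def suppress_reflections(gray, block=16, threshold=200.0, sigma=1.0):
    """Sketch of claim 2's reflection-suppression preprocessing.

    Blocks whose mean gray level exceeds `threshold` are treated as
    reflection blocks and linearly rescaled so their mean drops to the
    threshold; a small separable Gaussian blur then removes residual
    noise. Block size, threshold, and the rescaling rule are assumed.
    """
    out = gray.astype(np.float64).copy()
    h, w = out.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = out[y:y + block, x:x + block]   # view into `out`
            mean = tile.mean()
            if mean > threshold:                   # reflection block
                tile *= threshold / mean           # compress mean to threshold
    # Separable Gaussian filter with edge padding (same-size output).
    radius = max(1, int(3 * sigma))
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    padded = np.pad(out, radius, mode="edge")
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, padded)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, out)
    return np.clip(out, 0, 255).astype(np.uint8)
```

In a real deployment the blur would more likely come from an optimized library routine (e.g. an OpenCV Gaussian filter); the hand-rolled kernel here only keeps the sketch self-contained.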

Description

Multi-vision lifesaving system and lifesaving method thereof

Technical Field

The invention relates to the technical field of emergency rescue, in particular to a multi-vision lifesaving system and a lifesaving method thereof.

Background

In water-area monitoring, low-cost monocular vision schemes suffer from water-surface reflections, low image signal-to-noise ratio, and loss of target detail caused by obstacle interference; single-station coverage is small, and multi-station deployments require periodic manual recalibration because temperature and wind load cause extrinsic-parameter drift, making unattended operation difficult. Lidar point-cloud schemes are costly and suffer severe point-cloud loss in heavy rain and fog; recognition based on a single feature is not robust, has difficulty distinguishing playing in the water from drowning, and yields a high misjudgment rate; and biosensor schemes require active wearing and cannot cover the whole process. In rescue positioning and scheduling, GNSS accuracy degrades from 1-3 meters to 10-20 meters under bridges and in tree shadows, with outages exceeding 10 seconds that lengthen search time, while inertial navigation accumulates errors and cannot sustain long-term high-accuracy positioning. At the whole-process level, existing systems follow a segmented "monitoring - alarm - manual scheduling - rescue" architecture in which manually entering coordinates and setting parameters is time-consuming, the response is slow, and operational failures are likely.
Disclosure of Invention

The present invention aims to overcome the above drawbacks of the prior art, and its object is to provide a multi-vision lifesaving system and a lifesaving method thereof that use a low-cost, purely image-driven scheme to realize all-weather wide-coverage monitoring, full-scene high-precision positioning, and automatic launch, rescue, and return, with a fast whole-process response and improved rescue efficiency. To achieve the above object, the present invention provides a multi-vision lifesaving method comprising: collecting image information of a monitored water area from different directions through a multi-vision system; constructing a three-dimensional depth model of the whole water area from the image information, performing drowning recognition, judging whether a person has fallen into the water, and determining the position of the drowning person when one is detected; launching the lifeboat based on the position of the drowning person, and tracking and positioning the drowning person and the lifeboat in real time through the multi-vision system; and planning an optimal rescue path by combining the three-dimensional depth model, and guiding the lifeboat to arrive accurately at the designated position to carry out the rescue.
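The core multi-vision measurement step — recovering a world coordinate from synchronized views of the same target — can be illustrated with classic linear (DLT) triangulation. This is a textbook sketch, not the patent's algorithm: the camera projection matrices are assumed known from calibration, and the weighted fusion of stations described in claim 5 is reduced to an unweighted least-squares solve.

```python
import numpy as np

def triangulate(proj_mats, pixels):
    """Linear (DLT) triangulation of one 3-D point from two or more views.

    proj_mats: list of 3x4 camera projection matrices P = K [R | t]
    pixels:    list of (u, v) pixel observations, one per view
    Returns the least-squares world point (x, y, z).
    """
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        # Each view contributes two linear constraints on X = (x, y, z, 1):
        #   u * (P[2] . X) = P[0] . X   and   v * (P[2] . X) = P[1] . X
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)
    # Homogeneous solution: right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

With three or more monitoring stations the same least-squares formulation simply gains two rows per extra view, which is how additional stations tighten the position estimate.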
Further, the image information is subjected to reflection-suppression preprocessing: each image is divided into blocks, the average gray level of each block is calculated, a block whose average gray level exceeds a threshold is judged to be a reflection block, the average gray level of each reflection block is then compressed to at most the threshold, and finally Gaussian filtering is applied to eliminate noise interference. Further, the extrinsic parameters of the multi-vision system are automatically corrected during tracking and positioning: the fixed marks on the surface of the lifeboat are used as a dynamic calibration object, their coordinates are extracted from the captured mark images, and the automatic correction of the extrinsic parameters of the multi-vision system is completed by combining the known relative positions of the mark points. Further, the three-dimensional depth model is constructed from images synchronously acquired by at least 3 monitoring stations based on the multi-view geometric principle, and spatially separates the effective water area, the water-area background, and obstacles, forming environmental constraints for tracking and positioning. Further, when identifying whether a person has fallen into the water, the target is identified by posture-motion characteristics: the dual features of human posture angle and limb motion frequency are fused to construct a recognition confidence model; when the output of the confidence model exceeds a set threshold, a drowning person is judged to be present, an alarm is triggered, the rescue process is started, and a drown