CN-121999198-A - Intelligent flame detection and three-dimensional positioning method
Abstract
The invention relates to the technical field of computer vision and discloses an intelligent flame detection and three-dimensional positioning method. The method comprises: acquiring visible light images and infrared thermal imaging images in real time with a dual-spectrum pan-tilt camera; using a target detection model to output the confidence and bounding box coordinate information of each predicted target from the visible light images acquired in real time; determining a fire source target from that confidence and bounding box coordinate information; and performing three-dimensional coordinate calculation and positioning of the determined fire source target based on the dual-spectrum pan-tilt camera and a ZED depth camera, wherein the ZED depth camera comprises a ZED camera and a ZED pan-tilt unit. The method thereby integrates detection, temperature measurement and positioning in a single pipeline, and offers higher detection accuracy and spatial positioning capability than existing single-sensor systems.
Inventors
- LI FEIFAN
- WANG FANG
- XU YANGYANG
- XING JIAWEI
- LI SHUO
- SHEN ZHIHANG
- YU XIAOFEI
Assignees
- 航天科工智能机器人有限责任公司 (Aerospace Science and Industry Intelligent Robot Co., Ltd.)
Dates
- Publication Date
- 2026-05-08
- Application Date
- 2025-12-15
Claims (8)
- 1. An intelligent flame detection and three-dimensional positioning method, characterized by comprising the following steps: acquiring visible light images and infrared thermal imaging images in real time with a dual-spectrum pan-tilt camera, the dual-spectrum pan-tilt camera comprising a dual-spectrum pan-tilt unit and a left camera and a right camera mounted on it; using a target detection model to output the confidence and bounding box coordinate information of each predicted target from the visible light images acquired in real time, and determining a fire source target from that confidence and bounding box coordinate information; and performing three-dimensional coordinate calculation and positioning of the determined fire source target based on the dual-spectrum pan-tilt camera and a ZED depth camera, wherein the ZED depth camera comprises a ZED camera and a ZED pan-tilt unit.
- 2. The method of claim 1, wherein camera calibration parameters comprise a left camera intrinsic matrix K_left, a right camera intrinsic matrix K_right, a left camera distortion coefficient D_left, a right camera distortion coefficient D_right, and a rotation matrix R and translation vector T between the left camera and the right camera.
- 3. The method of claim 2, wherein using the target detection model to output the confidence and bounding box coordinate information of each predicted target from the real-time visible light images, and determining the fire source target from that confidence and bounding box coordinate information, comprises: inputting each frame of visible light image into a YOLO target detection model, the model taking any fire source in each preprocessed frame as a target and outputting the confidence and bounding box information of each predicted target; extracting the coordinates of the four corner points from the bounding box information of each predicted target output by the model; synchronously acquiring a visible light image and an infrared thermal imaging image with a preprocessing device, preprocessing the two images according to pre-stored camera calibration parameters to obtain the four projection points of the four corner points in the infrared thermal imaging image, and taking the rectangle enclosed by the four projection points as a thermometry box; placing two temperature measuring bands at the thermometry box, performing a maximum-temperature scan over the bands to obtain the maximum temperature value T0_max within the bands and the coordinates P(x, y) of its position, and judging the predicted target to be a suspected fire source when P(x, y) lies inside the thermometry box; and selecting 5 temperature measuring points at the bottom of the thermometry box, extracting the temperature value at each point, and taking the highest of them as the characteristic temperature T_max of the target; when T_max ≥ T0_max and the confidence of the predicted target is greater than or equal to 0.80, starting a consecutive-frame confirmation mechanism for the suspected fire source, and confirming the suspected fire source as the fire source target if all consecutive detection frames satisfy the judgment conditions, wherein T0_max serves as the temperature threshold.
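As an illustrative, non-limiting sketch of the per-frame judgment in claim 3, the temperature-band scan and the 5-point bottom check could look as follows in Python/NumPy. The band placement (`band_height`, band rows at one-third and two-thirds of the box) is an assumption, since the claim does not fix where the two bands sit; the consecutive-frame confirmation would simply re-run this check over several frames.

```python
import numpy as np

CONF_THRESHOLD = 0.80  # confidence threshold stated in the claim


def scan_temperature_bands(thermal, box, band_height=3):
    """Scan two horizontal temperature bands across the thermometry box.

    thermal: 2-D array of per-pixel temperatures; box: (x0, y0, x1, y1).
    Returns the band maximum T0_max and the (x, y) position where it occurs.
    Band placement is a hypothetical choice (one-third and two-thirds height).
    """
    x0, y0, x1, y1 = box
    rows = [y0 + (y1 - y0) // 3, y0 + 2 * (y1 - y0) // 3]
    best_t, best_xy = -np.inf, None
    for r in rows:
        band = thermal[r:r + band_height, x0:x1]
        iy, ix = np.unravel_index(np.argmax(band), band.shape)
        if band[iy, ix] > best_t:
            best_t, best_xy = band[iy, ix], (x0 + ix, r + iy)
    return best_t, best_xy


def is_fire_candidate(thermal, box, confidence):
    """Single-frame test: band maximum inside the box, bottom-point
    characteristic temperature T_max >= T0_max, confidence >= 0.80."""
    t0_max, (px, py) = scan_temperature_bands(thermal, box)
    x0, y0, x1, y1 = box
    inside = x0 <= px < x1 and y0 <= py < y1
    # Five thermometry points spread along the bottom edge of the box.
    xs = np.linspace(x0, x1 - 1, 5).astype(int)
    t_max = thermal[y1 - 1, xs].max()
    return bool(inside and t_max >= t0_max and confidence >= CONF_THRESHOLD)
```

A suspected fire source would then be confirmed only if `is_fire_candidate` holds over all frames of a consecutive-frame window.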
- 4. The method according to claim 3, wherein preprocessing the visible light image and the infrared thermal imaging image according to pre-stored camera calibration parameters comprises: establishing a geometric correspondence between the visible light image plane and the infrared thermal imaging image plane through the binocular stereo vision principle; performing de-distortion correction on the visible light image and the infrared thermal imaging image to obtain an undistorted visible light image and an undistorted infrared thermal imaging image; calculating a fundamental matrix F from the left camera intrinsic matrix K_left and the right camera intrinsic matrix K_right; for any pixel point p_vis = (x_v, y_v) in the undistorted visible light image, generating depth hypothesis values Z_i over a preset depth range [Z_min, Z_max] at a preset step size by a multi-depth-hypothesis method, obtaining the projection candidate points p_ir_candidate_i = (x_t_i, y_t_i) of the pixel point p_vis on the infrared image plane, then calculating the epipolar error ε_i between each candidate point p_ir_candidate_i and the pixel point p_vis by epipolar-constraint error minimization, and selecting the candidate point with the minimum epipolar error ε_i as the optimal projection coordinate (x_t, y_t) of the pixel point p_vis in the thermal imaging image, wherein i = 1, 2, 3, ....
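As a minimal NumPy sketch of claim 4's multi-depth-hypothesis projection: each depth hypothesis Z_i back-projects p_vis along its viewing ray, re-projects it into the infrared camera, and the candidate with the smallest epipolar error ε_i = |p_ir^T · F · p_vis| is kept. The depth range, step size and calibration values below are illustrative assumptions, not values from the patent.

```python
import numpy as np


def skew(t):
    """Antisymmetric matrix [t]x such that [t]x @ v equals the cross product t x v."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])


def project_vis_to_ir(p_vis, K_left, K_right, R, T,
                      z_min=0.5, z_max=10.0, step=0.1):
    """Project a visible-light pixel onto the IR image plane by
    multi-depth hypotheses plus epipolar-constraint error minimization."""
    F = np.linalg.inv(K_right).T @ skew(T) @ R @ np.linalg.inv(K_left)
    p_h = np.array([p_vis[0], p_vis[1], 1.0])       # homogeneous pixel
    ray = np.linalg.inv(K_left) @ p_h               # normalized viewing ray
    best = None
    for z in np.arange(z_min, z_max + 1e-9, step):
        X = z * ray                                 # 3-D point at depth z
        x_ir = K_right @ (R @ X + T)                # project into IR camera
        p_ir = np.array([x_ir[0] / x_ir[2], x_ir[1] / x_ir[2], 1.0])
        eps = abs(p_ir @ F @ p_h)                   # epipolar error
        if best is None or eps < best[0]:
            best = (eps, (p_ir[0], p_ir[1]))
    return best[1]
```

Note that with exact calibration every candidate lies on the epipolar line (ε_i ≈ 0 for all i); the minimization becomes meaningful once distortion residue and calibration noise perturb the candidates.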
- 5. The method of claim 4, wherein the fundamental matrix F is calculated by: F = K_right^(-T) · [T]× · R · K_left^(-1), wherein [T]× denotes the antisymmetric (skew-symmetric) matrix of the translation vector T.
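The formula of claim 5 can be written directly in NumPy; the calibration matrices below are placeholders for the pre-stored parameters of claim 2.

```python
import numpy as np


def fundamental_matrix(K_left, K_right, R, T):
    """F = K_right^(-T) · [T]x · R · K_left^(-1), with [T]x the
    antisymmetric (skew-symmetric) matrix of the translation vector T."""
    Tx = np.array([[0.0, -T[2], T[1]],
                   [T[2], 0.0, -T[0]],
                   [-T[1], T[0], 0.0]])
    return np.linalg.inv(K_right).T @ Tx @ R @ np.linalg.inv(K_left)
```

For any true correspondence the epipolar constraint p_right^T · F · p_left = 0 holds, which is what the candidate-selection step of claim 4 exploits.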
- 6. The method of claim 5, wherein the epipolar error ε_i is calculated by: ε_i = |p_ir_candidate_i^T · F · p_vis|.
- 7. The method of claim 6, wherein three-dimensional coordinate calculation and positioning of the determined fire source target comprises: reading the midpoint coordinate of the thermometry box in which the fire source target is located, determining the midpoint coordinate of the projected bounding box in the visible light image through the established geometric correspondence, and calculating the angular offset of the fire source target relative to the dual-spectrum pan-tilt unit based on the field-of-view parameters of the camera acquiring the visible light image and the attitude information of the dual-spectrum pan-tilt unit on which that camera is mounted; establishing a field-of-view model, and converting the image coordinates (x_img, y_img) into angular offsets (Δθ_pan, Δθ_tilt) through the model, wherein Δθ_pan is the horizontal offset angle and Δθ_tilt is the vertical offset angle; controlling the dual-spectrum pan-tilt unit to rotate so as to move the fire source target to the center of the field of view of the camera acquiring the visible light image, and then starting the ZED depth camera to perform three-dimensional detection of the fire source target; the ZED depth camera acquiring the three-dimensional coordinates (x_zed, y_zed, z_zed) of the fire source target by the binocular stereo vision principle, and converting coordinates in the ZED camera coordinate system into coordinates in the ZED pan-tilt base coordinate system using a pre-calibrated hand-eye transformation matrix T_hand_eye; establishing a transformation matrix T_platform from the pre-measured displacement parameters between the ZED pan-tilt unit and the dual-spectrum pan-tilt unit, and further converting the coordinates in the ZED pan-tilt base coordinate system into the dual-spectrum pan-tilt base coordinate system; then, combining the real-time attitude parameters of the dual-spectrum pan-tilt unit, converting the coordinates in the pan-tilt base coordinate system into the infrared camera coordinate system through a pan-tilt kinematic model, and establishing a geometric correspondence from the ZED image to the infrared thermal imaging image; and projecting the thermometry box in which the fire source target is located in the thermal imaging image into the ZED image, thereby fusing the three-dimensional coordinate information and temperature information of the fire source target.
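The transform chain of claim 7 (ZED camera frame → ZED pan-tilt base frame via T_hand_eye, then → dual-spectrum pan-tilt base frame via T_platform) can be sketched with 4x4 homogeneous matrices. The rotations and offsets below are purely hypothetical stand-ins for the pre-calibrated and pre-measured values the patent assumes.

```python
import numpy as np


def to_hom(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and translation."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M


# Hypothetical calibration results (determined offline in the patent's method):
T_hand_eye = to_hom(np.eye(3), [0.02, 0.0, 0.05])   # ZED camera -> ZED pan-tilt base
T_platform = to_hom(np.eye(3), [0.30, 0.0, 0.00])   # ZED base -> dual-spectrum base


def zed_to_dual_spectrum_base(p_zed):
    """Chain the calibrated transforms to express a ZED 3-D point
    in the dual-spectrum pan-tilt base coordinate system."""
    p = np.array([p_zed[0], p_zed[1], p_zed[2], 1.0])
    return (T_platform @ T_hand_eye @ p)[:3]
```

The further step into the infrared camera frame would multiply in one more transform built from the pan-tilt unit's real-time attitude via its kinematic model.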
- 8. The method of claim 7, wherein the image coordinates (x_img, y_img) are converted to the angular offsets (Δθ_pan, Δθ_tilt) by: Δθ_pan = arctan(((x_img/W_img − 0.5)/0.5) × tan(α_h/2)), Δθ_tilt = arctan(((y_img/H_img − 0.5)/0.5) × tan(α_v/2)), where W_img and H_img are the image width and height, respectively, α_h is the horizontal field-of-view angle and α_v is the vertical field-of-view angle.
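Claim 8's field-of-view model translates directly into code; the image size and field-of-view angles below are illustrative values, not ones fixed by the patent.

```python
import math


def image_to_angle_offset(x_img, y_img, W_img, H_img, alpha_h, alpha_v):
    """Convert pixel coordinates to pan/tilt angular offsets per claim 8.

    alpha_h, alpha_v: horizontal/vertical field-of-view angles in radians.
    Returns (d_pan, d_tilt) in radians; both are 0 at the image center
    and reach +/- half the field of view at the image edges.
    """
    d_pan = math.atan(((x_img / W_img - 0.5) / 0.5) * math.tan(alpha_h / 2))
    d_tilt = math.atan(((y_img / H_img - 0.5) / 0.5) * math.tan(alpha_v / 2))
    return d_pan, d_tilt
```

For a 640x480 image with a 90° horizontal field of view, the image center maps to a zero offset and the right edge maps to a 45° pan offset, as expected from the arctan/tan construction.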
Description
Intelligent flame detection and three-dimensional positioning method
Technical Field
The invention relates to the technical field of computer vision, in particular to an intelligent flame detection and three-dimensional positioning method.
Background
In recent years, intelligent fire-fighting systems have placed ever higher demands on the accuracy and response speed of flame detection, while traditional flame detection relies on a single sensor and therefore has obvious limitations. Existing methods struggle to integrate fast flame identification, temperature measurement and spatial positioning in complex indoor environments, which limits the efficiency of early fire warning and accurate response. In addition, most systems lack depth information, so their positioning accuracy is insufficient and cannot meet the intelligent fire-fighting system's requirements for multidimensional sensing and real-time response.
Disclosure of Invention
The invention provides an intelligent flame detection and three-dimensional positioning method, which addresses the problems in the prior art that a single sensor is easily occluded, is strongly affected by environmental interference, and has difficulty achieving non-contact temperature measurement and three-dimensional spatial positioning simultaneously.
The invention provides an intelligent flame detection and three-dimensional positioning method, which comprises the following steps: acquiring visible light images and infrared thermal imaging images in real time with a dual-spectrum pan-tilt camera, the dual-spectrum pan-tilt camera comprising a dual-spectrum pan-tilt unit and a left camera and a right camera mounted on it; using a target detection model to output the confidence and bounding box coordinate information of each predicted target from the visible light images acquired in real time, and determining a fire source target from that confidence and bounding box coordinate information; and performing three-dimensional coordinate calculation and positioning of the determined fire source target based on the dual-spectrum pan-tilt camera and a ZED depth camera, wherein the ZED depth camera comprises a ZED camera and a ZED pan-tilt unit. Preferably, the camera calibration parameters comprise a left camera intrinsic matrix K_left, a right camera intrinsic matrix K_right, a left camera distortion coefficient D_left, a right camera distortion coefficient D_right, and a rotation matrix R and translation vector T between the left camera and the right camera.
Preferably, using the target detection model to output the confidence and bounding box coordinate information of each predicted target from the real-time visible light images, and determining the fire source target from that confidence and bounding box coordinate information, comprises: inputting each frame of visible light image into a YOLO target detection model, the model taking any fire source in each preprocessed frame as a target and outputting the confidence and bounding box information of each predicted target; extracting the coordinates of the four corner points from the bounding box information of each predicted target output by the model; synchronously acquiring a visible light image and an infrared thermal imaging image with a preprocessing device, preprocessing the two images according to pre-stored camera calibration parameters to obtain the four projection points of the four corner points in the infrared thermal imaging image, and taking the rectangle enclosed by the four projection points as a thermometry box; placing two temperature measuring bands at the thermometry box, performing a maximum-temperature scan over the bands to obtain the maximum temperature value T0_max within the bands and the coordinates P(x, y) of its position, and judging the predicted target to be a suspected fire source when P(x, y) lies inside the thermometry box; and selecting 5 temperature measuring points at the bottom of the thermometry box, extracting the temperature value at each point, and taking the highest of them as the characteristic temperature T_max of the target; when T_max ≥ T0_max and the confidence of the predicted target is greater than or equal to 0.80, a consecutive-frame confirmation mechanism is started for the suspected fire source