CN-122024181-A - Intelligent fire detection and positioning method based on visual recognition
Abstract
The invention relates to the technical field of visual fire detection, in particular to an intelligent fire detection and positioning method based on visual recognition. Dynamic background modeling is performed on a base visual stream to obtain a steady-state feature model; spectral transformation is applied to an auxiliary visual stream to generate an enhanced spectral response stream, from which fire-related spectral features are extracted and compared point by point with the steady-state model to mark disturbance areas. The disturbance areas then undergo two-dimensional spatial-domain and temporal-domain analysis: a fire score is calculated for each area by combining spatial features such as contour, texture, and color with temporal features such as area change, shape evolution, and position drift, and a fire judgment result and geographic positioning coordinates are output from the spatio-temporal information of high-scoring disturbance areas. The method reduces environmental interference and improves fire identification accuracy and positioning precision.
Inventors
- TIAN GUILAN
Assignees
- 北京金舟建维消防科技股份有限公司
Dates
- Publication Date: 2026-05-12
- Application Date: 2026-04-15
Claims (10)
- 1. An intelligent fire detection and positioning method based on visual recognition, characterized by comprising the following steps: generating a video source signal from continuous frame images of the scene panorama acquired by a video sensor group; separating, from the video source signal, a base visual stream for preliminary analysis and an auxiliary visual stream for cross-validation; scanning the base visual stream frame by frame and generating a steady-state feature model of the scene through dynamic background modeling; performing spectral transformation on the auxiliary visual stream to generate an enhanced spectral response stream; extracting features matching a preset potential-fire spectral interval from the enhanced spectral response stream to generate a potential-fire spectral feature set, comparing the set point by point with the steady-state feature model, marking disturbance areas that exceed the steady-state features, and generating a primary disturbance area list; starting multi-stage analysis of the video source signal according to the primary disturbance area list, the multi-stage analysis comprising a spatial-domain analysis stage and a temporal-domain analysis stage; in the spatial-domain analysis stage, calculating the boundary contour, internal texture complexity, and color statistical distribution of each primary disturbance area; in the temporal-domain analysis stage, tracking the area change, shape evolution path, and position drift trajectory of each primary disturbance area across multiple frames; combining the spatial-domain and temporal-domain analysis results to calculate a fire score for each primary disturbance area; and, based on the spatio-temporal information of the primary disturbance areas whose fire scores exceed a decision threshold, generating a fire judgment result and geographic positioning coordinates, and outputting a control instruction.
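As an illustrative sketch (not the patented implementation), the two framing steps of claim 1 — splitting the source signal into a base stream and an auxiliary stream, then gating results on a fire-score decision threshold — might look as follows. The frame layout (`rgb`/`ir` keys), the region records, and the threshold value are all assumptions for the example.

```python
def split_streams(frames):
    """Separate the video source signal into a base visual stream (for
    preliminary analysis) and an auxiliary visual stream (for cross-validation).
    Assumes each frame carries co-registered visible and infrared planes."""
    base = [f["rgb"] for f in frames]
    aux = [f["ir"] for f in frames]
    return base, aux

def decide(regions, threshold=0.7):
    """Keep only primary disturbance areas whose fire score exceeds the
    decision threshold; each surviving record carries its geo coordinates."""
    return [r for r in regions if r["score"] > threshold]
```

The split lets the two streams be processed with different operators (background modeling vs. spectral transformation) before their evidence is recombined.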
- 2. The intelligent fire detection and positioning method based on visual recognition according to claim 1, wherein scanning the base visual stream frame by frame and generating the steady-state feature model of the scene through dynamic background modeling comprises: selecting a plurality of consecutive frames, starting from the beginning of the base visual stream, as background modeling reference frames; extracting the color value sequence of each pixel across the consecutive reference frames; fitting a Gaussian mixture model to the color value sequence of each pixel to generate a multi-modal set of steady-state Gaussian components for that pixel; evaluating the weights of each pixel's steady-state Gaussian component set and defining the highest-weight component as the pixel's steady-state characteristic Gaussian component; collecting the steady-state characteristic Gaussian components of all pixels into a two-dimensional steady-state feature matrix; calculating the color-space similarity of the steady-state characteristic Gaussian components of adjacent pixels in the matrix and constructing steady-state feature region boundaries based on that similarity; merging adjacent pixels with similar steady-state characteristic Gaussian components into steady-state feature blocks according to those boundaries; and assigning a unique feature block identifier to each steady-state feature block, recording the average color value and spatial covariance matrix corresponding to each identifier, which together form the steady-state feature model.
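A minimal single-pixel sketch of the Gaussian-mixture background modeling in claim 2, in the spirit of the classic Stauffer–Grimson online update. The learning rate, component count, matching threshold, and variance floor are illustrative assumptions, not values taken from the patent.

```python
class PixelMOG:
    """Online Gaussian mixture model for one pixel's intensity sequence.
    Each component is stored as [weight, mean, variance]."""

    def __init__(self, k=3, lr=0.05, match_thresh=2.5):
        self.k, self.lr, self.t = k, lr, match_thresh
        self.comps = []

    def update(self, x):
        # Find the first component within t standard deviations of x.
        matched = None
        for c in self.comps:
            if abs(x - c[1]) <= self.t * c[2] ** 0.5:
                matched = c
                break
        for c in self.comps:          # decay all weights
            c[0] *= (1 - self.lr)
        if matched:                   # reinforce and adapt the matched component
            matched[0] += self.lr
            matched[1] += self.lr * (x - matched[1])
            matched[2] += self.lr * ((x - matched[1]) ** 2 - matched[2])
            matched[2] = max(matched[2], 1.0)   # variance floor
        elif len(self.comps) < self.k:
            self.comps.append([self.lr, x, 25.0])
        else:                         # replace the lowest-weight component
            worst = min(self.comps, key=lambda c: c[0])
            worst[:] = [self.lr, x, 25.0]
        s = sum(c[0] for c in self.comps)
        for c in self.comps:
            c[0] /= s

    def steady_component(self):
        """The highest-weight component: the pixel's steady-state
        characteristic Gaussian component in the sense of claim 2."""
        return max(self.comps, key=lambda c: c[0])
```

Applied per pixel over the reference frames, the collected steady components form the two-dimensional steady-state feature matrix described above.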
- 3. The intelligent fire detection and positioning method based on visual recognition according to claim 2, wherein performing the spectral transformation on the auxiliary visual stream to generate the enhanced spectral response stream comprises: separating an infrared band image and a short-wave infrared band image from the original auxiliary visual stream obtained by a multispectral camera; performing histogram equalization on the infrared band image to enhance overall contrast, generating a first processed spectral image; performing adaptive gamma correction on the short-wave infrared band image to highlight the energy characteristics of a preset band, generating a second processed spectral image; performing pixel-level weighted fusion of the first and second processed spectral images to generate a fused spectral image; applying to the fused spectral image a filter based on a preset fire spectrum reference, the reference being generated from a standard flame spectrum database; calculating the response intensity of each pixel of the fused spectral image against the preset fire spectrum reference to obtain a preliminary spectral response map; performing multi-scale morphological operations on the preliminary spectral response map to remove isolated noise responses and connect adjacent strong-response areas into connected regions; extracting the geometric center coordinates, bounding rectangle, and response intensity peak of each connected region to form a spectral feature descriptor list; and marking the corresponding response areas on each frame of the original auxiliary visual stream based on the spectral feature descriptor list, generating the enhanced spectral response stream.
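The three enhancement steps of claim 3 can be sketched on flat lists of 8-bit intensities (a stand-in for real image planes). The gamma value and fusion weight below are illustrative assumptions; real deployments would tune them per band.

```python
def equalize(img, levels=256):
    """Histogram equalization over a flat list of intensity values."""
    hist = [0] * levels
    for v in img:
        hist[v] += 1
    cdf, s = [], 0
    for h in hist:
        s += h
        cdf.append(s)
    n = len(img)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:                 # flat image: nothing to equalize
        return list(img)
    return [round((cdf[v] - cdf_min) / (n - cdf_min) * (levels - 1)) for v in img]

def gamma_correct(img, gamma=0.5, levels=256):
    """Gamma correction; gamma < 1 brightens, emphasizing weak energy."""
    return [round(((v / (levels - 1)) ** gamma) * (levels - 1)) for v in img]

def fuse(a, b, w=0.6):
    """Pixel-level weighted fusion of two processed spectral images."""
    return [round(w * x + (1 - w) * y) for x, y in zip(a, b)]
```

The fused image would then be correlated against the fire spectrum reference and cleaned with morphological opening/closing, as the claim describes.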
- 4. The intelligent fire detection and positioning method based on visual recognition according to claim 3, wherein generating the potential-fire spectral feature set, comparing it point by point with the steady-state feature model, and marking disturbance areas exceeding the steady-state features comprises: traversing each response region in the enhanced spectral response stream defined by the spectral feature descriptor list; locating, in the base visual stream, the pixel position corresponding to the geometric center coordinates of the response region; reading from the steady-state feature model the mean and variance of the steady-state characteristic Gaussian component for that pixel position; reading the actual color value of that pixel position from the current frame of the base visual stream; calculating the probability that the actual color value falls within the color distribution interval defined by the corresponding steady-state characteristic Gaussian component, yielding a steady-state coincidence value; if the steady-state coincidence value is below a set steady-state deviation threshold, judging the pixel position to be a steady-state deviation point; collecting all steady-state deviation points and marking the response regions to which they belong as suspicious regions; repeating the steady-state coincidence calculation for all pixels of each suspicious region in the base visual stream, and counting the number of steady-state deviation points and their total area ratio in each suspicious region; combining these counts with the response intensity peak of the suspicious region in the enhanced spectral response stream to obtain a composite disturbance intensity for the suspicious region by weighted calculation; and screening suspicious regions whose composite disturbance intensity exceeds a disturbance threshold to form the primary disturbance area list, each item of which comprises the boundary information and the composite disturbance intensity of the region.
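One natural reading of the steady-state coincidence value in claim 4 is a Gaussian tail probability: how likely a sample from the steady-state component deviates at least as far as the observed pixel does. The sketch below uses that interpretation; the deviation threshold and the weighting of the composite intensity are illustrative assumptions.

```python
import math

def steady_coincidence(value, mean, var):
    """Two-sided tail probability that a sample from N(mean, var) deviates
    at least as much as |value - mean|. Equals 1.0 when value == mean."""
    z = abs(value - mean) / math.sqrt(var)
    return math.erfc(z / math.sqrt(2))

def is_deviation_point(value, mean, var, thresh=0.01):
    """A pixel is a steady-state deviation point when its coincidence
    value falls below the steady-state deviation threshold."""
    return steady_coincidence(value, mean, var) < thresh

def disturbance_intensity(dev_ratio, peak_response, w=0.6):
    """Weighted combination of the deviation-point area ratio and the
    spectral response peak (both assumed normalized to [0, 1])."""
    return w * dev_ratio + (1 - w) * peak_response
```

Regions whose `disturbance_intensity` exceeds the disturbance threshold would enter the primary disturbance area list.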
- 5. The intelligent fire detection and positioning method based on visual recognition according to claim 4, wherein calculating the boundary contour, internal texture complexity, and color statistical distribution of each primary disturbance area comprises: extracting a pixel point set from the boundary information of each primary disturbance area; performing convex hull computation on the pixel point set to generate the convex polygon contour of the primary disturbance area and recording the coordinates of each vertex of the contour; calculating the ratio of the area to the perimeter of the convex polygon contour as the contour compactness feature; extracting grayscale image sub-blocks within the boundary of the primary disturbance area; applying a local binary pattern operator to the image sub-blocks, counting the histogram of the local binary pattern codes, and calculating the entropy of the histogram as the internal texture complexity feature; within the boundary of the primary disturbance area, computing the mean, variance, skewness, and kurtosis of all pixels in a plurality of color channels to form a color statistical distribution vector; and combining the contour compactness feature, the internal texture complexity feature, and the color statistical distribution vector to generate the spatial feature vector of the primary disturbance area.
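The convex hull and contour compactness steps of claim 5 can be sketched with the standard Andrew monotone-chain algorithm plus the shoelace formula. This is a generic implementation of those well-known techniques, assumed (not stated by the patent) to be the ones intended.

```python
import math

def convex_hull(points):
    """Andrew monotone chain; returns the hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def compactness(hull):
    """Contour compactness feature: polygon area / perimeter
    (shoelace area over summed edge lengths)."""
    n, area, per = len(hull), 0.0, 0.0
    for i in range(n):
        x1, y1 = hull[i]
        x2, y2 = hull[(i + 1) % n]
        area += x1 * y2 - x2 * y1
        per += math.hypot(x2 - x1, y2 - y1)
    return abs(area) / 2.0 / per if per else 0.0
```

Flame regions tend to have ragged, low-compactness contours, which is why this ratio is a useful spatial feature.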
- 6. The intelligent fire detection and positioning method based on visual recognition according to claim 5, wherein tracking the area change, shape evolution path, and position drift trajectory of each primary disturbance area across multiple frames comprises: dynamically tracking each primary disturbance area over a number of consecutive frames of the base visual stream following the frames containing it; in the current frame, performing template matching within a preset neighborhood starting from the center position of the primary disturbance area determined in the previous frame, to determine its approximate position in the current frame; at that approximate position, performing region-growing segmentation using the previous frame's color statistical distribution vector of the primary disturbance area as a reference, obtaining the segmented region of the current frame; calculating the area of the current frame's segmented region, comparing it with the previous frame's area, and computing the area change rate; extracting the convex polygon contour of the current frame's segmented region and calculating its Jaccard similarity coefficients with the convex polygon contours of several previous frames, describing the shape evolution path; recording the geometric center coordinates of the current frame's segmented region and calculating the Euclidean distance to the previous frame's geometric center, describing the inter-frame drift; smoothing the area-change-rate sequence, the Jaccard-similarity-coefficient sequence, and the position-drift-distance sequence over consecutive frames to remove noise points; calculating the mean and variance of the smoothed area-change-rate sequence, the overall descending slope of the Jaccard-similarity-coefficient sequence, and the cumulative drift of the position-drift-distance sequence; and combining the mean and variance of the area-change-rate sequence, the overall descending slope of the Jaccard-similarity-coefficient sequence, and the cumulative drift of the position-drift-distance sequence to generate the temporal evolution feature vector of the primary disturbance area.
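Three of the temporal measures in claim 6 — Jaccard similarity between regions, smoothing of the per-frame sequences, and cumulative drift of the center trajectory — can be sketched as follows. Regions are represented here as sets of pixel coordinates and smoothing as a trailing moving average; both representations are illustrative assumptions.

```python
import math

def jaccard(a, b):
    """Jaccard similarity coefficient of two pixel sets:
    |intersection| / |union|."""
    a, b = set(a), set(b)
    union = len(a | b)
    return len(a & b) / union if union else 1.0

def moving_average(seq, w=3):
    """Trailing moving average used to smooth the per-frame sequences."""
    out = []
    for i in range(len(seq)):
        lo = max(0, i - w + 1)
        out.append(sum(seq[lo:i + 1]) / (i - lo + 1))
    return out

def cumulative_drift(centers):
    """Summed Euclidean inter-frame distance of the region centers."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(centers, centers[1:]))
```

A steadily falling Jaccard sequence (shape churn) together with a growing area and bounded drift is the signature of a spreading fire rather than, say, a passing vehicle.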
- 7. The intelligent fire detection and positioning method based on visual recognition according to claim 6, wherein combining the spatial-domain and temporal-domain analysis results to calculate the fire score of each primary disturbance area comprises: for each primary disturbance area, concatenating its spatial feature vector and temporal evolution feature vector into a combined feature vector; retrieving from a historical fire case database the several historical cases closest to the current combined feature vector in feature-space distance; reading the final fire judgment result labels of those historical cases; assigning each historical case a weight according to its distance from the current combined feature vector, the closer the distance the larger the weight; performing weighted voting on the historical cases' final fire judgment labels and calculating the ratio of the total weight voting "fire" to the total weight of all votes, yielding a preliminary case-matching fire probability; nonlinearly combining the composite disturbance intensity, the internal texture complexity feature, and the mean of the area-change-rate sequence of the current primary disturbance area, and mapping the combination to a correction factor; and adjusting the preliminary case-matching fire probability with the correction factor to obtain the corrected fire score.
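The distance-weighted case voting of claim 7 is essentially weighted k-nearest-neighbor classification. A minimal sketch, assuming Euclidean feature-space distance and inverse-distance weights (the patent does not specify either choice):

```python
import math

def fire_probability(query, cases, k=5, eps=1e-6):
    """Distance-weighted vote over the k historical cases nearest to the
    combined feature vector. `cases` is a list of (feature_vector, label)
    pairs with label 1 meaning the case was finally judged 'fire'."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(cases, key=lambda c: dist(query, c[0]))[:k]
    total = fire = 0.0
    for feat, label in nearest:
        w = 1.0 / (dist(query, feat) + eps)   # closer case, larger weight
        total += w
        if label == 1:
            fire += w
    return fire / total if total else 0.0
```

The result would then be scaled by the correction factor derived from disturbance intensity, texture complexity, and area growth before thresholding.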
- 8. The intelligent fire detection and positioning method based on visual recognition according to claim 7, wherein generating the fire judgment result and geographic positioning coordinates comprises: screening all primary disturbance areas whose fire scores exceed a preset fire judgment threshold; extracting the vertex coordinate sets of the convex polygon contours from the spatial feature vectors of the primary disturbance areas meeting the threshold condition; mapping the vertex coordinate sets from the image coordinate system to the actual geographic coordinate system via a pre-calibrated homography transformation matrix, obtaining the polygon boundary coordinates of the fire area in geographic space; for the same suspected fire source, if primary disturbance areas meeting the threshold condition correspond across consecutive frames, treating them as observations of the same fire event at different times; fusing the geographic polygon boundary coordinates calculated for the fire event in different frames, taking the union of the polygon coordinates as the final localized geographic area of the fire event; generating a fire event identifier for each individual final localized geographic area; binding the fire event identifier, the boundary coordinates of the final localized geographic area, the timestamp of first detection, and the corresponding average fire score into a complete fire judgment and positioning record; and packaging the fire judgment and positioning record into a structured data format, output as the core content of the control instruction.
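Mapping image vertices to geographic coordinates through a pre-calibrated homography, as in claim 8, is a standard projective transform: multiply by the 3×3 matrix, then divide by the homogeneous coordinate. A minimal sketch (the matrix values are placeholders for a real calibration):

```python
def apply_homography(H, pt):
    """Map an image point (x, y) through a 3x3 homography matrix H and
    normalize by the homogeneous coordinate w."""
    x, y = pt
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xs / w, ys / w)

def map_polygon(H, vertices):
    """Project every contour vertex into the geographic coordinate system."""
    return [apply_homography(H, v) for v in vertices]
```

In practice H would be estimated once from ground control points visible in the camera view; the mapped polygons from successive frames are then merged by union into the final localized area.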
- 9. The intelligent fire detection and positioning method based on visual recognition according to claim 8, further comprising a feedback learning phase performed after outputting the control instruction, the feedback learning phase comprising: continuously collecting a follow-up monitoring video segment of preset length after the control instruction triggers an external response; within the follow-up monitoring video segment, confirming the true development of the fire manually or via a more reliable independent sensor, the true development including whether the fire is real and its actual spread range, and generating a fire ground-truth label; extracting the actual evolution data of the regions in the primary disturbance area list over the period from the start of the follow-up segment, the actual evolution data including actual spatial feature changes and temporal evolution trajectories; comparing the actual evolution data with the initially generated spatial feature vector and temporal evolution feature vector to calculate a feature prediction deviation; correlating the feature prediction deviation and the initial fire score with the fire ground-truth label to form a feedback learning sample; and storing the feedback learning sample in the historical fire case database, and updating the case-matching weight distribution rules or nonlinear combination function parameters in the fire-score calculation model.
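One simple way to act on the feedback samples of claim 9 is to append each confirmed case to the database and nudge a decision parameter whenever prediction and ground truth disagree. The threshold-nudging rule below is my illustrative assumption; the patent speaks more generally of updating weight distribution rules or nonlinear combination parameters.

```python
class CaseDatabase:
    """Stores confirmed (features, truth) cases and adapts the fire
    judgment threshold from feedback. The nudge rule is a sketch."""

    def __init__(self, threshold=0.5, lr=0.05):
        self.cases = []
        self.threshold = threshold
        self.lr = lr

    def add_feedback(self, features, score, truth):
        # Every confirmed case enriches future case matching.
        self.cases.append((features, truth))
        predicted = 1 if score >= self.threshold else 0
        if predicted != truth:
            # False alarm -> raise the bar; missed fire -> lower it.
            self.threshold += self.lr if truth == 0 else -self.lr
```

Correctly predicted cases leave the threshold alone, so the parameter drifts only as fast as the system keeps making mistakes.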
- 10. The intelligent fire detection and positioning method based on visual recognition according to claim 9, further comprising a scene-adaptive modeling phase performed at system initialization, the scene-adaptive modeling phase comprising: collecting video data of a complete natural day in the monitored scene without fire as a scene modeling sample; analyzing the scene modeling sample to identify periodically occurring interference sources in the scene, the interference sources including vehicle headlights, lighting lamps being switched on, and moving sunlight spots; extracting the visual feature pattern of each periodic interference source and the temporal law of its appearance, generating an interference-source feature-time pattern library; in subsequent real-time detection, when the spatial features and occurrence time of a primary disturbance area identified in the base and auxiliary visual streams match a pattern in the interference-source feature-time pattern library beyond a preset interference matching threshold, greatly attenuating the composite disturbance intensity of that primary disturbance area or removing it directly from the primary disturbance area list; counting, in the scene modeling sample, the natural fluctuation ranges of the steady-state feature model parameters under different weather conditions; and using the natural fluctuation range as a dynamic threshold for updating the steady-state feature model, triggering slow updates of the model during real-time detection only when pixel color changes exceed the dynamic threshold, thereby preventing the model from being contaminated by transient interference.
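The feature-time pattern matching of claim 10 can be sketched as cosine similarity against library entries that are only considered inside their time-of-day window. The record layout, similarity measure, window size, and attenuation factor are all illustrative assumptions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    num = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return num / (na * nb) if na and nb else 0.0

def attenuate_known_interference(region, pattern_lib,
                                 match_thresh=0.8, factor=0.1):
    """If a disturbance area matches a known periodic interference source
    (similar features, compatible time of day), damp its composite
    disturbance intensity instead of letting it trigger an alarm."""
    best = 0.0
    for pat in pattern_lib:
        if abs(region["time_of_day"] - pat["time_of_day"]) > pat["time_window"]:
            continue                      # pattern not active at this hour
        best = max(best, cosine(region["feature"], pat["feature"]))
    if best >= match_thresh:
        region = dict(region, intensity=region["intensity"] * factor)
    return region
```

A headlight sweep at its usual commuting hour is thus attenuated, while the same visual pattern at an unusual hour keeps its full disturbance intensity.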
Description
Intelligent fire detection and positioning method based on visual recognition

Technical Field

The invention relates to the technical field of visual fire detection, in particular to an intelligent fire detection and positioning method based on visual recognition.

Background

Conventional video-based fire detection technology mostly adopts single-path visual stream processing: panoramic continuous frame images acquired by a video sensor are processed as a whole, and fire judgment is completed from the color, brightness, or texture features of a single image domain. Technologies that partially introduce spectral features do not split the video signal into separate streams, and perform spectral analysis and background modeling only on the whole image. Such technology marks suspected regions from single-dimension features alone, provides no split-processing mechanism of basic analysis plus cross-validation, and matches spectral features to the background model mainly by whole-image comparison. This single-stream processing mode is easily affected by ambient light, clutter, and similar interference; fire feature extraction lacks specificity; the auxiliary spectral features are not enhanced; whole-image comparison with the steady-state background model cannot accurately identify local abnormal disturbances; and a large number of invalid marks may be generated. In the prior art, analysis of suspected regions uses only single-frame spatial features or simple temporal change judgments, without quantifying refined spatial-domain features of disturbance regions or continuously tracking their dynamic changes over time, so fire judgment results are prone to deviation and geographic positioning accuracy is insufficient.
The invention aims to solve the problem of low feature-analysis precision caused by unsplit video source signals by separating a base visual stream and an auxiliary visual stream, accurately marking disturbance areas through spectral enhancement of the auxiliary stream and point-by-point comparison of spectral features with the steady-state model. It also addresses the problem of single-dimension analysis of disturbance areas by performing multi-feature spatial-domain calculation and temporal-domain trajectory tracking on the disturbance areas, completing fire judgment and positioning from the multi-dimensional analysis results.

Disclosure of Invention

The invention aims to remedy the defects of the prior art and provides an intelligent fire detection and positioning method based on visual recognition. To achieve this purpose, the invention adopts the following technical scheme. The intelligent fire detection and positioning method based on visual recognition comprises the following steps: generating a video source signal from continuous frame images of the scene panorama acquired by a video sensor group; separating, from the video source signal, a base visual stream for preliminary analysis and an auxiliary visual stream for cross-validation; scanning the base visual stream frame by frame and generating a steady-state feature model of the scene through dynamic background modeling; performing spectral transformation on the auxiliary visual stream to generate an enhanced spectral response stream; extracting features matching a preset potential-fire spectral interval from the enhanced spectral response stream to generate a potential-fire spectral feature set, comparing the set point by point with the steady-state feature model, marking disturbance areas that exceed the steady-state features, and generating a primary disturbance area list; starting multi-stage analysis of the video source signal according to the primary disturbance area list, the multi-stage analysis comprising a spatial-domain analysis stage and a temporal-domain analysis stage; in the spatial-domain analysis stage, calculating the boundary contour, internal texture complexity, and color statistical distribution of each primary disturbance area; in the temporal-domain analysis stage, tracking the area change, shape evolution path, and position drift trajectory of each primary disturbance area across multiple frames; combining the spatial-domain and temporal-domain analysis results to calculate a fire score for each primary disturbance area; and, based on the spatio-temporal information of the primary disturbance areas whose fire scores exceed a decision threshold, generating a fire judgment result and geographic positioning coordinates, and outputting a control instruction. As a further aspect of the present invention, the scanning the basic visual stream frame by f