CN-121982529-A - Water body color identification method and system based on visual image
Abstract
The invention discloses a water body color identification method and system based on visual images, belonging to the technical field of water quality monitoring. The method comprises: acquiring visual image data of a water body region, decomposing the data into a water body inherent color component and an ambient illumination component, and obtaining illumination compensation features through adaptive weighted compensation; inputting the illumination compensation features into a target detection network to generate a detection linked list, extracting effective color features of the water body region, and classifying them; constructing a scene knowledge base from historical monitoring data and converting the single-frame classification result into a scene-adapted classification result; constructing a monitoring topological graph and calculating a pollutant diffusion rule to generate a pollution propagation prediction result; and establishing a state matrix for the scene-adapted classification result, fusing it with the pollution propagation prediction result, and generating an alarm signal when an abnormality is indicated. The invention effectively overcomes the influence of ambient light and combines scene prior knowledge with pollutant diffusion rules to realize real-time monitoring and early warning of water quality.
Inventors
- BAO ZHIHUI
- CAO MING
- SUN YUTAO
- WANG ENXU
Assignees
- China Tower Co., Ltd., Chengde Branch (中国铁塔股份有限公司承德市分公司)
Dates
- Publication Date: 2026-05-05
- Application Date: 2026-01-21
Claims (8)
- 1. A water body color identification method based on visual images, characterized by comprising the following steps: acquiring visual image data containing a water body region, decomposing the visual image data into a water body inherent color component and an ambient illumination component, generating adaptive weights according to the ambient illumination component, and performing weighted compensation on the water body inherent color component to obtain illumination compensation features; inputting the illumination compensation features into a target detection network to generate a detection linked list containing position information, extracting effective color features of the water body region according to the position information, and inputting the effective color features into a color classification network to obtain a single-frame classification result; constructing a scene knowledge base based on historical monitoring data of the water body region, and converting the single-frame classification result into a scene-adapted classification result through prior knowledge activated in the scene knowledge base by the current scene; constructing a monitoring topological graph based on the positions of the monitoring points, mapping the scene-adapted classification result to the monitoring topological graph, calculating a pollutant diffusion rule according to the water flow direction, and generating a pollution propagation prediction result; and establishing a state matrix for the scene-adapted classification result, fusing the state matrix with the pollution propagation prediction result, and generating an alarm signal in combination with the position information in the detection linked list when the fusion result indicates an abnormality.
- 2. The method of claim 1, wherein decomposing the visual image data into a water body inherent color component and an ambient illumination component, generating adaptive weights based on the ambient illumination component, and performing weighted compensation on the water body inherent color component to obtain the illumination compensation features comprises: dividing the visual image data into a plurality of sub-regions according to a brightness distribution rule, extracting illumination intensity features from each sub-region, and decomposing the visual image into the water body inherent color component and the ambient illumination component according to the spatial distribution of the illumination intensity features; performing multi-scale decomposition on the ambient illumination component to obtain ambient illumination data at different resolutions, calculating the intensity ratio between ambient illumination data at adjacent resolutions, and generating an illumination change feature map; establishing an illumination change curve for each pixel according to the illumination change feature map, extracting fluctuation parameters of the illumination change curve, dividing the water body region into illumination mutation regions and illumination-stable regions based on the fluctuation parameters, and generating an adaptive weight distribution according to the light attenuation rates of the two region types; and performing region-wise weighted compensation of the water body inherent color component with the adaptive weight distribution to obtain preliminary compensation data, and nonlinearly combining the preliminary compensation data with the fluctuation parameters of the illumination change curve to generate the illumination compensation features.
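The decomposition-and-compensation step of claim 2 can be illustrated with a minimal sketch. This is not the patent's actual algorithm: the illumination estimate here is a plain box blur, the "fluctuation parameter" is reduced to the deviation of the estimated illumination from its mean, and the function names (`box_blur`, `illumination_compensate`) are illustrative assumptions.

```python
import numpy as np

def box_blur(img, k=5):
    """Simple box blur used as a stand-in ambient-illumination estimator."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def illumination_compensate(gray, k=5, eps=1e-6):
    """Split a grayscale frame into an illumination estimate and an
    intrinsic (reflectance-like) component, then reweight so that regions
    with stronger illumination fluctuation are compensated harder."""
    illum = box_blur(gray, k)                 # ambient illumination component
    intrinsic = gray / (illum + eps)          # water-intrinsic component
    fluct = np.abs(illum - illum.mean())      # crude fluctuation parameter
    weight = fluct / (fluct.max() + eps)      # adaptive weight in [0, 1]
    # weighted compensation: blend intrinsic component back toward the frame
    return weight * intrinsic + (1.0 - weight) * gray
```

Under perfectly uniform illumination the fluctuation weight vanishes and the frame passes through unchanged, which matches the intent of compensating only where illumination actually varies.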
- 3. The method of claim 1, wherein inputting the illumination compensation features into the target detection network to generate a detection linked list containing position information, and extracting the effective color features of the water body region based on the position information comprises: inputting the illumination compensation features into a target detection network provided with a water body feature enhancement layer, a regional feature extraction layer, and a position information fusion layer; extracting water body region features through the water body feature enhancement layer, inputting them into the regional feature extraction layer to obtain target features, and processing the target features through the position information fusion layer to generate a detection linked list containing target position coordinates and feature descriptions; calculating the spatial distances between targets based on the position coordinates in the detection linked list to obtain a distance matrix, constructing a target distribution map from the distance matrix, determining the neighborhood range of each target from the target distribution map, and combining the features within the neighborhood range with the target's own features to obtain enhanced features; defining a target region according to the position coordinates in the detection linked list, extracting multi-scale features within the target region, and calculating weight coefficients for the multi-scale features based on the enhanced features; and performing a weighted combination of the multi-scale features according to the weight coefficients, fusing the result with the features in the neighborhood range, and extracting the effective color features of the water body region.
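The distance-matrix and neighborhood step of claim 3 reduces, in its simplest form, to pairwise distances between detected target centres. A hedged sketch, assuming detections are given as (x, y) centre coordinates and using a fixed radius as the neighborhood range (both assumptions, not specified by the claim):

```python
import numpy as np

def distance_matrix(centers):
    """Pairwise Euclidean distances between detected-target centres."""
    c = np.asarray(centers, dtype=float)
    diff = c[:, None, :] - c[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def neighborhoods(centers, radius):
    """For each target, the indices of the other targets lying within
    `radius`; these would supply the neighborhood features to combine
    with the target's own features."""
    d = distance_matrix(centers)
    return [list(np.nonzero((row <= radius) & (row > 0))[0]) for row in d]
```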
- 4. The method of claim 1, wherein inputting the effective color features into the color classification network to obtain a single-frame classification result comprises: constructing a color classification network comprising a color distribution feature layer, a color texture feature layer, and a color space association feature layer; inputting the effective color features into the color classification network, computing global color distribution histograms and mean distributions in the color distribution feature layer, extracting local color gradient and direction change information in the color texture feature layer, and calculating a color space conversion matrix in the color space association feature layer, thereby generating multi-layer color features; calculating a category discrimination score for the color features of each layer, and assessing the importance of the multi-layer color features according to the category discrimination scores to generate adaptive weight coefficients; performing a weighted combination of the multi-layer color features using the adaptive weight coefficients to obtain a comprehensive feature vector, discriminating color categories on the comprehensive feature vector, and generating a classification prediction result for each layer; and generating a final confidence score according to the degree of consistency among the per-layer classification prediction results, and outputting the single-frame classification result based on the final confidence score.
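The per-layer weighting and consistency-based confidence of claim 4 can be sketched as follows. The claim does not specify how the discrimination score or the confidence are computed; here the score is assumed to be the margin between a layer's top class score and its mean, and the confidence is plain inter-layer agreement:

```python
import numpy as np

def fuse_predictions(layer_scores):
    """Weight each layer's class scores by how sharply that layer
    discriminates (max minus mean), tally the weighted votes, and report
    the winning class plus an agreement-based confidence."""
    votes = [int(np.argmax(s)) for s in layer_scores]          # per-layer class
    weights = [float(np.max(s) - np.mean(s)) for s in layer_scores]
    tally = np.zeros(len(layer_scores[0]))
    for v, w in zip(votes, weights):
        tally[v] += w
    winner = int(np.argmax(tally))
    confidence = votes.count(winner) / len(votes)              # layer consistency
    return winner, confidence
```

With three layers voting [class 0, class 0, class 1], the fused result is class 0 with confidence 2/3, i.e. a low-consistency frame can be flagged for the scene-adaptation stage rather than trusted outright.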
- 5. The method of claim 1, wherein constructing a scene knowledge base based on historical monitoring data of the water body region and converting the single-frame classification result into a scene-adapted classification result through prior knowledge activated in the scene knowledge base by the current scene comprises: extracting scene features from historical monitoring data of the water body region, assembling the scene features into a feature matrix, and generating the scene knowledge base; extracting features of the current scene, calculating discrimination scores for the current scene features, and generating feature weights from the discrimination scores; calculating the similarity between the current scene features and the scene knowledge base features using the feature weights, activating from the scene knowledge base the historical scenes whose similarity exceeds a preset similarity threshold, and extracting the prior knowledge in those historical scenes; calculating class transition probabilities for the single-frame classification result according to the prior knowledge, and optimizing the single-frame classification result based on the class transition probabilities to generate candidate categories; and calculating the degree of match between each candidate category and the prior knowledge, and selecting the category with the highest match as the scene-adapted classification result.
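A minimal sketch of the scene-activation step of claim 5, assuming historical scenes are stored as fixed-length feature vectors and using weighted cosine similarity for activation; the claim leaves the similarity measure and the knowledge-base layout unspecified, so both are assumptions here:

```python
import numpy as np

def activate_scenes(current, knowledge_base, feature_weights, threshold=0.8):
    """Weighted cosine similarity between the current scene's feature
    vector and each stored scene; scenes above the threshold are
    'activated' and would contribute their prior knowledge."""
    w = np.sqrt(np.asarray(feature_weights, dtype=float))
    cur = np.asarray(current, dtype=float) * w
    activated = []
    for name, feat in knowledge_base.items():
        ref = np.asarray(feat, dtype=float) * w
        sim = float(cur @ ref / (np.linalg.norm(cur) * np.linalg.norm(ref) + 1e-12))
        if sim >= threshold:
            activated.append((name, sim))
    return sorted(activated, key=lambda t: -t[1])

def adapt_classification(single_frame_probs, transition_probs):
    """Re-score the single-frame class probabilities with scene-derived
    class transition probabilities and pick the best-matching class."""
    scores = np.asarray(single_frame_probs) * np.asarray(transition_probs)
    return int(np.argmax(scores))
```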
- 6. The method of claim 1, wherein constructing a monitoring topological graph based on the positions of the monitoring points, mapping the scene-adapted classification result to the monitoring topological graph, calculating a pollutant diffusion rule from the water flow direction, and generating a pollution propagation prediction result comprises: acquiring the spatial relationships of the monitoring points based on their positions, and constructing the monitoring topological graph in combination with the water flow direction; analyzing periodic variation patterns in historical hydrological data to generate hydrological baselines, calculating the deviation of hydrological parameters acquired in real time from the baselines, and generating a hydrological anomaly index; weighting the connections between monitoring points in the monitoring topological graph with the hydrological anomaly index to generate propagation influence factors characterizing pollutant propagation; mapping the scene-adapted classification result onto the monitoring topological graph, and calculating the propagation attenuation of pollutants between adjacent monitoring points according to the water flow direction and the propagation influence factors to obtain the pollutant diffusion rule; and calculating the pollutant concentration and arrival time at downstream monitoring points based on the pollutant diffusion rule, correcting the results with the hydrological anomaly index, and outputting the pollution propagation prediction result.
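One way to read the propagation step of claim 6 is a single relaxation pass over a directed graph of monitoring points. The exponential attenuation and the way the hydrological anomaly index scales the edge factor are assumptions chosen for illustration, not the patent's formula:

```python
import numpy as np

def propagate(topology, concentrations, decay=0.5, anomaly=None):
    """One propagation step over a directed monitoring topology.
    `topology` maps each node to its downstream neighbours with edge
    travel weights (e.g. flow distance); concentration carried downstream
    attenuates exponentially with that weight and is amplified by the
    source node's hydrological anomaly index."""
    anomaly = anomaly or {}
    pred = dict(concentrations)
    for src, edges in topology.items():
        for dst, dist in edges:
            factor = np.exp(-decay * dist) * (1.0 + anomaly.get(src, 0.0))
            pred[dst] = pred.get(dst, 0.0) + concentrations.get(src, 0.0) * factor
    return pred
```

Arrival time would follow the same edge weights (distance over flow speed); it is omitted here to keep the sketch to the concentration half of the prediction.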
- 7. The method of claim 1, wherein establishing a state matrix for the scene-adapted classification result, fusing the state matrix with the pollution propagation prediction result, and generating an alarm signal in combination with the position information in the detection linked list when the fusion result indicates an abnormality comprises: establishing a sliding time window over the scene-adapted classification results, calculating the changes of water body color categories between adjacent time windows to obtain a category transition matrix, extracting the water body color change pattern from the category transition matrix, and constructing the state matrix; performing time-series decomposition on the state matrix, extracting the fluctuation amplitude and periodic characteristics of the state changes to calculate a state stability index, analyzing the spatio-temporal distribution of the pollution propagation prediction result to obtain a prediction reliability index, and normalizing the two indices to generate dynamic fusion weights; performing an adaptive weighted combination of the state matrix and the pollution propagation prediction result according to the dynamic fusion weights to generate a fused state reflecting the current scene; identifying abnormal regions based on the fused state, extracting the position coordinates and water flow direction information of the abnormal regions from the detection linked list, and analyzing the diffusion paths of the abnormal pollutants; and delimiting pollution-affected areas according to the diffusion paths, and generating graded alarm signals based on the extent and severity of the pollution-affected areas.
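The sliding-window category-transition matrix and dynamic-weight fusion of claim 7 can be sketched as below; treating the stability index directly as the fusion weight, and reducing the state matrix and prediction to scalar scores, are simplifying assumptions:

```python
import numpy as np

def transition_matrix(labels, n_classes):
    """Category-transition counts between consecutive frames in a sliding
    window, row-normalised into transition probabilities."""
    m = np.zeros((n_classes, n_classes))
    for a, b in zip(labels, labels[1:]):
        m[a, b] += 1
    rows = m.sum(axis=1, keepdims=True)
    return np.divide(m, rows, out=np.zeros_like(m), where=rows > 0)

def fuse_and_alarm(state_score, prediction_score, stability, threshold=0.6):
    """Dynamic-weight fusion: the more stable the observed state, the more
    it dominates the propagation prediction; alarm when the fused
    abnormality score crosses the threshold."""
    fused = stability * state_score + (1.0 - stability) * prediction_score
    return fused, fused >= threshold
```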
- 8. A water body color identification system based on visual images, for implementing the method of any one of claims 1-7, the system comprising: an image preprocessing module for acquiring visual image data containing a water body region, decomposing the visual image data into a water body inherent color component and an ambient illumination component, generating adaptive weights according to the ambient illumination component, and performing weighted compensation on the water body inherent color component to obtain illumination compensation features; a target detection module for inputting the illumination compensation features into a target detection network to generate a detection linked list containing position information, extracting the effective color features of the water body region according to the position information, and inputting the effective color features into a color classification network to obtain a single-frame classification result; a scene adaptation module for constructing a scene knowledge base based on historical monitoring data of the water body region, and converting the single-frame classification result into a scene-adapted classification result through prior knowledge activated in the scene knowledge base by the current scene; a pollution prediction module for constructing a monitoring topological graph based on the positions of the monitoring points, mapping the scene-adapted classification result to the monitoring topological graph, calculating a pollutant diffusion rule according to the water flow direction, and generating a pollution propagation prediction result; and an alarm processing module for establishing a state matrix for the scene-adapted classification result, fusing the state matrix with the pollution propagation prediction result, and generating an alarm signal in combination with the position information in the detection linked list when the fusion result indicates an abnormality.
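Taken together, the five modules of the system claim form a per-frame pipeline. The skeleton below only fixes the data flow between stages; every stage is a pluggable stub passed in by the caller, and the stage signatures are assumptions for illustration rather than the patent's interfaces:

```python
def monitor_frame(frame, detect, classify, adapt, predict, fuse):
    """End-to-end skeleton of the claimed pipeline: preprocessing/detection,
    colour classification, scene adaptation, propagation prediction, and
    fusion-based alarming, each supplied as a callable."""
    detections = detect(frame)            # detection linked list with positions
    label = classify(frame, detections)   # single-frame colour class
    adapted = adapt(label)                # scene-adapted class
    prediction = predict(adapted)         # pollution propagation prediction
    abnormal = fuse(adapted, prediction)  # fused anomaly decision
    return {"detections": detections, "class": adapted, "alarm": abnormal,
            "positions": detections if abnormal else []}
```

On alarm, the detection positions are carried into the result, mirroring the claim's requirement that the alarm signal combine the position information from the detection linked list.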
Description
Water body color identification method and system based on visual image

Technical Field

The invention belongs to the technical field of water quality monitoring, and particularly relates to a water body color identification method and system based on visual images.

Background

With the acceleration of industrialization and the expansion of cities, water pollution has become increasingly serious. Water body color is an important visual indicator of water quality, and changes in water color often reflect the degree of pollution. Water color identification currently relies mainly on manual field sampling and laboratory testing, which suffer from long detection cycles and high labor costs and therefore cannot meet the real-time requirements of water quality monitoring. Visual-image-based water color recognition judges water quality by analyzing water body image features and offers non-contact, real-time, low-cost operation. However, existing methods still face many challenges in practice: illumination changes in natural environments cause the water color to present different visual characteristics, reducing recognition accuracy; image analysis from a single viewpoint is easily affected by local interference and struggles to reflect the overall pollution situation; and historical monitoring data are underused, so recognition cannot be optimized for the characteristics of a specific scene.
Some methods adopt image enhancement to reduce the influence of illumination change but often neglect the coupling between the inherent color of the water body and the ambient illumination; some introduce deep learning models for feature extraction but fail to exploit the diffusion characteristics of water pollutants; and some attempt collaborative analysis over a network of monitoring points but lack the ability to dynamically predict pollution propagation. These problems limit the application of water color identification technology in practical water quality monitoring.

Disclosure of Invention

Aiming at the defects of the prior art, the invention provides a water body color identification method and system based on visual images that solve the above problems. To achieve this purpose, the invention is realized by the following technical scheme. The water body color identification method based on visual images comprises the following steps: acquiring visual image data containing a water body region, decomposing the visual image data into a water body inherent color component and an ambient illumination component, generating adaptive weights according to the ambient illumination component, and performing weighted compensation on the water body inherent color component to obtain illumination compensation features; inputting the illumination compensation features into a target detection network to generate a detection linked list containing position information, extracting effective color features of the water body region according to the position information, and inputting the effective color features into a color classification network to obtain a single-frame classification result; constructing a scene knowledge base based on historical monitoring data of the water body region, and converting the single-frame classification result into a scene-adapted classification result through prior knowledge activated in the scene knowledge base by the current scene; constructing a monitoring topological graph based on the positions of the monitoring points, mapping the scene-adapted classification result to the monitoring topological graph, calculating a pollutant diffusion rule according to the water flow direction, and generating a pollution propagation prediction result; and establishing a state matrix for the scene-adapted classification result, fusing the state matrix with the pollution propagation prediction result, and generating an alarm signal in combination with the position information in the detection linked list when the fusion result indicates an abnormality. Further, decomposing the visual image data into a water body inherent color component and an ambient illumination component, generating adaptive weights according to the ambient illumination component, and performing weighted compensation on the water body inherent color component to obtain the illumination compensation features comprises: dividing the visual image data into a plurality of sub-regions according to a brightness distribution rule, extracting illumination intensity features from each sub-region, and decomposing the visual image into a water body inherent