CN-121861507-B - Urban flood mapping and driving factor analysis method based on multi-source remote sensing data
Abstract
The invention discloses an urban flood mapping and driving-factor analysis method based on multi-source remote sensing data. The method comprises: extracting an optical semi-permanent water body and an SAR semi-permanent water body from a global surface water dataset and Sentinel-1 imagery respectively, and taking their intersection to generate an enhanced semi-permanent water mask; extracting flood-period water bodies from Sentinel-1 imagery to obtain the flood water distribution, while using Sentinel-2 imagery to verify water extraction accuracy; improving the accuracy of the SRTM DEM with a Transformer-based DEM error prediction model to generate a corrected DEM; and constructing a multidimensional spatial analysis model based on the elevation, slope and land cover type derived from the corrected DEM to identify the spatial distribution characteristics of flood water and their terrain driving mechanisms. By jointly analyzing terrain factors, the invention improves the spatial accuracy and timeliness of flood risk assessment.
Inventors
- Xiao Ziting
- Bian Shuaixiang
- Chen Yanming
- Xia Jiakang
- Tang Xin
- Zhao Yezi
- Gao Shiqi
- Ren Kaixuan
- Chu Jiaqi
- Yang Kun
Assignees
- Hohai University (河海大学)
Dates
- Publication Date: 2026-05-12
- Application Date: 2026-03-18
Claims (5)
- 1. An urban flood mapping and driving-factor analysis method based on multi-source remote sensing data, characterized by comprising the following steps: S1, respectively extracting an optical semi-permanent water body and an SAR semi-permanent water body based on the global surface water dataset GSW1.4 and Sentinel-1 SAR images, taking the intersection of the two, and generating an enhanced semi-permanent water mask through an enhanced semi-permanent mask generation algorithm; S2, extracting flood-period water bodies from the Sentinel-1 SAR flood-period images of the selected area, removing the stable water portion with the enhanced semi-permanent water mask to obtain the flood water distribution, and simultaneously verifying the water extraction accuracy against Sentinel-2 optical imagery; S3, improving the accuracy of the SRTM DEM with a DEM error prediction model based on a Transformer encoder structure to generate a corrected DEM; S4, constructing a multidimensional spatial analysis model based on the corrected DEM, slope and land cover type, and identifying the spatial distribution characteristics of the flood water body and its terrain and land-surface driving mechanisms. The specific steps for generating the enhanced semi-permanent water mask are as follows: S11, screening the selected area with the global surface water dataset GSW1.4, taking pixels with a water occurrence frequency greater than 80% as the semi-permanent water body extracted from optical imagery, and obtaining an optical-image-based semi-permanent water mask; preprocessing all Sentinel-1 SAR images of the selected area to obtain a multi-temporal Sentinel-1 SAR image set, extracting water from each image with an automatic thresholding method, screening pixels with a water occurrence frequency greater than 80% as the semi-permanent water body extracted from SAR imagery, and obtaining an SAR-image semi-permanent water mask; S12, fusing the optical-image-based semi-permanent water mask and the SAR-image semi-permanent water mask through the enhanced semi-permanent mask generation algorithm to obtain the enhanced semi-permanent water mask. The specific steps for obtaining the flood water distribution are as follows: S21, using the Sentinel-1 SAR flood-period images of the selected area, selecting VH (vertical transmit, horizontal receive) polarization data, performing radiometric calibration, terrain correction and noise removal on the images, suppressing speckle noise with an improved Lee filtering method, and performing median compositing of the multi-temporal images; S22, in the ENVI platform, removing via Band Math the long-term stable water regions covered by the enhanced semi-permanent water mask from the flood-period water pixels extracted from the SAR median composite, and retaining only the newly added temporary ponding and overflow regions of the flood period, to obtain the flood water distribution mask; S23, computing the modified normalized difference water index MNDWI from a Sentinel-2 optical image contemporaneous with the flood period: MNDWI = (Green − SWIR) / (Green + SWIR), where Green and SWIR are the reflectances of the green band and the shortwave-infrared band respectively; automatically determining the water segmentation threshold on the MNDWI with the Otsu method to generate a Sentinel-2 reference water mask; spatially overlaying the Sentinel-1 flood-period potential water mask and the Sentinel-2 reference water mask, randomly sampling pixels, constructing a confusion matrix, and calculating the overall accuracy, user's accuracy, producer's accuracy and Kappa coefficient to quantitatively evaluate the flood water extraction accuracy.
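The accuracy assessment in step S23 (confusion matrix, overall/user's/producer's accuracy and Kappa) can be sketched with a small routine over sampled pixel labels. This is a minimal illustrative implementation, not code from the patent; the function name and the binary 0/1 labelling (1 = water) are assumptions.

```python
import numpy as np

def accuracy_metrics(reference, predicted):
    """Confusion-matrix accuracy metrics for a binary water mask.

    reference, predicted: 1-D arrays of 0/1 labels for sampled pixels
    (1 = water). Returns overall accuracy, user's accuracy,
    producer's accuracy (for the water class) and the Kappa coefficient.
    """
    ref = np.asarray(reference).astype(int)
    pred = np.asarray(predicted).astype(int)
    # 2x2 confusion matrix: rows = reference class, cols = predicted class
    cm = np.zeros((2, 2), dtype=float)
    for r, p in zip(ref, pred):
        cm[r, p] += 1
    n = cm.sum()
    overall = np.trace(cm) / n
    users = cm[1, 1] / cm[:, 1].sum()      # correctness of mapped water
    producers = cm[1, 1] / cm[1, :].sum()  # completeness of reference water
    # Kappa: agreement beyond chance expectation
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    kappa = (overall - pe) / (1 - pe)
    return overall, users, producers, kappa
```

In practice the sampled labels would come from overlaying the Sentinel-1 flood mask with the Sentinel-2 reference mask at randomly drawn pixel locations.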
- 2. The urban flood mapping and driving-factor analysis method based on multi-source remote sensing data according to claim 1, wherein the enhanced semi-permanent mask generation algorithm is implemented as follows: clip the optical-image-based semi-permanent water mask M_opt to the ROI of the selected region and reproject it; for each image S_i in the multi-temporal Sentinel-1 SAR image set, perform VH polarization extraction and Lee filtering to obtain a filtered image F_i; for each filtered image F_i, compute its histogram and optimize a threshold T_i using the Otsu method; if a pixel satisfies F_i(x) < T_i, it is judged as water, W_i(x) = 1, otherwise W_i(x) = 0, yielding the water mask set {W_1, …, W_N} for all time phases; compute the water occurrence frequency f(x) by averaging W_i(x) over the valid observations to obtain the SAR-image-based semi-permanent water mask M_SAR: if f(x) > 0.8 and the pixel is covered by a sufficient number of valid images, then M_SAR(x) = 1, where N_valid(x) denotes the total number of valid images at the pixel and W_i denotes the i-th valid image; finally, the enhanced semi-permanent water mask is obtained as M_enh = M_opt ∩ M_SAR.
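The mask-fusion logic of claim 2 can be sketched as below, assuming the per-image Otsu thresholds T_i have already been computed and that invalid pixels are marked NaN. All names, shapes and the 0.8 frequency cut-off follow the claim, but the code itself is an illustrative numpy sketch, not the patent's implementation.

```python
import numpy as np

def enhanced_semi_permanent_mask(m_opt, sar_stack, thresholds, freq_min=0.8):
    """Fuse optical and SAR semi-permanent water masks.

    m_opt      : (H, W) bool optical semi-permanent water mask (GSW-based)
    sar_stack  : (N, H, W) filtered SAR backscatter images; NaN = invalid
    thresholds : per-image Otsu thresholds T_i (water where F_i < T_i)
    A pixel is SAR semi-permanent water if its water-occurrence frequency
    over valid observations exceeds freq_min; the enhanced mask is the
    intersection with the optical mask.
    """
    stack = np.asarray(sar_stack, dtype=float)
    valid = ~np.isnan(stack)                      # per-image validity
    water = valid & (stack < np.asarray(thresholds)[:, None, None])
    n_valid = valid.sum(axis=0)
    freq = np.where(n_valid > 0, water.sum(axis=0) / np.maximum(n_valid, 1), 0.0)
    m_sar = freq > freq_min
    return np.asarray(m_opt, dtype=bool) & m_sar
```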
- 3. The urban flood mapping and driving-factor analysis method based on multi-source remote sensing data according to claim 1, wherein the specific steps for generating the corrected DEM are as follows: S31, taking the SRTM DEM as the base elevation data, extracting the slope (Slope), aspect (Aspect), topographic position index (TPI) and terrain ruggedness index (TRI), and introducing the land cover type (Land Cover) as a categorical feature; S32, constructing a DEM error prediction model based on a Transformer encoder, taking the multi-source terrain features as input and the difference between a high-precision DEM and the SRTM DEM as the supervision label for training; S33, performing pixel-by-pixel error prediction and correction on the SRTM DEM of the selected area with the trained DEM error prediction model to generate a corrected high-precision DEM.
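The terrain features of step S31 can be sketched on a regular grid as follows. This is an illustrative numpy approximation using simple 3x3-neighbourhood definitions of TPI and TRI; the patent does not specify the neighbourhood or the slope algorithm, so those choices are assumptions.

```python
import numpy as np

def terrain_features(dem, cell=30.0):
    """Slope, TPI and TRI from a DEM on a regular grid.

    dem  : (H, W) elevation array
    cell : pixel size in the same units as elevation (30 m for SRTM)
    Returns slope (degrees), TPI (elevation minus 3x3 neighbour mean)
    and TRI (mean absolute elevation difference to 3x3 neighbours).
    """
    dem = np.asarray(dem, dtype=float)
    dzdy, dzdx = np.gradient(dem, cell)
    slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    # pad with edge values so every pixel has a full 3x3 neighbourhood
    p = np.pad(dem, 1, mode="edge")
    neigh = [p[i:i + dem.shape[0], j:j + dem.shape[1]]
             for i in range(3) for j in range(3) if not (i == 1 and j == 1)]
    neigh = np.stack(neigh)
    tpi = dem - neigh.mean(axis=0)
    tri = np.abs(neigh - dem).mean(axis=0)
    return slope, tpi, tri
```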
- 4. The urban flood mapping and driving-factor analysis method based on multi-source remote sensing data according to claim 3, wherein the input of the DEM error prediction model is the multi-source grid feature set X and the output is the error label y = D_high − D_SRTM, where D_high represents a high-precision DEM generated from a LiDAR point cloud, D_SRTM represents the raw elevation data, and LandCover represents the land cover data; the specific implementation steps for obtaining the DEM error prediction model comprise: S321, reading the data matrix of each input grid feature and its nodata value; generating a valid-pixel mask: a pixel is marked as invalid if its value x_{ij} equals the nodata value or is NaN, where x_{ij} represents the pixel value in the i-th row and j-th column; extracting all valid pixel positions; extracting the feature vector across all grids at each valid pixel position; and extracting the corresponding label from the error grid; S322, treating LandCover as a categorical feature: extracting the integer category vector c, deleting the LandCover column to obtain the continuous feature set X_cont, and normalizing all continuous features with a standardizer to obtain X_norm = Norm(X_cont), where Norm(·) represents the standardization operation; S323, constructing a dataset containing X_norm, c and y, and dividing it into a training set and a test set according to a set proportion; S324, constructing the DEM error prediction model as follows: building an embedding layer for LandCover, e = Embed(c), where Embed(·) represents the embedding operation; concatenating the continuous features and the embedded features to obtain the spliced feature z = Concat(X_norm, e), where Concat(·) represents the concatenation operation; projecting the input into the Transformer encoding dimension with a linear layer, h_0 = Linear(z), where h_0 represents the input of the Transformer encoder and Linear(·) represents a fully connected linear transformation that projects the feature z to the unified feature dimension required by the Transformer encoder; constructing a Transformer encoder comprising several layers, h_L = Encoder(h_0), where Encoder(·) represents the encoding operation and h_L represents the final feature representation after the L-th Transformer encoder layer; constructing a regression head, ŷ = Head(h_L), where ŷ represents the predicted value and Head(·) represents the regression head that maps the high-dimensional features into the target variable space; S325, training the DEM error prediction model on the training set, validating it on the test set, and saving the optimal DEM error prediction model.
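The sample-construction steps S321-S322 (valid-pixel masking, categorical/continuous split and standardization) can be sketched as below. Function and variable names are illustrative; the Transformer model itself (S324) is deliberately omitted to keep the sketch dependency-free, and the nodata value of -9999 is an assumption.

```python
import numpy as np

def build_training_table(feature_grids, error_grid, landcover, nodata=-9999.0):
    """Build (features, categories, labels) from co-registered grids.

    feature_grids : dict name -> (H, W) continuous feature grid
    error_grid    : (H, W) supervision label (high-precision DEM - SRTM)
    landcover     : (H, W) integer land-cover classes
    Pixels where any continuous input equals nodata or is NaN are
    dropped; the remaining continuous features are standardized
    (zero mean, unit variance) column-wise.
    """
    grids = list(feature_grids.values()) + [error_grid]
    valid = np.ones(np.asarray(error_grid).shape, dtype=bool)
    for g in grids:
        g = np.asarray(g, dtype=float)
        valid &= ~np.isnan(g) & (g != nodata)
    x = np.stack([np.asarray(g, float)[valid] for g in feature_grids.values()],
                 axis=1)                       # (n_samples, n_features)
    y = np.asarray(error_grid, float)[valid]   # error labels
    c = np.asarray(landcover)[valid].astype(int)
    # standardize continuous features: (x - mean) / std per column
    x_norm = (x - x.mean(axis=0)) / x.std(axis=0)
    return x_norm, c, y
```

The resulting `x_norm`, `c` and `y` correspond to the X_norm, category vector and error label of step S323 and would feed the embedding/concatenation stages of S324.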
- 5. The urban flood mapping and driving-factor analysis method based on multi-source remote sensing data according to claim 1, wherein the specific steps for identifying the spatial distribution characteristics of the flood water body and its terrain drivers are as follows: S41, taking counties as spatial units, counting the area and proportion of flood water to form a regional flood distribution map; S42, computing a slope image from the corrected DEM, counting the distribution proportion of flood water by slope interval, and identifying ponding characteristics under different terrain relief conditions; S43, combining land cover classification data, counting the distribution proportion of flood water within typical surface types, and establishing the response relation between surface type and flood occurrence; S44, constructing two-dimensional cross statistical models of slope-elevation, slope-land cover and elevation-land cover, extracting the distribution pattern of flood water under multi-factor combinations, and identifying the dominant terrain and land-surface driving mechanisms of the flood spatial distribution.
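The two-dimensional cross statistics of step S44 amount to a joint histogram of flood pixels over two terrain factors. The sketch below shows the slope-elevation case; the function name and bin choices are illustrative, not the patent's implementation.

```python
import numpy as np

def cross_distribution(flood_mask, slope, elevation, slope_bins, elev_bins):
    """Fraction of flood pixels per (slope bin, elevation bin) cell.

    flood_mask : (H, W) bool flood water mask
    slope, elevation : (H, W) co-registered factor grids
    slope_bins, elev_bins : bin edges for the two factors
    The cells with the largest fractions indicate which terrain
    combinations concentrate the flooding.
    """
    flood = np.asarray(flood_mask, dtype=bool).ravel()
    s = np.asarray(slope, dtype=float).ravel()[flood]
    e = np.asarray(elevation, dtype=float).ravel()[flood]
    counts, _, _ = np.histogram2d(s, e, bins=[slope_bins, elev_bins])
    return counts / counts.sum()  # proportion of flood pixels per cell
```

The slope-land cover and elevation-land cover models of S44 follow the same pattern with land cover classes as one axis.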
Description
Urban flood mapping and driving factor analysis method based on multi-source remote sensing data
Technical Field
The invention relates to the technical field of remote sensing information processing and geographic information systems, in particular to an urban flood mapping and driving-factor analysis method based on multi-source remote sensing data.
Background
In recent years, owing to high urban construction density, a growing proportion of impervious surface and lagging drainage system construction, urban floods have become spatially concentrated and temporally sudden, posing a serious challenge to flood control and disaster reduction. With the development of remote sensing and geographic information technology, flood monitoring based on satellite imagery has become a research hotspot. Remote sensing data can provide surface change information over large areas in a short time, offering important support for flood extent identification and post-disaster assessment. The data currently in common use mainly include optical images and Synthetic Aperture Radar (SAR) images. Optical imagery can distinguish water from non-water through water indices (such as NDWI and MNDWI), but its performance degrades markedly under heavy rainfall or cloud cover. In contrast, SAR imagery is unaffected by illumination and cloud, and can stably acquire surface information in extreme weather. Sentinel-1 data, with its high spatial resolution and short revisit period, has become an important data source for urban flood monitoring. Existing flood water extraction methods mostly adopt threshold segmentation or machine learning classification. Otsu-based automatic threshold segmentation offers a high degree of automation, but in urban environments with dense buildings and complex vegetation it is prone to shadow and scattering interference, causing misidentification.
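The Otsu automatic thresholding referred to above can be sketched in a few lines of numpy. This is a generic textbook implementation of the between-class-variance criterion, not code from the patent; for SAR backscatter, pixels below the returned threshold would be labelled water (water appears dark).

```python
import numpy as np

def otsu_threshold(values, n_bins=256):
    """Otsu's method: pick the threshold maximizing between-class variance.

    values : array of pixel values (SAR backscatter or a water index);
             NaNs are ignored.
    """
    v = np.asarray(values, dtype=float).ravel()
    v = v[~np.isnan(v)]
    hist, edges = np.histogram(v, bins=n_bins)
    p = hist / hist.sum()                 # probability mass per bin
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                     # weight of class below threshold
    w1 = 1.0 - w0
    mu0 = np.cumsum(p * centers)          # unnormalized cumulative mean
    mu_t = mu0[-1]                        # global mean
    with np.errstate(invalid="ignore", divide="ignore"):
        var_between = (mu_t * w0 - mu0) ** 2 / (w0 * w1)
    var_between[~np.isfinite(var_between)] = 0.0
    return centers[np.argmax(var_between)]
```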
Meanwhile, a single data source can hardly reflect complex surface features comprehensively, and the stability and accuracy of the extraction results are insufficient. Although multi-source data fusion can combine SAR and optical imagery to improve recognition, traditional desktop software is computationally inefficient when processing multi-temporal, multi-source image data, making automated, batch and near-real-time processing difficult. Google Earth Engine (GEE), as a new-generation cloud-computing remote sensing analysis platform, integrates massive remote sensing data with high-performance computing capacity, enables in-cloud image processing and visual analysis, and markedly improves large-scale flood monitoring efficiency. However, most current research only implements flood extraction or change detection on the GEE platform, without effective integration with high-precision terrain correction or deep learning models, so flood identification accuracy under complex terrain conditions remains insufficient. Terrain is an important factor affecting flood formation and water distribution. The SRTM DEM (Shuttle Radar Topography Mission Digital Elevation Model) offers global coverage and accessibility, but in urban areas it is easily disturbed by buildings and vegetation and exhibits obvious elevation errors. Traditional DEM (Digital Elevation Model) correction methods, such as filtering and local regression, rely on hand-crafted features and have limited generalization capability.
In recent years, deep-learning-based elevation error correction methods (such as convolutional neural networks, CNN, and Transformer models) have shown superior performance in improving terrain accuracy, but related research is limited to single data sources and has not been effectively combined with cloud-based multi-source remote sensing data processing platforms.
Disclosure of Invention
The invention aims to provide an urban flood mapping and driving-factor analysis method based on multi-source remote sensing data, which uses the GEE platform to integrate multi-source remote sensing data to identify and extract flood water, uses a Transformer model to correct DEM errors, and combines multidimensional terrain and land cover characteristics to achieve high-precision flood extent identification and spatial distribution analysis, providing a scientific basis for urban flood control, disaster reduction and risk assessment. The technical scheme is as follows: the urban flood mapping and driving-factor analysis method based on multi-source remote sensing data comprises the following steps: S1, respectively extracting an optical semi-permanent water body and an SAR semi-permanent water body based on the global surface water dataset GSW1.4 and Sentinel-1 SAR images, taking intersection of the two, an