CN-122020219-A - Multi-coarse-difference identification RAIM method based on DBSCAN clustering

CN 122020219 A

Abstract

The invention provides a multi-coarse-difference identification RAIM method based on DBSCAN clustering. The method comprises: processing received satellite observation data with a QR parity-check method to construct an observation sample; standardizing the observation sample; analyzing the standardized sample with the DBSCAN cluster analysis algorithm to identify abnormal clusters that deviate from the normal data distribution; judging the observations corresponding to the abnormal clusters to be coarse differences (gross errors) and removing them from the observation sample; iterating on the sample after removal until the observations satisfy a preset test condition, and then outputting the positioning solution; and, if the observations still fail the test condition after a preset number of iterations, judging the data of that epoch unreliable and discarding the epoch. By integrating DBSCAN clustering into the RAIM algorithm, the method fully exploits the strength of DBSCAN in handling spatial data containing noise, effectively suppresses noise interference, and further improves the detection capability of the RAIM algorithm in multi-coarse-difference scenarios.
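The QR parity-check construction underlying this method (linearize the pseudorange model, QR-decompose the design matrix, and read the fault signature off the parity vector) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function names and the random toy geometry are assumptions.

```python
import numpy as np

def parity_space(H):
    """Return the (n-k) x n parity space T from a QR decomposition of H.

    H is the n x k design matrix of the linearized pseudorange model
    y = H x + eps; the rows of T span the left null space of H, so T @ H = 0.
    """
    n, k = H.shape
    Q, _ = np.linalg.qr(H, mode="complete")  # Q is n x n orthogonal
    T = Q[:, k:].T                           # last n-k columns of Q, transposed
    return T

# Toy example: 6 satellites, 4 unknowns (3 position + clock); geometry assumed.
rng = np.random.default_rng(0)
H = rng.standard_normal((6, 4))
T = parity_space(H)

eps = rng.standard_normal(6) * 0.1
eps[2] += 30.0                  # inject a gross error on observation 3
y = H @ rng.standard_normal(4) + eps

t = T @ y                       # parity vector: t = T y = T eps, since T H = 0
contrib = T * eps               # columns T_i * eps_i, the terms of formula (7)
```

The columns of `contrib` sum to `t`, which is the decomposition the clustering step operates on: an observation carrying a gross error contributes a term `T_i * eps_i` that stands apart from the normal terms.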

Inventors

  • YU DEYING
  • WU SHUGUANG
  • WANG ZHIGUO
  • LI HOUPU
  • JI BING
  • BIAN SHAOFENG
  • LI DEYAN
  • ZHANG TAO
  • GAO AO
  • HOU JIAXIN
  • ZHENG GUANG

Assignees

  • Naval University of Engineering of the Chinese People's Liberation Army (中国人民解放军海军工程大学)

Dates

Publication Date
2026-05-12
Application Date
2025-07-01

Claims (10)

  1. A multi-coarse-difference identification RAIM method based on DBSCAN clustering, characterized by comprising the following steps: processing the received satellite observation data with a QR parity-check method and constructing an observation sample; standardizing the constructed observation sample; analyzing the standardized observation sample with the DBSCAN cluster analysis algorithm, identifying abnormal clusters that deviate from the normal data distribution, judging the observations corresponding to the abnormal clusters to be coarse differences, and removing the coarse differences from the observation sample; iterating on the observation sample after removal of the coarse differences until the observations satisfy a preset test condition, and outputting the positioning solution; and, if the observations still fail the test condition after a preset number of iterations, judging the data of the epoch unreliable and discarding the epoch.
  2. The method according to claim 1, wherein the QR parity-check method comprises: during GNSS positioning, linearizing the pseudorange observation equation as equation (1): y = Hx + ε (1), where y is the difference between the observed pseudorange and the pseudorange computed approximately from the user position and clock error, H is the design matrix, x is the state parameter vector, and ε is the observation error induced by receiver noise, propagation error, satellite ephemeris error and clock error.
  3. The method according to claim 2, characterized in that: the least-squares estimate of the state parameters can be expressed as equation (2): x̂ = (H^T H)^(-1) H^T y (2); the GNSS error equation can be expressed as equation (3): v = Hx̂ − y (3), where v is the vector of observation corrections (residuals).
  4. The method according to claim 3, characterized in that: QR decomposition is performed on the design matrix H to obtain the (n−k)×n-dimensional parity space T_(n−k)×n, expressed as equation (4): Q^T H = [R_(k×k); 0], with Q^T = [Q_(k×n); T_((n−k)×n)] (4), where Q_(k×n) is the k×n block formed by the first k rows of the orthogonal matrix Q^T, R_(k×k) is a k×k upper triangular matrix, n is the number of independent observations, and k is the number of unknown parameters. The QR parity vector t is defined by equation (5): t = Ty (5). Substituting y = Hx + ε and using TH = 0 (a property of the QR decomposition), together with ε = −v, gives equation (6): t = Ty = Tε = −Tv (6). Expanding equation (6) yields equation (7): T_1 ε_1 + T_2 ε_2 + … + T_n ε_n = t (7), where ε_i (i = 1, 2, …, n) is the negative residual of observation i and T_i is the i-th column of T; T_i is determined by the satellite geometry and ε_i by the observation characteristics, so T_i ε_i is determined by both the geometry and the observation characteristics.
  5. The method according to claim 4, wherein: the observation sample A can be defined as equation (8): A = [T_1 ε_1, T_2 ε_2, …, T_n ε_n] (8).
  6. The method according to claim 3, characterized in that: the distance between each term T_i ε_i and the QR parity vector t is used to evaluate the reliability of observation i: when observation i contains a coarse difference, T_i ε_i lies close to the parity vector t; and when multiple coarse differences exist, their influence on the parity vector appears as outliers in the observation sample.
  7. The method according to claim 1, characterized in that: when performing the DBSCAN cluster analysis algorithm, the data points are divided into: core points, where a point is a core point if its Eps neighborhood contains at least MinPts points; boundary points, where a point is a boundary point if its Eps neighborhood contains fewer than MinPts points but the point lies within the Eps neighborhood of some core point; and noise points, where a point that is neither a core point nor a boundary point is identified as a noise point.
  8. The method according to claim 7, wherein executing the DBSCAN cluster analysis algorithm comprises the following steps: randomly selecting an unvisited data point from the data set and judging whether it is a core point; if the initial point is a core point, forming a new cluster centered on it and adding all points in its Eps neighborhood to the cluster, where each newly added point is in turn checked for being a core point, in which case the points in its Eps neighborhood are further expanded into the cluster; during cluster expansion, marking as a noise point any data point that neither belongs to any formed cluster nor satisfies the core-point condition; and repeating the above steps from a new starting point selected among the unvisited data points until all data points have been visited and either assigned to a cluster or marked as noise points.
  9. The method according to any one of claims 1 to 7, further comprising a step of analyzing the coarse-difference identification performance, specifically: calculating the recognition rate, i.e. the number of abnormal observations correctly identified as coarse differences divided by the total number of abnormal observations, to evaluate the algorithm's ability to recognize abnormal observations; calculating the false-detection rate, i.e. the number of normal observations incorrectly identified as coarse differences divided by the total number of normal observations, reflecting the probability that the algorithm falsely triggers an alarm in the fault-free case; calculating the miss rate, i.e. the number of abnormal observations not identified as coarse differences divided by the total number of abnormal observations, which is complementary to the recognition rate and reflects the degree to which the algorithm misses genuinely faulty satellites; calculating the precision, i.e. the number of observations correctly identified as coarse differences divided by the total number of observations identified as coarse differences, to measure the accuracy of identification; and calculating the F1 score by substituting the precision and the recognition rate into the harmonic-mean formula, to comprehensively evaluate the detection performance of the algorithm.
  10. The method according to any one of claims 1 to 9, wherein the standardization of the observation sample adopts the following specific methods: first, calculating the mean μ_i and standard deviation σ_i of each feature dimension of the observation sample, where for the i-th feature dimension μ_i = (1/n)·Σ_j x_ij and σ_i = √[(1/(n−1))·Σ_j (x_ij − μ_i)²], n is the number of observation samples, and x_ij is the value of the j-th observation sample in the i-th feature dimension; then applying the standardizing transformation to each data point x_ij to obtain the standardized value z_ij = (x_ij − μ_i)/σ_i, after which each feature dimension has mean 0 and standard deviation 1; determining the minimum min_i and maximum max_i of each feature dimension, where for the i-th feature dimension min_i = min(x_i1, x_i2, …, x_in) and max_i = max(x_i1, x_i2, …, x_in), and applying the transformation to each data point x_ij to obtain the standardized value y_ij = (x_ij − min_i)/(max_i − min_i), which linearly maps each feature dimension into the interval [0, 1]; and finding the maximum absolute value abs_max_i of each feature dimension, abs_max_i = max(|x_i1|, |x_i2|, …, |x_in|), calculating the scaling exponent j required for decimal-scaling normalization such that 10^j ≤ abs_max_i < 10^(j+1), and applying the transformation w_ij = x_ij/10^j to each data point, which scales each feature dimension by shifting the decimal point.
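The clustering and evaluation steps in the claims above can be sketched end to end in Python. The minimal DBSCAN below and all parameter values (eps, min_pts, the toy data) are illustrative assumptions, not the patent's reference implementation:

```python
import numpy as np

def zscore(A):
    """Z-score standardization (claim 10): zero mean, unit sample std per feature."""
    return (A - A.mean(axis=0)) / A.std(axis=0, ddof=1)

def dbscan(X, eps, min_pts):
    """Minimal DBSCAN (claims 7-8); returns labels (cluster id >= 0, noise = -1)."""
    n = len(X)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)   # pairwise distances
    neighbors = [np.flatnonzero(row <= eps) for row in d]  # Eps neighborhoods
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i] or len(neighbors[i]) < min_pts:
            continue                       # skip visited points and non-cores
        stack = [i]                        # i is an unvisited core: new cluster
        visited[i] = True
        labels[i] = cluster
        while stack:
            j = stack.pop()
            for q in neighbors[j]:
                if labels[q] == -1:
                    labels[q] = cluster    # border or core point joins cluster
                if not visited[q] and len(neighbors[q]) >= min_pts:
                    visited[q] = True
                    stack.append(q)        # expand only through core points
        cluster += 1
    return labels

def detection_metrics(flagged, truth):
    """Claim 9 metrics from boolean arrays (True = gross error)."""
    tp = np.sum(flagged & truth)
    fp = np.sum(flagged & ~truth)
    recall = tp / max(truth.sum(), 1)        # recognition rate
    fdr = fp / max((~truth).sum(), 1)        # false-detection rate
    precision = tp / max(flagged.sum(), 1)   # accuracy of identification
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return recall, fdr, precision, f1

# Toy sample: 20 normal 2-D observations plus 3 injected gross errors.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 2)) * 0.1
outliers = np.array([[5.0, 5.0], [5.2, 4.8], [-6.0, 5.5]])
X = zscore(np.vstack([A, outliers]))
labels = dbscan(X, eps=1.0, min_pts=4)
flagged = labels == -1                       # noise points -> coarse differences
truth = np.arange(23) >= 20
```

In the sketch, the 20 normal points form one dense cluster while the injected outliers fail the core-point test and are labeled noise, which is exactly the abnormal-cluster/noise signal the method treats as coarse differences before re-solving.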

Description

Multi-coarse-difference identification RAIM method based on DBSCAN clustering

Technical Field

The invention relates to the technical field of global navigation satellite systems, and in particular to a multi-coarse-difference identification RAIM method based on DBSCAN clustering, which aims to improve the integrity, reliability and accuracy of positioning results in multi-coarse-difference scenarios.

Background

With the wide application of Global Navigation Satellite Systems (GNSS) in fields such as aviation, maritime navigation, land traffic, mapping, agriculture and robotic navigation, the requirements for the safety and reliability of positioning results are increasing. Receiver Autonomous Integrity Monitoring (RAIM) algorithms play a growing role as a key technology for ensuring the safety and reliability of GNSS positioning results. A RAIM algorithm monitors the positioning result in real time using the redundant observation information received by the GNSS receiver; once a fault or error occurs during positioning, it raises a timely alarm, providing an important guarantee for safe operation in the related fields. For example, in aviation, accurate positioning and timely integrity warnings are critical to flight safety, while at sea RAIM algorithms help ships sail safely under complex sea conditions. In recent years, GNSS technology has advanced continuously: the number of satellites observable by users keeps increasing, constellation configurations have been optimized, and multi-system fused positioning has gradually become widespread. These developments have significantly improved GNSS positioning accuracy and reliability, providing better positioning services for users.
However, on the other hand, the increased number of satellites and the susceptibility of GNSS signals to interference in complex application scenarios raise the risk of coarse differences occurring on multiple satellites at the same time. Such gross errors seriously threaten the integrity of the navigation system, degrade the accuracy of the positioning result, and may even cause safety accidents. How to cope effectively with multi-coarse-difference scenarios has therefore become an important challenge in current RAIM research. Conventional RAIM algorithms, such as the least-squares residual method and the parity vector method, perform well when dealing with a single gross error, but their effectiveness is severely challenged when multiple satellites fail simultaneously. Because the fault modes are more complicated in a multi-coarse-difference scenario, traditional algorithms struggle to accurately identify and separate multiple coarse differences, so the probabilities of false detection and missed detection increase and the positioning-integrity requirements of practical applications cannot be met. On the framework and model side, optimal integrity and continuity allocation frameworks under multiple faults already exist: they lay a statistical decision basis for multi-fault detection, introduce multi-constellation dynamic allocation models, optimize subset-selection strategies to improve system robustness, establish a theory quantifying the contribution of fault events to continuity risk, and rigorously derive upper bounds on integrity risk. In practical applications, however, these frameworks and models require a large amount of prior information and complex calculation procedures, which increases the implementation difficulty and computational cost of the algorithms.
On the detection-strategy side, the prior art provides a hybrid detection strategy that balances detection sensitivity and computational efficiency by comparing the performance bounds of generalized likelihood ratio detection and solution-separation detection. However, the stability and adaptability of this strategy still need further improvement when facing complex and changeable practical application scenarios. On the algorithm-innovation side, various results have appeared, such as fault-signal separation using independent component analysis, a novel RAIM algorithm based on the density center of the observation data set, a RAIM algorithm based on probabilistic-neural-network multi-epoch residuals, an orbital-plane grouping strategy adopted to reduce the scale of the Advanced RAIM (ARAIM) monitoring subset, the Multiple Hypothesis Solution Separation ARAIM algorithm (MHSS ARAIM), a Kalman filtering (KF) enhanced fault detection and exclusion method based on colored measurement noise, and a GNSS fault improvement detection method based on fractional informa