CN-122027978-A - Visible light positioning fingerprint library capacity expansion method and system based on MLP

CN 122027978 A

Abstract

The application discloses an MLP-based visible light positioning fingerprint library capacity expansion method and system, belonging to the technical field of indoor positioning. The method comprises: sparsely collecting, in a two-dimensional positioning area, the coordinates of a plurality of reference points together with the received signal strength values from a plurality of LED light sources, to form an initial sparse fingerprint data set; constructing a learnable mapping model that takes the coordinates as input and outputs a signal strength feature vector; training the model on the data set so that it learns the nonlinear mapping from coordinates to signal strength; inputting the coordinates of points to be interpolated in the area into the trained model to predict their signal strength feature vectors; and finally combining the initial data with the predicted data to generate a visible light positioning fingerprint library of higher grid density. By reconstructing a high-density fingerprint library from sparse data with a machine learning model, the application markedly reduces the workload and cost of fingerprint acquisition while maintaining high positioning accuracy.

Inventors

  • XIONG JIANHUI
  • XU SHIWU
  • ZHENG YIXIN
  • JIA WENKANG
  • ZHENG ZHONGHUA

Assignees

  • Concord University College, Fujian Normal University (福建师范大学协和学院)

Dates

Publication Date
2026-05-12
Application Date
2026-03-11

Claims (10)

  1. An MLP-based visible light positioning fingerprint library expansion method, characterized by comprising the following steps: S1, in a two-dimensional positioning area, sparsely collecting, at a first sampling interval, the two-dimensional coordinates of a plurality of reference points and the received signal strength values from a plurality of LED light sources corresponding to each reference point, to form the received signal strength feature vector of each reference point, wherein the two-dimensional coordinates of all reference points and their received signal strength feature vectors form an initial sparse fingerprint data set; S2, constructing a learnable mapping model which takes the two-dimensional coordinates of the reference points as model input and the received signal strength feature vectors of the reference points as model output; S3, training the learnable mapping model with the initial sparse fingerprint data set and optimizing its hyperparameters, so that the learnable mapping model learns the nonlinear mapping from two-dimensional coordinates to received signal strength feature vectors; S4, inputting the two-dimensional coordinates of points to be interpolated in the two-dimensional positioning area, other than the reference points, into the trained learnable mapping model, which outputs the predicted received signal strength feature vector of each point to be interpolated; and S5, combining the two-dimensional coordinates and received signal strength feature vector of each reference point in the initial sparse fingerprint data set with the two-dimensional coordinates and predicted received signal strength feature vector of each point to be interpolated obtained in step S4, to generate a visible light positioning fingerprint library with a grid density higher than that of the initial sparse fingerprint data set.
  2. The MLP-based visible light positioning fingerprint library expansion method of claim 1, wherein in step S2 the learnable mapping model is a multi-layer perceptron neural network model, and in step S3 the hyperparameters of the multi-layer perceptron neural network model are automatically optimized by a Bayesian optimization method to complete the training of the model.
  3. The MLP-based visible light positioning fingerprint library expansion method of claim 1, wherein the learnable mapping model is a parameterized physical model based on Lambert's radiation law, and steps S2 and S3 comprise: S21, dividing the two-dimensional positioning area into K signal-feature sub-regions by a clustering algorithm, based on the received signal strength feature vectors of all reference points in the initial sparse fingerprint data set; S31, establishing a parameterized visible light signal propagation physical model for each signal-feature sub-region, wherein the physical model of the k-th sub-region predicts the received signal strength vector from N LED light sources at any point in the sub-region, the predicted component for the n-th light source being y_{k,n}, computed as: y_{k,n} = α_{k,n} / ‖p − l_n‖^{2β_{k,n}} + γ_{k,n}; wherein p is a two-dimensional coordinate, l_n is the known horizontal coordinate of the n-th LED light source, α_{k,n} is a gain coefficient related to the light source emission power and the overall average reflectivity of the sub-region, β_{k,n} is a path loss exponent, γ_{k,n} is an ambient noise floor offset, and α_{k,n}, β_{k,n} and γ_{k,n} together form the learnable parameter set θ_k of the sub-region's physical model; and S32, optimizing the learnable parameter set θ_k of each physical model by least squares, using the reference point data belonging to the corresponding signal-feature sub-region; and wherein step S4 specifically comprises: first determining, for any two-dimensional coordinate to be interpolated, the signal-feature sub-region k to which it belongs, and then invoking the physical model of that sub-region to compute the predicted received signal strength feature vector.
  4. The MLP-based visible light positioning fingerprint library expansion method of claim 1, further comprising, prior to step S1, a cross-modal pre-training phase based on image data, specifically comprising: S10a, acquiring a global image covering the two-dimensional positioning area; S10b, constructing an image feature extraction network comprising a general feature extraction backbone and a prediction head, and pre-training the network on the global image with an image context prediction task, wherein the task randomly cuts a plurality of image patches from the global image and trains the network to predict, from the pixel content of one patch, the relative position of an adjacent patch in the global image; and S10c, after the pre-training of the image feature extraction network is completed, discarding the prediction head and freezing the weight parameters of the general feature extraction backbone, which serves as the image feature extractor used in the subsequent steps; wherein in step S2 the input of the learnable mapping model is the concatenation of the two-dimensional coordinates of a reference point and the image features, extracted by the image feature extraction network, corresponding to those coordinates in the global image.
  5. The MLP-based visible light positioning fingerprint library expansion method of claim 1, wherein the loss function L used in training the learnable mapping model in step S3 is computed as: L = L_data + λ·L_physics; wherein L_data is a data-fitting loss constraining the error between the predicted and true values of the MLP neural network model; L_physics is a physics regularization loss, computed by, for each training sample coordinate, deriving from the Lambertian radiation model the theoretical signal strength vector from each LED light source at that point, and taking the mean squared error between the model-predicted signal strength vector and the theoretical signal strength vector; and λ is a regularization coefficient.
  6. The MLP-based visible light positioning fingerprint library expansion method of claim 1, wherein in step S1 the received signal strength feature vectors of the plurality of reference points are sparsely collected in the two-dimensional positioning area, and the sampling positions of the reference points are determined by the following offline optimization process: S101, constructing a simulation environment for reinforcement learning training based on the physical layout parameters of the two-dimensional positioning area, the environment comprising a visible light signal propagation simulator, wherein the physical layout parameters at least comprise the installation coordinates of each LED light source, the boundary and obstacle positions of the positioning area, and a basic attenuation model of optical signal propagation, and the visible light signal propagation simulator computes the received signal strength feature vector corresponding to any coordinate in the area; S102, modelling the optimization of the sampling positions as a sequential decision problem whose core elements are defined as follows: the state s_t represents the set of coordinates of the sampling points selected up to decision step t; the action a_t represents selecting, at step t, the next sampling coordinate from the set of candidate coordinates not yet sampled; and the reward r_t is computed by obtaining, via the visible light signal propagation simulator, simulated signal strength data for all sampling point coordinates corresponding to the updated state s_{t+1}, training on these data an MLP neural network proxy model with the same structure as in step S2, and taking the negative of the proxy model's simulated signal strength prediction error over all candidate coordinates of the whole positioning area as the reward r_t; S103, training a policy network in the simulation environment with a reinforcement learning algorithm, wherein the policy network takes the state s_t as input and outputs a probability distribution for selecting each candidate coordinate as the next sampling point, and its training objective is to maximize the cumulative reward of the whole decision sequence from the initial empty state until the number of sampling points reaches a preset threshold M; S104, running the trained policy network from the initial empty state in the simulation environment to generate a sequence of M sampling coordinates; and S105, performing data collection in the actual positioning environment according to the sampling coordinate sequence, to obtain the received signal strength feature vectors of the plurality of reference points.
  7. The MLP-based visible light positioning fingerprint library expansion method of claim 6, wherein in step S103 the policy network is trained with a reinforcement learning algorithm, comprising: S1031, initializing a policy network π_φ with parameters φ, which takes the encoded state s_t as input and outputs an action probability distribution over the candidate coordinates; and S1032, iteratively updating the parameters φ of the policy network π_φ with a proximal policy optimization algorithm, specifically comprising: S1032a, executing the policy network π_{φ_old} with the current parameters φ_old in the simulation environment and collecting a batch of trajectory data comprising sequences of states s_t, actions a_t and rewards r_t; S1032b, computing an estimate A_t of the advantage function at each decision step t from the collected trajectory data; S1032c, constructing and optimizing a clipped surrogate objective function L^CLIP(φ) to update the parameters, the objective constraining the policy update magnitude by clipping the probability ratio, with the mathematical expression: L^CLIP(φ) = E_t[ min( r_t(φ)·A_t, clip(r_t(φ), 1−ε, 1+ε)·A_t ) ]; wherein r_t(φ) = π_φ(a_t|s_t) / π_{φ_old}(a_t|s_t) is the probability ratio of the new and old policies for the same action, ε is a preset positive hyperparameter, clip(·) is the clipping function, and E_t[·] denotes the expectation over decision steps t; and S1032d, maximizing L^CLIP(φ) by gradient ascent to obtain the updated policy network parameters φ_new.
  8. The MLP-based visible light positioning fingerprint library expansion method of claim 7, further comprising: S1033, adopting a curriculum learning strategy during the training of the policy network π_φ, the strategy comprising a plurality of training stages, each stage defined by a triplet comprising a candidate coordinate set C_i, a signal propagation simulator E_i and a number of training iterations N_i, training being executed according to the following rules: S1033a, setting the total number of curriculum stages K ≥ 2, setting the candidate coordinate set C_1 as a uniformly down-sampled subset of the complete set C_K, and/or setting the signal propagation simulator E_1 as a simplified model; and S1033b, for i = 1 to K, performing the following: executing N_i iterations of the proximal policy optimization algorithm of step S1032 in the simulation environment defined by ⟨C_i, E_i⟩, updating the policy network π_φ; and if i < K, entering the next stage and updating the simulation environment configuration to ⟨C_{i+1}, E_{i+1}⟩; and S1034, after the K curriculum stages are completed, obtaining the fully trained final policy network.
  9. The MLP-based visible light positioning fingerprint library expansion method of claim 1, wherein prior to step S1 the method further comprises: loading a semantic partition map corresponding to the two-dimensional positioning area, the map dividing the area into m semantic categories, each category being associated with a reference sampling interval d_m; for any candidate coordinate (x, y) to be sampled, performing the following: determining the main semantic category C_main to which it belongs and obtaining its reference interval d_main; computing the Euclidean distance d_edge from the coordinate to the nearest semantic partition boundary; and determining the final sampling interval d_final(x, y) according to the following rule: if d_edge > T1, d_final = d_main; if T2 < d_edge ≤ T1, d_final = d_main × α; if d_edge ≤ T2, d_final = d_min; wherein T1 and T2 are distance thresholds with T1 > T2 > 0, and α is a reduction coefficient with 0 < α < 1; and generating, from the computed d_final of each coordinate and by a max-min criterion, the reference point set P1 collected in step S1, ensuring a higher sampling-point density in regions where d_final is smaller; wherein the optimization objective of the max-min criterion comprises maximizing the minimum distance between sampling points over the whole region, while constraining the candidate distribution of sampling points by mapping d_final to spatial weights such that the sampling-point density is higher in regions with smaller d_final values.
  10. An MLP-based visible light positioning fingerprint library expansion system for performing the method of any one of claims 1 to 9, the system comprising: a data acquisition module configured to sparsely collect, in a two-dimensional positioning area at a first sampling interval, the two-dimensional coordinates of a plurality of reference points and the received signal strength values from a plurality of LED light sources corresponding to each reference point, to form the received signal strength feature vector of each reference point, wherein the two-dimensional coordinates of all reference points and their received signal strength feature vectors form an initial sparse fingerprint data set; a model construction module configured to construct a learnable mapping model which takes the two-dimensional coordinates of the reference points as model input and the received signal strength feature vectors of the reference points as model output; a model training module configured to train the learnable mapping model with the initial sparse fingerprint data set and optimize its hyperparameters, so that the learnable mapping model learns the nonlinear mapping from two-dimensional coordinates to received signal strength feature vectors; a coordinate interpolation module configured to input the two-dimensional coordinates of points to be interpolated in the two-dimensional positioning area, other than the reference points, into the trained learnable mapping model, which outputs the predicted received signal strength feature vector of each point to be interpolated; and a fingerprint library generating module configured to combine the two-dimensional coordinates and received signal strength feature vector of each reference point in the initial sparse fingerprint data set with the two-dimensional coordinates and predicted received signal strength feature vector of each point to be interpolated obtained by the coordinate interpolation module, to generate a visible light positioning fingerprint library with a grid density higher than that of the initial sparse fingerprint data set.
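The coordinate-to-RSS pipeline of claim 1 (steps S1-S5) can be sketched as follows. The area size, LED layout, grid spacings, and the toy inverse-square channel are all illustrative assumptions, and scikit-learn's MLPRegressor stands in for the claimed MLP mapping model:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical setup: 4 LED light sources in a 5 m x 5 m area (illustrative values).
leds = np.array([[1.0, 1.0], [1.0, 4.0], [4.0, 1.0], [4.0, 4.0]])

def rss(points):
    # Toy channel standing in for real measurements: intensity decays with
    # squared horizontal distance to each LED.
    d2 = ((points[:, None, :] - leds[None, :, :]) ** 2).sum(-1)
    return 1.0 / (1.0 + d2)

# S1: sparse reference grid (first sampling interval = 1 m).
xs = np.arange(0.0, 5.5, 1.0)
sparse_xy = np.array([(x, y) for x in xs for y in xs])
sparse_rss = rss(sparse_xy) + 0.001 * rng.standard_normal((len(sparse_xy), 4))

# S2-S3: learnable mapping model, coordinates -> RSS feature vector.
model = MLPRegressor(hidden_layer_sizes=(64, 64), solver="lbfgs",
                     max_iter=2000, random_state=0)
model.fit(sparse_xy, sparse_rss)

# S4: predict RSS at a denser grid of points to be interpolated (0.25 m interval).
xd = np.arange(0.0, 5.25, 0.25)
dense_xy = np.array([(x, y) for x in xd for y in xd])
dense_rss = model.predict(dense_xy)

# S5: merge measured and predicted fingerprints into the expanded library.
library_xy = np.vstack([sparse_xy, dense_xy])
library_rss = np.vstack([sparse_rss, dense_rss])
print(library_xy.shape, library_rss.shape)
```

The expanded library has 477 entries (36 measured plus 441 predicted) against the 36 collected in step S1, which is the grid-density increase the claim describes.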
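Claim 2's Bayesian optimization of the MLP hyperparameters can be illustrated with a minimal Gaussian-process surrogate and an expected-improvement acquisition. To keep the sketch self-contained, a cheap analytic function stands in for the validation error that would, in the method, come from actually training the MLP at each candidate hyperparameter (here a single log-learning-rate):

```python
import numpy as np
from scipy.stats import norm

# Stand-in objective: pretend this is validation MSE as a function of
# log10(learning rate); in the claimed method it would come from MLP training.
def val_error(log_lr):
    return (log_lr + 3.0) ** 2 + 0.05 * np.sin(5 * log_lr)

def gp_posterior(X, y, Xq, length=0.7, noise=1e-6):
    # Squared-exponential GP surrogate: posterior mean and variance on Xq.
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xq)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = 1.0 - np.sum(Ks * (Kinv @ Ks), axis=0)
    return mu, np.maximum(var, 1e-12)

grid = np.linspace(-5.0, -1.0, 200)   # candidate log10(lr) values
X = np.array([-5.0, -1.0])            # initial design points
y = np.array([val_error(x) for x in X])
for _ in range(10):
    mu, var = gp_posterior(X, y, grid)
    best = y.min()
    z = (best - mu) / np.sqrt(var)
    # Expected improvement (minimization form).
    ei = (best - mu) * norm.cdf(z) + np.sqrt(var) * norm.pdf(z)
    x_next = grid[np.argmax(ei)]
    X = np.append(X, x_next)
    y = np.append(y, val_error(x_next))
print("best log10(lr):", X[np.argmin(y)])
</imports-placeholder>
```

In practice a library such as scikit-optimize would replace this hand-rolled loop; the sketch only shows the surrogate-plus-acquisition structure that "Bayesian optimization" names.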
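The per-sub-region least-squares fit of claim 3 (steps S31-S32) can be sketched for a single LED and a single sub-region, assuming the clustering step S21 has already assigned the reference points to sub-region k. The LED position, the ground-truth parameters, and the unit distance offset (which avoids the singularity directly under the LED) are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
led = np.array([2.0, 2.0])   # known horizontal coordinate of the n-th LED (illustrative)

# Synthetic reference points assumed to lie in one signal-feature sub-region k.
pts = rng.uniform(0, 4, size=(60, 2))
d = 1.0 + np.linalg.norm(pts - led, axis=1)   # offset distance, avoids d -> 0

alpha, beta, gamma = 0.8, 1.0, 0.01           # ground truth of the toy channel
y = alpha / d ** (2 * beta) + gamma + 0.001 * rng.standard_normal(60)

# S31-S32: fit y_{k,n} = alpha_{k,n} / ||p - l_n||^(2 beta_{k,n}) + gamma_{k,n}
# to the sub-region's reference data by least squares.
def resid(theta):
    a, b, g = theta
    return a / d ** (2 * b) + g - y

theta_k = least_squares(resid, x0=[1.0, 0.5, 0.0]).x
print(np.round(theta_k, 3))
```

With one such parameter set θ_k per sub-region, step S4 reduces to looking up the sub-region of a query coordinate and evaluating its fitted model.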
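The image context prediction pretext task of claim 4 (step S10b) is in the style of relative-patch-position pretraining: cut an anchor patch and one of its eight neighbours, and label the pair by which neighbour it was. The sketch below only builds the training pairs and labels from a stand-in global image; the backbone and prediction head that would consume them are omitted:

```python
import numpy as np

rng = np.random.default_rng(2)
global_image = rng.random((64, 64))   # stand-in for the global image of the area
PATCH = 8

# The 8 possible relative positions of a neighbouring patch (centre excluded).
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def sample_pair():
    # Randomly cut an anchor patch away from the border, then one of its 8
    # neighbours; the classification label is which neighbour it was.
    r = rng.integers(1, global_image.shape[0] // PATCH - 1)
    c = rng.integers(1, global_image.shape[1] // PATCH - 1)
    label = rng.integers(0, 8)
    dr, dc = OFFSETS[label]
    anchor = global_image[r * PATCH:(r + 1) * PATCH, c * PATCH:(c + 1) * PATCH]
    neigh = global_image[(r + dr) * PATCH:(r + dr + 1) * PATCH,
                         (c + dc) * PATCH:(c + dc + 1) * PATCH]
    return anchor, neigh, label

pairs = [sample_pair() for _ in range(32)]
X = np.stack([np.concatenate([a.ravel(), n.ravel()]) for a, n, _ in pairs])
labels = np.array([l for _, _, l in pairs])
print(X.shape, labels.min(), labels.max())
```

After pretraining on such pairs, the prediction head is discarded (S10c) and the frozen backbone's features are concatenated with the coordinates as the mapping model's input.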
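The combined loss of claim 5, L = L_data + λ·L_physics, is a pair of mean squared errors: one against the measured values, one against a Lambertian theoretical prediction. The sketch below uses an illustrative simplified line-of-sight Lambertian channel (order-1 emitter, receiver facing up, LEDs mounted at an assumed height above the receiver plane):

```python
import numpy as np

def lambert_theory(points, leds, height=2.5, power=1.0):
    # Simplified order-1 Lambertian line-of-sight model: received intensity
    # proportional to cos(phi) * cos(psi) / d^2 = height^2 / d^4, with the
    # LEDs `height` metres above the receiver plane (illustrative geometry).
    dxy2 = ((points[:, None, :] - leds[None, :, :]) ** 2).sum(-1)
    d2 = dxy2 + height ** 2
    return power * height ** 2 / d2 ** 2

def total_loss(pred, measured, coords, leds, lam=0.1):
    # L = L_data + lambda * L_physics, both mean squared errors.
    l_data = np.mean((pred - measured) ** 2)
    l_phys = np.mean((pred - lambert_theory(coords, leds)) ** 2)
    return l_data + lam * l_phys

leds = np.array([[1.0, 1.0], [3.0, 3.0]])
coords = np.array([[0.0, 0.0], [2.0, 2.0]])
theory = lambert_theory(coords, leds)
measured = theory + 0.01          # toy measurements offset from theory
print(total_loss(theory, measured, coords, leds))
```

A prediction equal to the theory pays only the data term (here 1e-4); a prediction equal to the measurements pays only λ times the physics term, which is how λ trades data fit against physical plausibility.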
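Claim 6's reward signal (negative proxy-model prediction error over all candidates) can be exercised without the full policy-network machinery. The sketch below replaces the learned policy with a greedy sequential chooser and the MLP proxy with a k-nearest-neighbour regressor, both explicitly simplifications; the simulator is the same toy channel assumed in the earlier sketches:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

leds = np.array([[1.0, 1.0], [1.0, 4.0], [4.0, 1.0], [4.0, 4.0]])

def simulator(points):
    # Visible-light signal propagation simulator (toy inverse-square channel).
    d2 = ((points[:, None, :] - leds[None, :, :]) ** 2).sum(-1)
    return 1.0 / (1.0 + d2)

# Candidate coordinate set over the positioning area.
g = np.linspace(0, 5, 11)
candidates = np.array([(x, y) for x in g for y in g])
truth = simulator(candidates)

def reward(selected_idx):
    # r_t: negative prediction error of a proxy model trained on the selected
    # samples, evaluated over all candidates (k-NN stands in for the MLP proxy).
    proxy = KNeighborsRegressor(n_neighbors=min(3, len(selected_idx)))
    proxy.fit(candidates[selected_idx], truth[selected_idx])
    return -np.mean((proxy.predict(candidates) - truth) ** 2)

# Greedy stand-in for the learned policy: at each step pick the candidate
# whose addition maximises the reward, until M points are selected.
M = 8
selected = [0]
for _ in range(M - 1):
    remaining = [i for i in range(len(candidates)) if i not in selected]
    gains = [reward(selected + [i]) for i in remaining]
    selected.append(remaining[int(np.argmax(gains))])
print(selected, round(-reward(selected), 5))
```

The claimed method trains a policy network to make these choices so that, at deployment, a full sequence of M informative sampling positions is produced in one rollout rather than by exhaustive greedy search.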
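The clipped surrogate objective of claim 7 (step S1032c) can be evaluated directly on toy ratios and advantages, which makes its asymmetry visible: gains from positive advantages are capped once the ratio leaves [1−ε, 1+ε], while penalties from negative advantages are not. The numbers below are illustrative:

```python
import numpy as np

def ppo_clip_objective(ratio, adv, eps=0.2):
    # L^CLIP = E_t[ min( r_t * A_t, clip(r_t, 1-eps, 1+eps) * A_t ) ]
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    return np.mean(np.minimum(ratio * adv, clipped * adv))

# Probability ratios r_t = pi_new(a_t|s_t) / pi_old(a_t|s_t) and advantages A_t.
ratio = np.array([0.5, 1.0, 1.5, 2.0])
adv = np.array([1.0, 1.0, 1.0, -1.0])
print(ppo_clip_objective(ratio, adv))   # 0.175
```

The last sample (ratio 2.0, advantage −1.0) contributes the unclipped −2.0 rather than −1.2, which is exactly the pessimistic min(...) behaviour that bounds the policy update magnitude.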
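The curriculum schedule of claim 8 is a staged loop over triplets ⟨C_i, E_i, N_i⟩. The sketch below uses hypothetical stage configurations and a placeholder in place of the PPO update of step S1032, to show only the control flow: early stages use a down-sampled candidate set and a simplified simulator, later stages the full versions:

```python
import numpy as np

# Hypothetical curriculum: each stage is a triplet (candidate set C_i,
# simulator fidelity E_i, iteration budget N_i); C_1 is a uniformly
# down-sampled subset of the full set C_K, E_1 a simplified simulator.
full_candidates = np.arange(100)
stages = [
    {"C": full_candidates[::4], "E": "coarse", "N": 5},
    {"C": full_candidates[::2], "E": "medium", "N": 10},
    {"C": full_candidates,      "E": "full",   "N": 20},
]

log = []
for i, stage in enumerate(stages):
    for _ in range(stage["N"]):
        # Placeholder for one PPO update (S1032) in environment <C_i, E_i>.
        log.append((i, stage["E"], len(stage["C"])))

print(len(log), log[0], log[-1])
```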
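Claim 9's boundary-aware sampling-interval rule maps a coordinate's distance to the nearest semantic-partition boundary onto one of three intervals. The threshold and coefficient values below are illustrative; the branch structure is exactly the claimed rule:

```python
def final_interval(d_edge, d_main, d_min, T1=2.0, T2=0.5, alpha=0.5):
    # Claim-9 rule: shrink the sampling interval near semantic-partition
    # boundaries (thresholds T1 > T2 > 0, reduction coefficient 0 < alpha < 1).
    if d_edge > T1:
        return d_main              # deep inside a partition
    if d_edge > T2:                # T2 < d_edge <= T1
        return d_main * alpha      # near a boundary: reduced interval
    return d_min                   # at a boundary: minimum interval

print(final_interval(3.0, d_main=1.0, d_min=0.2))   # 1.0
print(final_interval(1.0, d_main=1.0, d_min=0.2))   # 0.5
print(final_interval(0.3, d_main=1.0, d_min=0.2))   # 0.2
```

The resulting d_final field is then fed into the max-min point-placement criterion, so that smaller intervals translate into denser reference points near partition boundaries.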

Description

Visible light positioning fingerprint library capacity expansion method and system based on MLP

Technical Field

The application relates to the technical field of indoor visible light positioning, and in particular to an MLP-based visible light positioning fingerprint library capacity expansion method and system.

Background

With the development of the Internet of Things and intelligent terminals, the demand for high-precision indoor positioning in scenarios such as warehousing is growing. Visible light positioning technology has the advantages of freedom from electromagnetic interference, high security and high theoretical precision, and has become one of the research hot spots of indoor positioning. Among its approaches, fingerprint positioning based on received signal strength is a main method for achieving high-precision positioning, but its application is limited by the dense sampling required to build the fingerprint library, which entails a large workload and high cost. To reduce the construction cost of fingerprint libraries, researchers have proposed a variety of sparse fingerprint library expansion methods. For example, interpolation can be performed via an optical propagation model, but such methods rest on strong assumptions about the channel model and adapt poorly to complex environments; machine learning methods have also been studied for the signal mapping problem, but they focus on the online positioning stage. In the prior art, how to directly learn the mapping relation from sparsely sampled coordinate-signal strength data, using the strong nonlinear fitting capability of machine learning, so as to generate a high-density fingerprint library efficiently and robustly, remains an open problem.
Disclosure of Invention

Therefore, a technical scheme for MLP-based visible light positioning fingerprint library expansion is needed, to solve the technical problem of how to reconstruct a high-density positioning fingerprint library efficiently and accurately from sparsely sampled signal strength data with a machine learning model, thereby overcoming the large workload and high cost caused by dense sampling in traditional fingerprint positioning methods.

To achieve the above object, in a first aspect the application provides an MLP-based visible light positioning fingerprint library expansion method, comprising the steps of: S1, in a two-dimensional positioning area, sparsely collecting, at a first sampling interval, the two-dimensional coordinates of a plurality of reference points and the received signal strength values from a plurality of LED light sources corresponding to each reference point, to form the received signal strength feature vector of each reference point, wherein the two-dimensional coordinates of all reference points and their received signal strength feature vectors form an initial sparse fingerprint data set; S2, constructing a learnable mapping model which takes the two-dimensional coordinates of the reference points as model input and the received signal strength feature vectors of the reference points as model output; S3, training the learnable mapping model with the initial sparse fingerprint data set and optimizing its hyperparameters, so that the learnable mapping model learns the nonlinear mapping from two-dimensional coordinates to received signal strength feature vectors; S4, inputting the two-dimensional coordinates of points to be interpolated in the two-dimensional positioning area, other than the reference points, into the trained learnable mapping model, which outputs the predicted received signal strength feature vector of each point to be interpolated; and S5, combining the two-dimensional coordinates and received signal strength feature vector of each reference point in the initial sparse fingerprint data set with the two-dimensional coordinates and predicted received signal strength feature vector of each point to be interpolated obtained in step S4, to generate a visible light positioning fingerprint library with a grid density higher than that of the initial sparse fingerprint data set.

Further, the learnable mapping model is a multi-layer perceptron neural network model, and in step S3 the hyperparameters of the multi-layer perceptron neural network model are automatically optimized by a Bayesian optimization method to complete its training. Further, the learnable mapping model is a parameterized physical model based on Lambert's radiation law, and steps S2 and S3 include: S21, dividing the two-dimensional positioning area into K signal-feature sub-regions by a clustering algorithm, based on the received signal strength feature vectors of all reference points in the initial sparse fingerprint data set; S31, est