CN-121984644-A - Wireless communication coding rate estimation method and system based on environment feature fusion

CN 121984644 A

Abstract

The invention discloses a wireless communication coding rate estimation method and system based on environment feature fusion, belonging to the field of wireless communication. The method comprises: acquiring event stream data and RGB image data; preprocessing them to obtain an event stream data tensor and an RGB image data tensor; and inputting both tensors into a trained coding rate estimation model to obtain the environment coding rate of the current time step. Training of the coding rate estimation model comprises: constructing a training data set; constructing the coding rate estimation model, which includes a scene semantic information extraction module, an event encoder, a conditional diffusion module, a multi-scale feature fusion module and a coding rate estimation module; training the model on the training data set in combination with a coding rate estimation loss function; and outputting the trained model. The invention establishes a mapping relation between environment perception information and the wireless communication coding rate, realizing effective estimation of the communication coding rate.

Inventors

  • Hu Yulin
  • Lu Yuxi
  • Sun Peng
  • Gao Wei
  • Liao Jingrui

Assignees

  • Wuhan University (武汉大学)

Dates

Publication Date
2026-05-05
Application Date
2026-04-08

Claims (10)

  1. A wireless communication coding rate estimation method based on environment feature fusion, comprising: S1, acquiring event stream data and RGB image data; S2, respectively preprocessing the event stream data and the RGB image data to obtain an event stream data tensor and an RGB image data tensor; S3, inputting the event stream data tensor and the RGB image data tensor into a trained coding rate estimation model to obtain the environment coding rate of the current time step, wherein training of the coding rate estimation model comprises the following steps: constructing an event stream and RGB image training data set; constructing a coding rate estimation model comprising a scene semantic information extraction module, an event encoder, a conditional diffusion module, a multi-scale feature fusion module and a coding rate estimation module, wherein the scene semantic information extraction module is used for extracting global scene semantic information from input RGB image data to generate an embedded vector; and, in combination with a coding rate estimation loss function, training the coding rate estimation model on the constructed event stream and RGB image training data set, and outputting the trained coding rate estimation model.
  2. The wireless communication coding rate estimation method based on environment feature fusion according to claim 1, wherein S2 comprises: preprocessing the event stream data by intercepting the event data with a time window, geometrically correcting the intercepted event data, normalizing the timestamps in the corrected event data, and processing the timestamp-normalized event data with a pixel-grid method to obtain the event stream data tensor; and preprocessing the RGB image data by time alignment, normalization and format conversion, so that the RGB image data stays synchronized with the intercepted event data in the time dimension, to obtain the RGB image data tensor.
  3. The wireless communication coding rate estimation method based on environment feature fusion according to claim 1, wherein construction of the scene semantic information extraction module comprises: performing numerical normalization and spatial zero padding on the input RGB image data to obtain tensor data; inputting the tensor data into a vision Transformer model and outputting high-level semantic features; projecting the high-level semantic features into a low-dimensional latent semantic space to obtain the weight of each latent semantic; and forming an embedding matrix from the latent semantics, and multiplying the weights of the latent semantics by the embedding matrix to obtain the embedded vector.
  4. The wireless communication coding rate estimation method based on environment feature fusion according to claim 1, wherein construction of the event encoder comprises: performing size padding on the event stream data, and mapping the padded event stream data into a latent space through a variational autoencoder to generate event features.
  5. The wireless communication coding rate estimation method based on environment feature fusion according to claim 1, wherein construction of the conditional diffusion module comprises: modeling the embedded vector and the event features with a U-Net-based latent-space diffusion network; and taking the intermediate feature map of the modeling result as the high-level semantic feature.
  6. The wireless communication coding rate estimation method based on environment feature fusion according to claim 1, wherein construction of the multi-scale feature fusion module comprises: outputting the high-level semantic features as features at different scales, up-sampling the low-scale features, and concatenating them with the other features; and, based on the concatenation result, performing cross-scale feature fusion on the features at different scales to obtain fused features.
  7. The wireless communication coding rate estimation method based on environment feature fusion according to claim 1, wherein the coding rate estimation module comprises: performing a linear mapping on the fused features through convolution to obtain high-dimensional semantic features; and performing multi-layer transposed convolution and deconvolution on the high-dimensional semantic features to obtain spatially reconstructed features and the coding rate estimation result.
  8. The wireless communication coding rate estimation method based on environment feature fusion according to claim 1, wherein the coding rate estimation loss function is expressed as:
     L = (1/N) Σ_{i=1}^{N} w_i · α_i · SmoothL1(d_i), with d_i = log(r̂_i + ε) − log(r_i + ε),
     wherein L represents the coding rate estimation loss function, i indexes the i-th sample, N represents the number of samples, α_i represents the asymmetric error penalty factor, d_i represents the difference between the predicted coding rate and the true coding rate in the log domain, w_i represents the weight term, r̂_i represents the estimated coding rate of the i-th sample, r_i represents the true coding rate of the i-th sample, ε represents a very small positive number, and SmoothL1 represents the smooth L1 loss function.
  9. A wireless communication coding rate estimation system based on environment feature fusion, characterized by being used for implementing the wireless communication coding rate estimation method based on environment feature fusion as claimed in any one of claims 1 to 8, comprising: a data acquisition module for acquiring event stream data and RGB image data; a data preprocessing module for respectively preprocessing the event stream data and the RGB image data to obtain an event stream data tensor and an RGB image data tensor; and a coding rate estimation module for inputting the event stream data tensor and the RGB image data tensor into a trained coding rate estimation model to obtain the environment coding rate of the current time step, wherein training of the coding rate estimation model comprises the following steps: constructing an event stream and RGB image training data set; constructing a coding rate estimation model comprising a scene semantic information extraction module, an event encoder, a conditional diffusion module, a multi-scale feature fusion module and a coding rate estimation module, wherein the scene semantic information extraction module is used for extracting global scene semantic information from input RGB image data to generate an embedded vector; and, in combination with a coding rate estimation loss function, training the coding rate estimation model on the constructed event stream and RGB image training data set, and outputting the trained coding rate estimation model.
  10. An electronic device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the wireless communication coding rate estimation method based on environment feature fusion of any one of claims 1 to 8.
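The preprocessing of claim 2 (time-window interception, timestamp normalization, pixel-grid binning) can be sketched as follows. This is a minimal NumPy sketch under assumed conventions not stated in the patent: events are rows of (t, x, y, polarity) with polarity in {−1, +1}, and the function name and bin layout are illustrative.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width, t_start, t_end):
    """Accumulate an event stream into a (num_bins, H, W) tensor.

    Mirrors the claimed steps: crop events to the time window, normalize
    timestamps to [0, 1), and bin events onto a pixel grid by polarity.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    # Intercept the event data through the time window.
    mask = (events[:, 0] >= t_start) & (events[:, 0] < t_end)
    ev = events[mask]
    if ev.size == 0:
        return grid
    # Normalize timestamps to [0, 1).
    t_norm = (ev[:, 0] - t_start) / (t_end - t_start)
    bins = np.clip((t_norm * num_bins).astype(int), 0, num_bins - 1)
    xs, ys, ps = ev[:, 1].astype(int), ev[:, 2].astype(int), ev[:, 3]
    # Unbuffered accumulation: repeated (bin, y, x) indices sum correctly.
    np.add.at(grid, (bins, ys, xs), ps.astype(np.float32))
    return grid
```

Geometric correction (claim 2's second step) is omitted here; it would remap (x, y) through a calibrated lens model before binning.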
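The final steps of claim 3 — projecting high-level features to latent-semantic weights and combining them with an embedding matrix — amount to a weighted sum of embedding rows. A minimal sketch, assuming softmax-normalized weights and random stand-in matrices in place of the vision Transformer's learned parameters:

```python
import numpy as np

def semantic_embedding(high_level_feat, proj_w, embed_matrix):
    """Project high-level features to weights over K latent semantics
    (softmax), then mix the K rows of the embedding matrix accordingly."""
    logits = high_level_feat @ proj_w      # (K,): one score per latent semantic
    w = np.exp(logits - logits.max())
    w /= w.sum()                           # normalized weight of each latent semantic
    return w @ embed_matrix                # (D,): the embedded vector

rng = np.random.default_rng(0)
feat = rng.standard_normal(8)              # stand-in for ViT output features
proj = rng.standard_normal((8, 4))         # projection to K = 4 latent semantics
emb = rng.standard_normal((4, 16))         # embedding matrix, D = 16
vec = semantic_embedding(feat, proj, emb)
```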
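Claim 4's event encoder pads the event tensor to a fixed size and maps it to a latent space with a variational autoencoder. A toy sketch, with linear maps standing in for the learned encoder networks (the reparameterization trick is the only structural assumption taken from standard VAEs):

```python
import numpy as np

def encode_events(event_tensor, w_mu, w_logvar, target_len, rng):
    """Zero-pad the flattened event tensor to target_len, then sample a
    latent event feature via reparameterization: z = mu + sigma * eps."""
    flat = event_tensor.ravel()
    padded = np.zeros(target_len, dtype=np.float64)
    padded[:flat.size] = flat                # size padding
    mu = padded @ w_mu                       # latent mean
    logvar = padded @ w_logvar               # latent log-variance
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps   # sampled event feature

rng = np.random.default_rng(0)
w_mu = rng.standard_normal((32, 8)) * 0.1
w_logvar = rng.standard_normal((32, 8)) * 0.01
z = encode_events(np.ones((2, 3, 4)), w_mu, w_logvar, target_len=32, rng=rng)
```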
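Claim 5 conditions a U-Net-based latent diffusion network on the embedded vector and event features, then reuses an intermediate feature map. A heavily simplified single denoising step — dense layers replace the U-Net, all weights are random stand-ins, and the update rule is illustrative:

```python
import numpy as np

def diffusion_step(z_t, t_emb, cond, w_in, w_cond, w_out):
    """One toy denoising step. `cond` is the concatenated embedded vector and
    event feature; `h` plays the role of the intermediate feature map that
    claim 5 reuses as the high-level semantic feature."""
    h = np.tanh(z_t @ w_in + cond @ w_cond + t_emb)  # conditioned hidden state
    eps_pred = h @ w_out                             # predicted noise
    z_prev = z_t - 0.1 * eps_pred                    # simplified reverse update
    return z_prev, h

rng = np.random.default_rng(0)
z = rng.standard_normal(16)
cond = rng.standard_normal(24)           # e.g. scene embedding (16) + event feature (8)
w_in = rng.standard_normal((16, 32)) * 0.1
w_cond = rng.standard_normal((24, 32)) * 0.1
w_out = rng.standard_normal((32, 16)) * 0.1
z_prev, h = diffusion_step(z, 0.5, cond, w_in, w_cond, w_out)
```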
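The cross-scale fusion of claim 6 — up-sample the low-scale feature, concatenate along channels, then fuse — can be sketched with nearest-neighbour up-sampling and a 1×1 linear map (equivalent to a 1×1 convolution); the shapes and weights are illustrative assumptions:

```python
import numpy as np

def fuse_multiscale(feat_hi, feat_lo, w_fuse):
    """Upsample feat_lo to feat_hi's grid (nearest neighbour), concatenate
    along the channel axis, and fuse with a per-pixel linear map."""
    _, h_hi, w_hi = feat_hi.shape
    up = feat_lo.repeat(h_hi // feat_lo.shape[1], axis=1) \
                .repeat(w_hi // feat_lo.shape[2], axis=2)
    cat = np.concatenate([feat_hi, up], axis=0)    # channel concatenation
    return np.einsum('oc,chw->ohw', w_fuse, cat)   # 1x1-conv-style fusion

rng = np.random.default_rng(0)
feat_hi = rng.standard_normal((4, 8, 8))   # high-resolution scale
feat_lo = rng.standard_normal((4, 4, 4))   # low-resolution scale
w_fuse = rng.standard_normal((6, 8))       # 8 in-channels -> 6 fused channels
fused = fuse_multiscale(feat_hi, feat_lo, w_fuse)
```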
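Claim 7's estimation head applies a convolutional linear mapping, then transposed convolutions for spatial reconstruction, then produces the rate estimate. A sketch using the simplest transposed convolution (stride 2, 2×2 kernel, no overlap, which is a Kronecker product); the final sigmoid pooling to a scalar rate is an illustrative assumption, not the patent's stated output layer:

```python
import numpy as np

def transposed_conv2x2(plane, kernel):
    """Stride-2, 2x2-kernel transposed convolution with no overlap:
    each input pixel expands into one 2x2 output patch."""
    return np.kron(plane, kernel)

def estimate_rate(fused, w_proj, kernel):
    """1x1 linear mapping -> transposed-conv spatial reconstruction ->
    sigmoid-pooled scalar coding rate in (0, 1)."""
    hi = np.einsum('oc,chw->ohw', w_proj, fused)          # linear mapping
    recon = np.stack([transposed_conv2x2(p, kernel) for p in hi])
    return 1.0 / (1.0 + np.exp(-recon.mean()))            # rate estimate

rng = np.random.default_rng(0)
fused = rng.standard_normal((6, 8, 8))
w_proj = rng.standard_normal((12, 6)) * 0.1
kernel = np.array([[1.0, 0.5], [0.5, 0.25]])
rate = estimate_rate(fused, w_proj, kernel)
```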
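The loss in claim 8 combines per-sample weights, an asymmetric penalty factor, and a smooth-L1 penalty on the log-domain rate error. A minimal sketch, assuming the terms combine multiplicatively and that the asymmetric factor up-weights over-estimation (predicting a higher rate than the channel supports):

```python
import numpy as np

def smooth_l1(x, beta=1.0):
    """Standard smooth L1: quadratic near zero, linear in the tails."""
    ax = np.abs(x)
    return np.where(ax < beta, 0.5 * x * x / beta, ax - 0.5 * beta)

def rate_loss(r_hat, r_true, alpha=1.5, eps=1e-8, weights=None):
    """Weighted, asymmetric smooth-L1 loss on the log-domain rate error.

    d_i  = log(r_hat_i + eps) - log(r_true_i + eps)
    loss = mean_i( w_i * alpha_i * SmoothL1(d_i) ),
    with alpha_i = alpha when d_i > 0 (over-estimation) and 1 otherwise.
    """
    d = np.log(r_hat + eps) - np.log(r_true + eps)   # log-domain difference
    pen = np.where(d > 0, alpha, 1.0)                # asymmetric penalty factor
    w = np.ones_like(d) if weights is None else weights
    return float(np.mean(w * pen * smooth_l1(d)))
```

With this composition, a perfect prediction costs zero and over-estimating a rate costs `alpha` times more than under-estimating it by the same log-domain margin.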

Description

Wireless communication coding rate estimation method and system based on environment feature fusion

Technical Field

The invention belongs to the field of wireless communication, and particularly relates to a wireless communication coding rate estimation method and system based on environment feature fusion.

Background

Wireless communication is an information artery supporting the interconnection of everything and plays a central role in key infrastructure, civilian applications and other fields. Communication nodes such as the Base Station (BS) and User Equipment (UE) require low-latency, reliable data transmission in complex and time-varying environments, especially in dynamic scenarios such as the Internet of Vehicles, so the communication system must adaptively adjust its transmission parameters according to the current communication conditions. Among these parameters, a reasonable choice of the communication coding rate is one of the important factors affecting system performance. In existing research, the communication coding rate is typically selected based on Channel State Information (CSI), historical feedback information, or a preset policy. For example, the supportable coding rate of a communication link is estimated by measuring historical parameters such as the Signal-to-Noise Ratio (SNR), Channel Quality Indicator (CQI), bit error rate, or number of retransmissions. However, these approaches focus primarily on physical-layer or link-layer characteristics of the communication link itself. Moreover, in scenarios with stringent delay requirements, the code length used for transmission is typically much smaller than the CQI feedback period, so the information used for coding rate selection lags or is distorted, which degrades the spectral efficiency of the system.
Existing research makes limited use of environment semantic features, has insufficient capability to model the intrinsic association between environment information and the communication coding rate, and, when facing complex nonlinear relations, easily suffers reduced spectral efficiency caused by unreasonable coding rate selection. Accurate and stable estimation of the communication coding rate in a dynamic environment, so as to achieve highly reliable, low-latency communication, therefore remains a significant technical challenge.

Disclosure of Invention

The invention aims to solve the prior-art problem that spectral efficiency is reduced by unreasonable coding rate selection in highly dynamic scenarios with complex nonlinear relations, and provides a wireless communication coding rate estimation method based on environment feature fusion. The method comprehensively characterizes the current communication environment by acquiring different types of environment perception data: it fully exploits the environment semantic information contained in RGB image data and the low-delay environment-change characteristics reflected by event data, and fuses the multi-modal environment information through a conditional diffusion module. A mapping relation between environment perception information and the wireless communication coding rate is thereby established, effective estimation of the communication coding rate is realized, the accuracy and stability of coding rate estimation are improved in complex, dynamic communication environments, and spectrum utilization efficiency and communication reliability are improved.
According to an aspect of the present invention, there is provided a wireless communication coding rate estimation method based on environment feature fusion, including: S1, acquiring event stream data and RGB image data; S2, respectively preprocessing the event stream data and the RGB image data to obtain an event stream data tensor and an RGB image data tensor; S3, inputting the event stream data tensor and the RGB image data tensor into a trained coding rate estimation model to obtain the environment coding rate of the current time step, wherein training of the coding rate estimation model comprises the following steps: constructing an event stream and RGB image training data set; constructing a coding rate estimation model comprising a scene semantic information extraction module, an event encoder, a conditional diffusion module, a multi-scale feature fusion module and a coding rate estimation module, wherein the scene semantic information extraction module is used for extracting global scene semantic information from input RGB image data to generate an embedded vector; and, in combination with a coding rate estimation loss function, training the coding rate estimation model on the constructed event stream and RGB image training data set, and outputting the trained coding rate estimation model. Further, the step S2 includes: preprocessing e