CN-121994248-A - Target position determining method and device

CN121994248A

Abstract

The application relates to a target position determining method and device. The method comprises: obtaining sensing data of a target object collected by at least one sensing device at the current moment; determining, for each sensing device, a first feature map of the sensing data corresponding to that sensing device; performing pose encoding on the feature points in the first feature map according to the relative pose data corresponding to the sensing device to obtain a second feature map corresponding to the sensing device; and determining target position information of the target object at the current moment according to the second feature map corresponding to each sensing device. By adopting the method, position detection accuracy can be improved.

Inventors

  • DAI FEI
  • HE HAO
  • HUANG YONGBO
  • ZHANG BAOLIN
  • WEN YINAN
  • LIANG JIACHEN

Assignees

  • 重庆蓝电汽车科技有限公司

Dates

Publication Date
2026-05-08
Application Date
2026-02-26

Claims (10)

  1. A method of determining a target position, the method comprising: acquiring sensing data of a target object collected by at least one sensing device at the current moment, and relative pose data between a reference coordinate system and a target coordinate system corresponding to each sensing device; determining, for each sensing device, a first feature map of the sensing data corresponding to that sensing device; performing pose encoding on the feature points in the first feature map using the relative pose data corresponding to the sensing device to obtain a second feature map corresponding to the sensing device; and determining target position information of the target object at the current moment according to the second feature map corresponding to each sensing device.
  2. The method of claim 1, wherein the relative pose data comprises a pose transformation matrix describing the pose transformation between the reference coordinate system and the corresponding target coordinate system, and wherein performing pose encoding on the feature points in the first feature map using the relative pose data corresponding to the sensing device to obtain the second feature map corresponding to the sensing device comprises at least one of: inputting the pose transformation matrix corresponding to the sensing device and the first feature map into a target coding network to obtain the second feature map corresponding to the sensing device; inputting the pose transformation matrix corresponding to the sensing device, the first feature map, and scene association information corresponding to the sensing data into a target coding network to obtain the second feature map corresponding to the sensing device; and encoding the pose transformation matrix corresponding to the sensing device to obtain a pose code, and fusing the pose code with the first feature map to obtain the second feature map corresponding to the sensing device.
  3. The method according to claim 1 or 2, wherein determining the target position information of the target object according to the second feature map corresponding to each sensing device comprises: performing feature fusion on the second feature maps corresponding to the sensing devices to obtain a fused feature map; and determining the target position information of the target object according to the fused feature map.
  4. The method according to claim 3, wherein determining the target position information of the target object from the fused feature map comprises: for any adjacent rows of feature vectors in the fused feature map, performing feature enhancement on the forward feature vector according to the backward feature vector to obtain a target feature map at the current moment; and determining the target position information of the target object according to the target feature map.
  5. The method of claim 4, wherein determining the target position information of the target object based on the target feature map comprises: acquiring a temporal fused feature map of the target object at a historical moment corresponding to the current moment; back-projecting the temporal fused feature map at the historical moment into the reference coordinate system of the current moment corresponding to the target feature map to obtain a back-projected feature map; performing feature fusion on the target feature map and the back-projected feature map to obtain a temporal fused feature map at the current moment; and determining the target position information of the target object according to the temporal fused feature map at the current moment.
  6. The method of claim 3, wherein performing feature fusion on the second feature maps corresponding to the sensing devices to obtain the fused feature map comprises: performing feature fusion on the second feature maps corresponding to the sensing devices to obtain a multi-scale feature map, wherein the multi-scale feature map comprises a first-scale feature map and a second-scale feature map, and a first spatial resolution of the first-scale feature map is smaller than a second spatial resolution of the second-scale feature map; and fusing first feature information of a preset central area in the first-scale feature map with second feature information in the second-scale feature map to obtain the fused feature map.
  7. The method of claim 6, wherein fusing the first feature information of the preset central area in the first-scale feature map with the second feature information in the second-scale feature map to obtain the fused feature map comprises: fusing the first feature information of the preset central area in the first-scale feature map with the second feature information in the second-scale feature map to obtain an initial fused feature map; and performing interpolation processing on the initial fused feature map to obtain the fused feature map.
  8. A method of determining a target position, the method comprising: acquiring sensing data of a target object collected by at least one sensing device at the current moment; performing feature fusion on first feature maps corresponding to the sensing data of each sensing device to obtain a fused feature map; for any adjacent rows of feature vectors in the fused feature map, performing feature enhancement on the forward feature vector according to the backward feature vector to obtain a target feature map at the current moment; and determining target position information of the target object according to the target feature map.
  9. A target position determining apparatus, the apparatus comprising: a first acquisition module configured to acquire sensing data of a target object collected by at least one sensing device at the current moment, and relative pose data between a reference coordinate system and a target coordinate system corresponding to each sensing device; a first determining module configured to determine, for each sensing device, a first feature map of the sensing data corresponding to that sensing device; an encoding module configured to perform pose encoding on the feature points in the first feature map according to the relative pose data corresponding to the sensing device to obtain a second feature map corresponding to the sensing device; and a second determining module configured to determine target position information of the target object at the current moment according to the second feature maps corresponding to the sensing devices.
  10. A target position determining apparatus, the apparatus comprising: a second acquisition module configured to acquire sensing data of a target object collected by at least one sensing device at the current moment; a fusion module configured to perform feature fusion on first feature maps corresponding to the sensing data of each sensing device to obtain a fused feature map; an enhancement module configured to, for any adjacent rows of feature vectors in the fused feature map, perform feature enhancement on the forward feature vector according to the backward feature vector to obtain a target feature map at the current moment; and a third determining module configured to determine target position information of the target object according to the target feature map.
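
The row-wise enhancement of claims 4 and 8 updates each "forward" row of feature vectors in the fused feature map using the adjacent "backward" row. The claims do not fix the fusion rule, so the sketch below assumes a simple weighted residual; the weight `alpha` and the row ordering are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def enhance_rows(fused: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Enhance each forward row of feature vectors with its backward neighbour.

    fused: (H, W, C) fused feature map; each row along H is a line of
    feature vectors. alpha: illustrative mixing weight (not specified
    in the patent).
    """
    out = fused.copy()
    # For every adjacent pair of rows, add a scaled copy of the backward
    # row into the forward row; the last row has no successor and is kept.
    out[:-1] += alpha * fused[1:]
    return out

fmap = np.arange(24, dtype=np.float64).reshape(3, 4, 2)
target = enhance_rows(fmap)
print(target.shape)  # (3, 4, 2)
```

The vectorized slice `out[:-1] += alpha * fused[1:]` applies the per-pair rule to all adjacent rows at once, which is how such an enhancement would typically be implemented over a dense feature map.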
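
The temporal step of claim 5 back-projects the historical fused feature map into the current reference coordinate system and fuses it with the current target feature map. A minimal sketch, assuming the back-projection reduces to an integer grid shift (`shift`) and the fusion to a weighted average (`beta`); both are illustrative stand-ins for the warp and fusion operators the patent leaves unspecified:

```python
import numpy as np

def temporal_fuse(current: np.ndarray, history: np.ndarray,
                  shift: tuple, beta: float = 0.5) -> np.ndarray:
    """Back-project the historical fused map into the current reference
    frame and fuse it with the current target feature map.

    current, history: (H, W, C) feature maps on the same grid.
    shift: integer (dy, dx) motion between the historical and current
    frames; an illustrative stand-in for a full rigid-body warp.
    beta: illustrative fusion weight.
    """
    # Back-projection approximated as an integer translation of the grid;
    # cells that leave the map are zero-filled.
    warped = np.zeros_like(history)
    dy, dx = shift
    H, W, _ = history.shape
    src = history[max(0, -dy):H - max(0, dy), max(0, -dx):W - max(0, dx)]
    warped[max(0, dy):max(0, dy) + src.shape[0],
           max(0, dx):max(0, dx) + src.shape[1]] = src
    # Feature fusion of the current map with the back-projected history
    # yields the temporal fused feature map at the current moment.
    return (1 - beta) * current + beta * warped
```

A real implementation would derive the warp from the relative pose between the two moments and resample with sub-cell interpolation rather than an integer shift.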
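
The multi-scale step of claims 6 and 7 fuses a preset central area of the lower-resolution first-scale map into the higher-resolution second-scale map, then interpolates the result. A minimal numpy sketch; the 2x scale ratio, the crop half-size `center`, and nearest-neighbour resampling (standing in for the interpolation processing) are illustrative assumptions:

```python
import numpy as np

def fuse_center(coarse: np.ndarray, fine: np.ndarray, center: int) -> np.ndarray:
    """Fuse the central area of the first-scale (coarse) map into the
    second-scale (fine) map, then interpolate the initial fused map.

    coarse: (h, w, C) first-scale map, lower spatial resolution.
    fine:   (H, W, C) second-scale map with H == 2*h and W == 2*w here.
    center: half-size of the central crop taken from the coarse map.
    """
    h, w, _ = coarse.shape
    cy, cx = h // 2, w // 2
    crop = coarse[cy - center:cy + center, cx - center:cx + center]
    # Upsample the central crop by the assumed 2x scale ratio and add it
    # into the matching central region of the fine map.
    up = crop.repeat(2, axis=0).repeat(2, axis=1)
    initial = fine.copy()
    H, W, _ = fine.shape
    y0, x0 = H // 2 - up.shape[0] // 2, W // 2 - up.shape[1] // 2
    initial[y0:y0 + up.shape[0], x0:x0 + up.shape[1]] += up
    # "Interpolation processing": a 2x nearest-neighbour upsampling here
    # stands in for the bilinear resize a real implementation would use.
    return initial.repeat(2, axis=0).repeat(2, axis=1)
```
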

Description

Target position determining method and device

Technical Field

The present application relates to the field of target detection technologies, and in particular to a method and apparatus for determining a target position.

Background

With the development of target detection technology, sensing devices are deployed at the roadside to perceive target objects in the surrounding environment and position them, enabling tracking and behavior prediction along a target's continuous path. In the related art, target positions are identified separately by different types of sensing devices and then fused to obtain a final position. With this approach, a certain error exists between the determined final position and the actual position, which degrades position detection accuracy.

Summary

In view of the foregoing, it is desirable to provide a target position determining method and apparatus that improve position detection accuracy. In a first aspect, the present application provides a method for determining a target position, comprising: acquiring sensing data of a target object collected by at least one sensing device at the current moment, and relative pose data between a reference coordinate system and a target coordinate system corresponding to each sensing device; determining, for each sensing device, a first feature map of the sensing data corresponding to that sensing device; performing pose encoding on the feature points in the first feature map using the relative pose data corresponding to the sensing device to obtain a second feature map corresponding to the sensing device; and determining target position information of the target object at the current moment according to the second feature map corresponding to each sensing device.
In the above steps, sensing data of the target object collected by at least one sensing device at the current moment is acquired, and a first feature map of the corresponding sensing data is determined for each sensing device, so that each item of sensing data is converted into a feature representation suited to subsequent processing and fusion, laying a foundation for the deep integration of multi-source data. By introducing relative pose data and using it to pose-encode the feature points in the first feature map, each feature point carries an explicit spatial attribute, which improves the perception and representation of moving targets. Moreover, compared with performing geometric computation on the raw data, internalizing the relative pose data in the features reduces computational redundancy and the complexity of data processing. Target position information of the target object at the current moment is then determined from the second feature maps corresponding to the sensing devices, which reduces error accumulation and improves position detection accuracy.
In an alternative embodiment of the first aspect, the relative pose data includes a pose transformation matrix describing the pose transformation between the reference coordinate system and the corresponding target coordinate system, and performing pose encoding on the feature points in the first feature map according to the relative pose data corresponding to the sensing device to obtain the second feature map corresponding to the sensing device includes at least one of: inputting the pose transformation matrix and the first feature map corresponding to the sensing device into a target coding network to obtain the second feature map corresponding to the sensing device; inputting the pose transformation matrix corresponding to the sensing device, the first feature map, and scene association information corresponding to the sensing data into the target coding network to obtain the second feature map corresponding to the sensing device; and encoding the pose transformation matrix corresponding to the sensing device to obtain a pose code, and fusing the pose code with the first feature map to obtain the second feature map corresponding to the sensing device. In the above steps, by introducing the target coding network, the encoding mapping learned by the network can be used to pose-encode the feature points in the first feature map with the relative pose data corresponding to the sensing device, improving the efficiency of pose encoding. Meanwhile, by introducing scene association information, the pose encoding and feature fusion strategy can be adjusted dynamically, improving the adaptability of the feature representation.
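
The third variant above (encode the pose transformation matrix into a pose code, then fuse it with the first feature map) can be sketched as follows. The flatten-and-project encoding, the random `proj` matrix standing in for the trained target coding network, and the additive broadcast fusion are all illustrative assumptions, not details given by the patent:

```python
import numpy as np

def pose_encode(feature_map: np.ndarray, pose: np.ndarray,
                proj: np.ndarray) -> np.ndarray:
    """Encode a pose transformation matrix into a pose code and fuse it
    with the first feature map to obtain the second feature map.

    feature_map: (H, W, C) first feature map of one sensing device.
    pose: (4, 4) pose transformation matrix between the reference
    coordinate system and the device's target coordinate system.
    proj: (16, C) projection standing in for the learned encoding; random
    here, a trained network in practice.
    """
    # Flatten the pose matrix and project it to a C-dimensional pose code.
    pose_code = pose.reshape(-1) @ proj  # (C,)
    # Fuse by broadcasting the pose code over every feature point, so each
    # point carries an explicit spatial attribute.
    return feature_map + pose_code[None, None, :]

rng = np.random.default_rng(0)
fmap = rng.normal(size=(8, 8, 32))
second = pose_encode(fmap, np.eye(4), rng.normal(size=(16, 32)))
```

Additive broadcasting is one of the simplest fusion choices; concatenation along the channel axis followed by a learned projection would serve the same purpose.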