
CN-122024507-A - Auxiliary positioning method and system based on multiple sensors

CN122024507A

Abstract

The invention provides a multi-sensor auxiliary positioning method and system that address the poor environmental perception of existing loading platforms during operation. The method comprises: acquiring surround-view image data, front image data, and side distance data from sensors on a platform body; determining, from the front image data, active perception data for the included angle between the platform body and the centerline of a front transfer plate; determining, from the side distance data, active perception data for the attitude difference between the platform body and a side transport vehicle; and displaying the surround-view image on a human-machine interface, superimposing a graphic identifier of the platform body onto the surround-view image according to the position mapping between the images, and generating graphical guidance for maneuvering the platform body from the active perception data. With this auxiliary positioning process, the driver can park the platform at the designated position more accurately, adjustment time is shortened, and subsequent operations are made easier.

Inventors

  • YAN ZEXIN
  • WANG MINGFEI
  • LI HONGNA
  • LI WENHUA

Assignees

  • 北京特种机械研究所 (Beijing Institute of Specialized Machinery)

Dates

Publication Date
2026-05-12
Application Date
2025-12-03

Claims (10)

  1. A multi-sensor-based auxiliary positioning method, comprising: acquiring surround-view image data, front image data, and side distance data from sensors on a platform body; determining, from the front image data, active perception data for the included angle between the platform body and the centerline of a front transfer plate, and determining, from the side distance data, active perception data for the attitude difference between the platform body and a side transport vehicle; and displaying the surround-view image on a human-machine interface, superimposing a graphic identifier of the platform body onto the surround-view image according to the position mapping between the images, and generating graphical guidance for maneuvering the platform body from the active perception data.
  2. The multi-sensor-based auxiliary positioning method of claim 1, wherein acquiring the surround-view image data, front image data, and side distance data comprises: establishing a common time sequence for the surround-view image data, the front image data, and the side distance data; at each time node, forming a position mapping between the portion of the surround-view image containing the side transport vehicle and the side distance data; and, at the same time node, forming a position mapping between the portion of the surround-view image containing the front transfer plate and the front image.
  3. The multi-sensor-based auxiliary positioning method of claim 1, wherein determining the active perception data for the included angle between the platform body and the centerline of the front transfer plate from the front image data comprises: calibrating the platform vehicle's width, and determining the position of the platform centerline in the front image from the calibration and the image generation parameters; identifying the contour of the front transfer plate in the front image, and determining the plate centerline position from the contour; and quantifying the included angle between the centerlines by comparing the positions of the platform centerline and the plate centerline in the front image.
  4. The multi-sensor-based auxiliary positioning method of claim 1, wherein determining the active perception data for the attitude difference between the platform body and the side transport vehicle from the side distance data comprises: identifying point cloud data of reflector targets in the side distance data, and determining from the point clouds of the two reflector targets that the platform has reached the transfer range; determining the coordinates of each reflector target relative to the lidar from the point clouds of the two targets; and determining, from the target coordinates, the relative height deviation, relative longitudinal deviation, and relative lateral deviation between the platform body and the side transport vehicle, and deriving the orientation angle between the platform body and the transport vehicle from the deviation data.
  5. The multi-sensor-based auxiliary positioning method of claim 1, wherein the surround-view image data is acquired by fisheye cameras arranged around the circumference of the platform, with the fields of view of adjacent fisheye cameras overlapping.
  6. The multi-sensor-based auxiliary positioning method of claim 1, wherein the side distance data is acquired by multi-line lidars arranged symmetrically on both sides of the platform, and a pair of reflector targets made of high-reflectivity material is mounted horizontally on the corresponding side wall of the transport vehicle, at a fixed spacing and at the same height.
  7. The multi-sensor-based auxiliary positioning method of claim 1, wherein the active perception data for the attitude difference comprises the relative lateral distance, relative height deviation, relative longitudinal deviation, and orientation angle with respect to the transfer position of the transport vehicle.
  8. The multi-sensor-based auxiliary positioning method of claim 1, wherein the active perception data for the centerline angle comprises the spacing at zero included angle.
  9. A multi-sensor-based auxiliary positioning system, comprising: a memory for storing program code for the multi-sensor-based auxiliary positioning method of any one of claims 1 to 8; and a processor for executing the program code.
  10. A multi-sensor-based auxiliary positioning system, comprising: a data acquisition device for acquiring surround-view image data, front image data, and side distance data from sensors on a platform body; a data forming device for determining, from the front image data, active perception data for the included angle between the platform body and the centerline of a front transfer plate, and for determining, from the side distance data, active perception data for the attitude difference between the platform body and a side transport vehicle; and an interface generating device for displaying the surround-view image on a human-machine interface, superimposing a graphic identifier of the platform body onto the surround-view image according to the position mapping between the images, and generating graphical guidance for maneuvering the platform body from the active perception data.
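The centerline comparison described in claim 3 can be sketched in code. The following is a minimal illustration, not the patent's implementation: it assumes each centerline is already extracted as two pixel endpoints, and it measures the included angle directly in pixel space. A real system would first rectify the image using the camera calibration (the "generation parameters" mentioned in the claim); the function and coordinate conventions are assumptions for illustration.

```python
import math

def centerline_angle(platform_line, plate_line):
    """Quantify the included angle between the platform centerline and the
    transfer-plate centerline as seen in the front camera image.

    Each line is ((u1, v1), (u2, v2)) in pixel coordinates, ordered from the
    bottom of the image (near the platform) toward the top (far away).
    """
    def direction(line):
        (u1, v1), (u2, v2) = line
        # Angle of the line measured against the image's vertical axis.
        return math.atan2(u2 - u1, v2 - v1)

    diff = math.degrees(direction(plate_line) - direction(platform_line))
    # Wrap the difference into (-180, 180] degrees.
    return (diff + 180.0) % 360.0 - 180.0

# Platform centerline vertical in the image; plate centerline tilted slightly.
print(centerline_angle(((320, 480), (320, 0)), ((300, 480), (340, 0))))
# prints approximately -4.76 (plate tilted ~4.8 degrees relative to platform)
```

At zero included angle the two lines are parallel, and the remaining perception quantity is their pixel spacing (claim 8's "spacing at zero angle").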

Description

Auxiliary positioning method and system based on multiple sensors

Technical Field

The invention relates to the field of vehicle positioning, and in particular to an auxiliary positioning method and system based on multiple sensors.

Background

Medium and large loading platforms (manned vehicles at least 5 m long and at least 2 m wide) are normally driven by a driver during transport and maneuvering. A surround-view camera arrangement can display a 360-degree image of the surroundings and widen the driver's field of view, but it cannot provide the driver with more accurate and intuitive operating guidance. Because the platform is large and heavily loaded, the perception information provided by the surround-view image is very limited in the absence of obvious markers, and a driver who must bring the vehicle to a designated position using only the surround-view image and personal experience faces a difficult task. For centering and lateral-positioning requirements in particular, large errors arise in the lateral and longitudinal distances, and the resulting frequent adjustments delay subsequent operations. Improving the loading platform's environmental perception helps the driver reach the designated position more accurately.

Disclosure of Invention

In view of the above problems, an embodiment of the invention provides a multi-sensor auxiliary positioning method and system that address the poor environmental perception of existing platforms during operation.
The multi-sensor auxiliary positioning method provided by the embodiment of the invention comprises the following steps: acquiring surround-view image data, front image data, and side distance data from sensors on a platform body; determining, from the front image data, active perception data for the included angle between the platform body and the centerline of a front transfer plate, and determining, from the side distance data, active perception data for the attitude difference between the platform body and a side transport vehicle; and displaying the surround-view image on a human-machine interface, superimposing a graphic identifier of the platform body onto the surround-view image according to the position mapping between the images, and generating graphical guidance for maneuvering the platform body from the active perception data. In an embodiment of the invention, acquiring the surround-view image data, front image data, and side distance data comprises: establishing a common time sequence for the surround-view image data, the front image data, and the side distance data; at each time node, forming a position mapping between the portion of the surround-view image containing the side transport vehicle and the side distance data; and, at the same time node, forming a position mapping between the portion of the surround-view image containing the front transfer plate and the front image.
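The common time sequence described above requires pairing frames from the three sensor streams by timestamp. The following is a simplified sketch of such time-node matching, assuming each stream provides a sorted list of timestamps in seconds; the function name and the 50 ms tolerance are illustrative assumptions, not values from the patent.

```python
import bisect

def align_streams(surround_ts, front_ts, side_ts, tol=0.05):
    """Pair each surround-view frame with the nearest front-camera frame and
    lidar sweep in time. Returns (surround_idx, front_idx, side_idx) triples;
    frames whose partners fall outside `tol` seconds are dropped.
    """
    def nearest(ts_list, t):
        i = bisect.bisect_left(ts_list, t)
        cands = [j for j in (i - 1, i) if 0 <= j < len(ts_list)]
        best = min(cands, key=lambda j: abs(ts_list[j] - t))
        return best if abs(ts_list[best] - t) <= tol else None

    matched = []
    for k, t in enumerate(surround_ts):
        f = nearest(front_ts, t)
        s = nearest(side_ts, t)
        if f is not None and s is not None:
            matched.append((k, f, s))
    return matched

# Third surround frame is dropped: the closest front frame is 90 ms away.
print(align_streams([0.00, 0.10, 0.20], [0.01, 0.11, 0.30], [0.02, 0.12, 0.21]))
# prints [(0, 0, 0), (1, 1, 1)]
```

Each matched triple then supplies the images and point cloud for one position mapping at that time node.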
In an embodiment of the invention, determining the active perception data for the included angle between the platform body and the centerline of the front transfer plate from the front image data comprises: calibrating the platform vehicle's width, and determining the position of the platform centerline in the front image from the calibration and the image generation parameters; identifying the contour of the front transfer plate in the front image, and determining the plate centerline position from the contour; and quantifying the included angle between the centerlines by comparing the positions of the platform centerline and the plate centerline in the front image. In an embodiment of the invention, determining the active perception data for the attitude difference between the platform body and the side transport vehicle from the side distance data comprises: identifying point cloud data of reflector targets in the side distance data, and determining from the point clouds of the two reflector targets that the platform has reached the transfer range; determining the coordinates of each reflector target relative to the lidar from the point clouds of the two targets; and determining, from the target coordinates, the relative height deviation, relative longitudinal deviation, and relative lateral deviation between the platform body and the side transport vehicle, and deriving the orientation angle between the platform body and the transport vehicle from the deviation data.
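The deviation computation from the two reflector targets can be sketched as follows. This is a minimal illustration under assumed conventions, not the patent's implementation: each target coordinate is taken as an (x, y, z) tuple in the lidar frame with x longitudinal (forward), y lateral (toward the transport vehicle), and z height, and the two targets are assumed to lie along the vehicle's side wall as claim 6 describes.

```python
import math

def pose_deviation(target_a, target_b):
    """Estimate the platform-to-transport-vehicle pose deviation from the
    coordinates of two reflector targets measured in the lidar frame.
    Axis conventions (x forward, y lateral, z up) are assumptions.
    """
    ax, ay, az = target_a
    bx, by, bz = target_b
    # The midpoint of the target pair serves as the transfer reference point.
    mid_x = (ax + bx) / 2.0
    mid_y = (ay + by) / 2.0
    mid_z = (az + bz) / 2.0
    # Orientation angle: yaw of the target baseline (mounted along the side
    # wall) relative to the platform's longitudinal axis.
    yaw = math.degrees(math.atan2(by - ay, bx - ax))
    return {
        "longitudinal_dev_m": mid_x,
        "lateral_dev_m": mid_y,
        "height_dev_m": mid_z,
        "orientation_deg": yaw,
    }

# Two reflectors 2 m apart on the side wall, slightly yawed and offset.
print(pose_deviation((1.0, 3.00, 0.10), (3.0, 3.10, 0.10)))
```

These four quantities match the attitude-difference data listed in claim 7 (relative lateral distance, height deviation, longitudinal deviation, and orientation angle) and would feed the graphical maneuvering guidance on the human-machine interface.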