CN-121997630-A - Tower crane working scene modeling method and device
Abstract
The application discloses a method and a device for modeling the working scene of a tower crane, belonging to the technical field of computers. The method comprises: acquiring, during the rotation of the large arm of the tower crane, current frame point cloud data of the working scene of the tower crane and pose information of a laser radar, both collected by the laser radar mounted on the luffing trolley of the tower crane; acquiring angular velocity information of the large arm of the tower crane at the time the laser radar collects the current frame point cloud data; and analyzing the current frame point cloud data based on the angular velocity information and the pose information to obtain a working scene model corresponding to the current frame point cloud data. By fusing the angular velocity information of the large arm of the tower crane at the time the laser radar collects the point cloud data, the method and device can estimate the movement track and environment feature coordinates of the tower crane more accurately, so that the resulting working scene model of the tower crane has higher accuracy.
Inventors
- REN MINGTIAN
Assignees
- 北京东土科技股份有限公司
Dates
- Publication Date
- 2026-05-08
- Application Date
- 2025-12-19
Claims (12)
- 1. A method for modeling the working scene of a tower crane, characterized by comprising the following steps: acquiring, during the rotation of the large arm of the tower crane, current frame point cloud data of the working scene of the tower crane and pose information of a laser radar, both collected by the laser radar of the luffing trolley of the tower crane, and acquiring angular velocity information of the tower crane at the time the laser radar collects the current frame point cloud data; and analyzing the current frame point cloud data based on the angular velocity information and the pose information to obtain a working scene model corresponding to the current frame point cloud data.
- 2. The method for modeling a working scene of a tower crane according to claim 1, wherein analyzing the current frame point cloud data based on the angular velocity information and the pose information to obtain a working scene model corresponding to the current frame point cloud data comprises: for each point in the current frame point cloud data, acquiring position information corresponding to the point based on the angular velocity information and the pose information, wherein the position information comprises a transformation matrix between a large-arm coordinate system and a world coordinate system; and acquiring a working scene model corresponding to the current frame point cloud data based on the position information corresponding to each point in the current frame point cloud data.
- 3. The method for modeling a working scene of a tower crane according to claim 2, wherein acquiring the position information corresponding to each point based on the angular velocity information comprises: acquiring, based on the angular velocity information, the angular velocity of the tower crane body and the angular velocity component corresponding to the rotation of the large arm; and performing motion transformation on each point based on the angular velocity of the tower crane body and the pose information to obtain the transformation matrix corresponding to each point, wherein the transformation matrix is used for indicating the position information corresponding to each point.
- 4. The method for modeling a working scene of a tower crane according to claim 3, wherein after performing the motion transformation on each point based on the angular velocity of the tower crane body, the method further comprises: acquiring a point cloud residual for each point based on the position information of a target plane in the working scene; and iterating based on the point cloud residual of each point to update the transformation matrix corresponding to each point.
- 5. The method for modeling a working scene of a tower crane according to claim 2, wherein acquiring the working scene model corresponding to the current frame point cloud data based on the position information corresponding to each point in the current frame point cloud data comprises: acquiring the working scene model corresponding to the current frame point cloud data based on the position information corresponding to each point in the current frame point cloud data and the position information of a target point in a historical working scene model, wherein the historical working scene model is the working scene model corresponding to each frame of point cloud data before the current frame, and the target point is a point whose distance from a point in the current frame point cloud data is smaller than a target threshold.
- 6. The method for modeling a working scene of a tower crane according to any one of claims 1 to 5, wherein acquiring the angular velocity information of the tower crane when the laser radar collects the current frame point cloud data comprises: acquiring the angular velocity information collected by an inertial measurement unit arranged on the luffing trolley.
- 7. The method for modeling a working scene of a tower crane according to claim 6, wherein, in a case where the inertial measurement unit is built into the laser radar, an included angle between a horizontal axis of the laser radar coordinate system used by the laser radar and the direction of gravity is smaller than an included angle threshold.
- 8. The method for modeling a working scene of a tower crane according to claim 6, wherein, in a case where the laser radar is provided separately from the inertial measurement unit, the distance between the laser radar and the inertial measurement unit is smaller than a distance threshold.
- 9. A device for modeling the working scene of a tower crane, characterized by comprising: an acquisition module, used for acquiring, during the rotation of the large arm of the tower crane, current frame point cloud data of the working scene of the tower crane and pose information of a laser radar, both collected by the laser radar of the luffing trolley of the tower crane, and for acquiring angular velocity information of the tower crane when the laser radar collects the current frame point cloud data; and a modeling module, used for analyzing the current frame point cloud data based on the angular velocity information and the pose information to obtain a working scene model corresponding to the current frame point cloud data.
- 10. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the tower crane working scene modeling method of any one of claims 1-8.
- 11. A non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the tower crane working scene modeling method of any one of claims 1-8.
- 12. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the tower crane working scene modeling method of any one of claims 1-8.
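The map-update step of claim 5 associates each point of the current frame with a "target point" of the historical working scene model only when the two are closer than a target threshold. The following is a minimal illustrative sketch of that association step, not the patented implementation; the function name, the tuple-of-coordinates representation, and the brute-force nearest-neighbor search are assumptions (a real system would use a spatial index such as a k-d tree).

```python
import math

def associate_points(current_frame, history_model, target_threshold):
    """For each point of the current frame, find its 'target point': the
    nearest point of the historical working scene model, kept only if the
    distance is below the target threshold (per claim 5); otherwise None."""
    associations = []
    for p in current_frame:
        best, best_d = None, float("inf")
        for q in history_model:
            d = math.dist(p, q)  # Euclidean distance in world coordinates
            if d < best_d:
                best, best_d = q, d
        # only points within the threshold participate in the model update
        associations.append(best if best_d < target_threshold else None)
    return associations
```

A point near a historical point is merged with it; a point with no historical neighbor within the threshold extends the model as new structure.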
Description
Tower crane working scene modeling method and device
Technical Field
The application belongs to the technical field of computers, and particularly relates to a method and a device for modeling the working scene of a tower crane.
Background
In the related art, modeling the working scene of a tower crane provides a technical foundation for automatic driving of the tower crane. Currently, tower crane working scene modeling is typically performed with a LiDAR-inertial odometry (LIO) system. LIO-based modeling of the tower crane working scene generally comprises scanning from multiple angles with a laser radar arranged on the arm of the tower crane, collecting point cloud data of the tower crane working site, analyzing the point cloud data, and mapping to obtain an environment map of the area below the tower crane. However, the accuracy of this method is limited.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems existing in the prior art. Therefore, the invention provides a method and a device for modeling the working scene of a tower crane, which can improve the accuracy of the modeling.
In a first aspect, the application provides a method for modeling the working scene of a tower crane, comprising the following steps: acquiring, during the rotation of the large arm of the tower crane, current frame point cloud data of the working scene of the tower crane and pose information of a laser radar, both collected by the laser radar of the luffing trolley of the tower crane, and acquiring angular velocity information of the tower crane at the time the laser radar collects the current frame point cloud data; and analyzing the current frame point cloud data based on the angular velocity information and the pose information to obtain a working scene model corresponding to the current frame point cloud data. According to this method, in the process of simultaneous localization and mapping based on LiDAR-inertial odometry, the angular velocity information of the large arm of the tower crane at the time the laser radar collects the point cloud data is fused in, so that the movement track and environment feature coordinates of the tower crane can be estimated more accurately, the working scene model of the tower crane is more accurate, and the accuracy of modeling the working scene of the tower crane is thereby improved.
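One way the angular velocity fusion described above can be pictured is as motion compensation (deskewing): each lidar point is rotated back about the tower's vertical axis by the angle the large arm has swept between the start of the scan and that point's timestamp. The sketch below is a simplified illustration under stated assumptions (slewing about the world z axis, constant angular velocity over one scan, hypothetical function name); it is not the patent's actual algorithm.

```python
import math

def deskew_point(point, dt, omega_z):
    """Rotate a lidar point back about the tower's vertical (z) axis to
    compensate for the large arm slewing at angular velocity omega_z
    (rad/s) during the dt seconds since the scan started.

    Assumes the slewing axis is the world z axis and that omega_z is
    constant over one scan -- simplifications for illustration."""
    theta = omega_z * dt                        # angle swept since scan start
    c, s = math.cos(-theta), math.sin(-theta)   # rotate back by -theta
    x, y, z = point
    return (c * x - s * y, s * x + c * y, z)
```

For example, a point measured 1 s into a sweep while the arm slews at π/2 rad/s is rotated back by 90° so that all points of the frame share a common reference pose.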
According to an embodiment of the present application, analyzing the current frame point cloud data based on the angular velocity information and the pose information to obtain a working scene model corresponding to the current frame point cloud data comprises: for each point in the current frame point cloud data, acquiring position information corresponding to the point based on the angular velocity information and the pose information, wherein the position information comprises a transformation matrix between a large-arm coordinate system and a world coordinate system; and acquiring a working scene model corresponding to the current frame point cloud data based on the position information corresponding to each point in the current frame point cloud data.

According to an embodiment of the present application, acquiring the position information corresponding to each point based on the angular velocity information comprises: acquiring, based on the angular velocity information, the angular velocity of the tower crane body and the angular velocity component corresponding to the rotation of the large arm; and performing motion transformation on each point based on the angular velocity of the tower crane body and the pose information to obtain the transformation matrix corresponding to each point, wherein the transformation matrix is used for indicating the position information corresponding to each point.

According to an embodiment of the present application, after performing the motion transformation on each point based on the angular velocity of the tower crane body to obtain the transformation matrix corresponding to each point, the method further comprises: acquiring a point cloud residual for each point based on the position information of a target plane in the working scene; and iterating based on the point cloud residual of each point to update the transformation matrix corresponding to each point.
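The residual-driven iteration described above can be illustrated with a point-to-plane residual and a deliberately simplified update loop. Note the simplifications: real LIO systems update the full transformation matrix with an iterated Kalman filter or Gauss-Newton over many planes, whereas this sketch refines only a translation along one plane normal; all names are hypothetical.

```python
def point_to_plane_residual(point, plane_point, plane_normal):
    """Signed distance from a transformed point to a target plane in the
    working scene (the 'point cloud residual' of claims 3-4).
    plane_normal is assumed to be unit length."""
    return sum(n * (p - q) for p, q, n in zip(point, plane_point, plane_normal))

def refine_translation(points, plane_point, plane_normal, iters=10, step=0.5):
    """Illustrative iteration: nudge a translation along the plane normal
    so that the mean point-to-plane residual of the frame shrinks.
    This one-dimensional loop only shows the residual -> update cycle."""
    t = [0.0, 0.0, 0.0]
    for _ in range(iters):
        mean_r = sum(
            point_to_plane_residual(
                [p + d for p, d in zip(pt, t)], plane_point, plane_normal)
            for pt in points) / len(points)
        # move against the mean residual along the plane normal
        t = [ti - step * mean_r * ni for ti, ni in zip(t, plane_normal)]
    return t
```

With each pass the residuals of the transformed points decrease, mirroring the claimed cycle of computing point cloud residuals and updating the transformation until the frame aligns with the scene geometry.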
According to an embodiment of the present application, acquiring the working scene model corresponding to the current frame point cloud data based on the position information corresponding to each point in the current frame point cloud data comprises: acquiring a working scene model corresponding to the current frame point cloud data based on the position information corresponding to each point in the current frame point