CN-122008195-A - Three-dimensional visualization method of five-degree-of-freedom hybrid robot
Abstract
The invention discloses a three-dimensional visualization method for a five-degree-of-freedom hybrid robot, relating to the field of robot visualization technology. The method comprises the following steps: acquiring real-time joint data collected by position sensors deployed on each active joint of the robot; performing forward kinematics calculation on the real-time joint data to obtain end pose data of the robot; performing inverse kinematics calculation on the end pose data to obtain theoretical pose data describing the spatial position and orientation of each part of the robot; and acquiring and parsing a three-dimensional surface model file of the robot and extracting initial geometric data of each part. Through the kinematics of the five-degree-of-freedom hybrid (parallel-serial) robot, the invention realizes constraint calculation for the closed-loop kinematic chain of the hybrid mechanism and ensures an accurate description of the pose of each component.
Inventors
- WANG MANXIN
- CHEN LONGTENG
- FENG HUTIAN
Assignees
- Nanjing University of Science and Technology (南京理工大学)
Dates
- Publication Date
- 20260512
- Application Date
- 20260126
Claims (10)
- 1. A three-dimensional visualization method of a five-degree-of-freedom hybrid robot, characterized by comprising the following steps: acquiring real-time joint data collected by position sensors deployed on each active joint of the robot; performing forward kinematics calculation on the real-time joint data to obtain end pose data of the robot; performing inverse kinematics calculation on the end pose data to obtain theoretical pose data describing the spatial position and orientation of each part of the robot; acquiring and parsing a three-dimensional surface model file of the robot, and extracting initial geometric data of each part; performing redundancy elimination and smoothing on the initial geometric data of each part to obtain optimized model data of each part, and loading the optimized model data of each part into the video memory of a graphics processing unit through a graphics interface; based on the optimized model data loaded into the video memory, invoking a graphics rendering pipeline to execute view transformation and illumination calculation, and generating rendered image data of each part; and applying the theoretical pose data to the graphics rendering pipeline, driving the graphics processing unit to perform spatial transformation and dynamic assembly of the rendered image data of each part based on the theoretical pose data, and synthesizing and outputting a dynamic three-dimensional visual image of the whole robot.
- 2. The three-dimensional visualization method of the five-degree-of-freedom hybrid robot of claim 1, wherein acquiring the real-time joint data comprises the following steps: receiving motor pulse data, collected by an encoder, corresponding to the movement of each active joint, and transmitting the motor pulse data to a main control unit through an industrial bus; in the main control unit, calculating a displacement variable from the motor pulse data corresponding to each linear active joint, the encoder's pulses per revolution, and the lead of the lead screw; calculating an angle variable from the motor pulse data corresponding to each rotary active joint and the encoder's pulses per revolution; and taking the displacement variable and the angle variable as the real-time joint data.
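The pulse-to-joint-variable conversion in claim 2 can be sketched as follows. The encoder resolution and screw lead below are assumed values for illustration, not parameters from the patent.

```python
import math

PULSES_PER_REV = 10000   # encoder pulses per motor revolution (assumed)
SCREW_LEAD_MM = 10.0     # lead screw travel per revolution, in mm (assumed)

def linear_joint_displacement(pulses: int) -> float:
    """Displacement of a linear active joint, in mm: revolutions times screw lead."""
    return pulses / PULSES_PER_REV * SCREW_LEAD_MM

def rotary_joint_angle(pulses: int) -> float:
    """Angle of a rotary active joint, in radians: revolutions times 2*pi."""
    return pulses / PULSES_PER_REV * 2.0 * math.pi
```

A gearbox between motor and joint would add a reduction-ratio factor to both formulas; the claim does not mention one, so none is shown here.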
- 3. The three-dimensional visualization method of the five-degree-of-freedom hybrid robot of claim 1, wherein the forward kinematics calculation on the real-time joint data specifically comprises the following steps: inputting the displacement variables of the linear active joints into a pre-trained BP neural network model to obtain an initial position estimate of the end point of the parallel mechanism; iteratively refining the initial position estimate using the Newton-Raphson method to obtain high-precision end position data of the parallel mechanism; and calculating the end pose data from the end position data of the parallel mechanism, combined with the angle variables of the rotary active joints and the geometric parameters of the robot's serial rotating head structure.
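The refinement step in claim 3 amounts to Newton-Raphson iteration on the residual between the (exact, known) inverse kinematics of the estimated end position and the measured joint displacements. A minimal sketch, with a stand-in `demo_ik` in place of the robot's real inverse kinematics:

```python
import numpy as np

def newton_refine(ik, q_measured, p0, tol=1e-10, max_iter=50):
    """Polish an initial end-position guess p0 until ik(p) matches q_measured."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        r = ik(p) - q_measured
        if np.linalg.norm(r) < tol:
            break
        # forward-difference numerical Jacobian of ik at p
        eps = 1e-7
        f0 = ik(p)
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = p.copy()
            dp[j] += eps
            J[:, j] = (ik(dp) - f0) / eps
        p = p - np.linalg.solve(J, r)   # Newton step
    return p

def demo_ik(p):
    """Stand-in inverse kinematics (NOT the robot's real model)."""
    return np.array([p[0] + p[1] ** 2, p[0] * p[1]])
```

In the patented method the initial guess would come from the BP network of claim 4; here any nearby starting point demonstrates the quadratic convergence of the iteration.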
- 4. The three-dimensional visualization method of the five-degree-of-freedom hybrid robot of claim 3, wherein the BP neural network model is obtained by the following training steps: randomly sampling end-point coordinate positions in the workspace of the robot's parallel mechanism; calculating, through inverse kinematics, the displacements of the linear active joints corresponding to each coordinate position; taking the displacements as input features and the corresponding coordinate positions as output labels to form a training sample set; initializing the weights and thresholds of the BP neural network; adopting a sparrow search algorithm to iteratively optimize the initial weights and thresholds, with the network's prediction error on the sample set as the fitness, to obtain optimized initial parameters; and, starting from the optimized initial parameters, performing supervised training of the BP neural network on the sample set until convergence, to obtain the pre-trained BP neural network model.
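The idea of claim 4's initialization step can be illustrated with a heavily simplified population-based search standing in for the sparrow search algorithm: candidate parameter vectors are ranked by fitness, "discoverers" explore around their positions, and the rest move toward the current best. The fitness below is a toy quadratic, not a real BP network's prediction error.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(w):
    # stand-in for "network prediction error on the sample set";
    # the (assumed) optimum is at [0.5, -1.0]
    return float(np.sum((w - np.array([0.5, -1.0])) ** 2))

def simplified_ssa(fitness, dim=2, pop=20, iters=100, discoverer_ratio=0.2):
    X = rng.uniform(-3, 3, size=(pop, dim))
    n_disc = max(1, int(pop * discoverer_ratio))
    for _ in range(iters):
        f = np.array([fitness(x) for x in X])
        X = X[np.argsort(f)]                 # rank population by fitness
        best = X[0].copy()
        # discoverers: exploratory perturbations around their own positions
        X[:n_disc] += rng.normal(0.0, 0.1, size=(n_disc, dim))
        # joiners: move toward the current best, with small noise
        X[n_disc:] += 0.5 * (best - X[n_disc:]) + rng.normal(0.0, 0.05, size=(pop - n_disc, dim))
    f = np.array([fitness(x) for x in X])
    return X[np.argmin(f)]
```

The real algorithm additionally includes scout (alert) behaviour and adaptive step rules; the resulting best vector would then seed gradient-based BP training as the claim describes.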
- 5. The three-dimensional visualization method of the five-degree-of-freedom hybrid robot of claim 4, wherein the parameters of the sparrow search algorithm include the population size, the maximum number of iterations, and the ratios of discoverers, joiners, and scouts.
- 6. The three-dimensional visualization method of the five-degree-of-freedom hybrid robot of claim 1, wherein acquiring and parsing the three-dimensional surface model file specifically comprises the following steps: reading an STL model file in ASCII format; extracting the normal vector data of each triangular face by identifying the keyword 'facet normal' in the file, and extracting the vertex coordinate data of each triangular face by identifying the keyword 'vertex'; and determining the total number of triangular faces contained in the three-dimensional surface model file from the number of lines in the STL model file.
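The keyword-driven extraction in claim 6 can be sketched directly, since the ASCII STL grammar fixes the `facet normal` and `vertex` lines. `SAMPLE` is a minimal one-triangle file for illustration.

```python
def parse_ascii_stl(text):
    """Extract per-facet normals and per-facet vertex coordinates from ASCII STL."""
    normals, vertices = [], []
    for line in text.splitlines():
        parts = line.split()
        if parts[:2] == ["facet", "normal"]:
            normals.append(tuple(float(v) for v in parts[2:5]))
        elif parts[:1] == ["vertex"]:
            vertices.append(tuple(float(v) for v in parts[1:4]))
    return normals, vertices

SAMPLE = """solid part
facet normal 0 0 1
 outer loop
  vertex 0 0 0
  vertex 1 0 0
  vertex 0 1 0
 endloop
endfacet
endsolid part"""
```

Each facet occupies seven lines (`facet normal` through `endfacet`), which is what lets the claim recover the facet count from the file's line count after subtracting the `solid`/`endsolid` header and footer.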
- 7. The three-dimensional visualization method of the five-degree-of-freedom hybrid robot of claim 6, wherein performing redundancy elimination and smoothing on the initial geometric data of each part specifically comprises the following steps: traversing all vertex coordinate data of each part, merging vertices with the same spatial position, and assigning an index to each unique vertex to generate a vertex coordinate array and an index array; calculating the average normal vector corresponding to each unique vertex based on the index array and the initial face normal vector data of the corresponding part, and generating a vertex normal vector array of the corresponding part; and iteratively adjusting the vertex positions in the vertex coordinate array using a Laplace smoothing algorithm, and recalculating the vertex normal vector array of the corresponding part based on the adjusted vertex coordinates; wherein the optimized model data comprises at least the vertex coordinate array, the vertex normal vector array, and the index array of the corresponding part.
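The merge-and-index step and one uniform-weight Laplace smoothing pass from claim 7 can be sketched as follows; the patent does not specify the smoothing weights or iteration count, so `lam=0.5` and a single pass are assumptions.

```python
import numpy as np

def build_indexed_mesh(tri_vertices):
    """Merge duplicate vertices (3 per triangle in tri_vertices) into a
    unique vertex array plus an index array."""
    unique, index = {}, []
    for v in tri_vertices:
        if v not in unique:
            unique[v] = len(unique)
        index.append(unique[v])
    verts = np.array(list(unique.keys()), dtype=float)
    return verts, np.array(index, dtype=np.int32)

def laplace_smooth(verts, index, lam=0.5):
    """One uniform Laplacian step: move each vertex toward its neighbours' mean."""
    nbrs = [set() for _ in verts]
    for t in index.reshape(-1, 3):
        for a in range(3):
            nbrs[t[a]].add(int(t[(a + 1) % 3]))
            nbrs[t[a]].add(int(t[(a + 2) % 3]))
    out = verts.copy()
    for i, ns in enumerate(nbrs):
        if ns:
            out[i] += lam * (verts[list(ns)].mean(axis=0) - verts[i])
    return out
```

Per-vertex normals would then be recomputed by averaging the face normals of all triangles referencing each index, as the claim describes.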
- 8. The three-dimensional visualization method of the five-degree-of-freedom hybrid robot of claim 7, wherein loading the optimized model data into the video memory of the graphics processing unit through the graphics interface specifically comprises the following steps: creating and binding a vertex array object for each part, and creating at least two vertex buffer objects; storing the vertex coordinate array in the optimized model data corresponding to each part in a first vertex buffer object, and storing the vertex normal vector array in a second vertex buffer object; configuring the vertex attribute pointers of the vertex array object to respectively describe the organization format, offset, and stride of the vertex coordinate data and the vertex normal vector data; and associating the first and second vertex buffer objects with the vertex array object, and uploading the data to the video memory of the graphics processing unit.
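The buffer layout of claim 8 can be illustrated on the CPU side. With one tightly packed VBO per attribute, each attribute has 3 float components, a stride of one vertex's worth of floats, and an offset of 0; the actual upload would go through OpenGL calls such as `glBufferData` and `glVertexAttribPointer`, which are not shown here since they need a live GL context.

```python
import numpy as np

# example per-part arrays (one triangle), in the float32 layout GPUs expect
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=np.float32)
norms = np.array([[0, 0, 1]] * 3, dtype=np.float32)

FLOAT_SIZE = np.dtype(np.float32).itemsize  # 4 bytes

# the parameters that would be passed to the attribute pointer for each VBO:
# 3 components per vertex, tightly packed, starting at byte 0
position_attrib = dict(size=3, stride=3 * FLOAT_SIZE, offset=0)
normal_attrib = dict(size=3, stride=3 * FLOAT_SIZE, offset=0)
```

Interleaving both attributes in a single VBO is the common alternative; the claim's two-buffer scheme keeps each attribute's stride and offset trivial at the cost of one extra buffer binding.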
- 9. The three-dimensional visualization method of the five-degree-of-freedom hybrid robot of claim 1, wherein the view transformation calculation specifically comprises the following steps: setting spherical coordinate parameters of a virtual camera, the spherical coordinate parameters comprising at least an azimuth angle, a pitch angle, and an observation radius; calculating the three-dimensional position coordinates of the virtual camera in the world coordinate system from the azimuth angle, the pitch angle, and the observation radius; taking the origin of the robot's coordinate system as the observation target point, and constructing a view transformation matrix from the world coordinate system to the camera coordinate system based on the position coordinates of the virtual camera and the observation target point; adopting an orthographic projection mode, and constructing an orthographic projection matrix from preset projection volume parameters; and converting the vertex coordinates of each part, after transformation by the theoretical pose data, from the world coordinate system to normalized device coordinates based on the view transformation matrix and the orthographic projection matrix.
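The camera setup in claim 9 can be sketched with standard formulas: spherical coordinates give the camera position, a look-at construction gives the view matrix, and an orthographic matrix maps the view volume to normalized device coordinates. The axis convention (z up, OpenGL-style camera looking down its local -z) is an assumption, since the patent does not fix one.

```python
import numpy as np

def camera_position(azimuth, pitch, radius):
    """World-space camera position from spherical coordinates (z up)."""
    return radius * np.array([
        np.cos(pitch) * np.cos(azimuth),
        np.cos(pitch) * np.sin(azimuth),
        np.sin(pitch),
    ])

def look_at(eye, target, up=(0.0, 0.0, 1.0)):
    """View matrix: world -> camera, camera looking along its local -z."""
    f = target - eye
    f = f / np.linalg.norm(f)            # forward
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)            # right
    u = np.cross(s, f)                   # true up
    V = np.eye(4)
    V[0, :3], V[1, :3], V[2, :3] = s, u, -f
    V[:3, 3] = -V[:3, :3] @ eye
    return V

def ortho(l, r, b, t, n, f):
    """Orthographic projection matrix for the box [l,r]x[b,t]x[n,f]."""
    P = np.eye(4)
    P[0, 0] = 2.0 / (r - l)
    P[1, 1] = 2.0 / (t - b)
    P[2, 2] = -2.0 / (f - n)
    P[:3, 3] = [-(r + l) / (r - l), -(t + b) / (t - b), -(f + n) / (f - n)]
    return P
```

A vertex already placed by the theoretical pose data is then mapped by `P @ V @ world_vertex`, as the claim's last step describes.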
- 10. The three-dimensional visualization method of the five-degree-of-freedom hybrid robot of claim 1, wherein the illumination calculation uses the Phong illumination model and specifically comprises the following steps: calculating the ambient light component acting on each part model; calculating the diffuse reflection component based on the light source direction and the normal vectors of the corresponding part model surface; calculating the specular highlight component based on the observer direction and the reflected light direction; and superimposing the ambient light, diffuse reflection, and specular highlight components, fusing them with the surface color of the part model, and outputting the final illumination color for each point on the surface of the part model.
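The three components of claim 10's Phong model can be written out directly. The coefficients `ka`, `kd`, `ks`, and `shininess` are illustrative defaults, not values from the patent; all direction vectors point away from the surface point.

```python
import numpy as np

def phong(normal, light_dir, view_dir, base_color,
          ka=0.1, kd=0.7, ks=0.2, shininess=32):
    """Phong shading: ambient + diffuse + specular, modulated by surface color."""
    n = np.asarray(normal, float); n = n / np.linalg.norm(n)
    l = np.asarray(light_dir, float); l = l / np.linalg.norm(l)
    v = np.asarray(view_dir, float); v = v / np.linalg.norm(v)
    ambient = ka
    diffuse = kd * max(np.dot(n, l), 0.0)
    r = 2.0 * np.dot(n, l) * n - l                  # reflected light direction
    specular = ks * max(np.dot(r, v), 0.0) ** shininess
    return np.clip((ambient + diffuse + specular) * np.asarray(base_color, float),
                   0.0, 1.0)
```

With light and viewer head-on to the surface, all three terms contribute; at grazing incidence only the ambient term survives, which is what keeps unlit faces visible in the rendered image.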
Description
Three-dimensional visualization method of five-degree-of-freedom hybrid robot
Technical Field
The invention relates to the field of robot visualization technology, and in particular to a three-dimensional visualization method for a five-degree-of-freedom hybrid robot.
Background
Three-dimensional visualization is one of the core technical supports of the machining-oriented hybrid robot and directly determines the depth and breadth of its intelligent application. Three-dimensional visualization can be understood as a technical means of intuitively presenting the motion state, operation scene, and component morphology of a hybrid (parallel-serial) robot through digital modeling and real-time rendering. It can generally be divided into two categories: modeling visualization and motion visualization. Motion visualization refers to the real-time visual presentation of the robot executing multi-axis linkage machining, path planning verification, and machining effect prediction under complex working conditions; its quality reflects the hybrid robot's multi-axis cooperative control and machining process monitoring capability. Motion visualization is important in the performance verification and application optimization of the hybrid robot: it can intuitively reflect the coordination state of each motion axis, the rationality of interpolation paths, and the accuracy of machining effects, and is a key index for measuring the robot's intelligence level and actual operational reliability. With the introduction of multi-axis linkage real-time monitoring requirements for hybrid robots, practical applications of three-dimensional visualization have gradually attracted attention.
The traditional three-dimensional visualization method installs a position sensor, such as a motor encoder, at each active joint of the robot, builds a coordinate system for each link of the robot using a modeling method such as the DH parameter method or a URDF description, and realizes pose mapping in combination with the forward kinematics solution, thereby achieving real-time pose updating and three-dimensional model driving for each link. However, this method is only suitable for robots with purely open-loop kinematic chains, such as serial robots and legged robots, and cannot express the motion constraint characteristics of the parallel limbs in a parallel mechanism.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a three-dimensional visualization method for a five-degree-of-freedom hybrid robot. To achieve the above object, the technical scheme of the invention is as follows: a three-dimensional visualization method of a five-degree-of-freedom hybrid robot comprises the following steps: acquiring real-time joint data collected by position sensors deployed on each active joint of the robot; performing forward kinematics calculation on the real-time joint data to obtain end pose data of the robot; performing inverse kinematics calculation on the end pose data to obtain theoretical pose data describing the spatial position and orientation of each part of the robot; acquiring and parsing a three-dimensional surface model file of the robot, and extracting initial geometric data of each part; performing redundancy elimination and smoothing on the initial geometric data of each part to obtain optimized model data of each part, and loading the optimized model data of each part into the video memory of a graphics processing unit through a graphics interface; based on the optimized model data loaded into the video memory, invoking a graphics rendering pipeline to execute view transformation and illumination calculation, and generating rendered image data of each part; and applying the theoretical pose data to the graphics rendering pipeline, driving the graphics processing unit to perform spatial transformation and dynamic assembly of the rendered image data of each part based on the theoretical pose data, and synthesizing and outputting a dynamic three-dimensional visual image of the whole robot.
Preferably, acquiring real-time joint data specifically includes: receiving motor pulse data, collected by an encoder, corresponding to the movement of each active joint, and transmitting the motor pulse data to a main control unit through an industrial bus; in the main control unit, calculating a displacement variable from the motor pulse data corresponding to each linear active joint, the encoder's pulses per revolution, and the lead of the lead screw; calculating an angle variable from the motor pulse data corresponding to each rotary active joint and the encoder's pulses per revolution; and taking the displacement variable and the angle variable as real-time joint data.