
CN-120807957-B - Pattern-recognition-based AI system and method for precise recognition and grasping in hook uncoupling and recoupling

CN 120807957 B

Abstract

The invention relates to the technical field of image state recognition, and in particular to a pattern-recognition-based AI system and method for precisely recognizing and grasping hooks during uncoupling and recoupling operations. Nodes are constructed from the contour changes, boundary difference values, and gray-level dynamics of the hook component across an image sequence, combined with accumulated edge-displacement analysis. A graph neural network jointly compares cosine values and coordinate differences between nodes to build a path-jump sequence, which sharpens the response to abrupt state changes and allows the critical path of morphological evolution to be extracted stably even under complex background interference or partial occlusion, effectively improving the interference resistance and fault tolerance of spatial path recognition. A hidden Markov model then statistically models abrupt changes in state rate; anomalous segments are merged or eliminated by matching against standard state patterns, yielding a state-label sequence that precisely partitions high-confidence, multi-segment continuous states.

Inventors

  • YU WEIHUA
  • WANG XIN
  • CHEN JUNJIE
  • SU YAO
  • HUANG XIN
  • ZHOU XIAODONG
  • YANG GUODONG
  • RAO ZHIHAO
  • HE HAISHENG
  • ZHANG JIAFENG

Assignees

  • 国能宁夏大坝四期发电有限公司 (Guoneng Ningxia Daba Phase IV Power Generation Co., Ltd.)

Dates

Publication Date
2026-05-08
Application Date
2025-07-07

Claims (7)

  1. A pattern-recognition-based AI precise recognition and grasping system for hook uncoupling and recoupling, characterized by comprising:
the dynamic node construction module, which acquires through a camera a set of image sequence frames during the hook uncoupling and recoupling operation, performs hook-component contour mapping, boundary difference calculation, regional gray-level change calculation, and edge-displacement accumulation, and establishes a sequence node structure;
the state path analysis module, which, based on the sequence node structure, uses a graph neural network to jointly compare node cosine values and coordinate differences, extracts path-weight branches, counts state-vector jump amplitudes, and combines them into a sequence to obtain key morphological features; the state path analysis module comprises:
the point association judging sub-module, which, based on the sequence node structure, uses the graph neural network to perform numerical cosine comparison between the state vectors of any two nodes and extract the included-angle value, forms a weighted combination of the inter-node center-coordinate distance and the included-angle value, compares it with a preset threshold, retains the node pairs satisfying the connection condition, and forms a continuous connection mapping to obtain a graph-structure connection relation table;
the path weight extraction sub-module, which, based on the graph-structure connection relation table, accumulates the state-vector change amplitudes between adjacent nodes on every connection path to obtain path weight values, computes the product of length and weight for every path, extracts the maximum-value paths in order, and fully extracts the node indices on the maximum path branch to obtain an adaptive path weight set;
the state segment construction sub-module, which, based on the adaptive path weight set, extracts state-value mutation points among the path node indices, screens them by an amplitude threshold, aggregates the sequence data on both sides of each jump point into state-change segments, and merges continuous segments by direction-change trend to obtain the key morphological features;
the state label judging module, which, based on the key morphological features, uses a hidden Markov model to screen state-rate mutation points, performs standard-value matching and classification of the state-change segments, merges repeated label segments and removes invalid segments, and establishes a polymorphic label sequence;
the attitude angle calculating module, which, based on the polymorphic label sequence, computes the offset of the regional center point and the included-angle trend, extracts the continuous change values of the pitch, yaw, and roll angles by direction channel, and obtains an attitude rotation angle vector set;
and the space state fusion module, which, based on the attitude rotation angle vector set and the polymorphic label sequence, binds each frame's state-attitude combination, counts the difference ranges of repeated combinations and generates joint labels, and establishes a fusion labeling result set;
the graph neural network scores node pairs according to the formula (with symbols as defined below):

W_ij = arccos( (h_i · h_j) / (‖h_i‖ ‖h_j‖ + ε) ) + α ‖c_i − c_j‖ + β |Δv̄_i − Δv̄_j| + γ (1 − |t_i − t_j| / T_max)

wherein W_ij represents the multi-dimensional weighted match between node i and node j; h_i and h_j represent the state vectors of nodes i and j after graph-neural-network encoding; ‖h_i‖ and ‖h_j‖ represent the two-norms of those state vectors; ε represents a small constant preventing divide-by-zero errors; c_i and c_j represent the central coordinate vectors of nodes i and j in image space; ‖c_i − c_j‖ represents the Euclidean distance between nodes i and j; α represents the weighting coefficient of the spatial-distance component; |Δv̄_i − Δv̄_j| represents the absolute difference between the mean displacement change rates of nodes i and j over the historical multi-frame window; β represents the weighting coefficient of the dynamic-difference component; t_i and t_j represent the timestamp indices of nodes i and j; T_max represents the maximum timestamp index in the current input frame sequence; and γ represents the weighting coefficient of the time-decay component.
First, structural feature coding is performed on the candidate nodes in the input image to generate the node state vectors h_i and h_j; the norms ‖h_i‖ and ‖h_j‖ are then computed and, together with the small constant ε, used in the cosine calculation to avoid instability near zero; the inverse cosine function yields the included angle between the state vectors of the node pair, characterizing their state similarity. The central coordinates c_i and c_j of nodes i and j in the image are extracted, and their Euclidean distance is computed as a spatial-distribution feature and multiplied by the weight coefficient α to form the first weighted component. The movement paths of the nodes across consecutive frames are then extracted, the displacement change rates are fitted, and the absolute difference of their means |Δv̄_i − Δv̄_j| is taken as a measure of dynamic-behavior difference and multiplied by the weight β to form the second weighted component. The timestamps t_i and t_j are then normalized jointly with the global maximum timestamp T_max to obtain a time-attenuation coefficient, which is multiplied by γ to form the third weighted component. Finally, the three weighted components are added to the included-angle value to form the complete multi-dimensional node-pair association value W_ij, which is compared with a threshold to decide whether to construct a node connection, thereby generating the graph-structure connection relation table for subsequent grasping-path planning and target positioning;
the hidden Markov model is as follows (with symbols as defined below):

δ_t(j) = max_{1≤i≤N} [ δ_{t−1}(i) · a_{ij} · λ_{ij} ] · μ_j · b_j(o_t) · τ_j(t)

wherein δ_t(j) is the optimal-path probability that the label state is j at frame time t; δ_{t−1}(i) is the optimal-path probability that the label state was i at the previous frame time t−1; a_{ij} is the state-transition probability from label state i to label state j; λ_{ij} is the compatibility weighting factor of the transition from i to j on the paragraph structure; μ_j is the stability-based weight adjustment coefficient of label state j within its label-mode class; b_j(o_t) is the observation probability of label state j for the observation o_t at frame time t; τ_j(t) is the timing-confidence correction factor of label state j at the current frame t; N is the total number of states in the label hidden-state set; and o_t is the label-index-difference observation feature extracted at frame t.
First, the label index of every frame is extracted from the continuous video frames to form the frame-sequence label observation set {o_t}. The optimal-path probability δ_{t−1}(i) of each previous-frame state i serves as the base score for path continuation; combined with the state-transition probability a_{ij}, the transition from state i to the current candidate state j is evaluated; at the same time, a structural-consistency check on the paragraph structure graph extracts the structural compatibility factor λ_{ij}, strengthening logically continuous label-paragraph paths. A label-mode recognition model computes the stability weight μ_j of label state j within its class, preferentially retaining label paths with high occurrence frequency and stable duration. The matching probability b_j(o_t) of state j for the current observation then performs the observation-consistency evaluation, and the position of the time frame yields the timing-confidence factor τ_j(t), which dynamically adjusts the state probabilities of later frames. All factors are weighted together and the maximum path probability over all predecessor states is selected to obtain δ_t(j); the label-state sequence is then constructed by backtracking, forming a polymorphic label sequence with continuity, inheritance, and segment-recognition capability, used to guide grasping-strategy generation and improve label-recognition precision in multi-target uncoupling and recoupling operations.
  2. The pattern-recognition-based AI precise recognition and grasping system of claim 1, wherein the dynamic node construction module comprises:
the image contour mapping sub-module, which acquires through a camera a set of image sequence frames during the hook uncoupling operation, locates gray-level abrupt-change points in the edge areas of the hook-component image, performs contour-boundary-point alignment and track-continuity verification for the same component between adjacent frames, calculates the pixel movement paths of identically numbered boundary points across consecutive frames, records the corresponding frame index information, and establishes a contour correspondence mapping table;
the boundary parameter extraction sub-module, which, based on the contour correspondence mapping table, obtains the coordinate differences of corresponding boundary-point positions and calculates the position-change amplitude, computes the mean gray value of pixels in the mapped area of each frame and differences it against the previous frame's result, and superposes the boundary-curvature change with the displacement amplitude to generate a displacement-change feature group;
and the state sequence generation sub-module, which, based on the displacement-change feature group, combines the feature-point values of each frame in time order into a state-vector structure, binds the frame index of every state vector and recombines the vectors into a data block in frame-sequence form, calibrates the continuous jump points and inter-frame direction-trend information, and establishes the sequence node structure.
  3. The pattern-recognition-based AI precise recognition and grasping system according to claim 1, wherein the path weight value consists of the accumulated state-vector change amplitudes between adjacent nodes on a connection path and measures the intensity of state change over the whole path; and the continuous-segment merging of direction-change trends obtains a direction-change angle by computing the direction vector of the line connecting the central coordinates of the start and end points of each segment, and uses an included-angle threshold to judge whether segments belong to the same trend interval.
  4. The pattern-recognition-based AI precise recognition and grasping system of claim 1, wherein the state label judging module comprises:
the mutation point extraction sub-module, which, based on the key morphological features, computes the inter-frame change rate of the state-vector differences, performs local peak search on the consecutive frame-rate values and screens adjacent differences by amplitude, sorts the high-amplitude points by index, extracts the change direction and marks the mutation-frame positions, and obtains a rate-mutation candidate group;
the standard value matching sub-module, which, based on the rate-mutation candidate group, computes the difference between the state value of each mutation frame and each dimension value in the standard label template, checks each difference against its upper and lower boundary range, assigns coded labels to points satisfying all dimension ranges, records the corresponding frame indices, and generates a label-index structure set;
and the label sequence construction sub-module, which, based on the label-index structure set, uses a hidden Markov model to extract adjacent inter-frame differences within continuous label indices and judge paragraph connectivity, merges similar continuous segments to construct a frame-segment identification table, performs label-inheritance merging on segments whose frame distance is below a set threshold, removes zero-label paragraphs, and establishes the polymorphic label sequence.
  5. The pattern-recognition-based AI precise recognition and grasping system according to claim 1, wherein the attitude angle calculating module comprises:
the center point measuring sub-module, which, based on the polymorphic label sequence, extracts the bounding box of the target area in each frame, averages the coordinates of the upper-left and lower-right corners to obtain the rectangle center, computes the Euclidean distance between the rectangle center and the image center point, combines the two into a two-dimensional offset vector, and arranges all offset vectors in frame-index order to obtain a spatial offset vector set;
the included angle trend inference sub-module, which, based on the spatial offset vector set, computes consecutive vector included angles in groups of three frames, takes the absolute value of the direction-change angle between the head and tail vectors of each group, judges ascending and descending trends, binds the trend information to the corresponding frame indices, marks the angle-direction trend category of each frame, and generates a direction included-angle marking group;
and the angle channel extraction sub-module, which, based on the direction included-angle marking group, divides all frames into pitch, yaw, and roll channels by trend category, extracts the amplitude-variation values of continuous included-angle trend segments within each channel and combines them into floating segments, appends a channel number to each segment to generate an angle sequence, and obtains the attitude rotation angle vector set.
  6. The pattern-recognition-based AI precise recognition and grasping system of claim 1, wherein the space state fusion module comprises:
the state posture binding sub-module, which, based on the attitude rotation angle vector set and the polymorphic label sequence, aligns the state labels and angle-vector numbers under each frame index, screens the confidence-ranking values by range, selects the matched frame numbers, splices and binds the label numbers with the triaxial angle values, and establishes a binding index mapping matrix to generate frame-level bound label pairs;
the combined difference calculation sub-module, which, based on the frame-level bound label pairs, jointly extracts all triaxial angle vector values under the same state category, computes the difference between the maximum and minimum amplitudes of each group of angle sequences to generate a fluctuation interval value, counts the angle-change frequency within each state combination, delimits stable and unstable range sections, and generates an attitude difference evaluation set;
and the structural result generating sub-module, which, based on the attitude difference evaluation set, restores the binding information of each state label and evaluation segment to frame-sequence order, jointly sorts the combined data sequence by state channel and angle channel, fills each frame's state code and angle number into the two channels marked in the frame index column, and establishes the fusion labeling result set.
  7. A pattern-recognition-based AI precise recognition and grasping method for hook uncoupling and recoupling, characterized in that it is executed by the above pattern-recognition-based AI precise recognition and grasping system and comprises the following steps:
S1: based on the image sequence frame set acquired by the camera, extract the edge contour values, boundary gray values, regional gray means, and edge coordinate offsets of the hook assembly in each frame; arrange the contour point sets in time order; by computing the matched position-coordinate differences and gray-value changes of edge contour points between adjacent frames, superpose the edge-contour offset tracks and record the corresponding frame numbers; establish a node index matrix associated with the corresponding edge data structures, and build the sequence node structure;
S2: based on the sequence node structure, extract the node state vectors; compute the cosine included angles of the edge morphological features and the Euclidean distances between central coordinates, weight the two, and screen node pairs below a threshold to generate a connection index table; construct the path set, count the path jump amplitudes, screen the maximum path, and extract the mutation-point frame numbers; using a graph neural network, build the connection structure according to included-angle similarity, extract the jump paths, obtain the key morphological feature indices and group them, judge the included angles of the direction vectors between nodes within each group, and merge segments of similar direction, thereby obtaining the key morphological features;
S3: based on the key morphological features, compute the node position-change-rate sequence within each segment and obtain the jump-rate values; compare and match the jump-rate values segment by segment against the set standard state-rate reference values, identify the frame-number sequences of the matching segments, and mark the state-type indices; merge repeated segments, eliminate segments with undefined classification, and perform jump-rate stability judgment and multi-segment state-sequence optimization with a hidden Markov model to obtain the polymorphic label sequence;
S4: based on the polymorphic label sequence, extract the central coordinate point of the corresponding area in each frame and compute the center-point displacement vectors between adjacent frames; extract the included angles between adjacent vectors in frame order and record the included-angle trend change of each direction channel; separate the pitch, yaw, and roll angle-change sequences by time order, and perform continuous-segment judgment and direction-trend classification on each angle sequence to obtain the attitude rotation angle vector set;
and S5: based on the attitude rotation angle vector set and the polymorphic label sequence, extract the combined value of each frame's state label and triaxial attitude angles; count the repeated state-attitude combinations and compute the triaxial angle-difference range between combinations; merge by index the combined segments whose difference range is below a set tolerance, generate a continuous-segment label index table, and establish the fusion labeling result set.
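The node-pair association score and threshold screening described in claim 1 and step S2 can be sketched in a few lines. This is a minimal illustration under assumed inputs, not the patented implementation: the weight coefficients alpha, beta, gamma, the node fields (h, c, v, t), and the threshold value are all hypothetical placeholders.

```python
import numpy as np

def association_score(h_i, h_j, c_i, c_j, v_i, v_j, t_i, t_j, t_max,
                      alpha=0.5, beta=0.3, gamma=0.2, eps=1e-8):
    """Multi-dimensional node-pair association value W_ij: state-vector
    included angle + weighted spatial distance + weighted dynamic
    difference + weighted time-decay term (weights are illustrative)."""
    cos_sim = np.dot(h_i, h_j) / (np.linalg.norm(h_i) * np.linalg.norm(h_j) + eps)
    angle = np.arccos(np.clip(cos_sim, -1.0, 1.0))        # state-similarity angle
    spatial = alpha * np.linalg.norm(np.asarray(c_i, float) - np.asarray(c_j, float))
    dynamic = beta * abs(v_i - v_j)                       # displacement-rate gap
    temporal = gamma * (1.0 - abs(t_i - t_j) / max(t_max, 1))
    return angle + spatial + dynamic + temporal

def connection_table(nodes, threshold, t_max):
    """Retain node pairs whose association value stays below the threshold,
    giving the graph-structure connection relation table of claim 1."""
    edges = []
    for i, a in enumerate(nodes):
        for j in range(i + 1, len(nodes)):
            b = nodes[j]
            w = association_score(a["h"], b["h"], a["c"], b["c"],
                                  a["v"], b["v"], a["t"], b["t"], t_max)
            if w < threshold:
                edges.append((i, j, w))
    return edges
```

With this scoring, two nodes that are similar in state vector, position, motion, and time produce a small W_ij and get connected; a node far away in any of the three components exceeds the threshold and stays unlinked.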

Description

Precise recognition and grasping system and method for hook uncoupling and recoupling AI based on pattern recognition

Technical Field

The invention relates to the technical field of image state recognition, and in particular to a pattern-recognition-based AI system and method for precisely recognizing and grasping hooks during uncoupling and recoupling.

Background

The field of state recognition focuses on recognizing, judging, and classifying specific states of a target object or environment by means of computational models, image-processing algorithms, and sensor data. It is widely applied in industrial automation, intelligent manufacturing, medical monitoring, and robot vision systems, where models such as image segmentation, target detection, and state reasoning extract the spatial position, morphological features, and physical state of the target, on which subsequent control decisions are based. A hook uncoupling and recoupling system of this kind is an automated system integrating artificial-intelligence visual recognition with mechanical execution: it acquires images of the hook body through an image acquisition device, judges the current state with a pattern-recognition algorithm, and drives an end effector to complete the corresponding uncoupling or recoupling operation, thereby improving recognition accuracy and grasping stability, reducing the risk of misoperation, and suiting application scenarios where the hook-body state is complex, the distribution is irregular, or the operating space is limited.
The prior art has shortcomings in recognition precision and continuous state tracking: no continuous model of state evolution over time can be established, so when the hook body is slightly deformed, occluded, or subject to interference, a static judgment mechanism cannot distinguish normal operation from an error state, causing misjudgment. Because the recognition mechanism lacks joint analysis of the motion path and the attitude angle, their interaction is difficult to grasp accurately; during an uncoupling operation, the influence of angle changes on the grasping point is easily missed when the hook body rotates or rolls slightly, so the actuator applies force in an improper direction, producing an included-angle deviation that causes the hook body to fall off or suffer mechanical damage. Recognition efficiency and operation precision are thereby markedly reduced, constraining the automation level and operational safety of the whole system.

Disclosure of Invention

The invention aims to remedy the defects in the prior art by providing a pattern-recognition-based AI precise recognition and grasping system and method for hook uncoupling and recoupling.
To achieve this purpose, the invention adopts the following technical scheme. The pattern-recognition-based AI precise recognition and grasping system for hook uncoupling and recoupling comprises: the dynamic node construction module, which acquires through a camera a set of image sequence frames during the hook uncoupling and recoupling operation, performs hook-component contour mapping, boundary difference calculation, regional gray-level change calculation, and edge-displacement accumulation, and establishes a sequence node structure; the state path analysis module, which, based on the sequence node structure, uses a graph neural network to jointly compare node cosine values and coordinate differences, extracts path-weight branches, counts state-vector jump amplitudes, and combines them into a sequence to obtain key morphological features; the state label judging module, which, based on the key morphological features, uses a hidden Markov model to screen state-rate mutation points, performs standard-value matching and classification of the state-change segments, merges repeated label segments and removes invalid segments, and establishes a polymorphic label sequence; the attitude angle calculating module, which, based on the polymorphic label sequence, computes the offset of the regional center point and the included-angle trend, extracts the continuous change values of the pitch, yaw, and roll angles by direction channel, and obtains an attitude rotation angle vector set; and the space state fusion module, which, based on the attitude rotation angle vector set and the polymorphic label sequence, binds each frame's state-attitude combination, counts the difference ranges of repeated combinations and generates joint labels, and establishes a fusion labeling result set.
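The hidden-Markov screening step above amounts to a weighted Viterbi decode over per-frame label observations. The sketch below is a minimal illustration, not the patent's implementation: the transition matrix A, emission matrix B, initial distribution pi, and the optional structural-compatibility (lam) and stability (mu) weights are all hypothetical placeholders; with lam and mu left at their defaults of 1 the recursion reduces to standard Viterbi.

```python
import numpy as np

def viterbi_labels(obs, A, B, pi, lam=None, mu=None):
    """Decode the most likely label-state sequence with a weighted Viterbi
    recursion: delta_t(j) = max_i[delta_{t-1}(i)*A[i,j]*lam[i,j]] * mu[j] * B[j, obs[t]]."""
    N = A.shape[0]
    lam = np.ones((N, N)) if lam is None else lam   # structural compatibility
    mu = np.ones(N) if mu is None else mu           # per-state stability weight
    T = len(obs)
    delta = np.zeros((T, N))                        # optimal-path probabilities
    back = np.zeros((T, N), dtype=int)              # backpointers
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A * lam    # (i, j) candidate path scores
        back[t] = scores.argmax(axis=0)             # best predecessor for each j
        delta[t] = scores.max(axis=0) * mu * B[:, obs[t]]
    # backtrack from the best final state to recover the label path
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

A sticky transition matrix (large diagonal entries) plays the role of the segment-merging bias: isolated one-frame label flips are absorbed into the surrounding segment rather than starting a new one.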
As a further aspect of the present invention, the dynamic node construction module includes: The image contour mapping sub-mod