CN-121148018-B - Container operation behavior recognition and analysis method and system based on AI
Abstract
The application relates to the technical field of computer vision and artificial intelligence, and discloses an AI-based container operation behavior recognition and analysis method and system. The method comprises: obtaining an initial behavior sequence of a target object and generating action segments; generating a salient feature vector through threshold judgment on the generated behavior change rate, and obtaining a feature map; generating a weighted feature combination based on head rotation recognition and time attenuation mode detection; refining the feature map into a microscopic action category through a multi-layer classification model and performing similarity matching against a reference behavior library; generating a behavior optimization feedback sequence from the matching judgment; judging, via a risk probability, whether the behavior recognition framework is adapted to the specific scene and, if not, regenerating an adjusted behavior change rate and iteratively updating; and outputting a final progressive recognition result. The application realizes dynamic recognition and feedback optimization of the whole container operation process, improving the safety and efficiency of container operations.
Inventors
- WANG FANG
- TANG XIANG
- ZHANG BO
- SONG JINYUAN
- DUAN QIANQIAN
- ZHENG HUANHUAN
- WANG PENG
- GUO TIANZHAO
- ZHAO CHUNZHENG
- SUO DONG
- SHEN YINLONG
- YUAN CHUANJIE
Assignees
- 郑州综合交通运输研究院有限公司 (Zhengzhou Comprehensive Transportation Research Institute Co., Ltd.)
Dates
- Publication Date: 2026-05-08
- Application Date: 2025-10-10
Claims (7)
- 1. An AI-based container operation behavior recognition and analysis method, characterized by comprising: S1, obtaining an initial behavior sequence of a target object in a container operation scene, and obtaining action segments of the target object based on the initial behavior sequence; S2, generating a behavior change rate of the target object based on the action segments, judging whether the behavior change rate exceeds a preset rate threshold, and, if so, generating a salient feature vector and determining the corresponding macroscopic flow stage based on the salient feature vector; S3, generating a feature map based on the salient feature vector, judging whether a time attenuation mode exists in the feature map, and obtaining a weighted feature combination through a feature weighting technique; S4, refining the macroscopic flow stage of the target object based on the weighted feature combination to obtain a microscopic action category, generating a matching degree between the microscopic action category and a pre-established reference behavior library, judging whether the matching degree reaches a preset similarity threshold, and, if so, outputting a progressive recognition result and generating a behavior optimization feedback sequence; wherein judging whether the behavior change rate exceeds the preset rate threshold and, if so, generating a salient feature vector comprises: comparing the behavior change rate of each time window with the preset rate threshold, and defining a time window that exceeds the threshold as a high-dynamic window; generating an optimized behavior recognition framework and judging whether the behavior recognition framework is suitable for a specific scene within the container operation scene, comprising: optimizing an operation flow model through a sequence iterative updating technique based on the behavior optimization feedback sequence to generate the optimized behavior recognition framework, and obtaining a new macroscopic flow stage based on the behavior recognition framework; and generating an adjusted behavior change rate and re-judging whether the adjusted behavior change rate is greater than the preset rate threshold until a final progressive recognition result is output, comprising: obtaining a new initial behavior sequence based on the new macroscopic flow stage, generating the adjusted behavior change rate based on the new initial behavior sequence, obtaining a salient feature vector based on the high-dynamic-window data if the adjusted behavior change rate is greater than the preset rate threshold, mapping the salient feature vector to the macroscopic flow stage through a predictive trend generation technique, and outputting the final progressive recognition result.
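As an illustrative sketch only, the threshold judgment and iterative adjustment of claim 1 can be modeled as below. The per-window scalar change rates and the caller-supplied `adjust` function are hypothetical simplifications of the claimed sequence iterative updating technique, not the patented implementation.

```python
def high_dynamic_windows(change_rates, rate_threshold):
    """Return indices of time windows whose behavior change rate
    exceeds the preset rate threshold (claim 1's high-dynamic windows)."""
    return [i for i, r in enumerate(change_rates) if r > rate_threshold]

def iterate_until_stable(change_rates, rate_threshold, adjust, max_iters=10):
    """Regenerate an adjusted change-rate sequence until no window
    exceeds the threshold, loosely mirroring the S5/S6 feedback loop."""
    rates = list(change_rates)
    for _ in range(max_iters):
        hot = high_dynamic_windows(rates, rate_threshold)
        if not hot:  # framework deemed suitable: stop iterating
            return rates
        # only the high-dynamic windows are re-adjusted
        rates = [adjust(r) if i in hot else r for i, r in enumerate(rates)]
    return rates
```

In this toy version the loop halts either when every window falls below the threshold or after `max_iters` rounds; the patent instead halts on outputting the final progressive recognition result.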
- 2. The method of claim 1, wherein obtaining the action segments of the target object comprises: acquiring motion data of the target object based on video data, the target object comprising an operator and a boom; analyzing the motion data with a limb-gesture resolution technique to obtain an initial sequence of limb motions of the target object, with joint positions determined by a key-point detection technique; identifying hand motion trajectories from the initial sequence with a gesture-trajectory tracking technique, generating trajectory data by tracking the temporal-spatial variations of hand key points; and obtaining the action segments from the initial sequence of limb motions and the hand motion trajectories, combined with preset thresholds of motion amplitude and time interval, the action segments being composed of combinations of successive limb motions and gesture motions.
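Claim 2's combination of an amplitude threshold and a time-interval threshold can be sketched minimally as follows. The flat `(timestamp, amplitude)` samples, the function name, and both threshold parameters are assumptions for illustration; the claimed method operates on limb-pose and hand-trajectory data.

```python
def segment_actions(samples, min_amplitude, max_gap):
    """Group (timestamp, amplitude) samples into action segments:
    consecutive above-threshold motions separated by at most max_gap
    seconds form one segment (claim 2's amplitude/interval combination)."""
    segments, current = [], []
    for t, amp in samples:
        if amp < min_amplitude:
            continue  # below-threshold motion is ignored
        if current and t - current[-1][0] > max_gap:
            segments.append(current)  # gap too large: close the segment
            current = []
        current.append((t, amp))
    if current:
        segments.append(current)
    return segments
```

For example, with `min_amplitude=0.3` and `max_gap=2`, samples at t=0, 1 form one segment and a sample at t=5 starts a new one.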
- 3. The method of claim 1, wherein generating the behavior change rate of the target object based on the action segments comprises: dividing the action segments into a plurality of time windows based on time intervals and motion amplitudes; for each time window, obtaining gaze direction data of the target object using a gaze-focus analysis technique, with the gaze direction determined by eye key-point localization; generating the gaze change frequency within the time window based on the gaze direction data; generating a time-sequence feature vector in combination with the spatiotemporal features of the action segments; and generating the behavior change rate based on the time-sequence feature vector, wherein the time-sequence feature vector comprises joint features of motion speed and gaze change, and the behavior change rate is obtained by gradient analysis of the time-sequence feature vector.
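A minimal sketch of claim 3's two computations, under simplifying assumptions: gaze directions are reduced to discrete labels, and the "gradient analysis" is approximated by the mean absolute first difference of a one-dimensional feature sequence (the patent does not specify its gradient operator).

```python
def gaze_change_frequency(gaze_dirs, window_seconds):
    """Count gaze-direction switches per second within one time window."""
    switches = sum(1 for a, b in zip(gaze_dirs, gaze_dirs[1:]) if a != b)
    return switches / window_seconds

def behavior_change_rate(feature_vector):
    """Approximate gradient analysis of the time-sequence feature vector:
    mean absolute first difference over the joint speed/gaze features."""
    diffs = [abs(b - a) for a, b in zip(feature_vector, feature_vector[1:])]
    return sum(diffs) / len(diffs) if diffs else 0.0
```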
- 4. The method of claim 1, wherein judging whether a time attenuation mode exists in the feature map and, if so, adjusting the feature weights of the salient feature vector through a feature weighting technique to obtain the weighted feature combination comprises: processing the salient feature vector with a head-rotation recognition technique to generate the feature map, wherein the head-rotation recognition technique recognizes head actions by analyzing angle changes of head key points; and judging whether the time attenuation mode exists using a time-attenuation-mode detection algorithm on the feature map and, if so, dynamically adjusting the feature weights of the salient feature vector through a joint-angle quantization technique based on the angle-change amplitudes of the limb joints to obtain the weighted feature combination, wherein the time attenuation mode is determined by the time-sequence attenuation law of the feature intensities in the feature map.
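One plausible reading of claim 4, sketched under stated assumptions: the "time-sequence attenuation law" is taken to mean monotonically non-increasing feature intensities, and the joint-angle weighting is taken to be normalization by angle-change amplitude. Both are interpretations; the patent does not fix either rule.

```python
def has_time_attenuation(intensities, tolerance=1e-9):
    """Detect a time-attenuation mode: feature intensities in the
    feature map decay monotonically along the time axis (assumed rule)."""
    return all(b <= a + tolerance for a, b in zip(intensities, intensities[1:]))

def reweight(features, joint_angle_deltas):
    """Scale each feature by the normalized angle-change amplitude of
    its corresponding limb joint (a stand-in for the claimed
    joint-angle quantization technique)."""
    total = sum(joint_angle_deltas) or 1.0
    return [f * (d / total) for f, d in zip(features, joint_angle_deltas)]
```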
- 5. The method of claim 1, wherein judging whether the matching degree reaches the preset similarity threshold and, if so, outputting the progressive recognition result and generating the behavior optimization feedback sequence comprises: obtaining a feature vector based on the microscopic action category, wherein the microscopic action category represents specific action behaviors such as the type and intensity of boom swings and operator hand operations; generating the matching degree by comparing the feature vector with the action template data in the pre-established reference behavior library; generating the progressive recognition result if the matching degree is greater than or equal to the preset similarity threshold, wherein the progressive recognition result comprises a label and a confidence of the action category; and adjusting the progressive recognition result through a real-time deviation correction technique to generate the behavior optimization feedback sequence.
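The matching step of claim 5 can be illustrated with cosine similarity against a dictionary of templates. The similarity measure, the library layout, and the `(label, confidence)` return shape are all assumptions for this sketch; the patent only specifies "a matching degree" against template data.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def match_behavior(feature_vec, reference_library, similarity_threshold):
    """Compare the micro-action feature vector against every template in
    the reference behavior library; return (label, confidence) when the
    best match clears the preset similarity threshold, else None."""
    best_label, best_sim = None, -1.0
    for label, template in reference_library.items():
        sim = cosine_similarity(feature_vec, template)
        if sim > best_sim:
            best_label, best_sim = label, sim
    if best_sim >= similarity_threshold:
        return best_label, best_sim
    return None
```

Returning `None` below threshold corresponds to the claim's negative branch, where no progressive recognition result is output.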
- 6. An AI-based container operation behavior recognition and analysis system for implementing the AI-based container operation behavior recognition and analysis method as recited in any one of claims 1-5, comprising: a data preprocessing module, used for acquiring an initial behavior sequence of a target object in a container operation scene and acquiring action segments of the target object based on the initial behavior sequence; an analysis and recognition module, used for generating a behavior change rate of the target object based on the action segments, judging whether the behavior change rate exceeds a preset rate threshold, and, if so, generating a salient feature vector and determining the corresponding macroscopic flow stage based on the salient feature vector; a feature weighting module, used for generating a feature map based on the salient feature vector, judging whether a time attenuation mode exists in the feature map, and, if so, adjusting the feature weights of the salient feature vector through a feature weighting technique to obtain a weighted feature combination; an action matching module, used for refining the macroscopic flow stage of the target object through the weighted feature combination to obtain a microscopic action category, matching the microscopic action category against a pre-established reference behavior library to generate a matching degree, judging whether the matching degree reaches a preset similarity threshold, and, if so, outputting a progressive recognition result and generating a behavior optimization feedback sequence; a framework optimization module, used for generating an optimized behavior recognition framework through an optimization algorithm based on the behavior optimization feedback sequence and judging whether the behavior recognition framework is suitable for a specific scene within the container operation scene; and an iteration output module, used for generating an adjusted behavior change rate when the behavior recognition framework is not suitable for the specific scene, and re-judging whether the adjusted behavior change rate is greater than the preset rate threshold until a final progressive recognition result is output.
- 7. A computer readable storage medium having instructions stored thereon, which when executed by a processor, implement the method of any of claims 1-5.
Description
Container operation behavior recognition and analysis method and system based on AI

Technical Field
The application relates to the technical field of computer vision and artificial intelligence, and in particular to an AI-based container operation behavior recognition and analysis method and system, which are widely applicable to container operation behavior analysis, intelligent monitoring, and related fields at ports and logistics hubs.

Background
With the rapid development of global logistics and international land transportation, container operations have become one of the most critical links in international trade. Container operations not only comprise hoisting, carrying, stacking and similar operations, but also involve complex personnel cooperation and equipment interaction, and these operation links place increasingly high requirements on the actions, behaviors, operation specifications and operation environments of operators. However, current inland-hub container operation management relies mainly on manual monitoring, making real-time analysis and intelligent recognition of complex operation behaviors difficult to realize. At present, the monitoring and management of inland-hub container operations depend chiefly on manual observation and experience-based judgment, which suffer from poor real-time performance, insufficient recognition precision, and limited early-warning capability for abnormal behaviors. Although existing behavior recognition technology can achieve a certain recognition effect in static or single scenes, in complex and changeable container operation environments the fine actions and behavior changes of operators are often difficult to capture accurately; misjudgment or missed judgment occurs easily, particularly in highly dynamic scenes; the recognition models lack dynamic optimization capability; and the requirements of inland-hub intelligentization and safe operation cannot be met.
Therefore, the application provides an AI-based container operation behavior recognition and analysis method and system which, by utilizing computer vision and deep learning technology combined with methods such as time-sequence feature analysis and multi-modal data fusion, can accurately and efficiently recognize the various operation behaviors of container operations under different illumination, environments and complex operation scenes, thereby improving the safety level and operation efficiency of hub operations and promoting their development toward intelligence and automation.

Disclosure of Invention
In order to solve the above technical problems, the application provides an AI-based container operation behavior recognition and analysis method and system, which analyze and optimize the recognition results of the various operation behaviors in the container operation process in real time and provide a more intelligent safety monitoring and operation optimization scheme for container operations.
In a first aspect, the present application provides an AI-based container operation behavior recognition and analysis method, the method comprising: Step S1, acquiring an initial behavior sequence of a target object in a container operation scene, and acquiring action segments of the target object based on the initial behavior sequence; Step S2, generating a behavior change rate of the target object based on the action segments, judging whether the behavior change rate exceeds a preset rate threshold, and, if so, generating a salient feature vector and determining the corresponding macroscopic flow stage based on the salient feature vector; Step S3, generating a feature map based on the salient feature vector, judging whether a time attenuation mode exists in the feature map, and, if so, adjusting the feature weights of the salient feature vector through a feature weighting technique to obtain a weighted feature combination; Step S4, refining the macroscopic flow stage of the target object through the weighted feature combination to obtain a microscopic action category, matching the microscopic action category against a pre-established reference behavior library to generate a matching degree, judging whether the matching degree reaches a preset similarity threshold, and, if so, outputting a progressive recognition result and generating a behavior optimization feedback sequence; Step S5, generating an optimized behavior recognition framework through an optimization algorithm based on the behavior optimization feedback sequence, and judging whether the behavior recognition framework is suitable for a specific scene within the container operation scenes; and Step S6, if not, generating an adjusted behavior change rate and re-judging whether the adjusted behavior change rate is greater than the preset rate threshold until a final progressive recognition result is output. In a second aspect, the present application provi
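The S1-S6 flow described above can be sketched as a minimal control skeleton. Every callable parameter (`extract_rate`, `classify`, `match`, `adjust`) is a hypothetical stand-in injected by the caller; the sketch only fixes the order of the steps, not any of the claimed techniques.

```python
def run_pipeline(initial_sequence, rate_threshold, similarity_threshold,
                 extract_rate, classify, match, adjust, max_rounds=5):
    """Hypothetical end-to-end sketch of steps S1-S6: compute the change
    rate (S2), classify when it exceeds the threshold (S3-S4), match
    against the reference library (S4), and otherwise iterate with an
    adjusted sequence (S5-S6)."""
    seq = initial_sequence
    for _ in range(max_rounds):
        rate = extract_rate(seq)                 # S2: behavior change rate
        if rate > rate_threshold:
            category = classify(seq)             # S3-S4: micro-action category
            score = match(category)              # S4: matching degree
            if score >= similarity_threshold:
                return category, score           # final progressive result
        seq = adjust(seq)                        # S5-S6: feedback iteration
    return None                                  # no result within budget
```

The `max_rounds` cap is an added safeguard with no counterpart in the claims, which terminate only on output of the final progressive recognition result.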