
CN-121999539-A - Multi-view collaborative operation behavior risk assessment method and system

CN 121999539 A

Abstract

The invention discloses a multi-view collaborative operation behavior risk assessment method and system, relating to the technical field of behavior risk assessment. The method comprises: synchronously acquiring initial image sequences from a plurality of viewing angles around a work station; extracting 2D skeleton feature data of human-body key points for each viewing angle; retrieving environment semantic map data containing the 3D geometric boundaries of equipment entities and the coordinates of equipment operating points; projecting the 2D skeleton feature data of each viewing angle into a world coordinate system consistent with the environment semantic map via spatial coordinate mapping transformation to obtain spatially mapped coordinates of the human-body key points under each viewing angle; and spatially aggregating the mapped coordinates from different viewing angles that belong to the same human target to generate an initial 3D fused skeleton stream.
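As an illustrative sketch of the projection-and-aggregation step summarized above, the fragment below maps one human-body key point from two calibrated viewing angles into a common world coordinate system using linear (DLT) triangulation. This is a standard technique assumed here for illustration; the function name and calibration inputs are not part of the disclosure.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one key point from two viewing angles.

    P1, P2: 3x4 camera projection matrices mapping world -> pixel
            (assumed known from calibration).
    uv1, uv2: pixel coordinates (u, v) of the same human-body key point
              in the two views.
    Returns the key point's (x, y, z) in the world coordinate system.
    """
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize to world-frame coordinates
```

Coordinates obtained this way from several view pairs for the same human target can then be averaged (spatially aggregated) into the fused skeleton stream.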

Inventors

  • WANG JINGLUAN
  • WANG XUPENG
  • LI ZE

Assignees

  • Xi'an University of Technology (西安理工大学)

Dates

Publication Date
2026-05-08
Application Date
2026-04-10

Claims (10)

  1. A multi-view collaborative operation behavior risk assessment method, characterized by comprising the following steps: synchronously acquiring initial image sequences from a plurality of viewing angles around a work station, extracting 2D skeleton feature data of human-body key points for each viewing angle, and retrieving environment semantic map data comprising the 3D geometric boundaries of equipment entities and the coordinates of equipment operating points; projecting each set of 2D skeleton feature data into a world coordinate system consistent with the environment semantic map through spatial coordinate mapping transformation to obtain spatially mapped coordinates of the human-body key points under each viewing angle, and spatially aggregating the mapped coordinates of different viewing angles belonging to the same human target to generate an initial 3D fused skeleton stream; performing spatial topology comparison between the initial 3D fused skeleton stream and the 3D geometric boundaries of the equipment entities, identifying physical conflict events in which a human-body key point penetrates a 3D geometric boundary of an equipment entity in the world coordinate system, and calculating the physical spatial displacement deviation between the end key point of the human operating limb and the equipment operating point; calculating a real-time spatial reliability factor for the feature data of each viewing angle based on the occurrence frequency of the physical conflict events and the physical spatial displacement deviation, and performing weighted fusion of the feature data of all viewing angles using the real-time spatial reliability factors to output a risk assessment result of the operation behavior; and, according to the risk assessment result, selecting as a reference viewing angle a viewing angle whose real-time spatial reliability factor satisfies a preset stability threshold within a preset time window, performing, with the reference viewing angle as a benchmark, online correction of the mapping transformation parameters of the remaining viewing angles in which physical conflict events occurred to obtain correction parameters, and, within the next sampling period, synchronously adjusting the sampling frequency of the corresponding data acquisition source and the fusion weight of the corresponding viewing angle's feature data in the weighted fusion based on the correction parameters.
  2. The multi-view collaborative operation behavior risk assessment method according to claim 1, further comprising: acquiring real-time operating state parameters of the equipment to be monitored during the operation; matching the real-time operating state parameters against a preset mapping relation between equipment states and operating points, and calculating the real-time spatial coordinates, in the world coordinate system, of the corresponding equipment operating points in the environment semantic map; updating the environment semantic map based on the real-time spatial coordinates, and recalculating the physical spatial displacement deviation between the end key points of the human operating limbs and the equipment operating points according to the updated map; and verifying and correcting the judgment result of the physical conflict events using the updated physical spatial displacement deviation.
  3. The multi-view collaborative operation behavior risk assessment method according to claim 2, wherein the calculation of the real-time spatial coordinates comprises: acquiring, from the real-time operating state parameters, the real-time offset and rotation angle of each moving component of the equipment to be monitored relative to an equipment datum point; substituting the real-time offset and rotation angle into the preset equipment-state-to-operating-point mapping relation to obtain the relative coordinates of the operating point in a local coordinate system with the equipment datum point as the origin, wherein the mapping relation is a chained spatial transformation matrix determined by the physical connection order of the moving components of the equipment to be monitored; and converting the relative coordinates into the real-time spatial coordinates of the operating point in the world coordinate system using the pose parameters of the equipment datum point recorded in the environment semantic map.
  4. The multi-view collaborative operation behavior risk assessment method according to claim 1, further comprising: acquiring a standard knowledge base trained on historical safe-operation data, the knowledge base comprising standard spatio-temporal distribution probabilities of human 3D skeleton postures in the environment semantic map at different operation stages; identifying the current operation stage from the initial 3D fused skeleton stream and retrieving the corresponding standard spatio-temporal distribution probability from the knowledge base; calculating a behavior consistency value between the 3D skeleton posture data currently provided by each viewing angle and the standard spatio-temporal distribution probability; calculating a spatial consistency factor based on the occurrence frequency of the physical conflict events and the physical spatial displacement deviation; and performing weighted fusion of the behavior consistency value and the spatial consistency factor to generate a comprehensive real-time spatial reliability factor.
  5. The multi-view collaborative operation behavior risk assessment method according to claim 1, wherein the adjustment of the sampling frequency and the fusion weight comprises: calculating the information complementarity between the feature data of each viewing angle and the spatially mapped coordinates under the reference viewing angle; determining a contribution potential value of each viewing angle to the risk assessment result based on the information complementarity and the correction parameters; and ranking the viewing angles by priority according to the contribution potential values, allocating the sampling frequency of each corresponding data acquisition source in positive correlation with that ranking in the next sampling period, and synchronously and proportionally adjusting the fusion weight of each corresponding viewing angle's feature data in the weighted fusion.
  6. The multi-view collaborative operation behavior risk assessment method according to claim 5, wherein the computation of the information complementarity comprises: determining the optical-axis included angle of each viewing angle relative to the reference viewing angle in the world coordinate system, and calculating the field-of-view coverage overlap rate of each viewing angle on the human-body key points based on that angle; calculating the imaging distance of the human-body key points under each viewing angle relative to the corresponding data acquisition source, and determining the unit spatial resolution of each viewing angle based on the imaging distance; and combining the field-of-view coverage overlap rate and the unit spatial resolution to quantify the capability of each viewing angle to provide incremental features beyond the reference viewing angle, thereby obtaining the information complementarity.
  7. The multi-view collaborative operation behavior risk assessment method according to claim 1, wherein the identification of the physical conflict events comprises: constructing motion track streams of the human-body key points from their displacement vectors between adjacent sampling instants; extracting the coordinates of the intersection points where a motion track stream crosses a 3D geometric boundary of an equipment entity, and determining the spatial entry depth of those intersection coordinates into the equipment entity; and calculating the duration for which the motion track stream remains at that entry depth, and triggering a physical conflict event when the spatial entry depth exceeds a preset depth threshold and the duration exceeds a preset time threshold.
  8. A multi-view collaborative operation behavior risk assessment system, characterized by comprising: a data acquisition module for synchronously acquiring initial image sequences from a plurality of viewing angles around the work station, extracting 2D skeleton feature data of human-body key points for each viewing angle, and retrieving environment semantic map data containing the 3D geometric boundaries of equipment entities and the coordinates of equipment operating points; a mapping aggregation module for projecting each set of 2D skeleton feature data into a world coordinate system consistent with the environment semantic map through spatial coordinate mapping transformation, obtaining spatially mapped coordinates of the human-body key points under each viewing angle, and spatially aggregating the mapped coordinates of different viewing angles belonging to the same human target to generate an initial 3D fused skeleton stream; an identification module for performing spatial topology comparison between the initial 3D fused skeleton stream and the 3D geometric boundaries of the equipment entities, identifying physical conflict events in which a human-body key point penetrates a 3D geometric boundary of an equipment entity in the world coordinate system, and calculating the physical spatial displacement deviation between the end key points of the human operating limbs and the equipment operating points; a fusion evaluation module for calculating a real-time spatial reliability factor for the feature data of each viewing angle based on the occurrence frequency of the physical conflict events and the physical spatial displacement deviation, and performing weighted fusion of the feature data of all viewing angles using the real-time spatial reliability factors to output a risk assessment result of the operation behavior; and a calibration feedback module for selecting, according to the risk assessment result, as a reference viewing angle a viewing angle whose real-time spatial reliability factor satisfies a preset stability threshold within a preset time window, performing, with the reference viewing angle as a benchmark, online correction of the mapping transformation parameters of the remaining viewing angles in which physical conflict events occurred to obtain correction parameters, and synchronously adjusting, in the next sampling period, the sampling frequency of the corresponding data acquisition source and the fusion weight of the corresponding viewing angle's feature data in the weighted fusion based on the correction parameters.
  9. An electronic device comprising a memory and a processor, wherein the memory is configured to store computer-executable instructions and the processor is configured to execute them, the instructions, when executed by the processor, performing the steps of the method according to any one of claims 1 to 7.
  10. A computer storage medium having stored thereon computer-executable instructions which, when executed by a processor, perform the steps of the method according to any one of claims 1 to 7.
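The depth-and-duration conflict test of claim 7 can be sketched as follows, assuming for simplicity that an equipment entity's 3D geometric boundary is an axis-aligned box; all function names, thresholds, and the box representation are illustrative assumptions, not part of the claims.

```python
import numpy as np

def penetration_depth(point, box_min, box_max):
    """Spatial entry depth of a key point inside an axis-aligned equipment
    boundary box; returns 0.0 if the point lies outside the box."""
    p = np.asarray(point, float)
    lo, hi = np.asarray(box_min, float), np.asarray(box_max, float)
    if np.any(p < lo) or np.any(p > hi):
        return 0.0
    # Depth is the distance to the nearest box face, i.e. how far
    # the key point has entered the equipment entity.
    return float(np.min(np.concatenate([p - lo, hi - p])))

def conflict_event(track, box_min, box_max, depth_thresh, min_duration, dt):
    """Trigger a physical conflict event when the entry depth exceeds
    depth_thresh continuously for at least min_duration seconds.

    track: successive key-point samples along the motion track stream.
    dt:    sampling interval in seconds.
    """
    run = 0.0
    for p in track:
        if penetration_depth(p, box_min, box_max) > depth_thresh:
            run += dt
            if run >= min_duration:
                return True
        else:
            run = 0.0  # track left the entity; reset the duration counter
    return False
```

In practice the boundary would usually be a mesh or oriented bounding volume rather than an axis-aligned box; the double-threshold logic (depth and duration) is what suppresses momentary skeleton jitter from being reported as a conflict.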

Description

Multi-view collaborative operation behavior risk assessment method and system

Technical Field

The invention relates to the technical field of behavior risk assessment, and in particular to a multi-view collaborative operation behavior risk assessment method and system.

Background

In modern manufacturing workshops, to ensure the personal safety of operators and to standardize operating procedures, operator behavior is generally monitored in real time by a visual monitoring system deployed around the workstation. Existing technical schemes mainly adopt a simple integration of a single camera or multiple cameras, extract a human posture skeleton through a deep learning algorithm, and compare it against predefined templates of prohibited actions, thereby achieving preliminary identification and risk assessment of dangerous operating behaviors.

However, existing assessment methods and systems focus mainly on feature extraction and pattern matching of the two-dimensional or three-dimensional human skeleton, and often neglect the associated semantic constraints between the human motion track and the physical operating space when handling the dynamic interaction between personnel and surrounding equipment. Lacking depth perception of the occupancy state of solid obstacles in the operating space, such a system can only infer postures from visual features when facing common industrial conditions such as complex viewing angles, illumination drift, or partial occlusion, so the generated posture data easily falls into serious logical contradiction with the actual layout of the physical space. Because of the behavior identification deviation caused by this absence of spatial logic verification, the system has difficulty accurately stripping out the algorithm noise produced by visual occlusion or perspective offset. As a result, the robustness of the risk assessment results is insufficient, the false alarm rate is difficult to reduce, and the requirements of industrial-grade high-reliability safety monitoring cannot be met.

Disclosure of Invention

The present application has been made in view of the above state of the art. Embodiments of the application provide a multi-view collaborative operation behavior risk assessment method and system that can accurately identify and reject unreal action data generated by occlusion or perspective offset, solving the false alarm problem caused in traditional algorithms by the lack of logical verification in complex spaces.

According to one aspect of the present application, there is provided a multi-view collaborative operation behavior risk assessment method, comprising: synchronously acquiring initial image sequences from a plurality of viewing angles around a work station, extracting 2D skeleton feature data of human-body key points for each viewing angle, and retrieving environment semantic map data comprising the 3D geometric boundaries of equipment entities and the coordinates of equipment operating points; projecting each set of 2D skeleton feature data into a world coordinate system consistent with the environment semantic map through spatial coordinate mapping transformation to obtain spatially mapped coordinates of the human-body key points under each viewing angle, and spatially aggregating the mapped coordinates of different viewing angles belonging to the same human target to generate an initial 3D fused skeleton stream; performing spatial topology comparison between the initial 3D fused skeleton stream and the 3D geometric boundaries of the equipment entities, identifying physical conflict events in which a human-body key point penetrates a 3D geometric boundary of an equipment entity in the world coordinate system, and calculating the physical spatial displacement deviation between the end key point of the human operating limb and the equipment operating point; calculating a real-time spatial reliability factor for the feature data of each viewing angle based on the occurrence frequency of the physical conflict events and the physical spatial displacement deviation, and performing weighted fusion of the feature data of all viewing angles using the real-time spatial reliability factors to output a risk assessment result of the operation behavior; and, according to the risk assessment result, selecting as a reference viewing angle a viewing angle whose real-time spatial reliability factor satisfies a preset stability threshold within a preset time window, performing, with the reference viewing angle as a benchmark, online correction of the mapping transformation parameters of the remaining viewing angles in which physical conflict events occurred to obtain correction parameters, and synchronously adjusting, within the next sampling period, the sampling frequency of the corresponding data acquisition source and the fusion weight of the corresponding viewing angle's feature data in the weighted fusion.
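The disclosure does not specify the functional form of the real-time spatial reliability factor. One plausible sketch, given here purely as an assumption, is a factor that decays exponentially with a viewing angle's conflict frequency and displacement deviation, with the normalized factors then weighting the per-view risk scores; the tuning weights `alpha` and `beta` and all names are hypothetical.

```python
import numpy as np

def reliability_factor(conflict_freq, disp_dev, alpha=1.0, beta=1.0):
    """Assumed form of the real-time spatial reliability factor:
    reliability decays as a viewing angle produces more physical conflict
    events (conflict_freq) and as its end-key-point drifts farther from
    the equipment operating point (disp_dev). alpha and beta are
    illustrative tuning weights, not specified by the source."""
    return np.exp(-(alpha * np.asarray(conflict_freq, float)
                    + beta * np.asarray(disp_dev, float)))

def fuse_risk(view_scores, conflict_freq, disp_dev):
    """Weighted fusion of per-view risk scores: normalize the reliability
    factors so they sum to one, then take the weighted average."""
    w = reliability_factor(conflict_freq, disp_dev)
    w = w / w.sum()
    return float(np.dot(w, np.asarray(view_scores, float)))
```

Under this form, a viewing angle that repeatedly triggers physical conflict events (a symptom of occlusion or perspective offset) contributes almost nothing to the fused result, which is the suppression behavior the disclosure describes.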