CN-122024311-A - Pilot behavior detection method and device and aircraft
Abstract
The application discloses a pilot behavior detection method and device and an aircraft, belonging to the technical field of pilot detection. The method comprises the steps of: acquiring image data; constructing a three-dimensional scene model from the image data based on a three-dimensional scene reconstruction algorithm; constructing a human body posture model from the image data based on a human body posture estimation algorithm; obtaining behavior data according to the interaction of the human body posture model within the three-dimensional scene model; and issuing an early warning on the behavior data when it exceeds a preset behavior standard range. By combining the pilot's posture with the three-dimensional scene the pilot occupies, the pilot's behavior is monitored and analyzed comprehensively and intuitively through the interaction between the two, improving the accuracy of pilot behavior detection.
Inventors
- WANG YING
- ZHENG XIAODAN
- HAO XIAOHUI
- LIU HONGTAO
- LI ZEREN
- ZHAO XU
Assignees
- Commercial Aircraft Corporation of China, Ltd. Shanghai Aircraft Design and Research Institute (中国商用飞机有限责任公司上海飞机设计研究院)
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2026-01-08
Claims (11)
- 1. A pilot behavior detection method, comprising the steps of: acquiring image data; constructing a three-dimensional scene model based on a three-dimensional scene reconstruction algorithm according to the image data; constructing a human body posture model based on a human body posture estimation algorithm according to the image data; acquiring behavior data according to the interaction of the human body posture model within the three-dimensional scene model; and, when the behavior data exceeds a preset behavior standard range, performing early warning on the behavior data.
- 2. The method of claim 1, wherein acquiring the image data comprises: acquiring image sequences through a plurality of vision sensors in different orientations; and performing time alignment on the image sequences to obtain the image data.
- 3. The method of claim 1, wherein constructing the three-dimensional scene model based on the three-dimensional scene reconstruction algorithm from the image data comprises: constructing a sparse three-dimensional point cloud and a dense environment model according to the image data; and registering and fusing the sparse three-dimensional point cloud and the dense environment model to obtain the three-dimensional scene model.
- 4. The method according to claim 3, wherein constructing the sparse three-dimensional point cloud from the image data comprises: extracting feature points from the image data; estimating the camera pose according to the feature points; and constructing the sparse three-dimensional point cloud according to the feature points and the camera pose.
- 5. The method according to claim 3, wherein registering and fusing the sparse three-dimensional point cloud with the dense environment model to obtain the three-dimensional scene model comprises: taking the dense environment model as an environment reference, performing point cloud registration between the sparse three-dimensional point cloud and the dense environment model, wherein the dense environment model is obtained in advance through a deep learning algorithm; performing pose optimization on the sparse three-dimensional point cloud and the dense environment model based on the point cloud registration result; and performing scale consistency adjustment on the sparse three-dimensional point cloud and the dense environment model to obtain the three-dimensional scene model.
- 6. The method of claim 1, wherein constructing the human body posture model based on the human body posture estimation algorithm from the image data comprises: identifying the coordinates of the pilot's skeletal joint points from the image data; projecting the skeletal joint points into the three-dimensional scene model according to their coordinates; and restoring the pilot's posture according to the coordinates of the skeletal joint points in the three-dimensional scene model to construct the human body posture model.
- 7. The method of claim 1, wherein obtaining the behavior data from the interaction of the human body posture model within the three-dimensional scene model comprises: acquiring the pilot's behavior according to the spatial distance and orientation relations of the human body posture model within the three-dimensional scene model; and storing the pilot's behavior in the form of structured data to obtain the behavior data.
- 8. The method of claim 1, wherein performing early warning when the behavior data is outside the behavior standard range comprises: when the behavior data exceeds the behavior standard range, performing early warning by means of a voice prompt, a visual alarm, or an event push.
- 9. The method according to claim 1, further comprising: displaying the three-dimensional scene model and the human body posture model through a visual interface.
- 10. A pilot behavior detection device, comprising: a data acquisition module for acquiring image data; a scene construction module for constructing a three-dimensional scene model based on a three-dimensional scene reconstruction algorithm according to the image data; a posture construction module for constructing a human body posture model based on a human body posture estimation algorithm according to the image data; a behavior analysis module for acquiring behavior data according to the interaction of the human body posture model within the three-dimensional scene model; and an early warning module for performing early warning on the behavior data when the behavior data exceeds a preset behavior standard range.
- 11. An aircraft, characterized in that the method according to any one of claims 1 to 9 is applied thereto, or the device according to claim 10 is comprised therein.
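The behavior analysis and early-warning steps of claims 7 and 8 can be sketched as follows. The record fields, threshold names, and warning messages are illustrative assumptions for the "structured data" and "preset behavior standard range" the claims mention, not details taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class BehaviorRecord:
    """Structured record of one detected pilot behavior (claim 7). Fields are assumed."""
    action: str            # e.g. "hand_to_throttle" (hypothetical label)
    distance_m: float      # spatial distance between a joint and an instrument
    duration_s: float      # how long the behavior lasted

@dataclass
class BehaviorStandard:
    """Preset behavior standard range (claim 1). Limits are assumed examples."""
    max_distance_m: float
    max_duration_s: float

def check_and_warn(record: BehaviorRecord, standard: BehaviorStandard) -> list[str]:
    """Return one warning message per field outside the standard range (claim 8)."""
    warnings = []
    if record.distance_m > standard.max_distance_m:
        warnings.append(f"{record.action}: distance {record.distance_m:.2f} m exceeds limit")
    if record.duration_s > standard.max_duration_s:
        warnings.append(f"{record.action}: duration {record.duration_s:.1f} s exceeds limit")
    return warnings
```

In a full system, the returned messages would be routed to the voice-prompt, visual-alarm, or event-push channels named in claim 8.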
Description
Pilot behavior detection method and device and aircraft

Technical Field

The application relates to the technical field of pilot detection, and in particular to a pilot behavior detection method and device and an aircraft.

Background

During a flight mission, the pilot's operating behavior involves complex interactions among the pilot, the piloting equipment, and the environment, such as reading the instruments, operating the control stick, and coordinating with the co-pilot, all embedded in specific spatial relationships and semantic contexts. In the related art, whether a pilot's behavior conforms to the standard is detected only from the pilot's actions or state. However, because piloting operations are deeply associated with the spatial environment of the cockpit, analyzing only isolated behavioral or physiological data divorced from that environment loses the association between the pilot and the cockpit space, makes it difficult to accurately recover the true context and potential safety hazards of the behavior, and thus degrades the accuracy of pilot behavior detection.

Disclosure of Invention

The embodiment of the application provides a pilot behavior detection method and device and an aircraft, aiming to improve the accuracy of pilot behavior detection.
In a first aspect, an embodiment of the present application provides a pilot behavior detection method, including the steps of: acquiring image data; constructing a three-dimensional scene model based on a three-dimensional scene reconstruction algorithm according to the image data; constructing a human body posture model based on a human body posture estimation algorithm according to the image data; acquiring behavior data according to the interaction of the human body posture model within the three-dimensional scene model; and performing early warning on the behavior data when the behavior data exceeds a preset behavior standard range.

In some embodiments, acquiring the image data includes: acquiring image sequences through a plurality of vision sensors in different orientations; and performing time alignment on the image sequences to obtain the image data.

In some embodiments, constructing a three-dimensional scene model based on a three-dimensional scene reconstruction algorithm from the image data includes: constructing a sparse three-dimensional point cloud and a dense environment model according to the image data; and registering and fusing the sparse three-dimensional point cloud and the dense environment model to obtain the three-dimensional scene model.

In some embodiments, constructing a sparse three-dimensional point cloud from the image data includes: extracting feature points from the image data; estimating the camera pose according to the feature points; and constructing the sparse three-dimensional point cloud according to the feature points and the camera pose.
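The sparse-reconstruction step above (feature points, camera pose, point cloud) can be illustrated with a minimal linear (DLT) triangulation of one matched feature point from two views. This is a sketch, not the patent's implementation: the camera projection matrices are assumed known here, whereas the described method would first estimate the camera pose from the feature points themselves.

```python
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray,
                x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
    """Recover a 3D point from its pixel observations x1, x2 in two views
    with known 3x4 projection matrices P1, P2, via linear triangulation."""
    # Each observation contributes two linear constraints x*P[2] - P[0] = 0 etc.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector of the smallest
    # singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

Repeating this over all matched feature points, after pose estimation, yields the sparse three-dimensional point cloud of the cockpit scene.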
In some embodiments, registering and fusing the sparse three-dimensional point cloud with the dense environment model to obtain the three-dimensional scene model includes: taking the dense environment model as an environment reference, performing point cloud registration between the sparse three-dimensional point cloud and the dense environment model, the dense environment model being obtained in advance through a deep learning algorithm; performing pose optimization on the sparse three-dimensional point cloud and the dense environment model based on the point cloud registration result; and performing scale consistency adjustment on the sparse three-dimensional point cloud and the dense environment model to obtain the three-dimensional scene model.

In some embodiments, constructing a human body posture model based on a human body posture estimation algorithm from the image data includes: identifying the coordinates of the pilot's skeletal joint points from the image data; projecting the skeletal joint points into the three-dimensional scene model according to their coordinates; and restoring the pilot's posture according to the coordinates of the skeletal joint points in the three-dimensional scene model to construct the human body posture model.

In some embodiments, obtaining behavior data from the interaction of the human body posture model within the three-dimensional scene model includes: acquiring the pilot's behavior according to the spatial distance and orientation relations of the human body posture model within the three-dimensional scene model; and storing the pilot's behavior in the form of structured data to obtain the behavior data.

In some embodiments, when the behavior data is outside the behavior standard range, early warning is carried out by means of a voice prompt, a visual alarm, or an event push.
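The registration, pose optimization, and scale-consistency adjustment described above can be illustrated with a minimal similarity-transform alignment (the Umeyama/Kabsch method), which jointly recovers rotation, translation, and scale. This sketch assumes point correspondences between the sparse cloud and the dense reference are already given; a full registration algorithm such as ICP would itself have to establish them.

```python
import numpy as np

def align_similarity(src: np.ndarray, dst: np.ndarray):
    """Estimate (s, R, t) minimizing sum ||s * R @ src_i + t - dst_i||^2
    for corresponding Nx3 point sets src and dst (Umeyama's method)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    # Cross-covariance between the centered point sets.
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    # Correction matrix guards against reflections.
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src   # scale-consistency factor
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Applying the recovered similarity transform to the sparse cloud brings it into the reference frame and metric scale of the dense environment model, which is the fused three-dimensional scene model the embodiment describes.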