CN-122004856-A - Pilot eye movement monitoring and early warning method
Abstract
The invention relates to the technical field of aviation, and in particular to a pilot eye movement monitoring and early warning method. The method comprises: acquiring eye movement time series data corresponding to a target pilot; determining gaze area data corresponding to the target pilot according to the eye movement time series data, wherein the gaze area data comprise a gaze area change track within a first preset duration before the current moment and a gaze duration corresponding to each valid gaze area; and outputting target early warning information corresponding to the target pilot based on the gaze area data. The early warning can specifically prompt the 'region to be focused on', guide the pilot to quickly redirect attention, and directly reduce the flight safety risk caused by loss of equipment monitoring.
Inventors
- YANG WEIPING
- LUO QINGFENG
- WANG TIANZE
- ZHAN ZHENGYONG
- SI HAIQING
- LI YIXUAN
- WANG HAIBO
- PAN TING
- ZHAO YAN
- LI GEN
- LIU LINGBO
Assignees
- AVIC Xi'an Flight Automatic Control Research Institute (中国航空工业集团公司西安飞行自动控制研究所)
- Nanjing University of Aeronautics and Astronautics (南京航空航天大学)
Dates
- Publication Date: 2026-05-12
- Application Date: 2025-12-11
Claims (10)
- 1. A pilot eye movement monitoring and early warning method, the method comprising: acquiring eye movement time series data corresponding to a target pilot; determining gaze area data corresponding to the target pilot according to the eye movement time series data, wherein the gaze area data comprise a gaze area change track within a first preset duration before the current moment and a gaze duration corresponding to each valid gaze area; and outputting target early warning information corresponding to the target pilot based on the gaze area data.
- 2. The method of claim 1, wherein the determining gaze area data corresponding to the target pilot according to the eye movement time series data comprises: extracting key time series features from the eye movement time series data, and calculating the dispersion among gaze points within the first preset duration; clustering the gaze points according to the dispersion; determining each group of gaze points whose dispersion is smaller than a preset dispersion threshold as a valid gaze group; determining the valid gaze coordinate corresponding to each valid gaze group according to the two-dimensional projection coordinates of each valid gaze point in the group in a cockpit coordinate system; acquiring a scene image corresponding to the target pilot, wherein the scene image comprises a primary flight display image, a navigation display image, an engine indication and crew alerting system image, a cockpit display image, a mode control panel image and a cockpit external view image; performing target detection on the scene image, and determining each sub-area in the scene image and the area coordinates of each sub-area in the cockpit coordinate system, wherein the sub-areas comprise an attitude indication sub-area, an altitude sub-area, a track sub-area, a distance sub-area, an engine parameter sub-area, a fuel sub-area, a cockpit display sub-area, a control panel sub-area and an external view sub-area; matching the valid gaze coordinate corresponding to each valid gaze group with the area coordinates corresponding to each sub-area; determining the valid gaze area corresponding to each valid gaze group according to the matching result; ordering the valid gaze areas in time order, and determining the gaze area change track within the first preset duration before the current moment; and determining the gaze duration corresponding to each valid gaze area according to the time sequence among the valid gaze points corresponding to the valid gaze area.
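The dispersion-based clustering in claim 2 can be sketched in Python. This is an illustrative reading of the claim in the spirit of dispersion-threshold (I-DT) fixation detection; the function and field names, the dispersion formula, and the minimum group size are assumptions, not the patent's specification.

```python
def cluster_fixations(samples, dispersion_threshold, min_points=3):
    """Group time-ordered gaze samples (t, x, y) into valid gaze groups.

    A window of consecutive samples grows while its dispersion
    (horizontal spread + vertical spread) stays below the threshold;
    each surviving window yields a valid gaze coordinate (centroid)
    and a gaze duration.
    """
    def summarise(window):
        xs = [p[1] for p in window]
        ys = [p[2] for p in window]
        return {
            "centroid": (sum(xs) / len(xs), sum(ys) / len(ys)),
            "duration": window[-1][0] - window[0][0],
        }

    groups, window = [], []
    for sample in samples:
        window.append(sample)
        xs = [p[1] for p in window]
        ys = [p[2] for p in window]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > dispersion_threshold:
            window.pop()  # the new sample breaks the fixation
            if len(window) >= min_points:
                groups.append(summarise(window))
            window = [sample]  # start a new candidate group
    if len(window) >= min_points:
        groups.append(summarise(window))
    return groups
```

Matching each centroid against the sub-area rectangles detected in the scene image would then yield the valid gaze area for each group.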
- 3. The method of claim 1, wherein the target early warning information comprises an equipment monitoring loss warning, and the outputting the target early warning information corresponding to the target pilot based on the gaze area data comprises: acquiring a current flight phase and key state parameters corresponding to a target aircraft, wherein the key state parameters comprise a flight attitude angle, motion parameters and equipment states; determining a region the target pilot should gaze at within a second preset duration according to the current flight phase, the key state parameters and the gaze area data; acquiring the actual gaze area corresponding to the target pilot within the second preset duration; and outputting the equipment monitoring loss warning if the actual gaze area is inconsistent with the region that should be gazed at.
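The check in claim 3 reduces to comparing the region the pilot should be watching against the regions actually fixated within the second window. A minimal sketch, with hypothetical region names and warning structure:

```python
def device_monitoring_warning(expected_region, actual_regions, flight_phase):
    """Return an equipment-monitoring-loss warning if the region that
    should be gazed at never appears among the actually fixated regions;
    the warning names the region so the pilot can refocus quickly."""
    if expected_region in actual_regions:
        return None  # monitoring behaviour is consistent
    return {
        "type": "equipment_monitoring_loss",
        "flight_phase": flight_phase,
        "region_to_focus": expected_region,
    }
```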
- 4. The method of claim 3, wherein the determining, according to the current flight phase, the key state parameters and the gaze area data, the region the target pilot should gaze at within the second preset duration comprises: acquiring a scene image corresponding to the target pilot, wherein the scene image comprises a primary flight display image, a navigation display image, an engine indication and crew alerting system image, a cockpit display image, a mode control panel image and a cockpit external view image; performing target detection on the scene image, and determining each sub-area in the scene image, wherein the sub-areas comprise an attitude indication sub-area, an altitude sub-area, a track sub-area, a distance sub-area, an engine parameter sub-area, a fuel sub-area, a cockpit display sub-area, a control panel sub-area and an external view sub-area; extracting features of the sub-areas to obtain static features and dynamic features corresponding to each sub-area; inputting the current flight phase, the key state parameters, and the static features and dynamic features corresponding to each sub-area into a preset sub-area importance determination model, and outputting an importance score corresponding to each sub-area; and inputting the current flight phase, the key state parameters, the gaze area data and the importance scores corresponding to the sub-areas into a preset gaze region determination model, and outputting the region that should be gazed at.
- 5. The method of claim 4, wherein the inputting the current flight phase, the key state parameters, and the static features and dynamic features of each sub-area into the preset sub-area importance determination model, and outputting the importance score corresponding to each sub-area, comprises: calculating the basic contribution of each feature among the current flight phase, the key state parameters, and the static and dynamic features corresponding to each sub-area to the importance of the sub-area; detecting whether the key state parameters exceed corresponding parameter thresholds; adjusting each basic contribution according to the detection result to obtain target contributions; for each sub-area, determining a target feature as a core feature corresponding to the sub-area if the target contribution of the target feature to the sub-area within a third preset duration is greater than a preset contribution threshold; for the core features of each sub-area, determining initial weights and reference scores corresponding to the core features according to their target contributions; adjusting the initial weight corresponding to each core feature according to the change of its target contribution to obtain a target weight corresponding to each core feature; calculating the correlation among the core features of each sub-area; correcting the reference score corresponding to each core feature according to the correlation among the core features to obtain a feature synergy score corresponding to each core feature; and obtaining the importance score corresponding to each sub-area according to the feature synergy scores and target weights corresponding to the core features.
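One way to read the scoring pipeline in claim 5: threshold the adjusted (target) contributions to pick core features, turn contributions into weights, and combine the per-feature scores into one importance score per sub-area. The weighting rule below is an assumption for illustration; the patent does not fix the formula.

```python
def subregion_importance(contributions, contribution_threshold):
    """contributions: {feature: (target_contribution, reference_score)}.

    Features whose target contribution exceeds the threshold become core
    features; each core feature's weight is its share of the total core
    contribution, and the importance score is the weighted sum of scores.
    """
    core = {f: (c, s) for f, (c, s) in contributions.items()
            if c > contribution_threshold}
    if not core:
        return 0.0  # no core feature qualifies for this sub-area
    total = sum(c for c, _ in core.values())
    return sum((c / total) * s for c, s in core.values())
```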
- 6. The method of claim 4, wherein the inputting the current flight phase, the key state parameters, the gaze area data and the importance scores corresponding to the sub-areas into the preset gaze region determination model, and outputting the region that should be gazed at, comprises: encoding, by an input layer of the preset gaze region determination model, the current flight phase, the key state parameters, the gaze area data and the importance scores corresponding to the sub-areas; performing feature extraction on the gaze area data by an LSTM layer of the model, capturing the time dependency of gaze area switching, and generating a time series feature vector; performing feature extraction on the importance scores corresponding to the sub-areas by an attention layer of the model, and generating a weight vector according to a score-to-attention-weight mapping; multiplying the time series feature vector by the weight vector to obtain weighted time series features; compressing the weighted time series features to generate compressed features; splicing the current flight phase and the key state parameters to generate static features; performing a cross product of the compressed features and the static features to generate interaction features; splicing, by a first fully connected layer of the model, the weighted time series features, the current flight phase, the key state parameters and the interaction features to generate gaze region fusion features; performing nonlinear mapping on the gaze region fusion features through a ReLU activation function, and outputting an intermediate feature vector; generating, by a second fully connected layer of the model, a top-N mask feature according to the importance scores corresponding to the sub-areas; processing the intermediate feature vector and the top-N mask feature through a Sigmoid activation function to obtain a final feature; and outputting, by an output layer of the model, the region that should be gazed at based on the final feature.
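The final gating stage of claim 6 (a top-N mask derived from importance scores, then a Sigmoid) can be illustrated in isolation. This sketch omits the LSTM, attention and fully connected layers and only shows how the mask keeps the N most important sub-areas; all names and shapes are illustrative.

```python
import math

def gate_gaze_regions(timeseries_feat, attention_weights, scores, top_n):
    """Weight per-region timing features by attention weights, apply a
    Sigmoid, and zero out every region outside the top-N importance scores."""
    weighted = [t * w for t, w in zip(timeseries_feat, attention_weights)]
    # indices of the N highest-scoring sub-areas
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:top_n]
    mask = [1.0 if i in top else 0.0 for i in range(len(scores))]
    # Sigmoid on the weighted feature, masked to the top-N regions
    return [m / (1.0 + math.exp(-x)) for x, m in zip(weighted, mask)]
```

Taking the argmax over the gated vector would then name the region that should be gazed at.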
- 7. The method of claim 1, wherein the target early warning information comprises a fatigue state warning, and the outputting the target early warning information corresponding to the target pilot based on the gaze area data comprises: acquiring pupil diameter time series data corresponding to the target pilot and the complexity of a current flight task; determining an initial fatigue probability corresponding to the target pilot based on the gaze area data, the pupil diameter time series data and the current flight task complexity; acquiring a personal history baseline model, wherein the personal history baseline model is obtained based on normal driving data and fatigue driving data corresponding to the target pilot; correcting the initial fatigue probability based on the personal history baseline model to obtain a target fatigue probability corresponding to the target pilot; and outputting the fatigue state warning if the target fatigue probability is greater than a preset fatigue probability threshold.
- 8. The method of claim 7, wherein the determining the initial fatigue probability corresponding to the target pilot based on the gaze area data, the pupil diameter time series data and the current flight task complexity comprises: inputting the gaze area data, the pupil diameter time series data and the current flight task complexity into a preset fatigue probability determination model; extracting features of the gaze area data to obtain gaze area features, wherein the gaze area features comprise a switching frequency, a dwell duty ratio and a region jump entropy; extracting features of the pupil diameter time series data to obtain a physiological feature vector, wherein the physiological feature vector comprises a pupil diameter fluctuation rate and a transient contraction frequency; determining feature thresholds corresponding to the gaze area features under the current flight task complexity according to a task complexity-to-feature threshold mapping table; calculating an initial over-threshold multiple corresponding to the gaze area features according to the feature thresholds; acquiring a physiological feature baseline corresponding to the target pilot, wherein the physiological feature baseline comprises a pupil diameter fluctuation rate baseline and a transient contraction frequency baseline; calculating an initial baseline deviation corresponding to the physiological feature vector based on the physiological feature baseline; acquiring complexity weight information corresponding to the current flight task complexity; correcting the initial over-threshold multiple and the initial baseline deviation based on the complexity weight information to obtain a target over-threshold multiple and a target baseline deviation; constructing an interaction matrix based on the target over-threshold multiple and the target baseline deviation; fusing the interaction matrix, the target over-threshold multiple and the target baseline deviation to obtain fatigue probability fusion features; and determining the initial fatigue probability corresponding to the target pilot based on the fatigue probability fusion features.
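Two of the claim-8 gaze features, switching frequency and region jump entropy, are standard quantities and can be sketched directly; the dwell duty ratio and pupil features would follow the same pattern. The definitions here (region changes per second; Shannon entropy of the observed transition distribution) are our reading, not the patent's formulas.

```python
import math
from collections import Counter

def gaze_switch_features(region_sequence, window_seconds):
    """Compute switching frequency (region changes per second) and region
    jump entropy (Shannon entropy, in bits, of the observed transitions)."""
    switches = [(a, b) for a, b in zip(region_sequence, region_sequence[1:])
                if a != b]
    frequency = len(switches) / window_seconds
    counts = Counter(switches)
    total = len(switches)
    entropy = -sum((n / total) * math.log2(n / total)
                   for n in counts.values()) if total else 0.0
    return {"switching_frequency": frequency, "jump_entropy": entropy}
```

Higher jump entropy suggests a more erratic scan pattern, one of the cues fused into the fatigue probability.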
- 9. The method of claim 1, wherein the target early warning information comprises an operation-gaze inconsistency warning, and the outputting the target early warning information corresponding to the target pilot based on the gaze area data comprises: acquiring operation data corresponding to the target pilot, wherein the operation data comprise an operation object, operation parameters and an operation execution time period; matching the operation data with the gaze area data, and determining a target matching degree between the operation data and the gaze area data; and outputting the operation-gaze inconsistency warning if the target matching degree is smaller than a preset matching degree threshold.
- 10. The method of claim 9, wherein the matching the operation data with the gaze area data and determining the target matching degree between the operation data and the gaze area data comprises: extracting features of the operation object in the operation data and of each gaze area in the gaze area data, and calculating a first matching degree between the operation object and the gaze areas; extracting features of the operation execution time period in the operation data and of the gaze duration corresponding to each gaze area in the gaze area data, and calculating a second matching degree between the operation execution time period and the gaze durations; calculating a behavior rationality corresponding to the operation data and the gaze area data based on a preset historical safe-operation template; and determining the target matching degree corresponding to the operation data and the gaze area data according to the first matching degree, the second matching degree and the behavior rationality.
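Claim 10 fuses three scalars into the target matching degree but does not fix the fusion rule; a weighted average is the simplest instance. The weights below are illustrative assumptions.

```python
def target_matching_degree(object_match, time_match, rationality,
                           weights=(0.4, 0.3, 0.3)):
    """Fuse the object matching degree, the timing matching degree and the
    behavior rationality (each assumed in [0, 1]) into one target degree."""
    w1, w2, w3 = weights
    return w1 * object_match + w2 * time_match + w3 * rationality
```

An operation-gaze inconsistency warning would fire when this value falls below the preset matching degree threshold of claim 9.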
Description
Pilot eye movement monitoring and early warning method

Technical Field

The invention relates to the technical field of aviation, and in particular to a pilot eye movement monitoring and early warning method.

Background

In the field of aviation, the safe flight of aircraft is always a major concern. With the continuous development of aviation technology, the hardware performance and reliability of aircraft have improved remarkably; however, flight accidents still occur. Relevant statistics show that about 70% of flight incidents are attributable to crew human factors, and that pilot mishandling accounts for the greatest proportion of these. The pilot, as the direct operator of the aircraft, performs operational actions that are each closely linked to the flight status of the aircraft. In complex flight environments, such as take-off and landing under severe weather conditions or route planning and avoidance in busy airspace, pilots face enormous working pressure, and misjudgment or improper operation easily occurs. For example, when landing under low-visibility conditions, a pilot may excessively lower the nose due to a visual judgment error, resulting in an abnormal landing attitude of the aircraft and thereby causing a serious accident. A pilot eye movement monitoring and early warning method is therefore urgently needed.

Disclosure of Invention

In view of the above, the invention provides a pilot eye movement monitoring and early warning method to address this problem.
The first aspect of the invention provides a pilot eye movement monitoring and early warning method, which comprises: acquiring eye movement time series data corresponding to a target pilot; determining gaze area data corresponding to the target pilot according to the eye movement time series data, wherein the gaze area data comprise a gaze area change track within a first preset duration before the current moment and a gaze duration corresponding to each valid gaze area; and outputting target early warning information corresponding to the target pilot based on the gaze area data. In an alternative implementation, determining the gaze area data corresponding to the target pilot according to the eye movement time series data comprises: extracting key time series features from the eye movement time series data; calculating the dispersion among gaze points within the first preset duration; clustering the gaze points according to the dispersion; determining each group of gaze points whose dispersion is smaller than a preset dispersion threshold as a valid gaze group; determining the valid gaze coordinate corresponding to each valid gaze group according to the two-dimensional projection coordinates of each valid gaze point in the group in a cockpit coordinate system; acquiring a scene image corresponding to the target pilot, wherein the scene image comprises a primary flight display image, a navigation display image, an engine indication and crew alerting system image, a cockpit display image, a mode control panel image and a cockpit external view image; performing target detection on the scene image, and determining each sub-area in the scene image and the area coordinates of each sub-area in the cockpit coordinate system, wherein the sub-areas comprise an attitude indication sub-area, an altitude sub-area, a track sub-area, a distance sub-area, an engine parameter sub-area, a fuel sub-area, a cockpit display sub-area, a control panel sub-area and an external view sub-area; matching the valid gaze coordinate corresponding to each valid gaze group with the area coordinates corresponding to each sub-area; determining the valid gaze area corresponding to each valid gaze group according to the matching result; ordering the valid gaze areas in time order, and determining the gaze area change track within the first preset duration before the current moment; and determining the gaze duration corresponding to each valid gaze area according to the time sequence among the valid gaze points corresponding to the valid gaze area. In an alternative embodiment, the target early warning information comprises an equipment monitoring loss warning, and outputting the target early warning information corresponding to the target pilot based on the gaze area data comprises: acquiring a current flight phase and key state parameters corresponding to a target aircraft, wherein the key state parameters comprise a flight attitude angle, motion parameters and equipment states; determining a region the target pilot should gaze at within a second preset duration according to the current flight phase, the key state parameters and the gaze area data; obtaining an