CN-121996972-A - AI-assisted evaluation method and system for teacher classroom teaching effectiveness
Abstract
The application discloses an AI-assisted method and system for evaluating teacher classroom teaching effectiveness. The method comprises: obtaining a denoised clean speech data set, a lighting-optimized clear image data set, and screened valid audio clips; and inputting them into a 3D-LCBQN neural network, so as to generate a teacher effective-behavior quantization set, a student effective-behavior quantization set, and a teacher-student classroom teaching task advancement effective-behavior quantization set. By combining feature fusion, image recognition, and speech recognition, the method simultaneously observes and evaluates how teachers and students advance the teaching task, thereby resolving disputes over the consistency and objectivity of evaluation results in traditional teaching evaluation.
Inventors
- LI YONGSHENG
- MA JIANYING
- YUE FENGJIE
- ZHANG QIANLI
- WU JINSONG
- NIE HANYUN
- WAN JIWEI
Assignees
- 北京数时代大数据科技有限公司
- 北京经济管理职业学院(北京经理学院)
Dates
- Publication Date: 2026-05-08
- Application Date: 2026-01-29
Claims (10)
- 1. An AI-assisted method for evaluating teacher classroom teaching effectiveness, characterized by comprising the following steps: acquiring a denoised clean speech data set, a lighting-optimized clear image data set, and screened valid audio clips; acquiring a trained 3D-LCBQN neural network; and inputting the denoised clean speech data set, the lighting-optimized clear image data set, and the screened valid audio clips into the 3D-LCBQN neural network, so as to generate a teacher effective-behavior quantization set, a student effective-behavior quantization set, and a teacher-student classroom teaching task advancement effective-behavior quantization set.
- 2. The AI-assisted teacher classroom teaching effectiveness evaluation method of claim 1, wherein acquiring the denoised clean speech data set, the lighting-optimized clear image data set, and the screened valid audio clips includes: acquiring classroom video data and classroom audio data; identifying targets in the classroom video data through a YOLOv target detection algorithm, thereby obtaining a frame-by-frame target detection result set; extracting facial key points from the classroom video data through the Dlib facial key point detector, so as to obtain a frame-by-frame facial key point data set; generating a frame-numbered classroom original image data set from the frame-by-frame target detection result set and the frame-by-frame facial key point data set; preprocessing the classroom audio data, so as to obtain a time-stamped classroom original speech data set; performing multi-modal time stamp alignment on the frame-numbered classroom original image data set and the time-stamped classroom original speech data set, so as to obtain a time-aligned classroom original image data set and classroom original speech data set; executing a real-time denoising algorithm on the classroom original speech data set, thereby obtaining the denoised clean speech data set; performing lighting correction on the image data through a light compensation algorithm, so as to obtain the lighting-optimized clear image data set; and filtering invalid speech from the classroom original speech data set through a speech semantic analysis algorithm, so as to obtain the screened valid audio clips.
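The multi-modal time stamp alignment step in claim 2 can be pictured as mapping each frame number to the nearest audio segment start time. The following Python sketch is illustrative only and is not part of the claimed method; the function name, the frame-rate parameter, and the nearest-neighbor tolerance are assumptions, since the patent does not specify the alignment algorithm.

```python
from bisect import bisect_left

def align_frames_to_audio(frame_numbers, fps, audio_timestamps, tolerance=0.5):
    """Map each video frame to the nearest audio segment within a tolerance.

    frame_numbers: frame indices from the per-frame detection results
    fps: video frame rate, used to convert frame numbers to seconds
    audio_timestamps: sorted segment start times (seconds)
    Returns {frame_number: audio segment index, or None if nothing is close}.
    """
    alignment = {}
    for n in frame_numbers:
        t = n / fps  # frame number -> timestamp in seconds
        i = bisect_left(audio_timestamps, t)
        # candidates: the segment at insertion point i and the one before it
        best, best_dt = None, tolerance
        for j in (i - 1, i):
            if 0 <= j < len(audio_timestamps):
                dt = abs(audio_timestamps[j] - t)
                if dt <= best_dt:
                    best, best_dt = j, dt
        alignment[n] = best
    return alignment
```

With a 25 fps video, frames 0, 25, and 50 fall exactly on audio segments starting at 0 s, 1 s, and 2 s, while a frame far from any segment start maps to None.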
- 3. The AI-assisted teacher classroom teaching effectiveness evaluation method of claim 2, characterized in that the 3D-LCBQN neural network structure includes: a multi-modal basic feature extraction module for generating basic features from the denoised clean speech data set, the lighting-optimized clear image data set, and the screened valid audio clips, wherein the basic features comprise visual features, audio features, text features, and a time stamp feature set; a timing-semantic-knowledge point anchoring coding layer for generating visual anchoring coding features, audio anchoring coding features, text anchoring coding features, and a three-dimensional tag set from the visual features, audio features, text features, and time stamp feature set; a teaching behavior-cognitive state linkage extraction layer for generating teacher guidance-behavior linkage features and student cognition-behavior linkage features from the visual, audio, and text anchoring coding features; a knowledge point progression-evaluation dimension dynamic fusion layer for generating teacher-dimension dedicated fusion features, student-dimension dedicated fusion features, and teacher-student task advancement dimension dedicated fusion features from the visual, audio, and text anchoring coding features; an index association-rule embedded quantization layer for generating final teacher dynamic quantization features, final student dynamic quantization features, and teacher-student classroom teaching task advancement quantization score information from the teacher-dimension dedicated fusion features, the student-dimension dedicated fusion features, the teacher-student task advancement dimension dedicated fusion features, the teacher guidance-behavior linkage features, and the student cognition-behavior linkage features; and a three-dimensional linkage output layer for calibrating the final teacher dynamic quantization features, the final student dynamic quantization features, and the teacher-student classroom teaching task advancement quantization score information, and then outputting a teacher effective-behavior quantization set, a student effective-behavior quantization set, and a teacher-student classroom teaching task advancement effective-behavior quantization set, each carrying a traceability tag.
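The dataflow of claim 3 is a staged pipeline: basic feature extraction, anchoring coding, dimension-specific fusion, then quantization. The NumPy sketch below shows only the shape of that dataflow; every class and method name, and the toy stand-in operations (mean pooling, tanh projections, a sigmoid squash), are assumptions, since the patent does not disclose concrete layer implementations.

```python
import numpy as np

class LCBQNSkeleton:
    """Hypothetical sketch of the 3D-LCBQN dataflow described in claim 3."""

    def __init__(self, dim=8, seed=0):
        rng = np.random.default_rng(seed)
        self.w_anchor = rng.standard_normal((dim, dim)) * 0.1
        self.w_fuse = rng.standard_normal((3 * dim, dim)) * 0.1

    def extract(self, speech, image, audio_clips):
        # multi-modal basic feature extraction (stand-in: mean pooling over time)
        return [x.mean(axis=0) for x in (speech, image, audio_clips)]

    def anchor(self, feats):
        # timing-semantic-knowledge point anchoring coding (stand-in: tanh projection)
        return [np.tanh(f @ self.w_anchor) for f in feats]

    def fuse(self, anchored):
        # dynamic fusion: one dedicated fusion vector per evaluation dimension
        fused = np.tanh(np.concatenate(anchored) @ self.w_fuse)
        return {"teacher": fused, "student": fused[::-1], "task": np.abs(fused)}

    def quantize(self, fused):
        # rule-embedded quantization: scalar score per dimension in (0, 1)
        return {k: float(1.0 / (1.0 + np.exp(-v.mean()))) for k, v in fused.items()}

    def forward(self, speech, image, audio_clips):
        feats = self.extract(speech, image, audio_clips)
        return self.quantize(self.fuse(self.anchor(feats)))
```

Running the skeleton on random per-frame feature matrices yields one bounded score for each of the three evaluation dimensions, mirroring the three quantization sets of claim 1.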
- 4. The AI-assisted teacher classroom teaching effectiveness evaluation method of claim 3, wherein the visual features include movement trajectory features, head-up rate, brow-furrowing duration, writing actions, front-row seating rate, blackboard-writing area features, teaching aid features, and screen projection features; the audio features comprise speech speed features, filler-word frequency features, questioning frequency features, student speaking rate features, and interactive response delay features; and the text features comprise teaching keyword features, teaching stage keyword features, and key and difficult point keyword features.
- 5. The AI-assisted teacher classroom teaching effectiveness evaluation method of claim 4, characterized in that the timing-semantic-knowledge point anchoring coding layer includes: a visual channel for generating visual coding features from the lighting-optimized clear image data set; a timing anchoring unit for generating timing anchoring coding vectors from the time stamp feature set; a semantic anchoring unit for generating semantic anchoring coding vectors from the denoised clean speech data set and the screened valid audio clips; a knowledge point anchoring unit for generating knowledge point anchoring coding vectors from the semantic anchoring coding vectors; and a cross-axis fusion gating unit for fusing the timing anchoring coding vectors, the semantic anchoring coding vectors, the knowledge point anchoring coding vectors, and the visual coding features, so as to generate the visual anchoring coding features, the audio anchoring coding features, the text anchoring coding features, and the three-dimensional tag set.
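One common way to realize a fusion gating unit such as the cross-axis gate in claim 5 is sigmoid gating over the candidate vectors. The sketch below is a minimal illustration under the assumption of equal-dimensional anchoring vectors and one learned gate weight vector per input; the function and parameter names are illustrative, not disclosed by the patent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cross_axis_gated_fusion(vectors, gate_weights):
    """Fuse timing / semantic / knowledge-point / visual vectors.

    Each gate scores its own input vector; the sigmoid activations are
    normalized so the output is a convex combination of the inputs.
    """
    gates = sigmoid(np.array([w @ v for w, v in zip(gate_weights, vectors)]))
    gates = gates / gates.sum()              # convex combination weights
    return sum(g * v for g, v in zip(gates, vectors))
```

A sanity property of convex-combination gating: if all input vectors are identical, the fused output equals that common vector regardless of the gate weights.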
- 6. The AI-assisted teacher classroom teaching effectiveness evaluation method of claim 5, wherein the teaching behavior-cognitive state linkage extraction layer includes: a teacher-side linkage extraction unit for generating teacher guidance features from the visual, audio, and text features; a student-side linkage extraction unit for generating student cognitive engagement features from the visual, audio, and text features; and a bidirectional association unit for generating the teacher guidance-behavior linkage features and the student cognition-behavior linkage features from the teacher guidance features and the student cognitive engagement features.
- 7. The AI-assisted teacher classroom teaching effectiveness evaluation method of claim 6, characterized in that the knowledge point progression-evaluation dimension dynamic fusion layer includes: a modal adaptation fusion unit for generating modal fusion features from the visual, audio, and text anchoring coding features; a dimension binding fusion unit for generating initial teacher-dimension fusion features, initial student-dimension fusion features, and initial teacher-student task advancement dimension fusion features from the modal fusion features; and a knowledge point progression adjustment unit for generating the teacher-dimension dedicated fusion features, the student-dimension dedicated fusion features, and the teacher-student task advancement dimension dedicated fusion features from the initial fusion features of the three dimensions.
- 8. The AI-assisted teacher classroom teaching effectiveness evaluation method of claim 7, characterized in that the index association-rule embedded quantization layer includes: a rule presetting unit for storing an index evaluation rule set; a rule embedding unit for generating an index rule embedding weight set from the stored index evaluation rule set, the teacher guidance-behavior linkage features, the student cognition-behavior linkage features, the teacher-dimension dedicated fusion features, the student-dimension dedicated fusion features, and the teacher-student task advancement dimension dedicated fusion features; an index association matrix unit for generating an intra-dimension association matrix and a cross-dimension association matrix from the teacher guidance-behavior linkage features and the student cognition-behavior linkage features; and a dynamic quantization unit for generating the final teacher dynamic quantization features, the final student dynamic quantization features, and the teacher-student classroom teaching task advancement quantization score information from the three dedicated fusion feature sets, the index rule embedding weight set, the teacher guidance-behavior linkage features, and the student cognition-behavior linkage features.
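The quantization step of claim 8 combines raw indicator values with a preset rule weight set and an association matrix. A minimal sketch of that combination, assuming the association matrix propagates intra- and cross-dimension influence and the weighted result is squashed to a bounded score (all names and the specific formula are illustrative assumptions):

```python
import numpy as np

def apply_rule_weights(indicator_values, rule_weights, association):
    """Turn indicator values into a quantized score in (0, 1).

    association: square matrix encoding intra-/cross-dimension influence
    rule_weights: preset weights from the index evaluation rule set
    """
    v = np.asarray(indicator_values, dtype=float)
    w = np.asarray(rule_weights, dtype=float)
    A = np.asarray(association, dtype=float)
    adjusted = A @ v                                   # propagate associations
    score = float(w @ adjusted / (np.abs(w).sum() + 1e-9))  # weight-normalized
    return 1.0 / (1.0 + np.exp(-score))                # squash to (0, 1)
```

With an identity association matrix this reduces to a weight-normalized average of the indicators passed through a sigmoid.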
- 9. The AI-assisted teacher classroom teaching effectiveness evaluation method of claim 8, wherein the three-dimensional linkage output layer includes: a first-stage fully connected hidden layer for generating a 128-dimensional fusion feature vector from the final teacher dynamic quantization features, the final student dynamic quantization features, and the teacher-student classroom teaching task advancement quantization score information; and a second-stage fully connected output layer for generating a teacher effective-behavior core quantized value, a student effective-behavior core quantized value, and a teacher-student task advancement effective-behavior core quantized value from the 128-dimensional fusion feature vector.
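The two-stage output head of claim 9 is a standard fully connected stack: a 128-unit hidden layer followed by a 3-unit output layer, one unit per evaluation dimension. The NumPy sketch below assumes random initialization, a ReLU hidden activation, and a sigmoid output squash; the patent specifies only the layer widths, so those activation choices are assumptions.

```python
import numpy as np

def make_head(in_dim, hidden=128, out=3, seed=0):
    """Randomly initialized parameters for the two-stage output head."""
    rng = np.random.default_rng(seed)
    return (rng.standard_normal((in_dim, hidden)) * 0.05, np.zeros(hidden),
            rng.standard_normal((hidden, out)) * 0.05, np.zeros(out))

def output_head(x, params):
    w1, b1, w2, b2 = params
    h = np.maximum(0.0, x @ w1 + b1)       # first stage: 128-d fusion vector (ReLU)
    logits = h @ w2 + b2                   # second stage: one value per dimension
    return 1.0 / (1.0 + np.exp(-logits))   # three core quantized values in (0, 1)
```

Feeding any input vector produces exactly three bounded values: the teacher, student, and task advancement core quantized values.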
- 10. An AI-assisted evaluation system for teacher classroom teaching effectiveness, characterized in that the system includes: an information acquisition module for acquiring the denoised clean speech data set, the lighting-optimized clear image data set, and the screened valid audio clips; a neural network acquisition module for acquiring a trained 3D-LCBQN neural network; and an effective-behavior acquisition module for inputting the denoised clean speech data set, the lighting-optimized clear image data set, and the screened valid audio clips into the 3D-LCBQN neural network, so as to generate a teacher effective-behavior quantization set, a student effective-behavior quantization set, and a teacher-student classroom teaching task advancement effective-behavior quantization set.
Description
AI-assisted evaluation method and system for teacher classroom teaching effectiveness

Technical Field

The application relates to the technical field of data processing, and in particular to an AI-assisted method and system for evaluating teacher classroom teaching effectiveness.

Background

The field of teaching analysis concerns the systematic analysis of teachers' teaching activities and students' learning outcomes using educational data such as teaching behaviors and learning processes. Its core concerns include teaching process data acquisition, teaching behavior modeling, teaching quality assessment, learning effect analysis, and intelligent feedback mechanisms, with the aim of quantitatively describing teaching patterns and providing data-driven monitoring and evaluation of teaching quality. Traditional methods assess a teacher's classroom teaching effectiveness by means of teaching observation scales and similar instruments: classroom performance is analyzed qualitatively or semi-quantitatively through manual observation and scoring, and the evaluation process comprises manual recording of classroom behaviors, subjective judgment of teaching stages, and manual collation of teaching results. Because these methods rely on manual operation and a single data source, evaluation is time-consuming and struggles to reflect the dynamic characteristics of the teacher's teaching in real time.

Disclosure of Invention

The invention aims to provide an AI-assisted method for evaluating teacher classroom teaching effectiveness, so as to solve at least one of the above technical problems.
The invention provides an AI-assisted evaluation method for teacher classroom teaching effectiveness, which comprises the following steps: acquiring a denoised clean speech data set, a lighting-optimized clear image data set, and screened valid audio clips; acquiring a trained 3D-LCBQN neural network; and inputting the denoised clean speech data set, the lighting-optimized clear image data set, and the screened valid audio clips into the 3D-LCBQN neural network, so as to generate a teacher effective-behavior quantization set, a student effective-behavior quantization set, and a teacher-student classroom teaching task advancement effective-behavior quantization set.

Optionally, obtaining the denoised clean speech data set, the lighting-optimized clear image data set, and the screened valid audio clips includes: acquiring classroom video data and classroom audio data; identifying targets in the classroom video data through a YOLOv target detection algorithm, thereby obtaining a frame-by-frame target detection result set; extracting facial key points from the classroom video data through the Dlib facial key point detector, so as to obtain a frame-by-frame facial key point data set; generating a frame-numbered classroom original image data set from the frame-by-frame target detection result set and the frame-by-frame facial key point data set; preprocessing the classroom audio data, so as to obtain a time-stamped classroom original speech data set; performing multi-modal time stamp alignment on the frame-numbered classroom original image data set and the time-stamped classroom original speech data set, so as to obtain a time-aligned classroom original image data set and classroom original speech data set; executing a real-time denoising algorithm on the classroom original speech data set, thereby obtaining the denoised clean speech data set; performing lighting correction on the image data through a light compensation algorithm, so as to obtain the lighting-optimized clear image data set; and filtering invalid speech from the classroom original speech data set through a speech semantic analysis algorithm, so as to obtain the screened valid audio clips.

Optionally, the 3D-LCBQN neural network structure includes: a multi-modal basic feature extraction module for generating basic features from the denoised clean speech data set, the lighting-optimized clear image data set, and the screened valid audio clips, wherein the basic features comprise visual features, audio features, text features, and a time stamp feature set; a timing-semantic-knowledge point anchoring coding layer for generating visual anchoring coding features, audio anchoring coding features, text anchoring coding features, and a three-dimensional tag set from the visual features, audio features, text features, and time stamp feature set; a teaching behavior-cognitive state linkage extraction layer for generating teacher guidance-behavior linkage features and student cognition-behavior linkage