
CN-118506655-B - Music teaching man-machine interaction implementation system and method based on cloud database


Abstract

The invention provides a cloud-database-based music teaching human-machine interaction system and method, belonging to the technical fields of human-machine interaction and artificial intelligence. The system comprises a sound sensor, an image sensor, a cloud matching unit and a visual interaction terminal, wherein the sound sensor comprises a music acquisition unit and a music playing unit. The music playing unit first plays first target teaching music; the music acquisition unit then acquires the real-time music beat sequence currently produced by the target person, while the image sensor synchronously captures the target person's limb action sequence and expression change sequence. The cloud matching unit matches at least one piece of target matching music in a cloud database based on the target person's real-time music beat sequence together with the limb action sequence or the expression change sequence, and the visual interaction terminal compares the first target teaching music with the target matching music and adjusts the first target teaching music accordingly. The technical scheme of the invention realizes personalized, artificial-intelligence-assisted music teaching.
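The closed loop the abstract describes (play, capture, match, compare, adjust) can be sketched in miniature. The patent does not specify any matching or adjustment algorithm; the nearest-tempo lookup, the halfway tempo adjustment, and all names below are illustrative assumptions, with the cloud database mocked as an in-memory list.

```python
# Hypothetical sketch of the interaction loop from the abstract:
# play teaching music -> capture the learner's beat -> match a piece
# in a (mocked) cloud database -> adjust the teaching music.
# All names and the similarity/adjustment rules are assumptions.
from dataclasses import dataclass

@dataclass
class Music:
    title: str
    bpm: int  # tempo in beats per minute

# Mocked cloud database of candidate pieces.
CLOUD_DB = [Music("etude_a", 90), Music("etude_b", 120), Music("etude_c", 140)]

def match_music(learner_bpm: int) -> Music:
    """Pick the database piece whose tempo is closest to the learner's beat."""
    return min(CLOUD_DB, key=lambda m: abs(m.bpm - learner_bpm))

def adjust_teaching_music(teaching: Music, matched: Music) -> Music:
    """Move the teaching tempo halfway toward the matched piece's tempo."""
    return Music(teaching.title, (teaching.bpm + matched.bpm) // 2)

teaching = Music("lesson_1", 100)
matched = match_music(learner_bpm=118)           # learner taps ~118 BPM
adjusted = adjust_teaching_music(teaching, matched)
print(matched.title, adjusted.bpm)
```

In the patented system this comparison and adjustment would be performed by the visual interaction terminal, with the limb and expression sequences additionally conditioning the match.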

Inventors

  • LIU YANTONG

Assignees

  • 济宁职业技术学院 (Jining Polytechnic)

Dates

Publication Date
2026-05-08
Application Date
2024-06-19

Claims (4)

  1. A cloud-database-based music teaching human-machine interaction system, comprising a sound sensor, an image sensor, a cloud matching unit and a visual interaction terminal, characterized in that: the sound sensor comprises a music acquisition unit and a music playing unit; the music playing unit plays first target teaching music; after the first target teaching music is played, the music acquisition unit acquires a real-time music beat sequence currently generated by a target person; the image sensor synchronously captures a limb action sequence and an expression change sequence of the target person; the cloud matching unit matches at least one piece of target matching music in a cloud database based on the real-time music beat sequence of the target person and the limb action sequence or the expression change sequence; the visual interaction terminal compares the first target teaching music with the target matching music, and adjusts the first target teaching music based on the comparison result; the matching performed by the cloud matching unit specifically comprises: matching at least one piece of candidate matching music in the cloud database based on the real-time music beat sequence, and then taking the limb action sequence or the expression change sequence together with the candidate matching music as the input of a digital music generation engine, which outputs at least one piece of target matching music; the cloud matching unit comprises the digital music generation engine; the digital music generation engine comprises a generative adversarial network (GAN) element and a recurrent neural network element; the limb action sequence or expression change sequence serves as a sample input to the generative adversarial network element; the cloud matching unit further comprises an audio extraction unit, an audio analysis unit and a conversion unit; the audio extraction unit extracts audio sequence data from the real-time music beat sequence; the audio analysis unit identifies pitch, volume and timing in the audio sequence data; and the conversion unit converts the audio sequence data into MIDI data according to the analysis result of the audio analysis unit.
  2. A cloud-database-based music teaching human-machine interaction method, implemented on electronic equipment comprising a digital music generation engine, characterized by comprising the following steps: S11, playing first target teaching music; S21, after the first target teaching music is played, acquiring a real-time music beat sequence generated by a target person, and synchronously capturing a limb action sequence and an expression change sequence of the target person; S31, extracting audio sequence data from the real-time music beat sequence, identifying pitch, volume and timing in the audio sequence data, and converting the audio sequence data into MIDI data; S41, matching at least one piece of candidate matching music in a cloud database based on the MIDI data, and then taking the limb action sequence or the expression change sequence together with the candidate matching music as input of the digital music generation engine, which outputs at least one piece of target matching music; S51, comparing the first target teaching music with the target matching music, and adjusting the first target teaching music based on the comparison result; wherein the digital music generation engine comprises a generative adversarial network (GAN) element and a recurrent neural network element, and the limb action sequence or expression change sequence is entered as a sample input of the generative adversarial network element.
  3. A portable terminal device integrating a sound sensor and an image sensor and communicating with a cloud artificial intelligence server, for implementing the cloud-database-based music teaching human-machine interaction method of claim 2.
  4. An electronic device comprising a memory and one or more processors, the memory coupled to the processors; the memory is configured to store computer program code, the computer program code comprises computer instructions, and when the computer instructions are executed by the processors, the electronic device is caused to perform the cloud-database-based music teaching human-machine interaction method of claim 2.
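Claim 1's conversion unit turns analysed audio (pitch, volume, timing) into MIDI data. A minimal sketch of that step is below; the event tuple format and function name are assumptions for illustration, and a real system would use a MIDI library (e.g. mido) and handle delta times, tempo and channels properly.

```python
# Minimal sketch of the claim-1 conversion unit: analysed audio events
# (MIDI note number, velocity, onset seconds, duration seconds) become
# time-sorted raw MIDI note-on (0x90) / note-off (0x80) messages.
# The event format and helper name are illustrative assumptions.
def events_to_midi(events, channel=0):
    """events: list of (midi_note, velocity, onset_s, duration_s).
    Returns a time-sorted list of (time_s, status, note, velocity) tuples."""
    msgs = []
    for note, vel, onset, dur in events:
        msgs.append((onset, 0x90 | channel, note, vel))      # note-on
        msgs.append((onset + dur, 0x80 | channel, note, 0))  # note-off
    return sorted(msgs)

# Two analysed notes: middle C (60) then E (64), half a second each.
analysed = [(60, 100, 0.0, 0.5), (64, 90, 0.5, 0.5)]
for t, status, note, vel in events_to_midi(analysed):
    print(f"{t:.1f}s status=0x{status:02X} note={note} vel={vel}")
```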
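Steps S21 and S41 of claim 2 require matching candidate music against the learner's real-time beat sequence. The claims do not state a matching algorithm; the sketch below assumes a simple rhythmic feature (inter-onset intervals) and ranks a mocked database by mean absolute interval difference, purely as an illustration.

```python
# Hedged sketch of beat-sequence candidate matching (claim 2, S21/S41).
# The distance metric and database shape are illustrative assumptions;
# the patent does not specify the matching algorithm.
def inter_onset_intervals(beat_times):
    """Seconds between consecutive beats: the basic rhythmic feature."""
    return [b - a for a, b in zip(beat_times, beat_times[1:])]

def rhythm_distance(seq_a, seq_b):
    """Mean absolute difference between the two interval sequences."""
    ia, ib = inter_onset_intervals(seq_a), inter_onset_intervals(seq_b)
    n = min(len(ia), len(ib))
    return sum(abs(x - y) for x, y in zip(ia[:n], ib[:n])) / n

def match_candidates(learner_beats, database, top_k=1):
    """database: {name: beat_times}. Returns the top_k closest names."""
    ranked = sorted(database,
                    key=lambda name: rhythm_distance(learner_beats, database[name]))
    return ranked[:top_k]

db = {
    "waltz": [0.0, 0.75, 1.5, 2.25],  # ~80 BPM pulse
    "march": [0.0, 0.5, 1.0, 1.5],    # 120 BPM pulse
}
learner = [0.0, 0.52, 1.01, 1.49]     # learner taps close to 120 BPM
print(match_candidates(learner, db))
```

In the full method, the surviving candidates would then be fed, together with the limb or expression sequence, into the digital music generation engine (S41).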

Description

Music teaching man-machine interaction implementation system and method based on cloud database

Technical Field

The invention belongs to the technical fields of human-machine interaction and artificial intelligence, and in particular relates to a cloud-database-based music teaching human-machine interaction system and method, and to a portable terminal device, electronic device and computer-readable storage medium for implementing the method.

Background

Traditional music teaching resources are mostly static, unidirectional and paper-based, relying heavily on textbooks. Textbooks typically pass through a long revision cycle from design and development to publication, and their content is limited and presented only on paper, so music teaching resources remain relatively scarce and monotonous. Music education under the new educational situation advocates replacing the dull "disembodied learning" of traditional teaching with embodied, on-site learning by students, a goal that has become attainable with the wide adoption of artificial intelligence technology. Artificial intelligence allows music "teaching" and "learning" to no longer be confined to traditional classroom and face-to-face practical instruction. An intelligent teaching system can customize a learning plan according to each student's progress and interests, markedly improving students' initiative and the pertinence of their learning, and enhancing their musical literacy. Meanwhile, applications of artificial intelligence in areas such as musical composition analysis, voice recognition and simulation provide students with abundant hands-on practice opportunities and deepen their understanding and perception of music.
The invention patent with publication number CN112507294B discloses a human-computer-interaction-based English teaching system and teaching method (patent classification G06F), comprising a registration module, an identity verification and login module, a course selection module, a central control module, a video teaching module, a labeling module, a translation module, a voice recording module, a voice analysis module, a question-answering module, a storage module and a comprehensive evaluation module. The login verification method of that patent strengthens the security of user login and addresses the low security of existing login schemes. The invention patent CN108897879B discloses a method for realizing personalized teaching through human-computer interaction (patent classification G06F), covering teaching links including automatic decomposition of learning tasks, automatic recommendation of test questions, localization of erroneous steps, and question explanation. Apart from data preparation and parameter setting, the method requires no real teacher participation, minimizing the occupation of human resources, especially educational resources. Compared with the traditional explanation mode, which takes 15-20 minutes on average, that patent can explain accurately at the step level by determining error factors: it skips content that needs no explanation according to the user's specific learning situation, focuses on the user's own questions and digs into their root causes, making knowledge points easier to understand, shortening explanation time to 2-3 minutes and greatly improving learning efficiency.

However, practical application shows that the artificial intelligence auxiliary schemes used in existing music teaching still leave considerable room for improvement: interactivity with the user is insufficient, and other mood factors of the user are not considered, so truly personalized music teaching cannot be realized.

Disclosure of Invention

To solve the above technical problems, the invention provides a cloud-database-based music teaching human-machine interaction system and method, and a portable terminal device, electronic device and computer-readable storage medium for implementing the method. In a first aspect of the invention, a cloud-database-based music teaching human-machine interaction system is provided, comprising a sound sensor, an image sensor, a cloud matching unit and a visual interaction terminal. The sound sensor comprises a music acquisition unit and a music playing unit; the music playing unit plays first