CN-122024539-A - Intelligent central robot and household education implementation method thereof

CN 122024539 A

Abstract

The application discloses an intelligent central robot and a household education implementation method thereof. The household education implementation method comprises: determining a learning scene context by analyzing sensing data from a sensing module together with a digital twin space constructed in advance from the indoor physical space; selecting a corresponding matching strategy according to the learning scene context and a stored user learning situation portrait; performing variable filling and style adjustment on a structured teaching template according to the learning scene context, the display style and the matching strategy to generate personalized scene content; invoking the corresponding execution modules according to the personalized scene content to perform multi-modal output so as to construct an immersive teaching scene; determining the user's interaction information by analyzing the sensing data of the sensing module and the digital twin space; and adjusting the output of the corresponding execution modules according to the interaction information. By implementing the technical scheme of the application, the degree of personalization is higher, and teaching interactivity and the sense of presence are enhanced.

Inventors

  • LI XIUQIU
  • DAI MINGJUN
  • YU XIAO
  • ZHENG XIAOBEI
  • CHEN YUMING
  • GE DONGJUN
  • DENG HUI
  • LI YICHENG

Assignees

  • 深圳安康慧科技有限公司 (Shenzhen Ankanghui Technology Co., Ltd.)

Dates

Publication Date
2026-05-12
Application Date
2026-03-03
Priority Date
2025-12-30

Claims (12)

  1. A household education implementation method for an intelligent central robot, applied to a processor of the intelligent central robot, the intelligent central robot further comprising a sensing module and an execution module, wherein the execution module comprises a projection module, a light module, a posture adjustment module, a display module and a sound output module, and the household education implementation method comprises the following steps: Step S10, determining a learning scene context by analyzing sensing data of the sensing module and a digital twin space constructed in advance based on an indoor physical space, wherein the learning scene context comprises a user identity, a user position, a learning state and a learning obstacle type; Step S20, selecting a corresponding matching strategy from a plurality of preset matching strategies according to the learning scene context and a stored user learning situation portrait, and, based on the selected matching strategy, performing variable filling and style adjustment on a pre-constructed structured teaching template using AIGC technology according to the learning scene context and a preset display style, so as to generate personalized scene content; Step S30, invoking the corresponding execution module according to the personalized scene content to perform multi-modal output so as to construct an immersive teaching scene; and Step S40, determining interaction information of a user by analyzing the sensing data of the sensing module and the digital twin space, and adjusting the output of the corresponding execution module according to the interaction information.
  2. The household education implementation method according to claim 1, further comprising, after step S40: determining a learning effect by analyzing the interaction information; and updating the user learning situation portrait according to the learning effect.
  3. The household education implementation method according to claim 2, further comprising: generating teaching recommendation information according to the learning effect, and outputting the teaching recommendation information.
  4. The household education implementation method according to claim 2, further comprising: generating a study report according to the learning effect, and sending the study report to a preset terminal device.
  5. The household education implementation method according to any one of claims 1 to 4, wherein the sensing module comprises a non-visual sensor and a visual sensor, and step S10 comprises: Step S11, determining a current user event according to the sensing data of the non-visual sensor and the digital twin space, wherein the user event comprises an event type, an occurrence place, an occurrence time and a confidence level; and Step S12, when the current user event meets a preset trigger condition, starting the visual sensor, and determining the learning scene context by analyzing the sensing data of the visual sensor, the sensing data of the non-visual sensor and the digital twin space.
  6. The household education implementation method according to claim 5, further comprising, between step S11 and step S12: Step S13, determining a teaching opportunity level corresponding to the current user event according to a preset rule base, wherein the rule base comprises a plurality of pieces of condition information, combined according to different logical relations among different event types and/or different occurrence places and/or different occurrence times and/or different confidence levels, together with the teaching opportunity level corresponding to each piece of condition information; and Step S14, arbitrating and authorizing the opening of the visual sensor according to the current teaching opportunity level, prestored authority setting information and the current state of a privacy switch, wherein the privacy switch is connected between the power terminal of the visual sensor and the power supply, or between the processor and the visual sensor.
  7. The household education implementation method according to claim 6, wherein the authority setting information includes the opening authorities respectively corresponding to a plurality of different visual sensors at each teaching opportunity level, the plurality of different visual sensors including a first camera provided at the head of the intelligent central robot, a second camera provided at the trunk of the intelligent central robot, and a third camera provided at a leg of the intelligent central robot; and in step S14, arbitrating and authorizing the opening of the visual sensor according to the authority setting information and the current state of the privacy switch comprises: Step S141, judging whether the current state of the privacy switch is off; if yes, executing step S142, and if not, executing step S143; Step S142, refusing to call the visual sensor and outputting prompt information; and Step S143, authorizing and opening the cameras with opening authority according to the authority setting information and the current teaching opportunity level.
  8. The household education implementation method according to claim 7, wherein the authority setting information further includes execution parameters respectively corresponding to the plurality of different cameras at different teaching opportunity levels; and in step S143, authorizing and opening the cameras with opening authority comprises: authorizing and opening the corresponding cameras with opening authority according to the corresponding execution parameters.
  9. The household education implementation method according to claim 7, wherein the authority setting information further includes a shutdown condition corresponding to each of the plurality of different cameras at each teaching opportunity level; and after step S10, the method further comprises: judging whether the shutdown condition corresponding to a started camera is currently met, and if yes, shutting down the corresponding started camera.
  10. The household education implementation method according to claim 1, wherein the digital twin space is constructed in the following manner: identifying the physical space in the home by SLAM technology and generating an indoor 3D vector map; and identifying at least one indoor environment element and performing semantic annotation, and/or receiving the user's semantic annotation information for the at least one indoor environment element.
  11. The household education implementation method according to claim 1, wherein the user learning situation portrait is initially constructed in the following manner: locating the user's mastery level in a knowledge graph through gamified adaptive assessment to generate a knowledge-ability portrait; evaluating the user's learning style, thinking habits and frustration tolerance through interactive tasks and questionnaires to generate a learning portrait; and receiving user-selected interest topics to generate an interest portrait.
  12. An intelligent central robot, comprising a sensing module, an execution module, a processor and a memory storing a computer program, wherein the execution module comprises a projection module, a light module, a posture adjustment module, a display module and a sound output module, and the processor, when executing the computer program, implements the steps of the household education implementation method according to any one of claims 1 to 11.
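The camera-authorization arbitration of claims 6 to 8 (hardware privacy switch first, then per-camera opening authority per teaching opportunity level, then execution parameters) can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation; all names (`CameraPolicy`, `arbitrate_cameras`, the camera labels and parameters) are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class CameraPolicy:
    open_allowed: bool                        # opening authority at this opportunity level
    execution_params: Optional[dict] = None   # e.g. resolution, frame rate (claim 8)

def arbitrate_cameras(
    privacy_switch_on: bool,
    opportunity_level: str,
    permissions: Dict[str, Dict[str, CameraPolicy]],  # camera -> level -> policy
) -> Dict[str, dict]:
    """Return the cameras authorized to open and their execution parameters."""
    # Steps S141/S142: a hardware privacy switch that is off overrides everything.
    if not privacy_switch_on:
        print("Privacy switch is off: refusing to call any visual sensor.")
        return {}
    # Step S143: open only cameras with opening authority at the current level,
    # using the execution parameters configured for that level.
    authorized = {}
    for camera, levels in permissions.items():
        policy = levels.get(opportunity_level)
        if policy is not None and policy.open_allowed:
            authorized[camera] = policy.execution_params or {}
    return authorized
```

For example, with a hypothetical policy table where only the head camera may open at a "high" teaching opportunity level, `arbitrate_cameras(True, "high", permissions)` returns just the head camera with its parameters, while any call with the privacy switch off returns nothing.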

Description

Intelligent central robot and household education implementation method thereof

Technical Field

The application relates to the field of smart home, and in particular to an intelligent central robot and a household education implementation method thereof.

Background

Education is a continuum in which home education and school education play different roles at different stages: in early childhood, home education leads the formation of personality and the roots of cognition; at school age, it becomes a personalized strategic partner running in parallel with the school; and ultimately it becomes the core driving force of an individual's lifelong learning. It is not only the "foundation" and "supplement" of school education, but also a necessary stabilizer and growth engine that keeps individuals unique, creative and inwardly balanced within a standardized system. Although technologies such as high-speed networks, live streaming, cloud computing and audio/video provide basic support for home education, its tools and effects are still subject to the following deep bottlenecks, leaving home study trapped in a "teaching black box" and "pseudo-personalization":

First, the "teaching black box" causes a loss of interactivity and presence. Existing products rely on one-way video and standardized interaction, and cannot realize the multi-modal real-time interaction that matters in classroom teaching (such as eye contact, expressions, gestures and the circulation of group emotion). The teacher cannot instantly sense students' concentration and confusion as in an offline classroom, students cannot obtain continuous in-situ feedback and emotional connection, the learning process easily degrades into distracted one-way information reception, and an invisible, unmanaged feedback fracture is formed.

Second, "pseudo-personalization" fails to adapt to real learning requirements. Most systems push content based on static labels and coarse-grained data, and lack deep diagnosis of the learner's dynamic knowledge state, cognitive style and thinking process. As a result, teaching remains trapped in a one-size-fits-all ("a thousand people, one face") framework, providing neither accurate remediation for struggling learners nor challenging steps for advanced ones, creating the efficiency bottleneck of "teaching that does not get through".

Therefore, how to fundamentally resolve the dilemma of home education being "invisible, indistinct and impermeable", and to transform home study from passive reception into a new intelligent education space that actively adapts, supports precisely and cooperates deeply, thereby improving home study efficiency, is a technical problem to be solved urgently.

Disclosure of Invention

The application aims to solve the technical problem of providing an intelligent central robot and a household education implementation method thereof, addressing the technical defects existing in the prior art.
The technical scheme adopted by the application to solve the above technical problem is to construct an intelligent central robot and a household education implementation method thereof. The household education implementation method is applied to a processor of the intelligent central robot; the intelligent central robot further comprises a sensing module and an execution module, the execution module comprising a projection module, a light module, a posture adjustment module, a display module and a sound output module. The household education implementation method comprises the following steps: Step S10, determining a learning scene context by analyzing sensing data of the sensing module and a digital twin space constructed in advance based on an indoor physical space, wherein the learning scene context comprises a user identity, a user position, a learning state and a learning obstacle type; Step S20, selecting a corresponding matching strategy from a plurality of preset matching strategies according to the learning scene context and a stored user learning situation portrait, and, based on the selected matching strategy, performing variable filling and style adjustment on a pre-constructed structured teaching template using AIGC technology according to the learning scene context and a preset display style, so as to generate personalized scene content; Step S30, invoking the corresponding execution module according to the personalized scene content to perform multi-modal output so as to construct an immersive teaching scene; and Step S40, determining interaction information of a user by analyzing the sensing data of the sensing module and the digital twin space, and adjusting the output of the corresponding execution module according to the interaction information.
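The S10 and S20 steps above (fusing sensor data with the digital twin into a scene context, then choosing a matching strategy and filling a structured template) can be sketched in a minimal, self-contained form. The patent defines the steps but no concrete API, so every data structure and helper name here (`build_scene_context`, `select_strategy`, the portrait fields, the template placeholders) is a hypothetical assumption for illustration; in particular, a simple string `format` stands in for the AIGC generation step.

```python
def build_scene_context(sensor_data, twin_space):
    """Step S10: fuse raw sensing data with the digital twin into a scene context."""
    return {
        "user": sensor_data["user_id"],
        "position": twin_space.get(sensor_data["location"], "unknown room"),
        "state": sensor_data["state"],            # e.g. "focused" / "distracted"
        "obstacle": sensor_data.get("obstacle"),  # learning-obstacle type, if any
    }

def select_strategy(context, portrait, strategies):
    """Step S20a: pick a matching strategy from preset ones via context + portrait."""
    key = (context["state"], portrait["style"])
    return strategies.get(key, strategies["default"])

def fill_template(template, context, strategy):
    """Step S20b: variable filling and style adjustment of a structured teaching
    template (a plain format call standing in for AIGC generation)."""
    return template.format(user=context["user"], tone=strategy["tone"])

def run_s10_to_s20(sensor_data, twin_space, portrait, strategies, template):
    context = build_scene_context(sensor_data, twin_space)
    strategy = select_strategy(context, portrait, strategies)
    return fill_template(template, context, strategy)
```

A usage sketch: a distracted user with a "visual" learning-style portrait would match a playful-toned strategy, and the filled template becomes the personalized scene content that steps S30/S40 then render and adjust via the execution modules.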
Optionally, after step S40, the method further includes: determining a learning effect by analyzing the interaction information; and updating the user learning situation portrait according to the learning effect. Optionally, the method further c