CN-121973278-A - Intelligent accompanying robot based on visual perception and recognition and control method

CN121973278A

Abstract

The invention discloses an intelligent accompanying robot based on visual perception and recognition, together with a control method thereof. The robot comprises a visual perception and recognition module, a real-time accompanying control module, and an emotion accompanying and interaction module. The visual perception and recognition module acquires indoor and outdoor environment images in real time and constructs a three-dimensional environment map; verifies the identity of the elderly person and provides visual positioning; identifies the corresponding emotional state through a facial-expression recognition algorithm; and identifies obstacles in the environment through a deployed target detection model. The real-time accompanying control module starts real-time recognition and accompanying based on the three-dimensional environment map and the visual positioning of the elderly person, dynamically adjusts following parameters according to the person's movement state, and avoids obstacles in the environment. The emotion accompanying and interaction module, after the emotional state of the elderly person has been visually identified, invokes a preset emotion interaction strategy to provide emotional companionship.

Inventors

  • ZHANG MIKE

Assignees

  • 张米克 (Zhang Mike)

Dates

Publication Date
2026-05-05
Application Date
2026-01-12

Claims (10)

  1. An intelligent accompanying robot based on visual perception and recognition, comprising: a visual perception and recognition module configured to collect indoor and outdoor environment images in real time and construct a three-dimensional environment map through a preset environment vision modeling algorithm; verify the identity of the elderly person and provide visual positioning; identify the corresponding emotional state through a facial-expression recognition algorithm; and identify obstacles in the environment through a deployed target detection model, outputting the position, size, and distance of each obstacle to provide environmental support for accompanying movement; a real-time accompanying control module configured to start real-time recognition and accompanying based on the three-dimensional environment map and the visual positioning of the elderly person, dynamically adjust following parameters according to the person's movement state, and avoid obstacles in the environment; and an emotion accompanying and interaction module configured to, after the emotional state of the elderly person has been visually identified, invoke a preset emotion interaction strategy to provide emotional companionship, wherein the emotion interaction strategy invokes the required interaction resources based on the pre-entered interest preferences of the elderly person so as to generate a personalized adaptation scheme.
  2. The intelligent accompanying robot based on visual perception and recognition according to claim 1, further comprising a visual auxiliary housekeeping execution module configured to identify corresponding housekeeping scenes and related objects through visual perception so as to accurately execute simple housekeeping tasks, wherein the housekeeping scenes include dining-table cleaning, floor cleaning, and object fetching.
  3. The intelligent accompanying robot based on visual perception and recognition according to claim 2, further comprising a health monitoring and reminding module configured to: combine visual recognition with health data acquisition to realize comprehensive health management, and immediately trigger an emergency response when an abnormality occurs, wherein a health-index threshold calibration mechanism is adopted during health management, and personalized thresholds are established from the entered age, basic medical history, and past health data of the elderly person; visually identify the position and state of the elderly person to trigger the corresponding reminder item; and visually identify whether the elderly person has completed the reminder item, feeding the result back to the family-member terminal.
  4. The intelligent accompanying robot according to any one of claims 1-3, wherein the target detection model introduces an attention mechanism on the basis of the YOLOv algorithm to enhance the recognition of key features of the elderly person and of small obstacles, and adopts model quantization to convert the model parameters into INT8 format so as to reduce the computational load.
  5. The intelligent accompanying robot based on visual perception and recognition according to claim 4, wherein the real-time recognition employs gait recognition, comprising: extracting gait features with a convolutional neural network, wherein an improved residual network is adopted to extract multi-scale gait features; and using a KAN network to enhance the representation of local gait features so as to improve accuracy.
  6. The intelligent accompanying robot based on visual perception and recognition according to claim 5, wherein the initial convolution of the improved residual network introduces an Inception module that captures local detail features and global features simultaneously through parallel convolution and pooling operations, improving discrimination across different scenes; and a mixed residual structure is adopted to enrich feature expression, wherein the mixed residual structure fuses three branches of bottleneck residual, dilated (atrous) residual, and attention residual, the three branch outputs being combined by weighted fusion with a shortcut connection.
  7. The intelligent accompanying robot based on visual perception and recognition according to claim 5, wherein the real-time accompanying control module is further configured, when adapting to scenes with multiple elderly people, to determine the robot's response order according to preset priority decision logic so as to ensure reasonable service in complex scenes.
  8. A control method for an intelligent accompanying robot, applied to the intelligent accompanying robot based on visual perception and recognition according to claim 4, the method comprising: collecting indoor and outdoor environment images in real time, and constructing a three-dimensional environment map through a preset environment vision modeling algorithm; verifying the identity of the elderly person and providing visual positioning; identifying the corresponding emotional state through a facial-expression recognition algorithm; identifying obstacles in the environment through a deployed target detection model, outputting the position, size, and distance of each obstacle to provide environmental support for accompanying movement; starting real-time recognition and accompanying based on the three-dimensional environment map and the visual positioning of the elderly person, and dynamically adjusting following parameters according to the person's movement state so as to avoid obstacles in the environment; and after the emotional state of the elderly person has been visually identified, invoking a preset emotion interaction strategy to provide emotional companionship, wherein the emotion interaction strategy invokes the required interaction resources based on the pre-entered interest preferences of the elderly person so as to generate a personalized adaptation scheme.
  9. The control method according to claim 8, further comprising: combining visual recognition with health data acquisition to realize comprehensive health management, and immediately triggering an emergency response when an abnormality occurs, wherein a health-index threshold calibration mechanism is adopted during health management, and personalized thresholds are established from the entered age, basic medical history, and past health data of the elderly person; visually identifying the position and state of the elderly person to trigger the corresponding reminder item; and visually identifying whether the elderly person has completed the reminder item, feeding the result back to the family-member terminal.
  10. The control method according to claim 8 or 9, wherein the real-time recognition employs gait recognition, specifically comprising: extracting gait features with a convolutional neural network, wherein an improved residual network is adopted to extract multi-scale gait features; and using a KAN network to enhance the representation of local gait features so as to improve accuracy.
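The INT8 quantization of claim 4 can be illustrated with a minimal sketch. The patent does not disclose the quantization scheme, so the symmetric per-tensor scheme below, with the hypothetical helper names `quantize_int8` and `dequantize_int8`, is only one common choice, not the claimed implementation.

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization of float weights to INT8.

    Maps the float range [-max|w|, +max|w|] onto [-127, 127];
    returns the INT8 values plus the scale needed to dequantize.
    """
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from INT8 values."""
    return [qi * scale for qi in q]

weights = [0.82, -1.27, 0.004, 0.5]
q, scale = quantize_int8(weights)
recovered = dequantize_int8(q, scale)
# per-weight round-trip error is bounded by scale / 2
```

Storing INT8 values instead of 32-bit floats cuts weight memory by roughly 4x, which is the computational saving the claim alludes to.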
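Claim 7 mentions priority decision logic for multi-person scenes without disclosing it. As one hedged illustration, a rule that ranks pending requests by hazard level and then by waiting time could look like the following; the request kinds, field names, and ranking are assumptions invented for this sketch.

```python
from dataclasses import dataclass

@dataclass
class Request:
    person: str
    kind: str          # e.g. "fall_alert", "reminder", "chat"
    waited_s: float    # seconds the request has been pending

# hypothetical hazard ranking: safety-critical events come first
HAZARD_RANK = {"fall_alert": 0, "health_reminder": 1, "reminder": 2, "chat": 3}

def response_order(requests):
    """Order pending requests: higher hazard first, then longer wait."""
    return sorted(requests, key=lambda r: (HAZARD_RANK.get(r.kind, 99), -r.waited_s))

pending = [
    Request("Alice", "chat", 40.0),
    Request("Bob", "fall_alert", 2.0),
    Request("Carol", "reminder", 120.0),
]
ordered = response_order(pending)
# Bob's fall alert is served first regardless of how briefly it has waited
```

Any real priority policy would also need tie-breaking and starvation protection; the point here is only that a deterministic response order gives the "service rationality" the claim names.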
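Claims 3 and 9 describe a health-index threshold calibration mechanism built from the elderly person's age, medical history, and past data, but give no formulas. A minimal sketch of what such personalization might look like follows; every constant and adjustment rule is invented for illustration and is not clinical guidance or the patented mechanism.

```python
def personalized_hr_threshold(age, has_hypertension, past_resting_hr):
    """Derive a personalized resting-heart-rate alert band.

    Starts from the mean of past resting measurements, then narrows
    the band for older age and hypertension so alerts fire earlier.
    """
    baseline = sum(past_resting_hr) / len(past_resting_hr)
    margin = 25.0 if age < 75 else 20.0   # tighter band for older users
    if has_hypertension:
        margin -= 5.0                      # alert earlier with hypertension
    return {"low": baseline - margin, "high": baseline + margin}

def is_abnormal(hr, thresholds):
    """True when a reading falls outside the personalized band."""
    return hr < thresholds["low"] or hr > thresholds["high"]

thresholds = personalized_hr_threshold(78, True, [68, 70, 72])
# for this profile the band is 55-85 bpm, so a reading of 88 triggers an alert
```

The emergency-response path of the claims would hang off `is_abnormal` returning True.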

Description

Intelligent accompanying robot based on visual perception and recognition and control method

Technical Field

The invention relates to the technical field of intelligent robots, and in particular to an intelligent accompanying robot based on visual perception and recognition and a control method, which are particularly suitable for daily accompanying, life care, and health monitoring scenarios for elderly people.

Background

As population aging continues to deepen, the number of elderly people living alone increases year by year, and their daily care, safety, and emotional needs have become core problems of social concern. Existing elderly-accompanying robots mostly adopt sensor-fusion or single-modality voice interaction schemes, with the following shortcomings: first, accompanying accuracy is insufficient, real-time stable accompanying cannot be achieved, and the robot easily becomes separated from the elderly person; second, emotional companionship lacks accurate recognition in the visual dimension, so emotional signals such as facial expression and body state are hard to capture accurately and the interaction experience is stiff; third, the visual linkage between household services and health monitoring is poor, and service responses cannot be actively triggered through visual recognition; fourth, risk recognition relies on a single sensor, and the recognition accuracy and real-time performance for dangerous scenes such as falls and abnormal lingering need improvement.

Disclosure of Invention

In view of the technical defects mentioned in the background, embodiments of the invention aim to provide an intelligent accompanying robot based on visual perception and recognition and a control method thereof, so as to solve at least one of the above technical problems in the related art to a certain extent.
To achieve the above object, in a first aspect, an embodiment of the invention provides an intelligent accompanying robot based on visual perception and recognition, comprising: a visual perception and recognition module configured to collect indoor and outdoor environment images in real time and construct a three-dimensional environment map through a preset environment vision modeling algorithm; verify the identity of the elderly person and provide visual positioning; identify the corresponding emotional state through a facial-expression recognition algorithm; and identify obstacles in the environment through a deployed target detection model, outputting the position, size, and distance of each obstacle to provide environmental support for accompanying movement; a real-time accompanying control module configured to start real-time recognition and accompanying based on the three-dimensional environment map and the visual positioning of the elderly person, dynamically adjust following parameters according to the person's movement state, and avoid obstacles in the environment; and an emotion accompanying and interaction module configured to, after the emotional state of the elderly person has been visually identified, invoke a preset emotion interaction strategy to provide emotional companionship, wherein the emotion interaction strategy invokes the required interaction resources based on the pre-entered interest preferences of the elderly person so as to generate a personalized adaptation scheme.
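The patent states that following parameters are adjusted dynamically according to the elderly person's movement state, but does not give a control law. One hedged sketch, using a simple proportional rule on the gap to a target following distance, is shown below; the target gap, gain, and speed limit are assumptions for illustration only.

```python
def follow_params(person_speed_mps, gap_m):
    """Adjust robot speed from the person's speed and the current gap.

    Keeps a target gap of 1.2 m: speeds up when trailing too far,
    slows down when too close. Gains and limits are illustrative.
    """
    TARGET_GAP = 1.2   # desired following distance in metres
    K_GAP = 0.8        # proportional gain on the gap error
    MAX_SPEED = 1.5    # robot speed limit in m/s

    correction = K_GAP * (gap_m - TARGET_GAP)
    speed = person_speed_mps + correction
    return max(0.0, min(MAX_SPEED, speed))

# person walking at 0.9 m/s with the robot 2.0 m behind: robot speeds up
faster = follow_params(0.9, 2.0)
# person stopped with the robot already at the target gap: robot stops too
stopped = follow_params(0.0, 1.2)
```

In the patented system the obstacle positions from the target detection model would additionally constrain this output, overriding the following speed whenever the planned path is blocked.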
As a preferred implementation of the application, the intelligent accompanying robot based on visual perception and recognition further comprises a visual auxiliary housekeeping execution module configured to identify corresponding housekeeping scenes and related objects through visual recognition so as to accurately execute simple housekeeping tasks, wherein the housekeeping scenes include dining-table cleaning, floor cleaning, and object fetching. As a preferred implementation of the application, the intelligent accompanying robot based on visual perception and recognition further comprises a health monitoring and reminding module configured to: combine visual recognition with health data acquisition to realize comprehensive health management, and immediately trigger an emergency response when an abnormality occurs, wherein a health-index threshold calibration mechanism is adopted during health management, and personalized thresholds are established from the entered age, basic medical history, and past health data of the elderly person; visually identify the position and state of the elderly person to trigger the corresponding reminder item; and visually identify whether the elderly person has completed the reminder item, feeding the result back to the family-member terminal. As a specific implementation of the application, the target detection model introduces an attention mechanism on the basis of the YOLOv algorithm to e