
CN-121462639-B - Method and device for intelligently controlling equipment in shared office based on user state

CN121462639B

Abstract

The application discloses an intelligent control method and device for equipment in shared offices based on user state. The server-side method comprises: acquiring multi-modal sensing data of each shared office at each historical moment within a preset period, the multi-modal sensing data comprising motion data, seat pressure data, and personal-item state data; performing key feature extraction and feature fusion on the motion data, the seat pressure data, and the personal-item state data to obtain multi-modal features for each historical moment; determining the user state corresponding to each shared office according to the multi-modal features of each historical moment together with either a predefined behavior pattern or a pre-fine-tuned user-state recognition model; and sending the user state corresponding to each shared office to a preset resource scheduling platform so as to automatically control the operating equipment of each shared office. The application can thereby reduce operating costs, improve system stability, and adapt to the complex scenarios of frequent personnel flow and diverse behavior in shared offices.

Inventors

  • HAN JINBO

Assignees

  • 浙江微风智能科技有限公司

Dates

Publication Date
2026-05-12
Application Date
2026-01-05

Claims (8)

  1. An intelligent control method for equipment in a shared office based on a user state, applied to a server side, the method comprising: acquiring multi-modal sensing data of each shared office at each historical moment within a preset period, wherein the multi-modal sensing data comprises motion data, seat pressure data, and personal-item state data; performing key feature extraction and feature fusion on the motion data, the seat pressure data, and the personal-item state data to obtain multi-modal features for each historical moment; determining a user state corresponding to each shared office according to the multi-modal features of each historical moment and either a predefined behavior pattern or a pre-fine-tuned user-state recognition model, wherein the predefined behavior pattern represents the correspondence between different behavior combinations of users in the shared office and user states, and the pre-fine-tuned user-state recognition model is obtained by training jointly through unlabeled general-model weight learning and labeled model fine-tuning learning; wherein the multi-modal features at each historical moment comprise a movement-trajectory feature, a seat-pressure feature, and an item-carrying feature, and the predefined behavior patterns comprise a behavior pattern corresponding to an in-place state, a behavior pattern corresponding to a short-term-leave state, and a behavior pattern corresponding to a permanent-leave state; determining the user state corresponding to each shared office according to the multi-modal features of each historical moment and the predefined behavior pattern comprises: counting, over the movement-trajectory features of the historical moments, the trajectory feature with the largest number of occurrences; counting, over the seat-pressure features of the historical moments, the pressure feature with the largest number of occurrences; and counting, over the item-carrying features of the historical moments, the item feature with the largest number of occurrences; when the pressure-feature result indicates that the seat is occupied, the trajectory-feature result indicates that the user has not risen, is not moving toward the doorway, and is moving slowly, and the item-feature result indicates that the personal items are indoors, determining that the behavior pattern corresponding to the in-place state is met, and taking the in-place state as the user state corresponding to the shared office; when the pressure-feature result indicates that the seat is unoccupied, the trajectory-feature result indicates that the user has risen and is moving toward the doorway, and the item-feature result indicates that the personal items are indoors, determining that the behavior pattern corresponding to the short-term-leave state is met, and taking the short-term-leave state as the user state corresponding to the shared office; when the pressure-feature result indicates that the seat is unoccupied, the trajectory-feature result indicates that the user has risen and is moving toward the doorway, and the item-feature result indicates that the personal items are not indoors, determining that the behavior pattern corresponding to the permanent-leave state is met, and taking the permanent-leave state as the user state corresponding to the shared office; and sending the user state corresponding to each shared office to a preset resource scheduling platform so as to automatically control the operating equipment of each shared office.
  2. The method of claim 1, wherein before acquiring the multi-modal sensing data of each shared office at each historical moment within the preset period, the method further comprises: when a shared office is activated by a user, starting multiple types of sensing equipment pre-deployed in the shared office, wherein the multiple types of sensing equipment comprise a millimeter-wave radar, a pressure sensor, and a personal-item detection component; acquiring the actions, movement directions, and speeds of the user in the shared office through the millimeter-wave radar as the motion data; acquiring seat pressure data of a seat in the shared office through the pressure sensor; monitoring the state of the personal items carried by the user through the personal-item detection component to obtain the personal-item state data; preprocessing the motion data, the seat pressure data, and the personal-item state data and aligning the preprocessed data with a time window to obtain the multi-modal sensing data of each historical moment; and storing the multi-modal sensing data to a time-series data platform.
  3. The method of claim 1, wherein performing key feature extraction and feature fusion on the motion data, the seat pressure data, and the personal-item state data to obtain the multi-modal features of each historical moment comprises: analyzing, according to the motion data, whether the user has risen, whether the user is moving toward the doorway, and the user's movement speed, to obtain the movement-trajectory feature; determining the occupancy state of the seat according to the seat pressure data to obtain the seat-pressure feature; determining whether personal items of the user are present in the shared office according to the personal-item state data to obtain the item-carrying feature; and performing feature splicing on the movement-trajectory feature, the seat-pressure feature, and the item-carrying feature to obtain the multi-modal features of each historical moment.
  4. The method of claim 1, wherein determining the user state corresponding to each shared office according to the multi-modal features of each historical moment and the pre-fine-tuned user-state recognition model comprises: initializing the pre-fine-tuned user-state recognition model; sequentially inputting the multi-modal features of each historical moment into the pre-fine-tuned user-state recognition model in time order; and outputting the user state corresponding to each shared office.
  5. The method of claim 4, wherein generating the pre-fine-tuned user-state recognition model comprises: collecting and preprocessing historical multi-modal sensing data of each shared office within a preset historical period to obtain historical modal data of each historical moment of each shared office; combining the historical modal data of adjacent historical moments, among the historical modal data of each historical moment of each shared office, into positively correlated first sensing data pairs; randomly extracting the historical modal data of any two historical moments of different shared offices, from the historical modal data of each historical moment of each shared office, to form negatively correlated second sensing data pairs; performing unlabeled general-model weight learning according to each first sensing data pair and each second sensing data pair to obtain a base model capable of identifying correlation features between adjacent moments; labeling part of the historical modal data, among the historical modal data of each historical moment, to reflect the real user state of the user under that historical modal data, thereby obtaining a small number of model fine-tuning samples; and performing labeled model fine-tuning learning on the base model based on the model fine-tuning samples to obtain the pre-fine-tuned user-state recognition model.
  6. The method according to claim 5, wherein performing unlabeled general-model weight learning according to each first sensing data pair and each second sensing data pair to obtain the base model capable of identifying correlation features between adjacent moments comprises: creating an initial network architecture, wherein the initial network architecture comprises a feature-embedding module, a similarity-calculation module, and a loss function for unlabeled learning, connected in sequence; normalizing the two pieces of historical modal data in each first sensing data pair and each second sensing data pair; inputting each normalized first sensing data pair and each normalized second sensing data pair into the feature-embedding module to encode the two pieces of historical modal data in each sensing data pair, obtaining an embedded representation of each feature; performing feature splicing on the embedded representations of the features to obtain two overall embedded representations for each first sensing data pair and two overall embedded representations for each second sensing data pair; inputting the two overall embedded representations into the similarity-calculation module, and calculating the cosine similarity of each first sensing data pair and the cosine similarity of each second sensing data pair; calculating a correlation loss value through the loss function for unlabeled learning according to the cosine similarities; and when the correlation loss value reaches the minimum, generating the base model capable of identifying correlation features between adjacent moments.
  7. The method of claim 5, wherein performing labeled model fine-tuning learning on the base model based on the model fine-tuning samples to obtain the pre-fine-tuned user-state recognition model comprises: extracting the basic data features of each model fine-tuning sample and the label feature reflecting the real user state, wherein the basic data features comprise the movement-trajectory feature, the seat-pressure feature, and the item-carrying feature; mapping the basic data features and the label feature into embedded vectors of fixed dimension to obtain a feature vector of each model fine-tuning sample; inputting the feature vector into the base model for labeled model fine-tuning, and outputting a cross-entropy loss value between the user state predicted by the base model and the real user state of each model fine-tuning sample; when the cross-entropy loss value reaches the minimum, generating the pre-fine-tuned user-state recognition model; or, when the cross-entropy loss value does not reach the minimum, continuing to perform the step of inputting the feature vector into the base model for labeled model fine-tuning until the cross-entropy loss value reaches the minimum.
  8. An intelligent control device for equipment in a shared office based on a user state, the device comprising: a multi-modal sensing data acquisition module, configured to acquire multi-modal sensing data of each shared office at each historical moment within a preset period, wherein the multi-modal sensing data comprises motion data, seat pressure data, and personal-item state data; a feature processing module, configured to perform key feature extraction and feature fusion on the motion data, the seat pressure data, and the personal-item state data to obtain multi-modal features for each historical moment; a user-state determining module, configured to determine the user state corresponding to each shared office according to the multi-modal features of each historical moment and either a predefined behavior pattern or a pre-fine-tuned user-state recognition model, wherein the predefined behavior pattern represents the correspondence between different behavior combinations of users in the shared office and user states, and the pre-fine-tuned user-state recognition model is obtained by training jointly through unlabeled general-model weight learning and labeled model fine-tuning learning; wherein the multi-modal features at each historical moment comprise a movement-trajectory feature, a seat-pressure feature, and an item-carrying feature, and the predefined behavior patterns comprise a behavior pattern corresponding to an in-place state, a behavior pattern corresponding to a short-term-leave state, and a behavior pattern corresponding to a permanent-leave state; determining the user state corresponding to each shared office according to the multi-modal features of each historical moment and the predefined behavior pattern comprises: counting, over the movement-trajectory features of the historical moments, the trajectory feature with the largest number of occurrences; counting, over the seat-pressure features of the historical moments, the pressure feature with the largest number of occurrences; and counting, over the item-carrying features of the historical moments, the item feature with the largest number of occurrences; when the pressure-feature result indicates that the seat is occupied, the trajectory-feature result indicates that the user has not risen, is not moving toward the doorway, and is moving slowly, and the item-feature result indicates that the personal items are indoors, determining that the behavior pattern corresponding to the in-place state is met, and taking the in-place state as the user state corresponding to the shared office; when the pressure-feature result indicates that the seat is unoccupied, the trajectory-feature result indicates that the user has risen and is moving toward the doorway, and the item-feature result indicates that the personal items are indoors, determining that the behavior pattern corresponding to the short-term-leave state is met, and taking the short-term-leave state as the user state corresponding to the shared office; when the pressure-feature result indicates that the seat is unoccupied, the trajectory-feature result indicates that the user has risen and is moving toward the doorway, and the item-feature result indicates that the personal items are not indoors, determining that the behavior pattern corresponding to the permanent-leave state is met, and taking the permanent-leave state as the user state corresponding to the shared office; and an automation control module, configured to send the user state corresponding to each shared office to a preset resource scheduling platform so as to automatically control the operating equipment of each shared office.
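The predefined behavior patterns in claims 1 and 8 amount to a small rule table over three aggregated features (the most frequent trajectory, pressure, and item-carrying results over the period). A minimal sketch of that decision logic follows; the class, function names, and boolean encodings are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

# Aggregated per-office results, each being the most frequently occurring
# feature value over the historical moments in the preset period.
@dataclass
class AggregatedFeatures:
    seat_occupied: bool        # most frequent seat-pressure feature
    user_rose: bool            # trajectory: did the user get up?
    moving_to_doorway: bool    # trajectory: movement toward the doorway?
    moving_slowly: bool        # trajectory: slow movement speed?
    items_indoors: bool        # item-carrying feature: personal items present?

def classify_user_state(f: AggregatedFeatures) -> str:
    """Map the aggregated features onto the three predefined behavior patterns."""
    if (f.seat_occupied and not f.user_rose
            and not f.moving_to_doorway and f.moving_slowly
            and f.items_indoors):
        return "in_place"
    if (not f.seat_occupied and f.user_rose
            and f.moving_to_doorway and f.items_indoors):
        return "short_term_leave"
    if (not f.seat_occupied and f.user_rose
            and f.moving_to_doorway and not f.items_indoors):
        return "permanent_leave"
    # No predefined pattern matched; a system could fall back to the
    # fine-tuned recognition model here.
    return "unknown"

# Example: seat empty, user rose and headed for the door, items left behind
print(classify_user_state(AggregatedFeatures(False, True, True, False, True)))
# → short_term_leave
```

The "unknown" fallback is one natural reading of why the claims offer both a rule path and a model path: the rules cover the common combinations cheaply, while the model handles the rest.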

Description

Method and device for intelligently controlling equipment in a shared office based on user state

Technical Field

The application relates to the technical field of intelligent environment sensing and control, and in particular to an intelligent control method and device for equipment in a shared office based on a user state.

Background

In modern shared office environments, users of the office space flow frequently, and the presence status of personnel (e.g., in place, away for a short time, away permanently) is highly dynamic. In such scenarios, the operating state of office equipment (such as air conditioners, lighting, and sockets) needs to be adjusted flexibly according to the actual presence of personnel so as to save energy and reduce emissions. In the related art, a single sensor is commonly used in a shared office environment to detect the presence of a person, for example by detecting the thermal signal of the human body to judge whether a person is in place. When the signal disappears, an administrator is notified to confirm the actual state of the personnel by viewing video monitoring, and the administrator may manually turn off related equipment (such as the air conditioner and lighting) based on their own judgment. However, a single sensor can only detect the thermal signal of the human body and cannot accurately distinguish whether the person has left for a short time or permanently, resulting in delayed and inaccurate equipment control. Furthermore, the administrator needs to analyze and judge manually through video monitoring; this manual monitoring is inefficient and is difficult to adapt to the complex scenarios of frequent personnel flow and diverse behavior patterns in a shared office environment.

Disclosure of Invention

The embodiment of the application provides an intelligent control method and device for equipment in a shared office based on a user state.
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview and is intended neither to identify key/critical elements nor to delineate the scope of such embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later. In a first aspect, an embodiment of the present application provides an intelligent control method for equipment in a shared office based on a user state, where the method is applied to a server and includes: acquiring multi-modal sensing data of each shared office at each historical moment within a preset period, wherein the multi-modal sensing data comprises motion data, seat pressure data, and personal-item state data; performing key feature extraction and feature fusion on the motion data, the seat pressure data, and the personal-item state data to obtain multi-modal features for each historical moment; determining a user state corresponding to each shared office according to the multi-modal features of each historical moment and either a predefined behavior pattern or a pre-fine-tuned user-state recognition model, wherein the predefined behavior pattern represents the correspondence between different behavior combinations of users in the shared office and user states, and the pre-fine-tuned user-state recognition model is obtained by co-training with unlabeled general-model weight learning and labeled model fine-tuning learning; and sending the user state corresponding to each shared office to a preset resource scheduling platform so as to automatically control the operating equipment of each shared office.
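The unlabeled weight-learning stage mentioned above is spelled out in claims 5 and 6: positively correlated pairs are built from adjacent moments of the same office, negatively correlated pairs from moments of different offices, and a cosine-similarity-based correlation loss is minimized. This follows a standard contrastive-learning recipe. The toy sketch below shows only the pair construction and loss; the function names and the exact loss form (pushing positive-pair similarity toward 1 and negative-pair similarity toward −1) are illustrative assumptions standing in for the patent's embedding network and loss function:

```python
import math
import random

def cosine_similarity(a, b):
    """Plain cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def build_pairs(history):
    """history: {office_id: [feature vector per historical moment]}.
    Positive pairs: adjacent moments of the same office.
    Negative pairs: random moments of two different offices."""
    positives, negatives = [], []
    for office, seq in history.items():
        for t in range(len(seq) - 1):
            positives.append((seq[t], seq[t + 1]))
    offices = list(history)
    for _ in range(len(positives)):
        a, b = random.sample(offices, 2)
        negatives.append((random.choice(history[a]), random.choice(history[b])))
    return positives, negatives

def correlation_loss(positives, negatives):
    """Illustrative correlation loss: drive positive-pair cosine similarity
    toward 1 and negative-pair similarity toward -1, averaged over all pairs."""
    loss = sum(1 - cosine_similarity(a, b) for a, b in positives)
    loss += sum(1 + cosine_similarity(a, b) for a, b in negatives)
    return loss / (len(positives) + len(negatives))

random.seed(0)
history = {
    "office_A": [[1.0, 0.1], [0.9, 0.2], [1.1, 0.0]],
    "office_B": [[-0.2, 1.0], [0.0, 0.9]],
}
pos, neg = build_pairs(history)
print(round(correlation_loss(pos, neg), 3))
```

In a full implementation, the feature vectors would first pass through the trainable feature-embedding module and the loss would be minimized by gradient descent over its weights; the base model obtained this way is then fine-tuned on the small labeled sample set with a cross-entropy loss, as in claim 7.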
Optionally, before acquiring the multi-modal sensing data of each shared office at each historical moment within the preset period, the method further includes: when the shared office is activated by a user, starting multiple types of sensing equipment pre-deployed in the shared office, wherein the multiple types of sensing equipment comprise a millimeter-wave radar, a pressure sensor, and a personal-item detection component; acquiring the actions, movement directions, and speeds of users in the shared office through the millimeter-wave radar as motion data; acquiring seat pressure data of a seat in the shared office through the pressure sensor; monitoring the state of personal items carried by the user through the personal-item detection component to obtain personal-item state data; preprocessing the motion data, the seat pressure data, and the personal-item state data and aligning the preprocessed data with a time window to obtain multi-modal sensing data of each historical moment; and storing the multi-modal sensing data to the time-series data platform. Optionally, the key feature extraction and feature fusion are performed on the motion data, the s