CN-121999771-A - Visual control method and system based on AI voice interaction and control
Abstract
The invention provides a visual control method and a visual control system based on AI voice interaction and control, relating to the technical field of AI control. The method comprises: acquiring, through a voice or remote application interface, a control instruction containing a target device identifier and an execution action type; parsing the control instruction into a device-level control signal based on a preset device mapping relation; transmitting the control signal over a wireless communication link to a field control node with an independent energy supply; acquiring in real time, through an image acquisition unit, a field state image of the device after it executes the action and transmitting the image back to the cloud; extracting device state features and comparing them for consistency with the execution action type to generate a state verification result; and generating, based on the verification result and the field image, visual feedback data containing a device execution state identifier and pushing it to the user side. The invention realizes closed-loop verification from control instruction to execution result, and improves the reliability and visualization of remote control.
Inventors
- LI HUIJUN
- LI SHUANGBING
Assignees
- 宁波市富金园艺灌溉设备有限公司
Dates
- Publication Date: 2026-05-08
- Application Date: 2026-01-07
Claims (8)
- 1. A visual control method based on AI voice interaction and control, characterized by comprising the following steps: acquiring a control instruction sent by a user through a voice interaction interface or a remote application interface, wherein the control instruction comprises a target device identifier and an execution action type; parsing the control instruction into a device-level control signal based on a preset device mapping relation, wherein the device mapping relation defines a correspondence rule between target device identifiers and physical execution units; transmitting the device-level control signal over a wireless communication link to the field control node where the target device is located, wherein the field control node maintains its operating state through an independent energy supply module; acquiring in real time, through an image acquisition unit associated with the field control node, a field state image of the target device after the action is executed, and transmitting the field state image back to a cloud processing platform over the wireless communication link; extracting device state features from the field state image and comparing them for consistency with the execution action type in the control instruction to generate a state verification result; and generating visual feedback data based on the state verification result and the field state image and pushing it to the user's remote application interface, wherein the visual feedback data comprises a device execution state identifier and the corresponding field state image, forming a closed-loop verification mechanism from control instruction transmission to visual confirmation of the execution result.
- 2. The method of claim 1, wherein parsing the control instruction into a device-level control signal based on a preset device mapping relation comprises: extracting the target device identifier and the execution action type from the control instruction, wherein the target device identifier uniquely specifies the physical execution unit to be controlled; retrieving from the device mapping relation a mapping record matching the target device identifier, wherein the mapping record comprises the interface type and control signal format of the physical execution unit; determining a signal conversion rule according to the interface type in the mapping record, wherein the signal conversion rule defines how an execution action type is converted into a signal recognizable by the physical execution unit; converting the execution action type, according to the signal conversion rule, into a device-level control signal matching the interface type of the physical execution unit, wherein the device-level control signal comprises the signal parameters required to drive the physical execution unit to execute the corresponding action; and packaging the device-level control signal together with the target device identifier to generate a control data packet addressed to the field control node, for transmission over the wireless communication link.
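The mapping-based parsing of claim 2 can be sketched as follows. This is a minimal Python illustration: the mapping table, interface types, action names, and signal parameters are all hypothetical placeholders, not values taken from the patent.

```python
# Hypothetical device mapping relation: target device identifier ->
# mapping record (interface type and control signal format).
DEVICE_MAP = {
    "valve-01": {"interface": "pulse", "format": "latching-solenoid"},
    "pump-02": {"interface": "relay", "format": "level"},
}

# Signal conversion rules keyed by interface type: each rule defines how an
# execution action type becomes a signal the physical execution unit accepts.
CONVERSION_RULES = {
    "pulse": {"open": {"polarity": "+", "duration_ms": 50},
              "close": {"polarity": "-", "duration_ms": 50}},
    "relay": {"open": {"level": 1}, "close": {"level": 0}},
}

def parse_instruction(instruction: dict) -> dict:
    """Resolve {device_id, action} into a control data packet for the field node."""
    device_id = instruction["device_id"]          # uniquely names the physical unit
    action = instruction["action"]                # execution action type
    record = DEVICE_MAP[device_id]                # retrieve the matching mapping record
    rule = CONVERSION_RULES[record["interface"]]  # conversion rule from interface type
    signal = rule[action]                         # device-level signal parameters
    # Package the signal with the identifier for the wireless communication link.
    return {"device_id": device_id, "interface": record["interface"], "signal": signal}

packet = parse_instruction({"device_id": "valve-01", "action": "open"})
```

A dictionary lookup chain like this keeps the claim's separation of concerns: the mapping record selects the conversion rule, and the rule, not the caller, decides the signal parameters.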
- 3. The method of claim 1, wherein acquiring in real time, through an image acquisition unit associated with the field control node, a field state image of the target device after the action is executed, and transmitting the field state image back to the cloud processing platform over the wireless communication link comprises: after the field control node receives the device-level control signal, sending an image acquisition trigger instruction to the associated image acquisition unit, wherein the trigger instruction comprises an acquisition time window parameter; the image acquisition unit determining the start time and duration of image acquisition from the acquisition time window parameter and continuously acquiring a field state image sequence while the target device executes the action; applying a timestamp mark to each frame in the field state image sequence, wherein the timestamp marks establish the correspondence between image frames and the device's action execution timeline; binding the timestamped field state image sequence to the corresponding device-level control signal identifier to generate an image data packet; and returning the image data packet to the cloud processing platform over the wireless communication link, wherein the cloud processing platform matches the field state image sequence to the corresponding control instruction according to the device-level control signal identifier.
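The acquisition, timestamping, and binding steps of claim 3 can be sketched as below. The field names, trigger format, and the `capture_frame` stand-in for the camera read are all illustrative assumptions.

```python
import time

def acquire_sequence(trigger, capture_frame, clock=time.monotonic):
    """Claim-3 sketch: on receiving an acquisition trigger instruction, capture
    a timestamped frame sequence over the time window and bind it to the
    device-level control signal identifier."""
    frames = []
    for i in range(trigger["frame_count"]):   # duration fixed by the time window
        frames.append({
            "ts": clock(),                    # timestamp marking per frame
            "frame": capture_frame(i),        # raw image data from the camera
        })
    # Association binding: sequence + control-signal identifier -> image data
    # packet, which lets the cloud match images to the originating instruction.
    return {"signal_id": trigger["signal_id"], "frames": frames}

packet = acquire_sequence(
    {"signal_id": "sig-7", "frame_count": 3},
    capture_frame=lambda i: f"frame-{i}",
)
```

Carrying the signal identifier inside the packet, rather than relying on arrival order, is what allows the cloud-side matching step even when packets are delayed or reordered on the wireless link.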
- 4. The method of claim 3, characterized in that it further comprises: after the image acquisition unit acquires the field state image, preprocessing the image, wherein the preprocessing comprises image quality evaluation and image compression; the image quality evaluation determining, by computing a clarity index of the image, whether the field state image meets the state identification requirement, and triggering a re-acquisition instruction when the clarity index is below a preset threshold; applying adaptive compression to field state images that meet the quality requirement, dynamically adjusting the compression parameters according to the real-time bandwidth of the wireless communication link to generate a compressed field state image; embedding a metadata identifier of the device's executed action into the compressed field state image, wherein the metadata identifier comprises the target device identifier and the execution action type and is used by the cloud processing platform to classify and index the returned images; and packaging the compressed field state image with the embedded metadata identifier into data frames conforming to the wireless communication link's transmission protocol and transmitting them back to the cloud processing platform.
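The quality-gated, bandwidth-adaptive preprocessing of claim 4 can be illustrated as follows. The clarity metric (mean absolute horizontal gradient), the bandwidth range, and the quality scale are assumed stand-ins; the patent does not specify a particular metric or codec.

```python
def sharpness(pixels):
    """Clarity index of a grayscale image given as a list of rows:
    mean absolute horizontal gradient (an illustrative metric)."""
    total, count = 0, 0
    for row in pixels:
        for a, b in zip(row, row[1:]):
            total += abs(a - b)
            count += 1
    return total / count if count else 0.0

def choose_quality(bandwidth_kbps, lo=30, hi=90):
    """Adaptive compression: scale a JPEG-style quality parameter with the
    link's real-time bandwidth (assumed 100-1000 kbps operating range)."""
    frac = min(max((bandwidth_kbps - 100) / 900, 0.0), 1.0)
    return int(lo + frac * (hi - lo))

def preprocess(pixels, bandwidth_kbps, metadata, threshold=5.0):
    """Gate on clarity, then pick compression parameters and attach the
    metadata identifier (target device id + action type) for cloud indexing."""
    if sharpness(pixels) < threshold:
        return {"action": "reacquire"}        # trigger the re-acquisition instruction
    return {"action": "compress",
            "quality": choose_quality(bandwidth_kbps),
            "metadata": metadata}
```

A usage example: `preprocess(image_rows, 550, {"device_id": "valve-01", "action_type": "open"})` either requests re-acquisition or returns the chosen compression quality with the embedded metadata.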
- 5. The method of claim 1, wherein the field control node maintaining its operating state through the independent energy supply module and driving the physical execution unit to execute the action comprises: the independent energy supply module comprising an optical energy conversion unit, an energy storage unit and an energy management unit, wherein the energy management unit detects a charge state parameter of the energy storage unit, dynamically adjusts the charging power of the electric energy received from the optical energy conversion unit according to the charge state parameter, and delivers the adjusted electric energy to the energy storage unit, which provides a continuous power supply to the field control node; the field control node comprising a wireless communication module, the image acquisition unit and a driving output circuit, wherein the wireless communication module receives the device-level control signal, parses out the execution action type, determines pulse signal parameters according to the execution action type, and sends a trigger instruction to the driving output circuit through an interface output unit, the pulse signal parameters comprising pulse polarity and pulse duration; the driving output circuit generating a pulse driving signal according to the pulse signal parameters and outputting it to the physical execution unit to drive it to execute the physical action; and the image acquisition unit acquiring a field state image while the physical execution unit executes the physical action, the field state image being transmitted back to the cloud processing platform through the wireless communication module for subsequent device state feature extraction and state verification.
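The pulse-drive portion of claim 5 (energy management aside) reduces to mapping an execution action type to pulse polarity and duration and handing those to the driving output circuit. In this sketch the table values are illustrative assumptions for a latching (bistable) solenoid, not figures from the patent.

```python
# Hypothetical pulse signal parameters per execution action type:
# polarity and duration, as named in claim 5.
PULSE_TABLE = {
    "open":  {"polarity": +1, "duration_ms": 50},
    "close": {"polarity": -1, "duration_ms": 50},
}

def drive(action_type, output_circuit):
    """Look up the pulse parameters for the parsed action type and emit them
    to the driving output circuit (here, any callable taking polarity and
    duration), which would generate the actual pulse driving signal."""
    params = PULSE_TABLE[action_type]
    output_circuit(params["polarity"], params["duration_ms"])
    return params

emitted = []
drive("close", lambda polarity, duration_ms: emitted.append((polarity, duration_ms)))
```

Driving a latching solenoid with short polarity-reversed pulses, rather than a held level, matches the claim's emphasis on an independent, energy-constrained supply: the actuator draws power only during the pulse.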
- 6. A visual control system based on AI voice interaction and control for implementing the method of any one of claims 1 to 5, comprising: a first unit for acquiring a control instruction sent by a user through a voice interaction interface or a remote application interface, wherein the control instruction comprises a target device identifier and an execution action type; a second unit for parsing the control instruction into a device-level control signal based on a preset device mapping relation, wherein the device mapping relation defines a correspondence rule between target device identifiers and physical execution units; a third unit for transmitting the device-level control signal over a wireless communication link to the field control node where the target device is located, wherein the field control node maintains its operating state through an independent energy supply module; a fourth unit for acquiring in real time, through an image acquisition unit associated with the field control node, a field state image of the target device after it executes the action, and transmitting the field state image back to the cloud processing platform over the wireless communication link; a fifth unit for extracting device state features from the field state image, comparing them for consistency with the execution action type in the control instruction, and generating a state verification result; and a sixth unit for generating visual feedback data based on the state verification result and the field state image and pushing it to the user's remote application interface, wherein the visual feedback data comprises a device execution state identifier and the corresponding field state image, forming a closed-loop verification mechanism from control instruction transmission to visual confirmation of the execution result.
- 7. An electronic device, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to perform the method of any one of claims 1 to 5.
- 8. A computer-readable storage medium having computer program instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1 to 5.
Description
Visual control method and system based on AI voice interaction and control
Technical Field
The invention relates to the technical field of AI control, and in particular to a visual control method and system based on AI voice interaction and control.
Background
Traditional device control relies primarily on local physical keys or simple mobile application operations: the user must operate in the vicinity of the device, or limited remote management is achieved through a fixed control panel. In recent years, with the maturity of voice recognition, cloud computing, and wireless communication technologies, intelligent control systems based on voice interaction have become an important development direction in the field of device control. The user interacts with the system in natural language to issue control instructions, and the system parses the instructions and transmits them to the target device to execute the corresponding actions. In such systems, especially in remote control environments, there is physical distance and line-of-sight isolation between the user and the controlled device: after the user issues a control instruction by voice or through an application, the actual execution of the action cannot be directly observed. This information asymmetry between control and feedback creates significant usage impediments and potential risks in many application scenarios.
Disclosure of Invention
The embodiment of the invention provides a visual control method and a visual control system based on AI voice interaction and control, which can solve the above problems in the prior art.
In a first aspect of the embodiment of the present invention, a visual control method based on AI voice interaction and control is provided, comprising: acquiring a control instruction sent by a user through a voice interaction interface or a remote application interface, wherein the control instruction comprises a target device identifier and an execution action type; parsing the control instruction into a device-level control signal based on a preset device mapping relation, wherein the device mapping relation defines a correspondence rule between target device identifiers and physical execution units; transmitting the device-level control signal over a wireless communication link to the field control node where the target device is located, wherein the field control node maintains its operating state through an independent energy supply module; acquiring in real time, through an image acquisition unit associated with the field control node, a field state image of the target device after the action is executed, and transmitting the field state image back to a cloud processing platform over the wireless communication link; extracting device state features from the field state image and comparing them for consistency with the execution action type in the control instruction to generate a state verification result; and generating visual feedback data based on the state verification result and the field state image and pushing it to the user's remote application interface, wherein the visual feedback data comprises a device execution state identifier and the corresponding field state image, forming a closed-loop verification mechanism from control instruction transmission to visual confirmation of the execution result.
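The first-aspect pipeline can be sketched end to end as below. Every callable is an injected stand-in and every field name is an illustrative assumption; the sketch shows only the closed-loop shape (parse, transmit, capture, verify, feed back), not the patent's actual implementation.

```python
def closed_loop(instruction, parse, transmit, capture, extract_state):
    """Minimal closed-loop verification sketch: instruction in, visual
    feedback data (state identifier + field image) out."""
    signal = parse(instruction)                   # mapping-based parse
    transmit(signal)                              # wireless link to the field node
    image = capture(signal)                       # field state image after the action
    observed = extract_state(image)               # device state feature extraction
    verified = observed == instruction["action"]  # consistency comparison
    return {"status": "success" if verified else "mismatch",  # execution state id
            "image": image,                       # corresponding field state image
            "device_id": instruction["device_id"]}

result = closed_loop(
    {"device_id": "valve-01", "action": "open"},
    parse=lambda ins: {"device_id": ins["device_id"], "signal": ins["action"]},
    transmit=lambda sig: None,                    # stand-in for the radio link
    capture=lambda sig: "field-image",            # stand-in for the camera
    extract_state=lambda img: "open",             # stand-in for the cloud model
)
```

The key property the claim describes is that the feedback carries both the verification verdict and the image itself, so the user can confirm the result visually even when the automatic comparison is wrong.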
Parsing the control instruction into a device-level control signal based on the preset device mapping relation comprises the following steps: extracting the target device identifier and the execution action type from the control instruction, wherein the target device identifier uniquely specifies the physical execution unit to be controlled; retrieving from the device mapping relation a mapping record matching the target device identifier, wherein the mapping record comprises the interface type and control signal format of the physical execution unit; determining a signal conversion rule according to the interface type in the mapping record, wherein the signal conversion rule defines how an execution action type is converted into a signal recognizable by the physical execution unit; converting the execution action type, according to the signal conversion rule, into a device-level control signal matching the interface type of the physical execution unit, wherein the device-level control signal comprises the signal parameters required to drive the physical execution unit to execute the corresponding action; and packaging the device-level control signal together with the target device identifier to generate a control data packet addressed to the field control node, for transmission over the wireless communication link.