CN-122009064-A - Non-inductive interaction method, device and equipment
Abstract
The application provides a non-inductive interaction method, device and equipment for use in the technical field of intelligent cabin control. The method comprises: in response to an operation instruction from a user, acquiring the interface state of the main display interface; generating, according to the interface state and the operation instruction, a virtual execution environment for executing the operation instruction; loading the target application indicated by the operation instruction into the virtual execution environment and rendering its application interface there; simulating the user operation in the virtual execution environment according to the operation instruction; and, after the operation instruction has been executed, feeding the execution result back to the user. This addresses the split interaction experience and safety hazards that arise because a traditional GUI agent must preempt the main display interface while executing a task.
Inventors
- Feng Zongwen
- Yang Haifeng
- Guan Jianfang
- Yang Weitun
Assignees
- Anhui Kaiyang Technology Co., Ltd. (安徽开阳科技有限公司)
- Chery Automobile Co., Ltd. (奇瑞汽车股份有限公司)
Dates
- Publication Date: 2026-05-12
- Application Date: 2026-03-18
Claims (10)
- 1. A non-inductive interaction method, characterized in that it is applied to an intelligent cabin system, the method comprising: responding to an operation instruction of a user and acquiring an interface state of a main display interface; generating a virtual execution environment for executing the operation instruction according to the interface state and the operation instruction; rendering, in the virtual execution environment, an application interface of the target application that the operation instruction indicates should be operated, and loading the target application into the virtual execution environment; simulating the user operation in the virtual execution environment according to the operation instruction; and, after the operation instruction has been executed, feeding back an execution result of the operation instruction to the user.
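As an illustrative sketch only (not the patent's implementation), the five claimed steps could be orchestrated as below; `VirtualEnv`, `handle_instruction` and the event strings are all hypothetical names:

```python
# Hypothetical end-to-end sketch of the five claimed steps.
from dataclasses import dataclass, field

@dataclass
class VirtualEnv:
    """An execution environment isolated from the main display."""
    app: str
    events: list = field(default_factory=list)

def handle_instruction(instruction: str, interface_state: str):
    state_before = interface_state                 # step 1: capture interface state
    env = VirtualEnv(app=instruction.split()[-1])  # step 2: generate virtual env
    env.events.append(f"render:{env.app}")         # step 3: load and render the app
    env.events.append(f"tap:{instruction}")        # step 4: simulate the user operation
    # step 5: feed back the result; the main interface state was never touched
    return f"done:{env.app}", state_before
```

For example, `handle_instruction("order coffee", "navigation")` would return the execution result together with the unchanged interface state.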
- 2. The method of claim 1, wherein generating a virtual execution environment for executing the operation instruction according to the interface state and the operation instruction comprises: activating a non-inductive execution mode of the intelligent cabin system when the interface state indicates that the main display interface is in a preset high-priority scene, so as to keep the interface state of the main display interface unchanged; determining the task type of the task to be executed according to the operation instruction; and creating the virtual execution environment in the background of the intelligent cabin system when the task type is determined to be one that does not require visual confirmation by the user, or creating the virtual execution environment in a preset floating area of the main display interface when the task type is determined to be one that requires visual confirmation by the user.
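A minimal sketch of the branching in this claim, assuming hypothetical scene labels and a boolean confirmation flag (neither is specified by the patent):

```python
def create_environment(interface_state: str, needs_visual_confirmation: bool) -> str:
    """Decide where the virtual execution environment should live."""
    # High-priority scenes (assumed labels) must keep the main display intact,
    # so the non-inductive execution mode is activated for them.
    high_priority = interface_state in {"navigation", "video"}
    if not high_priority:
        return "foreground"      # main interface is free; no hiding required
    if needs_visual_confirmation:
        return "floating_area"   # rendered in a preset floating region
    return "background"          # created invisibly in the system background
```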
- 3. The method of claim 2, wherein creating the virtual execution environment comprises: allocating a first frame buffer area for the virtual execution environment in a memory area, wherein the first frame buffer area and a second frame buffer area corresponding to the main display interface are isolated from each other; and setting the output target of the virtual execution environment to the first frame buffer area, wherein display content in the first frame buffer area does not participate in picture composition of the main display interface.
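A toy model of the buffer isolation in this claim; `FrameBuffer` and `compose_main_display` are invented for illustration, not vehicle code:

```python
class FrameBuffer:
    """A flat pixel array standing in for a GPU frame buffer."""
    def __init__(self, width: int, height: int):
        self.pixels = bytearray(width * height)

def compose_main_display(main_fb: FrameBuffer, *layers: FrameBuffer) -> bytes:
    """Compose only the buffers explicitly passed in. The virtual
    environment's buffer is isolated simply by never being handed
    to the compositor."""
    out = bytearray(main_fb.pixels)
    for layer in layers:
        for i, p in enumerate(layer.pixels):
            if p:                 # treat nonzero pixels as opaque
                out[i] = p
    return bytes(out)

main_fb = FrameBuffer(4, 4)       # "second" buffer: the main display
virtual_fb = FrameBuffer(4, 4)    # "first" buffer: the virtual environment
virtual_fb.pixels[0] = 255        # the agent draws into the isolated buffer
frame = compose_main_display(main_fb)  # virtual_fb excluded from composition
```

The agent's drawing lands only in `virtual_fb`; the composed frame of the main display stays blank where the agent drew.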
- 4. The method of claim 1, wherein rendering, in the virtual execution environment, the application interface of the target application that the operation instruction indicates should be operated comprises: optimizing the application interface to generate a simplified interface of the target application, wherein the simplified interface comprises the core controls of the target application; and rendering the simplified interface in the virtual execution environment.
- 5. The method of claim 4, wherein optimizing the application interface to generate a simplified interface of the target application comprises: filtering out background rendering instructions of the target application to remove the background image and dynamic special effects of the target application; and/or increasing the display contrast of the core controls.
- 6. The method of claim 1, wherein simulating the user operation in the virtual execution environment according to the operation instruction comprises: acquiring image data in the first frame buffer area corresponding to the virtual execution environment; determining control coordinates of each operable control in the virtual execution environment according to the image data; determining, from the operable controls, a target control and an operation type for the target control according to the operation instruction; constructing a simulated event according to the control coordinates of the target control and the operation type; and injecting the simulated event into an input channel of the virtual execution environment so that the target application executes the simulated event.
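The acquire-locate-construct-inject sequence of this claim can be sketched as follows. Real visual detection is out of scope, so the "image data" here is simplified to a ready-made mapping from control label to coordinates; all names are assumptions:

```python
from queue import Queue

def locate_controls(image_data):
    """Stand-in for visual control detection on frame-buffer image data:
    here the 'image' is already a mapping from control label to (x, y)."""
    return dict(image_data)

def build_tap_event(x: int, y: int) -> dict:
    """Construct a simulated tap event at the given control coordinates."""
    return {"type": "tap", "x": x, "y": y}

def simulate_operation(image_data, instruction: str, input_channel: Queue) -> str:
    controls = locate_controls(image_data)
    # Pick the target control named in the instruction (naive matching).
    target = next(name for name in controls if name in instruction)
    input_channel.put(build_tap_event(*controls[target]))
    return target
```

The injected event then sits in the environment's input channel for the target application to consume.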
- 7. The method according to claim 1, wherein the method further comprises: determining a plurality of sub-operation instructions from the operation instruction when the operation instruction is determined to be a compound instruction; and creating a corresponding virtual execution environment for each sub-operation instruction and executing the sub-operation instructions in parallel in their respective virtual execution environments.
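A sketch of this claim under stated assumptions: sub-instructions are joined by the word "and" (an invented toy grammar), and each virtual environment is stubbed as a tag on its result:

```python
from concurrent.futures import ThreadPoolExecutor

def split_compound(instruction: str) -> list:
    """Naive splitter for the assumed ' and ' grammar."""
    return [part.strip() for part in instruction.split(" and ")]

def run_in_own_environment(sub_instruction: str) -> str:
    # A real system would create a dedicated virtual execution environment;
    # here the environment is just a tag on the result string.
    return f"env[{sub_instruction}]: done"

def execute_compound(instruction: str) -> list:
    subs = split_compound(instruction)
    with ThreadPoolExecutor(max_workers=len(subs)) as pool:
        # pool.map runs the sub-instructions concurrently, preserving order.
        return list(pool.map(run_in_own_environment, subs))
```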
- 8. The method according to claim 1, wherein feeding back the execution result of the operation instruction to the user after the operation instruction is executed comprises: acquiring the execution result of the operation instruction; determining a corresponding feedback mode according to the content type of the execution result; and feeding back the execution result to the user in that feedback mode.
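The content-type-to-mode step can be sketched as a simple lookup; the specific types and channels below are invented for illustration, since the claim only requires that some mode be derived from the content type:

```python
def choose_feedback_mode(content_type: str) -> str:
    """Map an execution result's content type to a feedback channel."""
    modes = {
        "confirmation": "voice",       # short outcomes are simply spoken
        "list": "floating_card",       # choices need a brief visual
        "error": "voice_and_icon",     # failures get an audible alert too
    }
    return modes.get(content_type, "voice")
```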
- 9. A non-inductive interaction device for use in an intelligent cabin system, the device comprising: an acquisition module for responding to an operation instruction of a user and acquiring an interface state of a main display interface; a generation module for generating a virtual execution environment for executing the operation instruction according to the interface state and the operation instruction; a rendering module for rendering, in the virtual execution environment, an application interface of the target application operated on by the operation instruction, and loading the target application into the virtual execution environment; an execution module for simulating the user operation in the virtual execution environment according to the operation instruction; and a feedback module for feeding back an execution result of the operation instruction to the user after the operation instruction is executed.
- 10. Non-inductive interaction equipment comprising a processor and a memory, the memory storing machine-executable instructions executable by the processor, the processor executing the machine-executable instructions to implement the non-inductive interaction method of any one of claims 1 to 8.
Description
Non-inductive interaction method, device and equipment

Technical Field

The application relates to the technical field of intelligent cabin control, and in particular to a non-inductive interaction method, device and equipment.

Background

With the increasing intelligence of automobiles, voice interaction has become a core interaction mode of the intelligent cabin. However, third-party applications (e.g., take-away, video and social software) often fail to open their underlying control interfaces to the vehicle system because of business barriers or technical architecture limitations. As a result, the voice assistant cannot deeply control these applications through an API, creating an experience dilemma in which applications can be opened but not used. To address this problem, intelligent agents based on the graphical user interface (GUI) have emerged. Related GUI agents fall largely into two categories: one obtains the control tree and simulates operations through accessibility services; the other identifies screenshots with a visual language model and predicts operation coordinates. A common drawback of both categories is that the agent must, by default, perceive and operate on the currently visible foreground interface.
On this technical premise, an agent must preempt the focus of the main screen when executing a task. If the user is navigating or watching a video, the abrupt change of picture not only fractures the experience but also distracts the driver, creating a safety hazard. Moreover, because a background application cannot expose its UI information, the agent can neither operate background applications nor process compound instructions in parallel. In addition, the elaborate skins and dynamic special effects of a real interface easily interfere with visual recognition accuracy, and processing screenshots carries a risk of privacy leakage.

Disclosure of the Invention

The application aims to provide a non-inductive interaction method, device and equipment that solve the problems of split interaction experience and safety hazards caused by a traditional GUI agent having to preempt the main display interface when executing a task. In a first aspect, an embodiment of the present application provides a non-inductive interaction method applied to an intelligent cabin system. The method includes: acquiring an interface state of a main display interface in response to an operation instruction of a user; generating a virtual execution environment for executing the operation instruction according to the interface state and the operation instruction; rendering, in the virtual execution environment, an application interface of the target application that the operation instruction indicates should be operated, and loading the target application into the virtual execution environment; simulating the user operation in the virtual execution environment according to the operation instruction; and, after the operation instruction has been executed, feeding the execution result back to the user.
According to the non-inductive interaction method provided by the embodiment of the application, the virtual execution environment is determined from the interface state of the main display interface of the intelligent cabin system and the user's operation instruction, and the loading, rendering and operation of the target application are transferred into that environment. The agent therefore never preempts the focus of the main screen while executing a task: even when the user is running a high-priority application such as navigation or video playback, the main display interface is neither interrupted nor covered, abrupt picture changes no longer distract the driver, and driving safety is markedly improved. In one possible implementation, generating the virtual execution environment for executing the operation instruction according to the interface state and the operation instruction comprises: activating a non-inductive execution mode of the intelligent cabin system when the interface state is determined to indicate that the main display interface is in a preset high-priority scene, so as to keep the interface state of the main display interface unchanged; determining the task type of the task to be executed according to the operation instruction; and creating the virtual execution environment in the background of the intelligent cabin system when the task type is determined to be one that does not require visual confirmation by the user, or creating the virtual execution environment in a preset floating area of the main display interface when the task type is determined to be one that requires visual confirmation by the user.