CN-122003656-A - UI task automatic execution method and electronic device

CN 122003656 A

Abstract

The application provides a method for automatically executing UI tasks and an electronic device, and relates to the field of terminal technology. In response to a user instruction (such as querying automatic renewals, sending a file to someone, closing an application permission, optimizing whole-device performance, closing application notifications, or ordering takeout), the method automatically executes the task. When a condition such as identity verification, user confirmation of order information, or a pop-up window is encountered during automatic execution, the method can pause execution and resume after receiving a user operation, thereby avoiding interruption or termination of the automatic execution flow and improving the success rate and stability of task execution. The method executes tasks automatically on the user's behalf: the user need not operate manually and only needs to input an instruction, improving user experience and operational efficiency.

Inventors

  • CAO ZHIHUI
  • MA DI
  • LIU XINGYU

Assignees

  • Honor Device Co., Ltd. (荣耀终端股份有限公司)

Dates

Publication Date
2026-05-08
Application Date
2024-10-23
Priority Date
2024-09-05

Claims (20)

  1. A method for automatically executing a UI task, wherein the method is applied to an electronic device, the method comprising: displaying a first prompt window and receiving a first user instruction while a first interface is displayed; in response to the first user instruction, sequentially displaying M second interfaces and then displaying a third interface, wherein the third interface comprises first information prompting the user to take over, and M is a positive integer; and in response to a first operation input by the user on the third interface, sequentially displaying N fourth interfaces and then displaying a fifth interface, wherein the fifth interface is an interface associated with the first user instruction, and N is a positive integer.
  2. The method according to claim 1, further comprising: when the M second interfaces are displayed in sequence, displaying a cursor on each second interface, wherein the cursor is displayed at a target control, and the target control is used to trigger display of the interface following the current interface.
  3. The method according to claim 1 or 2, wherein sequentially displaying the M second interfaces comprises: displaying one of the second interfaces in a full-screen window and displaying a first control in the full-screen window; and the method further comprises: in response to a user operation on the first control, switching the full-screen window to a floating window, displaying the floating window on the first interface, and continuing to sequentially display the remaining M-1 second interfaces in the floating window.
  4. The method according to any one of claims 1 to 3, further comprising: when the M second interfaces are sequentially displayed in a first window, displaying task execution progress information in the first window, and/or displaying a light-effect layer on the first window, wherein the first window comprises a full-screen window or a floating window.
  5. The method according to claim 3 or 4, further comprising: in the full-screen window state, if an application of a preset type is started, switching the full-screen window to the floating window, wherein the preset type of application comprises a video application or a game application.
  6. The method according to claim 4, further comprising: drawing the light-effect layer on a system-customized animation layer; and when the light-effect layer is displayed on the first window, if a touch operation by the user is received, passing the touch operation through to the layer below the animation layer without the light-effect layer responding to it.
  7. The method according to any one of claims 1 to 6, further comprising: updating the light-effect layer displayed on the first window according to changes in the task execution state, wherein the task execution state comprises an executing state, a user-takeover state, and an execution-ended state; and wherein updating the light-effect layer displayed on the first window according to changes in the task execution state comprises: displaying a first light-effect layer on the first window when the task execution state is the executing state, displaying a second light-effect layer on the first window when the task execution state is the user-takeover state, and hiding the light-effect layer displayed on the first window when the task execution state is the execution-ended state.
  8. The method according to any one of claims 1 to 7, further comprising: in the process of sequentially displaying the M second interfaces, in response to a user operation on any second interface, displaying second information, wherein the second information comprises a prompt that the operation is not supported during automatic execution, an exit control, and a continue-execution control.
  9. The method according to any one of claims 1 to 8, wherein the third interface is an identity verification interface, a password input interface, an information confirmation interface, a permission request interface, a single-choice or multiple-choice interface, or a pop-up window requiring the user to take over.
  10. The method according to any one of claims 1 to 8, further comprising: in response to the first user instruction, determining a first operation sequence according to a first task map, and sequentially displaying the M second interfaces according to the first operation sequence, wherein the first task map comprises a start node, a plurality of intermediate nodes, and an end node, the start node is the first of the M second interfaces, and the fifth interface is the end node.
  11. The method according to claim 10, wherein, after the third interface is displayed, the method further comprises: in response to the first operation input by the user on the third interface, determining the node position of the currently displayed interface in the first task map, and determining a second operation sequence according to the node position and the first task map, wherein the second operation sequence comprises the N fourth interfaces.
  12. The method according to any one of claims 1 to 11, further comprising: in the case that an advertisement pop-up window appears on a second interface, automatically closing the advertisement pop-up window.
  13. The method according to any one of claims 1 to 12, further comprising: in the case that the target control is not displayed on a second interface, triggering scrolling of the second interface; and after the second interface has been scrolled, displaying the target control in the second interface and displaying a cursor at the target control.
  14. The method according to any one of claims 1 to 13, further comprising: displaying a continue-execution control on the third interface; and after the third interface is displayed, sequentially displaying the N fourth interfaces in the case that a control operation by the user on the continue-execution control is received.
  15. The method according to any one of claims 1 to 14, wherein the electronic device comprises a system assistant and a UI agent module, the method further comprising: determining, by the system assistant, intention information and slot information according to the first user instruction, and sending the intention information and the slot information to the UI agent module; determining, by the UI agent module, first robotic process automation (RPA) configuration information according to the intention information, wherein a service type identifier of the first RPA configuration information corresponds to a first service type; and generating, by the UI agent module, a user interface (UI) task operation step according to the slot information and the first RPA configuration information, wherein the UI task operation step comprises sequentially displaying the M second interfaces and displaying the fifth interface.
  16. The method according to claim 15, wherein the first RPA configuration information includes a task identification ID, a tool name, and one or more tool parameters, with one tool parameter corresponding to each slot; and wherein generating the UI task operation step according to the slot information and the first RPA configuration information comprises: filling the slots in the first RPA configuration information according to the slot information, and generating the UI task operation step, wherein the UI task operation step comprises the task identification ID, the tool name, and the one or more tool parameters, each tool parameter having a slot value.
  17. The method according to claim 15 or 16, wherein the UI task operation step further includes a target control and a UI task, and the automatic execution flow comprises: identifying the target control in each interface and determining the position coordinates of the target control; executing the UI task on the target control according to the position coordinates of the target control; and after the UI task is executed, jumping from the currently displayed interface to the next interface.
  18. The method according to claim 17, wherein the target control is at least one of text, a button, a slide switch, or an icon, and the UI task is clicking or sliding the target control; and wherein executing the UI task on the target control comprises displaying a cursor simulating a click operation, or displaying a track simulating a slide operation, on the target control.
  19. The method according to claim 17 or 18, wherein the target control is a search box and the UI task is entering a keyword in the search box; and wherein executing the UI task on the target control comprises displaying, in the search box, the action of simulating input of the keyword.
  20. The method according to any one of claims 17 to 19, wherein identifying the target control comprises: detecting a target control in text form based on page structural features, wherein the page structural features comprise Extensible Markup Language (XML) structural information and/or a Document Object Model (DOM); detecting a target control in icon form based on page visual features; detecting the target control based on the page visual features and the page structural features in the case that the target control is an icon control with associated text; detecting the target control based on page layout features in the case that the target control is a first text and the page includes a plurality of instances of the first text; and in the case that the target control is not detected based on the page visual features and/or the page structural features, detecting the target control based on the page visual features, the page structural features, and the page layout features.
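The slot-filling step of claims 15 and 16 can be sketched roughly as follows. The `RpaConfig` fields mirror the claim (a task ID, a tool name, and one parameter per slot); the class name, function name, and example values are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class RpaConfig:
    task_id: str        # task identification ID (claim 16)
    tool_name: str      # tool to invoke
    tool_params: dict   # parameter name -> slot value (None until filled)

def fill_slots(config: RpaConfig, slot_info: dict) -> RpaConfig:
    """Fill each tool parameter with the slot value extracted from the
    user instruction, producing a concrete UI task operation step."""
    filled = {}
    for name in config.tool_params:
        if name not in slot_info:
            raise ValueError(f"missing slot value for {name!r}")
        filled[name] = slot_info[name]
    # The filled step carries the task ID, the tool name, and one slot
    # value per tool parameter, as described in claim 16.
    return RpaConfig(config.task_id, config.tool_name, filled)

cfg = RpaConfig("T001", "close_app_notification", {"app_name": None})
step = fill_slots(cfg, {"app_name": "ExampleApp"})
print(step.tool_params)   # {'app_name': 'ExampleApp'}
```

A hypothetical system assistant would supply `slot_info` from intent recognition on the user instruction; the UI agent module would then execute the filled step.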
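The detection cascade of claim 20 can be sketched as a fallback chain. The page model (a dict of DOM nodes and icon regions), the function name, and the "topmost occurrence" layout rule are all assumptions for demonstration only, not part of the patent.

```python
def detect_target_control(page, target):
    # 1. Text-form controls: match against page structural features (XML/DOM).
    text_matches = [c for c in page["dom"] if c.get("text") == target]
    if len(text_matches) == 1:
        return text_matches[0]
    # 2. The same text appears several times: disambiguate with layout
    #    features (here: pick the topmost occurrence as a stand-in rule).
    if len(text_matches) > 1:
        return min(text_matches, key=lambda c: c["y"])
    # 3. Icon-form controls: fall back to page visual features; an icon
    #    with associated text would combine visual and structural features.
    for icon in page["icons"]:
        if icon.get("label") == target:
            return icon
    # 4. Last resort per claim 20: combine structural, visual, and layout
    #    features (omitted in this sketch).
    return None

page = {
    "dom": [{"text": "Cancel", "y": 40}, {"text": "Confirm", "y": 40},
            {"text": "Confirm", "y": 900}],
    "icons": [{"label": "settings", "y": 10}],
}
print(detect_target_control(page, "Confirm")["y"])       # topmost match: 40
print(detect_target_control(page, "settings")["label"])  # icon match
```

A real implementation would use accessibility-tree queries and a vision model rather than dict lookups; the point of the sketch is only the ordering of the fallbacks.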

Description

UI task automatic execution method and electronic device

The present application claims priority from the Chinese patent application filed with the China National Intellectual Property Administration on September 5, 2024, with application number 202411244923.1 and title "Application business processing method, electronic device, and storage medium", the entire contents of which are incorporated herein by reference.

Technical Field

The application relates to the field of terminal technology, and in particular to a method for automatically executing UI tasks and an electronic device.

Background

In daily smartphone use, users often need to manually perform a series of user interface (UI) operations to accomplish specific tasks, such as ordering takeout, closing automatic renewal services, sending files, closing application permissions, closing application notifications, optimizing whole-device performance, or other tasks. Performing each step of these tasks manually is both inconvenient and error-prone, and some tasks have operation paths so complicated that they are cumbersome and time-consuming for the user. In addition, various special situations such as page loading delays, pop-up advertisements, and operation conflicts can arise while a task is being executed, making it difficult for the user to complete the operation smoothly and degrading the user experience.

Disclosure of Invention

The application provides a method for automatically executing UI tasks and an electronic device, which can execute tasks automatically on the user's behalf: the user need not operate manually and only needs to input an instruction, improving user experience and operational efficiency.
In a first aspect, an embodiment of the application provides a method for automatically executing a UI task, comprising: displaying a first prompt window and receiving a first user instruction while a first interface is displayed; in response to the first user instruction, sequentially displaying M second interfaces and then displaying a third interface, wherein the third interface comprises first information prompting the user to take over, and M is a positive integer; and in response to a first operation input by the user on the third interface, sequentially displaying N fourth interfaces and then displaying a fifth interface, wherein the fifth interface is an interface associated with the first user instruction, and N is a positive integer. The method provided by this embodiment can respond to a user instruction (such as querying automatic renewals, sending a file to someone, closing an application permission, optimizing whole-device performance, closing application notifications, or ordering takeout) by automatically executing the task, and can pause automatic execution when a condition such as identity verification, user confirmation of order information, or a pop-up window is encountered during execution, resuming after receiving a user operation. This avoids interruption or termination of the automatic execution flow and improves the success rate and stability of task execution. Through this scheme, tasks can be executed automatically on the user's behalf: the user need not operate manually and only needs to input an instruction, improving user experience and operational efficiency.
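The pause-and-resume flow of the first aspect can be sketched as a simple execution loop. The step representation, the `needs_takeover` predicate, and the `wait_for_user` callback are assumptions used only to illustrate the control flow: the loop blocks at a takeover step instead of aborting, then continues.

```python
def run_ui_task(steps, needs_takeover, wait_for_user):
    """Execute UI steps in order; when a step requires the user to take
    over (authentication, order confirmation, a pop-up), pause and wait
    for the user instead of terminating the flow."""
    executed = []
    for step in steps:
        if needs_takeover(step):
            wait_for_user(step)   # pause: show a "please take over" prompt
        executed.append(step)     # then perform the simulated UI action
    return executed

steps = ["open_settings", "confirm_order", "close_renewal"]
paused_at = []
order = run_ui_task(
    steps,
    needs_takeover=lambda s: s == "confirm_order",  # e.g. order confirmation
    wait_for_user=paused_at.append,                 # stand-in for user input
)
print(order)      # all steps still executed, in order
print(paused_at)  # execution paused exactly once, at the takeover step
```

The design point the sketch captures is that takeover is a pause inside one run, not a restart: the remaining steps (the N fourth interfaces in claim 1) execute in the same sequence once the user acts.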
In some possible implementations, the third interface is an identity verification interface, a password input interface, an information confirmation interface, a permission request interface, a single-choice or multiple-choice interface, or a pop-up window that requires the user to take over. Through this scheme, scenes requiring user takeover can be identified more accurately during automatic task execution, improving the success rate and stability of task execution: automatic execution is paused when a condition such as identity verification, order confirmation, or a pop-up permission request is encountered, and the user can enter a takeover process. In some possible implementations, detecting that user takeover is required includes determining that user takeover is required if an automatically executed search yields multiple results and a user selection is needed. In some possible implementations, prompting the user to perform an interactive operation when a need for user takeover is detected includes suspending execution of the first operation sequence and prompting the user by means of displayed information, vibration prompts, pop-up prompts, and/or voice prompts.
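The takeover checks described above can be sketched as a small predicate. The interface-type labels and the multiple-search-results rule are illustrative assumptions mirroring the implementations listed here, not an authoritative classification from the patent.

```python
# Interface types that always require the user to take over
# (mirrors the list above; labels are assumptions).
TAKEOVER_TYPES = {"identity_verification", "password_input",
                  "info_confirmation", "permission_request",
                  "choice", "takeover_popup"}

def needs_user_takeover(interface_type, search_results=None):
    # A search performed during automatic execution that returns several
    # candidates also requires the user to pick one.
    if search_results is not None and len(search_results) > 1:
        return True
    return interface_type in TAKEOVER_TYPES

print(needs_user_takeover("password_input"))        # True
print(needs_user_takeover("list", ["a", "b"]))      # True: user must choose
print(needs_user_takeover("list", ["only_match"]))  # False: proceed
```

When the predicate fires, an implementation following the description would suspend the first operation sequence and prompt the user via on-screen information, vibration, a pop-up, and/or voice.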