BR-112023013595-B1 - Computer-implemented method, system, and computer-readable medium for preprocessing application functions for faster startup.

BR 112023013595 B1

Abstract

SYSTEMS AND METHODS FOR PREPROCESSING APPLICATION FUNCTIONS FOR FASTER STARTUP. The disclosed computer-implemented method may include predicting, by a machine learning model, a user action and a time of the user action for an application on a computing device. The method may also include determining that an expected delay in the execution of the user action is greater than a predetermined threshold based on one or more resource constraints of the computing device and initializing an application configuration to reduce the expected delay in the execution of the user action based on the predicted time. Furthermore, the method may include prefetching one or more application components in response to the initialization of the application configuration and preprocessing at least a portion of an application function used to execute the user action. Finally, the method may include executing the user action for the application in response to a user request. Various other methods, systems, and computer-readable media are also disclosed.

Inventors

  • Shyamsundar Gopalakrishnan
  • Amritanshu Thakur
  • Ashish Gupta
  • Sailesh Nepal

Assignees

  • NETFLIX, INC.

Dates

Publication Date
2026-03-10
Application Date
2022-01-07
Priority Date
2021-01-08

Claims (19)

  1. A computer-implemented method (100), characterized in that it comprises: predicting (110), by a machine learning model, a user action and a time of the user action for an application on a computing device; determining (120) that an expected delay in the execution of the user action is greater than a predetermined threshold by calculating the expected delay based on the use of at least one resource constraint of the computing device to execute the user action, wherein the predetermined threshold is an acceptable delay and wherein the expected delay is the time required to execute the user action; initializing (130) an application configuration, before application initialization, to reduce the expected delay in the execution of the user action based on the predicted time, wherein initializing the application configuration comprises initializing an initialization function to open the application; prefetching (140) at least one application component in response to the initialization of the application configuration; preprocessing (150) at least a portion of an application function used to execute the user action; and executing (160) the user action for the application in response to a user request; wherein the resource constraint comprises a limited computing device resource used by at least one of: a kernel function to run the application, wherein the kernel function is an application initialization function; or the application function used to execute the user action.
  2. Method (100), according to claim 1, characterized in that predicting the user action and the time of the user action comprises: training the machine learning model using a historical record of user actions on the computing device; and predicting a probability of the user request to perform the user action based on the historical record.
  3. Method (100), according to claim 2, characterized in that the historical record comprises data about at least one of: historical use of the application on the computing device; historical use of a different application on the computing device; a state of the computing device; a state of a resource of the computing device; or a time of a previous user action on the computing device.
  4. Method (100), according to claim 1, characterized in that the limited resource comprises at least one of: a processor of the computing device; a memory of the computing device; or an application resource stored in the computing device.
  5. Method (100), according to claim 1, characterized in that the determination that the expected delay is greater than the predetermined threshold comprises at least one of: calculating that a time to perform the kernel function using the limited resource exceeds the predetermined threshold; or calculating that a time to perform the function used to execute the user action using the limited resource exceeds the predetermined threshold.
  6. Method (100), according to claim 1, characterized in that the determination that the expected delay is greater than the predetermined threshold comprises: determining that the kernel function contributes to the expected delay; determining that the function used to execute the user action contributes to the expected delay; and calculating that a combined time to perform the kernel function and the function used to execute the user action exceeds the predetermined threshold.
  7. Method (100), according to claim 1, characterized in that the application configuration initialization comprises at least one of: initializing the kernel function; initializing the function used to execute the user action; or initializing the limited resource used by the function.
  8. Method (100), according to claim 1, characterized in that the initialization of the application configuration to reduce the expected delay comprises timing the initialization to begin before the predicted time of the user action so that the reduced expected delay does not exceed the predetermined threshold.
  9. Method (100), according to claim 1, characterized in that the application component comprises at least one of: metadata; an application asset; or a media resource.
  10. Method (100), according to claim 9, characterized in that the preprocessing of the function used to execute the user action comprises at least one of: preprocessing the metadata; loading the application asset; pre-rendering an application graphic; pre-decrypting the media resource; pre-decoding the media resource; scheduling the function used to execute the user action; or initiating an application startup.
  11. Method (100), according to claim 10, characterized in that the pre-decoding of the media resource comprises preparing the media resource for playback in response to a user request.
  12. Method (100), according to claim 1, characterized in that the execution of the user action in response to the user request comprises: receiving the user request; completing the application configuration; and completing the function used to execute the user action.
  13. Method (100), according to claim 1, characterized in that it further comprises reducing a probability of forced termination of the application by at least one of: decreasing application resource usage; or initializing the application configuration closer to the predicted time.
  14. System (1000), characterized in that it comprises: a prediction module (212), stored in memory, which predicts, by a machine learning model (206), a user action (208) and a time (210) of the user action for an application (230) on a client computing device (202); a determination module (214), stored in memory, which determines that an expected delay (226) in the execution of the user action is greater than a predetermined threshold (228) by calculating the expected delay based on the use of at least one resource constraint (224) of the client computing device to execute the user action, wherein the predetermined threshold is an acceptable delay and wherein the expected delay is the time required to execute the user action; an initialization module (216), stored in memory, which initializes an application configuration, before application initialization, to reduce the expected delay in the execution of the user action based on the predicted time, wherein initializing the application configuration comprises initializing an initialization function (402) to open the application; a prefetching module (218), stored in memory, that prefetches at least one application component (236) in response to the initialization of the application configuration; a preprocessing module (220), stored in memory, that preprocesses at least a portion of an application function (234) used to execute the user action; an execution module (222), stored in memory, that executes the user action for the application in response to a user request (238); and at least one processor that executes the prediction module, the determination module, the initialization module, the prefetching module, the preprocessing module, and the execution module, wherein the resource constraint comprises a limited computing device resource used by at least one of: a kernel function to run the application, wherein the kernel function is an application initialization function; or the application function used to execute the user action.
  15. System (1000), according to claim 14, characterized in that the prediction module (212) predicts the user action (208) and the time (210) of the user action by: training the machine learning model (206) using a historical record (302) of user actions on a set of client computing devices, including the client computing device (202); and predicting a probability (304) of the user request (238) to execute the user action.
  16. System (1000), according to claim 15, characterized in that the historical record (302) comprises data about at least one of: historical use of the client computing device (202) by a user; historical use of another client computing device by the user; historical use of the application (230) by another user; a state of the client computing device; a state of another client computing device; a state of a resource of the client computing device; a state of a resource of another client computing device; or a time (210) of a previous user action.
  17. System (1000), according to claim 15, characterized in that the training of the machine learning model (206) comprises: training the machine learning model on a server (902); and providing a result of the machine learning model to the client computing device (202).
  18. System (1000), according to claim 15, characterized in that the training of the machine learning model (206) comprises: training the machine learning model on a server (902); providing the machine learning model to the client computing device (202); and adjusting the machine learning model based on a historical record (302) of the client computing device.
  19. Non-transitory computer-readable medium comprising one or more computer-executable instructions, characterized in that, when the computer-executable instructions are executed by at least one processor of a computing device (202), they cause the computing device to perform the method as defined in any one of claims 1 to 13.
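Read as a data flow, the method of claim 1 is a predict, gate, warm-up, execute pipeline: compute the expected delay from the kernel (startup) function and the action function (claim 6), compare it against the acceptable-delay threshold, and only then fire the warm-up steps (130)-(150), timed to begin before the predicted action time (claim 8). The sketch below is purely illustrative; every name (`Prediction`, `warm_up`, and so on) is a hypothetical reading of the claims, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    action: str          # predicted user action, e.g. "play_title"
    time: float          # predicted time of the action (epoch seconds)
    probability: float   # confidence from the (hypothetical) ML model

def expected_delay(kernel_time: float, action_time: float) -> float:
    """Claim 6: combined time to perform the kernel (startup) function
    and the function used to execute the user action."""
    return kernel_time + action_time

def warm_up(pred: Prediction, kernel_time: float, action_time: float,
            threshold: float) -> list[str]:
    """Return the warm-up steps taken when the expected delay exceeds
    the acceptable-delay threshold (claim 1, steps 120-150)."""
    steps = []
    if expected_delay(kernel_time, action_time) > threshold:
        steps.append("initialize_app_configuration")   # step 130
        steps.append("prefetch_components")            # step 140
        steps.append("preprocess_action_function")     # step 150
    return steps

def warmup_start(pred: Prediction, delay: float) -> float:
    """Claim 8: time the initialization to begin early enough before the
    predicted action time that the residual delay stays acceptable."""
    return pred.time - delay

# 2.5 s of startup work plus 1.0 s of action work against a 0.5 s
# acceptable-delay threshold triggers all three warm-up steps:
steps = warm_up(Prediction("play_title", 1000.0, 0.9), 2.5, 1.0, 0.5)
```

The point of the gate is that a device with ample resources (expected delay under the threshold) skips the warm-up entirely, which is what lets the scheme avoid the always-resident background footprint criticized in paragraph [0003].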

Description

CROSS-REFERENCE

[0001] This application claims priority to U.S. Nonprovisional Application No. 17/145,023, entitled “SYSTEMS AND METHODS FOR PREPROCESSING APPLICATION FUNCTIONS FOR FASTER STARTUP,” which was filed January 8, 2021, the full contents of which are incorporated herein by reference.

BACKGROUND

[0002] Software applications can run on computing devices and use device resources to provide additional functionality to users. Traditionally, when a user opens an application to perform some function, the application may reserve computing resources and then use those resources while performing the function. However, applications that start “cold” generally have higher latency between the user's attempt to perform an action and the actual execution of the action. Furthermore, some functions may take a long time to prepare, based on the resources required and the application's logic for executing the functions.

[0003] To prepare for a “warm” startup, some applications may maintain a small processing footprint on the device to launch more quickly when the user launches the application. Unfortunately, applications that run continuously in the background, especially on low-cost mobile devices, can consume limited and valuable resources. Furthermore, less frequently used applications may be subject to operating system processes that eliminate background applications to free up resources. Other traditional latency reduction methods may focus on faster data retrieval or simplifying application functions, such as creating lightweight versions of an application. However, these methods also assume that startup processes are sunk costs and accept a degree of delay based on a device's limitations.

SUMMARY

[0004] As will be described in more detail below, this disclosure describes systems and methods for predicting user actions in order to perform “warm-up” processes for an application before the application starts or the predicted actions are executed.
In one example, a computer-implemented method for preprocessing application functions for faster startup may include predicting, by a machine learning model, a user action and a time of the user action for an application on a computing device. The method may also include determining that an expected delay in the execution of the user action is greater than a predetermined threshold based on one or more resource constraints of the computing device. Furthermore, the method may include initializing an application configuration to reduce the expected delay in the execution of the user action based on the predicted time. Additionally, the method may include prefetching one or more application components in response to the initialization of the application configuration. Furthermore, the method may include preprocessing at least a portion of an application function used to execute the user action. Finally, the method may include executing the user action for the application in response to a user request.

[0005] In one embodiment, predicting the user action and the time of the user action may involve training the machine learning model using a historical record of user actions on the computing device and predicting the probability of the user request to execute the user action based on the historical record. In this embodiment, the historical record may include data on historical usage of the application on the computing device, historical usage of a different application on the computing device, a state of the computing device, a state of a computing device resource, and/or a time of a previous user action on the computing device. For example, the computing device may train the machine learning model using the historical record to predict the user action and the time of the user action.
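As a concrete, deliberately simplistic illustration of this embodiment, the machine learning model can be stood in for by a frequency count over the historical record: the probability of a future user request for an action is estimated from how often that action appears in the device's history. All names below are hypothetical and not drawn from the disclosure:

```python
from collections import Counter

def predict_action_probability(history: list[str], action: str) -> float:
    """Toy stand-in for the machine-learning model: estimate the
    probability of a user action from its relative frequency in the
    historical record of actions on the computing device."""
    if not history:
        return 0.0  # no history yet, so no basis for a prediction
    counts = Counter(history)
    return counts[action] / len(history)

# Historical record of actions on the device; "play_title" occurred
# in 2 of 4 recorded actions, so its estimated probability is 0.5.
history = ["open_app", "play_title", "play_title", "browse"]
p = predict_action_probability(history, "play_title")  # 0.5
```

A real deployment would of course replace the frequency count with a trained model, and (as the description goes on to note) the training could happen on a server with either the predictions or the model itself shipped to the client device.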
Alternatively, a server may train the machine learning model and provide a client computing device with the predictions and/or the functions and function timings to be performed before the user action. In another example, the server can provide the trained machine learning model to the client computing device, which can further refine the model to predict the user action and the time of the user action.

[0006] In one example, a resource constraint might include a limited computing device resource used by a kernel function to run the application and/or by the application function used to execute the user action. In this example, the limited resource might include a processor of the computing device, memory of the computing device, and/or an application resource stored on the computing device. Furthermore, in this example, determining that the expected delay is greater than the predetermined threshold might include calculating that a time to perform the kernel function using the limited resource exceeds the predetermined threshold and/or calculating that a time to perform the function used to execute the user action using the limited resource exceeds the predetermined threshold. Additionally or alternatively,