EP-4740095-A1 - FRACTIONALIZED TASK DISTRIBUTION AND THROTTLING FRAMEWORK FOR HIGH-VOLUME TRANSACTIONS

Abstract

An example embodiment may involve: receiving a request relating to a plurality of parallelizable jobs; obtaining a schedule of worker thread availability with respect to a fractionalized task distributor, wherein the fractionalized task distributor is operable according to a predefined number of worker threads; assigning, to the fractionalized task distributor, a plurality of worker threads for execution of the plurality of parallelizable jobs, wherein the plurality of worker threads is based on the predefined number of worker threads, and wherein assigning the plurality of worker threads is according to the schedule and one or more tasks not included in the plurality of parallelizable jobs; and directing the fractionalized task distributor to execute the plurality of parallelizable jobs via the plurality of worker threads.
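The sequence of operations in the abstract can be illustrated with a minimal sketch. All names here (`assign_workers`, `run_parallelizable_jobs`, and their parameters) are hypothetical illustrations, not taken from the specification; the assignment rule is one plausible reading of "based on the predefined number of worker threads" and "according to the schedule and one or more tasks not included" in the claims.

```python
from concurrent.futures import ThreadPoolExecutor

def assign_workers(schedule_available, predefined_max, other_task_threads):
    """Decide how many worker threads a distributor may use.

    schedule_available: threads the schedule makes available right now
    predefined_max: the distributor's predefined worker-thread cap
    other_task_threads: threads occupied by tasks outside this job set
    """
    free = max(schedule_available - other_task_threads, 0)
    # Never exceed the distributor's predefined number of worker threads,
    # but always grant at least one so the jobs can make progress.
    return max(min(free, predefined_max), 1)

def run_parallelizable_jobs(jobs, schedule_available, predefined_max,
                            other_task_threads=0):
    """Direct a (hypothetical) distributor to run jobs on assigned threads."""
    n = assign_workers(schedule_available, predefined_max, other_task_threads)
    with ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(lambda job: job(), jobs))

# Example: eight small jobs; the schedule offers 6 threads, the distributor's
# cap is 4, and 2 threads are tied up by unrelated tasks -> 4 workers used.
results = run_parallelizable_jobs([lambda i=i: i * i for i in range(8)],
                                  schedule_available=6, predefined_max=4,
                                  other_task_threads=2)
```

With these inputs the distributor receives min(6 − 2, 4) = 4 workers, matching the claim-7 constraint that the assigned plurality never exceeds the predefined number.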

Inventors

  • BRAME, Walter, James, I

Assignees

  • ServiceNow, Inc.

Dates

Publication Date
2026-05-13
Application Date
2024-08-14

Claims (20)

MBHB Docket No.23-0693-WO

What is claimed is:

  1. A method comprising: receiving a request relating to a plurality of parallelizable jobs; obtaining a schedule of worker thread availability with respect to a fractionalized task distributor, wherein the fractionalized task distributor is operable according to a predefined number of worker threads; assigning, to the fractionalized task distributor, a plurality of worker threads for execution of the plurality of parallelizable jobs, wherein the plurality of worker threads is based on the predefined number of worker threads, and wherein assigning the plurality of worker threads is according to the schedule and one or more tasks not included in the plurality of parallelizable jobs; and directing the fractionalized task distributor to execute the plurality of parallelizable jobs via the plurality of worker threads.
  2. The method of claim 1, further comprising: receiving a second request relating to a second plurality of parallelizable jobs, wherein the schedule of worker thread availability is also with respect to a second fractionalized task distributor, and wherein the second fractionalized task distributor is operable according to a second predefined number of worker threads; assigning, to the second fractionalized task distributor, a second plurality of worker threads for execution of the second plurality of parallelizable jobs, wherein the second plurality of worker threads is based on the second predefined number of worker threads, and wherein assigning the second plurality of worker threads is according to the schedule, the plurality of parallelizable jobs, and the one or more tasks not included in the plurality of parallelizable jobs or the second plurality of parallelizable jobs; and directing the second fractionalized task distributor to execute the second plurality of parallelizable jobs via the second plurality of worker threads at least partially concurrently with the fractionalized task distributor executing the plurality of parallelizable jobs via the plurality of worker threads.
  3. The method of claim 2, wherein a sum of the predefined number of worker threads and the second predefined number of worker threads is greater than a count of worker threads from the schedule of worker thread availability, and wherein a sum of the plurality of worker threads assigned to the fractionalized task distributor and the second plurality of worker threads assigned to the second fractionalized task distributor is less than or equal to the count of worker threads.
  4. The method of claim 2, wherein the plurality of worker threads assigned to the fractionalized task distributor and the second plurality of worker threads assigned to the second fractionalized task distributor are both at least 1.
  5. The method of claim 2, wherein the plurality of worker threads assigned to the fractionalized task distributor and the second plurality of worker threads assigned to the second fractionalized task distributor are based on respective priorities of the fractionalized task distributor and the second fractionalized task distributor.
  6. The method of claim 1, wherein the predefined number of worker threads corresponds to a maximum number of worker threads that can be assigned to the fractionalized task distributor.
  7. The method of claim 1, wherein the plurality of worker threads is less than or equal to the predefined number of worker threads.
  8. The method of claim 1, wherein directing the fractionalized task distributor to execute the plurality of parallelizable jobs comprises directing the fractionalized task distributor to execute the plurality of parallelizable jobs at least partially in parallel with one another.
  9. The method of claim 1, wherein the plurality of parallelizable jobs relate to reception of a data object into a computing platform that executes the fractionalized task distributor, and wherein the parallelizable jobs respectively relate to reception of non-overlapping portions of the data object.
  10. The method of claim 9, wherein reception of the data object into the computing platform comprises writing representations of the non-overlapping portions of the data object into entries of one or more database tables of the computing platform.
  11. The method of claim 9, wherein reception of the data object into the computing platform comprises breaking the data object into the non-overlapping portions of the data object, wherein the plurality of parallelizable jobs are respectively associated with processing of the non-overlapping portions of the data object, and wherein executing the plurality of parallelizable jobs via the plurality of worker threads comprises transforming the non-overlapping portions of the data object into a storage format supported by the computing platform.
  12. The method of claim 1, wherein the plurality of parallelizable jobs relate to responding, by a computing platform that executes the fractionalized task distributor, to a query for a data object, and wherein the parallelizable jobs respectively relate to non-overlapping portions of the data object.
  13. The method of claim 12, wherein responding to the query for the data object comprises reading representations of the non-overlapping portions of the data object from entries of one or more database tables of the computing platform.
  14. The method of claim 12, wherein responding to the query for the data object comprises breaking the query into a set of queries for the non-overlapping portions of the data object, wherein the plurality of parallelizable jobs are respectively associated with processing of the queries, and wherein executing the plurality of parallelizable jobs via the plurality of worker threads comprises obtaining the non-overlapping portions of the data object and providing them in response to the query.
  15. A non-transitory computer-readable medium, having stored thereon program instructions that, upon execution by a computing system, cause the computing system to perform operations comprising: receiving a request relating to a plurality of parallelizable jobs; obtaining a schedule of worker thread availability with respect to a fractionalized task distributor, wherein the fractionalized task distributor is operable according to a predefined number of worker threads; assigning, to the fractionalized task distributor, a plurality of worker threads for execution of the plurality of parallelizable jobs, wherein the plurality of worker threads is based on the predefined number of worker threads, and wherein assigning the plurality of worker threads is according to the schedule and one or more tasks not included in the plurality of parallelizable jobs; and directing the fractionalized task distributor to execute the plurality of parallelizable jobs via the plurality of worker threads.
  16. The non-transitory computer-readable medium of claim 15, the operations further comprising: receiving a second request relating to a second plurality of parallelizable jobs, wherein the schedule of worker thread availability is also with respect to a second fractionalized task distributor, and wherein the second fractionalized task distributor is operable according to a second predefined number of worker threads; assigning, to the second fractionalized task distributor, a second plurality of worker threads for execution of the second plurality of parallelizable jobs, wherein the second plurality of worker threads is based on the second predefined number of worker threads, and wherein assigning the second plurality of worker threads is according to the schedule, the plurality of parallelizable jobs, and the one or more tasks not included in the plurality of parallelizable jobs or the second plurality of parallelizable jobs; and directing the second fractionalized task distributor to execute the second plurality of parallelizable jobs via the second plurality of worker threads at least partially concurrently with the fractionalized task distributor executing the plurality of parallelizable jobs via the plurality of worker threads.
  17. The non-transitory computer-readable medium of claim 16, wherein the plurality of worker threads assigned to the fractionalized task distributor and the second plurality of worker threads assigned to the second fractionalized task distributor are based on respective priorities of the fractionalized task distributor and the second fractionalized task distributor.
  18. The non-transitory computer-readable medium of claim 15, wherein the plurality of parallelizable jobs relate to reception of a data object into a computing platform that executes the fractionalized task distributor, and wherein the parallelizable jobs respectively relate to reception of non-overlapping portions of the data object.
  19. The non-transitory computer-readable medium of claim 15, wherein the plurality of parallelizable jobs relate to responding, by a computing platform that executes the fractionalized task distributor, to a query for a data object, and wherein the parallelizable jobs respectively relate to non-overlapping portions of the data object.
  20. A system comprising: one or more processors; and memory, containing program instructions that, upon execution by the one or more processors, cause the system to perform operations comprising: receiving a request relating to a plurality of parallelizable jobs; obtaining a schedule of worker thread availability with respect to a fractionalized task distributor, wherein the fractionalized task distributor is operable according to a predefined number of worker threads; assigning, to the fractionalized task distributor, a plurality of worker threads for execution of the plurality of parallelizable jobs, wherein the plurality of worker threads is based on the predefined number of worker threads, and wherein assigning the plurality of worker threads is according to the schedule and one or more tasks not included in the plurality of parallelizable jobs; and directing the fractionalized task distributor to execute the plurality of parallelizable jobs via the plurality of worker threads.
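Claims 2 through 5 constrain how two distributors share a schedule: the sum of their predefined caps may exceed the available count (claim 3), each must receive at least one thread (claim 4), and the split follows their priorities (claim 5). A minimal sketch of an allocation rule satisfying those constraints follows; `allocate_threads` and the proportional-share heuristic are hypothetical, chosen only to illustrate the claimed relationships.

```python
def allocate_threads(distributors, available):
    """Split `available` worker threads among distributors by priority.

    distributors: list of (predefined_max, priority) tuples. The sum of the
    predefined maxima may exceed `available`, yet every distributor gets at
    least one thread and the total assigned never exceeds `available`
    (assumes available >= len(distributors)).
    """
    total_priority = sum(priority for _, priority in distributors)
    shares = []
    for cap, priority in distributors:
        # Proportional share of the available threads, floored, but at
        # least 1 and never above the distributor's predefined cap.
        share = max(1, int(available * priority / total_priority))
        shares.append(min(share, cap))
    # If rounding pushed the total past the available count, trim the
    # largest shares until the sum fits the schedule.
    while sum(shares) > available:
        i = shares.index(max(shares))
        shares[i] -= 1
    return shares

# Two distributors with caps of 8 each (sum 16 > 6 available) and
# priorities 3:1 -> the higher-priority distributor gets more threads.
shares = allocate_threads([(8, 3), (8, 1)], available=6)
```

Here the caps sum to 16 while only 6 threads are available, yet both distributors receive at least one thread and the assigned total stays within the schedule, as claims 3 and 4 require.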

Description

Fractionalized Task Distribution and Throttling Framework for High-Volume Transactions

CROSS-REFERENCE TO RELATED APPLICATION

[0001] This application claims priority to U.S. patent application no. 18/368,854, filed September 15, 2023, which is hereby incorporated by reference in its entirety.

BACKGROUND

[0002] Large-scale, multi-service computing platforms can simultaneously execute tens or hundreds of applications for hundreds or thousands of users. In operation, these applications execute independently of one another, obtaining processing, memory, and communication resources as needed. Some applications are tasked with moving high volumes of data out of and/or into the platform (e.g., backup and restore procedures). This may take the form of transmitting and/or receiving many gigabytes of data or hundreds of millions of database entries, for example. The execution of such an application can be resource-intensive and have a deleterious impact on the other applications, such as causing user interfaces to respond slowly. Further, these high-volume data transactions are often designed to operate linearly, thus resulting in the transactions taking much longer than is expected or acceptable.

SUMMARY

[0003] Various implementations disclosed herein include efficient fractionalized task distribution for transfer of large data objects (e.g., files or sets of database entries) into and out of a computing platform. These implementations provide a fractionalized task distribution and throttling framework that allows for multiple fractionalized task distributors that intelligently share available computing resources (e.g., processing, memory, and network capacity). Notably, tasks may be pre-configured to be processed by the framework such that large tasks are broken apart into a number of smaller tasks and each smaller task represents a job that is to be completed by a worker thread.
Thus, a limited amount of memory is used before the job's data is committed to long-term storage (e.g., a database or file system). Further, all of the jobs across all tasks are held at or under a maximum number of total workers that may be used at a given time. The extent of these total workers can be pre-configured to dynamically increase or decrease over time.

[0004] In this manner, fractionalized task distribution can be accomplished in a robust fashion. Individual transfers of data objects can be sped up through parallelization when computing resource availability supports doing so. Further, multiple fractionalized task distributors can share a predefined amount of these computing resources so that the computing platform remains responsive even when under heavy load. Also, the amount of computing resources reserved for sharing amongst fractionalized task distributors can vary over time, allowing the computing platform to adapt in the presence of diurnal load patterns or other scheduled tasks.

[0005] Accordingly, a first example embodiment may involve: receiving a request relating to a plurality of parallelizable jobs; obtaining a schedule of worker thread availability with respect to a fractionalized task distributor, wherein the fractionalized task distributor is operable according to a predefined number of worker threads; assigning, to the fractionalized task distributor, a plurality of worker threads for execution of the plurality of parallelizable jobs, wherein the plurality of worker threads is based on the predefined number of worker threads, and wherein assigning the plurality of worker threads is according to the schedule and one or more tasks not included in the plurality of parallelizable jobs; and directing the fractionalized task distributor to execute the plurality of parallelizable jobs via the plurality of worker threads.
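The fractionalization described in the summary, breaking a large data object into non-overlapping portions that worker threads transform into a storage format under a global worker cap, can be sketched as follows. The function names, the 4-byte chunk size, and the use of hex encoding as a stand-in for "a storage format supported by the computing platform" are illustrative assumptions only.

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 4  # bytes per portion; a real system would use far larger chunks


def split_into_jobs(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Break a data object into non-overlapping portions, one job each."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]


def ingest(data: bytes, max_workers: int):
    """Transform each portion in parallel, capped at `max_workers` threads.

    Hex encoding stands in for converting a portion to the platform's
    storage format before it is written to a database table.
    """
    portions = split_into_jobs(data)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(lambda p: p.hex(), portions))


# A 19-byte object becomes five non-overlapping portions, processed by
# at most 3 worker threads regardless of how many portions exist.
stored = ingest(b"high-volume payload", max_workers=3)
```

Because the portions are non-overlapping and ordered, the original object can be reconstructed from the stored representations, which mirrors the query-side claims where per-portion results are reassembled into the response.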
[0006] A second example embodiment may involve a non-transitory computer-readable medium, having stored thereon program instructions that, upon execution by a computing system, cause the computing system to perform operations in accordance with the first example embodiment.

[0007] In a third example embodiment, a computing system may include at least one processor, as well as memory and program instructions. The program instructions may be stored in the memory, and upon execution by the at least one processor, cause the computing system to perform operations in accordance with the first example embodiment.

[0008] In a fourth example embodiment, a system may include various means for carrying out each of the operations of the first example embodiment.

[0009] These, as well as other embodiments, aspects, advantages, and alternatives, will become apparent to those of ordinary skill in the art by reading the following detailed description, with reference where appropriate to the accompanying drawings. Further, this summary and other descriptions and figures provided herein are intended to illustrate embodiments by way of example only and, as such, tha