
US-12621253-B2 - Allocating resources among autonomous artificial intelligence agents within a distributed computational network

US12621253B2

Abstract

Systems and methods disclosed herein automatically evaluate, select, and coordinate artificial intelligence (AI)-based agents for collaborative distributed task execution based on dynamic, multi-attribute scoring and resource allocation models. The system obtains a task specification request defining a computational requirement set, a performance metric set, and an available resource set for one or more tasks to be executed by a network of AI-based agents. A first AI model set generates domain-specific test datasets and validates prospective agents by comparing agent-generated fingerprints against predetermined hash values stored on a distributed or federated ledger. A second AI model set constructs a multi-dimensional scoring data structure for each agent by using historical performance metrics to compute weighted composite scores. The system selects a subset of AI-based agents, ranks the agents, and allocates resources proportional to each agent's composite score. A third AI model set coordinates and executes distributed computer-executable workflows across the selected agents.
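As a concrete illustration of the scoring and allocation flow summarized above, a minimal sketch follows. The metric names, weights, and selection size are illustrative assumptions, not the patented implementation.

```python
# Sketch: weighted composite scoring and score-proportional resource
# allocation over candidate agents (all names and values are assumed).

def composite_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of an agent's historical performance metrics."""
    return sum(weights[name] * value for name, value in metrics.items())

def allocate(agents: dict[str, dict[str, float]],
             weights: dict[str, float],
             total_resources: float,
             top_k: int) -> dict[str, float]:
    """Rank agents by composite score, keep the top_k, and split
    total_resources proportionally to each selected agent's score."""
    scores = {a: composite_score(m, weights) for a, m in agents.items()}
    selected = sorted(scores, key=scores.get, reverse=True)[:top_k]
    total = sum(scores[a] for a in selected)
    return {a: total_resources * scores[a] / total for a in selected}

weights = {"accuracy": 0.6, "latency": 0.4}
agents = {
    "agent_a": {"accuracy": 0.9, "latency": 0.5},
    "agent_b": {"accuracy": 0.7, "latency": 0.9},
    "agent_c": {"accuracy": 0.4, "latency": 0.3},
}
print(allocate(agents, weights, total_resources=100.0, top_k=2))
```

Under these assumed weights, two of the three agents are selected and the available resources are divided in proportion to their composite scores.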

Inventors

  • Vishal Mysore
  • Prithvi Narayana Rao
  • Payal Jain
  • Sawyer Uzzell
  • Joao Paulo De Castro Marchese
  • James Myers

Assignees

  • CITIBANK, N.A.

Dates

Publication Date
2026-05-05
Application Date
2025-09-19

Claims (20)

  1. A system for allocating resources among autonomous artificial intelligence (AI) agents within a distributed computational network, the system comprising: at least one hardware processor; and at least one non-transitory memory storing instructions, which, when executed by the at least one hardware processor, cause the system to: receive, from a computing device, a task specification request that defines (a) a computational requirement set, (b) a performance metric set, and (c) an available resource set associated with one or more tasks configured to be executed by a distributed network of multiple AI-based agents, wherein corresponding values of the performance metric set associated with historically executed tasks for each AI-based agent are accessible via a distributed ledger associated with the distributed network; evaluate, using an AI model set, the multiple AI-based agents by: generating a series of domain-specific test datasets configured to test satisfaction of a particular AI-based agent with the computational requirement set, transmitting each domain-specific test dataset to an input layer of each AI-based agent of the multiple AI-based agents, receiving, from an output layer of each AI-based agent of the multiple AI-based agents, a digital fingerprint of output content generated responsive to a corresponding domain-specific test dataset, wherein the digital fingerprint is generated by applying one or more hash functions to the output content, and validating one or more AI-based agents of the multiple AI-based agents by comparing the digital fingerprint from each AI-based agent against a predetermined hash value set stored on the distributed ledger; construct, using the AI model set, a multi-dimensional scoring matrix for each of the one or more AI-based agents by generating a series of weighted composite scores using the corresponding values of the performance metric set for each of the one or more AI-based agents accessed via the distributed ledger; select, using the AI model set, a selected AI-based agent set of the multiple AI-based agents by ranking the one or more AI-based agents using the constructed multi-dimensional scoring matrix; distribute the available resource set among the selected AI-based agent set proportional to a corresponding series of weighted composite scores of each selected AI-based agent; and cause execution of, using the selected AI-based agent set on the distributed network, a sequence of computer-executable workflows configured to perform the one or more tasks in accordance with the computational requirement set, wherein each selected AI-based agent is configured to use a respective distributed resource set to execute the sequence of computer-executable workflows.
  2. The system of claim 1, wherein the system is further caused to: monitor each selected AI-based agent during execution of the sequence of computer-executable workflows by collecting performance data of the selected AI-based agent that includes one or more of: processing time, memory usage, or task completion rate; and dynamically adjust the distribution of the available resource set among the selected AI-based agent set by re-distributing one or more unused resources within a respective distributed resource set of a first AI-based agent within the selected AI-based agent set to a second AI-based agent within the selected AI-based agent set.
  3. The system of claim 1, wherein the system is further caused to: generate one or more executable smart contracts defining a performance threshold set associated with the performance metric set for each selected AI-based agent, wherein the one or more executable smart contracts are configured to execute one or more computer-executable instructions in response to the selected AI-based agent satisfying the performance threshold set during execution of the sequence of computer-executable workflows; and cause deployment of the one or more executable smart contracts within the distributed network.
  4. The system of claim 1, wherein the system is further caused to: record values of an individual contribution metric set for each selected AI-based agent; and update the multi-dimensional scoring matrix by combining corresponding values of the individual contribution metric set for each selected AI-based agent with a corresponding series of weighted composite scores.
  5. The system of claim 4, wherein the system is further caused to: update the selected AI-based agent set of the multiple AI-based agents by ranking the one or more AI-based agents using the updated multi-dimensional scoring matrix.
  6. The system of claim 1, wherein the one or more AI-based agents are validated in response to a determination that a respective digital fingerprint is within a fault threshold of the predetermined hash value set stored on the distributed ledger.
  7. A non-transitory, computer-readable storage medium comprising instructions thereon, wherein the instructions, when executed by at least one data processor of a system, cause the system to: access a task specification request that defines (a) a computational requirement set, (b) a performance metric set, and (c) an available resource set associated with one or more tasks configured to be executed by a federated network of multiple AI-based agents, wherein corresponding values of the performance metric set associated with historically executed tasks for each AI-based agent are accessible via a federated ledger associated with the federated network; evaluate, using an AI model set, the multiple AI-based agents by: generating a series of test datasets configured to test satisfaction of a particular AI-based agent with the computational requirement set, and validating one or more AI-based agents of the multiple AI-based agents by applying each test dataset to each AI-based agent of the multiple AI-based agents; construct, using the AI model set, a scoring matrix for each of the one or more AI-based agents by generating a series of scores using the corresponding values of the performance metric set for each of the one or more AI-based agents accessed via the federated ledger; select, using the AI model set, a selected AI-based agent set within the federated network of multiple AI-based agents by ranking the one or more AI-based agents using the constructed scoring matrix; distribute the available resource set among the selected AI-based agent set proportional to a corresponding series of scores of each selected AI-based agent; and cause execution of, using the selected AI-based agent set on the federated network, a sequence of computer-executable workflows configured to perform the one or more tasks in accordance with the computational requirement set.
  8. The non-transitory, computer-readable storage medium of claim 7, wherein the computational requirement set includes one or more of: a processing power specification, a data format, or a knowledge domain.
  9. The non-transitory, computer-readable storage medium of claim 7, wherein the series of scores includes a reputation score, and wherein the reputation score of a particular AI-based agent is generated by combining peer scores received from other AI-based agents of the one or more AI-based agents.
  10. The non-transitory, computer-readable storage medium of claim 7, wherein the federated ledger represents multiple independent entities configured to control a hash-chained log, and wherein the federated ledger is configured to replicate a representation of the values of the performance metric set for each of the one or more AI-based agents to a respective computing device associated with each of the multiple independent entities in response to a quorum co-signature from the multiple independent entities.
  11. The non-transitory, computer-readable storage medium of claim 7, wherein the system is further caused to: decompose, using the AI model set, the task specification request to identify (a) the computational requirement set, (b) the performance metric set, and (c) the available resource set.
  12. The non-transitory, computer-readable storage medium of claim 7, wherein the system is further caused to: determine, using the AI model set, a degree of complexity associated with the task specification request using (a) the computational requirement set, (b) the performance metric set, and (c) the available resource set; and generate the series of scores using a subset of the corresponding values of the performance metric set for each of the one or more AI-based agents that is generated by filtering corresponding historically executed tasks for the AI-based agent based on the determined degree of complexity.
  13. The non-transitory, computer-readable storage medium of claim 7, wherein the AI model set is a large language model (LLM).
  14. A computer-implemented method for managing collaboration of artificial intelligence (AI)-based agents, the computer-implemented method comprising: obtaining a task specification request that defines (a) a computational requirement set, (b) a performance metric set, and (c) an available resource set associated with one or more tasks configured to be executed by multiple AI-based agents; validating one or more AI-based agents of the multiple AI-based agents by evaluating, using an AI model set, the multiple AI-based agents against a series of test datasets configured to test satisfaction of each AI-based agent with the computational requirement set; determining, using the AI model set, a series of scores for each of the one or more AI-based agents using corresponding values of the performance metric set accessed for each of the one or more AI-based agents; selecting, using the AI model set, a selected AI-based agent set of the multiple AI-based agents by comparing the one or more AI-based agents using the series of scores; allocating the available resource set among the selected AI-based agent set proportional to a corresponding series of scores of each selected AI-based agent; and causing execution of, using the selected AI-based agent set, a series of computer-executable workflows configured to perform the one or more tasks in accordance with the computational requirement set.
  15. The computer-implemented method of claim 14, wherein the series of scores includes a reputation score, and wherein the reputation score of a particular AI-based agent is generated by combining series of scores previously determined for the particular AI-based agent.
  16. The computer-implemented method of claim 14, further comprising: generating one or more executable smart contracts configured to transfer a corresponding allocated resource set to each selected AI-based agent; and causing deployment of the one or more executable smart contracts.
  17. The computer-implemented method of claim 14, wherein one or more resources within the available resource set represent a monetary resource.
  18. The computer-implemented method of claim 14, wherein one or more resources within the available resource set represent a computational resource.
  19. The computer-implemented method of claim 14, wherein one or more AI-based agents of the multiple AI-based agents are autonomous agents.
  20. The computer-implemented method of claim 14, wherein one or more AI-based agents of the multiple AI-based agents are semi-autonomous agents.
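Claim 1's validation step hashes an agent's output for a test dataset and compares the resulting digital fingerprint against a predetermined hash value set on the ledger. A minimal sketch, assuming SHA-256 as the hash function and illustrative names throughout:

```python
# Sketch: fingerprint an agent's output and validate it against digests
# recorded on the distributed ledger (hash choice and names are assumed).
import hashlib

def fingerprint(output_content: bytes) -> str:
    """Digital fingerprint: a SHA-256 digest of the agent's output content."""
    return hashlib.sha256(output_content).hexdigest()

def validate_agent(output_content: bytes, ledger_hashes: set[str]) -> bool:
    """An agent is validated when its fingerprint matches a predetermined
    hash value stored on the ledger."""
    return fingerprint(output_content) in ledger_hashes

expected = fingerprint(b"reference answer")
print(validate_agent(b"reference answer", {expected}))   # matching output
print(validate_agent(b"divergent answer", {expected}))   # non-matching output
```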
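Claim 2's dynamic adjustment, re-distributing a first agent's unused resources to a second agent, can be sketched as a simple rebalancing step over monitored usage. All names and quantities below are assumptions for illustration:

```python
# Sketch: move a donor agent's unused allocation (allocated minus observed
# usage) to a recipient agent (names and numbers are illustrative).

def rebalance(allocations: dict[str, float],
              usage: dict[str, float],
              donor: str, recipient: str) -> dict[str, float]:
    """Re-distribute the donor's unused share to the recipient."""
    unused = max(0.0, allocations[donor] - usage[donor])
    updated = dict(allocations)
    updated[donor] -= unused
    updated[recipient] += unused
    return updated

alloc = {"agent_a": 60.0, "agent_b": 40.0}
used = {"agent_a": 45.0, "agent_b": 40.0}
print(rebalance(alloc, used, donor="agent_a", recipient="agent_b"))
# {'agent_a': 45.0, 'agent_b': 55.0}
```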
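Claim 6 validates a fingerprint that falls "within a fault threshold" of the predetermined hash value set. One plausible reading, sketched here purely as an assumption rather than the patented method, treats the threshold as a maximum bit-level Hamming distance between the fingerprint and any ledger-stored digest:

```python
# Sketch (an assumed interpretation of claim 6): tolerance-based fingerprint
# matching via bit-level Hamming distance against ledger digests.

def hamming_bits(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length digests."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def validate_with_tolerance(fp: bytes, ledger: list[bytes], threshold: int) -> bool:
    """Validate when the fingerprint is within `threshold` bits of any
    predetermined digest stored on the ledger."""
    return any(hamming_bits(fp, ref) <= threshold for ref in ledger)

print(validate_with_tolerance(b"\x0f\x00", [b"\x0f\x01"], threshold=1))  # True
print(validate_with_tolerance(b"\x0f\x00", [b"\x0f\x01"], threshold=0))  # False
```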
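The reputation score of claims 9 and 15 is generated by "combining" peer scores; one simple combining rule, assumed here for illustration, is the arithmetic mean of the peer scores other agents report:

```python
# Sketch: combine peer scores into a single reputation value (the mean is
# an assumed combining rule, not necessarily the patented one).
from statistics import mean

def reputation(peer_scores: dict[str, float]) -> float:
    """Combine peer scores received from other agents into one value."""
    return mean(peer_scores.values())

print(reputation({"agent_b": 0.8, "agent_c": 0.9, "agent_d": 0.7}))
```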
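Claim 12 scores each agent on only the subset of its history whose complexity matches the request's determined degree of complexity. A minimal sketch, with the complexity labels, metric name, and records all assumed:

```python
# Sketch: filter an agent's task history by complexity before scoring
# (field names, labels, and values are illustrative assumptions).

def score_for_complexity(history: list[dict], request_complexity: str,
                         metric: str = "accuracy") -> float | None:
    """Average a metric over only those historical tasks whose recorded
    complexity matches the incoming request's degree of complexity."""
    relevant = [t[metric] for t in history if t["complexity"] == request_complexity]
    return sum(relevant) / len(relevant) if relevant else None

history = [
    {"complexity": "high", "accuracy": 0.70},
    {"complexity": "high", "accuracy": 0.80},
    {"complexity": "low",  "accuracy": 0.99},
]
print(score_for_complexity(history, "high"))
```

Filtering this way keeps an agent's strong record on easy tasks from inflating its score for a hard request.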

Description

CROSS-REFERENCE TO RELATED APPLICATION(S)

This application is a continuation-in-part of U.S. patent application Ser. No. 19/288,027, entitled “ENCRYPTED AUTONOMOUS AGENT VERIFICATION IN MULTI-TIERED DISTRIBUTED SYSTEMS ACROSS GLOBAL OR CLOUD NETWORKS” filed on Aug. 1, 2025, which is a continuation-in-part of U.S. patent application Ser. No. 19/217,943 entitled “AUTOMATIC GENERATION AND EXECUTION OF COMPUTER-EXECUTABLE COMMANDS USING ARTIFICIAL INTELLIGENCE MODELS” filed on May 23, 2025. U.S. patent application Ser. No. 19/288,027 is further a continuation-in-part of U.S. patent application Ser. No. 19/179,996 entitled “SYSTEMS AND METHODS FOR DETERMINING RESOURCE AVAILABILITY ACROSS GLOBAL OR CLOUD NETWORKS” and filed Apr. 15, 2025, which is a continuation-in-part of U.S. patent application Ser. No. 18/434,687 (now U.S. Pat. No. 12,126,546 issued Oct. 22, 2024) entitled “SYSTEMS AND METHODS FOR DETERMINING RESOURCE AVAILABILITY ACROSS GLOBAL OR CLOUD NETWORKS” and filed Feb. 6, 2024.

This application is further a continuation-in-part of U.S. patent application Ser. No. 19/182,585, entitled “DYNAMIC MULTI-MODEL MONITORING AND VALIDATION FOR ARTIFICIAL INTELLIGENCE MODELS” filed on Apr. 18, 2025, which is a continuation of U.S. patent application Ser. No. 18/947,102, entitled “DYNAMIC MULTI-MODEL MONITORING AND VALIDATION FOR ARTIFICIAL INTELLIGENCE MODELS” filed on Nov. 14, 2024, which is a continuation-in-part of U.S. patent application Ser. No. 18/653,858 entitled “VALIDATING VECTOR CONSTRAINTS OF OUTPUTS GENERATED BY MACHINE LEARNING MODELS” filed on May 2, 2024, which is a continuation-in-part of U.S. patent application Ser. No. 18/637,362 entitled “DYNAMICALLY VALIDATING AI APPLICATIONS FOR COMPLIANCE” filed on Apr. 16, 2024.

U.S. patent application Ser. No. 18/947,102 is further a continuation-in-part of U.S. patent application Ser. No. 18/782,019 entitled “IDENTIFYING AND ANALYZING ACTIONS FROM VECTOR REPRESENTATIONS OF ALPHANUMERIC CHARACTERS USING A LARGE LANGUAGE MODEL” and filed Jul. 23, 2024, which is a continuation-in-part of U.S. patent application Ser. No. 18/771,876 entitled “MAPPING IDENTIFIED GAPS IN CONTROLS TO OPERATIVE STANDARDS USING A GENERATIVE ARTIFICIAL INTELLIGENCE MODEL” and filed Jul. 12, 2024, which is a continuation-in-part of U.S. patent application Ser. No. 18/661,532 entitled “DYNAMIC INPUT-SENSITIVE VALIDATION OF MACHINE LEARNING MODEL OUTPUTS AND METHODS AND SYSTEMS OF THE SAME” and filed May 10, 2024, which is a continuation-in-part of U.S. patent application Ser. No. 18/661,519 entitled “DYNAMIC, RESOURCE-SENSITIVE MODEL SELECTION AND OUTPUT GENERATION AND METHODS AND SYSTEMS OF THE SAME” and filed May 10, 2024, and is a continuation-in-part of U.S. patent application Ser. No. 18/633,293 entitled “DYNAMIC EVALUATION OF LANGUAGE MODEL PROMPTS FOR MODEL SELECTION AND OUTPUT VALIDATION AND METHODS AND SYSTEMS OF THE SAME” and filed Apr. 11, 2024.

U.S. patent application Ser. No. 18/947,102 is further a continuation-in-part of U.S. patent application Ser. No. 18/739,111 entitled “END-TO-END MEASUREMENT, GRADING AND EVALUATION OF PRETRAINED ARTIFICIAL INTELLIGENCE MODELS VIA A GRAPHICAL USER INTERFACE (GUI) SYSTEMS AND METHODS” and filed Jun. 10, 2024, which is a continuation-in-part of U.S. patent application Ser. No. 18/607,141 entitled “GENERATING PREDICTED END-TO-END CYBER-SECURITY ATTACK CHARACTERISTICS VIA BIFURCATED MACHINE LEARNING-BASED PROCESSING OF MULTI-MODAL DATA SYSTEMS AND METHODS” filed on Mar. 15, 2024, which is a continuation-in-part of U.S. patent application Ser. No. 18/399,422 entitled “PROVIDING USER-INDUCED VARIABLE IDENTIFICATION OF END-TO-END COMPUTING SYSTEM SECURITY IMPACT INFORMATION SYSTEMS AND METHODS” filed on Dec. 28, 2023, which is a continuation of U.S. patent application Ser. No. 18/327,040 (now U.S. Pat. No. 11,874,934) entitled “PROVIDING USER-INDUCED VARIABLE IDENTIFICATION OF END-TO-END COMPUTING SYSTEM SECURITY IMPACT INFORMATION SYSTEMS AND METHODS” filed on May 31, 2023, which is a continuation-in-part of U.S. patent application Ser. No. 18/114,194 (now U.S. Pat. No. 11,763,006) entitled “COMPARATIVE REAL-TIME END-TO-END SECURITY VULNERABILITIES DETERMINATION AND VISUALIZATION” filed Feb. 24, 2023, which is a continuation-in-part of U.S. patent application Ser. No. 18/098,895 (now U.S. Pat. No. 11,748,491) entitled “DETERMINING PLATFORM-SPECIFIC END-TO-END SECURITY VULNERABILITIES FOR A SOFTWARE APPLICATION VIA GRAPHICAL USER INTERFACE (GUI) SYSTEMS AND METHODS” filed Jan. 19, 2023.

The contents of the foregoing applications are incorporated herein by reference in their entirety.

BACKGROUND

An artificial intelligence (AI) agentic model (“agent”), whether autonomous or semi-autonomous, refers to a persistent software entity characterized by a digitally encoded objective function. The objective function can instruct the agent to, for example, maximize task accuracy, minimize resource usage, comply with specified operational constraints, a