CN-122027623-A - Distributed request processing method and system for AI teaching request

CN 122027623 A

Abstract

The invention discloses a distributed request processing method and system for AI teaching requests. The method comprises: acquiring a plurality of AI teaching requests sent to a teaching platform by a plurality of user terminals; determining the processing load corresponding to each processing node of the teaching platform according to the plurality of AI teaching requests; assigning a corresponding processing node to each AI teaching request based on an allocation algorithm according to device parameters of the user side and the processing loads; and forwarding each AI teaching request to its corresponding processing node and driving all the processing nodes to process the AI teaching requests simultaneously. The invention thereby achieves accurate distributed teaching task scheduling based on complexity awareness and load balancing, improves the concurrent processing capacity and response speed of the teaching platform, and reduces the risk of teaching delay or resource waste caused by improper node allocation.

Inventors

  • Wu Shuangdi
  • Li Menchang
  • Guo Fu
  • Qian Feiwan
  • Fang Enyuan

Assignees

  • 广州中长康达信息技术有限公司

Dates

Publication Date
2026-05-12
Application Date
2026-02-11

Claims (10)

  1. A distributed request processing method for AI teaching requests, the method comprising: acquiring a plurality of AI teaching requests sent to a teaching platform by a plurality of user terminals; determining a processing load corresponding to each processing node of the teaching platform according to the plurality of AI teaching requests; assigning a corresponding processing node to each AI teaching request based on an allocation algorithm according to device parameters of the user side and the processing loads; and forwarding each AI teaching request to its corresponding processing node and driving all the processing nodes to process the AI teaching requests simultaneously.
  2. The distributed request processing method for AI teaching requests of claim 1, wherein each AI teaching request includes at least one of a teaching type, teaching content, a teaching model, student parameters, and a teaching duration.
  3. The distributed request processing method for AI teaching requests of claim 1, wherein determining the processing load corresponding to each processing node of the teaching platform according to the plurality of AI teaching requests comprises: inputting each AI teaching request into a trained teaching-complexity prediction model to obtain the teaching complexity corresponding to each AI teaching request; determining processing resource parameters corresponding to the teaching complexity of each AI teaching request based on a preset correspondence between complexity and processing resources; and determining the processing load corresponding to each processing node of the teaching platform according to the processing resource parameters corresponding to all the AI teaching requests.
  4. The distributed request processing method for AI teaching requests of claim 3, wherein determining the processing load corresponding to each processing node of the teaching platform according to the processing resource parameters corresponding to all the AI teaching requests comprises: for each processing node of the teaching platform, acquiring a historical processing request record and node device parameters of the processing node; determining a node allocation likelihood between each AI teaching request and the processing node according to the historical processing request record and the node device parameters; calculating a computation weight proportional to the node allocation likelihood; and calculating a weighted sum of the processing resource parameters corresponding to all the AI teaching requests according to the computation weights to obtain the processing load corresponding to the processing node.
  5. The distributed request processing method for AI teaching requests of claim 4, wherein determining the node allocation likelihood between each AI teaching request and the processing node according to the historical processing request record and the node device parameters includes: calculating the average request similarity between each AI teaching request and the request data of all historical processing records in the historical processing request record to obtain a historical similarity parameter corresponding to each AI teaching request; inputting each AI teaching request and the node device parameters into a trained allocation-feasibility prediction model to obtain an allocation feasibility corresponding to each AI teaching request; and calculating the product of the historical similarity parameter and the allocation feasibility to obtain the node allocation likelihood between each AI teaching request and the processing node.
  6. The distributed request processing method for AI teaching requests of claim 3, wherein assigning a corresponding processing node to each AI teaching request based on an allocation algorithm according to the device parameters of the user side and the processing loads comprises: for each AI teaching request, acquiring the device parameters of the user side corresponding to the AI teaching request; calculating the average device similarity between the device parameters and all communication device data in the historical communication record of each processing node to obtain a device priority between each processing node and the AI teaching request; setting an objective function over the device priority, the processing load, and the processing resource parameters; and performing iterative allocation calculation on all the AI teaching requests based on the objective function until convergence according to a dynamic programming algorithm to obtain an allocation strategy, wherein the allocation strategy determines the processing node corresponding to each AI teaching request.
  7. The distributed request processing method for AI teaching requests of claim 6, wherein the objective function comprises: the device priority between each AI teaching request and its corresponding processing node is greater than a preset priority threshold; the processing load of the processing node corresponding to each AI teaching request is smaller than a preset load threshold; and the processing load of the processing node corresponding to each AI teaching request is greater than a multiplied resource parameter corresponding to the AI teaching request, the multiplied resource parameter being the product of the processing resource parameter corresponding to the AI teaching request and the normalized request quantity corresponding to the processing node.
  8. The distributed request processing method for AI teaching requests of claim 7, wherein the normalized request quantity is obtained by: calculating the total request quantity corresponding to all AI teaching requests; calculating the total node quantity corresponding to all processing nodes; calculating the ratio of the total request quantity to the total node quantity; and calculating the product of the number of AI teaching requests currently assigned to the processing node and the logarithm of the ratio to obtain the normalized request quantity corresponding to the processing node.
  9. A distributed request processing system for AI teaching requests, the system comprising: an acquisition module for acquiring a plurality of AI teaching requests sent to a teaching platform by a plurality of user terminals; a determining module for determining the processing load corresponding to each processing node of the teaching platform according to the plurality of AI teaching requests; an allocation module for assigning a corresponding processing node to each AI teaching request based on an allocation algorithm according to device parameters of the user side and the processing loads; and a processing module for forwarding each AI teaching request to its corresponding processing node and driving all the processing nodes to process the AI teaching requests simultaneously.
  10. A distributed request processing system for AI teaching requests, the system comprising: a memory storing executable program code; and a processor coupled to the memory, the processor invoking the executable program code stored in the memory to perform the distributed request processing method for AI teaching requests of any one of claims 1 to 8.
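The per-node quantities in claims 5, 7, and 8 compose into a small set of formulas: the node allocation likelihood is the product of the historical similarity parameter and the predicted allocation feasibility (claim 5); the normalized request quantity is the count of requests currently assigned to a node times the logarithm of the ratio of total requests to total nodes (claim 8); and claim 7's objective function imposes three threshold constraints on each assignment. The sketch below is illustrative only: all function and parameter names are hypothetical, and the patent does not specify the logarithm base, so the natural log is assumed.

```python
import math

def node_allocation_likelihood(mean_similarity: float, feasibility: float) -> float:
    # Claim 5: historical similarity parameter x allocation feasibility.
    return mean_similarity * feasibility

def normalized_request_quantity(assigned_count: int,
                                total_requests: int,
                                total_nodes: int) -> float:
    # Claim 8: (requests currently assigned to this node)
    #          x log(total requests / total nodes).
    return assigned_count * math.log(total_requests / total_nodes)

def satisfies_objective(priority: float, load: float, resource_param: float,
                        assigned_count: int, total_requests: int,
                        total_nodes: int, priority_threshold: float,
                        load_threshold: float) -> bool:
    # Claim 7: an assignment is admissible when
    #   (1) device priority exceeds the preset priority threshold,
    #   (2) node processing load is below the preset load threshold,
    #   (3) node processing load exceeds the multiplied resource parameter
    #       (resource parameter x normalized request quantity).
    multiplied = resource_param * normalized_request_quantity(
        assigned_count, total_requests, total_nodes)
    return (priority > priority_threshold
            and load < load_threshold
            and load > multiplied)
```

An iterative allocator in the spirit of claim 6 could repeatedly reassign requests until every assignment satisfies `satisfies_objective`, stopping at convergence.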

Description

Distributed request processing method and system for AI teaching request

Technical Field

The invention relates to the technical field of data processing, and in particular to a distributed request processing method and system for AI teaching requests.

Background

With the rapid spread of online education and AI-based personalized teaching platforms, enterprises increasingly seek to schedule highly concurrent AI teaching tasks efficiently through a distributed computing architecture, where reducing delay and resource waste has become a key technical problem. In the prior art, after AI teaching requests are sent by multiple user terminals, a central scheduler distributes tasks to processing nodes by fixed round-robin polling or a simple load threshold, or a single high-performance node processes them serially, which meets only basic teaching response requirements. Because existing solutions lack complexity prediction for teaching request content and dynamic awareness of each processing node's real-time load, they struggle to achieve accurate task-node matching and multi-node parallel dispatch. Common static allocation or coarse-grained load balancing strategies cannot adapt to the varied complexity of teaching tasks and sudden concurrency peaks, so some nodes become overloaded while others sit idle. Improper node allocation thus easily causes delayed teaching responses, extended user waiting times, or overall waste of computing resources, limiting the concurrent processing capacity, response speed, and service stability of an AI teaching platform. It can be seen that the prior art has defects that need to be solved.
Disclosure of Invention

The invention aims to solve the technical problem of providing a distributed request processing method and system for AI teaching requests that achieve accurate distributed teaching task scheduling based on complexity awareness and load balancing, improve the concurrent processing capacity and response speed of a teaching platform, and reduce the risk of teaching delay or resource waste caused by improper node allocation. To solve this technical problem, a first aspect of the invention discloses a distributed request processing method for AI teaching requests, the method comprising: acquiring a plurality of AI teaching requests sent to a teaching platform by a plurality of user terminals; determining a processing load corresponding to each processing node of the teaching platform according to the plurality of AI teaching requests; assigning a corresponding processing node to each AI teaching request based on an allocation algorithm according to device parameters of the user side and the processing loads; and forwarding each AI teaching request to its corresponding processing node and driving all the processing nodes to process the AI teaching requests simultaneously. As an optional implementation manner, in the first aspect of the invention, each AI teaching request includes at least one of a teaching type, teaching content, a teaching model, student parameters, and a teaching duration.
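The four steps of the method in the first aspect (acquire requests, determine per-node loads, assign nodes, dispatch in parallel) can be sketched as a small pipeline. This is a minimal illustration, not the patent's implementation: the callables `compute_loads`, `assign`, and `handle` are hypothetical stand-ins for the load-determination, allocation-algorithm, and node-processing steps, and a thread pool stands in for "driving all the processing nodes to process the requests simultaneously".

```python
from concurrent.futures import ThreadPoolExecutor

def process_requests(requests, nodes, compute_loads, assign, handle):
    """Dispatch AI teaching requests across processing nodes.

    requests: list of dicts, each with at least an "id" key (step 1:
              already acquired from the user terminals).
    """
    # Step 2: determine the processing load corresponding to each node.
    loads = compute_loads(requests, nodes)
    # Step 3: assign a node to each request from device parameters
    # and the per-node loads; returns {request_id: node}.
    plan = assign(requests, nodes, loads)
    # Step 4: forward each request to its node and drive all nodes
    # concurrently; results are returned in submission order.
    with ThreadPoolExecutor(max_workers=max(1, len(nodes))) as pool:
        futures = [pool.submit(handle, plan[r["id"]], r) for r in requests]
        return [f.result() for f in futures]
```

In a real platform the `handle` step would be a network forward to a remote node rather than a local call; the structure of the four steps is the same.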
As an optional implementation manner, in the first aspect of the invention, determining the processing load corresponding to each processing node of the teaching platform according to the plurality of AI teaching requests includes: inputting each AI teaching request into a trained teaching-complexity prediction model to obtain the teaching complexity corresponding to each AI teaching request; determining processing resource parameters corresponding to the teaching complexity of each AI teaching request based on a preset correspondence between complexity and processing resources; and determining the processing load corresponding to each processing node of the teaching platform according to the processing resource parameters corresponding to all the AI teaching requests. As an optional implementation manner, in the first aspect of the invention, determining the processing load corresponding to each processing node of the teaching platform according to the processing resource parameters corresponding to all the AI teaching requests includes: for each processing node of the teaching platform, acquiring a historical processing request record and node device parameters of the processing node; determining a node allocation likelihood between each AI teaching request and the processing node according to the historical processing request record and the node device parameters; calculating a computation weight proportional to the node allocation likelihood; and calculating a weighted sum of the processing resource parameters corresponding to all the AI teaching requests according to the computation weights to obtain the processing load corresponding to the processing node. As an optional implementation manner, in the first aspect of the prese
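The weighted-sum load computation described above (a node's load is the sum of all requests' processing resource parameters, each weighted in proportion to that request's node allocation likelihood) can be sketched as follows. The names are illustrative, and normalizing the likelihoods to sum to 1 is only one possible choice of proportionality constant; the patent requires the weights to be proportional to the likelihoods but does not fix the constant.

```python
def processing_load(resource_params, likelihoods):
    """Weighted sum of per-request resource parameters for one node.

    resource_params: processing resource parameter of each AI teaching
                     request (from the complexity-to-resource mapping).
    likelihoods:     node allocation likelihood of each request for
                     this node (same order as resource_params).
    """
    total = sum(likelihoods)
    if total == 0:
        return 0.0  # no request is likely to land on this node
    # Weights proportional to likelihood, here normalized to sum to 1.
    weights = [lk / total for lk in likelihoods]
    return sum(w * r for w, r in zip(weights, resource_params))
```

With this normalization, a node whose likely requests are all heavyweight gets a high predicted load even before any request is actually assigned, which is what lets the allocator in the later steps avoid overloading it.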