CN-121998305-A - Method, apparatus, device and storage medium for laboratory order arrangement
Abstract
The application relates to the technical field of software development and discloses a method, apparatus, device and storage medium for laboratory order scheduling. The method comprises: performing chromosome coding on current laboratory order information, current laboratory personnel information and current laboratory equipment information according to a current scheduling strategy generated by a reinforcement learning model, to obtain a corresponding initial population; evaluating the fitness of individuals in the initial population based on a current fitness function, and iteratively evolving the initial population through selection and improved chromosome crossover and mutation operations, wherein the current fitness function is obtained from examination parameters of laboratory orders predicted by a first AI model; and obtaining an optimal laboratory order scheduling scheme once a set genetic termination condition is met. By combining the reinforcement learning model and the AI model with a genetic algorithm, comprehensive optimization of laboratory order scheduling is achieved, and scheduling efficiency and resource utilization are improved.
Inventors
- LIU SHUAI
- LI TIAN
- ZHANG FAN
- YAO BIN
- CUI XIUYUAN
Assignees
- 青岛巨商汇网络科技有限公司
Dates
- Publication Date: 2026-05-08
- Application Date: 2025-12-29
Claims (10)
- 1. A method for laboratory order scheduling, comprising: performing chromosome coding on current laboratory order information, current laboratory personnel information and current laboratory equipment information according to a current scheduling strategy generated based on a reinforcement learning model, to obtain a corresponding initial population; evaluating the fitness of individuals in the initial population based on a current fitness function, and iteratively evolving the initial population through selection and improved chromosome crossover and mutation operations, wherein the current fitness function is obtained based on examination parameters of laboratory orders predicted by a first AI model; and obtaining an optimal laboratory order scheduling scheme under the condition that a set genetic termination condition is met.
- 2. The method of claim 1, wherein generating the current scheduling strategy based on the reinforcement learning model comprises: defining a state space comprising current laboratory order information, current laboratory personnel information and current laboratory equipment information, and defining an action space for assigning orders to set laboratory personnel and set laboratory equipment; and, according to a reward function, gradually learning an optimal scheduling strategy matched to the state space and the action space based on the reinforcement learning model, and determining the optimal scheduling strategy as the current scheduling strategy, wherein the reward function is determined according to examination factors of laboratory orders, the examination factors comprising one or more of order completion conditions, laboratory staff load balance and experimental equipment utilization rate.
- 3. The method of claim 1, further comprising: collecting historical laboratory order information, historical laboratory personnel information, historical laboratory equipment information and historical examination parameters, and extracting corresponding characteristic information, wherein the characteristic information comprises one or more of an order's submission time, deadline, detection time, retest status and workload, the available time of laboratory personnel, and usage information of laboratory equipment; and training, based on a machine learning or deep learning algorithm, a first AI model for predicting examination parameters of a laboratory order from the characteristic information, wherein the examination parameters comprise one or more of the order's detection time, retest probability, experimenter working efficiency and experimental equipment utilization rate.
- 4. The method according to claim 3, wherein evaluating the fitness of individuals in the initial population based on the current fitness function comprises: obtaining the fitness of each individual in the initial population according to formula (1):
  Fitness = w1 × (order completion time) + w2 × (experimenter load balance) + w3 × (equipment utilization) + w4 × (retest/deferral penalty)    (1)
  wherein (w1, w2, w3, w4) are weights, the order completion time is the sum of the order start time and the order detection time, the experimenter load balance is obtained from the working efficiency of the experimenters, and the retest penalty is determined from the retest probability; and one or more of the order's detection time, retest probability, experimenter working efficiency and experimental equipment utilization are predicted based on the first AI model.
- 5. The method of claim 1, wherein iteratively evolving the initial population comprises: selecting corresponding individuals from the initial population for reproduction according to their fitness; and performing the corresponding iterative operations according to the current crossover rate and the current mutation rate; wherein one or more genetic parameters among the weight coefficients of the current fitness function, the current crossover rate and the current mutation rate are predicted by a second AI model, the second AI model being trained on collected historical genetic parameters based on a machine learning or deep learning model.
- 6. The method of any one of claims 1-5, wherein meeting the set genetic termination condition comprises: determining that the set genetic termination condition is met when the current iteration count equals a set iteration count; and, when the absolute difference between the current fitness and the previous fitness of a current individual in the current iteration's population is smaller than a set threshold, incrementing a recorded persistence count by 1, and determining that the set genetic termination condition is met if the updated persistence count reaches a set count.
- 7. An apparatus for laboratory order scheduling, comprising: a learning coding module configured to perform chromosome coding on current laboratory order information, current laboratory personnel information and current laboratory equipment information according to a current scheduling strategy generated based on a reinforcement learning model, to obtain a corresponding initial population; a genetic iteration module configured to evaluate the fitness of individuals in the initial population based on a current fitness function and to iteratively evolve the initial population through selection and improved chromosome crossover and mutation operations, wherein the current fitness function is obtained based on examination parameters of laboratory orders predicted by a first AI model; and a termination obtaining module configured to obtain an optimal laboratory order scheduling scheme in case the set genetic termination condition is satisfied.
- 8. An apparatus for laboratory order scheduling, the apparatus comprising a processor and a memory storing program instructions, wherein the processor is configured to, when executing the program instructions, perform the method for laboratory order scheduling according to any one of claims 1-6.
- 9. A device, comprising: a device body; and the apparatus for laboratory order scheduling as set forth in claim 7 or 8, mounted to the device body.
- 10. A storage medium storing program instructions which, when executed, perform the method for laboratory order scheduling as claimed in any one of claims 1-6.
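The weighted-sum fitness of formula (1) in claim 4 can be sketched as follows. This is a minimal illustration: the default weight values, the function signature and the sign conventions are assumptions for demonstration; in the patent, the weights may be predicted by the second AI model and the per-order quantities (detection time, retest probability, etc.) by the first AI model.

```python
# Hypothetical sketch of the weighted-sum fitness in formula (1) of claim 4.
# Weight values and the normalization of each term are illustrative
# assumptions, not taken from the patent text.

def fitness(completion_time, load_imbalance, utilization, retest_penalty,
            w=(0.4, 0.2, 0.2, 0.2)):
    """Formula (1): a plain weighted sum of the four examination terms.
    Whether each term is to be minimized or maximized is an assumption;
    e.g. w3 could be negative to reward high equipment utilization."""
    w1, w2, w3, w4 = w
    return (w1 * completion_time + w2 * load_imbalance
            + w3 * utilization + w4 * retest_penalty)

print(round(fitness(10, 2, 0.8, 1), 2))  # 4.76 with the default weights
```

A genetic algorithm would evaluate this function once per chromosome (candidate schedule) per generation, so keeping it a cheap closed-form sum is a common design choice.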
Description
Method, apparatus, device and storage medium for laboratory order arrangement

Technical Field

The present application relates to the field of software development technology, for example, to a method, apparatus, device and storage medium for laboratory order scheduling.

Background

Laboratory order scheduling systems play a critical role in modern laboratory management and may need to handle a large number of test orders each day, with different priorities, deadlines and test times. Without an efficient scheduling system, order processing can become chaotic, resulting in inefficient testing. Related laboratory order scheduling systems typically order tasks according to predefined rules, such as First Come, First Served (FCFS), Earliest Due Date (EDD) or Shortest Processing Time (SPT), and resource allocation is typically static, disregarding dynamic changes such as machine failures or changes in laboratory availability. In addition, when special conditions arise, such as retest demands or emergency orders, manual intervention by an experimenter is usually required to adjust the queue. As can be seen, related laboratory scheduling systems lack flexibility, cannot effectively cope with dynamic changes, and have limited optimization capability: scheduling by a single rule makes it difficult to achieve multi-objective optimization, such as jointly optimizing order completion time, laboratory staff load balancing and equipment utilization, and difficult to handle complex scenarios, such as adaptively adjusting the scheduling strategy under retests, early completions or machine faults. It should be noted that the information disclosed in the above Background section is only for enhancing understanding of the background of the application and thus may include information that does not constitute prior art already known to those of ordinary skill in the art.
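The rule-based orderings mentioned in the background (FCFS, EDD, SPT) amount to sorting pending orders by a single key, which is why they cannot trade off multiple objectives at once. A minimal sketch, with illustrative order fields:

```python
# Illustrative orders; the field names are assumptions for this sketch.
orders = [
    {"id": "A", "submitted": 1, "due": 9, "detect_time": 4},
    {"id": "B", "submitted": 2, "due": 5, "detect_time": 1},
    {"id": "C", "submitted": 3, "due": 7, "detect_time": 2},
]

fcfs = sorted(orders, key=lambda o: o["submitted"])    # First Come, First Served
edd = sorted(orders, key=lambda o: o["due"])           # Earliest Due Date
spt = sorted(orders, key=lambda o: o["detect_time"])   # Shortest Processing Time

print([o["id"] for o in fcfs])  # ['A', 'B', 'C']
print([o["id"] for o in edd])   # ['B', 'C', 'A']
print([o["id"] for o in spt])   # ['B', 'C', 'A']
```

Each rule optimizes exactly one criterion and ignores the others, which motivates the multi-objective genetic approach the application proposes.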
Disclosure of Invention

The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview and is intended neither to identify key/critical elements nor to delineate the scope of such embodiments, but serves as a prelude to the more detailed description that follows. The disclosed embodiments provide a method, apparatus, device and storage medium for laboratory order scheduling, intended to solve the technical problems of inflexible and inefficient laboratory order scheduling. In some embodiments, the method comprises: performing chromosome coding on current laboratory order information, current laboratory personnel information and current laboratory equipment information according to a current scheduling strategy generated based on a reinforcement learning model, to obtain a corresponding initial population; evaluating the fitness of individuals in the initial population based on a current fitness function, and iteratively evolving the initial population through selection and improved chromosome crossover and mutation operations, wherein the current fitness function is obtained based on examination parameters of laboratory orders predicted by a first AI model; and obtaining an optimal laboratory order scheduling scheme under the condition that the set genetic termination condition is met.
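The genetic loop described above (encode, evaluate, select, crossover/mutate, terminate) can be sketched as follows. This is a generic skeleton under stated assumptions: chromosomes are plain gene lists, selection is truncation selection with elitism, crossover is single-point and mutation is a gene swap; the patent's improved operators, and the crossover/mutation rates predicted by the second AI model, are not reproduced here.

```python
import random

def evolve(population, fitness_fn, crossover_rate=0.8, mutation_rate=0.05,
           max_iters=200, patience=20, eps=1e-6):
    """Minimal genetic loop (illustrative, not the patent's operators):
    select fitter individuals, recombine and mutate them, and stop on an
    iteration cap or when the best fitness stagnates for `patience` rounds."""
    best, best_fit, stagnant = None, float("inf"), 0
    for _ in range(max_iters):
        scored = sorted(population, key=fitness_fn)   # lower fitness = fitter here
        parents = scored[: len(scored) // 2]          # truncation selection
        children = [scored[0][:]]                     # elitism: keep current best
        while len(children) < len(population):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))         # single-point crossover
            child = a[:cut] + b[cut:] if random.random() < crossover_rate else a[:]
            if random.random() < mutation_rate:       # mutation: swap two genes
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = children
        fit = fitness_fn(min(population, key=fitness_fn))
        stagnant = stagnant + 1 if abs(best_fit - fit) < eps else 0
        if fit < best_fit:
            best, best_fit = min(population, key=fitness_fn), fit
        if stagnant >= patience:                      # set genetic termination condition
            break
    return best
```

In the embodiment, `fitness_fn` would be the weighted fitness of formula (1) evaluated on each decoded schedule, with the examination parameters supplied by the first AI model.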
Thus, the scheduling strategy can be dynamically adjusted based on the reinforcement learning model according to real-time data, adapting to dynamic changes in the laboratory; by combining the reinforcement learning model with the AI model, the global search capability of the genetic algorithm is used to find an optimal scheduling scheme, comprehensive optimization of laboratory order scheduling is achieved, and scheduling efficiency and resource utilization are improved. In some embodiments, generating the current scheduling strategy based on the reinforcement learning model includes: defining a state space comprising current laboratory order information, current laboratory personnel information and current laboratory equipment information, and defining an action space for assigning orders to set laboratory personnel and set laboratory equipment; and, according to a reward function, gradually learning an optimal scheduling strategy matched to the state space and the action space based on the reinforcement learning model, and determining that optimal strategy as the current scheduling strategy, wherein the reward function is determined according to examination factors of laboratory orders, the examination factors comprising one or more of order completion conditions, laboratory staff load balance and experimental equipment utilization rate. Therefore, by defining a state space and an action space, the reinforcement learnin
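The state/action/reward setup described above can be sketched with tabular Q-learning, one common reinforcement learning method (the application does not name a specific algorithm, so this choice, the state encoding, and the environment interface are all illustrative assumptions):

```python
import random
from collections import defaultdict

def q_learning(env_step, initial_state, actions, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.2):
    """Illustrative tabular Q-learning: states summarize pending orders,
    actions assign the next order to a (staff, equipment) target, and
    env_step(state, action) -> (next_state, reward, done) encodes the
    reward function (e.g. on-time completion, load balance)."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = initial_state, False
        while not done:
            if random.random() < epsilon:      # explore
                action = random.choice(actions)
            else:                              # exploit the learned values
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env_step(state, action)
            best_next = max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
            state = next_state
    return Q
```

The greedy policy over the learned `Q` table plays the role of the "current scheduling strategy" that seeds the chromosome coding; in practice the state space of a real laboratory would call for function approximation rather than a table.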