CN-122018811-A - Universal traffic debounce (anti-shake) processing method based on a distributed cache
Abstract
The invention discloses a universal traffic debounce (anti-shake) processing method based on a distributed cache, which comprises the following steps: S1, when an event arrives, storing or updating its event record in the distributed cache keyed by the event identifier (id); S2, obtaining candidate events in batches from the distributed cache; S3, judging each candidate event against an idle-time threshold and a maximum-wait-time threshold, and adding it to the to-be-processed event list when either threshold is exceeded; S4, intercepting the service method with an aspect and injecting the to-be-processed event list into the service processing logic before the service executes; S5, after the batch of candidate events has been processed, safely cleaning up event records that were not updated, by comparing the latest update time before and after processing. The invention greatly reduces the volume of database writes and realizes automatic batch merging of high-frequency behaviors.
Inventors
- PENG WEN
Assignees
- 上海捷晓信息技术有限公司 (Shanghai Jiexiao Information Technology Co., Ltd.)
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2026-02-06
Claims (9)
- 1. A universal traffic debounce processing method based on a distributed cache, characterized by comprising the following steps: S1, when an event arrives, storing or updating an event record in the distributed cache keyed by the event identifier (id), recording a start time on the event's first arrival and a latest update time on each subsequent update of the corresponding event; S2, obtaining candidate events in batches from the distributed cache, reading a plurality of event records subject to a preset batch-size limit; S3, judging each candidate event against an idle-time threshold and a maximum-wait-time threshold, and adding it to the to-be-processed event list when either threshold is exceeded; S4, before the service executes, intercepting the service method with an aspect and injecting the to-be-processed event list into the service processing logic; S5, after the batch of candidate events has been processed, safely cleaning up event records that were not updated, by comparing the latest update time before and after processing, while event records that were updated are retained and their start time is reset.
- 2. The universal traffic debounce processing method based on a distributed cache as claimed in claim 1, wherein the idle-time threshold is used to judge whether an event should be processed after receiving no update within the specified time, and the maximum-wait-time threshold limits how long an event may wait in the queue, ensuring that every event is eventually processed.
- 3. The universal traffic debounce processing method based on a distributed cache according to claim 1, wherein candidate events are obtained in batches by incrementally scanning the keys of a distributed Map and loading the corresponding event records in a single getAll operation, so as to reduce the number of network round trips.
- 4. The universal traffic debounce processing method based on a distributed cache as claimed in claim 3, wherein the size of the candidate event set is a preset multiple of batchSize, the enlarged candidate range improving screening precision.
- 5. The universal traffic debounce processing method based on a distributed cache as claimed in claim 4, wherein a time-fairness screening mechanism screens and sorts the batch of candidate events before they are processed; specifically, the candidate events are sorted in ascending order of the start time of their event records, so that events with earlier start times are added to the to-be-processed list first.
- 6. The universal traffic debounce processing method based on a distributed cache as claimed in claim 5, wherein the ascending sort is implemented by a comparator ordering the candidate set by the start time of the event records; the ascending order prevents event starvation, ensuring that events with a low update frequency but an early entry time obtain a processing opportunity in time.
- 7. The universal traffic debounce processing method based on a distributed cache according to claim 1, wherein step S4 is realized by an aspect-oriented (AOP) technique: the debounce screening logic is invoked automatically before any method marked with a preset annotation executes, and the to-be-processed event list is passed into the service method as a parameter, so that the service logic need not handle the debounce flow explicitly.
- 8. The universal traffic debounce processing method based on a distributed cache according to claim 1, wherein the safe cleanup of event records that were not updated is implemented by comparing, for each event to be cleaned, the latest update time read from the cache before processing with the latest update time in the cache at cleanup time; if the two update times are identical, the corresponding event record is deleted, and if they differ, the event record is retained and its start time is reset to the current time.
- 9. The universal traffic debounce processing method based on a distributed cache according to claim 8, wherein the reset start time serves as the beginning of the event's next debounce period.
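The record-keeping and threshold test of claims 1-2 can be sketched in a few lines of Python. This is a minimal illustration, not the patented implementation: a plain dict stands in for the distributed cache, and all names (`record_event`, `select_pending`, field keys) are invented for the example.

```python
# Sketch of steps S1 and S3: a dict simulates the distributed cache.
# Each record keeps the first-arrival time and the latest update time.
import time

cache = {}  # event_id -> {"start_time": ..., "last_update": ...}

def record_event(event_id, now=None):
    """S1: store a new event record, or refresh last_update on repeat arrival."""
    now = time.time() if now is None else now
    rec = cache.get(event_id)
    if rec is None:
        cache[event_id] = {"start_time": now, "last_update": now}
    else:
        rec["last_update"] = now  # start_time stays fixed at first arrival

def select_pending(idle_threshold, max_wait, now=None):
    """S3: an event is due when it has been idle past idle_threshold,
    or has waited past max_wait since its first arrival."""
    now = time.time() if now is None else now
    pending = []
    for event_id, rec in cache.items():
        idle = now - rec["last_update"]
        waited = now - rec["start_time"]
        if idle >= idle_threshold or waited >= max_wait:
            pending.append(event_id)
    return pending
```

The two thresholds serve different purposes: the idle test merges bursts of updates into one processing pass, while the max-wait test guarantees that a constantly-updated event is still flushed eventually.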
Description
Universal traffic debounce processing method based on distributed cache
Technical Field
The invention belongs to the technical field of computer software, and particularly relates to a universal traffic debounce (anti-shake) processing method based on a distributed cache.
Background
In high-concurrency systems, user actions trigger background business logic at very high frequency, for example course learning records, action reporting, and counting operations. If every trigger directly performs persistence or complex logic computation, the following problems arise:
1. The database write volume increases dramatically;
2. Server execution overhead becomes excessive;
3. High-frequency updates to hot-spot IDs cause severe lock contention;
4. Overall system throughput drops.
Conventional solutions mainly comprise:
1. Asynchronous writing through queues, which still cannot prevent high-frequency events from piling up in the queue;
2. Fixed-time-window aggregation, which cannot handle long-tail events;
3. Simple in-process debouncing, which is easy to use but struggles with high-concurrency updates and data consistency in a distributed scenario;
4. Schemes without a fairness mechanism, under which some events may go unprocessed for a long period (event starvation).
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and provide a universal traffic debounce processing method based on a distributed cache, which greatly reduces the volume of database writes, realizes automatic batch merging of high-frequency behaviors, processes events fairly so that old events are not squeezed out by frequent updates, keeps data reliable and consistent under high concurrency, and avoids false deletion through its cleanup mechanism.
In order to achieve the above object, the present invention provides the following technical solution. A universal traffic debounce processing method based on a distributed cache comprises the following steps: S1, when an event arrives, storing or updating an event record in the distributed cache keyed by the event identifier (id), recording a start time on the event's first arrival and a latest update time on each subsequent update of the corresponding event; S2, obtaining candidate events in batches from the distributed cache, reading a plurality of event records subject to a preset batch-size limit; S3, judging each candidate event against an idle-time threshold and a maximum-wait-time threshold, and adding it to the to-be-processed event list when either threshold is exceeded; S4, before the service executes, intercepting the service method with an aspect and injecting the to-be-processed event list into the service processing logic; S5, after the batch of candidate events has been processed, safely cleaning up event records that were not updated, by comparing the latest update time before and after processing, while event records that were updated are retained and their start time is reset. Further, the idle-time threshold is used to judge whether an event should be processed after receiving no update within the specified time, and the maximum-wait-time threshold limits how long an event may wait in the queue, ensuring that every event is eventually processed.
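Step S4 (aspect interception of an annotated service method) would typically be realized with Java AOP; as a rough language-neutral analogue, a Python decorator can play the same role. This is an illustrative sketch only, with invented names (`debounce_inject`, `flush_events`), not the patent's implementation.

```python
# Rough analogue of step S4: in Java this would be an AOP aspect bound to a
# preset annotation; here a decorator runs the debounce screening first and
# injects the to-be-processed event list as the method's first parameter,
# so the business logic never handles the debounce flow itself.
import functools

def debounce_inject(select_pending):
    """select_pending: a callable returning the current to-be-processed list."""
    def wrapper(func):
        @functools.wraps(func)
        def inner(*args, **kwargs):
            pending = select_pending()             # screening before the service runs
            return func(pending, *args, **kwargs)  # list injected as a parameter
        return inner
    return wrapper

# Usage: the business method only ever sees the already-screened event list.
@debounce_inject(lambda: ["evt-1", "evt-2"])
def flush_events(events):
    return len(events)
```

The design point mirrors the claim: the interception layer owns the screening, so service code stays free of debounce concerns and the mechanism is reusable across methods.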
Further, candidate events are obtained in batches by incrementally scanning the keys of a distributed Map and loading the corresponding event records in a single getAll operation, so as to reduce the number of network round trips. Further, the size of the candidate event set is a preset multiple of batchSize, the enlarged candidate range improving screening precision. Further, a time-fairness screening mechanism screens and sorts the batch of candidate events before they are processed: the candidate events are sorted in ascending order of the start time of their event records, so that events with earlier start times are added to the to-be-processed list first. Further, the ascending sort is implemented by a comparator ordering the candidate set by the start time of the event records; the ascending order prevents event starvation, ensuring that events with a low update frequency but an early entry time obtain a processing opportunity in time. Further, step S4 is realized by an aspect-oriented technique: the debounce screening logic is invoked automatically before any method marked with a preset annotation executes, and the to-be-processed event list is passed into the service method as a parameter, so that the service logic need not handle the debounce flow explicitly.
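The fairness sort and the compare-then-delete safe cleanup described above can be sketched as follows. Again the dict stands in for the distributed Map, and the `snapshot` of pre-processing update times is an assumed bookkeeping structure introduced for the example.

```python
# Sketch of the time-fairness sort and the safe cleanup.
# `cache` maps event_id -> {"start_time": ..., "last_update": ...}.

def pick_batch(cache, batch_size):
    """Sort candidates in ascending order of start_time so the oldest events
    are processed first (starvation prevention), then truncate to batch_size."""
    ordered = sorted(cache.items(), key=lambda kv: kv[1]["start_time"])
    return [event_id for event_id, _ in ordered[:batch_size]]

def safe_cleanup(cache, snapshot, now):
    """snapshot: event_id -> last_update captured before processing began.
    Delete a record only if it was not touched during processing; otherwise
    keep it and reset start_time so the next debounce period starts now."""
    for event_id, seen_update in snapshot.items():
        rec = cache.get(event_id)
        if rec is None:
            continue
        if rec["last_update"] == seen_update:
            del cache[event_id]        # unchanged during processing: safe to delete
        else:
            rec["start_time"] = now    # updated concurrently: retain, restart period
```

The update-time comparison is what prevents false deletion under concurrency: an event that received a new arrival while its batch was being processed survives the cleanup and begins a fresh debounce period.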