
CN-112699060-B - Data block management method, system and storage medium

CN112699060B

Abstract

The application discloses a data block management method, system, and storage medium. The method improves data block lookup efficiency through the coordinated use of a data block ID query queue, an LRU management queue, and a management node, and traverses or uses the LRU management queue and/or the data block ID query queue in a skip-list manner, thereby avoiding the data-movement performance problems of array-based approaches and improving data processing efficiency.

Inventors

  • XU JIAHONG
  • LI YIN
  • LI WEIQING
  • LIU BIN

Assignees

  • 深圳市茁壮网络股份有限公司

Dates

Publication Date
2026-05-05
Application Date
2019-10-23

Claims (11)

  1. A data block management method, applied to a shared memory, wherein the shared memory comprises a management area and a data area, the method comprising: dividing the data area into a plurality of data blocks, each of a preset size; after the shared memory is mapped into a current process, obtaining the head address of the shared memory in the current process, and taking the difference between the head address of a data block and the head address of the shared memory as the offset address of that data block; establishing a corresponding LRU management queue and a data block ID query queue for each physical disk, wherein the data block ID query queue comprises the IDs of the data blocks in order, and the LRU management queue uses a priority elimination algorithm; establishing a management node for each data block, wherein the management node comprises a plurality of first pointers, each first pointer pointing to an adjacent node in the LRU management queue, pointing to an adjacent node in the data block ID query queue, representing the ID of a data block in the data area, or representing a priority in the LRU management queue, wherein the addresses held by the first pointers are offsets relative to the head address of the shared memory, and the address pointer of a data block represents the offset address of that data block; establishing a manager for each physical disk, wherein the manager stores a plurality of second pointers, each second pointer pointing to the first or last node of the LRU management queue, pointing to the first or last node of the data block ID query queue, representing the number of nodes in the LRU management queue, representing the number of nodes in the data block ID query queue, or representing the ID of the physical disk; and traversing or using the data block ID query queue in a skip-list manner according to the management node and the manager.
  2. The method of claim 1, wherein the first pointer is an lru_prev pointer to the previous node in the LRU management queue, an lru_next pointer to the next node in the LRU management queue, a bid_prev pointer to the previous node in the data block ID query queue, a bid_next pointer to the next node in the data block ID query queue, an address pointer representing the offset address of a data block of the data area in the shared memory, a blockid field representing the ID of a data block in the physical disk, or a priv field representing a priority in the LRU management queue; the address pointer points to an address other than the head address of the shared memory.
  3. The method of claim 2, wherein the second pointer is an lru_head pointer to the first node of the LRU management queue, an lru_tail pointer to the last node of the LRU management queue, a bid_head pointer to the first node of the data block ID query queue, a bid_tail pointer to the last node of the data block ID query queue, an lru_size field representing the number of nodes in the LRU management queue, a bid_size field representing the number of nodes in the data block ID query queue, or a bid field representing the ID of the physical disk.
  4. The method of claim 3, wherein traversing the data block ID query queue in a skip-list manner according to the management node and the manager comprises: obtaining the bid_head pointer through the manager, and obtaining the offset address of the first node of the data block ID query queue through the bid_head pointer; adding the head address of the shared memory to that offset address to obtain the address of the first node of the data block ID query queue in the address space of the current process; obtaining, through the management node, the bid_next and bid_prev pointers of each node of the data block ID query queue in turn; and, from the obtained bid_next and bid_prev pointers, obtaining the offset addresses of the adjacent nodes, and adding each offset address to the head address of the shared memory to obtain the addresses of the adjacent nodes in the address space of the current process.
  5. The method of claim 3, wherein the LRU management queue using a priority elimination algorithm comprises: obtaining the lru_head pointer through the manager, and obtaining the offset address of the first node of the LRU management queue through the lru_head pointer; adding the head address of the shared memory to that offset address to obtain the address of the first node of the LRU management queue in the address space of the current process; obtaining, through the management node, the lru_next and lru_prev pointers of each node of the LRU management queue in turn; and, from the obtained lru_next and lru_prev pointers, obtaining the offset addresses of the adjacent nodes, and adding each offset address to the head address of the shared memory to obtain the addresses of the adjacent nodes in the address space of the current process.
  6. The method of claim 3, wherein, when the order of the nodes in the LRU management queue needs to be adjusted, traversing the data block ID query queue in a skip-list manner according to the management node and the manager comprises: querying, by the data block ID and in a skip-list manner, the data block ID query queue in which that ID is located; determining a target node through the data block ID query queue; obtaining the lru_next and lru_prev pointers of the target node; and, after connecting the node pointed to by the lru_next pointer with the node pointed to by the lru_prev pointer, placing the target node at the position of the first node of the LRU management queue, thereby adjusting the order of the nodes in the LRU management queue.
  7. The method of claim 3, wherein, when the data block corresponding to a target node needs to be accessed, using the data block ID query queue in a skip-list manner according to the management node and the manager comprises: obtaining the address pointer corresponding to the target node in the data block ID query queue; and adding the offset address represented by the address pointer to the head address of the shared memory, and accessing the data block corresponding to the target node at the resulting address.
  8. The method of claim 3, wherein, when data in the physical disk needs to be stored into the shared memory, traversing or using the LRU management queue and the data block ID query queue in a skip-list manner according to the management node and the manager comprises: selecting, as the storage node, a node at the tail of the LRU management queue whose priv field is 0; if no node with a priv field of 0 exists at the tail, decrementing the priv field of the tail node by 1 and judging whether it becomes 0; if not, placing the tail node at the position of the first node of the LRU management queue and starting a new elimination round, until a node with a priv field of 0 is found; after the storage node is determined, removing it from the LRU management queue and decrementing the lru_size field by 1; if the storage node also exists in the data block ID query queue, removing it from that queue and decrementing the bid_size field of the data block ID query queue by 1; determining the head address of the data block pointed to by the storage node through the address pointer of the storage node; storing the data from the physical disk into that data block according to the determined head address; after the data is stored, writing the ID of the data block into the blockid field of the storage node; inserting the storage node into the data block ID query queue according to the data block ID and incrementing the bid_size field by 1; and placing the storage node at the position of the first node of the LRU management queue and incrementing the lru_size field by 1.
  9. The method of claim 3, wherein, when target data in the shared memory needs to be read, traversing or using the LRU management queue and the data block ID query queue in a skip-list manner according to the management node and the manager comprises: determining, from the target data, the ID of the data block in the physical disk that corresponds to the target data, the offset of the data within the data block, and the size of the target data; obtaining, through the determined data block ID, the target node corresponding to that ID from the data block ID query queue; obtaining the data block corresponding to the target node through the address pointer of the target node, wherein the head address of the data block is the sum of the head address of the shared memory and the offset address represented by the address pointer; obtaining the head address of the target data within the data block as the sum of the head address of the data block and the offset within the data block; and reading the target data according to the head address of the target data.
  10. A data block management system, applied to a shared memory, wherein the shared memory comprises a management area and a data area, the system comprising: a data block dividing module for dividing the data area into a plurality of data blocks, each of a preset size; an address determining module for obtaining, after the shared memory is mapped into a current process, the head address of the shared memory in the current process, and taking the difference between the head address of a data block and the head address of the shared memory as the offset address of that data block; a queue creation module for creating a corresponding LRU management queue and a data block ID query queue for each physical disk, wherein the data block ID query queue comprises the IDs of the data blocks in order, and the LRU management queue uses a priority elimination algorithm; a management node module for establishing a management node for each data block, wherein the management node comprises a plurality of first pointers, each first pointer pointing to an adjacent node in the LRU management queue, pointing to an adjacent node in the data block ID query queue, representing the ID of a data block in the data area, or representing a priority in the LRU management queue, wherein the addresses held by the first pointers are offsets relative to the head address of the shared memory, and the address pointer of a data block represents the offset address of that data block; a manager module for establishing a manager for each physical disk, wherein the manager stores a plurality of second pointers, each second pointer pointing to the first or last node of the LRU management queue, pointing to the first or last node of the data block ID query queue, representing the number of nodes in the LRU management queue, representing the number of nodes in the data block ID query queue, or representing the ID of the physical disk; and a queue using module for traversing or using the data block ID query queue in a skip-list manner according to the management node and the manager.
  11. A storage medium having stored therein a program which, when executed, performs the data block management method according to any one of claims 1 to 9.

Description

Data block management method, system and storage medium

Technical Field

The present application relates to the field of computer application technologies, and in particular to a method, a system, and a storage medium for managing data blocks.

Background

On a server, all magnetic disks need to use a large shared buffer area for caching data, so that when a user reads data, the number of times the data is copied is reduced and the data processing efficiency of the server is improved. The service corresponding to each disk provides a process for management, so shared memory must be used to manage the cache. Memory allocation within the shared memory must dynamically assign its resources according to how hot each disk is: the hotter the disk, the more shared memory it receives as data cache. When the shared memory is used, it is divided into a management area and a data area; the data area is divided into a plurality of data blocks; an LRU (Least Recently Used) queue is maintained in the management area for each disk; the priorities of the data blocks in the data area are recorded in the LRU management queue; and when a data block needs to be eliminated, the data block with the lowest priority in the LRU management queue is selected for elimination. However, in actual use, existing methods for managing shared memory data blocks suffer from low data node lookup efficiency and low data processing efficiency.

Disclosure of Invention

In order to solve the above technical problems, the application provides a method, a system and a storage medium for managing data blocks, so as to improve the lookup efficiency and the data processing efficiency of shared memory data block management.
In order to achieve the above technical purpose, embodiments of the application provide the following technical scheme. A data block management method is applied to a shared memory, wherein the shared memory comprises a management area and a data area, the method comprising: dividing the data area into a plurality of data blocks, each of a preset size; after the shared memory is mapped into a current process, obtaining the head address of the shared memory in the current process, and taking the difference between the head address of a data block and the head address of the shared memory as the offset address of that data block; establishing a corresponding LRU management queue and a data block ID query queue for each physical disk, wherein the data block ID query queue comprises the IDs of the data blocks in order, and the LRU management queue uses a priority elimination algorithm; establishing a management node for each data block, wherein the management node comprises a plurality of first pointers, each first pointer pointing to an adjacent node in the LRU management queue, pointing to an adjacent node in the data block ID query queue, representing the ID of a data block in the data area, or representing a priority in the LRU management queue, wherein the addresses held by the first pointers are offsets relative to the head address of the shared memory, and the address pointer of a data block represents the offset address of that data block; establishing a manager for each physical disk, wherein the manager stores a plurality of second pointers, each second pointer pointing to the first or last node of the LRU management queue, pointing to the first or last node of the data block ID query queue, representing the number of nodes in the LRU management queue, representing the number of nodes in the data block ID query queue, or representing the ID of the physical disk; and traversing or using the data block ID query queue in a skip-list manner according to the management node and the manager. Optionally, the first pointer is an lru_prev pointer pointing to the previous node of the LRU management queue, or an lru_next pointer pointing to the next node of the LRU management queue, or a bid_prev pointer pointing to the previous node of the data block ID query queue, or a bid_next pointer pointing to the next node of the data block ID query queue, or an address pointer representing the offset address of a data block of the data area in the shared memory, or a blockid field representing the ID of a data block in the physical disk, or a priv field representing a priority in the LRU management queue; the address pointer points to an address other than the head address of the shared memory. Optionally, the second pointer is an lru_head pointer pointing to the first node of the LRU management queue, or an lru_tail pointer pointing to the last node of the LRU management queue, or a bid_head pointer poi