CN-122019124-A - Memory processing method and device and computer storage medium
Abstract
The application provides a memory processing method, a memory processing device, and a computer storage medium. The memory processing method comprises the steps of obtaining a read-write request; determining whether the target data accessed by the read-write request is hot data; and, if so, completing the read-write request using a hot data memory cache pool, wherein the hot data memory cache pool is located in the block device driver layer. With this method, read accesses are completed entirely in the driver-layer memory cache pool, forming a fast path that does not touch the physical storage device. This path eliminates physical I/O operations and removes the performance bottleneck caused by read latency and seek time in the prior art.
Inventors
- LIU DANDAN
Assignees
- Hefei Jiefa Technology Co., Ltd. (合肥杰发科技有限公司)
Dates
- Publication Date
- 20260512
- Application Date
- 20251208
Claims (10)
- 1. A memory processing method, characterized by comprising the following steps: acquiring a read-write request; determining whether the target data accessed by the read-write request is hot data; and, if so, completing the read-write request using a hot data memory cache pool, wherein the hot data memory cache pool is located in a block device driver layer.
- 2. The memory processing method of claim 1, wherein, when the read-write request is a read request, completing the read-write request using the hot data memory cache pool comprises: locating the buffer area of the hot data in the hot data memory cache pool; and reading the data directly from the buffer area to complete the read request.
- 3. The memory processing method of claim 1, wherein, when the read-write request is a write request, completing the read-write request using the hot data memory cache pool comprises: locating the buffer area of the hot data in the hot data memory cache pool; allocating a new cache page from the hot data memory cache pool; writing the write data to the corresponding location of the new cache page; updating the buffer pointer of the hot data to point to the new cache page; releasing the old cache page previously pointed to by the hot data back to the hot data memory cache pool; and completing the write request directly in the hot data memory cache pool and returning a success status to the upper layer.
- 4. The memory processing method of claim 1, wherein determining whether the target data accessed by the read-write request is hot data comprises: parsing the read-write request and extracting its characteristic information, the characteristic information comprising a target logical block address, a request size, an operation type and/or a timestamp; querying a hot data management linked list to determine whether the target data accessed by the read-write request exists in the hot data management linked list; if so, determining that the target data is hot data; and if not, determining whether the target data accessed by the read-write request is hot data according to the characteristic information.
- 5. The memory processing method of claim 4, wherein determining whether the target data accessed by the read-write request is hot data according to the characteristic information comprises: obtaining a sliding time window maintained for the target logical block address, the sliding time window having a predefined window length; counting, using the timestamps, the number of accesses received within the sliding time window and calculating an average access frequency; and comparing the average access frequency with a preset frequency threshold, and identifying the data block corresponding to a logical block address that meets the condition as hot data.
- 6. The memory processing method of claim 5, further comprising: determining whether the hot data exists in the hot data management linked list; if so, updating the last-access timestamp of the hot data node and increasing its access count; and if not, allocating a contiguous memory space matching the request size from a pre-allocated hot data memory cache pool, creating a new hot data node, and inserting the node into the hot data linked list.
- 7. The memory processing method of claim 1, further comprising: triggering a periodic scanning task at a preset time interval to scan the hot data management linked list; decrementing the access count of each scanned hot data node by a decay factor; comparing the decayed access count with a preset cold data threshold; and, if the access count is below the cold data threshold, marking the node as cold data and removing it from the hot data management linked list as a node to be evicted.
- 8. The memory processing method of claim 1, further comprising: storing information on the currently identified hot data in a nonvolatile storage medium, the hot data information comprising at least the logical block address of the hot data; reading the hot data information from the nonvolatile storage medium when the system is next started; and, according to the read information, pre-establishing the corresponding hot data nodes in the hot data memory cache pool so that hot data cache service is available immediately at system startup.
- 9. A memory processing device, characterized by comprising a read-write request processing module, a hot data determination module, and a read-write driver module; the read-write request processing module is configured to acquire a read-write request; the hot data determination module is configured to determine whether the target data accessed by the read-write request is hot data; and the read-write driver module is configured to complete the read-write request using a hot data memory cache pool, wherein the hot data memory cache pool is located at the block device driver layer.
- 10. A memory processing device and/or a computer storage medium, wherein the memory processing device comprises a memory and a processor coupled to the memory, the memory being configured to store program data and the processor being configured to execute the program data to implement the memory processing method of any one of claims 1 to 8; and/or the computer storage medium is configured to store program data which, when executed by a computer, implements the memory processing method of any one of claims 1 to 8.
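The core decision of claims 1 and 4 — serve a request from the pool only after confirming the target is hot, querying the management list before falling back to feature-based classification — can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the `Request` fields and function names are assumptions.

```python
from collections import namedtuple

# Illustrative record of the "characteristic information" of claim 4:
# target logical block address, request size, operation type, timestamp.
Request = namedtuple("Request", "lba size op timestamp")

def is_hot(request, hot_list, classify_by_features):
    """Claim 4 order: query the hot data management list first; only on
    a miss, fall back to classifying by the extracted features."""
    if request.lba in hot_list:            # already tracked as hot data
        return True
    return classify_by_features(request)   # feature-based decision
```

In a real driver, `hot_list` would be the hot data management linked list and `classify_by_features` the sliding-window check of claim 5.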
Description
Memory processing method and device and computer storage medium

Technical Field

The present application relates to the field of storage technologies, and in particular to a memory processing method, a memory processing device, and a computer storage medium.

Background

In modern storage systems, storage devices (e.g., SSD, eMMC, UFS) carry a heavy load of data read-write tasks. As applications increasingly demand high-performance, low-latency access, the conventional "write to device, then read back" data path faces a serious performance bottleneck. In particular, for hot data, i.e., data that is frequently accessed within a short time, performing an I/O operation on the storage device for every access causes problems such as increased latency and accelerated device wear (e.g., reduced NAND flash lifetime).

Disclosure of Invention

To solve the above technical problems, the present application provides a memory processing method, a memory processing device, and a computer storage medium. The memory processing method comprises: obtaining a read-write request; determining whether the target data accessed by the read-write request is hot data; and, if so, completing the read-write request using a hot data memory cache pool, wherein the hot data memory cache pool is located in the block device driver layer. When the read-write request is a read request, completing it using the hot data memory cache pool comprises locating the buffer area of the hot data in the pool and reading the data directly from that buffer area to complete the read request.
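The read fast path described above amounts to a memory lookup keyed by logical block address: a hit is served without any physical I/O, and a miss falls through to the normal block-device path. A minimal sketch, with class and field names that are assumptions rather than the patent's own:

```python
class HotReadCache:
    """Illustrative driver-layer read fast path over hot data pages."""
    def __init__(self):
        self.pages = {}              # logical block address -> cached page

    def read(self, lba):
        page = self.pages.get(lba)   # locate the hot data's buffer area
        if page is None:
            return None              # not cached: caller must access the device
        return bytes(page)           # served entirely from memory, no I/O
```

A `None` result here represents the point where a real driver would submit the request to the physical storage device instead.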
When the read-write request is a write request, completing it using the hot data memory cache pool comprises: locating the buffer area of the hot data in the pool; allocating a new cache page from the pool; writing the write data to the corresponding location of the new cache page; updating the buffer pointer of the hot data to point to the new cache page; releasing the old cache page previously pointed to by the hot data back to the pool; and completing the write request directly in the pool while returning a success status to the upper layer. Determining whether the target data accessed by the read-write request is hot data comprises: parsing the read-write request and extracting its characteristic information, which comprises a target logical block address, a request size, an operation type and/or a timestamp; querying a hot data management linked list to check whether the target data already exists in it; if so, determining that the target data is hot data; and if not, deciding according to the characteristic information.
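The write path described above — allocate a fresh page, fill it, repoint the hot data's buffer, and return the old page to the pool — keeps the pool size bounded while never writing to the device. A sketch under assumed names and page sizes:

```python
class HotWritePool:
    """Illustrative write path over a pre-allocated cache page pool."""
    def __init__(self, n_pages, page_size=16):
        self.free = [bytearray(page_size) for _ in range(n_pages)]
        self.pages = {}                       # LBA -> current cache page

    def write(self, lba, data):
        new_page = self.free.pop()            # allocate a new cache page
        new_page[:len(data)] = data           # write into the new page
        old_page = self.pages.get(lba)
        self.pages[lba] = new_page            # repoint the buffer pointer
        if old_page is not None:
            self.free.append(old_page)        # release the old page to the pool
        return True                           # success status to the upper layer
```

Swapping pages rather than overwriting in place mirrors the claim's pointer-update step; a concurrent reader holding the old page is unaffected by the new write.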
Determining hot data according to the characteristic information comprises: obtaining the sliding time window maintained for the target logical block address; counting, using the timestamps, the number of accesses received within the sliding time window and calculating the average access frequency; comparing the average access frequency with a preset frequency threshold; and identifying the data block whose logical block address meets the condition as hot data. The memory processing method further comprises: checking whether the hot data exists in the hot data management linked list; if so, updating the last-access timestamp of the hot data node and increasing its access count; if not, allocating a contiguous memory space matching the request size from the pre-allocated hot data memory cache pool, creating a new hot data node, and inserting it into the hot data linked list. The method further comprises: triggering a periodic scanning task at a preset time interval to scan the hot data management linked list; decrementing the access count of each scanned hot data node by a decay factor; comparing the decayed access count with a preset cold data threshold; and, if the access count falls below the cold data threshold, marking the node as cold data and removing it from the hot data management linked list as a node to be evicted. The method further comprises: storing information on the currently identified hot data in a nonvolatile storage medium, the information comprising at least the logical block address of the hot data; reading the hot data information from the nonvolatile storage medium when the system is next started; and, according to the read information, pre-establishing the corresponding hot data nodes in the hot data memory cache pool so that hot data caching is available immediately at system startup.
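The sliding-window identification described above can be sketched per logical block address with a deque of access timestamps; the window length and frequency threshold values below are illustrative assumptions, not values from the patent.

```python
from collections import deque

class SlidingWindow:
    """Illustrative per-LBA sliding time window for hot data detection."""
    def __init__(self, window_len=10.0, freq_threshold=0.5):
        self.window_len = window_len            # predefined window length (s)
        self.freq_threshold = freq_threshold    # preset frequency threshold
        self.stamps = deque()                   # access timestamps in window

    def record(self, ts):
        self.stamps.append(ts)
        while self.stamps and ts - self.stamps[0] > self.window_len:
            self.stamps.popleft()               # slide old accesses out

    def is_hot(self):
        # average access frequency over the window vs. the threshold
        return len(self.stamps) / self.window_len >= self.freq_threshold
```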
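The node update-or-create step described above (claim 6) can be sketched as follows; `alloc_space` stands in for allocation from the pre-allocated pool, and the dict-based node layout is an assumption in place of the patent's linked-list node.

```python
import time

def touch_hot_node(lba, size, hot_nodes, alloc_space):
    """Update an existing hot data node, or create and insert a new one."""
    node = hot_nodes.get(lba)
    if node is not None:
        node["last_access"] = time.time()   # refresh last-access timestamp
        node["count"] += 1                  # increase the access count
    else:
        hot_nodes[lba] = {                  # create and insert a new node
            "buf": alloc_space(size),       # contiguous space matching size
            "last_access": time.time(),
            "count": 1,
        }
    return hot_nodes[lba]
```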
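The periodic decay scan described above (claim 7) reduces every node's access count and evicts those that fall below the cold threshold, so data that was hot but has gone quiet is aged out. A sketch with illustrative decay and threshold values:

```python
def decay_scan(hot_nodes, decay=1, cold_threshold=2):
    """One pass of the periodic scan: decay counts, evict cold nodes."""
    evicted = []
    for lba in list(hot_nodes):
        node = hot_nodes[lba]
        node["count"] = max(0, node["count"] - decay)   # apply decay factor
        if node["count"] < cold_threshold:              # now cold data
            del hot_nodes[lba]       # remove from the management list
            evicted.append(lba)      # queue as a node to be evicted
    return evicted
```

A real driver would trigger this from a timer at the preset interval and return the evicted nodes' pages to the cache pool.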
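The persistence step described above (claim 8) saves at least the hot data's logical block addresses and pre-establishes the matching nodes on the next boot. In this sketch a JSON file stands in for the nonvolatile medium; the file format and function names are assumptions.

```python
import json

def save_hot_info(hot_lbas, path):
    """Persist the LBAs of the currently identified hot data."""
    with open(path, "w") as f:
        json.dump(sorted(hot_lbas), f)

def preload_hot_nodes(path, alloc_page, page_size=4096):
    """On next startup, read the info back and pre-establish the
    corresponding nodes in the hot data memory cache pool."""
    with open(path) as f:
        lbas = json.load(f)
    return {lba: alloc_page(page_size) for lba in lbas}
```

Pre-warming this way means the cache serves hot data immediately at startup instead of waiting for the sliding-window detector to re-identify it.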