CN-116841454-B - Cache management method applied to memory and memory
Abstract
A cache management method applied to a memory, and the memory, are disclosed. The method comprises: dividing a hot-spot write cache and a random write cache on a cache unit; repeatedly accumulating, over a set period, the number of write operations corresponding to each index address in a primary mapping table; sorting all index addresses in the primary mapping table in descending order of write-operation count and marking the top-ranked index addresses as system hot spots; and, for a received write command, if its corresponding index address belongs to the system hot spots, allocating a hot-spot cell in the hot-spot write cache and loading all data of the secondary mapping table pointed to by that index address into the hot-spot cell, otherwise allocating a random write cell in the random write cache and writing data into the random write cell based on the write command. The method reduces load and unload operations on the secondary mapping tables, which helps improve the read/write performance and service life of the memory.
Inventors
- Chu Shikai
- Wang Chenluan
- Luo Xiaomin
- Chen Zhengliang
- Cai Quan
Assignees
- 联芸科技(杭州)股份有限公司 (Maxio Technology (Hangzhou) Co., Ltd.)
Dates
- Publication Date: 2026-05-12
- Application Date: 2022-03-25
Claims (10)
- 1. A cache management method applied to a memory, the memory comprising a controller and a storage medium, the controller comprising a cache unit storing a primary mapping table, the primary mapping table storing a plurality of index addresses pointing to a plurality of secondary mapping tables stored on the storage medium, the cache management method being performed by the controller and comprising: dividing a hot-spot write cache and a random write cache on the cache unit; accumulating, over a set period, the number of write operations corresponding to each index address in the primary mapping table, sorting all index addresses in the primary mapping table in descending order of write-operation count, and marking the top-ranked index addresses as system hot spots; and, for a received write command, if its corresponding index address belongs to the system hot spots, allocating a hot-spot cell in the hot-spot write cache and loading all data of the secondary mapping table pointed to by that index address into the hot-spot cell, otherwise allocating a random write cell in the random write cache and writing data into the random write cell based on the write command.
- 2. The cache management method as recited in claim 1, further comprising: dividing a read cache on the cache unit; and, upon receiving a read command, allocating a read cell from the read cache to load the secondary mapping table pointed to by the corresponding index address.
- 3. The cache management method of claim 2, wherein, for a received write command whose corresponding index address is a system hot spot but whose secondary mapping table is stored dispersedly in at least one of the random write cache, the read cache, and the storage medium, the data is read from the at least one of the random write cache, the read cache, and the storage medium and merged into the hot-spot cell.
- 4. The cache management method as recited in claim 2, wherein, in the step of allocating a read cell from the read cache to load the secondary mapping table pointed to by the corresponding index address, the secondary mapping table is further compressed.
- 5. The cache management method as recited in claim 4, wherein a fixed-size read cell is allocated on the cache unit to store the compressed secondary mapping table.
- 6. The cache management method as recited in claim 4, wherein in the read cache, non-fixed-size read cells are allocated according to the size of the compressed secondary mapping table to store the compressed secondary mapping table.
- 7. The cache management method as recited in claim 2, further comprising accumulating the access count of each secondary mapping table in the read cache over a set period, and, when read-cache space is insufficient, preferentially releasing the read cells with low access counts, in ascending order of access count.
- 8. The cache management method as recited in claim 1, further comprising performing data smoothing on the number of write operations accumulated based on the write command.
- 9. The cache management method according to any one of claims 1 to 8, wherein the memory is a DRAM-less solid state disk.
- 10. A memory, comprising: a controller, connected to a host, for receiving write data from the host; and a storage medium, connected to the controller, for storing the write data; wherein the controller further comprises a cache unit for storing a primary mapping table corresponding to the write data, the storage medium is further used for storing mapping-table data corresponding to the primary mapping table, and the controller is configured to execute the cache management method according to any one of claims 1-9.
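The hot-spot identification and write-routing scheme of claim 1 can be sketched in C as follows. This is a minimal toy model, not the patented implementation: the table size, counter width, and top-K hot-spot threshold are illustrative assumptions, and the function names (`record_write`, `refresh_hot_spots`, `route_write`) are hypothetical.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

#define NUM_INDEX 8  /* index addresses in the primary mapping table (toy size) */
#define NUM_HOT   2  /* how many top-ranked addresses count as system hot spots */

static unsigned write_count[NUM_INDEX]; /* per-index write-operation counters */
static bool is_hot[NUM_INDEX];          /* hot-spot flags for the current period */

/* Accumulate one write operation against an index address. */
static void record_write(size_t idx) { write_count[idx]++; }

/* qsort comparator: sort index addresses in descending order of write count. */
static int cmp_desc(const void *a, const void *b) {
    unsigned ca = write_count[*(const size_t *)a];
    unsigned cb = write_count[*(const size_t *)b];
    return (ca < cb) - (ca > cb);
}

/* At the end of each set period: rank index addresses by write count,
 * mark the top NUM_HOT as system hot spots, and reset the counters. */
static void refresh_hot_spots(void) {
    size_t order[NUM_INDEX];
    for (size_t i = 0; i < NUM_INDEX; i++) order[i] = i;
    qsort(order, NUM_INDEX, sizeof order[0], cmp_desc);
    for (size_t i = 0; i < NUM_INDEX; i++) is_hot[order[i]] = (i < NUM_HOT);
    for (size_t i = 0; i < NUM_INDEX; i++) write_count[i] = 0;
}

/* Routing decision for a write command: a hot-spot index address gets a cell
 * in the hot-spot write cache; any other address gets a random write cell. */
static const char *route_write(size_t idx) {
    return is_hot[idx] ? "hotspot" : "random";
}
```

Resetting the counters in `refresh_hot_spots` corresponds to the "set period" of the claim; claim 8's optional data smoothing (e.g. an exponential moving average instead of a hard reset) is omitted here for brevity.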
Description
Cache management method applied to memory and memory

Technical Field

The present invention relates to the field of data storage technologies, and in particular to a cache management method applied to a memory, and to the memory.

Background

A solid state disk (Solid State Drive, SSD) is a storage hard disk built from solid-state electronic memory chips and includes a controller and a storage medium. Currently, the most mainstream solid state disks use flash memory as the storage medium, for example a nonvolatile memory such as NAND flash. The solid state disk is widely used in various scenarios. When an SSD stores write data, a mapping table maintained by the FTL (Flash Translation Layer) is needed to record the mapping from host logical space addresses to flash physical addresses. Therefore, the SSD stores not only the write data written by the user but also the mapping table that maintains the mapping relation of that write data.

The current method by which a DRAM-less solid state disk (one without DRAM) manages the mapping table is to build a two-level mapping table: the primary mapping table stores the logical address groups corresponding to write data, i.e., it comprises the logical addresses corresponding to a plurality of data blocks; a plurality of secondary mapping tables are then built, each storing the logical-to-physical address mapping pairs for one data block. In a DRAM-less solid state disk, the primary mapping table is stored in a cache unit of the controller (the cache unit is usually SRAM), and the plurality of secondary mapping tables are stored on the storage medium.
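The two-level mapping structure described above can be illustrated with a minimal sketch. The table sizes are assumptions, and the secondary tables are modeled as in-memory C pointers for clarity; in a real DRAM-less SSD the primary table would hold flash locations, and a secondary table would have to be loaded into the cache unit before lookup.

```c
#include <assert.h>
#include <stddef.h>

#define L2_ENTRIES 256  /* logical-to-physical pairs per secondary table (assumed) */
#define L1_ENTRIES 16   /* index addresses in the primary table (assumed) */

/* One secondary mapping table: logical offset -> flash physical address. */
typedef struct { unsigned phys[L2_ENTRIES]; } l2_table_t;

/* Primary mapping table, kept in the controller's SRAM cache unit. Each entry
 * is an index address pointing at one secondary table. */
typedef struct { l2_table_t *l2[L1_ENTRIES]; } l1_table_t;

/* Translate a host logical block address into a physical address:
 * the high bits select the secondary table, the low bits select the entry. */
static unsigned ftl_lookup(const l1_table_t *l1, unsigned lba) {
    unsigned idx = lba / L2_ENTRIES;  /* which secondary table */
    unsigned off = lba % L2_ENTRIES;  /* entry within that table */
    return l1->l2[idx]->phys[off];
}
```

The cost the patent targets is hidden in the dereference `l1->l2[idx]`: when the secondary table is not resident in the cache unit, it must be loaded from flash, and a resident table may have to be unloaded to make room.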
However, when the controller receives a host access command, the secondary mapping tables to be accessed need to be loaded from the storage medium into the cache unit; and because the capacity of the cache unit is limited, when the cache unit is full, some secondary mapping tables need to be unloaded from the cache unit. It can therefore be understood that the loading and unloading of the secondary mapping tables are closely related to the access performance of the solid state disk, and that reducing load and unload operations on the secondary mapping tables helps improve the access performance of the solid state disk.

Disclosure of Invention

In view of the foregoing, an object of the present invention is to provide a cache management method applied to a memory, and the memory, which reduce load and unload operations on the secondary mapping tables by planning and managing the cache unit of the memory.

According to a first aspect of the present invention, there is provided a cache management method applied to a memory, the memory comprising a controller and a storage medium, the controller comprising a cache unit storing a primary mapping table, the primary mapping table storing a plurality of index addresses pointing to a plurality of secondary mapping tables stored on the storage medium, the cache management method being performed by the controller and comprising: dividing a hot-spot write cache and a random write cache on the cache unit; continuously accumulating, over a set period, the number of write operations corresponding to each index address in the primary mapping table, sorting all index addresses in the primary mapping table in descending order of write-operation count, and marking the top-ranked index addresses as system hot spots; and, for a received write command, if its corresponding index address belongs to the system hot spots, allocating a hot-spot cell in the hot-spot write cache and loading all data of the secondary mapping table pointed to by that index address into the hot-spot cell, otherwise allocating a random write cell in the random write cache and writing data into the random write cell based on the write command.

Optionally, the method further comprises dividing a read cache on the cache unit and, when a read command is received, allocating a read cell from the read cache to load the secondary mapping table pointed to by the corresponding index address.

Optionally, for a received write command, if its corresponding index address is a system hot spot but the secondary mapping table to which it points is already cached in the random write cache and/or the read cache, that secondary mapping table is merged into the hot-spot cell from the random write cache and/or the read cache.

Optionally, in the step of allocating a read cell from the read cache to load the secondary mapping table pointed to by the corresponding index address, the secondary mapping table is further compressed.

Optionally, a fixed-size read cell is allocated
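The access-count-based read-cache release policy of claim 7 (when read-cache space runs short, free the least-accessed read cells first) might be sketched as follows; the cell structure and function name are hypothetical, and real firmware would track cells in a sorted structure rather than scan linearly.

```c
#include <assert.h>
#include <stddef.h>

/* A read cell holds one (possibly compressed) secondary mapping table and
 * tracks how often that table has been accessed in the current period. */
typedef struct {
    int valid;              /* cell currently holds a secondary mapping table */
    unsigned access_count;  /* accesses accumulated over the set period */
} read_cell_t;

/* When read-cache space is insufficient, release the valid cell with the
 * smallest access count first (ties broken by lowest index). Returns the
 * index of the freed cell, or -1 if no cell was occupied. */
static int evict_coldest(read_cell_t *cells, size_t n) {
    int victim = -1;
    for (size_t i = 0; i < n; i++) {
        if (cells[i].valid &&
            (victim < 0 || cells[i].access_count < cells[victim].access_count))
            victim = (int)i;
    }
    if (victim >= 0) cells[victim].valid = 0;
    return victim;
}
```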