EP-4740087-A1 - BALANCING WEAR ACROSS MULTIPLE RECLAIM GROUPS

Abstract

Aspects of the present disclosure configure a memory sub-system controller to balance program-erase count (PEC) across multiple reclaim groups of a memory sub-system. The controller groups a set of memory components into a plurality of reclaim groups (RGs), each RG of the plurality of RGs comprising a subset of reclaim units (RUs). The controller receives a request to program a set of data into a first RG of the plurality of RGs and compares a first PEC of the first RG with a second PEC of a second RG of the plurality of RGs. The controller performs wear leveling operations for the set of data requested to be programmed into the first RG using one or more memory components associated with the second RG based on a result of comparing the first PEC of the first RG with the second PEC of the second RG.
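The flow summarized in the abstract — track a PEC per RG, compare the requested RG against a less-worn RG, and redirect the program when the gap is large — can be sketched as follows. This is an illustrative sketch only; the class and method names (`WearLeveler`, `choose_target`, `program`) and the threshold mechanism's details are assumptions, not taken from the patent text.

```python
# Hypothetical sketch of the abstract's wear-leveling flow. All names are
# illustrative; the patent does not specify this API.

class WearLeveler:
    """Tracks a program-erase count (PEC) per reclaim group (RG) and
    redirects a program request to a lower-wear RG when the PEC gap
    exceeds a threshold."""

    def __init__(self, rg_ids, threshold=100):
        # Table mapping each RG to its current PEC (cf. claim 3).
        self.pec_table = {rg: 0 for rg in rg_ids}
        self.threshold = threshold

    def choose_target(self, requested_rg):
        # Compare the requested RG's PEC against the least-worn RG.
        least_worn = min(self.pec_table, key=self.pec_table.get)
        gap = self.pec_table[requested_rg] - self.pec_table[least_worn]
        # Redirect only when the difference transgresses the threshold
        # (cf. claim 9); otherwise honor the host's requested RG.
        return least_worn if gap > self.threshold else requested_rg

    def program(self, requested_rg):
        target = self.choose_target(requested_rg)
        self.pec_table[target] += 1  # programming wears the target RG
        return target
```

In this sketch the host's placement request is honored by default, and only a sufficiently large PEC imbalance causes data destined for the first RG to be programmed into memory components of the second RG.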

Inventors

  • Hubbard, Daniel J.
  • Wei, Meng

Assignees

  • Micron Technology, Inc.

Dates

Publication Date
2026-05-13
Application Date
2024-06-19

Claims (20)

  1. A system comprising: a set of memory components of a memory sub-system; and at least one processing device operatively coupled to the set of memory components, the at least one processing device being configured to perform operations comprising: grouping the set of memory components into a plurality of reclaim groups (RGs), each RG of the plurality of RGs comprising a subset of reclaim units (RUs); receiving a request to program a set of data into a first RG of the plurality of RGs; comparing a first program-erase count (PEC) of the first RG with a second PEC of a second RG of the plurality of RGs; and performing wear leveling operations for the set of data requested to be programmed into the first RG using one or more memory components associated with the second RG based on a result of comparing the first PEC of the first RG with the second PEC of the second RG.
  2. The system of claim 1, wherein the memory sub-system includes Flexible Data Placement (FDP).
  3. The system of claim 1, the operations comprising: maintaining a table that stores a current PEC of each of the plurality of RGs, the first PEC being stored in the table in association with the first RG and the second PEC being stored in the table in association with the second RG.
  4. The system of claim 1, the wear leveling operations comprising: programming the set of data requested to be programmed into the first RG into the one or more memory components associated with the second RG.
  5. The system of claim 4, the operations comprising: regrouping at least a portion of the set of memory components based on the result of comparing the first PEC of the first RG with the second PEC of the second RG.
  6. The system of claim 5, the operations comprising: determining that a first group of the set of memory components is associated with the first RG; determining that a second group of the set of memory components is associated with the second RG; and modifying association between the first group of the set of memory components and the first RG to associate the second group of the set of memory components with the first RG.
  7. The system of claim 6, the operations comprising: modifying association between the second group of the set of memory components and the second RG to associate the first group of the set of memory components with the second RG; and maintaining association between a third group of the set of memory components with a third RG of the plurality of RGs.
  8. The system of claim 6, the operations comprising: programming the set of data requested to be programmed into the first RG into the second group of the set of memory components instead of the first group of the set of memory components.
  9. The system of claim 5, the operations comprising: determining that a difference between the first PEC of the first RG and the second PEC of the second RG transgresses a threshold; and initiating the regrouping of the at least the portion of the set of memory components in response to determining that the difference between the first PEC of the first RG and the second PEC of the second RG transgresses the threshold.
  10. The system of claim 1, the operations comprising: determining that a first group of the set of memory components associated with the first RG has higher wearing than a second group of the set of memory components associated with the second RG based on the result of comparing the first PEC of the first RG with the second PEC of the second RG; defining the first RG as a high-wearing RG and the second RG as a low-wearing RG in response to determining that the first group of the set of memory components associated with the first RG has higher wearing than the second group of the set of memory components associated with the second RG; and enlarging a size of an individual RU of the subset of RUs of the first RG by donating a portion of a second RU of the subset of RUs of the second RG to the individual RU.
  11. The system of claim 10, wherein the individual RU comprises a first set of planes of a first die, wherein the second RU comprises a second set of planes of a second die, the operations comprising: associating a block of an individual plane of the second set of planes with the individual RU to increase a quantity of blocks associated with the individual RU, wherein the second RU comprises blocks of a subset of the second set of planes that is fewer in quantity as a result of associating the block of the individual plane of the second set of planes with the individual RU.
  12. The system of claim 11, the operations comprising: after associating the block of the individual plane of the second set of planes with the individual RU to increase the quantity of blocks associated with the individual RU, determining that wear of the first RG matches wear of the second RG; and in response to determining that the wear of the first RG matches the wear of the second RG, reducing the size of the individual RU by re-associating the block of the individual plane with the second RU.
  13. The system of claim 10, the operations comprising: maintaining a tracking table that identifies the donated portion of the second RU; and removing the donated portion from the tracking table in response to determining that wear of the first RG matches wear of the second RG.
  14. The system of claim 10, the operations comprising: while the donated portion of the second RU continues to be donated to the first RG, performing garbage collection operations on the second RG excluding the donated portion; and performing garbage collection operations on the first RG including the donated portion of the second RU.
  15. The system of claim 14, the garbage collection operations comprising: folding valid data from one or more RUs of the first RG to one or more other RUs of the first RG.
  16. The system of claim 10, wherein the operations comprise: selecting a size of the donated portion of the second RU; and computing, based on the selected size, a target PEC representing a quantity of PECs needed to complete balancing PEC values of the first RG with the PEC values of the second RG.
  17. The system of claim 10, wherein the operations comprise: maintaining a queue of available blocks from the low-wearing RG available for use in expanding RUs of the high-wearing RG.
  18. The system of claim 1, wherein each RG is associated with a different die of a plurality of dies of the memory sub-system.
  19. A method comprising: grouping a set of memory components into a plurality of reclaim groups (RGs), each RG of the plurality of RGs comprising a subset of reclaim units (RUs); receiving a request to program a set of data into a first RG of the plurality of RGs; comparing a first program-erase count (PEC) of the first RG with a second PEC of a second RG of the plurality of RGs; and performing wear leveling operations for the set of data requested to be programmed into the first RG using one or more memory components associated with the second RG based on a result of comparing the first PEC of the first RG with the second PEC of the second RG.
  20. A non-transitory computer-readable storage medium comprising instructions that, when executed by at least one processing device, cause the at least one processing device to perform operations comprising: grouping a set of memory components into a plurality of reclaim groups (RGs), each RG of the plurality of RGs comprising a subset of reclaim units (RUs); receiving a request to program a set of data into a first RG of the plurality of RGs; comparing a first program-erase count (PEC) of the first RG with a second PEC of a second RG of the plurality of RGs; and performing wear leveling operations for the set of data requested to be programmed into the first RG using one or more memory components associated with the second RG based on a result of comparing the first PEC of the first RG with the second PEC of the second RG.
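The RU "donation" mechanism of claims 10 through 13 — enlarging an RU of the high-wearing RG by borrowing a block from an RU of the low-wearing RG, recording the loan in a tracking table, and reversing the loan once wear has balanced — can be sketched as below. All names (`RuDonationManager`, `donate`, `rebalance_done`) and the dict-based RU representation are assumptions for illustration, not structures defined in the patent.

```python
# Hypothetical sketch of the RU donation mechanism of claims 10-13.
# An RU is modeled here as a dict with a list of block identifiers.

class RuDonationManager:
    """Enlarges an RU of a high-wearing RG by borrowing a block from an
    RU of a low-wearing RG, records the loan in a tracking table
    (cf. claim 13), and returns the block once wear has balanced
    (cf. claim 12)."""

    def __init__(self):
        self.tracking = []  # loans: (donor RU id, recipient RU id, block)

    def donate(self, donor_ru, recipient_ru):
        block = donor_ru["blocks"].pop()       # shrink the donor RU
        recipient_ru["blocks"].append(block)   # enlarge the recipient RU
        self.tracking.append((id(donor_ru), id(recipient_ru), block))

    def rebalance_done(self, donor_ru, recipient_ru):
        # Wear now matches: re-associate each loaned block with the donor
        # RU and remove the loan from the tracking table.
        loans = [loan for loan in self.tracking
                 if loan[0] == id(donor_ru) and loan[1] == id(recipient_ru)]
        for loan in loans:
            recipient_ru["blocks"].remove(loan[2])
            donor_ru["blocks"].append(loan[2])
            self.tracking.remove(loan)
```

During the loan, a garbage collector following claim 14 would treat the borrowed block as part of the recipient RG's RUs and exclude it from collection passes over the donor RG.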

Description

BALANCING WEAR ACROSS MULTIPLE RECLAIM GROUPS

PRIORITY APPLICATION

[0001] This application claims the benefit of priority to U.S. Provisional Application Serial Number 63/525,181, filed July 6, 2023, which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

[0002] Embodiments of the disclosure relate generally to memory sub-systems and, more specifically, to providing adaptive media management for memory components, such as memory dies.

BACKGROUND

[0003] A memory sub-system can be a storage system, such as a solid-state drive (SSD), and can include one or more memory components that store data. The memory components can be, for example, non-volatile memory components and volatile memory components. In general, a host system can utilize a memory sub-system to store data on the memory components and to retrieve data from the memory components. Some memory sub-systems arrange their memory components into reclaim groups (RGs), each of which includes sets of reclaim units (RUs). Such memory sub-systems enable a host to control the physical location (e.g., by RG and/or RU) into which data is programmed.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] The present disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure.

[0005] FIG. 1 is a block diagram illustrating an example computing environment including a memory sub-system, in accordance with some embodiments of the present disclosure.

[0006] FIG. 2 is a block diagram of an example media operations manager, in accordance with some implementations of the present disclosure.

[0007] FIG. 3 is a block diagram of an example RG system implementation of the memory sub-system, in accordance with some implementations of the present disclosure.

[0008] FIGS. 4 and 5 are block diagrams of examples of RG wear leveling operations, in accordance with some implementations of the present disclosure.

[0009] FIG. 6 is a flow diagram of an example method to perform RG balancing (wear leveling), in accordance with some implementations of the present disclosure.

[0010] FIG. 7 is a block diagram illustrating a diagrammatic representation of a machine in the form of a computer system within which a set of instructions can be executed for causing the machine to perform any one or more of the methodologies discussed herein, in accordance with some embodiments of the present disclosure.

DETAILED DESCRIPTION

[0011] Aspects of the present disclosure configure a system component, such as a memory sub-system controller, to perform program-erase count (PEC) and/or wear leveling operations. The memory sub-system controller can compare wear and/or PEC of different RGs of the memory sub-system to selectively control performing wear leveling operations. Based on the PEC and/or wear of different RGs, the memory sub-system controller can selectively distribute memory operations across the memory components so that data is programmed using different physical memory components than those initially assigned or associated with an individual RG that is the subject of a request to program data. This ensures that performance of the memory system remains optimal by increasing the current PECs of different memory components at different rates until the PECs of the memory components reach a balance (e.g., are equal to each other or correspond to a target PEC). At that point, the different components can be programmed according to the default or previous assignments. This improves the overall efficiency of operating the memory sub-system.

[0012] A memory sub-system can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of storage devices and memory modules are described below in conjunction with FIG. 1. In general, a host system can utilize a memory sub-system that includes one or more memory components, such as memory devices (e.g., memory dies or planes across multiple memory dies) that store data. The host system can send access requests (e.g., write command, read command) to the memory sub-system, such as to store data at the memory sub-system and to read data from the memory sub-system. The data (or set of data) specified by the host is hereinafter referred to as "host data," "application data," or "user data." In some cases, the memory sub-system includes an optional feature, such as a Flexible Data Placement (FDP) feature that defines RGs and RUs. This protocol enables remote hosts to control data storage on the memory sub-systems over a network.

[0013] The memory sub-system can initiate media management operations, such as a write operation, on host data that is stored on a memory device. For example, firmware of the memory sub-system may re-write previously written host data from a location on a memory device to a new location as part of garbage collection management operations. The data that is re-written, for example as initiated by the firm