US-12619541-B2 - Concurrent page cache resource access in a multi-plane memory device
Abstract
A memory device includes a first memory array, a second memory array, and a page cache circuit coupled to the first memory array and the second memory array. The page cache circuit includes at least one set of concurrent resources and at least one shared resource, wherein the at least one set of concurrent resources are asynchronously and concurrently accessible by the first memory array and the second memory array, and wherein the at least one shared resource is accessible in a time-multiplexed fashion by the first memory array and the second memory array.
Inventors
- Sundararajan Sankaranarayanan
- Eric N. Lee
Assignees
- MICRON TECHNOLOGY, INC.
Dates
- Publication Date: 2026-05-05
- Application Date: 2024-05-22
Claims (20)
- 1 . A memory device comprising: a first memory array; a second memory array; and a page cache circuit disposed within the memory device and comprising bitline bias circuitry, at least one sense amplifier, and a plurality of registers, wherein the page cache circuit is directly coupled to the first memory array and the second memory array to temporarily store data being read from or written to the first memory array and the second memory array, wherein the page cache circuit comprises at least one set of concurrent resources and at least one shared resource, wherein the at least one set of concurrent resources are concurrently accessible by the first memory array and the second memory array to perform either a same type or different types of memory access operations respectively, wherein the at least one shared resource comprises a cache register and one or more data registers disposed within the page cache circuit, and wherein the cache register and the one or more data registers are accessible in a time-multiplexed fashion by the first memory array and the second memory array such that the cache register and the one or more data registers perform operations on the first memory array and the second memory array successively in time.
- 2 . The memory device of claim 1 , wherein the at least one set of concurrent resources comprises a first bitline bias circuit associated with the first memory array and a second bitline bias circuit associated with the second memory array.
- 3 . The memory device of claim 2 , wherein the at least one shared resource comprises the at least one sense amplifier, the cache register, and the one or more data registers.
- 4 . The memory device of claim 1 , wherein the at least one set of concurrent resources comprises a first sense amplifier associated with the first memory array and a second sense amplifier associated with the second memory array.
- 5 . The memory device of claim 1 , wherein the first memory array and the second memory array are disposed on a single memory plane of the memory device.
- 6 . The memory device of claim 1 , wherein the first memory array and the second memory array are disposed on separate memory planes of the memory device.
- 7 . A method comprising: receiving, at a memory device, requests to perform respective memory access operations on a first memory array and a second memory array of the memory device; performing at least a portion of the respective memory access operations concurrently using a set of concurrent resources of a page cache circuit disposed within the memory device and directly coupled to the first memory array and the second memory array, wherein the page cache circuit temporarily stores data being read from or written to the first memory array and the second memory array; and performing at least a portion of the respective memory access operations in a time-multiplexed fashion using at least one shared resource of the page cache circuit, wherein the at least one shared resource comprises a cache register and one or more data registers disposed within the page cache circuit, and wherein performing at least the portion of the respective memory access operations in the time-multiplexed fashion comprises performing operations on the first memory array and the second memory array successively in time using the cache register and the one or more data registers.
- 8 . The method of claim 7 , wherein the at least one set of concurrent resources comprises a first bitline bias circuit associated with the first memory array and a second bitline bias circuit associated with the second memory array, and wherein performing at least the portion of the respective memory access operations concurrently comprises causing a bias voltage to be applied to respective bitlines of the first memory array and the second memory array using the first bitline bias circuit and the second bitline bias circuit.
- 9 . The method of claim 8 , wherein the at least one shared resource comprises a sense amplifier, the cache register, and the one or more data registers, and wherein performing at least the portion of the respective memory access operations in the time-multiplexed fashion comprises sensing a voltage from a corresponding wordline of the first memory array using the sense amplifier and storing a corresponding value in at least one of the cache register or the one or more data registers.
- 10 . The method of claim 7 , wherein the at least one set of concurrent resources comprises a first bitline bias circuit associated with the first memory array and a second bitline bias circuit associated with the second memory array.
- 11 . The method of claim 7 , wherein the at least one set of concurrent resources comprises a first sense amplifier associated with the first memory array and a second sense amplifier associated with the second memory array.
- 12 . The method of claim 7 , further comprising: selecting at least the portion of the respective memory access operations to perform in the time-multiplexed fashion using an arbitration scheme.
- 13 . The method of claim 12 , wherein the arbitration scheme comprises allocating the at least one shared resource based on a request to perform a memory access operation received first in time.
- 14 . The method of claim 12 , wherein the arbitration scheme comprises allocating the at least one shared resource based on priority levels associated with the respective memory access operations.
- 15 . A memory device comprising: a plurality of memory arrays; and a page cache circuit disposed within the memory device and comprising bitline bias circuitry, at least one sense amplifier, and a plurality of registers, wherein the page cache circuit is directly coupled to the plurality of memory arrays to temporarily store data being read from or written to the plurality of memory arrays, wherein the page cache circuit comprises at least one set of concurrent resources configured to perform operations on the plurality of memory arrays concurrently, and at least one shared resource configured to perform operations on the plurality of memory arrays successively in time, the at least one shared resource comprising a cache register and one or more data registers disposed within the page cache circuit, wherein the cache register and the one or more data registers are accessible in a time-multiplexed fashion by the plurality of memory arrays such that the cache register and the one or more data registers perform operations on the plurality of memory arrays successively in time.
- 16 . The memory device of claim 15 , wherein the page cache circuit is configured to select an order of the operations to be performed successively by the at least one shared resource according to an associated arbitration scheme.
- 17 . The memory device of claim 16 , wherein the arbitration scheme comprises allocating the at least one shared resource based on a request to perform a memory access operation received first in time.
- 18 . The memory device of claim 16 , wherein the arbitration scheme comprises allocating the at least one shared resource based on priority levels associated with the operations.
- 19 . The memory device of claim 15 , wherein the plurality of memory arrays are disposed on a single memory plane of the memory device.
- 20 . The memory device of claim 15 , wherein the plurality of memory arrays are disposed on separate memory planes of the memory device.
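The arbitration behavior recited in claims 12-14 and 16-18 (granting the shared cache/data registers first-come-first-served or by priority level, so they serve the arrays successively in time) can be illustrated with a minimal sketch. This is an editor-added toy model, not the patented implementation; the class and method names (`PageCache`, `request_shared`, `grant_order`) are hypothetical.

```python
import heapq

class PageCache:
    """Toy model of the claimed page cache arbitration: a shared
    resource (e.g., the cache register) is granted to one requesting
    memory array at a time, in an order chosen by the arbitration
    scheme ("fcfs" = first request in time wins; "priority" = lower
    priority value wins)."""

    def __init__(self, scheme="fcfs"):
        self.scheme = scheme
        self._queue = []    # pending requests for the shared resource
        self._counter = 0   # arrival order, used as the FCFS key

    def request_shared(self, array_id, priority=0):
        # FCFS keys on arrival order alone; priority keys on
        # (priority, arrival order) so ties still resolve in time order.
        if self.scheme == "fcfs":
            key = self._counter
        else:
            key = (priority, self._counter)
        heapq.heappush(self._queue, (key, array_id))
        self._counter += 1

    def grant_order(self):
        # The shared resource performs operations successively in time:
        # drain the queue in arbitration order.
        order = []
        while self._queue:
            order.append(heapq.heappop(self._queue)[1])
        return order

# First-in-time allocation (claims 13 and 17):
fcfs = PageCache("fcfs")
fcfs.request_shared("array0")
fcfs.request_shared("array1")
print(fcfs.grant_order())  # ['array0', 'array1']

# Priority-level allocation (claims 14 and 18):
prio = PageCache("priority")
prio.request_shared("array0", priority=2)
prio.request_shared("array1", priority=1)
print(prio.grant_order())  # ['array1', 'array0']
```

By contrast, the concurrent resources (per-array bitline bias circuits or sense amplifiers) need no such queue, since each array owns its own copy.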
Description
RELATED APPLICATION

This application is a continuation application of co-pending U.S. patent application Ser. No. 17/547,818, filed Dec. 10, 2021, which claims the benefit of U.S. Provisional Patent Application Ser. No. 63/202,287, filed Jun. 4, 2021, each of which is incorporated herein by reference.

TECHNICAL FIELD

Embodiments of the disclosure relate generally to memory sub-systems, and more specifically, relate to concurrent page cache resource access in a multi-plane memory device in a memory sub-system.

BACKGROUND

A memory sub-system can include one or more memory devices that store data. The memory devices can be, for example, non-volatile memory devices and volatile memory devices. In general, a host system can utilize a memory sub-system to store data at the memory devices and to retrieve data from the memory devices.

BRIEF DESCRIPTION OF THE DRAWINGS

The present disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure.

- FIG. 1 illustrates an example computing system that includes a memory sub-system in accordance with some embodiments of the present disclosure.
- FIG. 2 is a block diagram of a memory device in communication with a memory sub-system controller of a memory sub-system, according to an embodiment.
- FIG. 3 is a block diagram illustrating a multi-plane memory device configured for concurrent page cache resource access in accordance with some embodiments of the present disclosure.
- FIG. 4 is a block diagram illustrating concurrent page cache resource access in a multi-plane memory device in accordance with some embodiments of the present disclosure.
- FIG. 5 is a timing diagram illustrating concurrent page cache resource access in a multi-plane memory device in accordance with some embodiments of the present disclosure.
- FIG. 6 is a block diagram illustrating concurrent page cache resource access in a multi-plane memory device in accordance with some embodiments of the present disclosure.
- FIG. 7 is a block diagram illustrating concurrent page cache resource access in a multi-plane memory device in accordance with some embodiments of the present disclosure.
- FIG. 8 is a block diagram illustrating concurrent page cache resource access in a multi-plane memory device in accordance with some embodiments of the present disclosure.
- FIG. 9 is a block diagram illustrating concurrent page cache resource access in a multi-plane memory device in accordance with some embodiments of the present disclosure.
- FIG. 10 is a flow diagram of an example method of providing concurrent page cache resource access in a multi-plane memory device in accordance with some embodiments of the present disclosure.
- FIG. 11 is a block diagram of an example computer system in which embodiments of the present disclosure can operate.

DETAILED DESCRIPTION

Aspects of the present disclosure are directed to concurrent page cache resource access in a multi-plane memory device in a memory sub-system. A memory sub-system can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of storage devices and memory modules are described below in conjunction with FIG. 1. In general, a host system can utilize a memory sub-system that includes one or more components, such as memory devices that store data. The host system can provide data to be stored at the memory sub-system and can request data to be retrieved from the memory sub-system.

A memory sub-system can include high density non-volatile memory devices where retention of data is desired when no power is supplied to the memory device. One example of non-volatile memory devices is a negative-and (NAND) memory device. Other examples of non-volatile memory devices are described below in conjunction with FIG. 1.

A non-volatile memory device is a package of one or more dies. Each die can consist of one or more planes. For some types of non-volatile memory devices (e.g., NAND devices), each plane consists of a set of physical blocks. Each block consists of a set of pages. Each page consists of a set of memory cells ("cells"). A cell is an electronic circuit that stores information. Depending on the cell type, a cell can store one or more bits of binary information, and has various logic states that correlate to the number of bits being stored. The logic states can be represented by binary values, such as "0" and "1", or combinations of such values.

A memory device can be made up of bits arranged in a two-dimensional or a three-dimensional grid. Memory cells are etched onto a silicon wafer in an array of columns (also hereinafter referred to as bitlines) and rows (also hereinafter referred to as wordlines). A wordline can refer to one or more rows of memory cells of a memory device that are used with one or more bitlines to generate the address of each of the memory cells. The intersection of a bitline and wordline constitutes the address of the memory cell.
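The plane/block/page hierarchy and the bitline-wordline addressing described above can be sketched as a simple linearization. The geometry constants and the helper name below are illustrative assumptions chosen by the editor; real NAND dimensions vary by part and are not specified in this disclosure.

```python
# Hypothetical geometry; actual NAND dimensions vary by device.
PLANES_PER_DIE = 2
BLOCKS_PER_PLANE = 1024
PAGES_PER_BLOCK = 256
CELLS_PER_PAGE = 4096  # one cell per bitline along the selected wordline

def cell_address(plane, block, page, bitline):
    """Flatten (plane, block, page/wordline, bitline) coordinates into
    one linear cell address, mirroring how the intersection of a
    wordline (row) and a bitline (column) locates a single cell."""
    addr = plane
    addr = addr * BLOCKS_PER_PLANE + block
    addr = addr * PAGES_PER_BLOCK + page
    addr = addr * CELLS_PER_PAGE + bitline
    return addr

print(cell_address(0, 0, 0, 0))  # 0: first cell of the first page
print(cell_address(0, 0, 1, 0))  # 4096: next wordline, same bitline
```

Each increment of the page (wordline) index advances the address by a full page of bitlines, which is why the two concurrent bitline bias circuits in the claims can operate on different arrays without contending for the same cells.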