US-12625819-B1 - Storage system with namespaces and host-accessible subdivisions
Abstract
This disclosure provides techniques for hierarchical address virtualization within a memory controller and configurable block device allocation. By performing address translation only at select hierarchical levels, a memory controller can be designed to have predictable I/O latency, with brief or otherwise negligible logical-to-physical address translation time. In one embodiment, address translation may be implemented entirely with logical gates and look-up tables of a memory controller integrated circuit, without requiring processor cycles. The disclosed virtualization scheme also provides flexibility in customizing the configuration of virtual storage devices, to present nearly any desired configuration to a host or client.
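The abstract's key point is that translation happens only at select hierarchical levels, so it can be done with small look-up tables rather than a per-block map. The sketch below is a hypothetical illustration of that idea, not the patented implementation: only the upper subdivision field of an address is remapped through a table, while the low-order offset passes through unchanged. The field width and table contents are assumptions for illustration.

```python
# Hypothetical sketch (not the patented circuit): hierarchical address
# translation in which only the upper address field is remapped through
# a small look-up table, while the low-order offset passes through
# unchanged -- giving constant-time, table-driven translation.

SUBDIV_BITS = 20                     # assumed: low 20 bits address within a subdivision
SUBDIV_MASK = (1 << SUBDIV_BITS) - 1

# Assumed LUT: virtual subdivision index -> physical subdivision base
# (each base aligned to the subdivision size, 1 << SUBDIV_BITS).
SUBDIV_LUT = {0: 0x000000, 1: 0x400000, 2: 0x200000}

def translate(virtual_addr: int) -> int:
    """Remap only the subdivision field; the offset is untouched."""
    subdiv = virtual_addr >> SUBDIV_BITS   # upper field: one table look-up
    offset = virtual_addr & SUBDIV_MASK    # lower field: pass-through
    return SUBDIV_LUT[subdiv] | offset
```

Because the per-address work is a shift, a mask, and one table read, the latency is fixed regardless of how much storage is mapped, which is the property the abstract attributes to translating at select levels only.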
Inventors
- Robert Lercari
- Alan Chen
- Mike Jadon
- Craig Robertson
- Andrey V. Kuzmin
Assignees
- Radian Memory Systems, LLC
Dates
- Publication Date
- 20260512
- Application Date
- 20251204
Claims (20)
- 1 . A storage system comprising: a host having a host-side interface; and a storage drive comprising: a drive-side interface; at least one namespace; flash memory having erase units organized into subdivisions, the subdivisions comprising respective, non-overlapping sets of erase units; and logic operable to cause the storage drive to transmit to the host, responsive to receipt, via the drive-side interface, of at least one query from the host, information to identify a specific namespace of the at least one namespace, one or more of the subdivisions which are associated with the specific namespace, and for each one of the one or more of the subdivisions, an associated subdivision size; wherein the host comprises logic operable to cause the host to: receive, via the host-side interface, the information transmitted by the storage drive in response to the at least one query, format write requests such that each of the write requests are addressed to the specific namespace and to a selected subdivision of the one or more subdivisions, and transmit, via the host-side interface, the write requests to the storage drive; wherein the storage drive further comprises logic operable to cause the storage drive to: maintain a logical-to-physical look-up table; receive the write requests from the host, via the drive-side interface, and, for each one of the write requests, to: derive, from address information associated with the one of the write requests, a first address portion, a second address portion, and a third address portion, identify, from the first address portion, the specific namespace, identify, from the second address portion, the selected subdivision, identify an offset associated with the selected subdivision, by using at least one operation to subdivide the offset to identify a physical storage location within a specific erase unit of the set of erase units which is respective to the selected subdivision, program data associated with the one of the write 
requests into the identified physical storage location, identify a logical block address from the third address portion; and update the logical-to-physical look-up table, such that the identified logical block address is indexed to the identified physical storage location; and wherein each said logic comprises at least one of circuitry or instructions stored on a physical storage medium that, when executed, are to control circuitry of the storage drive.
- 2 . The storage system of claim 1 , wherein the storage device further comprises logic operable to cause the storage device to: track metadata respective to the one or more subdivisions; compare the metadata with at least one criterion; and responsive to satisfaction of the at least one criterion by the metadata for a given subdivision of the one or more subdivisions, automatically copy valid data from at least one erase unit in the set respective to the given subdivision to a new erase unit.
- 3 . The storage system of claim 2 , wherein the storage device further comprises logic operable to cause the storage device to: in association with the copy of the valid data, disassociate the at least one erase unit from the set which is respective to the given subdivision; and automatically control physical erasure of the at least one erase unit.
- 4 . The storage system of claim 3 , wherein the storage device further comprises logic operable to cause the storage device to: maintain a pool of free erase units; and select the new erase unit from the pool of free erase units, and assign the new erase unit to the set which is respective to the given subdivision.
- 5 . The storage system of claim 3 , wherein the storage device further comprises logic operable to cause the storage device to: track information representing defect status of each erase unit in the set which is respective to the given subdivision; in association with the automatically-controlled physical erasure of the at least one erase unit, detect an erasure error in a given erase unit of the at least one erase unit; and responsively update tracked information representing defect status to mark the given erase unit as bad.
- 6 . The storage system of claim 2 , wherein: the metadata of the storage device comprises data validity information, tracked by the storage device on a basis that is specific to a single erase unit of the set respective to the given subdivision; the at least one criterion of the storage device comprises a criterion associated with data stored in individual ones of the erase units of the flash memory; and the automatic copy of valid data is performed responsive to satisfaction, by the data validity information, of the criterion associated with data stored in individual ones of the erase units of the flash memory.
- 7 . The storage system of claim 2 , wherein the metadata represents times, respective to the one or more subdivisions, since data was programmed into at least one erase unit in the respective set of erase units.
- 8 . The storage system of claim 2 , wherein the metadata indicates wear of each erase unit in the sets respective to the one or more subdivisions.
- 9 . The storage system of claim 2 , wherein the metadata indicates data access frequencies respective to the one or more subdivisions.
- 10 . The storage system of claim 1 , wherein: the host further comprises logic operable to cause the host to transmit, via the host-side interface, a command to the storage device, to release stored data; and the storage device further comprises logic operable to cause the storage device to: receive, via the drive-side interface, the command; and responsively update tracked data validity information, to mark as released, at least one physical storage location corresponding to the released data.
- 11 . The storage system of claim 10 , wherein the storage device further comprises logic operable to cause the storage device to: in association with the update of the tracked data validity information, detect a condition where all storage locations of a given erase unit having previously-written data, provided by the host, are marked as released; and automatically control physical erasure of the given erase unit.
- 12 . The storage system of claim 1 , wherein: the at least one query is at least one first query; the host further comprises logic operable to cause the host to transmit a second query to the storage device; and the storage device further comprises logic operable to cause the storage device to: track metadata respective to different ones of the one or more subdivisions; receive, via the drive-side interface, the second query; and responsive to receipt of the second query, transmit second information, via the drive-side interface, to the host, which is dependent on the tracked metadata.
- 13 . The storage system of claim 12 , wherein the second information identifies a given subdivision of the one or more subdivisions and indicates a quantity of available space, associated with the set of erase units which are respective to the given subdivision, which can currently be written to.
- 14 . The storage system of claim 12 , wherein the metadata represents a time since data was programmed into the respective subdivision.
- 15 . The storage system of claim 1 , wherein: the host further comprises logic operable to cause the host to transmit, via the host-side interface, a second query to the storage device; and the storage device further comprises logic operable to cause the storage device to: store a value representing a maximum number of subdivisions; and receive, via the drive-side interface, the second query; and responsively transmit to the host, via the drive-side interface, information representing the maximum number of the subdivisions.
- 16 . The storage system of claim 1 , wherein: the flash memory comprises flash memory dies; the flash memory further comprises one or more die groups, each die group having a subset of one or more of the flash memory dies, the one or more of the flash memory dies in each die group being mutually-exclusive to the one or more of the flash memory dies in each other die group, wherein each die group is associated with a die group identifier (ID); the host further comprises logic operable to cause the host to transmit, via the host-side interface, a second query to the storage device; and the storage device further comprises logic operable to cause the storage device to receive the second query from the host, via the drive-side interface, and to responsively transmit to the host, via the drive-side interface, information representing each die group ID.
- 17 . The storage system of claim 16 , wherein: the host further comprises logic operable to cause the host to format each of the write requests to specify an ID associated with a specific die group of the one or more die groups; and the storage device logic is further operable to cause the storage device to perform a division operation, on the address information associated with each given one of the write requests, to identify a die group ID specified by the host in association with the given one of the write requests.
- 18 . The storage system of claim 1 , wherein: the host further comprises logic operable to cause the host to transmit, to the storage drive, via the host-side interface, a configuration command accompanied with a setting; and the storage drive further comprises logic operable to cause the storage drive to: receive the configuration command from the host, via the drive-side interface; and responsively update a quantity, of the one or more subdivisions accessible by the host, dependent on the setting.
- 19 . The storage system of claim 1 , wherein: the host further comprises logic operable to cause the host to transmit read requests to the storage device, via the host-side interface; and the storage device further comprises logic operable to cause the storage device to: receive, via the drive-side interface, each of the read requests, wherein each of the read requests is directed to a corresponding logical block address; detect an error condition associated with reading of data from the corresponding logical block address; responsively copy valid data associated with the corresponding logical block address to a new erase unit; and update the logical-to-physical look-up table, such that the corresponding logical block address is indexed to a physical storage location within the new erase unit.
- 20 . The storage system of claim 19 , wherein the error condition corresponds to a bit error rate which exceeds a threshold.
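The write path recited in claim 1 can be summarized as: derive three portions from the incoming address (namespace, subdivision, logical block address), subdivide the offset to locate an erase unit and a position within it, program the data, and index the logical block address to the physical location in a logical-to-physical table. The sketch below illustrates that flow under stated assumptions; the field widths, layout, and geometry constants are illustrative, not taken from the patent.

```python
# Illustrative sketch of the write-path address handling recited in
# claim 1. The assumed address layout is [namespace | subdivision | LBA],
# with widths chosen purely for illustration.

NS_BITS, SUBDIV_BITS, LBA_BITS = 4, 8, 20

# Assumed geometry: each subdivision owns a non-overlapping set of erase
# units; an offset within a subdivision splits into (erase unit, page).
PAGES_PER_ERASE_UNIT = 256

l2p_table = {}  # logical block address -> physical storage location

def handle_write(addr: int, data: bytes) -> tuple:
    # Derive the three address portions from the write-request address.
    lba    = addr & ((1 << LBA_BITS) - 1)                    # third portion
    subdiv = (addr >> LBA_BITS) & ((1 << SUBDIV_BITS) - 1)   # second portion
    ns     = addr >> (LBA_BITS + SUBDIV_BITS)                # first portion
    # Subdivide the offset to locate an erase unit and a page within it.
    erase_unit, page = divmod(lba, PAGES_PER_ERASE_UNIT)
    physical = (ns, subdiv, erase_unit, page)
    # Programming `data` at `physical` is omitted; then the identified
    # logical block address is indexed to the physical location.
    l2p_table[(ns, subdiv, lba)] = physical
    return physical
```

Note that the decomposition uses only shifts, masks, and a divide, consistent with the abstract's point that translation can avoid per-request processor involvement.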
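Claims 2 through 5 describe a drive-side maintenance loop: per-subdivision metadata is compared against a criterion; when satisfied, valid data is copied from an erase unit to a new unit drawn from a free pool, the source unit is erased and recycled, and units that fail erasure are marked bad. The following is a hedged sketch of that loop; the data layout, threshold, and function names are assumptions for illustration only.

```python
# Hedged sketch of the maintenance behavior in claims 2-5. Each erase
# unit is modeled as a dict with "valid_pages" and "page_count"; the
# criterion, structure, and names are illustrative assumptions.

STALE_THRESHOLD = 0.75  # assumed criterion: >= 75% of pages released

def maybe_collect(unit, free_pool, bad_units, erase_ok=lambda u: True):
    """Relocate valid data and recycle `unit` if the criterion is met."""
    stale = 1.0 - len(unit["valid_pages"]) / unit["page_count"]
    if stale < STALE_THRESHOLD:
        return None                      # criterion not satisfied
    new_unit = free_pool.pop()           # select a unit from the free pool
    new_unit["valid_pages"] = dict(unit["valid_pages"])  # copy valid data
    unit["valid_pages"].clear()          # disassociate the old unit
    if erase_ok(unit):                   # drive-controlled physical erase
        free_pool.append(unit)           # recycle into the free pool
    else:
        bad_units.append(unit)           # erasure error: mark unit as bad
    return new_unit
```

The `erase_ok` hook stands in for the erasure-status check of claim 5: a unit that fails physical erasure is diverted to the defect list instead of returning to the pool.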
Description
PRIORITY/INCORPORATION BY REFERENCE

This document is a continuation of U.S. Utility patent application Ser. No. 19/001,292, filed on Dec. 24, 2024, on behalf of first-named inventor Robert Lercari, which is a continuation of U.S. Utility patent application Ser. No. 18/412,906, filed on Jan. 15, 2024, on behalf of first-named inventor Robert Lercari (now U.S. Pat. No. 12,306,766), which in turn is a continuation of U.S. Utility patent application Ser. No. 18/140,938, filed on Apr. 28, 2023, on behalf of first-named inventor Robert Lercari (now U.S. Pat. No. 11,914,523), which in turn is a continuation of U.S. Utility patent application Ser. No. 17/377,754, filed on Jul. 16, 2021, on behalf of first-named inventor Robert Lercari (now U.S. Pat. No. 11,675,708), which in turn is a continuation of U.S. Utility patent application Ser. No. 17/213,015, filed on Mar. 25, 2021 (now U.S. Pat. No. 11,086,789), on behalf of first-named inventor Robert Lercari, which in turn is a continuation of U.S. Utility patent application Ser. No. 16/841,402, filed on Apr. 6, 2020, on behalf of first-named inventor Robert Lercari, which in turn is a continuation of U.S. Utility patent application Ser. No. 15/690,006, filed on Aug. 29, 2017 (now U.S. Pat. No. 10,642,748), which in turn is a continuation of U.S. Utility patent application Ser. No. 15/074,778, filed on Mar. 18, 2016 (now U.S. Pat. No. 9,785,572), which in turn is a continuation of U.S. Utility patent application Ser. No. 14/880,529, filed on Oct. 12, 2015 (now U.S. Pat. No. 9,542,118). U.S. Utility patent application Ser. No. 14/880,529 in turn claims the benefit of: U.S. Provisional Patent Application No. 62/199,969, filed on Jul. 31, 2015, on behalf of first-named inventor Robert Lercari for “Expositive Flash Memory Control;” U.S. Provisional Patent Application No. 62/194,172, filed on Jul. 17, 2015, on behalf of first-named inventor Robert Lercari for “Techniques for Memory Controller Configuration;” and U.S.
Provisional Patent Application No. 62/063,357, filed on Oct. 13, 2014, on behalf of first-named inventor Robert Lercari for “Techniques for Memory Controller Configuration.” U.S. Utility patent application Ser. No. 14/880,529 is also a continuation-in-part of U.S. Utility patent application Ser. No. 14/848,273, filed on Sep. 8, 2015, on behalf of first-named inventor Andrey V. Kuzmin for “Techniques for Data Migration Based On Per-Data Metrics and Memory Degradation,” which in turn claims the benefit of U.S. Provisional Patent Application No. 62/048,162, filed on Sep. 9, 2014, on behalf of first-named inventor Andrey V. Kuzmin for “Techniques for Data Migration Based On Per-Data Metrics and Memory Degradation.” The foregoing patent applications are each hereby incorporated by reference, as are US Patent Publication 2014/0215129, for “Cooperative Flash Memory Control,” and U.S. Utility patent application Ser. No. 14/047,193, filed on Oct. 7, 2013, on behalf of first-named inventor Andrey V. Kuzmin for “Multi-Array Operation Support And Related Devices, Systems And Software.”

TECHNICAL FIELD

The disclosure herein relates to non-volatile data storage and retrieval within semiconductor memory.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements.
- FIG. 1 illustrates an embodiment of a multi-modal flash memory device and its interconnection to a host system.
- FIG. 2 illustrates an exemplary application and configuration of a flash device having a pseudo-expositive memory controller within a host system.
- FIG. 3 illustrates an exemplary flash memory device in which discrete block devices may be configured and allocated as described in FIG. 2.
- FIG. 4 illustrates an exemplary block device allocation and configuration within the flash device of FIG. 3, effected using the block device allocator described in reference to FIG. 2.
- FIG. 5 illustrates a host perspective of the exemplary block device allocations and configurations presented in FIG. 4.
- FIG. 6 illustrates an exemplary pair of block device configurations.
- FIG. 7 illustrates exemplary generation of a physical block address in response to an incoming LBA.
- FIG. 8 illustrates a conceptual implementation of an address generation module.
- FIG. 9 illustrates exemplary virtualization of erase units within a four-die block device.
- FIG. 10 demonstrates an exemplary sequence of operations coordinated between a pseudo-expositive flash memory controller and a host file server.
- FIG. 11 illustrates forward (and reverse) compatibility between successive flash generations enabled by a pseudo-expositive flash architecture provided by the teachings herein.
- FIG. 12 illustrates detail regarding pseudo-physical geometry export options within a flash device having multi-plane flash dies.
- FIG. 13 illustrates additional operations that may be managed by embodiments of pseu