
US-12619375-B2 - Data storage device and method for accommodating data of a third-party system

US 12619375 B2

Abstract

A data storage device and method are provided for accommodating data of a third-party system. In one embodiment, a data storage device is provided comprising a memory and one or more processors. The one or more processors, individually or in combination, are configured to: receive data from a graphics processing unit (GPU) of a host; segregate the data; determine whether the host is pre-approved for a capacity condition; and in response to determining that the host is pre-approved for the capacity condition: associate the segregated data per a pre-approved destination; and consolidate logical-to-physical address entry updates into a logical-to-physical address data structure. Other embodiments are provided.

Inventors

  • Ramanathan Muthiah

Assignees

  • SanDisk Technologies, Inc.

Dates

Publication Date
2026-05-05
Application Date
2024-07-01

Claims (16)

  1. A data storage device comprising: a memory; and one or more processors, individually or in combination, configured to: receive data and a logical address directly from a graphics processing unit (GPU) of a host as opposed to indirectly through a central processing unit (CPU) of the host; store the data in a location of the memory configured to store only data received from the GPU as opposed to data received from the CPU; store an entry in a secondary logical-to-physical address data structure to reflect that the data is stored in the location in the memory, wherein the secondary logical-to-physical address data structure stores only entries for the location in the memory as opposed to other locations in the memory that store data received from the CPU; determine whether the data is pre-approved for a capacity condition; in response to determining that the data is not pre-approved for the capacity condition: discard the entry in the secondary logical-to-physical address data structure; and perform garbage collection only in the location in the memory as opposed to the other locations in the memory; and in response to determining that the data is pre-approved for the capacity condition: merge the secondary logical-to-physical address data structure into a primary logical-to-physical address data structure that stores entries for the other locations in the memory.
  2. The data storage device of claim 1, wherein the data is received with a GPU identifier.
  3. The data storage device of claim 1, wherein the logical address is part of a logical block address (LBA) range associated with the GPU.
  4. The data storage device of claim 1, wherein the one or more processors, individually or in combination, are further configured to: move the data to capacity blocks.
  5. The data storage device of claim 1, wherein the one or more processors, individually or in combination, are further configured to: monitor input-output data of the GPU and at least one additional GPU; and route the input-output data based on an instruction from the host.
  6. The data storage device of claim 1, wherein the one or more processors, individually or in combination, are further configured to: monitor input-output data of the GPU and at least one additional GPU; and generate parity for the input-output data based on an instruction from the host.
  7. The data storage device of claim 1, wherein the one or more processors, individually or in combination, are further configured to: monitor input-output data of the GPU and at least one additional GPU; and perform quality-of-service biasing of the input-output data based on an instruction from the host.
  8. The data storage device of claim 1, wherein the memory comprises a three-dimensional memory.
  9. A method comprising: performing in a data storage device comprising a memory: receiving data and a logical address directly from a graphics processing unit (GPU) of a host as opposed to indirectly through a central processing unit (CPU) of the host; storing the data in a location of the memory configured to store only data received from the GPU as opposed to data received from the CPU; storing an entry in a secondary logical-to-physical address data structure to reflect that the data is stored in the location in the memory, wherein the secondary logical-to-physical address data structure stores only entries for the location in the memory as opposed to other locations in the memory that store data received from the CPU; determining whether the host is pre-approved for a capacity condition; in response to determining that the host is not pre-approved for the capacity condition: discarding the entry in the secondary logical-to-physical address data structure; and performing garbage collection only in the location in the memory as opposed to the other locations in the memory; and in response to determining that the host is pre-approved for the capacity condition: merging the secondary logical-to-physical address data structure into a primary logical-to-physical address data structure that stores entries for the other locations in the memory.
  10. The method of claim 9, wherein the data is received with a GPU identifier.
  11. The method of claim 9, wherein the logical address is part of a logical block address (LBA) range associated with the GPU.
  12. The method of claim 9, further comprising: moving the data to capacity blocks.
  13. The method of claim 9, further comprising: monitoring input-output data of the GPU and at least one additional GPU; and routing the input-output data based on an instruction from the host.
  14. The method of claim 9, further comprising: monitoring input-output data of the GPU and at least one additional GPU; and generating parity for the input-output data based on an instruction from the host.
  15. The method of claim 9, further comprising: monitoring input-output data of the GPU and at least one additional GPU; and performing quality-of-service biasing of the input-output data based on an instruction from the host.
  16. A data storage device comprising: a memory; and means for: receiving data and a logical address directly from a graphics processing unit (GPU) of a host as opposed to indirectly through a central processing unit (CPU) of the host; storing the data in a location of the memory configured to store only data received from the GPU as opposed to data received from the CPU; storing an entry in a secondary logical-to-physical address data structure to reflect that the data is stored in the location in the memory, wherein the secondary logical-to-physical address data structure stores only entries for the location in the memory as opposed to other locations in the memory that store data received from the CPU; determining whether the host is pre-approved for a capacity condition; in response to determining that the host is not pre-approved for the capacity condition: discarding the entry in the secondary logical-to-physical address data structure; and performing garbage collection only in the location in the memory as opposed to the other locations in the memory; and in response to determining that the host is pre-approved for the capacity condition: merging the secondary logical-to-physical address data structure into a primary logical-to-physical address data structure that stores entries for the other locations in the memory.
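As a rough illustration only, the following C sketch shows one way the flow recited in claim 1 could be modeled: writes received directly from the GPU are tracked in a secondary logical-to-physical (L2P) table, and the capacity-condition decision either merges those entries into the primary table or discards them so garbage collection can reclaim the GPU-only location. The array-backed tables and every name used here (l2p_table_t, gpu_write, resolve_capacity_condition, INVALID_PPA) are assumptions made for this example; the patent does not define a firmware interface.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define L2P_ENTRIES 1024u
#define INVALID_PPA 0xFFFFFFFFu

typedef struct {
    uint32_t ppa[L2P_ENTRIES];  /* physical page address per logical page */
} l2p_table_t;

static l2p_table_t primary_l2p;    /* entries for locations holding CPU-sourced data */
static l2p_table_t secondary_l2p;  /* entries only for the GPU-only location */

static void l2p_init(l2p_table_t *t)
{
    for (size_t i = 0; i < L2P_ENTRIES; i++)
        t->ppa[i] = INVALID_PPA;
}

/* Data received directly from the GPU lands in a GPU-only location and is
 * tracked only in the secondary L2P table. */
static void gpu_write(uint32_t lpa, uint32_t gpu_only_ppa)
{
    if (lpa < L2P_ENTRIES)
        secondary_l2p.ppa[lpa] = gpu_only_ppa;
}

/* Resolve the capacity condition: merge the secondary table into the primary
 * table when the data is pre-approved; otherwise discard the secondary
 * entries so garbage collection can reclaim only the GPU-only location. */
static void resolve_capacity_condition(bool pre_approved)
{
    for (size_t lpa = 0; lpa < L2P_ENTRIES; lpa++) {
        if (secondary_l2p.ppa[lpa] == INVALID_PPA)
            continue;
        if (pre_approved)
            primary_l2p.ppa[lpa] = secondary_l2p.ppa[lpa];  /* merge entry */
        secondary_l2p.ppa[lpa] = INVALID_PPA;               /* discard entry */
    }
    if (!pre_approved) {
        /* garbage collection would be limited to the GPU-only location (not shown) */
    }
}

int main(void)
{
    l2p_init(&primary_l2p);
    l2p_init(&secondary_l2p);
    gpu_write(42u, 7u);                /* GPU writes logical page 42 */
    resolve_capacity_condition(true);  /* pre-approved: consolidate into primary table */
    return primary_l2p.ppa[42] == 7u ? 0 : 1;
}
```

In a real controller, something like gpu_write would sit on the direct GPU submission path and resolve_capacity_condition would be triggered by whatever event signals the capacity condition; both are placeholders here, not functions described by the patent.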

Description

BACKGROUND

A host can write data to and read data from a data storage device. Some hosts have one or more graphics processing units (GPUs) in addition to a central processing unit (CPU). GPUs can be useful when the host runs applications related to artificial intelligence (AI) or high-performance computing (HPC), for example.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1A is a block diagram of a data storage device of an embodiment.
FIG. 1B is a block diagram illustrating a storage module of an embodiment.
FIG. 1C is a block diagram illustrating a hierarchical storage system of an embodiment.
FIG. 2A is a block diagram illustrating components of the controller of the data storage device illustrated in FIG. 1A according to an embodiment.
FIG. 2B is a block diagram illustrating components of the data storage device illustrated in FIG. 1A according to an embodiment.
FIG. 3 is a block diagram of a host and a data storage device of an embodiment.
FIGS. 4A and 4B are illustrations of a storage environment of an embodiment.
FIG. 5 is an illustration of a storage environment of an embodiment.
FIG. 6 is an illustration of a storage environment of an embodiment that provides graphics processing unit (GPU) accommodation according to host instructions.
FIG. 7 is an illustration of a host and data storage device of an embodiment.
FIG. 8 is a flow chart of an embodiment.
FIG. 9 is a flow chart of an embodiment.
FIG. 10 is a flow chart of an embodiment.

DETAILED DESCRIPTION

The following embodiments generally relate to a data storage device and method for accommodating data of a third-party system. In one embodiment, a data storage device is provided comprising a memory and one or more processors. The one or more processors, individually or in combination, are configured to: receive data from a graphics processing unit (GPU) of a host; segregate the data; determine whether the host is pre-approved for a capacity condition; and in response to determining that the host is pre-approved for the capacity condition: associate the segregated data per a pre-approved destination; and consolidate logical-to-physical address entry updates into a logical-to-physical address data structure.

In some embodiments, the data is segregated based on a GPU identifier. In some embodiments, the data is segregated based on a logical block address (LBA) range.

In some embodiments, the one or more processors, individually or in combination, are further configured to: in response to determining that the host data is not pre-approved for the capacity condition: associate the segregated data with separate blocks in the memory; and maintain logical-to-physical address entry updates until host approval. In some embodiments, the one or more processors, individually or in combination, are further configured to: in response to determining that the host data is not pre-approved for the capacity condition: associate the segregated data with separate blocks in the memory; and maintain logical-to-physical address entry updates until the logical-to-physical address entry updates are trimmed.

In some embodiments, the one or more processors, individually or in combination, are further configured to: move the segregated data to capacity blocks. In some embodiments, the one or more processors, individually or in combination, are further configured to: monitor input-output data of the GPU and at least one additional GPU; and route the input-output data based on an instruction from the host.
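As a minimal sketch of the segregation described above, the following example classifies an incoming write as GPU data either by a GPU identifier carried with the command or by membership in an LBA range associated with the GPU; commands classified this way would be placed in the GPU-only blocks and tracked in the secondary L2P table. The command structure, the source_id field, and the GPU_LBA_START/GPU_LBA_END bounds are hypothetical values chosen for illustration, not values from the patent.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define GPU_LBA_START 0x100000u  /* example LBA range associated with the GPU */
#define GPU_LBA_END   0x1FFFFFu

typedef struct {
    uint32_t lba;        /* logical block address of the write */
    uint8_t  source_id;  /* nonzero when the command carries a GPU identifier */
} write_cmd_t;

/* Returns true when the write should be routed to the GPU-only blocks and
 * tracked in the secondary L2P table; false routes it to the regular blocks
 * tracked in the primary L2P table. */
static bool is_gpu_segregated(const write_cmd_t *cmd)
{
    if (cmd->source_id != 0)
        return true;                                              /* GPU identifier */
    return cmd->lba >= GPU_LBA_START && cmd->lba <= GPU_LBA_END;  /* GPU LBA range */
}

int main(void)
{
    write_cmd_t from_gpu = { .lba = 0x100010u, .source_id = 1 };
    write_cmd_t from_cpu = { .lba = 0x000040u, .source_id = 0 };

    printf("GPU command segregated: %d\n", is_gpu_segregated(&from_gpu));  /* prints 1 */
    printf("CPU command segregated: %d\n", is_gpu_segregated(&from_cpu));  /* prints 0 */
    return 0;
}
```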
In some embodiments, the one or more processors, individually or in combination, are further configured to: monitor input-output data of the GPU and at least one additional GPU; and generate parity for the input-output data based on an instruction from the host. In some embodiments, the one or more processors, individually or in combination, are further configured to: monitor input-output data of the GPU and at least one additional GPU; and perform quality-of-service biasing of the input-output data based on an instruction from the host.

In some embodiments, the memory comprises a three-dimensional memory.

In another embodiment, a method is provided that is performed in a data storage device comprising a memory. The method comprises: receiving data from a graphics processing unit (GPU) of a host; determining whether the host is pre-approved for a capacity condition; and in response to determining that the host is pre-approved for the capacity condition: associating the data per a pre-approved destination; and consolidating logical-to-physical address entry updates into a logical-to-physical address data structure.

In some embodiments, the method further comprises segregating the data based on a GPU identifier. In some embodiments, the method further comprises segregating the data based on a logical block address (LBA) range. In some embodiments, the method further comprises, in response to determining that the host is not pre-approved for the capacity condition: