US-12619354-B2 - Balanced transfers on interfaces
Abstract
Randomly sending transactions across an interface can lead to idle interfaces and generally inefficient operations. Re-ordering transactions or data packets involves potentially changing the order of packet transmission to host devices. The re-ordering can ensure that each interface between a host device and the data storage device is saturated. The saturation is achieved by the re-ordering so that packet transmission is balanced across the interfaces. The balancing may be based on any number of factors such as namespace identification (ID), zone ID in a zoned namespace (ZNS) drive, submission and completion IDs, physical and virtual functions, and host addresses to name a few. The re-ordering and hence balancing results in packet level fairness and enables integration of asymmetric systems. In so doing, performance is optimized, reliability is maintained, and scalability is supported.
Inventors
- Shay Benisty
- Amir Segev
Assignees
- SanDisk Technologies, Inc.
Dates
- Publication Date: 2026-05-05
- Application Date: 2024-10-02
Claims (19)
- 1 . A data storage device, comprising: a memory device; and a controller coupled to the memory device, wherein the controller is configured to: classify transactions that are to be sent to one or more host devices, wherein the transactions are at a packet level; re-order the transactions based upon the classifications; transmit the re-ordered transactions to the one or more host devices, wherein the one or more host devices is a plurality of host devices; and ensure interfaces between the controller and the plurality of host devices are saturated.
- 2 . The data storage device of claim 1 , wherein the classifying is based on one or more of: namespace identification (ID), zone ID, submission ID, completion ID, peripheral component interconnect (PCI) express (PCIe) physical function, PCIe virtual function, and host device address.
- 3 . The data storage device of claim 1 , wherein the re-ordering comprises placing the classified transactions into different queues.
- 4 . The data storage device of claim 3 , wherein the controller is configured to select a queue from the different queues for the transmitting.
- 5 . The data storage device of claim 4 , wherein the controller is configured to delay posting a completion of a command corresponding to a transaction until after the transaction for the command has been transmitted.
- 6 . The data storage device of claim 1 , wherein the controller comprises a host interface module (HIM) and wherein the HIM comprises a re-ordering buffer and a completion and interrupt sync module.
- 7 . The data storage device of claim 6 , wherein the re-ordering buffer utilizes a round robin technique to determine when the re-ordered transactions are transmitted.
- 8 . The data storage device of claim 7 , wherein the re-ordering buffer is configured to add a delay before transmitting a re-ordered transaction to a host device.
- 9 . The data storage device of claim 1 , wherein a first host device of the plurality of host devices has a different saturation level compared with a second host device of the plurality of host devices.
- 10 . A data storage device, comprising: a memory device; and a controller coupled to the memory device, wherein the controller comprises: a host interface module (HIM) comprising a re-ordering buffer and a completion and interrupt sync module, wherein the HIM is configured to maintain a first interface between the controller and a first host device, wherein the HIM is configured to maintain a second interface between the controller and a second host device; a flash interface module (FIM) coupled to the memory device; and a command scheduler coupled between the HIM and FIM, wherein the controller is configured to: balance traffic between the first host device and the second host device, wherein the balancing comprises ensuring that the first interface and the second interface are saturated.
- 11 . The data storage device of claim 10 , wherein the first interface and the second interface are asymmetric.
- 12 . The data storage device of claim 10 , wherein the controller is configured to synchronize completion and interrupt messages towards the first host device and the second host device.
- 13 . The data storage device of claim 10 , wherein the first host device is a physical function and the second host device is a virtual function.
- 14 . The data storage device of claim 10 , wherein the re-ordering buffer is configured to maintain a plurality of queues for placement of classified transactions.
- 15 . The data storage device of claim 10 , wherein the re-ordering buffer is configured to add a delay before transmitting data to the first host device or the second host device.
- 16 . The data storage device of claim 10 , wherein the completion and interrupt sync module is configured to track data transfers over the first interface and the second interface to avoid race conditions.
- 17 . A data storage device, comprising: means to store data; and a controller coupled to the means to store data, wherein the controller is configured to: receive a transaction packet from the means to store data; classify the transaction packet; place the transaction packet in a queue of a plurality of queues; send the transaction packet to a host device; and maintain saturation on an interface between the controller and the host device and interfaces between the controller and other host devices.
- 18 . The data storage device of claim 17 , wherein the controller is configured to determine which host device of the host device and other host devices should be sent data.
- 19 . The data storage device of claim 18 , wherein the controller is configured to select from which queue of the plurality of queues data should be sent.
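The claims above describe a single mechanism from several angles: classify each outgoing packet by some key (claim 2 lists namespace ID, zone ID, PCIe function, host address, and others), place it in a per-class queue (claim 3), and drain the queues in round robin (claims 4 and 7) so no one class monopolizes an interface. A minimal sketch of that flow follows; the class and method names, the choice of `(host, namespace)` as the classification key, and the plain round-robin policy are illustrative assumptions, not details prescribed by the patent:

```python
from collections import defaultdict, deque
from dataclasses import dataclass


@dataclass
class Packet:
    namespace_id: int   # one possible classification factor (claim 2)
    host_addr: int      # destination host interface
    payload: bytes


class ReorderBuffer:
    """Per-class queues drained round robin (sketch of claims 3, 4, 7)."""

    def __init__(self):
        self.queues = defaultdict(deque)  # classification key -> FIFO

    def classify(self, pkt: Packet):
        # Key chosen here for illustration; any factor from claim 2
        # (zone ID, submission/completion ID, PCIe function, ...) works.
        return (pkt.host_addr, pkt.namespace_id)

    def push(self, pkt: Packet):
        self.queues[self.classify(pkt)].append(pkt)

    def drain(self):
        """Yield one packet per non-empty queue in turn, so packet
        transmission is interleaved across classes (packet-level
        fairness) rather than sent in arrival order."""
        while any(self.queues.values()):
            for key in list(self.queues):
                if self.queues[key]:
                    yield self.queues[key].popleft()
```

With this sketch, three packets arriving in the order namespace 1, namespace 1, namespace 2 drain as namespace 1, namespace 2, namespace 1: the second class is no longer starved behind the burst from the first.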
Description
BACKGROUND OF THE DISCLOSURE

Field of the Disclosure

Embodiments of the present disclosure generally relate to improving traffic balancing across interfaces.

Description of the Related Art

Non-volatile memory (NVM) express (NVMe) solid state drives (SSDs) are connected to host devices through a peripheral component interconnect (PCI) express (PCIe) interface. The interface is used to satisfy the NVMe protocol while trying to reach maximum performance. To service host commands, the NVMe device needs to use the interface for different tasks: reading commands, reading pointers, reading data, and, on some products, reading mapping tables.

Traffic balancing in the context of storage devices using PCIe and NVMe interfaces optimizes performance, improves efficiency, and ensures equitable utilization of resources. Key reasons why traffic balancing is important include optimizing throughput, avoiding bottlenecks, utilizing multiple lanes efficiently, enhancing scalability, improving reliability, reducing latency, maximizing NVMe parallelism, supporting dynamic workloads, and meeting application demands. In summary, traffic balancing in PCIe and NVMe-based storage systems optimizes performance, maintains reliability, and supports scalability. Traffic balancing ensures that the full capabilities of high-speed interfaces are leveraged efficiently, contributing to an overall responsive and reliable storage infrastructure. Achieving traffic balancing, however, is always a challenge.

Therefore, there is a need in the art for improved traffic balancing across interfaces.

SUMMARY OF THE DISCLOSURE

Randomly sending transactions across an interface can lead to idle interfaces and generally inefficient operations. Re-ordering transactions or data packets involves potentially changing the order of packet transmission to host devices. The re-ordering can ensure that each interface between a host device and the data storage device is saturated.
The saturation is achieved by the re-ordering so that packet transmission is balanced across the interfaces. The balancing may be based on any number of factors such as namespace identification (ID), zone ID in a zoned namespace (ZNS) drive, submission and completion IDs, physical and virtual functions, and host addresses to name a few. The re-ordering and hence balancing results in packet level fairness and enables integration of asymmetric systems. In so doing, performance is optimized, reliability is maintained, and scalability is supported.

In one embodiment, a data storage device comprises: a memory device; and a controller coupled to the memory device, wherein the controller is configured to: classify transactions that are to be sent to one or more host devices, wherein the transactions are at a packet level; re-order the transactions based upon the classifications; and transmit the re-ordered transactions to the one or more host devices.

In another embodiment, a data storage device comprises: a memory device; and a controller coupled to the memory device, wherein the controller comprises: a host interface module (HIM) comprising a re-ordering buffer and a completion and interrupt sync module, wherein the HIM is configured to maintain a first interface between the controller and a first host device, wherein the HIM is configured to maintain a second interface between the controller and a second host device; a flash interface module (FIM) coupled to the memory device; and a command scheduler coupled between the HIM and FIM, wherein the controller is configured to: balance traffic between the first host device and the second host device, wherein the balancing comprises ensuring that the first interface and the second interface are saturated.
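The completion and interrupt sync module described above tracks outstanding data transfers so that a command's completion (and its interrupt) is not posted before the command's data has actually crossed the interface, avoiding the race condition noted in claim 16 and the delayed-completion behavior of claims 5 and 12. A hedged sketch of that ordering guarantee follows; the names (`CompletionSync`, `data_queued`, `data_sent`) and the per-command in-flight counter are invented here purely for illustration:

```python
class CompletionSync:
    """Tracks in-flight data packets per command and only allows the
    completion entry (plus interrupt) to be posted once every packet
    for that command has been transmitted (sketch of claims 5, 12, 16)."""

    def __init__(self):
        self.pending = {}  # command id -> packets still in flight

    def data_queued(self, cmd_id: int, n_packets: int):
        """Record that n_packets for cmd_id were handed to the
        re-ordering buffer but not yet sent on the wire."""
        self.pending[cmd_id] = self.pending.get(cmd_id, 0) + n_packets

    def data_sent(self, cmd_id: int) -> bool:
        """Called when one packet for cmd_id has gone out on the wire.
        Returns True only when the last packet has been sent, i.e.
        when posting the completion can no longer race the data."""
        self.pending[cmd_id] -= 1
        if self.pending[cmd_id] == 0:
            del self.pending[cmd_id]
            return True  # safe to post completion and fire interrupt
        return False
```

The point of the design is ordering, not bookkeeping: because re-ordering may interleave a command's packets with other traffic, the completion must be gated on the transfer actually finishing rather than on the command being scheduled.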
In another embodiment, a data storage device comprises: means to store data; and a controller coupled to the means to store data, wherein the controller is configured to: receive a transaction packet from the means to store data; classify the transaction packet; place the transaction packet in a queue of a plurality of queues; send the transaction packet to a host device; and maintain saturation on an interface between the controller and the host device and interfaces between the controller and other host devices.

BRIEF DESCRIPTION OF THE DRAWINGS

So that the manner in which the above recited features of the present disclosure can be understood in detail, a more particular description of the disclosure, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this disclosure and are therefore not to be considered limiting of its scope, for the disclosure may admit to other equally effective embodiments.

FIG. 1 is a schematic block diagram illustrating a storage system in which a data storage device may function as a storage device for a host device, according to certain embodime