
US-12619469-B2 - System having dynamic power management

US 12619469 B2

Abstract

A storage system is provided. The system includes a primary node having a processor and memory storing scheduling logic, and a plurality of secondary nodes, each having a processor and removable storage memory. The processor of the primary node, when executing the scheduling logic, is configured to assign a priority to tasks executed by the storage system, monitor a processing load of the storage system, and monitor a capacity of the storage system. The processor is further configured to adjust power consumption of a processor of at least one of the plurality of secondary nodes based on the priority of the tasks being executed, the processing load, and the capacity of the storage system.
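The scheduling behavior described in the abstract can be illustrated with a minimal sketch. This is not the patented implementation; the node structure, thresholds, and the `adjust_power` function are hypothetical, chosen only to show the interaction between task priority, load, capacity, and clock speed that the abstract describes.

```python
from dataclasses import dataclass

# Two priority levels, as in the claims: first (front end) and second (background).
FRONT_END, BACKGROUND = 1, 2

@dataclass
class SecondaryNode:
    """Hypothetical model of a secondary storage node's processor state."""
    node_id: int
    max_freq_mhz: int = 2400
    freq_mhz: int = 2400
    load: float = 0.0  # fraction of compute resources in use

def adjust_power(nodes, capacity_used, capacity_threshold, tasks):
    """Lower a secondary node's clock speed when only background work is
    pending and its load is light; restore full speed otherwise."""
    # Capacity pressure promotes background tasks (e.g. garbage collection)
    # to first priority so that space is reclaimed promptly.
    if capacity_used > capacity_threshold:
        tasks = [(FRONT_END, name) for _, name in tasks]
    urgent = any(prio == FRONT_END for prio, _ in tasks)
    for node in nodes:
        if not urgent and node.load < 0.5:
            node.freq_mhz = node.max_freq_mhz // 2  # reduce clock speed
        else:
            node.freq_mhz = node.max_freq_mhz
    return tasks
```

In this sketch a lightly loaded node running only background tasks is clocked down, while crossing the capacity threshold promotes background work and restores full frequency, mirroring the capacity-triggered priority change in the claims.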

Inventors

  • Hari Kannan
  • Peter Kirkpatrick

Assignees

  • PURE STORAGE, INC.

Dates

Publication Date
2026-05-05
Application Date
2023-03-01

Claims (20)

  1. A storage system, comprising: a primary storage node having a processor and memory storing scheduling logic; and a plurality of secondary storage nodes operatively coupled to the primary storage node, each of the secondary storage nodes having a processor and removable solid state storage memory, wherein the processor of the primary storage node when executing the scheduling logic is configured to: assign a priority to tasks executed by the storage system; monitor a processing load of the storage system; monitor a storage capacity of the storage system; and adjust power consumption of a processor of at least one of the plurality of secondary storage nodes based on the priority of tasks being executed by the storage system and the processing load and the capacity of the storage system.
  2. The storage system of claim 1, wherein to adjust power consumption comprises: reducing one of a frequency or a clock speed for operation of the processor of the at least one of the plurality of secondary storage nodes.
  3. The storage system of claim 1, wherein a priority of one of the tasks is changed responsive to the capacity of the storage system exceeding a threshold.
  4. The storage system of claim 1, wherein the processor of the primary storage node and the processor of the secondary storage nodes is a same type of processor.
  5. The storage system of claim 1, wherein to adjust power consumption comprises: reducing a network load by slowing activities within the storage system.
  6. The storage system of claim 1, wherein power consumption for cooling one of the plurality of secondary storage nodes is adjusted by shifting processing among the solid state drives.
  7. The storage system of claim 1, wherein the storage memory is flash memory and wherein adjusting power consumption is achieved by increasing an amount of planes written to in parallel.
  8. A method, comprising: assigning a first priority to front end tasks to be executed; assigning a second priority to background tasks to be executed; monitoring by a processor on a primary storage node of a storage system a processing load of secondary storage nodes of the storage system; monitoring by the processor on the primary storage node a storage capacity of the storage system; continuously adjusting a power consumption of a processor of a secondary storage node based on the monitoring and availability of compute resources on the secondary storage node for executing the tasks assigned the first priority.
  9. The method of claim 8, wherein the front end tasks include tasks including interaction with an external device and background tasks include garbage collection and compression.
  10. The method of claim 8, wherein exceeding a threshold for capacity triggers changing one background task from a second priority to a first priority.
  11. The method of claim 8, wherein the processor of the primary storage node is a same type of processor as the processor of the secondary storage node.
  12. The method of claim 8, wherein adjusting the power consumption of the processor comprises idling a portion of cores of the processor.
  13. The method of claim 8, wherein a remaining portion of cores is sufficient to execute the tasks having the first priority.
  14. The method of claim 8, further comprising: shifting processing loads among the secondary storage nodes responsive to a temperature alert in one of the secondary storage nodes.
  15. The method of claim 8, wherein one of the secondary storage nodes comprises flash memory and wherein adjusting power consumption is achieved by increasing an amount of planes written to in parallel.
  16. A non-transitory computer readable storage medium storing instructions, which when executed, cause a processing device of a storage controller to: assign a first priority to front end tasks to be executed; assign a second priority to background tasks to be executed; monitor by a processor on a primary storage node of a storage system a processing load of secondary storage nodes of the storage system; monitor by the primary storage node a storage capacity of the storage system; continuously adjust a power consumption of a processor of a secondary storage node based on the monitoring and availability of compute resources on the secondary storage node for executing the tasks assigned the first priority.
  17. The computer readable medium of claim 16, wherein the front end tasks include tasks including interaction with an external device and background tasks include garbage collection and compression.
  18. The computer readable medium of claim 16, wherein exceeding a threshold for capacity triggers changing one background task from a second priority to a first priority.
  19. The computer readable medium of claim 16, wherein the processor of the primary storage node is a same type of processor as the processor of the secondary storage node.
  20. The computer readable medium of claim 16, wherein adjusting the power consumption of the processor comprises idling a portion of cores of the processor.
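Claims 12 and 13 describe idling a portion of the processor's cores while leaving enough active cores for all first-priority work. A minimal sketch of that sizing calculation follows; the function name, the `per_core_capacity` parameter, and the load units are hypothetical, not taken from the patent.

```python
import math

def cores_to_idle(total_cores, front_end_load, per_core_capacity=1.0):
    """Return how many cores can be parked while the remaining active
    cores still cover all first-priority (front end) load.

    front_end_load is expressed in the same units as per_core_capacity,
    e.g. "core-equivalents" of pending front end work.
    """
    needed = math.ceil(front_end_load / per_core_capacity)
    # Keep at least one core awake, and never require more cores than exist.
    needed = max(1, min(needed, total_cores))
    return total_cores - needed
```

For example, a 16-core secondary node with 3.5 cores' worth of front end work could idle 12 cores; if front end demand exceeds the core count, nothing is idled, matching the claim that the remaining cores must suffice for first-priority tasks.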

Description

CROSS REFERENCE TO RELATED APPLICATION

This is a continuation-in-part application for patent entitled to a filing date and claiming the benefit of earlier-filed U.S. patent application Ser. No. 15/213,447, filed Jul. 19, 2016, titled INDEPENDENT SCALING OF COMPUTE RESOURCES AND STORAGE RESOURCES IN A STORAGE SYSTEM, herein incorporated by reference in its entirety.

TECHNICAL FIELD

The technical field to which the invention relates is data storage systems.

BACKGROUND

Data storage needs continue to grow, as do capacities of storage systems. A scalable storage system architecture supports addition of memory, so that the storage system can grow in capacity to meet user needs. Yet, capacity is not the only factor to be considered in scalability. Communication delays among components in a storage system can worsen as more components are added in order to increase capacity. A fixed communication bandwidth can result in communication bottlenecks as added components increase the total number of communications in a given time span in a storage system. Communication delays are especially noticeable and can abruptly worsen when expanding from a single chassis to a multi-chassis storage system. Also, computing power can become strained as more memory is added to a storage system, contributing to lengthening data access times with storage system expansion. It is in this context that present embodiments for storage system scalability arise.

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1A illustrates a first example system for data storage in accordance with some implementations.
FIG. 1B illustrates a second example system for data storage in accordance with some implementations.
FIG. 1C illustrates a third example system for data storage in accordance with some implementations.
FIG. 1D illustrates a fourth example system for data storage in accordance with some implementations.
FIG. 2A is a perspective view of a storage cluster with multiple storage nodes and internal storage coupled to each storage node to provide network attached storage, in accordance with some embodiments.
FIG. 2B is a block diagram showing an interconnect switch coupling multiple storage nodes in accordance with some embodiments.
FIG. 2C is a multiple level block diagram, showing contents of a storage node and contents of one of the non-volatile solid state storage units in accordance with some embodiments.
FIG. 2D shows a storage server environment, which uses embodiments of the storage nodes and storage units of some previous figures in accordance with some embodiments.
FIG. 2E is a blade hardware block diagram, showing a control plane, compute and storage planes, and authorities interacting with underlying physical resources, in accordance with some embodiments.
FIG. 2F depicts elasticity software layers in blades of a storage cluster, in accordance with some embodiments.
FIG. 2G depicts authorities and storage resources in blades of a storage cluster, in accordance with some embodiments.
FIG. 3A sets forth a diagram of a storage system that is coupled for data communications with a cloud services provider in accordance with some embodiments of the present disclosure.
FIG. 3B sets forth a diagram of a storage system in accordance with some embodiments of the present disclosure.
FIG. 3C sets forth an example of a cloud-based storage system in accordance with some embodiments of the present disclosure.
FIG. 3D illustrates an exemplary computing device 350 that may be specifically configured to perform one or more of the processes described herein.
FIG. 4 sets forth a diagram of a chassis for use in a storage system that supports independent scaling of compute resources and storage resources according to embodiments of the present disclosure.
FIG. 5 sets forth a diagram of a hybrid blade useful in storage systems that support independent scaling of compute resources and storage resources according to embodiments of the present disclosure.
FIG. 6 sets forth a diagram of an additional hybrid blade useful in storage systems that support independent scaling of compute resources and storage resources according to embodiments of the present disclosure.
FIG. 7 sets forth a diagram of a storage blade useful in storage systems that support independent scaling of compute resources and storage resources according to embodiments of the present disclosure.
FIG. 8 sets forth a diagram of a compute blade useful in storage systems that support independent scaling of compute resources and storage resources according to embodiments of the present disclosure.
FIG. 9 sets forth a diagram of a storage system that supports independent scaling of compute resources and storage resources according to embodiments of the present disclosure.
FIG. 10 sets forth a diagram of a storage system that supports independent scaling of compute resources and storage resources according to embodiments of the present disclosure.
FIG. 11 sets forth a diagram of a set of blades useful in a storage system