US-12626110-B2 - Compact workload representation of memory system
Abstract
A compact representation of input workloads is generated in a memory system. The memory system includes a memory device and a controller including a recurrent neural network coder. The recurrent neural network coder includes an encoder including recurrent encoding blocks. Each recurrent encoding block receives one of the input commands in an input workload associated with the memory device, and generates a hidden state vector corresponding to the received input command by applying a set of activation functions on the received input command. The last encoding block generates a final hidden state vector as the compact representation vector.
Inventors
- Siarhei ZALIVAKA
- Alexander IVANIUK
Assignees
- SK Hynix Inc.
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2021-02-11
Claims (20)
- 1. A system comprising: a solid state drive memory device coupled to a host and configured such that data streams of a plurality of input memory commands are communicated between the host and a solid state drive of the solid state drive memory device; and a controller of the solid state drive memory device including a recurrent neural network coder, which includes an encoder including a plurality of recurrent encoding blocks, which includes first to last encoding blocks, wherein each recurrent encoding block is configured to: receive one of the plurality of input memory commands in an input workload associated with the data streams of the plurality of input memory commands which are communicated from the host to the solid state drive and sent for storage in a memory of the solid state drive memory device; generate a hidden state vector corresponding to the received input command by applying a set of activation functions on the received input command; generate at the last encoding block a final hidden state vector representing a compact representation vector of the plurality of input memory commands; store the compact representation vector in the memory of the solid state drive memory device; using the compact representation vector, recover commands from the memory which differ from corresponding ones of the input workload of commands sent to the memory; and train the recurrent neural network coder using a data set containing M workloads with different characteristics by tuning weight matrices used to generate the hidden state vector such that, after training, a difference between the input workload of commands sent to the memory for storage and the commands recovered from the memory is minimized.
- 2. The system of claim 1, wherein an activation function among the set of activation functions includes one of a hyperbolic tangent, a sigmoid, and a rectified linear unit.
- 3. The system of claim 1, wherein the first to last encoding blocks are connected in cascade.
- 4. The system of claim 3, wherein the first encoding block is configured to: receive a first input command among the plurality of input commands; perform the activation function on a combination of the first input command and a first weight matrix to generate a first vector; perform the activation function on a combination of an initial hidden state vector and a second weight matrix to generate a second vector; perform the activation function on a sum of the first and second vectors to generate a first hidden state vector; and perform the activation function on a combination of the first hidden state vector and a third weight matrix to generate a first output vector.
- 5. The system of claim 4, wherein a second encoding block among the plurality of recurrent encoding blocks is configured to: receive a second input command among the plurality of input commands; perform the activation function on a combination of the second input command and the first weight matrix to generate a third vector; perform the activation function on a combination of the first hidden state vector and the second weight matrix to generate a fourth vector; perform the activation function on a sum of the third and fourth vectors to generate a second hidden state vector; and perform the activation function on a combination of the second hidden state vector and the third weight matrix to generate a second output vector.
- 6. The system of claim 1, wherein the recurrent neural network coder further includes: a decoder including a plurality of recurrent decoding blocks and configured to receive the compact representation vector and generate the recovered commands based on the compact representation vector and the plurality of input commands.
- 7. The system of claim 6, wherein the plurality of recurrent decoding blocks includes first to last decoding blocks, which are symmetric to the first to last encoding blocks and have a cascade connection structure in descending order.
- 8. The system of claim 7, wherein the last decoding block is configured to: receive a last input command among the plurality of input commands and the compact representation vector; perform the activation function on a combination of the last input command and a first weight matrix to generate a first vector; perform the activation function on a combination of the compact representation vector and a second weight matrix to generate a second vector; perform the activation function on a sum of the first and second vectors to generate a last hidden state vector; and perform the activation function on a combination of the last hidden state vector and a third weight matrix to generate a last output vector.
- 9. The system of claim 8, wherein the first decoding block is configured to: receive a first input command among the plurality of input commands; perform the activation function on a combination of the first input command and the first weight matrix to generate a third vector; perform the activation function on a combination of a second hidden state vector, which is received from a second decoding block, and the second weight matrix to generate a fourth vector; perform the activation function on a sum of the third and fourth vectors to generate a first hidden state vector; and perform the activation function on a combination of the first hidden state vector and the third weight matrix to generate a first output vector.
- 10. The system of claim 8, wherein the first weight matrix, the second weight matrix, and the third weight matrix are trained such that the difference between the input workload of commands sent to the memory for storage and the commands recovered from the memory is minimized.
- 11. The system of claim 6, further comprising: a predictor including a plurality of recurrent predicting blocks and configured to receive the compact representation vector and generate predicted commands based on the compact representation vector and a last input command among the plurality of input commands, wherein the plurality of recurrent predicting blocks includes first to last predicting blocks, which have a cascade connection structure in ascending order.
- 12. The system of claim 11, wherein the first predicting block is configured to: receive a last input command among the plurality of input commands and the compact representation vector; perform the activation function on a combination of the last input command and a first weight matrix to generate a first vector; perform the activation function on a combination of the compact representation vector and a second weight matrix to generate a second vector; perform the activation function on a sum of the first vector and the second vector to generate a first hidden state vector; and perform the activation function on a combination of the first hidden state vector and a third weight matrix to generate a first output vector as a first predicted command following the last input command.
- 13. The system of claim 12, wherein the second predicting block is configured to: receive the first predicted command and the first hidden state vector; perform the activation function on a combination of the first predicted command and the first weight matrix to generate a third vector; perform the activation function on a combination of the first hidden state vector and the second weight matrix to generate a fourth vector; perform the activation function on a sum of the third vector and the fourth vector to generate a second hidden state vector; and perform the activation function on a combination of the second hidden state vector and the third weight matrix to generate a second output vector as a second predicted command following the first predicted command.
- 14. The system of claim 11, wherein the recurrent encoding, decoding and predicting blocks are implemented in different ways.
- 15. A method for operating a controller of a memory system, the method comprising: providing a recurrent neural network coder which includes an encoder including a plurality of recurrent encoding blocks; receiving, by each recurrent encoding block, one of a plurality of input memory commands in an input workload associated with a solid state drive memory device coupled to a host and configured such that data streams of the plurality of input memory commands are communicated between the host and a solid state drive of the solid state drive memory device and sent for storage in a memory of the solid state drive memory device; generating, by each recurrent encoding block, a hidden state vector corresponding to the received input memory command by applying a set of activation functions on the received input command; generating at a last encoding block among the plurality of recurrent encoding blocks a final hidden state vector as a compact representation vector of the plurality of input memory commands; storing the compact representation vector in the memory of the solid state drive memory device; using the compact representation vector, recovering commands from the memory which differ from corresponding ones of the input workload of commands sent to the memory; and training the recurrent neural network coder using a data set containing M workloads with different characteristics by tuning weight matrices used to generate the hidden state vector such that, after training, a difference between the input workload of commands sent to the memory for storage and the commands recovered from the memory is minimized.
- 16. The method of claim 15, wherein an activation function among the set of activation functions includes one of a hyperbolic tangent, a sigmoid, and a rectified linear unit, and wherein first to last encoding blocks are connected in cascade.
- 17. The method of claim 15, further comprising: providing a decoder including a plurality of recurrent decoding blocks and configured to receive the compact representation vector and generate the recovered commands based on the compact representation vector and the plurality of input commands.
- 18. The method of claim 17, wherein the plurality of recurrent decoding blocks includes first to last decoding blocks, which are symmetric to the first to last encoding blocks and have a cascade connection structure in descending order.
- 19. The method of claim 17, wherein the generating of the hidden state vector corresponding to the received input command includes performing the activation function on a combination of the received input command and a set of one or more matrices, which are trained such that the difference between the input workload of commands sent to the memory for storage and the commands recovered from the memory is minimized.
- 20. The method of claim 17, further comprising: providing a predictor including a plurality of recurrent predicting blocks and configured to receive the compact representation vector and generate predicted commands based on the compact representation vector and a last input command among the plurality of input commands, wherein the plurality of recurrent predicting blocks includes first to last predicting blocks, which have a cascade connection structure in ascending order, and wherein the recurrent encoding, decoding and predicting blocks are implemented in different ways.
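For illustration only, and not part of the claims: the per-block computation recited in claims 4 and 5 is a plain recurrent-network step. The following is a minimal NumPy sketch of the encoder; the class name, the vector dimensions, and the choice of hyperbolic tangent (one of the activation functions listed in claim 2) are assumptions, not the patented implementation.

```python
import numpy as np

class RecurrentEncoder:
    """Chain of recurrent encoding blocks sharing three weight matrices,
    as recited in claims 4-5 (names and shapes are hypothetical)."""

    def __init__(self, cmd_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(0.0, 0.1, (hidden_dim, cmd_dim))     # "first weight matrix"
        self.Wh = rng.normal(0.0, 0.1, (hidden_dim, hidden_dim))  # "second weight matrix"
        self.Wy = rng.normal(0.0, 0.1, (cmd_dim, hidden_dim))     # "third weight matrix"

    def encode(self, workload):
        """workload: sequence of command feature vectors x_1..x_N.
        Returns the final hidden state, i.e. the compact representation."""
        h = np.zeros(self.Wh.shape[0])   # initial hidden state (claim 4)
        for x in workload:
            a = np.tanh(self.Wx @ x)     # activation on (input command, Wx)
            b = np.tanh(self.Wh @ h)     # activation on (previous hidden state, Wh)
            h = np.tanh(a + b)           # hidden state of this encoding block
            y = np.tanh(self.Wy @ h)     # per-block output vector (unused by the encoder)
        return h                         # compact representation vector (claim 1)
```

Encoding a workload of N command vectors therefore yields a single hidden_dim-sized vector, which is what the claims store in the memory of the solid state drive memory device as the compact representation.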
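Claims 6-10 add a decoder whose blocks mirror the encoder but cascade in descending order, with the block for the last command seeded by the compact representation. Continuing the sketch above, under the same assumptions (separate decoder matrices, tanh activation):

```python
class RecurrentDecoder:
    """Mirror-image cascade of decoding blocks (claims 7-9): the block for
    the last command consumes the compact representation, and each earlier
    block consumes the hidden state of the block after it."""

    def __init__(self, cmd_dim, hidden_dim, seed=1):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(0.0, 0.1, (hidden_dim, cmd_dim))
        self.Wh = rng.normal(0.0, 0.1, (hidden_dim, hidden_dim))
        self.Wy = rng.normal(0.0, 0.1, (cmd_dim, hidden_dim))

    def decode(self, workload, compact):
        """Returns one recovered command per input command."""
        h = compact
        recovered = [None] * len(workload)
        for i in range(len(workload) - 1, -1, -1):  # descending cascade (claim 7)
            a = np.tanh(self.Wx @ workload[i])      # from the i-th input command
            b = np.tanh(self.Wh @ h)                # from the neighbouring hidden state
            h = np.tanh(a + b)
            recovered[i] = np.tanh(self.Wy @ h)     # recovered command (claim 6)
        return recovered
```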
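Claims 11-13 describe a predictor that runs the same style of block autoregressively: seeded with the compact representation and the last observed command, each block's output command becomes the next block's input. A sketch, again with assumed weight matrices:

```python
class RecurrentPredictor:
    """Ascending cascade of predicting blocks (claims 11-13): each block
    feeds its predicted command forward to the next block."""

    def __init__(self, cmd_dim, hidden_dim, seed=2):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(0.0, 0.1, (hidden_dim, cmd_dim))
        self.Wh = rng.normal(0.0, 0.1, (hidden_dim, hidden_dim))
        self.Wy = rng.normal(0.0, 0.1, (cmd_dim, hidden_dim))

    def predict(self, last_cmd, compact, n_steps):
        """Generate n_steps predicted commands following last_cmd."""
        h, x, preds = compact, last_cmd, []
        for _ in range(n_steps):
            a = np.tanh(self.Wx @ x)   # from the current command (claim 12)
            b = np.tanh(self.Wh @ h)   # from the compact vector / previous hidden state
            h = np.tanh(a + b)
            x = np.tanh(self.Wy @ h)   # output vector = next predicted command (claim 13)
            preds.append(x)
        return preds
```

Claim 14 notes that the encoding, decoding, and predicting blocks may be implemented in different ways; sharing one block structure across the three sketches here is only for brevity.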
Description
BACKGROUND

1. Field

Embodiments of the present disclosure relate to a scheme for analyzing workloads in a memory system.

2. Description of the Related Art

The computer environment paradigm has shifted to ubiquitous computing systems that can be used anytime and anywhere. As a result, the use of portable electronic devices such as mobile phones, digital cameras, and notebook computers has rapidly increased. These portable electronic devices generally use a memory system having memory device(s), that is, data storage device(s). The data storage device is used as a main memory device or an auxiliary memory device of the portable electronic devices. Memory systems using memory devices provide excellent stability, durability, high information access speed, and low power consumption, since they have no moving parts. Examples of memory systems having such advantages include universal serial bus (USB) memory devices, memory cards having various interfaces such as a universal flash storage (UFS), and solid state drives (SSDs).

Memory systems may perform operations associated with one or more workloads from a host. Workload analysis becomes important for performance and reliability improvements in memory systems. In this context, embodiments of the invention arise.

SUMMARY

Aspects of the present invention include a system and a method for compact representation of input workloads in a memory system, capable of separating workloads by certain features.

In one aspect, a system includes a memory device; and a controller including a recurrent neural network coder, which includes an encoder including a plurality of recurrent encoding blocks, which includes first to last encoding blocks. Each recurrent encoding block is configured to: receive one of a plurality of input commands in an input workload associated with the memory device; and generate a hidden state vector corresponding to the received input command by applying a set of activation functions on the received input command. The last encoding block generates a final hidden state vector as the compact representation vector.

In another aspect, a method for operating a controller of a memory system includes: providing a recurrent neural network coder which includes an encoder including a plurality of recurrent encoding blocks; receiving, by each recurrent encoding block, one of a plurality of input commands in an input workload associated with a memory device; and generating, by each recurrent encoding block, a hidden state vector corresponding to the received input command by applying a set of activation functions on the received input command. A last encoding block among the plurality of recurrent encoding blocks generates a final hidden state vector as a compact representation vector corresponding to the plurality of input commands.

Additional aspects of the present invention will become apparent from the following description.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram illustrating a data processing system in accordance with an embodiment of the present invention.
FIG. 2 is a block diagram illustrating a memory system in accordance with an embodiment of the present invention.
FIG. 3 is a circuit diagram illustrating a memory block of a memory device in accordance with an embodiment of the present invention.
FIG. 4 is a diagram illustrating a data processing system in accordance with an embodiment of the present invention.
FIG. 5 is a diagram illustrating a recurrent neural network coder in accordance with an embodiment of the present invention.
FIG. 6A is a diagram illustrating a plurality of recurrent blocks in accordance with an embodiment of the present invention.
FIG. 6B is a diagram illustrating a recurrent block in accordance with an embodiment of the present invention.
FIG. 7 is a diagram illustrating details of the recurrent neural network coder of FIG. 5.
FIG. 8 is a diagram illustrating a recurrent neural network coder in accordance with an embodiment of the present invention.
FIG. 9 is a diagram illustrating details of the recurrent neural network coder of FIG. 8.
FIG. 10 is a diagram illustrating a data set of workloads which are compacted and visualized by a recurrent neural network coder in accordance with an embodiment of the present invention.

DETAILED DESCRIPTION

Various embodiments are described below in more detail with reference to the accompanying drawings. The present invention may, however, be embodied in different forms and thus should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure is thorough and complete and fully conveys the scope of the present invention to those skilled in the art. Moreover, reference herein to “an embodiment,” “another embodiment,” or the like is not necessarily to only one embodiment, and different references to any such phrase are not necessarily to the same embodiment(s). Throughout the disclosure, like reference numerals refer to like parts.
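Claims 1 and 15, and the summary above, recite training over a data set of M workloads so that the difference between the commands sent to the memory and the commands recovered from it is minimized; the claims fix neither a loss function nor an optimizer. The sketch below, reusing the encoder and decoder classes that follow the claims above, assumes a mean-squared-error loss and illustrates the weight tuning with naive central-difference gradient descent; a real controller implementation would more plausibly use backpropagation through time.

```python
def reconstruction_loss(encoder, decoder, workloads):
    """Mean squared difference between input and recovered commands,
    averaged over the data set of M workloads (assumed loss)."""
    total = 0.0
    for workload in workloads:
        compact = encoder.encode(workload)
        recovered = decoder.decode(workload, compact)
        total += np.mean([np.mean((x - r) ** 2)
                          for x, r in zip(workload, recovered)])
    return total / len(workloads)

def train(encoder, decoder, workloads, lr=0.05, eps=1e-4, epochs=20):
    """Toy trainer: tune all six weight matrices by estimating the gradient
    of the reconstruction loss with central differences and stepping
    downhill, so the input/recovered difference shrinks (claims 1, 10, 15)."""
    mats = [encoder.Wx, encoder.Wh, encoder.Wy,
            decoder.Wx, decoder.Wh, decoder.Wy]
    for _ in range(epochs):
        for W in mats:
            grad = np.zeros_like(W)
            for idx in np.ndindex(*W.shape):   # one entry at a time
                old = W[idx]
                W[idx] = old + eps
                up = reconstruction_loss(encoder, decoder, workloads)
                W[idx] = old - eps
                down = reconstruction_loss(encoder, decoder, workloads)
                W[idx] = old
                grad[idx] = (up - down) / (2.0 * eps)
            W -= lr * grad                     # in-place update of the shared matrix
    return reconstruction_loss(encoder, decoder, workloads)
```

After training of this kind, workloads with different characteristics map to separable compact vectors, which is what the visualization of FIG. 10 alludes to.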