US-12626122-B2 - Methods of providing trained hyperdimensional machine learning models having classes with reduced elements and related computing systems
Abstract
A method of providing a trained machine learning model can include providing a trained non-binary hyperdimensional machine learning model that includes a plurality of trained hypervector classes, wherein each of the trained hypervector classes includes N elements, and then eliminating selected ones of the N elements from the trained non-binary hyperdimensional machine learning model based on whether the selected elements have a similarity with other ones of the N elements, to provide a sparsified trained non-binary hyperdimensional machine learning model.
Inventors
- Behnam Khaleghi
- Tajana Simunic Rosing
- Mohsen Imani
- Sahand Salamat
Assignees
- THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2021-04-07
Claims (13)
- 1 . A method of generating a sparsified non-binary hyperdimensional machine learning model configured for classification using a sparse-data memory structure, the method comprising: accessing, by one or more processor circuits, a trained non-binary hyperdimensional machine learning model stored in memory, the trained non-binary hyperdimensional machine learning model comprising a plurality of class hypervectors, each class hypervector comprising a plurality of numeric-valued elements at respective index positions; computing, by the one or more processor circuits, a variation metric for each index position, the variation metric representing a statistical difference among the numeric-valued elements at that index position across the plurality of class hypervectors, the computed variation metrics forming a set of variation metric values respectively associated with the index positions; evaluating, by the one or more processor circuits, the index positions based at least in part on the set of variation metric values; identifying, by the one or more processor circuits, a subset of the index positions selected based at least in part on the set of variation metric values; modifying the trained non-binary hyperdimensional machine learning model by assigning a zero value to each numeric-valued element located at each index position in the identified subset, across each of the class hypervectors; and outputting a sparsified trained non-binary hyperdimensional machine learning model comprising the plurality of class hypervectors including zero-valued elements at the index positions in the identified subset, the sparsified trained non-binary hyperdimensional machine learning model including a structure that enables compression or skipping of the zero-valued elements during memory access or computation in classification operations.
- 2 . The method of claim 1 , wherein each numeric-valued element located at each index position in the identified subset of the index positions corresponds to an element having a same index within all class hypervectors of the trained non-binary hyperdimensional machine learning model and wherein the numeric-valued elements at the corresponding index positions have respective values that are all equal or about equal to one another.
- 3 . The method of claim 2 , further comprising: receiving a sparsification level input representing a target sparsification level for the sparsified trained non-binary hyperdimensional machine learning model; determining a distribution of the numeric-valued elements located at the index positions across the plurality of class hypervectors in the trained non-binary hyperdimensional machine learning model; identifying a range of the values within the distribution that provides the target sparsification level for the sparsified trained non-binary hyperdimensional machine learning model if the elements within the range of values are eliminated from the trained non-binary hyperdimensional machine learning model; and providing the elements within the range of values as the numeric-valued elements located at index positions to be assigned a zero value.
- 4 . The method of claim 3 , wherein each numeric-valued element located at an index position within the identified range of values is assigned a zero value.
- 5 . The method of claim 1 , wherein each numeric-valued element located at an index position in the identified subset of the index positions includes each element with a same index within all class hypervectors of the trained non-binary hyperdimensional machine learning model and all have an equal or about equal effect on a cosine similarity score with a query hypervector.
- 6 . The method of claim 1 , wherein the numeric-valued elements located at index positions in the identified subset include elements from the same trained hypervector class and are all equal to zero or about equal to zero.
- 7 . The method of claim 6 , further comprising: loading the sparsified trained non-binary hyperdimensional machine learning model into a compress-sparse-column circuit.
- 8 . The method of claim 1 , wherein the numeric-valued elements located at index positions in the identified subset include elements from a same trained hypervector class and all have an equal or about equal effect on a cosine similarity score with a query hypervector.
- 9 . The method of claim 1 , wherein each of the numeric-valued elements is represented by at least two bits of data.
- 10 . The method of claim 1 , wherein the numeric-valued elements located at index positions in the identified subset include: dimension-wise elements that each have a same index within all class hypervectors of the trained non-binary hyperdimensional machine learning model and all have respective values that are equal or about equal to one another; and class-wise elements that are located at different index positions within a same class hypervector and are all equal to zero or about equal to zero.
- 11 . The method of claim 10 , wherein the dimension-wise elements and class-wise numeric-valued elements have mutually exclusive indexes within the class hypervectors of the trained non-binary hyperdimensional machine learning model.
- 12 . The method of claim 1 , further comprising: (a) applying training data to the sparsified trained non-binary hyperdimensional machine learning model; (b) detecting that a query hypervector included in the training data is mis-classified as a first class hypervector included in the sparsified trained non-binary hyperdimensional machine learning model rather than correctly as a second class hypervector included in the sparsified trained non-binary hyperdimensional machine learning model; (c) subtracting the query hypervector from the first class hypervector and adding the query hypervector to the second class hypervector; and (d) repeating operations (a) through (c) until all training data has been applied to the sparsified trained non-binary hyperdimensional machine learning model to provide an error-corrected sparsified trained non-binary hyperdimensional machine learning model.
- 13 . The method of claim 12 , further comprising: performing the method using the error-corrected sparsified trained non-binary hyperdimensional machine learning model as the trained non-binary hyperdimensional machine learning model.
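As a concrete illustration of the dimension-wise procedure recited in claims 1 and 3, the sparsification can be sketched in Python. This is a minimal sketch, not the patented implementation: the function name is hypothetical, and the choice of max-minus-min as the variation metric is an assumption (the claim requires only "a statistical difference among the numeric-valued elements at that index position across the plurality of class hypervectors"). The `sparsity` argument plays the role of the target sparsification level of claim 3.

```python
import numpy as np

def dimension_wise_sparsify(model, sparsity):
    """Zero out the lowest-variation dimensions of a trained HD model.

    model:    (num_classes, N) array of non-binary class hypervectors.
    sparsity: fraction of the N index positions to assign a zero value.
    """
    # Variation metric per index position: spread of the values that the
    # classes hold at that dimension (max - min is one plausible choice).
    variation = model.max(axis=0) - model.min(axis=0)

    # Dimensions whose classes hold (about) equal values contribute
    # (about) equally to every class's similarity score with a query,
    # so zeroing them barely changes the ranking of classes.
    n_drop = int(sparsity * model.shape[1])
    drop_idx = np.argsort(variation)[:n_drop]

    sparsified = model.copy()
    sparsified[:, drop_idx] = 0.0  # same index zeroed in every class
    return sparsified, drop_idx
```

Because the same index positions are zeroed across all class hypervectors (per claim 2), the resulting model can simply skip those dimensions during memory access and similarity computation.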
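The class-wise case of claims 6 and 7 drops near-zero elements within a single class hypervector and stores the survivors in a compressed sparse structure. The sketch below is an assumption-laden software analogue: the function names and the magnitude threshold are hypothetical, and the (index, value) pairing merely mimics what a compress-sparse-column circuit would do in hardware.

```python
def class_wise_sparsify(class_hv, threshold):
    """Drop the near-zero elements of one class hypervector and return a
    compressed (indices, values) representation of the survivors.

    class_hv:  list of numeric element values for a single class.
    threshold: magnitude below which an element is treated as zero
               (hypothetical cutoff; the claims say "about equal to zero").
    """
    indices, values = [], []
    for i, v in enumerate(class_hv):
        # Elements that are already (about) zero contribute (about)
        # nothing to a dot product with a query, so they can be dropped.
        if abs(v) > threshold:
            indices.append(i)
            values.append(v)
    return indices, values

def sparse_dot(indices, values, query):
    """Dot product that skips the zeroed elements, the software analogue
    of loading the model into a compress-sparse-column circuit."""
    return sum(v * query[i] for i, v in zip(indices, values))
```

Note that, unlike the dimension-wise case, each class may keep a different set of indices, which is why a per-class sparse structure is needed.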
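The error-correcting retraining of claim 12 can be sketched as a single pass over the training data. This is a simplified sketch: the function name is hypothetical, and a plain dot product stands in for the cosine similarity of the claims, which is a reasonable assumption only when the hypervectors are comparably normalized.

```python
def retrain_epoch(classes, training_data):
    """One pass of the error-correcting retraining of claim 12 (sketch).

    classes:       dict mapping label -> class hypervector (list of floats).
    training_data: iterable of (query_hypervector, correct_label) pairs.
    Mutates `classes` in place and returns it.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    for query, correct in training_data:
        # (a)/(b): classify the query and detect a mis-classification.
        predicted = max(classes, key=lambda c: dot(classes[c], query))
        if predicted != correct:
            # (c): subtract the query from the wrongly matched class and
            # add it to the correct class.
            classes[predicted] = [m - q for m, q in zip(classes[predicted], query)]
            classes[correct] = [m + q for m, q in zip(classes[correct], query)]
    return classes
```

Per claim 13, the whole sparsification method can then be re-run with the error-corrected model as its input, iterating until the desired accuracy is recovered.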
Description
CLAIM FOR PRIORITY
This application claims priority to Provisional Application Ser. No. 63/006,419, filed on Apr. 7, 2020, titled SparseHD: Sparsity-Based Hyperdimensional Computing For Efficient Hardware Acceleration, the entire disclosure of which is hereby incorporated herein by reference.
STATEMENT OF GOVERNMENT SUPPORT
This invention was made with government support under Grant No. HR0011-18-3-0004 awarded by the Department of Defense Advanced Research Projects Agency (DARPA). The government has certain rights in the invention.
BACKGROUND
With the emergence of the Internet of Things (IoT), many applications run machine learning algorithms to perform cognitive tasks. These learning algorithms have shown effectiveness for many tasks, e.g., object tracking, speech recognition, image classification, etc. However, the massive data streams generated by sensory and embedded devices pose huge technical challenges because device resources are limited. For example, although Deep Neural Networks (DNNs) such as AlexNet and GoogleNet have provided high classification accuracy for complex image classification tasks, their high computational complexity and memory requirements hinder their usability in a broad variety of real-life (embedded) applications where the device resources and power budget are limited. Furthermore, in IoT systems, sending all the data to a powerful computing environment, e.g., the cloud, cannot guarantee scalability and real-time response, and is often undesirable due to privacy and security concerns. Thus, alternative computing methods are needed that can process the large amount of data at least partly on the less-powerful IoT devices. Brain-inspired Hyperdimensional (HD) computing has been proposed as such an alternative computing method, processing cognitive tasks in a more lightweight way. HD computing is based on the observation that brains compute with patterns of neural activity that are not readily associated with numeric values.
Recent research has instead utilized high-dimensional vectors (e.g., with more than a thousand dimensions), called hypervectors, to represent the neural activities, and has shown successful progress for many cognitive tasks such as activity recognition, object recognition, language recognition, and bio-signal classification.
SUMMARY
Embodiments according to the invention can provide methods of providing trained hyperdimensional machine learning models having classes with reduced elements and related computing systems. Pursuant to these embodiments, a method of providing a trained machine learning model can include providing a trained non-binary hyperdimensional machine learning model that includes a plurality of trained hypervector classes, wherein each of the trained hypervector classes includes N elements, and then eliminating selected ones of the N elements from the trained non-binary hyperdimensional machine learning model based on whether the selected elements have a similarity with other ones of the N elements, to provide a sparsified trained non-binary hyperdimensional machine learning model.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows an overview of the HD classification including encoding and associative memory modules in some embodiments according to the invention.
FIG. 2 shows how an encoding module maps a feature vector to a high-dimensional space using pre-generated base hypervectors in some embodiments according to the invention.
FIG. 3 is a block diagram depicting generating base hypervectors in some embodiments according to the invention.
FIG. 4 is a table showing classification accuracy and efficiency of HD using binarized and non-binarized models in some embodiments according to the invention.
FIG. 5 is a flowchart illustrating operations of a SparseHD framework enabling sparsity in an HD computing model in some embodiments according to the invention.
FIG. 6 is a chart showing a SparseHD dimension-wise sparsity model and the distribution of the values variation (Δ(V)) in all dimensions of the class hypervectors in some embodiments according to the invention.
FIG. 7 is a chart showing a trained SparseHD class-wise sparsity model and the distribution of the absolute class values in a trained model in some embodiments according to the invention.
FIGS. 8A-D are graphs showing classification accuracy of the SparseHD during different retraining iterations in some embodiments according to the invention.
FIG. 9 is a block diagram of an FPGA implementation of the encoding module and associative memory for baseline HD and SparseHD with dimension-wise sparsity in some embodiments according to the invention.
FIG. 10 is a block diagram of an FPGA implementation of the SparseHD with class-wise sparsity in some embodiments according to the invention.
FIGS. 11A-D are graphs illustrating the impact of sparsity on the classification accuracy of the class-wise and dimension-wise sparse models where the curves depicted by the triangles correspond to dense HD models with sm