KR-20260065600-A - Apparatus and method for defending AI models against side-channel attacks via low-power dummy signal synthesis and selective interval protection
Abstract
The present invention relates to an artificial intelligence computing device and method for protecting the weights of an artificial intelligence model from side-channel attacks. The device according to the present invention includes a security control unit that selectively identifies specific computation sections at high risk of side-channel signal leakage within the overall sequence of artificial intelligence computations; a noise generation unit that generates a low-power dummy signal whose electrical characteristics match those of the actual weight computation within each identified section; and a signal synthesis unit that superimposes the dummy signal only onto the electromagnetic fingerprint emitted externally, without affecting the actual computation result. The computational cost of the dummy signal is capped at a preset threshold fraction of the total actual computation to minimize security overhead. Combined with layer-specific differential protection that concentrates security on the input and output layers, non-computational dummy signal generation based on a dummy load circuit, ultra-lightweight dummy signal generation using a bit-inversion method, adaptive security control based on idle resources, and timing-jitter insertion, the invention makes it practical to defend against side-channel attacks even on battery-powered edge devices.
Inventors
- 안범주
Assignees
- 안범주
Dates
- Publication Date
- 2026-05-08
- Application Date
- 2026-04-21
Claims (1)
- An artificial intelligence computing device for protecting an artificial intelligence model, comprising: a security control unit that identifies specific computation sections requiring security within the overall sequence of artificial intelligence computations; a noise generation unit that generates, within said specific computation sections, a low-power dummy signal whose electrical characteristics match those of the actual weight computation; and a signal synthesis unit that superimposes the dummy signal only onto the electromagnetic fingerprint measured externally, without affecting the actual computation result, wherein the computational amount of the dummy signal is limited to within a preset threshold of the total actual computational amount.
Description
The present invention relates to the protection of inference operations of an artificial intelligence model and, more specifically, to an artificial intelligence computing device and method for countering side-channel attacks (SCA) that exploit the electromagnetic (EM) emissions and power-consumption patterns leaked while the device processes weights. The device selectively identifies specific computation sections requiring security, generates a low-power dummy signal matching the electrical characteristics of the actual weight computation, and synthesizes it only into the externally emitted electromagnetic fingerprint (EM Fingerprint), thereby neutralizing an attacker's signal analysis without affecting the actual computation result.

Weights in deep-learning-based artificial intelligence models are a company's core intellectual property, acquired through training on massive datasets with vast computational resources. In fields such as medical diagnostic models, autonomous driving recognition models, and financial anomaly detection models, the economic value of these weights can reach billions of won, and their leakage can fatally undermine a company's technological competitiveness. Consequently, technology that safely protects weights inside AI computing devices is gaining significant importance.

While an artificial intelligence computing unit loads weight data into registers or memory and performs arithmetic operations such as matrix multiplication and convolution, its power consumption and electromagnetic emission patterns vary with the bit values and bit transitions of the data being processed.
Specifically, according to the Hamming Weight model, power consumption increases in proportion to the number of bits with logical value '1' in the data being processed, while according to the Hamming Distance model, power consumption is also determined by the number of bit transitions from the value held in the previous clock cycle. This physical signal leakage stems from the fundamental switching characteristics of semiconductor circuits.

Side-channel attacks (SCA) are techniques that extract confidential information from computing devices by exploiting such physical signal leakage; representative types include Simple Power Analysis (SPA), Differential Power Analysis (DPA), Electromagnetic Analysis (EMA), and Correlation Power Analysis (CPA). An attacker can reverse engineer the weight values being processed by placing an inductive probe or a shunt resistor near the target device, measuring operations on the same or similar inputs thousands of times, and then statistically analyzing the collected trace data.

Among conventional countermeasures, hardware shielding physically encases the device in a metal case, but this method is difficult to apply to small edge devices due to reduced heat dissipation efficiency, increased size, and high cost. The full pseudo-computation insertion method performs additional fake operations on the same scale as the actual weight computations, causing a drastic increase in computational overhead and limiting its use in battery-powered devices or systems requiring real-time processing. Furthermore, existing fixed dummy-operation insertion methods have a fundamental limitation: an attacker can identify and filter out the periodicity of the dummy signals through statistical averaging of repeated measurements.
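The leakage models above, and the intuition behind the bit-inversion masking mentioned in the abstract, can be sketched in Python (the helper names are illustrative, not from the patent):

```python
# Illustrative sketch of the Hamming Weight (HW) and Hamming Distance (HD)
# leakage models that side-channel attacks exploit, for 8-bit data words.
def hamming_weight(value: int) -> int:
    """Number of 1-bits in the processed data word (HW model)."""
    return bin(value & 0xFF).count("1")

def hamming_distance(prev: int, curr: int) -> int:
    """Number of bit transitions from the previous clock-cycle value (HD model)."""
    return hamming_weight(prev ^ curr)

# Power draw is modeled as roughly proportional to HW/HD, so different
# weight bytes produce statistically distinguishable traces:
assert hamming_weight(0b1111_0000) == 4
assert hamming_distance(0b1010_1010, 0b0101_0101) == 8

# Intuition for bit-inversion masking: processing a weight byte together
# with its complement yields a constant combined Hamming weight, so the
# HW component of the leakage no longer depends on the weight value.
assert all(hamming_weight(w) + hamming_weight(~w & 0xFF) == 8
           for w in range(256))
```

This is only the leakage model an attacker fits to measured traces; the patent's countermeasure works by making the externally observable signal deviate from it.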
In particular, among the layers constituting an artificial intelligence model, the input layer and the output layer are directly connected to raw input data and final prediction results, so leaking their weights carries a relatively high risk of exposing sensitive information. By contrast, the weights of the intermediate hidden layers encode abstract representations, so their leakage risk is relatively low. Conventional technology nevertheless applied uniform security strength to all layers, wasting computational resources. There is therefore strong demand for a side-channel attack defense based on selective section protection that optimizes security strength and computational efficiency simultaneously.

FIG. 1 is an overall configuration block diagram of an artificial intelligence computing device (100) according to one embodiment of the present invention. FIG. 2 is an exemplary diagram illustrating differential application of security strength by layer according to an embodiment of the present invention, showing a structure in which the insertion frequency of dummy signals varies across the input layer, the hidden layers, and the output layer. FIG. 3 is a diagram of a non-computational disturbance signal generation structure based on a dummy load circuit.
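As a rough illustration of the claim's threshold-limited dummy budget combined with the layer-differential protection described above, the following Python sketch allocates dummy computation across layers. All names, the 10% threshold, and the 4:1 boundary-vs-hidden risk weighting are hypothetical choices for illustration, not values from the patent:

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    kind: str   # "input", "hidden", or "output"
    macs: int   # actual multiply-accumulate count for this layer

def plan_dummy_budget(layers, threshold=0.10):
    """Allocate dummy-signal computation, capped at `threshold` of the
    total actual computation, concentrating on input/output layers."""
    total = sum(l.macs for l in layers)
    budget = int(total * threshold)          # preset threshold from the claim
    # Differential weighting: boundary layers carry most of the exposure risk.
    risk = {"input": 4, "output": 4, "hidden": 1}
    weights = [risk[l.kind] for l in layers]
    wsum = sum(weights)
    # Integer division guarantees the allocations never exceed the budget.
    return {l.name: budget * w // wsum for l, w in zip(layers, weights)}

plan = plan_dummy_budget([
    Layer("conv_in",   "input",  1_000_000),
    Layer("fc_hidden", "hidden", 8_000_000),
    Layer("fc_out",    "output", 1_000_000),
])
# Total dummy work stays within 10% of the 10M actual MACs,
# and boundary layers receive more protection than hidden layers.
assert sum(plan.values()) <= 1_000_000
assert plan["conv_in"] > plan["fc_hidden"]
```

The design choice here mirrors the claim's structure: the cap is enforced globally first, and only then is the capped budget distributed according to per-layer exposure risk, so adding layers never inflates the security overhead beyond the preset threshold.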