
EP-4736041-A1 - TECHNIQUES TO MITIGATE CACHE-BASED SIDE-CHANNEL ATTACKS

Abstract

Examples include techniques to mitigate or prevent cache-based side-channel attacks against a cache. Examples include use of a class of service (COS) assigned to cores of a processor to determine whether to notify an OS of a potentially malicious application attempting to access a cache line cached to a processor cache. Examples also include marking pages in an application memory address space of a processor cache as unflushable to prevent a potentially malicious application from accessing sensitive data loaded to the application memory address space of the processor cache.

Inventors

  • CORNU, Marcel
  • KANTECKI, Tomasz
  • BROWNE, John J.

Assignees

  • Intel Corporation

Dates

Publication Date
2026-05-06
Application Date
2023-11-02

Claims (20)

  1. An apparatus comprising: a cache; and circuitry to execute logic to: receive a request to access a cache line cached to the cache from a first core of a multi-core processor, the request to access the cache line for the first core to support execution of an application workload; identify a class of service (COS) tagged to the cache line; compare the COS tagged to the cache line to a COS assigned to the first core for the first core’s use of the cache; and notify an operating system if the COS tagged to the cache line does not match the COS assigned to the first core.
  2. The apparatus of claim 1, wherein the operating system, responsive to being notified, is to take no action, monitor a granted access to the cache, or generate a segmentation fault to stop execution of the application workload.
  3. The apparatus of claim 1, wherein the COS tagged to the cache line not matching the COS assigned to the first core indicates that the application workload is for a malicious application attempting a side-channel cache attack against the cache.
  4. The apparatus of claim 1, wherein to identify the COS tagged to the cache line is based on metadata included with data in the cache line cached in the cache, the metadata to indicate a COS assigned to a second core of the multi-core processor, wherein the data in the cache line was cached to the cache for the second core to support execution of a second application workload.
  5. The apparatus of claim 1, wherein the cache includes a last level cache (LLC) shared by the first core and a second core of the multi-core processor.
  6. A method comprising: receiving a request to access a cache line cached to a processor’s cache from a first core of the processor, the request to access the cache line for the first core to support execution of an application workload; identifying a class of service (COS) tagged to the cache line; comparing the COS tagged to the cache line to a COS assigned to the first core for the first core’s use of the processor’s cache; and notifying an operating system if the COS tagged to the cache line does not match the COS assigned to the first core.
  7. The method of claim 6, wherein the operating system, responsive to being notified, is to take no action, monitor a granted access to the processor’s cache, or generate a segmentation fault to stop execution of the application workload.
  8. The method of claim 6, wherein the COS tagged to the cache line not matching the COS assigned to the first core indicates that the application workload is for a malicious application attempting a side-channel cache attack against the processor’s cache.
  9. The method of claim 6, wherein identifying the COS tagged to the cache line is based on metadata included with data in the cache line, the metadata to indicate a COS assigned to a second core of the processor, wherein the data in the cache line was cached to the processor’s cache for the second core to support execution of a second application workload.
  10. The method of claim 6, wherein the processor’s cache includes a last level cache (LLC) shared by the first core and a second core of the processor.
  11. At least one machine readable medium comprising a plurality of instructions that in response to being executed by a system cause the system to: receive a request to access a cache line cached to a processor’s cache from a first core of the processor, the request to access the cache line for the first core to support execution of an application workload; identify a class of service (COS) tagged to the cache line; compare the COS tagged to the cache line to a COS assigned to the first core for the first core’s use of the processor’s cache; and notify an operating system if the COS tagged to the cache line does not match the COS assigned to the first core.
  12. The at least one machine readable medium of claim 11, wherein the operating system, responsive to being notified, is to take no action, monitor a granted access to the processor’s cache, or generate a segmentation fault to stop execution of the application workload.
  13. The at least one machine readable medium of claim 11, wherein the COS tagged to the cache line not matching the COS assigned to the first core indicates that the application workload is for a malicious application attempting a side-channel cache attack against the processor’s cache.
  14. The at least one machine readable medium of claim 11, wherein to identify the COS tagged to the cache line is based on metadata included with data in the cache line, the metadata to indicate a COS assigned to a second core of the processor, wherein the data in the cache line was cached to the processor’s cache for the second core to support execution of a second application workload.
  15. The at least one machine readable medium of claim 11, wherein the processor’s cache includes a last level cache (LLC) shared by the first core and a second core of the processor.
  16. At least one machine readable medium comprising a plurality of instructions that in response to being executed by a system cause the system to: receive a request to load sensitive data to an application memory address space of a processor cache from an application; cause the sensitive data to load to the application memory address space of the processor cache; and mark pages in the application memory address space as unflushable based on the sensitive data being loaded to the application memory address space, wherein to mark the pages as unflushable causes any cache flush instructions received from the application to be ignored or retired.
  17. The at least one machine readable medium of claim 16, wherein to mark pages in the application memory address space comprises using an encoding indicated in a page attribute table that indicates an unflushable memory type.
  18. The at least one machine readable medium of claim 16, wherein to mark the pages as unflushable to cause any cache flush instructions received from the application to be ignored or retired is to mitigate or prevent a side-channel cache attack against the processor cache by the application to obtain the sensitive data.
  19. The at least one machine readable medium of claim 16, wherein the sensitive data comprises an OpenSSL library.
  20. The at least one machine readable medium of claim 16, wherein the processor cache includes a last level cache (LLC).
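The access check recited in claims 1, 6 and 11 can be sketched as a small simulation: each cache line carries a COS tag in its metadata (per claim 4), a requesting core's assigned COS is compared against that tag, and the OS is notified on a mismatch. This is a minimal, hypothetical model; all class and function names are illustrative, and the claims do not prescribe any particular implementation.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class CacheLine:
    addr: int
    data: bytes
    cos_tag: int  # COS of the core that originally cached the line (claim 4 metadata)

class CosCheckedCache:
    """Illustrative model of the COS access check in claims 1, 6 and 11."""

    def __init__(self, core_cos: dict, notify_os: Callable[[int, int], None]):
        self.core_cos = core_cos    # core id -> assigned COS
        self.notify_os = notify_os  # invoked on a COS mismatch
        self.lines = {}             # addr -> CacheLine

    def fill(self, core: int, addr: int, data: bytes) -> None:
        # A fill tags the line with the caching core's COS.
        self.lines[addr] = CacheLine(addr, data, self.core_cos[core])

    def access(self, core: int, addr: int) -> Optional[bytes]:
        line = self.lines.get(addr)
        if line is None:
            return None  # miss; the fill path is not modeled here
        if line.cos_tag != self.core_cos[core]:
            # Possible cross-COS probe (e.g., the reload step of FLUSH+RELOAD):
            # notify the OS, which may ignore, monitor, or fault (claims 2/7/12).
            self.notify_os(core, addr)
        return line.data

events = []
cache = CosCheckedCache({0: 1, 1: 2}, lambda core, addr: events.append((core, addr)))
cache.fill(0, 0x1000, b"secret")
cache.access(0, 0x1000)  # same COS: no notification
cache.access(1, 0x1000)  # COS mismatch: OS notified
print(events)            # [(1, 4096)]
```

Note that, as in claim 2, the policy decision (ignore, monitor, or fault) is left to the OS callback; the cache logic only detects and reports the mismatch.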

Description

TECHNIQUES TO MITIGATE CACHE-BASED SIDE-CHANNEL ATTACKS

CLAIM OF PRIORITY

[0001] This application claims priority under 35 U.S.C. § 365(c) to U.S. Application No. 18/214,870, filed June 27, 2023, the entire contents of which are incorporated herein by reference.

TECHNICAL FIELD

[0002] Examples described herein are generally related to mitigating cache-based side-channel attacks made against a cache hierarchy of a processor such as a central processing unit (CPU).

BACKGROUND

[0003] A processor of a computing platform coupled to a network (e.g., in a datacenter) can be associated with various types of resources that can be allocated to an application, virtual machine (VM) or process hosted by the computing platform. The various types of resources can include, but are not limited to, central processing unit (CPU) cores, system memory such as random access memory, network bandwidth or processor cache (e.g., last level cache (LLC)). Performance requirements for the application, which can be based on service level agreements (SLAs) or general quality of service (QoS) requirements, can make it necessary to reserve or allocate one or more of these various types of resources to ensure SLAs and/or QoS requirements are met. One such resource allocation to the application can include allocated portions of a processor cache hierarchy to maintain cache line data for use during execution of an application workload.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] FIG. 1 illustrates an example system.
[0005] FIG. 2 illustrates an example class of service (COS) map.
[0006] FIG. 3 illustrates an example first process flow.
[0007] FIG. 4 illustrates an example scheme.
[0008] FIG. 5 illustrates an example page attribute table.
[0009] FIG. 6 illustrates an example second process flow.
[0010] FIG. 7 illustrates example operating states of a computing platform.
[0011] FIG. 8 illustrates an example block diagram for a first apparatus.
[0012] FIG. 9 illustrates an example of a first logic flow.
[0013] FIG. 10 illustrates an example of a first storage medium.
[0014] FIG. 11 illustrates an example block diagram for a second apparatus.
[0015] FIG. 12 illustrates an example of a second logic flow.
[0016] FIG. 13 illustrates an example of a second storage medium.
[0017] FIG. 14 illustrates an example computing platform.

DETAILED DESCRIPTION

[0018] Relatively new technologies such as Intel® Resource Director Technology (RDT) allow for monitoring usage and allocation of processor cache, mainly focused on defining cache classes of service (COS or CLOS) and on how to use bit masks such as capacity bitmasks (CBMs) to partition the processor cache to support the COS. In some implementations of these technologies, such as Intel® RDT, users can use model specific registers (MSRs) directly to partition the processor cache to support the COS. In other implementations, users can use kernel support, such as Intel®-developed Linux kernel support, or access software libraries to assist in partitioning the processor cache to support the COS. An application, VM or process hosted by the computing platform can then be assigned to a COS, and this assignment can enable use (sometimes exclusive use) of partitioned portions of a processor cache hierarchy that can include, but is not limited to, level 2 (L2) cache or level 3 (L3)/LLC cache. In addition to allocation of a processor cache hierarchy based on COS, memory attributes included in a page attribute table (PAT) can be used to dictate or indicate how applications can access and/or affect cache lines cached in a processor cache hierarchy.

[0019] Modern types of processors, such as, but not limited to, Intel® Corporation or Advanced Micro Devices (AMD®) processors, can be vulnerable to cache-based timing attacks.
For example, in a FLUSH+RELOAD attack, unprivileged malicious applications can effectively extract sensitive information from a victim application by exploiting common operating system (OS) optimizations such as content-based page sharing (e.g., memory deduplication). Examples described in this disclosure can mitigate or eliminate some or possibly most types of cache-based side-channel attacks by generating an exception when a processor core executing a workload for an application attempts to access cache lines maintained in a cache hierarchy outside the processor core’s assigned COS, or by adding a new memory type to a PAT that makes specified memory pages of a potential victim application “unflushable” from a processor’s cache hierarchy.

[0020] FIG. 1 illustrates an example system 100. In some examples, as shown in FIG. 1, system 100 includes a computing platform 101. For these examples, computing platform 101 can be coupled to a network (not shown) and can be part of a datacenter that includes a plurality of interconnected computing platforms, servers or nodes included in the datacenter. According to some examples, computing platform 101 can be a node
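The "unflushable" PAT memory type described above (and recited in claims 16-18) can be modeled with a small, hypothetical sketch: pages holding sensitive data are marked at load time, and flush instructions targeting those pages are silently retired, removing the flush step a FLUSH+RELOAD attacker relies on. All names are illustrative, and the 64-byte line / 4 KiB page granularity is an assumption typical of x86, not something the disclosure prescribes; real enforcement would live in the CPU's handling of cache flush instructions against a PAT encoding.

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages
LINE_SIZE = 64    # assumed 64-byte cache lines

class UnflushableModel:
    """Illustrative model of the unflushable memory type in claims 16-18."""

    def __init__(self):
        self.cached = set()             # addresses of cached lines
        self.unflushable_pages = set()  # page numbers marked unflushable

    def load_sensitive(self, addr: int, length: int) -> None:
        # Load sensitive data and mark its pages unflushable (claim 16).
        for a in range(addr, addr + length, LINE_SIZE):
            self.cached.add(a & ~(LINE_SIZE - 1))
        first = addr // PAGE_SIZE
        last = (addr + length - 1) // PAGE_SIZE
        self.unflushable_pages.update(range(first, last + 1))

    def clflush(self, addr: int) -> None:
        # A flush targeting an unflushable page is ignored/retired (claim 16),
        # so an attacker cannot evict the victim's lines and time the reload.
        if addr // PAGE_SIZE in self.unflushable_pages:
            return
        self.cached.discard(addr & ~(LINE_SIZE - 1))

m = UnflushableModel()
m.load_sensitive(0x2000, 256)  # e.g., part of an OpenSSL library (claim 19)
m.clflush(0x2000)              # ignored: page is marked unflushable
print(0x2000 in m.cached)      # True -> flush step of FLUSH+RELOAD defeated
```

Under this model the victim's lines stay resident regardless of attacker-issued flushes, which is exactly the property claim 18 describes as mitigating the side-channel attack.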