EP-4742047-A1 - METHOD FOR MANAGING SHARED CACHE, AND ELECTRONIC DEVICE

Abstract

This application provides a shared cache management method and an electronic device, and relates to the field of electronic device control technologies. The method can match the running frequency of a shared cache to that of the processor, so that the processor can obtain data from the shared cache in a timely manner, thereby ensuring the running speed of an application. The method includes: setting, in response to a first event, the running frequency of the shared cache to a first frequency based on a focus application, user operation information, performance information of each processing core, and thread running information. The focus application is the application last operated by a user, the user operation information includes the types and quantity of operations performed by the user on the focus application within a preset time, the performance information of each processing core reflects a shared cache miss rate of that processing core, the thread running information reflects running time proportions of a plurality of thread groups on each of N processing cores, and each thread group includes one or more threads running on the electronic device.

Inventors

  • MU, Zhenguo
  • ZHANG, Shichu
  • XIAO, Jun

Assignees

  • Honor Device Co., Ltd.

Dates

Publication Date
2026-05-13
Application Date
2024-08-16

Claims (16)

  1. A shared cache management method, applied to an electronic device, wherein the electronic device comprises a processor, the processor comprises a shared cache and N processing cores, N is an integer greater than or equal to 1, and the method comprises: receiving a first event used to trigger a focus application to change; and setting, in response to the first event, a running frequency of the shared cache to a first frequency based on the focus application, user operation information, performance information of each processing core, and thread running information, wherein the focus application is an application last operated by a user, the user operation information comprises types and a quantity of operations performed by the user on the focus application in preset time, the performance information of each processing core is used to reflect a shared cache miss rate of the processing core, the thread running information is used to reflect running time proportions of a plurality of thread groups on each of the N processing cores, and each thread group comprises one or more threads running on the electronic device.
  2. The method according to claim 1, wherein before the setting a running frequency of the shared cache to a first frequency based on the focus application, user operation information, performance information of each processing core, and thread running information, the method further comprises: determining a user scenario based on the focus application and the user operation information; and determining resource configuration information based on the user scenario, wherein the resource configuration information comprises space proportions of the plurality of thread groups in the shared cache; and the setting a running frequency of the shared cache to a first frequency based on the focus application, user operation information, performance information of each processing core, and thread running information comprises: setting the running frequency of the shared cache to the first frequency based on the user scenario, the resource configuration information, the performance information of each processing core, and the thread running information.
  3. The method according to claim 2, wherein the setting the running frequency of the shared cache to the first frequency based on the user scenario, the resource configuration information, the performance information of each processing core, and the thread running information comprises: obtaining, for the i-th processing core in the N processing cores, a running frequency of the i-th processing core based on the user scenario, the resource configuration information, performance information of the i-th processing core, and running time proportions respectively of the plurality of thread groups on the i-th processing core, wherein i = 1 to N; respectively obtaining, by querying first configuration information based on running frequencies of the N processing cores, required shared cache frequencies corresponding to the N processing cores, wherein a required shared cache frequency corresponding to the i-th processing core is positively correlated with the running frequency of the i-th processing core; using a maximum value in the required shared cache frequencies corresponding to the N processing cores as the first frequency; and setting the running frequency of the shared cache to the first frequency.
  4. The method according to claim 3, wherein the performance information of the i-th processing core further comprises a clock cycle of the i-th processing core, the plurality of thread groups comprise a first control group and a second control group, a correlation degree between a thread comprised in the first control group and user interaction experience is greater than a correlation degree between a thread comprised in the second control group and the user interaction experience, and there is no intersection between the threads comprised in the first control group and the threads comprised in the second control group; and the running frequency of the i-th processing core is positively correlated with the clock cycle of the i-th processing core, a space proportion of the first control group in the shared cache, and a running time proportion of the first control group on the i-th processing core, and the running frequency of the i-th processing core is negatively correlated with a space proportion of the second control group in the shared cache and a running time proportion of the second control group on the i-th processing core.
  5. The method according to claim 4, wherein the running frequency of the i-th processing core is further positively correlated with a shared cache miss rate of the i-th processing core.
  6. The method according to claim 4 or 5, wherein when the user scenario is a first user scenario, the running frequency of the i-th processing core, the clock cycle of the i-th processing core, the space proportion of the first control group in the shared cache, the space proportion of the second control group in the shared cache, the running time proportion of the first control group on the i-th processing core, and the running time proportion of the second control group on the i-th processing core meet the following: Fcpu[i] = (cycle_count[i] / sample_ms) * Function1(P1, P2, Pt, x[i], y[i]), where Function1(P1, P2, Pt, x[i], y[i]) = C1 * (1 + P1/Pt) * (1 + x[i]) * (1 - P2/Pt) * (1 - y[i]), wherein Fcpu[i] is the running frequency of the i-th processing core, cycle_count[i] is the clock cycle of the i-th processing core, sample_ms is a time interval for the electronic device to obtain performance information of the N processing cores, P1 is the space proportion of the first control group in the shared cache, P2 is the space proportion of the second control group in the shared cache, Pt is a sum of P1 and P2, x[i] is the running time proportion of the first control group on the i-th processing core, y[i] is the running time proportion of the second control group on the i-th processing core, and C1 is a preset first constant.
  7. The method according to claim 4 or 5, wherein when the user scenario is a second user scenario, the running frequency of the i-th processing core, the clock cycle of the i-th processing core, the space proportion of the first control group in the shared cache, the space proportion of the second control group in the shared cache, the running time proportion of the first control group on the i-th processing core, and the running time proportion of the second control group on the i-th processing core meet the following: Fcpu[i] = (cycle_count[i] / sample_ms) * Function2(P1, P2, Pt, x[i], y[i]), where Function2(P1, P2, Pt, x[i], y[i]) = C2 * (1 + P1/Pt) * (1 + x[i]) * (1 + (ipm_meas[i] - ipm_ceil)/ipm_meas[i]) * (1 - P2/Pt) * (1 - y[i]), wherein Fcpu[i] is the running frequency of the i-th processing core, cycle_count[i] is the clock cycle of the i-th processing core, sample_ms is a time interval for the electronic device to obtain performance information of the N processing cores, P1 is the space proportion of the first control group in the shared cache, P2 is the space proportion of the second control group in the shared cache, Pt is a sum of P1 and P2, x[i] is the running time proportion of the first control group on the i-th processing core, y[i] is the running time proportion of the second control group on the i-th processing core, ipm_meas[i] is the shared cache miss rate of the i-th processing core, ipm_ceil is a preset miss rate waterline value, and C2 is a preset second constant.
  8. The method according to any one of claims 4-7, wherein the user scenario comprises the first user scenario and the second user scenario; and when the user scenario is the first user scenario, the space proportion of the first control group in the shared cache is a first space proportion, and the space proportion of the second control group in the shared cache is a second space proportion; or when the user scenario is the second user scenario, the space proportion of the first control group in the shared cache is a third space proportion, and the space proportion of the second control group in the shared cache is a fourth space proportion, wherein the first space proportion is less than the third space proportion, and the second space proportion is greater than the fourth space proportion.
  9. The method according to any one of claims 2-8, wherein the method further comprises: in response to the first event, configuring cache partitions respectively of the plurality of thread groups in the shared cache based on the space proportions of the plurality of thread groups in the shared cache.
  10. The method according to any one of claims 2-9, wherein the resource configuration information further comprises a priority of using the shared cache by each of the plurality of thread groups, and the method further comprises: configuring the plurality of thread groups based on the priority of using the shared cache by each thread group.
  11. The method according to claim 10, wherein the thread running information further comprises a shared cache miss rate of a first thread, the first thread is a thread whose load ranks in the top M, M is a positive integer, and the method further comprises: adjusting a priority of a second thread in the first thread, so that the priority of the second thread is higher than a priority of another thread, different from the second thread, in a thread group to which the second thread belongs, wherein the second thread is a thread in the first thread whose shared cache miss rate is greater than a first threshold.
  12. The method according to any one of claims 3-11, wherein the obtaining, for the i-th processing core in the N processing cores, a running frequency of the i-th processing core based on the user scenario, the resource configuration information, performance information of the i-th processing core, and running time proportions respectively of the plurality of thread groups on the i-th processing core comprises: for the i-th processing core in the N processing cores, if the shared cache miss rate of the i-th processing core is less than or equal to a miss rate waterline value, obtaining the running frequency of the i-th processing core based on the user scenario, the resource configuration information, the performance information of the i-th processing core, and the running time proportions respectively of the plurality of thread groups on the i-th processing core; or if the shared cache miss rate of the i-th processing core is greater than the miss rate waterline value, determining the running frequency of the i-th processing core based on a rated running frequency of the i-th processing core.
  13. The method according to any one of claims 2-11, wherein the resource configuration information further comprises a load waterline, and the method further comprises: obtaining system load; and the setting, in response to the first event, a running frequency of the shared cache to a first frequency based on the focus application, user operation information, performance information of each processing core, and thread running information comprises: in response to the first event and the system load being less than or equal to the load waterline, configuring the cache partitions respectively of the plurality of thread groups in the shared cache, and setting the running frequency of the shared cache to the first frequency based on the focus application, the user operation information, the performance information of each processing core, and the thread running information.
  14. The method according to claim 13, wherein the load waterline comprises a first load waterline and a second load waterline, the first load waterline is a load waterline used when the user scenario is the first user scenario, the second load waterline is a load waterline used when the user scenario is the second user scenario, and the first load waterline is less than the second load waterline.
  15. An electronic device, wherein the electronic device comprises a storage and a processor, the processor comprises a shared cache and N processing cores, and N is an integer greater than or equal to 1; the processor is coupled to the storage, wherein the storage is configured to store a computer program code, and the computer program code comprises computer instructions; and when the computer instructions are executed by the processor, the electronic device is enabled to perform the method according to any one of claims 1-14.
  16. A computer-readable storage medium, comprising computer instructions, wherein when the computer instructions are run on an electronic device, the electronic device is enabled to perform the method according to any one of claims 1-14.
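
The frequency-selection pipeline of claims 3, 6, and 7 can be illustrated with a short sketch: compute a per-core frequency Fcpu[i] from the clock cycle, cache space proportions, and running time proportions; map each Fcpu[i] through the first configuration information (modeled here as a lookup table) to a required shared cache frequency; and take the maximum as the first frequency. All function names, constants, the lookup table, and input values below are hypothetical placeholders chosen for illustration, not values specified in the patent.

```python
# Illustrative sketch of the shared-cache frequency selection in claims 3, 6, 7.
# P1/P2: space proportions of the first/second control groups in the shared cache;
# x[i]/y[i]: running time proportions of those groups on the i-th core.

def function1(p1, p2, pt, x_i, y_i, c1=1.0):
    """Claim 6 (first user scenario): scaling factor for the i-th core."""
    return c1 * (1 + p1 / pt) * (1 + x_i) * (1 - p2 / pt) * (1 - y_i)

def function2(p1, p2, pt, x_i, y_i, ipm_meas_i, ipm_ceil, c2=1.0):
    """Claim 7 (second user scenario): also grows with the amount by which the
    measured miss rate exceeds the waterline. Assumes ipm_meas_i > 0."""
    return (c2 * (1 + p1 / pt) * (1 + x_i)
            * (1 + (ipm_meas_i - ipm_ceil) / ipm_meas_i)
            * (1 - p2 / pt) * (1 - y_i))

def required_cache_frequency(fcpu, table):
    """Claim 3: query first configuration information -- modeled as a sorted list
    of (core-frequency threshold, cache frequency) pairs -- so the required cache
    frequency is positively correlated with the core running frequency."""
    for threshold, cache_freq in table:
        if fcpu <= threshold:
            return cache_freq
    return table[-1][1]  # cap at the highest configured cache frequency

def select_first_frequency(cores, p1, p2, table, scenario=1, ipm_ceil=0.05):
    """Compute Fcpu[i] for each core, map each to a required shared cache
    frequency, and return the maximum as the first frequency (claim 3)."""
    pt = p1 + p2
    required = []
    for core in cores:
        base = core["cycle_count"] / core["sample_ms"]  # cycle_count[i]/sample_ms
        if scenario == 1:
            factor = function1(p1, p2, pt, core["x"], core["y"])
        else:
            factor = function2(p1, p2, pt, core["x"], core["y"],
                               core["ipm_meas"], ipm_ceil)
        required.append(required_cache_frequency(base * factor, table))
    return max(required)
```

For example, with two cores whose base rates are 1500 and 2500 (arbitrary units), P1 = 0.3, P2 = 0.2, and a table of [(1000, 500), (2000, 800), (3000, 1200)], both cores map to a required cache frequency of 800, which becomes the first frequency.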

Description

This application claims priority to Chinese Patent Application No. 202311399261.0, filed with the China National Intellectual Property Administration on October 25, 2023 and entitled "SHARED CACHE MANAGEMENT METHOD AND ELECTRONIC DEVICE", which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

This application relates to the field of electronic device control technologies, and in particular, to a shared cache management method and an electronic device.

BACKGROUND

Cache sharing means that a plurality of entities (for example, a plurality of applications or a plurality of processing cores (core) of an electronic device) in a cache architecture share a cache resource to meet different cache requirements. For example, for a central processing unit (central processing unit, CPU) that includes a plurality of cores (core) in the electronic device, the plurality of cores share a level 3 cache (Level 3 Cache) of the CPU. In a related technology, when a plurality of applications share a cache, to avoid cache contention, partitioning may be performed on the cache shared by the plurality of applications, so that different partitions are used to buffer to-be-buffered data of different applications. This method can equalize occupancy of the level 3 cache by different applications to a specific extent. However, even when actual occupancy of the level 3 cache by an application is sufficient, if a frequency of the level 3 cache does not match a frequency of the CPU, the CPU cannot obtain data from the level 3 cache in a timely manner, which affects a response speed of a foreground application, and further affects user experience.

SUMMARY

Embodiments of this application provide a shared cache management method and an electronic device, so that a CPU is enabled to obtain data from a shared cache in a timely manner. To achieve the foregoing objective, the following technical solutions are used in the embodiments of this application.
According to a first aspect, a shared cache management method is provided and applied to an electronic device. The electronic device includes a processor, where the processor includes a shared cache and N processing cores, and N is an integer greater than or equal to 1. The method includes: receiving a first event used to trigger a focus application to change; and setting, in response to the first event, a running frequency of the shared cache to a first frequency based on the focus application, user operation information, performance information of each processing core, and thread running information. The focus application is an application last operated by a user, the user operation information includes types and a quantity of operations performed by the user on the focus application in preset time, the performance information of each processing core is used to reflect a shared cache miss rate of the processing core, the thread running information is used to reflect running time proportions of a plurality of thread groups on each of the N processing cores, and each thread group includes one or more threads running on the electronic device. It may be understood that the focus application and the user operation information may be used to reflect a resource requirement of the focus application in a current user scenario. The running frequency of the shared cache is obtained based on the focus application, the user operation information, the thread running information, and the performance information of each processing core. 
Therefore, the obtained running frequency of the shared cache may match a running frequency of one or more processing cores and adapt to the current user scenario and a thread grouping situation, so that the one or more processing cores can obtain data or an instruction from the shared cache in a timely manner, thereby reducing impact caused to a running speed of a processing core due to a mismatch between a frequency of the shared cache and a running frequency of the processing core, increasing the running speed of the processing core, and further increasing a response speed of an application and improving user experience. In an implementation provided in the first aspect, before the setting a running frequency of the shared cache to a first frequency based on the focus application, user operation information, performance information of each processing core, and thread running information, the method further includes: determining a user scenario based on the focus application and the user operation information; and determining resource configuration information based on the user scenario, where the resource configuration information includes space proportions of the plurality of thread groups in the shared cache. The setting a running frequency of the shared cache to a first frequency based on the focus application, user operation information, performance information of each processing core, and thread running information includes: setting the running frequency of